Re:invent 2024 News


AWS News Articles by Tag

Tags

2024 (510) AWS-App-Studio (1) amazon-amp (1) amazon-api-gateway (1) amazon-athena (1) amazon-aurora (9) amazon-aurora-dsql (1) amazon-bedrock (37) amazon-bedrock-partyrock (1) amazon-braket (1) amazon-cloudfront (8) amazon-cloudwatch (22) amazon-cloudwatch-logs (3) amazon-cognito (4) amazon-connect (23) amazon-datazone (6) amazon-documentdb (1) amazon-dynamodb (7) amazon-ebs-snapshots-archive (1) amazon-ec2 (32) amazon-ec2-auto-scaling (6) amazon-ec2-trn2 (1) amazon-ecr (1) amazon-ecs (6) amazon-efs (1) amazon-eks (7) amazon-elastic-block-store (2) amazon-elastic-file-system (1) amazon-elastic-load-balancing (3) amazon-elastic-vmware-service (1) amazon-elasticache (2) amazon-emr (2) amazon-eventBridge (4) amazon-fsx-for-lustre (2) amazon-fsx-for-openzfs (1) amazon-gamelift (1) amazon-guardduty (2) amazon-ivs (1) amazon-kendra (1) amazon-keyspaces (2) amazon-kinesis (2) amazon-kinesis-firehose (2) amazon-kinesis-streams (5) amazon-location-service (1) amazon-machine-learning (47) amazon-managed-service-for-apache-flink (5) amazon-memorydb (1) amazon-mq (1) amazon-msk (2) amazon-mwaa (1) amazon-neptune (5) amazon-nova (1) amazon-omics (2) amazon-opensearch-service (15) amazon-polly (2) amazon-q (39) amazon-quicksight (13) amazon-rds (15) amazon-rds-for-mysql (1) amazon-rds-for-oracle (2) amazon-rds-for-sql-server (2) amazon-redshift (11) amazon-route-53 (1) amazon-s3 (21) amazon-sagemaker (21) amazon-sagemaker-canvas (1) amazon-sagemaker-hyperpod (3) amazon-sagemaker-lakehouse (1) amazon-security-lake (2) amazon-ses (2) amazon-sns (3) amazon-sqs (2) amazon-timestream (3) amazon-verified-permissions (1) amazon-virtual-private-cloud (2) amazon-vpc (5) amazon-workspaces (4) analytics (82) application-services (22) applications (2) artificial-intelligence (94) aws-account-billing (7) aws-amplify (3) aws-appconfig (1) aws-application-discovery-service (4) aws-appsync (3) aws-artifact (1) aws-b2b-data-interchange (2) aws-backup (4) aws-batch (1) aws-chatbot (3) aws-clean-rooms (2) aws-client-vpn (1) aws-cloud-wan (2) aws-cloudformation (7) aws-cloudtrail (3) aws-codebuild (2) aws-codepipeline (2) aws-command-line-interface (1) aws-compute-optimizer (2) aws-config (2) aws-console-mobile-application (1) aws-control-tower (6) aws-cost-explorer (1) aws-data-exchange (1) aws-data-transfer-terminal (1) aws-database-migration-service (4) aws-deadline-cloud (1) aws-directory-service (1) aws-elastic-beanstalk (3) aws-elastic-disaster-recovery (1) aws-elemental-medialive (1) aws-elemental-mediapackage (1) aws-fault-injection-simulator (1) aws-firewall-manager (1) aws-glue (12) aws-govcloud-us (118) aws-health (1) aws-healthimaging (1) aws-iam (4) aws-iam-identity-center (4) aws-iot-core (1) aws-iot-device-management (1) aws-iot-sitewise (1) aws-lake-formation (5) aws-lambda (11) aws-license-manager (1) aws-mainframe-modernization (1) aws-managed-services (1) aws-management-console (3) aws-marketplace (15) aws-marketplace-and-partners (14) aws-network-firewall (1) aws-organizations (5) aws-outposts (3) aws-private-certificate-authority (1) aws-privatelink (3) aws-resilience-hub (1) aws-resource-explorer (1) aws-security-hub (2) aws-security-incident-response (1) aws-shield (1) aws-step-functions (2) aws-support (2) aws-systems-manager (2) aws-tools-and-sdks (1) aws-transfer-family (2) aws-transit-gateway (1) aws-user-notifications (3) aws-verified-access (1) aws-well-architected-tool (1) aws-wickr (1) aws-x-ray (1) bottlerocket (1) business-productivity (24) cloud-financial-management (8) compute (61) 
containers (13) cost-management (10) cost-usage-reports (1) customer-enablement (2) databases (53) desktop-and-app-streaming (4) developer-tools (34) game-development (2) internet-of-things (3) management-and-governance (59) media-services (3) messaging (19) migration (9) mobile-services (9) networking (14) networking-and-content-delivery (17) nice-dcv (1) partner-network (23) quantum-technologies (1) security-identity-and-compliance (38) serverless (13) storage (35) tag-policies (1)

Articles by Tag

AWS-App-Studio

AWS App Studio is now generally available

AWS App Studio, a generative AI–powered app-building service that uses natural language to build enterprise-grade applications, is now generally available. App Studio helps technical professionals (such as IT project managers, data engineers, enterprise architects, and solution architects) build intelligent, secure, and scalable applications without requiring deep software development skills. App Studio handles deployments, operations, and maintenance, allowing users to focus on solving business challenges and boosting productivity.

App Studio is the fastest and easiest way to build enterprise-grade applications. Getting started is simple: users describe the application they need in natural language, and App Studio’s generative AI–powered assistant creates an application with a multipage UI, a data model, and business logic. Builders can easily modify applications using natural language or App Studio’s visual canvas. They can also enhance their applications with generative AI, using built-in components to generate content, summarize information, and analyze files. Applications can connect to existing data using built-in connectors for AWS services (such as Amazon Aurora, Amazon DynamoDB, and Amazon S3) and Salesforce, as well as hundreds of third-party services (such as HubSpot, Jira, Twilio, and Zendesk) through an API connector. Users can customize the look and feel of their applications to align with brand guidelines by selecting their logo and company color palette. Building with App Studio is free; you pay only for the time employees spend using the published applications, which can save up to 80% compared to comparable offerings.

App Studio is generally available in the following AWS Regions: US West (Oregon) and Europe (Ireland).

To learn more and get started, visit AWS App Studio, review the documentation, and read the announcement.

Read more


amazon-amp

Amazon Managed Service for Prometheus collector adds support for updates and the AWS console

Amazon Managed Service for Prometheus collector, a fully managed, agentless collector for Prometheus metrics, now supports updating the scrape configuration in place and configuration through the Amazon Managed Service for Prometheus console. Starting today, you can update collector parameters, including the scrape configuration and the destination Amazon Managed Service for Prometheus workspace. You can also view and edit collectors from within the Amazon Managed Service for Prometheus console.

Customers can now quickly iterate on the scrape configuration of Amazon Managed Service for Prometheus collectors. With this launch, customers can add, remove, and update scrape targets and jobs without downtime. In addition, you can now use the Amazon Managed Service for Prometheus AWS console to list, create, edit, and delete collectors.
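
On the API side, an in-place update looks roughly like the boto3 sketch below; it assumes the amp client's update_scraper operation introduced with this launch, with a placeholder scraper ID, workspace ARN, and scrape configuration.

```python
import boto3

# A minimal sketch, assuming the amp client's update_scraper operation;
# the scraper ID, workspace ARN, and scrape configuration are placeholders.
amp = boto3.client("amp", region_name="us-west-2")

scrape_config = """\
global:
  scrape_interval: 30s
scrape_configs:
  - job_name: kubernetes-pods
    kubernetes_sd_configs:
      - role: pod
"""

amp.update_scraper(
    scraperId="s-0123456789abcdef0",  # placeholder scraper ID
    scrapeConfiguration={"configurationBlob": scrape_config.encode("utf-8")},
    destination={
        "ampConfiguration": {
            "workspaceArn": "arn:aws:aps:us-west-2:111122223333:workspace/ws-EXAMPLE"
        }
    },
)
```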

Amazon Managed Service for Prometheus collector is available in all regions where Amazon Managed Service for Prometheus is available. To learn more about Amazon Managed Service for Prometheus collector, visit the user guide or product page.

Read more


amazon-api-gateway

Amazon API Gateway now supports custom domain names for private REST APIs

Amazon API Gateway now gives you the ability to manage your private REST APIs using a custom, user-friendly private DNS name such as private.example.com, simplifying API discovery. This feature enhances your security posture by continuing to encrypt your private API traffic with Transport Layer Security (TLS), while giving you full control over the lifecycle of the TLS certificate associated with your domain.

API providers can get started with this feature in four simple steps using the API Gateway console or APIs. First, create a private custom domain. Second, configure an AWS Certificate Manager (ACM) provided or imported certificate for the domain. Third, map one or more private APIs to the domain using base path mappings. Fourth, control invocations to the domain using resource policies. API providers can optionally share the domain across accounts using AWS Resource Access Manager (AWS RAM) to let consumers access APIs from different accounts. Once a domain is shared using AWS RAM, a consumer can use VPC endpoints to invoke multiple private custom domains across accounts.
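
The same four steps can be sketched with boto3; note that the PRIVATE endpoint type and the domain resource policy parameter are assumptions based on this announcement, and all ARNs and IDs are placeholders.

```python
import boto3

apigw = boto3.client("apigateway")

# Steps 1-2: create a private custom domain with an ACM certificate.
# The PRIVATE endpoint type and the policy parameter are assumptions
# based on this announcement; ARNs and IDs are placeholders.
apigw.create_domain_name(
    domainName="private.example.com",
    certificateArn="arn:aws:acm:us-east-1:111122223333:certificate/EXAMPLE",
    endpointConfiguration={"types": ["PRIVATE"]},
    # Step 4: a resource policy restricting invocations to one VPC endpoint.
    policy=(
        '{"Version":"2012-10-17","Statement":[{"Effect":"Allow",'
        '"Principal":"*","Action":"execute-api:Invoke","Resource":"*",'
        '"Condition":{"StringEquals":{"aws:SourceVpce":"vpce-EXAMPLE"}}}]}'
    ),
)

# Step 3: map a private REST API to the domain under a base path.
apigw.create_base_path_mapping(
    domainName="private.example.com",
    basePath="orders",
    restApiId="a1b2c3d4e5",  # placeholder private REST API ID
    stage="prod",
)
```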

Custom domain name for private REST APIs is now available on API Gateway in all AWS Regions, including the AWS GovCloud (US) Regions. Please visit the API Gateway documentation and AWS blog post to learn more.

Read more


amazon-athena

Amazon SageMaker Lakehouse integrated access controls now available in Amazon Athena federated queries

Amazon SageMaker now supports connecting to, discovering, querying, and enforcing fine-grained data access controls on federated sources when querying data with Amazon Athena. Athena is a query service that makes it simple to analyze your data lake and federated data sources such as Amazon Redshift, Amazon DynamoDB, or Snowflake using SQL, without extract, transform, and load (ETL) scripts. Now, data workers can connect to and unify these data sources within SageMaker Lakehouse. Federated source metadata is unified in SageMaker Lakehouse, where you apply fine-grained policies in one place, helping to streamline analytics workflows and secure your data.

Log in to Amazon SageMaker Unified Studio, connect to a federated data source in SageMaker Lakehouse, and govern data with column- and tag-based permissions that are enforced when querying federated data sources with Athena. In addition to SageMaker Unified Studio, you can connect to these data sources through the Athena console and API. To help you automate and streamline connector setup, new user experiences let you create and manage connections to data sources with ease.
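
On the API side, a federated query is a normal Athena query against the connected catalog; the boto3 sketch below uses a hypothetical catalog, database, and output location, with fine-grained permissions assumed to be defined in SageMaker Lakehouse.

```python
import time

import boto3

# A minimal sketch: run SQL against a federated catalog through the Athena
# API. The catalog, database, and output location are hypothetical; the
# fine-grained permissions are assumed to be defined in SageMaker Lakehouse.
athena = boto3.client("athena")

qid = athena.start_query_execution(
    QueryString="SELECT customer_id, total FROM orders LIMIT 10",
    QueryExecutionContext={
        "Catalog": "my_redshift_catalog",  # hypothetical federated catalog
        "Database": "sales",
    },
    WorkGroup="primary",
    ResultConfiguration={"OutputLocation": "s3://amzn-s3-demo-bucket/athena-results/"},
)["QueryExecutionId"]

# Poll until the query finishes, then fetch results.
while True:
    state = athena.get_query_execution(QueryExecutionId=qid)["QueryExecution"]["Status"]["State"]
    if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
        break
    time.sleep(1)

if state == "SUCCEEDED":
    for row in athena.get_query_results(QueryExecutionId=qid)["ResultSet"]["Rows"]:
        print([col.get("VarCharValue") for col in row["Data"]])
```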

Now, organizations can extract insights from a unified set of data sources while strengthening security posture, wherever your data is stored. The unification and fine-grained access controls on federated sources are available in all AWS Regions where SageMaker Lakehouse is available. To learn more, visit SageMaker Lakehouse documentation.

Read more


amazon-aurora

Amazon Aurora now available as a quick create vector store in Amazon Bedrock Knowledge Bases

Amazon Aurora PostgreSQL is now available as a quick create vector store in Amazon Bedrock Knowledge Bases. With the new Aurora quick create option, developers and data scientists building generative AI applications can select Aurora PostgreSQL as their vector store with one click to deploy an Aurora Serverless cluster preconfigured with pgvector in minutes. Aurora Serverless is an on-demand, autoscaling configuration where capacity is adjusted automatically based on application demand, making it ideal as a developer vector store.

Knowledge Bases securely connects foundation models (FMs) running in Bedrock to your company data sources for Retrieval Augmented Generation (RAG) to deliver more relevant, context-specific, and accurate responses that make your FM more knowledgeable about your business. To implement RAG, organizations must convert data into embeddings (vectors) and store these embeddings in a vector store for similarity search in generative artificial intelligence (AI) applications. Aurora PostgreSQL, with the pgvector extension, has been supported as a vector store in Knowledge Bases for existing Aurora databases. With the new quick create integration with Knowledge Bases, Aurora is now easier to set up as a vector store for use with Bedrock.
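
Programmatically, pointing a knowledge base at an existing Aurora PostgreSQL cluster uses the RDS storage configuration, sketched below with boto3; the quick create flow performs the equivalent setup from the console. All ARNs, names, and the embedding model are placeholders.

```python
import boto3

# A hedged sketch of the pre-existing programmatic path: create a knowledge
# base backed by an Aurora PostgreSQL cluster with pgvector. ARNs, table and
# field names, and the embedding model are placeholders.
bedrock_agent = boto3.client("bedrock-agent")

bedrock_agent.create_knowledge_base(
    name="docs-kb",
    roleArn="arn:aws:iam::111122223333:role/BedrockKbRole",
    knowledgeBaseConfiguration={
        "type": "VECTOR",
        "vectorKnowledgeBaseConfiguration": {
            "embeddingModelArn": "arn:aws:bedrock:us-west-2::foundation-model/amazon.titan-embed-text-v2:0"
        },
    },
    storageConfiguration={
        "type": "RDS",
        "rdsConfiguration": {
            "resourceArn": "arn:aws:rds:us-west-2:111122223333:cluster:my-aurora-cluster",
            "credentialsSecretArn": "arn:aws:secretsmanager:us-west-2:111122223333:secret:kb-db-creds",
            "databaseName": "postgres",
            "tableName": "bedrock_integration.bedrock_kb",
            "fieldMapping": {
                "primaryKeyField": "id",
                "vectorField": "embedding",
                "textField": "chunks",
                "metadataField": "metadata",
            },
        },
    },
)
```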

The quick create option in Bedrock Knowledge Bases is available in these regions, with the exception of AWS GovCloud (US-West), which is planned for Q4 2024. To learn more about RAG with Amazon Bedrock and Aurora, see Amazon Bedrock Knowledge Bases.

Amazon Aurora combines the performance and availability of high-end commercial databases with the simplicity and cost-effectiveness of open-source databases. To get started using Amazon Aurora PostgreSQL as a vector store for Amazon Bedrock Knowledge Bases, take a look at our documentation.

Read more


Announcing Amazon Aurora DSQL (Preview)

Today, AWS announces the preview of Amazon Aurora DSQL, a new serverless, distributed SQL database with active-active high availability. Aurora DSQL allows you to build always-available applications with virtually unlimited scalability, the highest availability, and zero infrastructure management. It is designed to make scaling and resiliency effortless for your applications, and offers the fastest distributed SQL reads and writes.

Aurora DSQL provides virtually unlimited horizontal scaling with the flexibility to independently scale reads, writes, compute, and storage. It automatically scales to meet any workload demand without database sharding or instance upgrades. Its active-active distributed architecture is designed for 99.99% single-Region and 99.999% multi-Region availability with no single point of failure, and automated failure recovery. This ensures that all reads and writes to any Regional endpoint are strongly consistent and durable. Aurora DSQL is PostgreSQL compatible, offering an easy-to-use developer experience.

Aurora DSQL is now available in preview in the following AWS Regions: US East (N. Virginia), US East (Ohio), and US West (Oregon). 

To learn more about Aurora DSQL features and benefits, check out the Aurora DSQL overview page and documentation. Aurora DSQL is available at no charge during preview. Get started in only a few steps by going to the Aurora DSQL console or using the Aurora DSQL API or AWS CLI.
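
Because Aurora DSQL is PostgreSQL compatible, a standard PostgreSQL client can connect using a short-lived IAM auth token; the sketch below assumes recent boto3 versions expose the dsql token generator, and the cluster endpoint is a placeholder.

```python
import boto3
import psycopg2  # standard PostgreSQL client; DSQL is PostgreSQL compatible

# A hedged sketch: generate a short-lived IAM auth token (assuming the
# dsql client's token generator in recent boto3 versions) and connect.
endpoint = "abcdefghij.dsql.us-east-1.on.aws"  # placeholder cluster endpoint
dsql = boto3.client("dsql", region_name="us-east-1")
token = dsql.generate_db_connect_admin_auth_token(endpoint, "us-east-1")

conn = psycopg2.connect(
    host=endpoint,
    user="admin",
    password=token,  # the token serves as a short-lived password
    dbname="postgres",
    sslmode="require",
)
with conn.cursor() as cur:
    cur.execute("SELECT 1")
    print(cur.fetchone())
```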

Read more


Amazon Aurora now supports Graviton4-based R8g database instances

AWS Graviton4-based R8g database instances are now generally available for Amazon Aurora with PostgreSQL compatibility and Amazon Aurora with MySQL compatibility in the US East (N. Virginia, Ohio), US West (Oregon), and Europe (Frankfurt) Regions. R8g instances offer larger instance sizes, up to 48xlarge, and feature an 8:1 ratio of memory to vCPU and the latest DDR5 memory. Graviton4-based instances provide up to a 40% performance improvement and up to 29% price/performance improvement for on-demand pricing over Graviton3-based instances of equivalent sizes on Amazon Aurora databases, depending on database engine, version, and workload.

You can spin up R8g database instances in the Amazon RDS Management Console or using the AWS CLI. Upgrading a database instance to the R8g instance family requires a simple instance type modification. For more details, refer to the Aurora documentation.
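
That modification can be scripted; a minimal boto3 sketch follows, with a placeholder instance identifier (without ApplyImmediately, the change waits for the next maintenance window).

```python
import boto3

# A minimal sketch of the instance class modification described above;
# the instance identifier is a placeholder.
rds = boto3.client("rds")

rds.modify_db_instance(
    DBInstanceIdentifier="my-aurora-instance",  # placeholder
    DBInstanceClass="db.r8g.xlarge",
    ApplyImmediately=True,
)
```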

Amazon Aurora is designed for unparalleled high performance and availability at global scale with full MySQL and PostgreSQL compatibility. It provides built-in security, continuous backups, serverless compute, up to 15 read replicas, automated multi-Region replication, and integrations with other AWS services. To get started with Amazon Aurora, take a look at our getting started page.

Read more


AWS Compute Optimizer now supports rightsizing recommendations for Amazon Aurora

AWS Compute Optimizer now provides recommendations for Amazon Aurora DB instances. These recommendations help you identify idle database instances and choose the optimal DB instance class, so you can reduce costs for unused resources and increase the performance of under-provisioned workloads.

AWS Compute Optimizer automatically analyzes Amazon CloudWatch metrics such as CPU utilization, network throughput, and database connections to generate recommendations for your DB instances running Amazon Aurora MySQL-Compatible Edition and Aurora PostgreSQL-Compatible Edition engines. If you enable Amazon RDS Performance Insights on your DB instances, Compute Optimizer will analyze additional metrics such as DBLoad and out-of-memory counters to give you more insights to choose the optimal DB instance configuration. With this launch, AWS Compute Optimizer now supports recommendations for Amazon RDS for MySQL, Amazon RDS for PostgreSQL, and Amazon Aurora database engines.
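
Retrieving the new recommendations programmatically looks roughly like the sketch below; it assumes the Compute Optimizer RDS recommendations operation now covers Aurora engines, and the response fields shown are illustrative.

```python
import boto3

# A hedged sketch: list RDS/Aurora rightsizing recommendations. The
# response fields printed here are illustrative and may differ.
co = boto3.client("compute-optimizer")

resp = co.get_rds_database_recommendations()
for rec in resp.get("rdsDBRecommendations", []):
    print(rec["resourceArn"], rec.get("idle"), rec.get("instanceFinding"))
```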

This new feature is available in all AWS Regions where AWS Compute Optimizer is available except the AWS GovCloud (US) and the China Regions. To learn more about the new feature updates, please visit Compute Optimizer’s product page and user guide.

Read more


Amazon Aurora now supports PostgreSQL 17.0 in the Amazon RDS Database preview environment

Amazon Aurora PostgreSQL-Compatible Edition now supports PostgreSQL version 17.0 in the Amazon RDS Database Preview Environment, allowing you to evaluate PostgreSQL 17.0 on Amazon Aurora PostgreSQL. PostgreSQL 17.0 was released by the PostgreSQL community on September 26, 2024. PostgreSQL 17 adds new features such as a new memory management system for VACUUM and new SQL/JSON capabilities, including constructors, identity functions, and the JSON_TABLE() function. To learn more about PostgreSQL 17, read here.

Database instances in the RDS Database Preview Environment allow testing of a new database engine without the hassle of having to self-install, provision, and manage a preview version of the Aurora PostgreSQL database software. Clusters are retained for a maximum period of 60 days and are automatically deleted after this retention period. Amazon RDS Database Preview Environment database instances are priced the same as production Aurora instances created in the US East (Ohio) Region.

Amazon Aurora is designed for unparalleled high performance and availability at global scale with full MySQL and PostgreSQL compatibility. It provides built-in security, continuous backups, serverless compute, up to 15 read replicas, automated multi-Region replication, and integrations with other AWS services. To get started with Amazon Aurora, take a look at our getting started page.

Read more


Amazon Aurora Serverless v2 supports scaling to zero capacity

Amazon Aurora Serverless v2 now supports scaling to 0 Aurora Capacity Units (ACUs). This launch enables the database to automatically pause after a period of inactivity based on database connections. When the first connection is requested, the database will automatically resume and scale to meet the application demand. Aurora Serverless v2 measures capacity in ACUs where each ACU is a combination of approximately 2 gibibytes (GiB) of memory, corresponding CPU, and networking. You specify the capacity range and the database scales within this range to support your application’s needs.

With 0 ACUs, customers can now save cost during periods of database inactivity. Instead of scaling down to 0.5 ACUs, the database can now scale down to 0 ACUs. You can get started with this feature on a new cluster or an existing cluster with just a few clicks in the AWS Management Console. For a new cluster, set 0 ACUs for the minimum capacity setting. For existing clusters, update to a supported version and then modify the minimum capacity setting to 0 ACUs. Scaling to 0 ACUs is supported for Aurora PostgreSQL 13.15+, 14.12+, 15.7+, and 16.3+, and Aurora MySQL 3.08+.
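
The console steps above map to a single API call; here is a minimal boto3 sketch with a placeholder cluster identifier, assuming the cluster already runs a supported engine version.

```python
import boto3

# A minimal sketch: set the cluster's minimum capacity to 0 ACUs so it can
# pause when idle. The cluster identifier is a placeholder.
rds = boto3.client("rds")

rds.modify_db_cluster(
    DBClusterIdentifier="my-serverless-v2-cluster",  # placeholder
    ServerlessV2ScalingConfiguration={
        "MinCapacity": 0.0,
        "MaxCapacity": 16.0,
    },
    ApplyImmediately=True,
)
```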

Aurora Serverless is an on-demand, automatic scaling configuration for Amazon Aurora. It adjusts capacity in fine-grained increments to provide just the right amount of database resources for an application’s needs. For pricing details and Region availability, visit Amazon Aurora Pricing. To learn more, read the documentation, and get started by creating an Aurora Serverless v2 database using only a few steps in the AWS Management Console.

Read more


AWS Advanced NodeJS Driver is generally available

The Amazon Web Services (AWS) Advanced NodeJS Driver is now generally available for use with Amazon RDS and Amazon Aurora PostgreSQL and MySQL-compatible database clusters. This database driver provides support for faster switchover and failover times, Federated Authentication, and authentication with AWS Secrets Manager or AWS Identity and Access Management (IAM).

The Amazon Web Services (AWS) Advanced NodeJS Driver is a standalone driver that works with the underlying NodeJS database clients, the PostgreSQL Client and the MySQL2 Client. You can install the PostgreSQL and MySQL packages for Windows, macOS, or Linux by following the established installation guides in GitHub. The driver monitors database cluster status and maintains awareness of the cluster topology to determine the new writer. This approach reduces writer failover times to single-digit seconds compared to the open-source driver.

The AWS Advanced NodeJS Driver is released as an open-source project under the Apache License 2.0. For more details, click here to view Getting Started instructions and guidance on how to raise issues.

Read more


Amazon Aurora MySQL 3.08 (compatible with MySQL 8.0.39) is generally available

Starting today, Amazon Aurora MySQL-Compatible Edition 3 (with MySQL 8.0 compatibility) supports MySQL 8.0.39. In addition to several security enhancements and bug fixes, MySQL 8.0.39 contains enhancements that improve database availability when handling large numbers of tables and reduce InnoDB issues related to redo logging and index handling.

Aurora MySQL 3.08 also includes multiple availability improvements to reduce database restarts, memory management telemetry improvements with new CloudWatch metrics, major version upgrade optimizations for Aurora MySQL 2 to 3 upgrades, and general improvements around memory management and observability. For more details, refer to the Aurora MySQL 3.08 and MySQL 8.0.39 release notes.

To upgrade to Aurora MySQL 3.08, you can initiate a minor version upgrade manually by modifying your DB cluster, or you can enable the “Auto minor version upgrade” option when creating or modifying a DB cluster. This release is available in all AWS regions where Aurora MySQL is available.

Amazon Aurora is designed for unparalleled high performance and availability at global scale with full MySQL and PostgreSQL compatibility. It provides built-in security, continuous backups, serverless compute, up to 15 read replicas, automated multi-Region replication, and integrations with other AWS services. To get started with Amazon Aurora, take a look at our getting started page.

Read more


Amazon Aurora MySQL now supports R7i instances

Amazon Aurora with MySQL compatibility now supports R7i database instances powered by custom 4th Generation Intel Xeon Scalable processors. R7i instances offer larger instance sizes, up to 48xlarge, and feature an 8:1 ratio of memory to vCPU and the latest DDR5 memory. These instances are now available in the following AWS Regions: US East (N. Virginia, Ohio), US West (N. California, Oregon), Asia Pacific (Jakarta, Mumbai, Seoul, Singapore, Sydney, Tokyo), Canada (Central), and Europe (Frankfurt, Ireland, London, Milan, Paris, Spain, Stockholm).

You can spin up R7i database instances in the Amazon RDS Management Console or using the AWS CLI. Upgrading a database instance to the R7i instance family requires a simple instance type modification. For more details, refer to the Aurora documentation.

Amazon Aurora is designed for unparalleled high performance and availability at global scale with MySQL compatibility. It provides built-in security, continuous backups, serverless compute, up to 15 read replicas, automated multi-Region replication, and integrations with other AWS services. To get started with Amazon Aurora, take a look at our getting started page.

Read more


amazon-aurora-dsql

Announcing Amazon Aurora DSQL (Preview)

Today, AWS announces the preview of Amazon Aurora DSQL, a new serverless, distributed SQL database with active-active high availability. Aurora DSQL allows you to build always-available applications with virtually unlimited scalability, the highest availability, and zero infrastructure management. It is designed to make scaling and resiliency effortless for your applications, and offers the fastest distributed SQL reads and writes.

Aurora DSQL provides virtually unlimited horizontal scaling with the flexibility to independently scale reads, writes, compute, and storage. It automatically scales to meet any workload demand without database sharding or instance upgrades. Its active-active distributed architecture is designed for 99.99% single-Region and 99.999% multi-Region availability with no single point of failure, and automated failure recovery. This ensures that all reads and writes to any Regional endpoint are strongly consistent and durable. Aurora DSQL is PostgreSQL compatible, offering an easy-to-use developer experience.

Aurora DSQL is now available in preview in the following AWS Regions: US East (N. Virginia), US East (Ohio), and US West (Oregon). 

To learn more about Aurora DSQL features and benefits, check out the Aurora DSQL overview page and documentation. Aurora DSQL is available at no charge during preview. Get started in only a few steps by going to the Aurora DSQL console or using the Aurora DSQL API or AWS CLI.

Read more


amazon-bedrock

Amazon Bedrock Marketplace brings over 100 models to Amazon Bedrock

Amazon Bedrock Marketplace provides generative AI developers access to over 100 publicly available and proprietary foundation models (FMs), in addition to Amazon Bedrock’s industry-leading, serverless models. Customers deploy these models onto SageMaker endpoints where they can select their desired number of instances and instance types. Amazon Bedrock Marketplace models can be accessed through Bedrock’s unified APIs, and models which are compatible with Bedrock’s Converse APIs can be used with Amazon Bedrock’s tools such as Agents, Knowledge Bases, and Guardrails.

Amazon Bedrock Marketplace empowers generative AI developers to rapidly test and incorporate a diverse array of emerging, popular, and leading FMs of various types and sizes. Customers can choose from a variety of models tailored to their unique requirements, which can help accelerate the time-to-market, improve the accuracy, or reduce the cost of their generative AI workflows. For example, customers can incorporate models highly-specialized for finance or healthcare, or language translation models for Asian languages, all from a single place.

Amazon Bedrock Marketplace is supported in US East (N. Virginia), US East (Ohio), US West (Oregon), Asia Pacific (Mumbai), Asia Pacific (Seoul), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Canada (Central), Europe (Frankfurt), Europe (Ireland), Europe (London), Europe (Paris), and South America (São Paulo).

For more information, please refer to Amazon Bedrock Marketplace's announcement blog or documentation.

Read more


Amazon Bedrock Guardrails supports multimodal toxicity detection for image content (Preview)

Organizations are increasingly using applications with multimodal data to drive business value, improve decision-making, and enhance customer experiences. Amazon Bedrock Guardrails now supports multimodal toxicity detection for image content, enabling organizations to apply content filters to images. This new capability, now in public preview, removes the heavy lifting of building your own safeguards for image data or spending cycles on manual evaluation that can be error-prone and tedious.

Bedrock Guardrails helps customers build and scale their generative AI applications responsibly for a wide range of use cases across industry verticals including healthcare, manufacturing, financial services, media and advertising, transportation, marketing, education, and much more. With this new capability, Amazon Bedrock Guardrails offers a comprehensive solution, enabling the detection and filtration of undesirable and potentially harmful image content while retaining safe and relevant visuals. Customers can now use content filters for both text and image data in a single solution with configurable thresholds to detect and filter undesirable content across categories such as hate, insults, sexual, and violence, and build generative AI applications based on their responsible AI policies.

This new capability in preview is available with all foundation models (FMs) on Amazon Bedrock that support images including fine-tuned FMs in 11 AWS regions globally: US East (N. Virginia), US East (Ohio), US West (Oregon), Europe (Ireland), Europe (Frankfurt), Europe (London), Asia Pacific (Seoul), Asia Pacific (Singapore), Asia Pacific (Tokyo), and Asia Pacific (Mumbai), and AWS GovCloud (US-West).

To learn more, visit the Amazon Bedrock Guardrails product page, read the News blog, and documentation.

Read more


Announcing new AWS AI Service Cards to advance responsible generative AI

Today, AWS announces the availability of new AWS AI Service Cards for Amazon Nova Reel; Amazon Nova Canvas; Amazon Nova Micro, Lite, and Pro; Amazon Titan Image Generator; and Amazon Titan Text Embeddings. AI Service Cards are a resource designed to enhance transparency by providing customers with a single place to find information on the intended use cases and limitations, responsible AI design choices, and performance optimization best practices for AWS AI services.

AWS AI Service Cards are part of our comprehensive development process to build services in a responsible way. They focus on key aspects of AI development and deployment, including fairness, explainability, privacy and security, safety, controllability, veracity and robustness, governance, and transparency. By offering these cards, AWS aims to empower customers with the knowledge they need to make informed decisions about using AI services in their applications and workflows. Our AI Service Cards will continue to evolve and expand as we engage with our customers and the broader community to gather feedback and continually iterate on our approach.

For more information, see the AI Service Cards for Amazon Nova Reel; Amazon Nova Canvas; Amazon Nova Micro, Lite, and Pro; Amazon Titan Image Generator; and Amazon Titan Text Embeddings.

To learn more about AI Service Cards, as well as our broader approach to building AI in a responsible way, see our Responsible AI webpage.

Read more


Amazon Bedrock announces preview of prompt caching

Today, AWS announces that Amazon Bedrock now supports prompt caching. Prompt caching is a new capability that can reduce costs by up to 90% and latency by up to 85% for supported models by caching frequently used prompts across multiple API calls. It allows you to cache repetitive inputs and avoid reprocessing context, such as long system prompts and common examples that help guide the model’s response. When the cache is used, fewer computing resources are needed to generate output. As a result, not only can we process your request faster, but we can also pass along the cost savings from using fewer resources.
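
Assuming preview access on a supported model, a prompt-caching request with the Converse API looks roughly like the following sketch: a cachePoint content block marks the long, reusable prefix so subsequent calls can skip reprocessing it (the inference profile ID is illustrative).

```python
import boto3

# A hedged sketch of prompt caching with the Converse API (preview access
# assumed): the cachePoint block marks the reusable system-prompt prefix.
bedrock = boto3.client("bedrock-runtime", region_name="us-west-2")

response = bedrock.converse(
    modelId="us.anthropic.claude-3-5-sonnet-20241022-v2:0",  # illustrative
    system=[
        {"text": "You are a support assistant. <several thousand tokens of policies>"},
        {"cachePoint": {"type": "default"}},  # cache everything up to here
    ],
    messages=[
        {"role": "user", "content": [{"text": "How do I reset my password?"}]},
    ],
)
print(response["output"]["message"]["content"][0]["text"])
```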

Amazon Bedrock is a fully managed service that offers a choice of high-performing FMs from leading AI companies via a single API. Amazon Bedrock also provides a broad set of capabilities customers need to build generative AI applications with security, privacy, and responsible AI capabilities built in. These capabilities help you build tailored applications for multiple use cases across different industries, helping organizations unlock sustained growth from generative AI while providing tools to build customer trust and data governance.

Prompt caching is now available on Claude 3.5 Haiku and Claude 3.5 Sonnet v2 in US West (Oregon) and US East (N. Virginia) via cross-region inference, and Nova Micro, Nova Lite, and Nova Pro models in US East (N. Virginia). At launch, only a select number of customers will have access to this feature. To learn more about participating in the preview, see this page. To learn more about prompt caching, see our documentation and blog.

Read more


Amazon Bedrock Data Automation now available in preview

Today, we are announcing the preview launch of Amazon Bedrock Data Automation (BDA), a new feature of Amazon Bedrock that enables developers to automate the generation of valuable insights from unstructured multimodal content such as documents, images, video, and audio to build GenAI-based applications. These insights include video summaries of key moments, detection of inappropriate image content, automated analysis of complex documents, and much more. Developers can also customize BDA’s output to generate specific insights in consistent formats required by their systems and applications.

By leveraging BDA, developers can reduce development time and effort, making it easier to build intelligent document processing, media analysis, and other multimodal data-centric automation solutions. BDA offers high accuracy at lower cost than alternative solutions, along with features such as visual grounding with confidence scores for explainability and built-in hallucination mitigation. This ensures accurate insights from unstructured, multimodal content. Developers can get started with BDA on the Bedrock console, where they can configure and customize output using their sample data. They can then integrate BDA’s unified multimodal inference API into their applications to process their unstructured content at scale with high accuracy and consistency. BDA is also integrated with Bedrock Knowledge Bases, making it easier for developers to generate meaningful information from their unstructured multimodal content to provide more relevant responses for retrieval augmented generation (RAG).

Bedrock Data Automation is available in preview in the US West (Oregon) AWS Region.

To learn more, visit the Bedrock Data Automation page.

Read more


Amazon Bedrock Knowledge Bases now supports structured data retrieval

Amazon Bedrock Knowledge Bases now supports natural language querying to retrieve structured data from your data sources. With this launch, Bedrock Knowledge Bases offers an end-to-end managed workflow for customers to build custom generative AI applications that can access and incorporate contextual information from a variety of structured and unstructured data sources. Using advanced natural language processing, Bedrock Knowledge Bases can transform natural language queries into SQL queries, allowing users to retrieve data directly from the source without the need to move or preprocess the data.

Developers often face challenges integrating structured data into generative AI applications. These include difficulties training large language models (LLMs) to convert natural language queries to SQL queries based on complex database schemas, as well as ensuring appropriate data governance and security controls are in place. Bedrock Knowledge Bases eliminates these hurdles by providing a managed natural language to SQL (NL2SQL) module. A retail analyst can now simply ask "What were my top 5 selling products last month?", and Bedrock Knowledge Bases automatically translates that query into SQL, executes it against the database, and returns the results, or even provides a summarized narrative response. To generate accurate SQL queries, Bedrock Knowledge Bases uses the database schema, previous query history, and other contextual information provided about the data sources.
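
The analyst query above maps to a single RetrieveAndGenerate call once a knowledge base is connected to a structured source; the IDs and model ARN in this boto3 sketch are placeholders.

```python
import boto3

# A minimal sketch of the analyst query above, assuming a knowledge base
# already connected to a structured source; Knowledge Bases handles the
# NL2SQL translation. IDs and the model ARN are placeholders.
runtime = boto3.client("bedrock-agent-runtime")

resp = runtime.retrieve_and_generate(
    input={"text": "What were my top 5 selling products last month?"},
    retrieveAndGenerateConfiguration={
        "type": "KNOWLEDGE_BASE",
        "knowledgeBaseConfiguration": {
            "knowledgeBaseId": "KBEXAMPLE123",  # placeholder
            "modelArn": "arn:aws:bedrock:us-east-1::foundation-model/anthropic.claude-3-5-sonnet-20240620-v1:0",
        },
    },
)
print(resp["output"]["text"])
```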

Bedrock Knowledge Bases supports structured data retrieval from Amazon Redshift and Amazon SageMaker Lakehouse at this time and is available in all commercial regions where Bedrock Knowledge Bases is supported. To learn more, visit here and here. For details on pricing, please refer here.

Read more


Amazon Bedrock Knowledge Bases now supports GraphRAG (preview)

Today, we are announcing support for GraphRAG, a new capability in Amazon Bedrock Knowledge Bases that enhances generative AI applications by providing more comprehensive, relevant, and explainable responses using RAG techniques combined with graph data. Amazon Bedrock Knowledge Bases offers fully-managed, end-to-end Retrieval-Augmented Generation (RAG) workflows to create highly accurate, low-latency, custom generative AI applications by incorporating contextual information from your company's data sources. Amazon Bedrock Knowledge Bases now offers a fully-managed GraphRAG capability with Amazon Neptune Analytics.

Previously, customers faced challenges in conducting exhaustive, multi-step searches across disparate content. By identifying key entities across documents, GraphRAG delivers insights that leverage relationships within the data, enabling improved responses to end users. For example, users can ask a travel application for family-friendly beach destinations with direct flights and good seafood restaurants. Developers building Generative AI applications can enable GraphRAG in just a few clicks by specifying their data sources and choosing Amazon Neptune Analytics as their vector store when creating a knowledge base. This will automatically generate and store vector embeddings in Amazon Neptune Analytics, along with a graph representation of entities and their relationships.

GraphRAG with Amazon Neptune is built right into Amazon Bedrock Knowledge Bases, offering an integrated experience with no additional setup or additional charges beyond the underlying services. GraphRAG is available in AWS Regions where Amazon Bedrock Knowledge Bases and Amazon Neptune Analytics are both available (see current list of supported regions). To learn more, visit the Amazon Bedrock User Guide.

Read more


Amazon Bedrock Knowledge Bases now processes multimodal data

Amazon Bedrock Knowledge Bases now enables developers to build generative AI applications that can analyze and leverage insights from both textual and visual data, such as images, charts, diagrams, and tables. Bedrock Knowledge Bases offers an end-to-end managed Retrieval-Augmented Generation (RAG) workflow that enables customers to create highly accurate, low-latency, secure, and custom generative AI applications by incorporating contextual information from their own data sources. With this launch, Bedrock Knowledge Bases extracts content from both text and visual data, generates semantic embeddings using the selected embedding model, and stores them in the chosen vector store. This enables users to retrieve and generate answers to questions derived not only from text but also from visual data. Additionally, retrieved results now include source attribution for visual data, enhancing transparency and building trust in the generated outputs.

To get started, customers can choose between Amazon Bedrock Data Automation, a managed capability that automatically extracts content from multimodal data (currently in preview), and foundation models (FMs) such as Claude 3.5 Sonnet or Claude 3 Haiku, with the flexibility to customize the default prompt.
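
When creating or updating a data source, the parser choice is expressed through a parsing configuration; the boto3 sketch below shows the FM-based option (IDs, bucket, and model ARN are placeholders, and the exact keys for the Bedrock Data Automation option may differ).

```python
import boto3

# A hedged sketch: choose an FM-based parser for a data source so text and
# visual content are both extracted during ingestion. IDs, bucket, and the
# model ARN are placeholders.
bedrock_agent = boto3.client("bedrock-agent")

bedrock_agent.create_data_source(
    knowledgeBaseId="KBEXAMPLE123",
    name="multimodal-docs",
    dataSourceConfiguration={
        "type": "S3",
        "s3Configuration": {"bucketArn": "arn:aws:s3:::amzn-s3-demo-bucket"},
    },
    vectorIngestionConfiguration={
        "parsingConfiguration": {
            "parsingStrategy": "BEDROCK_FOUNDATION_MODEL",
            "bedrockFoundationModelConfiguration": {
                "modelArn": "arn:aws:bedrock:us-west-2::foundation-model/anthropic.claude-3-haiku-20240307-v1:0"
            },
        }
    },
)
```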

Multimodal data processing with Bedrock Data Automation is available in the US West (Oregon) region in preview. FM-based parsing is supported in all regions where Bedrock Knowledge Bases is available. For details on pricing for using Bedrock Data Automation or FM as a parser, please refer to the pricing page.

To learn more, visit Amazon Bedrock Knowledge Bases product documentation.

Read more


Amazon Bedrock Intelligent Prompt Routing is now available in preview

Amazon Bedrock Intelligent Prompt Routing routes prompts to different foundation models within a model family, helping you optimize for response quality and cost. Using advanced prompt matching and model understanding techniques, Intelligent Prompt Routing predicts the performance of each model for each request and dynamically routes each request to the model it predicts is most likely to give the desired response at the lowest cost. Customers can choose from two prompt routers in preview that route requests either between Claude 3.5 Sonnet and Claude Haiku, or between Llama 3.1 8B and Llama 3.1 70B.
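
Invoking a router is the same Converse call as for a single model, with the router's ARN in place of a model ID; the ARN below is a placeholder for the default Anthropic router in your account and Region.

```python
import boto3

# A minimal sketch: pass the prompt router's ARN where a model ID normally
# goes; the router picks the model per request. The ARN is a placeholder.
bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

response = bedrock.converse(
    modelId="arn:aws:bedrock:us-east-1:111122223333:default-prompt-router/anthropic.claude:1",
    messages=[{"role": "user", "content": [{"text": "Summarize prompt routing in one sentence."}]}],
)
print(response["output"]["message"]["content"][0]["text"])
```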

Amazon Bedrock is a fully managed service that offers a choice of high-performing foundation models from leading AI companies via a single API. Amazon Bedrock also provides a broad set of capabilities customers need to build generative AI applications with security, privacy, and responsible AI capabilities built in. These capabilities help you build tailored applications for multiple use cases across different industries, helping organizations unlock sustained growth from generative AI while ensuring customer trust and data governance. With Intelligent Prompt Routing, Amazon Bedrock can help customers build cost-effective generative AI applications with a combination of foundation models to get better performance at lower cost than a single foundation model.

During preview, customers are charged regular on-demand pricing for the models that requests are routed to. Learn more in our documentation and blog.

Read more


Announcing Amazon S3 Metadata (Preview) – Easiest and fastest way to manage your metadata

Amazon S3 Metadata is the easiest and fastest way to help you instantly discover and understand your S3 data with automated, easily-queried metadata that updates in near real-time. This helps you to curate, identify, and use your S3 data for business analytics, real-time inference applications, and more. S3 Metadata supports object metadata, which includes system-defined details like size and the source of the object, and custom metadata, which allows you to use tags to annotate your objects with information like product SKU, transaction ID, or content rating, for example.

S3 Metadata is designed to automatically capture metadata from objects as they are uploaded into a bucket, and to make that metadata queryable in a read-only table. As data in your bucket changes, S3 Metadata updates the table within minutes to reflect the latest changes. These metadata tables are stored in S3 Tables, the new S3 storage offering optimized for tabular data. S3 Tables integration with AWS Glue Data Catalog is in preview, allowing you to stream, query, and visualize data—including S3 Metadata tables—using AWS analytics services such as Amazon Data Firehose, Athena, Redshift, EMR, and QuickSight. Additionally, S3 Metadata integrates with Amazon Bedrock, allowing for the annotation of AI-generated videos with metadata that specifies their AI origin, creation timestamp, and the specific model used for their generation.

Amazon S3 Metadata is currently available in preview in the US East (N. Virginia), US East (Ohio), and US West (Oregon) Regions, and coming soon to additional Regions. For pricing details, visit the S3 pricing page. To learn more, visit the product page, documentation, and AWS News Blog.

Read more


Amazon Bedrock now supports multi-agent collaboration

Amazon Bedrock now supports multi-agent collaboration, allowing organizations to build and manage multiple AI agents that work together to solve complex workflows. This feature allows developers to create agents with specialized roles tailored for specific business needs, such as financial data collection, research, and decision-making. By enabling seamless agent collaboration, Amazon Bedrock empowers organizations to optimize performance across industries like finance, customer service, and healthcare.

With multi-agent collaboration on Amazon Bedrock, organizations can effortlessly master complex workflows, achieving highly accurate and scalable results across diverse applications. In financial services, for example, specialized agents coordinate to gather data, analyze trends, and provide actionable recommendations—working in parallel to improve response times and precision. This collaborative feature allows businesses to quickly build, deploy, and scale multi-agent setups, reducing development time while ensuring seamless integration and adaptability to evolving needs.

Multi-agent collaboration is currently available in the US East (N. Virginia), US West (Oregon), and Europe (Ireland) AWS Regions.

To learn more, visit Amazon Bedrock Agents.

Read more


Amazon Bedrock Model Distillation is now available in preview

With Amazon Bedrock Model Distillation, customers can use smaller, faster, more cost-effective models that deliver use-case specific accuracy that is comparable to the most capable models in Amazon Bedrock.

Today, fine-tuning a smaller, cost-efficient model to increase its accuracy for a customer’s use case is an iterative process in which customers need to write prompts and responses, refine the training dataset, ensure that the training dataset captures diverse examples, and adjust the training parameters.

Amazon Bedrock Model Distillation automates the process needed to generate synthetic data from the teacher model, trains and evaluates the student model, and then hosts the final distilled model for inference. To remove some of the burden of iteration, Model Distillation may choose to apply different data synthesis methods that are best suited for your use case to create a distilled model that approximately matches the advanced model for the specific use case. For example, Bedrock may expand the training dataset by generating similar prompts, or generate high-quality synthetic responses using customer-provided prompt-response pairs as golden examples.

Learn more in our documentation and blog.

Read more


Amazon Bedrock Guardrails now supports Automated Reasoning checks (Preview)

With the launch of the Automated Reasoning checks safeguard in Amazon Bedrock Guardrails, AWS becomes the first and only major cloud provider to integrate automated reasoning in our generative AI offerings. Automated Reasoning checks help detect hallucinations and provide a verifiable proof that a large language model (LLM) response is accurate. Automated Reasoning tools are not guessing or predicting accuracy. Instead, they rely on sound mathematical techniques to definitively verify compliance with expert-created Automated Reasoning Policies, consequently improving transparency. Organizations increasingly use LLMs to improve user experiences and reduce operational costs by enabling conversational access to relevant, contextualized information. However, LLMs are prone to hallucinations. Due to the ability of LLMs to generate compelling answers, these hallucinations are often difficult to detect. The possibility of hallucinations and an inability to explain why they occurred slows generative AI adoption for use cases where accuracy is critical.

With Automated Reasoning checks, domain experts can more easily build specifications called Automated Reasoning Policies that encapsulate their knowledge in fields such as operational workflows and HR policies. Users of Amazon Bedrock Guardrails can validate generated content against an Automated Reasoning Policy to identify inaccuracies and unstated assumptions, and explain why statements are accurate in a verifiable way. For example, you can configure Automated Reasoning checks to validate answers on topics defined in complex HR policies (which can include constraints on employee tenure, location, and performance) and explain why an answer is accurate with supporting evidence.

Contact your AWS account team to request access to Automated Reasoning checks in Amazon Bedrock Guardrails in the US East (N. Virginia) and US West (Oregon) AWS Regions. To learn more, visit Amazon Bedrock Guardrails and read the News blog.

Read more


Announcing Amazon Bedrock IDE in preview as part of Amazon SageMaker Unified Studio

Today we are announcing the preview launch of Amazon Bedrock IDE, a governed, collaborative environment integrated within Amazon SageMaker Unified Studio (preview) that enables developers to swiftly build and tailor generative AI applications. It provides an intuitive interface for developers across various skill levels to access Amazon Bedrock's high-performing foundation models (FMs) and advanced customization capabilities to collaboratively build custom generative AI applications.

Amazon Bedrock IDE's integration into Amazon SageMaker Unified Studio removes barriers between data, tools, and builders for generative AI development. Teams can now access their preferred analytics and ML tools alongside Amazon Bedrock IDE's specialized tools for building generative AI applications. Developers can leverage Retrieval Augmented Generation (RAG) to create Knowledge Bases from their proprietary data sources, Agents for complex task automation, and Guardrails for responsible AI development. This unified workspace reduces complexity, accelerating the prototyping, iteration, and deployment of production-ready, responsible generative AI apps aligned with business needs.

Amazon Bedrock IDE is now available in Amazon SageMaker Unified Studio and supported in 5 regions. For more information on supported regions, please refer to the Amazon SageMaker Unified Studio regions guide.

Learn more about Amazon Bedrock IDE and its features by visiting the Amazon Bedrock IDE user guide and get started with Bedrock IDE by enabling a “Generative AI application development” project profile using this admin guide.

Read more


Announcing Amazon Nova foundation models available today in Amazon Bedrock

We’re excited to announce Amazon Nova, a new generation of state-of-the-art (SOTA) foundation models (FMs) that deliver frontier intelligence and industry leading price performance. Amazon Nova models available today on Amazon Bedrock are:

  • Amazon Nova Micro, a text-only model that delivers the lowest-latency responses at very low cost.
  • Amazon Nova Lite, a very low-cost multimodal model that is lightning fast for processing image, video, and text inputs.
  • Amazon Nova Pro, a highly capable multimodal model with the best combination of accuracy, speed, and cost for a wide range of tasks.
  • Amazon Nova Canvas, a state-of-the-art image generation model.
  • Amazon Nova Reel, a state-of-the-art video generation model.

Amazon Nova Micro, Amazon Nova Lite, and Amazon Nova Pro are among the fastest and most cost-effective models in their respective intelligence classes. These models have also been optimized to make them easy to use and effective in RAG and agentic applications. With text and vision fine-tuning on Amazon Bedrock, you can customize Amazon Nova Micro, Lite, and Pro to deliver the optimal intelligence, speed, and cost for your needs. With Amazon Nova Canvas and Amazon Nova Reel, you get access to production-grade visual content, with built-in controls for safe and responsible AI use like watermarking and content moderation. You can see the latest benchmarks and examples of these models on the Amazon Nova product page.

Amazon Nova foundation models are available in Amazon Bedrock in the US East (N. Virginia) region. Amazon Nova Micro, Lite, and Pro models are also available in the US West (Oregon), and US East (Ohio) regions via cross-region inference. Learn more about Amazon Nova at the AWS News Blog, the Amazon Nova product page, or the Amazon Nova user guide. You can get started with Amazon Nova foundation models in Amazon Bedrock from the Amazon Bedrock console.

Read more


Introducing latency-optimized inference for foundation models in Amazon Bedrock

Latency-optimized inference for foundation models in Amazon Bedrock is now available in public preview, delivering faster response times and improved responsiveness for AI applications. Currently, these new inference options support Anthropic's Claude 3.5 Haiku model and Meta's Llama 3.1 405B and 70B models, offering reduced latency compared to standard models without compromising accuracy. As verified by Anthropic, with latency-optimized inference in Amazon Bedrock, Claude 3.5 Haiku runs faster on AWS than anywhere else. Additionally, with latency-optimized inference in Bedrock, Llama 3.1 405B and 70B run faster on AWS than on any other major cloud provider.

As more customers move their generative AI applications to production, optimizing the end-user experience becomes crucial, particularly for latency-sensitive applications such as real-time customer service chatbots and interactive coding assistants. Using purpose-built AI chips like AWS Trainium2 and advanced software optimizations in Amazon Bedrock, customers can access more options to optimize their inference for a particular use case. Accessing these capabilities requires no additional setup or model fine-tuning, allowing for immediate enhancement of existing applications with faster response times.
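
Opting in is a one-line change on a Converse request: the performance configuration asks for the latency-optimized variant of a supported model, as in this minimal boto3 sketch (the inference profile ID is illustrative).

```python
import boto3

# A minimal sketch: request the latency-optimized variant of a supported
# model via the Converse API's performance configuration.
bedrock = boto3.client("bedrock-runtime", region_name="us-east-2")

response = bedrock.converse(
    modelId="us.anthropic.claude-3-5-haiku-20241022-v1:0",  # illustrative
    messages=[{"role": "user", "content": [{"text": "Hello!"}]}],
    performanceConfig={"latency": "optimized"},
)
print(response["output"]["message"]["content"][0]["text"])
```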

Latency-optimized inference is available for Anthropic’s Claude 3.5 Haiku and Meta’s Llama 3.1 405B and 70B in the US East (Ohio) Region via cross-region inference. To get started, visit the Amazon Bedrock console. For more information about Amazon Bedrock and its capabilities, visit the Amazon Bedrock product page, pricing page, and documentation.

Read more


Amazon Bedrock Knowledge Bases now supports RAG evaluation (Preview)

Today, we are announcing RAG evaluation support in Amazon Bedrock Knowledge Bases. This capability allows you to evaluate your retrieval-augmented generation (RAG) applications built on Amazon Bedrock Knowledge Bases. You can evaluate either information retrieval alone or retrieval plus content generation. Evaluations are powered by LLM-as-a-Judge technology, with customers having a choice of several judge models to use. For retrieval evaluation, you can select from metrics such as context relevance and coverage. For retrieval-plus-generation evaluation, you can select from quality metrics such as correctness, completeness, and faithfulness (hallucination detection), as well as responsible AI metrics such as harmfulness, answer refusal, and stereotyping. You can also compare across evaluation jobs, for example to compare Knowledge Bases with different settings such as chunking strategy or vector length, or different generating models.

Evaluating RAG applications can be difficult, as there are many components of retrieval and generation that need to be optimized. Now, the RAG evaluation tool in Amazon Bedrock Knowledge Bases allows customers to evaluate their Knowledge Base-powered applications conveniently and quickly, where their data and LLMs already live. Additionally, you can incorporate Amazon Bedrock Guardrails directly into your evaluation for even more thorough testing. Using these RAG evaluation tools on Amazon Bedrock can save cost as well as weeks of time compared to a full offline human-based evaluation, allowing you to improve your application faster and more easily.

To learn more, including region availability, read the AWS News blog and visit the Amazon Bedrock Evaluations page. To get started, log into the Amazon Bedrock Console or use the Amazon Bedrock APIs.

Read more


Amazon Bedrock Model Evaluation now includes LLM-as-a-judge (Preview)

Amazon Bedrock Model Evaluation allows you to evaluate, compare, and select the best foundation models for your use case. Now, you can use a new evaluation capability: LLM-as-a-judge in Preview. This allows you to choose an LLM as your judge to ensure you have the right combination of evaluator models and models being evaluated. You can choose from several available judge LLMs on Amazon Bedrock. You can also select curated quality metrics such as correctness, completeness, and professional style and tone, as well as responsible AI metrics such as harmfulness and answer refusal. You can now also bring your own prompt dataset to ensure the evaluation is customized for your data, and you can compare results across evaluation jobs to make decisions faster.

Previously, you had a choice between human-based model evaluation and automatic evaluation with exact string matching and other traditional NLP metrics. These methods, while fast, did not provide a strong correlation with human evaluators. Now, with LLM-as-a-judge, you can get human-like evaluation quality at a much lower cost than full human-based evaluations, while saving weeks of time. You can use built-in metrics to evaluate objective facts or perform subjective evaluations of writing style and tone on your dataset.

To learn more about Amazon Bedrock Model Evaluation’s new LLM-as-a-judge, including available AWS regions read the AWS News Blog and visit the Amazon Bedrock Evaluations page. To get started, sign in to the AWS Management Console or use the Amazon Bedrock APIs.

Read more


Amazon Bedrock Knowledge Bases now provides auto-generated query filters for improved retrieval

Amazon Bedrock Knowledge Bases offers fully-managed, end-to-end Retrieval-Augmented Generation (RAG) workflows to create highly accurate, low-latency, secure, and custom GenAI applications by incorporating contextual information from your data sources. Today, we are launching automatically generated query filters, which improve retrieval accuracy by ensuring the documents retrieved are relevant to the query. This feature extends the existing manual metadata filtering capability by allowing customers to narrow down search results without the need to manually construct complex filter expressions.

RAG applications process user queries by searching across a large set of documents. However, in many situations you may need to retrieve documents with specific attributes or content. With automatically generated query filters enabled, you receive filtered search results based on document metadata without the need to manually construct complex filter expressions. For example, for a query like "How to file a claim in Washington", "Washington" will be automatically applied as a state filter to retrieve only those documents pertaining to that state.
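
For contrast, the manual filtering that this feature automates looks like the boto3 sketch below using the Retrieve API; with auto-generated query filters enabled, Knowledge Bases derives an equivalent filter from the query text itself (the IDs and the "state" metadata key are placeholders).

```python
import boto3

# For contrast: the manual metadata filter this feature automates. With
# auto-generated query filters enabled, an equivalent filter is derived
# from the query text itself. IDs and the metadata key are placeholders.
runtime = boto3.client("bedrock-agent-runtime")

resp = runtime.retrieve(
    knowledgeBaseId="KBEXAMPLE123",
    retrievalQuery={"text": "How to file a claim in Washington"},
    retrievalConfiguration={
        "vectorSearchConfiguration": {
            "filter": {"equals": {"key": "state", "value": "Washington"}},
        }
    },
)
for result in resp["retrievalResults"]:
    print(result["content"]["text"][:80])
```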

The capability is available in US East (N. Virginia), US West (Oregon), Asia Pacific (Tokyo), Asia Pacific (Seoul), Europe (Frankfurt), Europe (Zurich) and AWS GovCloud (US-West). To learn more, visit the documentation.

Read more


Amazon Bedrock Knowledge Bases now supports custom connectors and ingestion of streaming data

Amazon Bedrock Knowledge Bases now supports custom connectors and ingestion of streaming data, allowing developers to add, update, or delete data in their knowledge base through direct API calls. Amazon Bedrock Knowledge Bases offers fully-managed, end-to-end Retrieval-Augmented Generation (RAG) workflows to create highly accurate, low latency, secure, and custom GenAI applications by incorporating contextual information from your company's data sources. With this new capability, customers can easily ingest specific documents from custom data sources or Amazon S3 without requiring a full sync, and ingest streaming data without the need for intermediary storage.

This enhancement enables customers to ingest specific documents from any custom data source and reduce the latency and operational costs of intermediary storage while ingesting streaming data. For instance, a financial services firm can now keep its knowledge base continuously updated with the latest market data, ensuring that its GenAI applications deliver the most relevant information to end users. By eliminating time-consuming full syncs and storage steps, customers gain faster access to data, reducing latency and improving application performance.

Customers can start using this feature either through the console or programmatically via the APIs. In the console, users can select a custom connector as the data source, then add documents, text, or base64 encoded text strings.
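
As a rough sketch of the programmatic path, the boto3 call below ingests a single inline text document through a custom data source. The knowledge base ID, data source ID, and document payload shape are illustrative assumptions; see the IngestKnowledgeBaseDocuments API reference for the exact contract.

    import boto3

    agent = boto3.client("bedrock-agent")

    # Sketch: push one document directly into a knowledge base via a custom
    # connector, without a full data-source sync. IDs are placeholders.
    agent.ingest_knowledge_base_documents(
        knowledgeBaseId="KB_ID",
        dataSourceId="CUSTOM_DS_ID",
        documents=[{
            "content": {
                "dataSourceType": "CUSTOM",
                "custom": {
                    "customDocumentIdentifier": {"id": "market-update-2024-12-01"},
                    "sourceType": "IN_LINE",
                    "inlineContent": {
                        "type": "TEXT",
                        "textContent": {"data": "Latest market data snapshot ..."},
                    },
                },
            }
        }],
    )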

This capability is available in all regions where Amazon Bedrock Knowledge Bases is supported. There is no additional cost for using this new custom connector capability.

To learn more, visit Amazon Bedrock Knowledge Bases product documentation.
 

Read more


Amazon Bedrock Knowledge Bases now supports streaming responses

Amazon Bedrock Knowledge Bases offers fully-managed, end-to-end Retrieval-Augmented Generation (RAG) workflows to create highly accurate, low latency, secure, and custom GenAI applications by incorporating contextual information from your company's data sources. Today, we are announcing support for the RetrieveAndGenerateStream API in Amazon Bedrock Knowledge Bases. This new streaming API allows Bedrock Knowledge Bases customers to receive the response as it is being generated by the Large Language Model (LLM), rather than waiting for the complete response.

A RAG workflow involves several steps, including querying the data store, gathering relevant context, and then sending the query to an LLM for response summarization. This final step of response generation can take a few seconds, depending on the latency of the underlying model. To reduce this latency for latency-sensitive applications, we're now offering the RetrieveAndGenerateStream API, which provides the response as a stream while it is being generated by the model. This reduces the time to first response, providing users with a more seamless and responsive experience when interacting with Bedrock Knowledge Bases.
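
A minimal boto3 sketch of the streaming call is shown below; the knowledge base ID and model ARN are placeholders, and the streamed event shape should be confirmed against the API reference.

    import boto3

    runtime = boto3.client("bedrock-agent-runtime")

    # Sketch: stream the generated answer chunk-by-chunk instead of waiting
    # for the full response. IDs and the model ARN are placeholders.
    response = runtime.retrieve_and_generate_stream(
        input={"text": "What is our refund policy?"},
        retrieveAndGenerateConfiguration={
            "type": "KNOWLEDGE_BASE",
            "knowledgeBaseConfiguration": {
                "knowledgeBaseId": "KB_ID",
                "modelArn": "anthropic.claude-3-5-sonnet-20240620-v1:0",
            },
        },
    )
    for event in response["stream"]:
        if "output" in event:                     # partial generated text
            print(event["output"]["text"], end="", flush=True)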

This new capability is currently supported in all existing Amazon Bedrock Knowledge Base regions. To learn more, visit the documentation.
 

Read more


Amazon Bedrock now supports Rerank API to improve accuracy of RAG applications

Amazon Bedrock announces support for reranker models through the Rerank API, enabling developers to improve the relevance of responses in Retrieval-Augmented Generation (RAG) applications. Reranker models rank a set of retrieved documents based on their relevance to the user's query, helping to prioritize the most relevant content to pass to the foundation model (FM) for response generation. Amazon Bedrock Knowledge Bases offers fully-managed, end-to-end RAG workflows to create custom generative AI applications by incorporating contextual information from various data sources. For Amazon Bedrock Knowledge Bases users, the reranker can be enabled through a setting in the Retrieve and RetrieveAndGenerate APIs.

Semantic search in RAG systems can improve document retrieval relevance but may struggle with complex or ambiguous queries. For example, a customer service chatbot asked about returning an online purchase might retrieve documents on both return policies and shipping guidelines. Without proper ranking, the generated response could focus on shipping instead of returns, missing the user's intent. Now, Amazon Bedrock provides access to reranking models that address this by reordering retrieved documents based on their relevance to the user query. This ensures the most useful information is sent to the foundation model for response generation, optimizing context window usage and potentially reducing costs.

The Rerank API supports Amazon Rerank 1.0 and Cohere Rerank 3.5 models. These models are available in US West (Oregon), Canada (Central), Europe (Frankfurt) and Asia Pacific (Tokyo).
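
The standalone Rerank API can also be called directly; the boto3 sketch below reorders a handful of candidate passages against a query. The model ARN and request shape are illustrative assumptions to verify against the API reference.

    import boto3

    runtime = boto3.client("bedrock-agent-runtime", region_name="us-west-2")

    candidate_docs = [
        "Our return policy allows refunds within 30 days of delivery.",
        "Standard shipping takes 3-5 business days.",
        "To return an online purchase, start from the Orders page.",
    ]

    # Sketch: rerank the candidates by relevance to the query. The model ARN
    # is a placeholder for one of the supported reranker models.
    response = runtime.rerank(
        queries=[{"type": "TEXT",
                  "textQuery": {"text": "How do I return an online purchase?"}}],
        sources=[{"type": "INLINE",
                  "inlineDocumentSource": {"type": "TEXT",
                                           "textDocument": {"text": doc}}}
                 for doc in candidate_docs],
        rerankingConfiguration={
            "type": "BEDROCK_RERANKING_MODEL",
            "bedrockRerankingConfiguration": {
                "modelConfiguration": {
                    "modelArn": "arn:aws:bedrock:us-west-2::foundation-model/amazon.rerank-v1:0"
                },
                "numberOfResults": 2,
            },
        },
    )
    for result in response["results"]:
        print(result["index"], round(result["relevanceScore"], 3))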

To learn more, visit the Amazon Bedrock product documentation. For details on pricing, refer to the pricing page.
 

Read more


Amazon Bedrock Agents now supports custom orchestration

Amazon Bedrock Agents now supports custom orchestration, allowing developers to control how agents handle multistep tasks, make decisions, and execute complex workflows. This capability enables developers to define custom orchestration logic for their agents using AWS Lambda, giving them the flexibility to tailor agent behavior to specific use cases.

With custom orchestration, developers can implement any customized orchestration strategy for their agents, including Plan and Solve, Tree of Thought, and Standard Operating Procedures (SOP). This ensures agents perform tasks in the desired order, manage state effectively, and integrate seamlessly with external tools. Whether handling complex business processes or automating intricate workflows, custom orchestration offers greater control, accuracy, and efficiency to meet business objectives.
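
As a hedged sketch, the boto3 call below attaches a Lambda-based orchestrator when creating an agent. The function and role ARNs are placeholders, and the parameter names reflect the CreateAgent API as we understand it; the Lambda's event/response contract is documented separately in the Bedrock Agents guide.

    import boto3

    agent_client = boto3.client("bedrock-agent")

    # Sketch: create an agent whose orchestration loop is driven by your own
    # Lambda function instead of the default orchestrator. ARNs are placeholders.
    agent_client.create_agent(
        agentName="claims-agent",
        foundationModel="anthropic.claude-3-5-sonnet-20240620-v1:0",
        agentResourceRoleArn="arn:aws:iam::111122223333:role/AgentRole",
        orchestrationType="CUSTOM_ORCHESTRATION",
        customOrchestration={
            "executor": {
                "lambda": "arn:aws:lambda:us-east-1:111122223333:function:my-orchestrator"
            }
        },
    )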

Custom Orchestration is now available in all AWS Regions where Amazon Bedrock Agents are supported. To learn more, visit the documentation.
 

Read more


Announcing InlineAgents for Agents for Amazon Bedrock

Agents for Amazon Bedrock now offers InlineAgents, a new feature that allows developers to define and configure Bedrock Agents dynamically at runtime. This enhancement provides greater flexibility and control over agent capabilities, enabling users to specify foundation models, instructions, action groups, guardrails, and knowledge bases on-the-fly without relying on pre-configured control plane settings.

With InlineAgents, developers can easily customize their agents for specific tasks or user requirements without creating new agent versions or preparing the agent. This feature enables rapid experimentation with different AI configurations, trying out various agent features, and dynamically updating the tools available to an agent without creating separate agents.

InlineAgents is available through the new InvokeInlineAgent API in the Amazon Bedrock Agent Runtime service. This feature maintains full compatibility with existing Bedrock Agents while offering improved flexibility and ease of use. InlineAgents is now available in all AWS Regions where Agents for Amazon Bedrock is supported.
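
A minimal boto3 sketch of a runtime-defined agent invocation follows; the model ID, instruction, and streaming event handling are illustrative and should be checked against the InvokeInlineAgent API reference.

    import uuid
    import boto3

    runtime = boto3.client("bedrock-agent-runtime")

    # Sketch: define and invoke an agent entirely at runtime, with no
    # pre-created agent resource. Model ID and instruction are placeholders.
    response = runtime.invoke_inline_agent(
        sessionId=str(uuid.uuid4()),
        foundationModel="anthropic.claude-3-5-sonnet-20240620-v1:0",
        instruction="You are a support assistant that answers order questions.",
        inputText="Where is my order 12345?",
    )
    for event in response["completion"]:          # streamed completion events
        if "chunk" in event:
            print(event["chunk"]["bytes"].decode(), end="")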

To learn more about InlineAgents and how to get started, see the Amazon Bedrock Developer Guide, the AWS SDK documentation for the InvokeInlineAgent API, and a code sample for creating dynamic tooling.

Read more


Amazon Bedrock Model Evaluation now available in Asia Pacific (Seoul)

Model Evaluation on Amazon Bedrock allows you to evaluate, compare, and select the best foundation models for your use case. Amazon Bedrock offers a choice of automatic evaluation and human evaluation. You can use automatic evaluation with predefined algorithms for metrics such as accuracy, robustness, and toxicity. For subjective and custom metrics, such as friendliness, style, and alignment to brand voice, you can set up a human evaluation workflow with a few clicks. Human evaluation workflows can leverage your own employees or an AWS-managed team as reviewers. Model evaluation provides built-in curated datasets, or you can bring your own datasets.

Now, customers can evaluate models in the Asia Pacific (Seoul) region.

Model Evaluation on Amazon Bedrock is now generally available in these commercial Regions and the AWS GovCloud (US-West) Region.

To learn more about Model Evaluation on Amazon Bedrock, see the Amazon Bedrock developer experience web page. To get started, sign in to Amazon Bedrock on the AWS Management Console or use the Amazon Bedrock APIs.
 

Read more


Amazon Bedrock Flows is now generally available with two new capabilities

Today, we’re announcing the general availability of Amazon Bedrock Flows, previously known as Prompt Flows, and adding two key new capabilities. Bedrock Flows enables you to link the latest foundation models, Prompts, Agents, Knowledge Bases, and other AWS services together in an intuitive visual builder to accelerate the creation and execution of generative AI workflows. Bedrock Flows now also provides real-time visibility into workflow execution and safeguards with Amazon Bedrock Guardrails.

Authoring multi-step generative AI workflows is an iterative, time-consuming process that previously required manually adding output nodes to each step to validate the flow execution. With Bedrock Flows, you can now view the input and output of each step in the test window to quickly validate and debug the flow execution in real time. You can also configure the Amazon Bedrock Runtime InvokeFlow API to publish trace events and track the flow execution programmatically. Next, to safeguard your workflows from potentially harmful content, you can attach Bedrock Guardrails to Prompt and Knowledge Base nodes directly in the Flows builder. This seamless integration allows you to block unwanted topics and filter out harmful content or sensitive information in your flows.
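
For illustration, the boto3 sketch below invokes a flow with tracing enabled and reads output and trace events from the response stream. The flow and alias IDs, node names, and event keys are illustrative assumptions to check against the InvokeFlow API reference.

    import boto3

    runtime = boto3.client("bedrock-agent-runtime")

    # Sketch: run a flow and observe per-node trace events alongside the
    # final output. Flow/alias IDs and node names are placeholders.
    response = runtime.invoke_flow(
        flowIdentifier="FLOW_ID",
        flowAliasIdentifier="ALIAS_ID",
        enableTrace=True,                         # publish trace events
        inputs=[{
            "nodeName": "FlowInputNode",
            "nodeOutputName": "document",
            "content": {"document": "Summarize our Q3 results."},
        }],
    )
    for event in response["responseStream"]:
        if "flowOutputEvent" in event:
            print(event["flowOutputEvent"]["content"]["document"])
        elif "flowTraceEvent" in event:
            pass                                  # inspect step-level traces here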

Bedrock Flows, with the new capabilities, is now generally available in all Regions where Amazon Bedrock is available, except the AWS GovCloud (US) Regions. For pricing, visit the Amazon Bedrock Pricing page. To get started, see the following list of resources:

  1. Video demo
  2. Blog post
  3. AWS user guide

Read more


Amazon Bedrock Knowledge Bases now supports binary vector embeddings to build RAG applications

Amazon Bedrock Knowledge Bases now supports binary vector embeddings for building Retrieval Augmented Generation (RAG) applications. This feature is available with the Titan Text Embeddings V2 and Cohere Embed models. Amazon Bedrock Knowledge Bases offers fully-managed RAG workflows to create highly accurate, low latency, secure, and customizable applications by incorporating contextual information from an organization's data sources.

Binary vector embeddings represent document embeddings as binary vectors, with each dimension encoded as a single binary digit (0 or 1). Binary embeddings in RAG applications offer significant benefits in storage efficiency, computational speed, and scalability. They are particularly useful for large-scale information retrieval, resource-constrained environments, and real-time applications.

This new capability is currently supported with Amazon OpenSearch Serverless as the vector store. It is supported in all Amazon Bedrock Knowledge Bases Regions where Amazon OpenSearch Serverless and Amazon Titan Text Embeddings V2 or Cohere Embed are available.

For more information, please refer to the documentation.

Read more


AWS Partner Network automates Foundational Technical Reviews using Amazon Bedrock

Today, AWS is announcing automation for the Foundational Technical Review (FTR) process using Amazon Bedrock. The new generative AI-driven automation process for the FTR optimizes the review timeline for AWS Partners, offering review decisions in minutes, accelerating a process that previously could take weeks. Gaining FTR approval allows Partners to fast-track their AWS Partner journey, unlocking access to AWS Partner Network (APN) programs and co-sell opportunities with AWS.

Partners seeking access to AWS funding programs, the AWS Competency Program to validate expertise, and the AWS ISV Accelerate Program for co-sell support must qualify their solutions by completing the FTR. With this launch, AWS has automated the FTR and enhanced the experience for Partners, with successful reviews being approved in minutes. Unsuccessful reviews will be forwarded for manual review, and an AWS expert will make contact within two weeks to remediate potential gaps. Partners will receive an email notification informing them of the review result, reducing wait time from weeks to minutes. Additionally, partners will be able to submit responses in several non-English languages, saving time for translation and improving the accuracy of their submissions. This generative AI-based automation accelerates the technical validation step, allowing Partners to spend more time on business initiatives.

AWS Partners can request the FTR for their solution in AWS Partner Central. To learn more about the FTR, sign in to AWS Partner Central and download the FTR Guide (software or service solution).
 

Read more


Introducing Prompt Optimization in Preview in Amazon Bedrock

Today we are announcing the preview launch of Prompt Optimization in Amazon Bedrock. Prompt Optimization rewrites prompts to get higher-quality responses from foundation models.

Prompt engineering is the process of designing prompts to guide foundation models toward generating relevant responses. These prompts need to be tailored for each specific foundation model, following that model's best practices and guidelines. Developers can now use Prompt Optimization in Amazon Bedrock to rewrite their prompts for improved performance on the Claude 3.5 Sonnet, Claude Sonnet, Claude Opus, Claude Haiku, Llama 3 70B, Llama 3.1 70B, Mistral Large 2, and Titan Text Premier models. Developers can easily compare the performance of optimized prompts against the original prompts without needing any deployment. All optimized prompts are saved in Prompt Builder for developers to use in their generative AI applications.
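
A minimal sketch of the API path, assuming the OptimizePrompt operation in the Bedrock Agent Runtime: the prompt text and target model are placeholders, and the streamed response event shape should be verified against the API reference.

    import boto3

    runtime = boto3.client("bedrock-agent-runtime")

    # Sketch: ask the service to rewrite a prompt for a specific target model.
    # Prompt text and model ID are placeholders.
    response = runtime.optimize_prompt(
        input={"textPrompt": {"text": "Summarize the following report: {{report}}"}},
        targetModelId="anthropic.claude-3-5-sonnet-20240620-v1:0",
    )
    for event in response["optimizedPrompt"]:     # streamed optimization events
        if "optimizedPromptEvent" in event:
            print(event["optimizedPromptEvent"])  # inspect the rewritten prompt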

Amazon Bedrock Prompt Optimization is now available in preview. Learn more here.
 

Read more


Introducing Binary Embeddings for Titan Text Embeddings model in Amazon Bedrock

Amazon Titan Text Embeddings V2 now supports Binary Embeddings. With Binary Embeddings, customers can reduce the storage cost of their Retrieval Augmented Generation (RAG) applications while maintaining accuracy similar to that of regular embeddings.

The Amazon Titan Text Embeddings model generates semantic representations of documents, paragraphs, and sentences as 1,024- (default), 512-, or 256-dimensional vectors. With Binary Embeddings, Titan Text Embeddings V2 represents data as binary vectors, with each dimension encoded as a single binary digit (0 or 1). This binary representation converts high-dimensional data into a more efficient format for storage in Amazon OpenSearch Serverless within Bedrock Knowledge Bases for cost-effective RAG applications.
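
As a rough sketch, the InvokeModel request below asks Titan Text Embeddings V2 for a binary embedding. The embeddingTypes request field and the response key are assumptions based on the model's request schema; check the Titan model documentation for the exact contract.

    import json
    import boto3

    runtime = boto3.client("bedrock-runtime")

    # Sketch: request a binary embedding from Titan Text Embeddings V2.
    # The request/response field names are assumptions to verify in the docs.
    body = json.dumps({
        "inputText": "Binary embeddings cut vector storage costs.",
        "dimensions": 1024,
        "embeddingTypes": ["binary"],             # ask for binary instead of float
    })
    response = runtime.invoke_model(
        modelId="amazon.titan-embed-text-v2:0",
        body=body,
    )
    payload = json.loads(response["body"].read())
    binary_vector = payload["embeddingsByType"]["binary"]   # list of 0/1 values
    print(len(binary_vector))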

Binary Embeddings is supported in Titan Text Embeddings V2, Amazon OpenSearch Serverless and Amazon Bedrock Knowledge Bases in all regions where Amazon Titan Text Embeddings V2 is supported. To learn more, visit the documentation for Binary Embeddings.

Read more


AWS Amplify launches the full-stack AI kit for Amazon Bedrock

Today, AWS announces the general availability of the AWS Amplify AI kit for Amazon Bedrock, the quickest way for fullstack developers to build web apps with AI capabilities such as chat, conversational search, and summarization. The Amplify AI kit allows developers to easily leverage their data to get customized responses from Amazon Bedrock AI models. It allows anyone with knowledge of JavaScript or TypeScript, and web frameworks like React or Next.js, to add AI experiences to their apps without any prior machine learning expertise.

The AI kit offers the following capabilities:

  • A pre-built, fully customizable <AIConversation> React UI component that offers a real-time, streaming chat experience along with features like UI responses instead of plain-text, chat history, and resumable conversations.
  • A type-safe client that provides secure server-side access to Amazon Bedrock.
  • Secure, built-in capabilities to share user context (e.g. data the user can access) with Amazon Bedrock models.
  • The ability to define tools with additional context that the models can invoke.
  • A fullstack TypeScript developer experience layered on Amplify Gen 2 and AWS AppSync.


To get started with the AI kit, see our launch blog.

Read more


Amazon Bedrock now available in the AWS GovCloud (US-East) Region

Beginning today, customers can use Amazon Bedrock in the AWS GovCloud (US-East) Region to easily build and scale generative AI applications using a variety of foundation models (FMs) and powerful generative AI tooling. Visit the Amazon Bedrock documentation pages for information about model availability and cross-region inference.

Amazon Bedrock is a fully managed service that offers a choice of high-performing large language models (LLMs) and other FMs from leading AI companies like AI21 Labs, Anthropic, Cohere, Meta, Mistral AI, Stability AI, as well as Amazon via a single API. Amazon Bedrock also provides a broad set of capabilities customers need to build generative AI applications with security, privacy, and responsible AI built in. These capabilities help you build tailored applications for multiple use cases across different industries, helping organizations unlock sustained growth from generative AI while ensuring customer trust and data governance.

To get started, visit the Amazon Bedrock page and see the Amazon Bedrock documentation for more details.

Read more


Amazon Bedrock Prompt Management is now generally available

Earlier this year, we launched Amazon Bedrock Prompt Management in preview to simplify the creation, testing, versioning, and sharing of prompts. Today, we’re announcing its general availability and adding several new key features. First, we are introducing the ability to easily run prompts stored in your AWS account. Amazon Bedrock Runtime APIs Converse and InvokeModel now support executing a prompt using a Prompt identifier. Next, while creating and storing the prompts, you can now specify system prompt, multiple user/assistant messages, and tool configuration in addition to the model choice and inference configuration available in preview — this enables advanced prompt engineers to leverage function calling capabilities provided by certain model families such as the Anthropic Claude models. You can now store prompts for Bedrock Agents in addition to Foundation Models, and we have also introduced the ability to compare two versions of a prompt to quickly review the differences between versions. Finally, we now support custom metadata to be stored with the prompts via the Bedrock SDK, enabling you to store metadata such as author, team, department, etc. to meet your enterprise prompt management needs.
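
As a brief illustration of the new runtime execution, the boto3 sketch below runs a stored prompt by passing its ARN to the Converse API. The prompt ARN, version, and variable name are placeholders.

    import boto3

    runtime = boto3.client("bedrock-runtime")

    # Sketch: execute a managed prompt by referencing its ARN as the model ID.
    # The ARN, version, and the {{genre}} variable are placeholders.
    response = runtime.converse(
        modelId="arn:aws:bedrock:us-east-1:111122223333:prompt/PROMPT_ID:1",
        promptVariables={"genre": {"text": "pop"}},
    )
    print(response["output"]["message"]["content"][0]["text"])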

Amazon Bedrock is a fully managed service that offers a choice of high-performing large language models (LLMs) and other FMs from leading AI companies like AI21 Labs, Anthropic, Cohere, Meta, Mistral AI, Stability AI, as well as Amazon via a single API.

Learn more here and in our documentation. Read our blog here.
 

Read more


Amazon Bedrock now available in the Europe (Zurich) Region

Beginning today, customers can use Amazon Bedrock in the Europe (Zurich) Region to easily build and scale generative AI applications using a variety of foundation models (FMs) and powerful generative AI tooling.

Amazon Bedrock is a fully managed service that offers a choice of high-performing large language models (LLMs) and other FMs from leading AI companies like AI21 Labs, Anthropic, Cohere, Meta, Mistral AI, Stability AI, as well as Amazon via a single API. Amazon Bedrock also provides a broad set of capabilities customers need to build generative AI applications with security, privacy, and responsible AI built in. These capabilities help you build tailored applications for multiple use cases across different industries, helping organizations unlock sustained growth from generative AI while ensuring customer trust and data governance.

To get started, visit the Amazon Bedrock page and see the Amazon Bedrock documentation for more details.

Read more


Anthropic’s Claude 3.5 Haiku model now available in Amazon Bedrock

Anthropic’s Claude 3.5 Haiku model is now available in Amazon Bedrock. Claude 3.5 Haiku is the next generation of Anthropic’s fastest model, combining rapid response times with improved reasoning capabilities, making it ideal for tasks that require both speed and intelligence. Claude 3.5 Haiku improves across every skill set and surpasses even Claude 3 Opus, the largest model in Anthropic’s previous generation, on many intelligence benchmarks—including coding.

With improved instruction following and more accurate tool use, Claude 3.5 Haiku is well suited for entry level user-facing products, specialized sub-agent tasks, and generating personalized experiences from huge volumes of data—like purchase history, pricing, or inventory data. Claude 3.5 Haiku can help efficiently process and categorize large volumes of unstructured data in finance, healthcare, research, and other industries. Claude 3.5 Haiku can also help with use cases such as fast and accurate code suggestions, highly interactive customer service chatbots that require rapid response times, e-commerce solutions, and educational platforms. The new Claude 3.5 Haiku is currently available as a text-only model with support for image inputs to follow.

The Claude 3.5 Haiku model is now available in Amazon Bedrock in the US West (Oregon) Region and in the US East (N. Virginia) Region via cross-region inference. To learn more, read the AWS News launch blog, Claude in Amazon Bedrock product page, and documentation. To get started with Claude, visit the Amazon Bedrock console.

Read more


Amazon Bedrock announces support for cost allocation tags on inference profiles

Amazon Bedrock now enables customers to allocate and track on-demand foundation model usage. Customers can categorize their GenAI inference costs by department, team, or application using AWS cost allocation tags. You can leverage this feature by creating an application inference profile and tagging it.
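
A minimal boto3 sketch, with placeholder names and ARNs: create an application inference profile that copies a foundation model, tag it for cost allocation, then invoke through the profile's ARN so usage rolls up to the tag.

    import boto3

    bedrock = boto3.client("bedrock")

    # Sketch: create a tagged application inference profile. The name, model
    # ARN, and tag key/value are placeholders.
    profile = bedrock.create_inference_profile(
        inferenceProfileName="claims-team-sonnet",
        modelSource={
            "copyFrom": "arn:aws:bedrock:us-east-1::foundation-model/anthropic.claude-3-5-sonnet-20240620-v1:0"
        },
        tags=[{"key": "CostCenter", "value": "claims-processing"}],
    )

    # Invoke through the profile ARN so usage is attributed to the CostCenter tag.
    runtime = boto3.client("bedrock-runtime")
    response = runtime.converse(
        modelId=profile["inferenceProfileArn"],
        messages=[{"role": "user", "content": [{"text": "Hello"}]}],
    )
    print(response["output"]["message"]["content"][0]["text"])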

Amazon Bedrock is a fully managed service that offers a choice of high-performing foundation models from leading AI companies via a single API. Amazon Bedrock also provides a broad set of capabilities customers need to build generative AI applications with security, privacy, and responsible AI capabilities built in. These capabilities help you build tailored applications for multiple use cases across different industries, helping organizations unlock sustained growth from generative AI while ensuring customer trust and data governance.

For more information about Amazon Bedrock, visit the Amazon Bedrock page and see the Amazon Bedrock documentation for more details. For more information about the AWS Regions where application inference profiles are available, see this page.

Read more


Fine-tuning for Anthropic’s Claude 3 Haiku in Amazon Bedrock is now generally available

Fine-tuning for Anthropic's Claude 3 Haiku model in Amazon Bedrock is now generally available. Amazon Bedrock is the only fully managed service that provides you with the ability to fine-tune Claude models. Claude 3 Haiku is Anthropic’s most compact model, and is one of the most affordable and fastest options on the market for its intelligence category, according to Anthropic. By providing your own task-specific training dataset, you can fine-tune and customize Claude 3 Haiku to boost model accuracy, quality, and consistency to further tailor generative AI for your business.

Fine-tuning allows Claude 3 Haiku to excel in areas crucial to your business compared to more general models by encoding company and domain knowledge. By fine-tuning Claude 3 Haiku within your secure AWS environment and adapting its knowledge to your exact business requirements, you can generate higher-quality results and create unique user experiences that reflect your company’s proprietary information, brand, products, and more. You can also enhance performance for domain-specific actions such as classification, interactions with custom APIs, or industry-specific data interpretation. Amazon Bedrock makes a separate copy of the base foundation model that is accessible only by you and trains this private copy of the model.

Fine-tuning for Anthropic's Claude 3 Haiku in Amazon Bedrock is now generally available in the US West (Oregon) AWS Region. To learn more, read the launch blog, technical blog, and documentation. To get started with Claude 3 in Amazon Bedrock, visit the Amazon Bedrock console.

Read more


amazon-bedrock-partyrock

PartyRock improves app discovery and announces upcoming free daily use

Starting today, PartyRock supports improved app discovery using search, making it even easier to explore and build with generative AI. In addition, a new and improved daily free usage model will replace the current free trial grant in 2025, further empowering everyone to build AI apps on PartyRock with recurring free daily use.

Previously, AWS offered new PartyRock users a free trial for a limited time; starting in 2025, a free daily use grant will let you access and experiment with PartyRock apps without worrying about exhausting free trial credits. Since its launch in November 2023, more than half a million apps have been created by PartyRock users. Until now, discovering those apps required link or playlist sharing, or browsing featured apps on the PartyRock Discover page. Users can now use the search bar on the homepage to explore all publicly published PartyRock apps.

Discover how you can build apps to help improve your everyday individual productivity and experiment with these new features by trying PartyRock today. To learn more, read our AWS News Blog.
 

Read more


amazon-braket

Announcing the Quantum Embark advisory program for customers new to quantum computing

AWS announces Quantum Embark, a new program aimed at getting customers ready for quantum computing by providing an expert-led approach as they begin their quantum computing journey. With this program, customers can explore the value of quantum computing for their business, understand the pace of development of the technology, and prepare for its impact. Quantum Embark is designed to cut through the hype and focus on actionable outcomes.

Quantum computing has the potential to revolutionize industries by solving problems that are beyond the ability of even the most powerful computers. However, to get buy-in from internal stakeholders and establish a long-term quantum roadmap, customers need trustworthy guidance specific to their most important use cases. Quantum Embark is a program of advisory services consisting of three modules: (1) Use Case Discovery, which focuses on the most tangible opportunities; (2) Technical Enablement, where users get hands-on experience with quantum computing via Amazon Braket; and (3) Deep Dive, which deepens customers’ understanding of mapping quantum algorithms to target applications identified in the Use Case Discovery module. Upon completion, customers have a reusable runbook consisting of recommended tooling, a projected roadmap and documentation to engage leadership and line of business teams for target application areas.

With Quantum Embark, you only pay for the modules you choose with no long-term commitments. Check out our blog to learn how some customers are already getting value out of this program. Visit the Braket console or contact your AWS Account Team to get started.

Read more


amazon-cloudfront

Amazon CloudFront announces origin modifications using CloudFront Functions

Amazon CloudFront now supports origin modification within CloudFront Functions, enabling you to conditionally change or update origin servers on each request. You can now write custom logic in CloudFront Functions to overwrite origin properties, use another origin in your CloudFront distribution, or forward requests to any public HTTP endpoint.

Origin modification allows you to create custom routing policies for how traffic should be forwarded to your application servers on cache misses. For example, you can use origin modification to determine the geographic location of a viewer and then forward the request, on cache misses, to the closest AWS Region running your application. This ensures the lowest possible latency for your application. Previously, you had to use AWS Lambda@Edge to modify origins, but now this same capability is available in CloudFront Functions with better performance and lower costs. Origin modification supports updating all existing origin capabilities such as setting custom headers, adjusting timeouts, setting Origin Shield, or changing the primary origin in origin groups.

Origin modification is now available within CloudFront Functions at no additional charge. For more information, see the CloudFront Developer Guide. For examples of how to use origin modification, see our GitHub examples repository.

Read more


Amazon CloudFront now supports Anycast Static IPs

Amazon CloudFront introduces Anycast Static IPs, providing customers with a dedicated list of IP addresses for connecting to all CloudFront edge locations worldwide.

Typically, CloudFront uses rotating IP addresses to serve traffic. Customers implementing Anycast Static IPs instead receive a dedicated list of static IP addresses for their workloads. This enables customers to share that list with partners and their own customers, enhancing security and simplifying network management across various use cases. For example, a common use case is allow-listing the static IP addresses in network firewalls.

CloudFront supports Anycast Static IPs from all edge locations, excluding the Amazon Web Services China (Beijing) Region, operated by Sinnet, and the Amazon Web Services China (Ningxia) Region, operated by NWCD. CloudFormation support will be coming soon. Learn more about Anycast Static IPs here, and for more information, please refer to the Amazon CloudFront Developer Guide. For pricing, please see CloudFront Pricing.

Read more


Amazon CloudFront now supports additional log formats and destinations for access logs

Amazon CloudFront announces enhancements to its standard access logging capabilities, providing customers with new log configuration and delivery options. Customers can now deliver CloudFront access logs directly to two new destinations: Amazon CloudWatch Logs and Amazon Data Firehose. Customers can select from an expanded list of log output formats, including JSON and Apache Parquet (for logs delivered to S3). Additionally, they can directly enable automatic partitioning of logs delivered to S3, select specific log fields, and set the order in which they are included in the logs.

Until today, customers had to write custom logic to partition logs, convert log formats, or deliver logs to CloudWatch Logs or Data Firehose. The new logging capabilities provide native log configurations, eliminating the need for custom log processing. For example, customers can now directly enable features like Apache Parquet format for CloudFront logs delivered to S3 to improve query performance when using services like Amazon Athena and AWS Glue.

Additionally, customers enabling access log delivery to CloudWatch Logs will receive 750 bytes of logs free for each CloudFront request. Standard access log delivery to Amazon S3 remains free. Please refer to the 'Additional Features' section of the CloudFront pricing page for more details.

Customers can now enable CloudFront standard logs to S3, CloudWatch Logs and Data Firehose through the CloudFront console or APIs. CloudFormation support will be coming soon. For detailed information about the new access log features, please refer to the Amazon CloudFront Developer Guide.
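
Under the hood this uses the CloudWatch vended-log delivery model; the boto3 sketch below wires a distribution's access logs to a CloudWatch Logs log group. All ARNs are placeholders, and the logType value is an assumption to confirm in the CloudFront documentation.

    import boto3

    logs = boto3.client("logs", region_name="us-east-1")

    # Sketch: declare the distribution as a log source, declare a log-group
    # destination, then connect the two. All ARNs are placeholders.
    logs.create_delivery_source(
        name="cf-dist-access-logs",
        resourceArn="arn:aws:cloudfront::111122223333:distribution/EDFDVBD6EXAMPLE",
        logType="ACCESS_LOGS",                    # assumed log type name
    )
    destination = logs.create_delivery_destination(
        name="cf-logs-to-cwl",
        deliveryDestinationConfiguration={
            "destinationResourceArn": "arn:aws:logs:us-east-1:111122223333:log-group:/cf/access-logs"
        },
    )
    logs.create_delivery(
        deliverySourceName="cf-dist-access-logs",
        deliveryDestinationArn=destination["deliveryDestination"]["arn"],
    )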

Read more


Amazon CloudFront now supports gRPC delivery

Amazon CloudFront now supports delivery for gRPC applications. gRPC is a modern, open-source remote procedure call (RPC) framework that allows bidirectional communication between a client and a server over HTTP/2 connections. Applications built with gRPC benefit from improved latency thanks to efficient bidirectional streaming and a binary message format, Protocol Buffers, whose payloads are smaller than traditional formats like the JSON used with RESTful APIs.

gRPC reduces communication latency for applications that require continuous client-server interactions for a responsive user experience. For example, a ride-sharing application can use a gRPC service to automatically update the location of requested vehicles on the user's device without the user having to request updates each time. gRPC addresses some of the latency challenges associated with using REST APIs for bidirectional communication. With REST APIs, clients establish a connection to the server, make a request, receive a response, and then terminate the connection, which introduces extra latency on each request. With gRPC, the client and server can send multiple messages independently and concurrently using a single connection. Using CloudFront to deliver gRPC applications, customers receive the full advantages of gRPC, plus CloudFront's worldwide reach, speed, and security.

CloudFront supports gRPC from all edge locations. This excludes Amazon Web Services China (Beijing) region, operated by Sinnet, and the Amazon Web Services China (Ningxia) region, operated by NWCD. Requests and data transfer fees apply to this feature. For further details, visit the CloudFront pricing page and the Developer Guide.
 

Read more


Amazon CloudFront announces VPC origins

Amazon CloudFront announces Virtual Private Cloud (VPC) origins, a new feature that allows customers to use CloudFront to deliver content from applications hosted in VPC private subnets. With VPC origins, customers can have their Application Load Balancers (ALB), Network Load Balancers (NLB), and EC2 Instances in a private subnet that is accessible only through their CloudFront distributions. This makes it easy for customers to secure their web applications, allowing them to focus on growing their businesses while improving security and maintaining high-performance and global scalability with CloudFront.

AWS customers use CloudFront to deliver highly performant and globally scalable applications. Customers serving content from Amazon S3, AWS Elemental services, and Lambda Function URLs can use Origin Access Control as a managed solution to secure their origins. For origins in VPCs, customers previously had to keep their origins in public subnets and use Access Control Lists and other mechanisms to restrict access, spending ongoing effort to implement and maintain these solutions, which was undifferentiated work. VPC origins streamlines security management and reduces operational complexity, making it easy to use CloudFront as the single front door for applications.

VPC origins are available in AWS Commercial Regions only, and the full list of supported AWS Regions is available here. There is no additional cost for using VPC origins with CloudFront. CloudFormation support will be coming soon. To learn more, visit CloudFront VPC origins.

Read more


AWS Application Load Balancer announces CloudFront integration with built-in WAF

We are announcing a new one-click integration on Application Load Balancer (ALB) to attach an Amazon CloudFront distribution from the ALB console. This enables the easy use of CloudFront as a distributed single point of entry for your application that ingests, absorbs, and filters all inbound traffic before it reaches your ALB. The feature also enables an AWS WAF preconfigured WebACL with basic security protections as a first line of defense against common web threats. Overall, you can easily enable seamless protections from ALB, CloudFront, and AWS WAF with minimal configuration to secure your application.

Previously, to accelerate and secure your applications, you had to configure a CloudFront distribution with proper caching, request forwarding, and security protections that connected to your ALB on the right port and protocol. This required navigating between multiple services and manual configuration. With this new integration, the ALB console handles the creation and configuration of ALB, CloudFront, and AWS WAF. CloudFront honors your application’s Cache-Control headers to cache content like HTML, CSS/JavaScript, and images close to viewers, improving performance and reducing load on your application. With an additional checkbox, you can attach a security group configured to allow traffic from CloudFront IP addresses; if maintained as the only inbound rule, it ensures all requests are processed and inspected by CloudFront and WAF.

This new integration is available for both new and existing Application Load Balancers. Standard ALB, CloudFront, and AWS WAF pricing apply. The feature is available in all commercial AWS Regions. To learn more about this feature, visit the ALB and CloudFront sections in the AWS User Guide.

Read more


Amazon CloudFront no longer charges for requests blocked by AWS WAF

Effective October 25, 2024, all CloudFront requests blocked by AWS WAF are free of charge. With this change, CloudFront customers will never incur request fees or data transfer charges for requests blocked by AWS WAF. This update requires no changes to your applications and applies to all CloudFront distributions using AWS WAF.

AWS WAF will continue billing for evaluating and blocking these requests. To learn more about using AWS WAF with CloudFront, visit Use AWS WAF protections in the CloudFront Developer Guide.

Read more


AWS announces new edge location in Qatar

Amazon Web Services (AWS) announces expansion in Qatar by launching a new Amazon CloudFront edge location in Doha, Qatar. The new AWS edge location brings the full suite of benefits provided by Amazon CloudFront, a secure, highly distributed, and scalable content delivery network (CDN) that delivers static and dynamic content, APIs, and live and on-demand video with low latency and high performance.

All Amazon CloudFront edge locations are protected against infrastructure-level DDoS threats with AWS Shield that uses always-on network flow monitoring and in-line mitigation to minimize application latency and downtime. You also have the ability to add additional layers of security for applications to protect them against common web exploits and bot attacks by enabling AWS Web Application Firewall (WAF).

Traffic delivered from this edge location is included within the Middle East region pricing. To learn more about AWS edge locations, see CloudFront edge locations.

Read more


amazon-cloudwatch

Amazon CloudWatch now provides centralized visibility into telemetry configurations

Amazon CloudWatch now offers centralized visibility into critical AWS service telemetry configurations, such as Amazon VPC Flow Logs, Amazon EC2 Detailed Metrics, and AWS Lambda Traces. This enhanced visibility enables central DevOps teams, system administrators, and service teams to identify potential gaps in their infrastructure monitoring setup. The telemetry configuration auditing experience seamlessly integrates with AWS Config to discover AWS resources, and can be turned on for the entire organization using the new AWS Organizations integration with Amazon CloudWatch.

With visibility into telemetry configurations, you can identify monitoring gaps that might have been missed in your current setup. For example, this helps you identify gaps in your EC2 detailed metrics so that you can address them and easily detect short-lived performance spikes and build responsive auto-scaling policies. You can audit telemetry configuration coverage at both resource type and individual resource levels, refining the view by filtering across specific accounts, resource types, or resource tags to focus on critical resources.

The telemetry configurations auditing experience is available in US East (N. Virginia), US West (Oregon), US East (Ohio), Asia Pacific (Tokyo), Asia Pacific (Singapore), Asia Pacific (Sydney), Europe (Frankfurt), Europe (Ireland), and Europe (Stockholm) regions. There is no additional cost to turn on the new experience, including for AWS Config.

You can get started with auditing your telemetry configurations using the Amazon CloudWatch Console, by clicking on Telemetry config in the navigation panel, or programmatically using the API/CLI. To learn more, visit our documentation.

Read more


AWS Config now supports a service-linked recorder

AWS Config added support for a service-linked recorder, a new type of AWS Config recorder that is managed by an AWS service and can record configuration data on service-specific resources, such as the new Amazon CloudWatch telemetry configurations audit. By enabling the service-linked recorder in Amazon CloudWatch, you gain centralized visibility into critical AWS service telemetry configurations, such as Amazon VPC Flow Logs, Amazon EC2 Detailed Metrics, and AWS Lambda Traces.

With service-linked recorders, an AWS service can deploy and manage an AWS Config recorder on your behalf to discover resources and use the configuration data to provide differentiated features. For example, the Amazon CloudWatch managed service-linked recorder helps you identify monitoring gaps within specific critical resources in your organization, providing a centralized, single-pane view of telemetry configuration status. Service-linked recorders are immutable, which ensures consistency, prevents configuration drift, and simplifies the experience. Service-linked recorders operate independently of any existing AWS Config recorder, if one is enabled. This allows you to independently manage your AWS Config recorder for your specific use cases, while authorized AWS services manage the service-linked recorder for feature-specific requirements.

The Amazon CloudWatch managed service-linked recorder is now available in the US East (N. Virginia), US West (Oregon), US East (Ohio), Asia Pacific (Tokyo), Asia Pacific (Singapore), Asia Pacific (Sydney), Europe (Frankfurt), Europe (Ireland), and Europe (Stockholm) Regions. The AWS Config service-linked recorder specific to the Amazon CloudWatch telemetry configuration feature is available to customers at no additional cost.

To learn more, please refer to our documentation.
 

Read more


Amazon CloudWatch and Amazon OpenSearch Service launch an integrated analytics experience

Amazon Web Services announces a new integrated analytics experience and zero-ETL integration between Amazon CloudWatch and Amazon OpenSearch Service for customers to get the best of both services. CloudWatch customers can now leverage OpenSearch’s Piped Processing Language (PPL) and OpenSearch SQL. Additionally, CloudWatch customers can accelerate troubleshooting with out-of-the-box curated dashboards for vended logs like Amazon Virtual Private Cloud (VPC), AWS CloudTrail, and AWS WAF. OpenSearch customers can now analyze CloudWatch Logs without having to duplicate data.

With this integration, CloudWatch Logs customers have two more query languages for log analytics, in addition to CloudWatch Logs Insights QL. Customers can use SQL to analyze data, correlate logs using JOINs and sub-queries, and use SQL functions, including JSON, mathematical, datetime, and string functions, for intuitive log analytics. They can also use OpenSearch PPL to filter, aggregate, and analyze their data. With a few clicks, CloudWatch Logs customers can create OpenSearch dashboards for VPC, WAF, and CloudTrail logs to monitor, analyze, and troubleshoot using visualizations derived from the logs. OpenSearch customers no longer have to copy logs from CloudWatch for analysis or create ETL pipelines. Now, they can use OpenSearch Discover to analyze CloudWatch logs in place and build indexes and dashboards on CloudWatch Logs.

This is now available in the regions where OpenSearch Service direct query is available. Please read pricing and free tier details on Amazon CloudWatch Pricing, and OpenSearch Service Pricing. To get started, please refer to Amazon CloudWatch Logs vended dashboard and Amazon OpenSearch Service Developer Guide.

Read more


Amazon CloudWatch Container Insights launches enhanced observability for Amazon ECS

Amazon CloudWatch Container Insights introduces enhanced observability for Amazon Elastic Container Service (ECS) running on Amazon EC2 and AWS Fargate, with out-of-the-box detailed metrics from the cluster level down to the container level, to deliver faster problem isolation and troubleshooting.

Enhanced observability enables customers to visually drill up and down across container layers and directly spot issues like memory leaks in individual containers, reducing mean time to resolution. With enhanced observability, customers can now view their clusters, services, tasks, or containers sorted by resource consumption, quickly identify anomalies, and mitigate risks proactively before the end-user experience is impacted. Using Container Insights’ new landing page, customers can easily understand the overall health and performance of clusters across multiple accounts, identify the ones operating under high utilization, and pinpoint the root cause by browsing directly to the related detailed dashboard views, saving time and effort.

You can get started with enhanced observability at the cluster level or account level by selecting the “Enhanced” radio button in the Amazon ECS console, or through the AWS CLI, CloudFormation, and CDK. You can also collect instance-level metrics from EC2 by launching the CloudWatch agent as a daemon service on your Container Insights-enabled clusters.

Container Insights is available in all public AWS Regions, including the AWS GovCloud (US) Regions, China (Beijing, operated by Sinnet), and China (Ningxia, operated by NWCD). Container Insights with enhanced observability for ECS comes with a flat metric pricing – see pricing page for details. For further information, visit the Container Insights documentation.

Read more


Amazon CloudWatch adds network performance monitoring for AWS workloads using flow monitors

Amazon CloudWatch Network Monitoring now allows you to monitor network performance of your AWS workloads by using flow monitors. The new feature provides near real-time visibility of network performance for workloads between compute instances such as Amazon EC2 and Amazon EKS, and AWS services such as Amazon S3, Amazon RDS, and Amazon DynamoDB, enabling you to rapidly detect and attribute network-driven impairments for your workloads.

CloudWatch Network Monitoring uses flow monitors to provide TCP-based performance metrics for packet loss and latency, and network health indicators of your AWS workloads to help you quickly pinpoint the root cause of issues. Flow monitors help you determine if a problem is caused by your application stack or by the underlying AWS infrastructure, so that you can proactively monitor your end user experience. If you need to contact AWS Support, Network Monitoring provides AWS Support with the same network health information, along with details about the underlying infrastructure, to help accelerate troubleshooting and resolution.

We are consolidating CloudWatch Internet Monitor and CloudWatch Network Monitor within CloudWatch Network Monitoring, which now includes flow monitors, synthetic monitors, and internet monitors. Use flow monitors to passively monitor the network performance of AWS workloads, synthetic monitors to actively monitor hybrid network segments, and internet monitors to monitor internet segments.

For the full list of AWS Regions where Network Monitoring for AWS workloads is available, visit the Regions list. To learn more, visit the Amazon CloudWatch Network Monitoring documentation.
 

Read more


AWS announces Amazon CloudWatch Database Insights

AWS announces the general availability of Amazon CloudWatch Database Insights with support for Amazon Aurora PostgreSQL and Amazon Aurora MySQL. Database Insights is a database observability solution that provides a curated experience designed for DevOps engineers, application developers, and database administrators (DBAs) to expedite database troubleshooting and gain a holistic view into their database fleet health.

Database Insights consolidates logs and metrics from your applications, your databases, and the operating systems on which they run into a unified view in the console. Using its pre-built dashboards, recommended alarms, and automated telemetry collection, you can monitor the health of your database fleets and use a guided troubleshooting experience to drill down to individual instances for root-cause analysis. Application developers can correlate the impact of database dependencies with the performance and availability of their business-critical applications, because they can drill down from their application performance view in Amazon CloudWatch Application Signals to the specific dependent database in Database Insights.

You can get started with Database Insights by enabling it on your Aurora clusters using the Aurora service console, AWS APIs, and SDKs. Database Insights delivers database health monitoring aggregated at the fleet level, as well as instance-level dashboards for detailed database and SQL query analysis.
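
As a hedged sketch of the programmatic path, the call below switches an Aurora cluster to the advanced Database Insights mode. The parameter names, values, and retention requirement are assumptions to verify in the RDS API reference; the cluster identifier is a placeholder.

    import boto3

    rds = boto3.client("rds")

    # Sketch: enable advanced Database Insights on an Aurora cluster.
    # Parameter names/values are assumptions; the cluster ID is a placeholder.
    rds.modify_db_cluster(
        DBClusterIdentifier="my-aurora-cluster",
        DatabaseInsightsMode="advanced",
        EnablePerformanceInsights=True,
        PerformanceInsightsRetentionPeriod=465,   # assumed requirement for advanced mode
        ApplyImmediately=True,
    )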

Database Insights is available in all public AWS Regions and applies a new vCPU-based pricing – see pricing page for details. For further information, visit the Database Insights documentation.
 

Read more


Amazon CloudWatch adds context to observability data in service consoles, accelerating analysis

Amazon CloudWatch now adds context to observability data, making it much easier for IT operators, application developers, and Site Reliability Engineers (SREs) to navigate related telemetry, visualize relationships between resources, and accelerate analysis. This new feature transforms disparate metrics and logs into real-time insights, to identify root cause of issues faster and improve operational efficiency.

With this feature, Amazon CloudWatch now automatically visualizes the relationships within observability data and the underlying AWS resources, such as Amazon EC2 instances and AWS Lambda functions. The feature is integrated across the AWS Management Console and accessible from multiple entry points, including CloudWatch widgets, CloudWatch alarms, CloudWatch Application Signals, and CloudWatch Container Insights. Selecting it opens a side panel where you can explore and dive deeper into related metrics and logs, all without leaving your current view. By selecting other metrics or resources of interest within the panel, you can streamline your troubleshooting process.

This new capability is enabled by default in all commercial AWS Regions. To view and explore related telemetry and resources, we recommend updating to the latest version of Amazon CloudWatch agent.

To learn more, visit the Amazon CloudWatch product page or view the documentation.

Read more


Amazon CloudWatch Synthetics now supports Playwright runtime to create canaries with NodeJS

CloudWatch Synthetics, which continuously monitors web applications and APIs by running scripted canaries to help you detect issues before they impact end users, now supports the Playwright framework for creating NodeJS canaries, enabling comprehensive monitoring and diagnosis of complex user journeys and issues that are challenging to automate with other frameworks.

Playwright is an open-source automation library for testing web applications. You can now create multi-tab workflows in a canary using the Playwright runtime, which comes with the advantage of troubleshooting failed runs using logs stored directly in CloudWatch Logs in your AWS account. This replaces the previous method of storing logs as text files and enables you to leverage CloudWatch Logs Insights for query-based filtering, aggregation, and pattern analysis. You can now query CloudWatch Logs for your canaries using the canary run ID or step name, making the troubleshooting process faster and more precise than one relying on timestamp correlation. Playwright-based canaries also generate artifacts like reports, metrics, and HAR files, even when a canary times out, ensuring you have the data needed for root cause analysis in those scenarios. Additionally, the new runtime simplifies canary configuration by allowing customization through a JSON file, removing the need to call a library function in the canary code.

The Playwright runtime is available for creating canaries in NodeJS in all commercial Regions at no additional cost to users.

To learn more about the runtime, see documentation, or refer to the user guide to get started with CloudWatch Synthetics.

Read more


AWS Lambda supports application performance monitoring (APM) via CloudWatch Application Signals

AWS Lambda now supports Amazon CloudWatch Application Signals, an application performance monitoring (APM) solution, enabling developers and operators to easily monitor the health and performance of their serverless applications built using Lambda.

Customers want an easy way to quickly identify and troubleshoot performance issues to minimize the mean time to recovery (MTTR) and operational costs of running serverless applications. Now, Application Signals provides pre-built, standardized dashboards for critical application metrics (such as throughput, availability, latency, faults, and errors), correlated traces, and interactions between the Lambda function and its dependencies (such as other AWS services), without requiring any manual instrumentation or code changes from developers. This gives operators a single-pane-of-glass view of the health of the application and enables them to drill down to establish the root cause of performance anomalies. You can also create Service Level Objectives (SLOs) in Application Signals to closely track the performance KPIs of critical operations in your application, enabling you to easily identify and triage operations that do not meet your business KPIs. Application Signals auto-instruments your Lambda function using enhanced AWS Distro for OpenTelemetry (ADOT) libraries, delivering better performance (cold start latency and memory consumption) than before.

To get started, visit the Configuration tab in Lambda console and enable Application Signals for your function with just one click in the “Monitoring and operational tools” section. To learn more, visit the launch blog post, Lambda developer guide, and Application Signals developer guide.
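
For teams automating this outside the console, a hedged sketch: Application Signals for Lambda relies on the enhanced ADOT layer and an exec-wrapper environment variable, which could be attached with a call like the one below. The layer ARN is a placeholder; look up the published ADOT layer ARN for your Region and runtime, and confirm the enablement steps in the developer guide.

    import boto3

    lam = boto3.client("lambda")

    # Sketch: attach the ADOT auto-instrumentation layer and exec wrapper to a
    # function. The layer ARN is a placeholder to replace with the published
    # ARN for your Region/runtime.
    lam.update_function_configuration(
        FunctionName="my-service-fn",
        Layers=["arn:aws:lambda:us-east-1:111122223333:layer:AWSOpenTelemetryDistroPython:1"],
        Environment={"Variables": {"AWS_LAMBDA_EXEC_WRAPPER": "/opt/otel-instrument"}},
    )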

Application Signals for Lambda is available in all commercial AWS Regions where Lambda and CloudWatch Application Signals are available.
 

Read more


Amazon CloudWatch Internet Monitor adds AWS Local Zones support for VPC subnets

Today, Amazon CloudWatch Internet Monitor introduces support for select AWS Local Zones. Now, you can monitor internet traffic performance for VPC subnets deployed in Local Zones.

With this new feature, you can also view optimization suggestions that include Local Zones. On the Optimize tab in the Internet Monitor console, select the toggle to include Local Zones in traffic optimization suggestions for your application. Additionally, you can compare your current configuration with other supported Local Zones. Select the option to see more optimization suggestions, and then choose specific Local Zones to compare. By comparing latency differences, you can determine the proposed best configuration for your traffic.

At launch, CloudWatch Internet Monitor supports the following Local Zones: us-east-1-dfw-2a, us-east-1-mia-2a, us-east-1-qro-1a, us-east-1-lim-1a, us-east-1-atl-2a, us-east-1-bue-1a, us-east-1-mci-1a, us-west-2-lax-1a, us-west-2-lax-1b, and af-south-1-los-1a.

To learn more, visit the Internet Monitor user guide documentation.

Read more


Amazon CloudWatch launches full visibility into application transactions

AWS announces the general availability of an enhanced search and analytics experience in CloudWatch Application Signals. This feature empowers developers and on-call engineers with complete visibility into application transaction spans, which are the building blocks of distributed traces that capture detailed interactions between users and various application components.

This feature offers three core benefits. First, developers can answer questions related to application performance or end-user impact through an interactive visual editor and enhancements to Logs Insights queries. They can correlate spans with end-user issues using attributes like customer name or order number, and with the new JSON parse and unnest functions in Logs Insights, they can link transactions to business events such as failed payments and troubleshoot them. Second, developers can diagnose rarely occurring issues, such as p99 latency spikes in APIs, with the enhanced troubleshooting capabilities in Amazon CloudWatch Application Signals that correlate application metrics with comprehensive transaction spans. Finally, CloudWatch Logs offers advanced features for transaction spans, including data masking, forwarding via subscription filters, and metric extraction. You can enable these capabilities for existing spans sent to X-Ray or by sending spans to a new OTLP (OpenTelemetry Protocol) endpoint for traces. This allows you to enhance your observability while maintaining flexibility in your setup.

You can search and analyze spans in all Regions where Application Signals is available. A new pricing option is also available, encompassing Application Signals, X-Ray traces, and complete visibility into transaction spans; see Amazon CloudWatch pricing. Refer to documentation for more details.
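For illustration, a hedged sketch of searching spans by a business attribute with boto3. The "aws/spans" log group name follows the CloudWatch documentation, but the attribute path (attributes.order_id) is a hypothetical example that depends on how your application annotates its spans:

```python
import time
import boto3

logs = boto3.client("logs")

# Query transaction spans for a specific order ID (hypothetical attribute).
query = logs.start_query(
    logGroupNames=["aws/spans"],
    startTime=int(time.time()) - 3600,
    endTime=int(time.time()),
    queryString="""
        fields @timestamp, name, duration
        | filter attributes.order_id = "12345"
        | sort @timestamp desc
        | limit 20
    """,
)
print(query["queryId"])
```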
 

Read more


Amazon CloudWatch Synthetics now automatically deletes Lambda resources associated with canaries

Amazon CloudWatch Synthetics, an outside-in monitoring capability that continually verifies your customers’ experience by running snippets of code on AWS Lambda called canaries, will now automatically delete your associated Lambda resources when you delete Synthetics canaries, minimizing the manual upkeep required to manage AWS resources in your account.

CloudWatch Synthetics creates Lambda functions to execute canaries that monitor the health and performance of your web applications or API endpoints. When you delete a canary, the Lambda function and its layers are no longer usable. With this release, these Lambda resources will be automatically removed when a canary is deleted, reducing the housekeeping needed to maintain your Synthetics canaries. Canaries deleted via the AWS console automatically clean up related Lambda resources. Any new canaries created via the CLI, SDK, or CloudFormation are automatically opted in to this feature, whereas canaries created before this launch need to be explicitly opted in.

This feature is available in all commercial Regions, the AWS GovCloud (US) Regions, and the China Regions at no additional cost to customers.

To learn more about the delete behavior of canaries, see the documentation, or refer to the user guide and One Observability Workshop to get started with CloudWatch Synthetics.
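For canaries created before this launch, the explicit opt-in looks roughly like the following boto3 call; the canary name is hypothetical, and the DeleteLambda flag is the opt-in described in this announcement:

```python
import boto3

synthetics = boto3.client("synthetics")

# Delete a canary and its underlying Lambda function and layers in one call.
synthetics.delete_canary(
    Name="my-canary",   # hypothetical canary name
    DeleteLambda=True,  # also remove the associated Lambda resources
)
```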
 

Read more


Amazon CloudWatch Logs announces field indexes and enhanced log group selection in Logs Insights

Amazon CloudWatch Logs introduces field indexes and enhanced log group selection to accelerate log analysis. You can now index critical log attributes like requestId and transactionId to improve query performance by scanning only the relevant indexed data. This means faster troubleshooting and easier identification of trends. You can create up to 20 field indexes per log group, and once defined, all future logs matching the defined fields remain indexed for up to 30 days. Additionally, CloudWatch Logs Insights now supports querying up to 10,000 log groups across one or more accounts linked via cross-account observability.

Customers using field indexes will benefit from faster query execution times when searching across vast amounts of logs. CloudWatch Logs Insights queries using “filter field = value” syntax automatically leverage indexes when available. Combined with enhanced log group selection, customers can now gain faster insights across a much larger set of logs in Logs Insights. Customers can select up to 10,000 log groups via either a log group prefix or the "All" log groups option. To further optimize query performance and costs, customers can use the new "filterIndex" command to limit queries to indexed data only.

Field indexes are available in all AWS Regions where CloudWatch Logs is available and are included as part of standard log class ingestion at no additional cost.

To get started, define an index policy at the account level or per log group in the AWS console, or programmatically via the API or CLI. See documentation to learn more about field indexes.
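A minimal sketch of the programmatic path, assuming the PutIndexPolicy request shape shown here and taking the filterIndex syntax from this announcement; the log group name is hypothetical, and both should be verified against the API reference:

```python
import json
import boto3

logs = boto3.client("logs")

# Create a field index policy for a single log group (up to 20 fields).
logs.put_index_policy(
    logGroupIdentifier="my-application-logs",  # hypothetical log group
    policyDocument=json.dumps({"Fields": ["requestId", "transactionId"]}),
)

# A "filter field = value" query picks up the index automatically;
# filterIndex restricts the scan to indexed data only.
query_string = """
    filterIndex requestId = "1a2b3c"
    | fields @timestamp, @message
    | sort @timestamp desc
"""
```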
 

Read more


Amazon CloudFront now supports additional log formats and destinations for access logs

Amazon CloudFront announces enhancements to its standard access logging capabilities, providing customers with new log configuration and delivery options. Customers can now deliver CloudFront access logs directly to two new destinations: Amazon CloudWatch Logs and Amazon Data Firehose. Customers can select from an expanded list of log output formats, including JSON and Apache Parquet (for logs delivered to S3). Additionally, they can directly enable automatic partitioning of logs delivered to S3, select specific log fields, and set the order in which they are included in the logs.

Until today, customers had to write custom logic to partition logs, convert log formats, or deliver logs to CloudWatch Logs or Data Firehose. The new logging capabilities provide native log configurations, eliminating the need for custom log processing. For example, customers can now directly enable features like Apache Parquet format for CloudFront logs delivered to S3 to improve query performance when using services like Amazon Athena and AWS Glue.

Additionally, customers enabling access log delivery to CloudWatch Logs will receive 750 bytes of logs free for each CloudFront request. Standard access log delivery to Amazon S3 remains free. Please refer to the 'Additional Features' section of the CloudFront pricing page for more details.

Customers can now enable CloudFront standard logs to S3, CloudWatch Logs, and Data Firehose through the CloudFront console or APIs. CloudFormation support is coming soon. For detailed information about the new access log features, please refer to the Amazon CloudFront Developer Guide.
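As a sketch under stated assumptions, the API path appears to build on the CloudWatch Logs vended-log delivery operations. The names, distribution ARN, and the ACCESS_LOGS logType value below are assumptions drawn from this announcement; verify them against the CloudFront Developer Guide before use:

```python
import boto3

logs = boto3.client("logs")

# Register the CloudFront distribution as a delivery source (values hypothetical).
logs.put_delivery_source(
    name="cf-access-logs-source",
    resourceArn="arn:aws:cloudfront::111122223333:distribution/EDFDVBD6EXAMPLE",
    logType="ACCESS_LOGS",
)

# Register a CloudWatch Logs log group as the destination, in JSON format.
destination = logs.put_delivery_destination(
    name="cf-access-logs-destination",
    outputFormat="json",
    deliveryDestinationConfiguration={
        "destinationResourceArn": "arn:aws:logs:us-east-1:111122223333:log-group:cf-access-logs",
    },
)

# Connect the source to the destination to start delivery.
logs.create_delivery(
    deliverySourceName="cf-access-logs-source",
    deliveryDestinationArn=destination["deliveryDestination"]["arn"],
)
```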

Read more


Amazon CloudWatch Application Signals launches support for Runtime Metrics

Today, AWS announces the general availability of runtime metrics support in Amazon CloudWatch Application Signals, an OpenTelemetry (OTel) compatible application performance monitoring (APM) feature in CloudWatch. You can view runtime metrics like garbage collection, memory usage, and CPU usage for your Java or Python applications to troubleshoot issues such as high CPU utilization or memory leaks, which can disrupt the end-user experience.

Application Signals simplifies troubleshooting application performance against key business or service level objectives (SLOs) for AWS applications. Without any source code changes, Application Signals collects traces, application metrics (errors/latency/throughput), logs, and now runtime metrics, bringing them together in a single-pane-of-glass view.

Runtime metrics enable real-time monitoring of your application’s resource consumption, such as memory and CPU usage. With Application Signals, you can understand whether anomalies in runtime metrics have any impact on your end users by correlating them with application metrics such as errors/latency/throughput. For example, you can identify whether a service latency spike is the result of an increase in garbage collection pauses by viewing these metric graphs side by side. Additionally, you can identify thread contention, track memory allocation patterns, and pinpoint memory or CPU spikes that may lead to application slowdowns or crashes, impacting the end-user experience.

Runtime metrics support is available in all Regions where Application Signals is available. Runtime metrics are charged based on Application Signals pricing; see Amazon CloudWatch pricing.

To learn more, see documentation to enable Amazon CloudWatch Application Signals.

Read more


CloudWatch RUM now supports percentile aggregations and simplified troubleshooting with web vitals metrics

CloudWatch RUM, which captures real-time data on web application performance and user interactions to help you quickly detect and resolve issues impacting the user experience, now supports percentile aggregation of web vitals metrics and simplified event-based troubleshooting directly from a web vitals anomaly.

Google uses the 75th percentile (p75) of a web page’s Core Web Vitals (Largest Contentful Paint, First Input Delay, and Cumulative Layout Shift) to influence page ranking. With CloudWatch RUM, you can now monitor the p75 values of web page vitals and ensure that the majority of your visitors experience optimal performance, minimizing the impact of outliers. You can also click on any point in the web vitals graph to view correlated page events, allowing you to quickly dive into event details such as browser, device, and geolocation to identify the specific conditions causing performance issues. Additionally, you can track affected users and sessions for in-depth analysis and quickly troubleshoot issues without the added steps of applying filters to retrieve correlated events in CloudWatch RUM.
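For a sense of how the p75 aggregation can also be pulled programmatically, here is a hedged sketch; the AWS/RUM namespace, metric name, and dimension key are assumptions based on the RUM documentation, so check the metrics your app monitor actually emits:

```python
import datetime
import boto3

cloudwatch = boto3.client("cloudwatch")

# Pull hourly p75 of Largest Contentful Paint for a RUM app monitor.
now = datetime.datetime.utcnow()
response = cloudwatch.get_metric_statistics(
    Namespace="AWS/RUM",
    MetricName="WebVitalsLargestContentfulPaint",
    Dimensions=[{"Name": "application_name", "Value": "my-web-app"}],  # hypothetical
    StartTime=now - datetime.timedelta(days=1),
    EndTime=now,
    Period=3600,
    ExtendedStatistics=["p75"],
)
for point in response["Datapoints"]:
    print(point["Timestamp"], point["ExtendedStatistics"]["p75"])
```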

These enhancements are available in all regions where CloudWatch RUM is available at no additional cost to users.

See documentation to learn more about the feature, or see user guide or AWS One Observability Workshop to get started with real user monitoring using CloudWatch RUM.

Read more


Easily troubleshoot NodeJS applications with Amazon CloudWatch Application Signals

Today, AWS announces the general availability of NodeJS applications monitoring on Amazon CloudWatch Application Signals, an OpenTelemetry (OTel) compatible application performance monitoring (APM) feature in CloudWatch. Application Signals simplifies the process of automatically tracking application performance against key business or service level objectives (SLOs) for AWS applications. Service operators can access a pre-built, standardized dashboard for AWS application metrics through Application Signals.

Customers already use Application Signals to monitor their Java, Python, and .NET applications deployed on EKS, EC2, and other platforms. With this release, they can now easily onboard and troubleshoot issues in their NodeJS applications with no additional code. NodeJS application developers can quickly triage current operational health and determine whether their applications are meeting their longer-term performance goals. Customers can ensure high availability of their NodeJS applications through Application Signals’ easy navigation flow, starting with an alert for a service level indicator (SLI) that has gone unhealthy and deep diving from there to an error or a spike in the auto-generated graphs for application metrics (latency/errors/requests). In a single-pane-of-glass view, they can correlate application metrics with traces, application logs, and infrastructure metrics to troubleshoot issues with their application in a few clicks.

Application Signals is available in all commercial AWS Regions except the CA West (Calgary) and Asia Pacific (Malaysia) Regions, the AWS GovCloud (US) Regions, and the China Regions. For pricing, see Amazon CloudWatch pricing.

To learn more, see documentation to enable Amazon CloudWatch Application Signals for Amazon EKS, Amazon EC2, native Kubernetes and custom instrumentation for other platforms.

Read more


Announcing Amazon CloudWatch Metrics support in AWS End User Messaging

Today, AWS announces general availability support for 10 new Amazon CloudWatch metrics in AWS End User Messaging for the SMS and MMS channel. AWS End User Messaging provides developers with a scalable and cost-effective messaging infrastructure without compromising the safety, security, or results of their communications.

You can now use CloudWatch metrics to monitor SMS and MMS message performance. The new metrics allow you to track the number of messages sent and delivered, message feedback rates such as one-time passcode conversions, and messages blocked by SMS protect. Customers can use CloudWatch Metrics Insights to graph and identify trends in real time and monitor those trends directly in the AWS End User Messaging console or in Amazon CloudWatch.

To learn more, visit the AWS End User Messaging SMS User Guide.
 

Read more


Amazon CloudWatch launches Observability Solutions for AWS Services and Workloads on AWS

Observability solutions help you get up and running faster with infrastructure and application monitoring at AWS. They are intended for developers who need opinionated guidance about the best options for observing AWS services, custom applications, and third-party workloads. Observability solutions include working examples of instrumentation, telemetry collection, custom dashboards, and metric alarms.

Using observability solutions, you can select from a catalog of available solutions that deliver focused observability guidance for AWS services and common workloads such as Java Virtual Machine (JVM), Apache Kafka, Apache Tomcat, or NGINX. Solutions cover monitoring tasks including installing and configuring the Amazon CloudWatch agent, deploying pre-defined custom dashboards, and setting metric alarms. Observability solutions also include guidance about observability features such as Detailed Monitoring metrics for infrastructure, Container Insights for container monitoring, and Application Signals for monitoring applications. Solutions are available for Amazon CloudWatch and Amazon Managed Service for Prometheus. Observability solutions can be deployed as-is or customized to suit specific use cases, with options for enabling features or configuring deployments based on workload needs.

Observability solutions are available in all commercial regions.

To get started with observability solutions, navigate to the observability solutions page in the CloudWatch console.

Read more


Split cost allocation data for Amazon EKS now supports metrics from Amazon CloudWatch Container Insights

Starting today, you can use the CPU and memory metrics collected by Amazon CloudWatch Container Insights for your Amazon Elastic Kubernetes Service (EKS) clusters in split cost allocation data for Amazon EKS, giving you granular Kubernetes pod-level costs in AWS Cost and Usage Reports (CUR). This provides more granular cost visibility for clusters that run multiple application containers on shared EC2 instances, enabling better allocation of your EKS clusters' shared costs.

To enable this feature, you need to enable Container Insights with enhanced observability for Amazon EKS. You can use either the Amazon CloudWatch Observability EKS add-on or the Amazon CloudWatch Observability Helm chart to install the CloudWatch agent and the Fluent Bit agent on an Amazon EKS cluster. You also need to enable split cost allocation data for Amazon EKS in the AWS Billing and Cost Management console and choose Amazon CloudWatch as the metrics source. Once the feature is enabled, pod-level usage data will be available in CUR within 24 hours.
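For the add-on route, the install can be as small as the following boto3 sketch; the cluster name is hypothetical, and the add-on name matches the EKS add-on catalog entry for CloudWatch Observability:

```python
import boto3

eks = boto3.client("eks")

# Install the CloudWatch Observability add-on, which deploys the
# CloudWatch agent and Fluent Bit on the cluster.
eks.create_addon(
    clusterName="my-eks-cluster",            # hypothetical cluster name
    addonName="amazon-cloudwatch-observability",
)
```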

This feature is available in all AWS Regions where split cost allocation data for Amazon EKS is available. To get started, visit Understanding split cost allocation data. To learn more about Container Insights product and pricing, visit Container Insights and Amazon CloudWatch Pricing.

Read more


Application Signals now supports burn rate for application performance goals

Amazon CloudWatch Application Signals, an application performance monitoring (APM) feature in CloudWatch, makes it easy to automatically instrument and track application performance against your most important business or service level objectives (SLOs). Customers can now receive alerts when these SLOs reach a critical burn rate. This new feature calculates how quickly your service is consuming its error budget relative to the SLO's attainment goal. Burn rate metrics provide a clear indication of whether you're meeting, exceeding, or at risk of failing your SLO goals.

Today, with burn rate metrics, you can configure CloudWatch alarms to notify you automatically when your error budget consumption exceeds specified thresholds. This allows for proactive management of service reliability, empowering your teams to take prompt action to achieve long-term performance targets. By setting multiple alarms with varying look-back windows, you can identify sudden error rate spikes and gradual shifts that could affect your error budget.
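To make the arithmetic concrete, here is a minimal worked sketch of the burn-rate calculation described above (plain Python, not an AWS API call):

```python
# Burn rate = observed error rate / error budget rate, where the error
# budget rate is 1 - SLO attainment goal. A burn rate of 1.0 consumes
# the budget exactly as fast as the SLO period allows.

def burn_rate(error_rate: float, attainment_goal: float) -> float:
    budget = 1.0 - attainment_goal
    return error_rate / budget

# With a 99.9% availability goal, a 0.5% error rate over the look-back
# window burns the error budget 5x faster than sustainable:
print(burn_rate(error_rate=0.005, attainment_goal=0.999))  # 5.0
```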

Burn rates are available in all Regions where Application Signals is generally available: 28 commercial AWS Regions, excluding the CA West (Calgary) and Asia Pacific (Malaysia) Regions. For pricing, see Amazon CloudWatch pricing. See the SLO documentation to learn more, or refer to the user guide and AWS One Observability Workshop to get started with Application Signals.

Read more


Configure Route 53 CIDR block rules based on Internet Monitor suggestions

With Amazon CloudWatch Internet Monitor’s new traffic optimization suggestions feature, you can configure your Amazon Route 53 CIDR blocks to map your application’s client users to an optimal AWS Region based on network behavior.

Internet Monitor now provides actionable suggestions to help you optimize your Route 53 IP-based routing configurations. By leveraging the new traffic insights for your application, you can easily identify the optimal AWS Regions for routing your end user traffic, and then configure your Route 53 IP-based routing based on these recommendations.

Internet Monitor collects performance data and measures latency for your client subnets behind each DNS resolver. This enables Internet Monitor to recommend the AWS Region that will provide the lowest latency for your users, based on their locations, so that you can fine-tune your DNS routing to provide the best performance for users.
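A hedged sketch of acting on such a suggestion with the Route 53 CIDR collection APIs follows; the collection name, location name, and CIDR block are hypothetical stand-ins for the subnets and Regions your monitor actually recommends:

```python
import uuid
import boto3

route53 = boto3.client("route53")

# Create a CIDR collection to hold client subnets.
collection = route53.create_cidr_collection(
    Name="client-subnets",
    CallerReference=str(uuid.uuid4()),
)

# Map a client subnet to a named location (values hypothetical).
route53.change_cidr_collection(
    Id=collection["Collection"]["Id"],
    Changes=[
        {
            "LocationName": "us-west-2-clients",
            "Action": "PUT",
            "CidrList": ["203.0.113.0/24"],
        }
    ],
)
# An IP-based routing record set can then reference "us-west-2-clients"
# and point those clients at your us-west-2 endpoint.
```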

To learn more, visit the CloudWatch Internet Monitor user guide.

Read more


amazon-cloudwatch-logs

Amazon CloudWatch Logs launches the ability to transform and enrich logs

Amazon CloudWatch Logs announces log transformation and enrichment to improve log analytics at scale with a consistent, context-rich format. Customers can add structure to their logs using pre-configured templates for common AWS services such as AWS Web Application Firewall (WAF) and Route 53, or build custom transformers with native parsers such as Grok. Customers can also rename existing attributes and add additional metadata to their logs, such as accountId and region.

Logs emitted from various sources vary widely in format and attribute names, which makes analysis across sources cumbersome. With today’s launch, customers can simplify their log analytics experience by transforming all their logs into a standardized JSON structure. Transformed logs can accelerate analytics using field indexes and discovered fields in CloudWatch Logs Insights, and provide flexibility in alarming using metric filters and in forwarding via subscription filters. Customers can manage log transformations natively within CloudWatch without needing to set up complex pipelines.

Log transformation and enrichment is available in all AWS commercial Regions and is included with the existing Standard log class ingestion price. Log storage (archival) costs will be based on log size after transformation, which may exceed the original log volume. With a few clicks in the Amazon CloudWatch console, customers can configure transformers at the log group level. Alternatively, customers can set up transformers at the account or log group level using the AWS Command Line Interface (AWS CLI), AWS CloudFormation, AWS Cloud Development Kit (AWS CDK), and AWS SDKs. Read the documentation to learn more about this capability.
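As a sketch only, a transformer that parses JSON logs and enriches them with metadata might look like the following. The processor names and option shapes here are assumptions based on this announcement, and the log group is hypothetical; check the PutTransformer API reference for the exact configuration schema:

```python
import boto3

logs = boto3.client("logs")

# Attach a transformer: parse the message as JSON, then add metadata keys.
logs.put_transformer(
    logGroupIdentifier="my-application-logs",  # hypothetical log group
    transformerConfig=[
        {"parseJSON": {}},
        {
            "addKeys": {
                "entries": [
                    {"key": "region", "value": "us-east-1"},
                    {"key": "accountId", "value": "111122223333"},
                ]
            }
        },
    ],
)
```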
 

Read more


Amazon CloudWatch launches full visibility into application transactions

AWS announces the general availability of an enhanced search and analytics experience in CloudWatch Application Signals. This feature empowers developers and on-call engineers with complete visibility into application transaction spans, which are the building blocks of distributed traces that capture detailed interactions between users and various application components.

This feature offers three core benefits. First, developers can answer questions related to application performance or end-user impact through an interactive visual editor and enhancements to Logs Insights queries. They can correlate spans with end-user issues using attributes like customer name or order number, and with the new JSON parse and unnest functions in Logs Insights, they can link transactions to business events such as failed payments and troubleshoot them. Second, developers can diagnose rarely occurring issues, such as p99 latency spikes in APIs, with the enhanced troubleshooting capabilities in Amazon CloudWatch Application Signals that correlate application metrics with comprehensive transaction spans. Finally, CloudWatch Logs offers advanced features for transaction spans, including data masking, forwarding via subscription filters, and metric extraction. You can enable these capabilities for existing spans sent to X-Ray or by sending spans to a new OTLP (OpenTelemetry Protocol) endpoint for traces. This allows you to enhance your observability while maintaining flexibility in your setup.

You can search and analyze spans in all Regions where Application Signals is available. A new pricing option is also available, encompassing Application Signals, X-Ray traces, and complete visibility into transaction spans; see Amazon CloudWatch pricing. Refer to documentation for more details.
 

Read more


Amazon CloudWatch Logs announces field indexes and enhanced log group selection in Logs Insights

Amazon CloudWatch Logs introduces field indexes and enhanced log group selection to accelerate log analysis. You can now index critical log attributes like requestId and transactionId to improve query performance by scanning only the relevant indexed data. This means faster troubleshooting and easier identification of trends. You can create up to 20 field indexes per log group, and once defined, all future logs matching the defined fields remain indexed for up to 30 days. Additionally, CloudWatch Logs Insights now supports querying up to 10,000 log groups across one or more accounts linked via cross-account observability.

Customers using field indexes will benefit from faster query execution times when searching across vast amounts of logs. CloudWatch Logs Insights queries using “filter field = value” syntax automatically leverage indexes when available. Combined with enhanced log group selection, customers can now gain faster insights across a much larger set of logs in Logs Insights. Customers can select up to 10,000 log groups via either a log group prefix or the "All" log groups option. To further optimize query performance and costs, customers can use the new "filterIndex" command to limit queries to indexed data only.

Field indexes are available in all AWS Regions where CloudWatch Logs is available and are included as part of standard log class ingestion at no additional cost.

To get started, define an index policy at the account level or per log group in the AWS console, or programmatically via the API or CLI. See documentation to learn more about field indexes.
 

Read more


amazon-cognito

AWS Amplify introduces passwordless authentication with Amazon Cognito

AWS Amplify is excited to announce support for Amazon Cognito's new passwordless authentication features, enabling developers to implement secure sign-in methods using SMS one-time passwords, email one-time passwords, and WebAuthn passkeys in their applications with Amplify client libraries for JavaScript, Swift, and Android. This update simplifies the implementation of passwordless authentication flows, addressing the growing demand for more secure and user-friendly login experiences while reducing the risks associated with traditional password-based systems.

This new capability enhances application security and user experience by eliminating the need for traditional passwords, reducing the risk of credential-based attacks while streamlining the login process. Passwordless authentication is ideal for organizations aiming to strengthen security and increase user adoption across various sectors, including e-commerce, finance, and healthcare. By removing the frustration of remembering complex passwords, this feature can significantly improve user engagement and simplify account management for both users and organizations.

The passwordless authentication feature is now available in all AWS regions where Amazon Cognito is supported, enabling developers worldwide to leverage this functionality in their applications.

To get started with passwordless authentication in AWS Amplify, visit the AWS Amplify documentation for detailed guides and examples.

Read more


Announcing new feature tiers: Essentials and Plus for Amazon Cognito

Amazon Cognito launches new user pool feature tiers: Essentials and Plus. The Essentials tier offers comprehensive and flexible user authentication and access control features, allowing customers to implement secure, scalable, and customized sign-up and sign-in experiences for their application within minutes. It supports password-based log-in, multi-factor authentication (email, SMS, TOTP), and log-in with social identity providers, along with recently announced Managed Login and passwordless log-in (passkeys, email, SMS) features. Essentials also supports customizing access tokens and disallowing password reuse. The Plus tier is geared toward customers with elevated security needs for their applications by offering threat protection capabilities against suspicious log-ins. Plus includes all Essentials features and additionally supports risk-based adaptive authentication, compromised credentials detection, and exporting user authentication event logs to analyze threat signals.

Essentials will be the default tier for new user pools created by customers. Customers also have the flexibility to switch between all available tiers at any time based on their application needs. For existing user pools, customers can enable the new tiers or continue using their current user pool configurations without making any changes. Customers using advanced security features (ASF) in Amazon Cognito should consider the Plus tier, which includes all ASF capabilities, additional capabilities such as passwordless log-in, and up to 60% savings compared to using ASF.
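For illustration, selecting a tier at pool creation might look like the minimal boto3 sketch below. The UserPoolTier parameter and its values are assumptions based on this announcement, and the pool name is hypothetical; verify against the Cognito API reference:

```python
import boto3

cognito = boto3.client("cognito-idp")

# Create a user pool on the Plus tier; Essentials is the default
# for new pools when the parameter is omitted.
cognito.create_user_pool(
    PoolName="my-user-pool",   # hypothetical pool name
    UserPoolTier="PLUS",       # or "ESSENTIALS" / "LITE"
)
```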

The Essentials and Plus tiers are available at new pricing. Essentials and Plus are available in all AWS Regions where Amazon Cognito is available except AWS GovCloud (US) Regions.

To learn more, refer to:

Read more


Amazon Cognito introduces Managed Login to support rich branding for end user journeys

Amazon Cognito introduces Managed Login, a fully-managed, hosted sign-in and sign-up experience that customers can personalize to align with their company or application branding. Amazon Cognito provides millions of users with secure, scalable, and customizable sign-up and sign-in experiences. With Managed Login, Cognito customers can now use its no-code visual editor to customize the look and feel of the user journey from signup and login to password recovery and multi-factor authentication.

Managed Login helps customers offload the undifferentiated heavy lifting of designing and maintaining custom implementations such as passwordless authentication and localization. For example, Managed Login offers pre-built integrations for passwordless login, including sign-in with passkeys, email, or text message. This provides customers the flexibility to implement low-friction and secure authentication methods without the need to author custom code. With Managed Login, customers now design and manage their end-user sign-up and sign-in experience through the AWS Management Console. Additionally, Cognito has also revamped its getting started experience with application-specific (e.g., for web applications) guidance for customers to swiftly configure their user pools. Together with Managed Login and a simplified getting started experience, customers can now get their applications to end users faster than ever before with Amazon Cognito.

Managed Login is offered as part of the Cognito Essentials tier and can be used in all AWS Regions where Amazon Cognito is available except the AWS GovCloud (US) Regions. To get started, refer to:

Read more


Amazon Cognito now supports passwordless authentication for low-friction and secure logins

Amazon Cognito now allows you to secure user access to your applications with passwordless authentication, including sign-in with passkeys, email, and text message. Passkeys are based on FIDO standards and use public key cryptography, which enables strong, phishing-resistant authentication. With passwordless authentication, you can reduce the friction associated with traditional password-based authentication and thus simplify the log-in experience for your users. For example, if your users choose to use passkeys to log in, they can do so using a built-in authenticator, such as Touch ID on Apple MacBooks and Windows Hello facial recognition on PCs.

Amazon Cognito provides millions of users with secure, scalable, and customizable sign-up and sign-in experiences within minutes. With this launch, AWS is now extending the support for passwordless authentication to the applications you build. This enables your end-users to log in to your applications with a low-friction and secure approach.
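A hedged sketch of an email one-time-password sign-in with the Cognito API follows. The USER_AUTH flow name, the PREFERRED_CHALLENGE parameter, and the challenge response keys are assumptions based on this announcement; confirm them in the Cognito API reference before building on them:

```python
import boto3

cognito = boto3.client("cognito-idp")

# Start the choice-based sign-in flow, preferring an email OTP challenge.
resp = cognito.initiate_auth(
    ClientId="app-client-id",  # hypothetical app client ID
    AuthFlow="USER_AUTH",
    AuthParameters={
        "USERNAME": "jane@example.com",
        "PREFERRED_CHALLENGE": "EMAIL_OTP",
    },
)

# The user receives a code by email; answer the challenge with it.
cognito.respond_to_auth_challenge(
    ClientId="app-client-id",
    ChallengeName="EMAIL_OTP",
    Session=resp["Session"],
    ChallengeResponses={
        "USERNAME": "jane@example.com",
        "EMAIL_OTP_CODE": "12345678",  # code entered by the user
    },
)
```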

Passwordless authentication is offered as part of the Cognito Essentials tier and can be used in all AWS Regions where Amazon Cognito is available except the AWS GovCloud (US) Regions. To get started, see the following resources:

Read more


amazon-connect

Amazon Connect Contact Lens now automatically categorizes your contacts using generative AI

Amazon Connect Contact Lens now provides you with the ability to automatically categorize your contacts using generative AI, making it easy to identify top drivers, customer experience, and agent behavior for your contacts. You can provide criteria to categorize contacts in natural language (e.g., did the customer try to make a payment on their balance?). Contact Lens then automatically labels contacts that meet the match criteria, and provides relevant points from the conversation. In addition, you can receive alerts and generate tasks on categorized contacts, and search for contacts using the automated labels. This feature helps supervisors easily categorize contacts for scenarios such as identifying customer interest in specific products, assessing customer satisfaction, monitoring whether agents exhibited professional behavior on calls, and more.

This feature is supported in the English language and is available in two AWS Regions: US East (N. Virginia) and US West (Oregon). To learn more, please visit our documentation and our webpage. This feature is included within the Contact Lens conversational analytics price at no additional cost. For information about Contact Lens pricing, please visit our pricing page.

Read more


Amazon Connect launches AI guardrails for Amazon Q in Connect

Amazon Q in Connect, a generative AI powered assistant for customer service, now enables customers to natively configure AI guardrails to implement safeguards based on their use cases and responsible AI policies. Contact center administrators can configure company-specific guardrails for Amazon Q in Connect to filter harmful and inappropriate responses, redact sensitive personal information, and limit incorrect information in the responses due to potential large language model (LLM) hallucination.

For end-customer self-service scenarios, guardrails can be used to ensure Amazon Q in Connect responses are constrained to only company-related topics and maintain professional communication standards. Additionally, when agents leverage Amazon Q in Connect to help solve customer issues, these guardrails can prevent accidental exposure of personally identifiable information (PII) to agents. Contact center administrators will have the flexibility to configure these guardrails and selectively apply them to different contact types.

For region availability, please see the availability of Amazon Connect features by Region. To learn more about Amazon Q in Connect, please visit the website or see the help documentation.
 

Read more


Amazon Connect launches new intraday forecast dashboards

Amazon Connect now allows you to compare intraday forecasts against previously published forecasts, review projected daily performance, and receive predictions for effective staffing, all available within the Amazon Connect Contact Lens dashboards. With intraday forecasts, you receive updates every 15 minutes with predictions for rest-of-day contact volumes, average queue answer time, average handle time, and, now, effective staffing. These forecasts allow you to take proactive actions to improve customer wait time and service level. For example, contact center managers can now track agent utilization at the queue level, enabling them to identify potential imbalances or staffing shortages and take action before wait times are impacted.

This feature is available in all AWS Regions where Amazon Connect forecasting, capacity planning, and agent scheduling are available. To learn more, see the Amazon Connect Administrator Guide.

Read more


Amazon Connect launches AI assistant for customer segments and trigger-based campaigns

Amazon Connect now offers new capabilities to proactively engage your customers in a personalized manner. These features help non-technical business users create customer segments using prompts and drive trigger-based campaigns to deliver timely, relevant communications to the right audiences.

Use the new segment AI assistant in Amazon Connect Customer Profiles to build audiences using natural language queries and receive recommendations based on trends in your customer data. Identify segments such as customers with an increase in support cases over the last quarter, or who have reduced purchases in the last month, using easy-to-use prompts. Use new trigger-based campaigns in Amazon Connect outbound campaigns, driven by real-time customer events, to proactively send outbound communications in just a few clicks. Engage customers with timely, relevant communications via their preferred channels, responding instantly to behaviors such as abandoned shopping carts or frequent visits to specific help pages.

With Amazon Connect Customer Profiles and Amazon Connect outbound campaigns, you pay as you go only for customer profiles utilized daily, outbound campaign processing, and associated channel usage. Both features of Amazon Connect are available in the US East (N. Virginia), US West (Oregon), Asia Pacific (Sydney), Canada (Central), Europe (Frankfurt), and Europe (London) AWS Regions. In addition, the segment AI assistant is available in the Asia Pacific (Seoul), Asia Pacific (Tokyo), and Asia Pacific (Singapore) AWS Regions, with trigger-based campaigns also available in the Africa (Cape Town) AWS Region. To learn more, visit our webpages for Customer Profiles and for outbound campaigns.

Read more


Amazon Connect now provides the ability to record audio during IVR and other automated interactions

Amazon Connect now enables you to record audio when your customers engage with self-service interactive voice response (IVR) and other automated interactions. On the Contact details page, you can listen to the recording or review logs, which include information such as the bot transcription or touch-tone menu selection. Recording settings can be configured using the “Set recording and analytics behavior” block in the Amazon Connect drag-and-drop workflow designer, allowing you to easily specify which portions of the experience to record, for example pausing and resuming recordings before and after sensitive exchanges such as when a customer shares their credit card or social security number. These new capabilities make it easy for you to monitor and audit the quality of your self-service experiences or to record interactions for compliance or policy purposes.

These features are available in all AWS regions where Amazon Connect is available. To learn more, see the Amazon Connect Administrator Guide and Amazon Connect API Reference. To learn more about Amazon Connect, the AWS contact center as a service solution on the cloud, please visit the Amazon Connect website.

Read more


Amazon Connect Contact Lens now supports external voice

Amazon Connect now integrates with other voice systems for real-time and post-call analytics, so you can use Amazon Connect Contact Lens with your existing voice system to help improve customer experience and agent performance.

Amazon Connect Contact Lens provides call recordings, conversational analytics (including contact transcript, sensitive data redaction, content categorization, theme detection, sentiment analysis, real-time alerts, and post-contact summary), and agent performance evaluations (including evaluation forms, automated evaluation, and supervisor review) with a rich user experience to display, search, and filter customer interactions, and programmatic access to data streams and the data lake. If you are an existing Amazon Connect customer, you can expand your use of Contact Lens to other voice systems for consistent analytics in a single data warehouse. If you want to migrate your contact center to Amazon Connect, you can start with Contact Lens analytics and performance insights before migrating your agents.

Contact Lens supports external voice in the US East (N. Virginia) and US West (Oregon) AWS Regions.

To learn more about Amazon Connect and call transfers, review the following resources:

Read more


Amazon Connect now supports external voice transfers

Amazon Connect now integrates with other voice systems to directly transfer voice calls and metadata without using the public telephone network. You can use Amazon Connect telephony and Interactive Voice Response (IVR) with your existing voice systems to help improve customer experience and reduce costs.

Amazon Connect IVR provides conversational voice bots in 30+ languages with natural language processing, automated speech recognition, and text-to-speech to help personalize customer service, provide self-service for complex tasks, and collect information to reduce agent handling time. Now, you can use Amazon Connect to modernize the IVR experience of your existing contact center and your enterprise and branch voice systems. Additionally, enterprises migrating their contact center to Amazon Connect can start with Connect telephony and IVR for immediate modernization ahead of agent migration.

External voice transfer is available in the US East (N. Virginia) and US West (Oregon) AWS Regions.

To learn more about Amazon Connect and call transfers, review the following resources:

Read more


Amazon Connect Contact Lens launches built-in dashboards to analyze conversational AI bot performance

Amazon Connect Contact Lens now offers built-in dashboards to monitor the performance of your conversational AI bots, making it easy for you to analyze and continuously improve your self-service and automated experiences. From the Contact Lens flows performance dashboard, you can view Amazon Lex and Amazon Q in Connect bot analytics, including how your customers communicate their issues, the most common contact reasons, and the outcomes of the interaction. From the dashboard, you can navigate to the bot management page and make updates in a couple of clicks to improve bot accuracy. These new capabilities make it easy for you to analyze the performance of your conversational AI experiences, all within the Connect web UI.

These features are available in all AWS regions where Amazon Connect and Amazon Lex are available. To learn more about these metrics and the flows performance dashboard, see the Amazon Connect Administrator Guide and Amazon Connect API Reference. To learn more about Amazon Connect, the AWS contact center as a service solution on the cloud, please visit the Amazon Connect website.

Read more


Amazon Connect launches simplified conversational AI bot creation

Amazon Connect now makes it as easy as a few clicks for you to create, edit, and continuously improve conversational AI bots for interactive voice response (IVR) and chatbot self-service experiences. Now, you can configure and design your bots (powered by Amazon Lex) directly from the Connect web UI, allowing you to deliver dynamic, conversational AI experiences to understand your customer’s intent, ask follow-on questions, and automate resolution of their issues.

By using Amazon Connect’s drag-and-drop workflow designer, you can enhance your bots with Amazon Connect Customer Profiles, making it easy to deliver personalized experiences with no code. For example, you can upgrade your touch-tone menu (e.g., Press 1 for Account Support) with a bot that greets your customer by name, proactively offers to help them pay an upcoming bill, and presents additional support options. In a few clicks, you can also customize and launch the Connect widget to further enhance your customer’s digital experience. These new bot building capabilities in Amazon Connect make it easy for you to create and launch bot-powered self-service experiences by reducing the need to manage multiple applications or custom integrations.

To learn more, refer to our public documentation. This new feature is available in all AWS regions where Amazon Connect and Amazon Lex are available. To learn more about Amazon Connect, the AWS contact center as a service solution on the cloud, please visit the Amazon Connect website.

Read more


Amazon Connect Contact Lens now automates agent performance evaluations using generative AI

Amazon Connect Contact Lens now provides you with the ability to use generative AI to automatically fill and submit agent performance evaluations. Managers can now specify their evaluation criteria in natural language, use generative AI to automate evaluations of any or all of their agents’ customer interactions, and get aggregated agent performance insights across cohorts of agents over time. You are also provided with context and justification for the automated evaluations, along with references to specific points in the conversation for agent coaching. This launch provides managers with automated evaluations of additional agent behaviors (e.g., was the agent able to resolve the customer’s issue?), enabling managers to comprehensively monitor and improve regulatory compliance, agent adherence to quality standards, and sensitive data collection, while reducing the time spent on evaluating agent performance.

This feature is supported in the English language and is available in 8 AWS Regions: US East (N. Virginia), US West (Oregon), Europe (Frankfurt), Europe (London), Canada (Central), Asia Pacific (Sydney), Asia Pacific (Tokyo), and Asia Pacific (Singapore). To learn more, please visit our documentation and our webpage. This feature is included within Contact Lens performance evaluations at no additional cost. For information about Contact Lens pricing, please visit our pricing page.

Read more


Amazon Connect now supports WhatsApp Business messaging

Amazon Connect now supports WhatsApp Business messaging, enabling you to deliver personalized experiences to your customers who use WhatsApp, one of the world's most popular messaging platforms, increasing customer satisfaction and reducing costs. Rich messaging features such as inline images and videos, list messages, and quick replies allow your customers to browse product recommendations, check order status, or schedule appointments.

Amazon Connect for WhatsApp Business messaging makes it easy for your customers to initiate a conversation by simply tapping on WhatsApp-enabled phone numbers or chat buttons published on your website or mobile app, or by scanning a QR code. As a result, you are able to reduce call volumes and lower operational costs by deflecting calls to chats. WhatsApp Business messaging uses the same generative AI-powered chatbots, routing, configuration, analytics, and agent experience as voice, chat, SMS, Apple Messages for Business, tasks, web calling, and email in Amazon Connect, making it easy for you to deliver seamless omnichannel customer experiences.

Amazon Connect for WhatsApp Business messaging is available in US East (N. Virginia), US West (Oregon), Europe (Frankfurt), Europe (London), and Asia Pacific (Singapore) regions.

To learn more and get started, please refer to the help documentation, pricing page, or visit the Amazon Connect website.

Read more


Amazon Connect launches generative AI-powered self-service with Amazon Q in Connect

Amazon Q in Connect, a generative-AI powered assistant for customer service, now supports end-customer self-service interactions across Interactive Voice Response (IVR) and digital channels. With this launch, businesses can augment their existing self-service experiences with generative AI capabilities to create more personalized and dynamic experiences to improve customer satisfaction and first contact resolution.

Amazon Q in Connect can directly converse with end-customers and reason over undefined intents in more ambiguous scenarios to provide customers with accurate responses. For example, Amazon Q in Connect can help end-customers complete actions such as booking trips, applying for loans, or scheduling doctor appointments. Amazon Q in Connect also supports Q&A, helping end-customers get the information they need and asking them follow-up questions to determine the right answers. If a customer requires additional support, Amazon Q in Connect provides a seamless transition to customer service agents, preserving the full conversation context to ensure a cohesive customer experience.

For region availability, please see the availability of Amazon Connect features by Region. To learn more about Amazon Q in Connect, please visit the website or see the help documentation.

Read more


AWS announces Salesforce Contact Center with Amazon Connect (Preview)

Today, AWS announces the Preview of Salesforce Contact Center with Amazon Connect, a groundbreaking offering that integrates native digital and voice capabilities into Salesforce Service Cloud, delivering a unified and streamlined experience for agents. Salesforce users can now unify and route voice, chat, email, and case management across Amazon Connect and Service Cloud capabilities, streamlining operational efficiency and enhancing customer service interactions.

With Salesforce Contact Center with Amazon Connect, companies can now seamlessly integrate their Salesforce CRM data and agent experience with Amazon Connect’s leading voice, digital channel, and routing capabilities. Salesforce users can innovate with personalized and responsive service across every touchpoint. Customers receive personalized, AI-powered self-service experiences powered by Amazon Lex across Amazon Connect voice and chat, quickly solving issues. For more complex inquiries, the seamless transition from self-service to agent-assistance connects customers to the right agent, who has a unified view of the customer’s data, issue, and interaction history in Salesforce Service Cloud. Integrated data and APIs empower agents with Contact Lens real-time voice transcripts and supervisors with call monitoring in Salesforce Service Cloud. Salesforce admins can quickly deploy and configure an integrated contact center solution in minutes with Amazon Connect voice, chat and routing of Salesforce cases.

If you’re interested in joining the preview of Salesforce Contact Center with Amazon Connect, sign up here. To learn more, visit the website.

Read more


Amazon Connect now makes it easier to collect sensitive customer data within chats

Amazon Connect now makes it easier for you to collect sensitive customer data and deliver seamless transactional experiences within chats, enhancing the overall customer experience. You can now support inline chat interactions such as processing payments, updating customer information (e.g., address changes), or collecting customer data (e.g., account details) without requiring the customer to switch channels or navigate to another page on your website.

To get started, use Amazon Connect’s No-code UI builder to create step-by-step guides with forms, enable the ‘This view has sensitive data’ option in the Show view flow block to ensure compliance with data protection and privacy standards, and use a Lambda function to send the collected customer data to any application (e.g., a payment processor).
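A minimal sketch of the Lambda function referenced above, assuming a hypothetical event shape: Amazon Connect invokes the function with the collected fields under Details.Parameters, and the handler forwards them to a downstream system (the payment processor here is a stub):

```python
def lambda_handler(event, context):
    # The keys under Details.Parameters depend on how your flow passes
    # the collected view data; adjust the field names accordingly.
    fields = event.get("Details", {}).get("Parameters", {})

    # Forward to your processor of choice (stubbed for illustration).
    result = process_payment(
        card_token=fields.get("cardToken"),
        amount=fields.get("amount"),
    )

    # Return attributes the contact flow can branch on.
    return {"paymentStatus": "approved" if result else "declined"}


def process_payment(card_token, amount):
    """Placeholder for a real payment-processor integration."""
    return bool(card_token and amount)
```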

This feature is supported in all commercial AWS regions where Amazon Connect is offered. To learn more and get started please refer to the help documentation or read the blog post.

Read more


Amazon Connect now allows agents to self-assign tasks

Amazon Connect now allows agents to create and assign a task to themselves by checking a box in the agent workspace or contact control panel (CCP). For example, an agent can schedule a follow-up action to update a customer by scheduling a task for a preferred time and checking the self-assignment option. Amazon Connect Tasks empowers you to prioritize, assign, and track all contact center agent tasks to completion, improving agent productivity and ensuring customer issues are quickly resolved.

This feature is supported in all AWS regions where Amazon Connect is offered. To learn more, see our documentation. To learn more about Amazon Connect, the easy-to-use cloud contact center, visit the Amazon Connect website.

Read more


Amazon Connect Contact Lens launches calibrations for agent performance evaluations

You can now perform calibrations to drive consistency and accuracy in how managers evaluate agent performance, so that agents receive consistent feedback. During a calibration, multiple managers evaluate the same contact using the same evaluation form. You can then review differences in the evaluations completed by different managers to align managers on evaluation best practices and identify opportunities to improve the evaluation form, e.g., rephrasing an evaluation question to be more specific so that managers answer it consistently. You can also compare managers’ answers with an approved evaluation to measure and improve manager accuracy in evaluating agent performance.

This feature is available in all regions where Contact Lens performance evaluations is already available. To learn more, please visit our documentation and our webpage. For information about Contact Lens pricing, please visit our pricing page.
 

Read more


Amazon Connect Contact Lens generative AI-powered post contact summarization is now available in 5 new regions

Amazon Connect Contact Lens generative AI-powered post-contact summarization, which condenses long customer conversations into succinct, coherent, and context-rich contact summaries (e.g., “The customer didn’t receive a reimbursement for a last minute flight cancellation and the agent didn’t offer a partial reimbursement as per the SOP”), is now available in the Europe (London), Canada (Central), Asia Pacific (Sydney), Asia Pacific (Tokyo), and Asia Pacific (Singapore) AWS Regions. Agents can access post-contact summaries within seconds after a customer call completes, helping them quickly finish their after-contact work. This also helps supervisors improve the customer experience by getting faster insights when reviewing contacts, saving time on quality and compliance reviews, and more quickly identifying opportunities to improve agent performance.

With this launch, Contact Lens generative AI-powered post-contact summarization is available in 7 AWS Regions: the 5 new Regions plus the existing US East (N. Virginia) and US West (Oregon) Regions. To learn more, please visit our documentation and our webpage. This feature is included with Contact Lens conversational analytics at no additional charge. For information about Contact Lens pricing, please visit our pricing page.

Read more


Amazon Connect now provides granular disconnect reasons for chats

The Amazon Connect contact record now includes granular disconnect reasons for chats, enabling you to improve and personalize customer experiences based on how a chat is ended. For example, if the agent disconnects due to a network issue, you can route the chat to the next best agent, or if the customer disconnects due to idleness, you can proactively send an SMS to re-engage them.
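For illustration, a hedged sketch of reacting to these reasons from the contact record Kinesis stream; the specific reason value shown is illustrative, so consult the help documentation for the full list of chat disconnect reasons:

```python
import base64
import json

# Lambda consumer for the Amazon Connect contact record Kinesis stream.
def lambda_handler(event, context):
    for record in event["Records"]:
        contact = json.loads(base64.b64decode(record["kinesis"]["data"]))
        reason = contact.get("DisconnectReason")
        if contact.get("Channel") == "CHAT" and reason == "IDLE_DISCONNECT":
            # e.g., trigger an SMS to re-engage the customer (stubbed).
            print(f"Re-engage contact {contact.get('ContactId')}")
```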

Disconnect reasons are available for chats in all AWS regions where Amazon Connect is offered. To learn more refer to the help documentation.

Read more


Amazon Connect Email is now generally available

Amazon Connect Email provides built-in capabilities that make it easy for you to prioritize, assign, and automate the resolution of customer service emails, improving customer satisfaction and agent productivity. With Amazon Connect Email, you can receive and respond to emails sent by customers to business addresses or submitted via web forms on your website or mobile app. You can configure auto-responses, prioritize emails, create or update cases, and route emails to the best available agent when agent assistance is required. Additionally, these capabilities work seamlessly with Amazon Connect outbound campaigns, enabling you to deliver proactive and personalized email communications.

To get started, configure an email address using the Amazon Connect-provided domain or integrate your own email domain using Amazon Simple Email Service (Amazon SES). Amazon Connect Email uses the same configuration, routing, analytics, and agent experience as voice, chat, SMS, tasks, and web-calling in Amazon Connect, making it easy for you to deliver seamless omnichannel customer experiences.

Amazon Connect Email is available in the US East (N. Virginia), US West (Oregon), Africa (Cape Town), Asia Pacific (Seoul), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Canada (Central), Europe (Frankfurt), and Europe (London) regions. To learn more and get started, please refer to the help documentation, pricing page, or visit the Amazon Connect website.
 

Read more


Amazon Connect now supports nine additional languages for forecasting, capacity planning, and scheduling

Amazon Connect now supports nine additional languages for forecasting, capacity planning, and scheduling. New languages now supported include: Canadian French, Chinese (Simplified and Traditional), French, German, Italian, Japanese, Korean, Portuguese (Brazilian), and Spanish.

These new languages are available in all AWS Regions where Amazon Connect forecasting, capacity planning, and scheduling are available. To learn more about Amazon Connect agent scheduling, click here.
 

Read more


Amazon Connect Contact Lens launches custom dashboards

Amazon Connect Contact Lens now supports creating custom dashboards, as well as adding or removing widgets from existing dashboards. With these dashboards, you can view and compare real-time and historical aggregated performance, trends, and insights using custom-defined time periods (e.g., week over week), summary charts, time-series charts, and more. Now, you can further customize these dashboards by changing widgets to create the view that best fits your specific business needs. For example, if you want to monitor self-service, queue, and agent performance, you can add all three types of widgets to your dashboard for a single, end-to-end view of contact center performance.

This feature is available in all commercial AWS regions where Amazon Connect is offered. To learn more about dashboards, see the Amazon Connect Administrator Guide. To learn more about Amazon Connect, the AWS cloud-based contact center, please visit the Amazon Connect website.

Read more


Amazon Connect offers new personalized and proactive engagement capabilities

Amazon Connect now offers a set of new capabilities to help you proactively address customer needs before they become potential issues, enabling better customer outcomes. You can initiate proactive outbound communications for real-time service updates, promotional offers, product usage tips, and appointment reminders at just the right moments throughout your customer’s experience, through the right channel. Use Amazon Connect Customer Profiles to define target segments that are dynamically updated based on real-time customer behaviors, including orders from point-of-sale systems, location data from mobile apps, appointments from scheduling systems, or interactions from websites. Use Amazon Connect outbound campaigns to configure outbound communications in just a few clicks and engage customers with timely, personalized communications via their preferred channels, including voice calls, SMS, or email. Visualize campaign performance using dashboards from Amazon Connect Analytics, ensuring clarity and effectiveness in your proactive customer engagement strategies.

With Amazon Connect Customer Profiles and Amazon Connect outbound campaigns, you pay as you go only for the customer profiles used daily, for outbound campaign processing, and for associated channel usage. Both features of Amazon Connect are available in US East (N. Virginia), US West (Oregon), Africa (Cape Town), Asia Pacific (Sydney), Canada (Central), Europe (Frankfurt), and Europe (London). To learn more, visit our webpages for Customer Profiles and for outbound campaigns.

Read more


Amazon Connect launches support for callbacks when using Chats and Tasks

Amazon Connect now enables you to request callbacks from Chats and Tasks in addition to voice calls. For example, if a customer reaches out after hours when no agent is available, they can request a callback by sending a chat message or completing a webform request (via Tasks). Callbacks allow end-customers to get a call from an available agent during normal business hours, without requiring them to stay on the line.

This feature is supported in all AWS regions where Amazon Connect is offered. To learn more, see our documentation. To learn more about Amazon Connect, the easy-to-use cloud contact center, visit the Amazon Connect website.

Read more


amazon-datazone

Data Lineage is now generally available in Amazon DataZone and next generation of Amazon SageMaker

AWS announces general availability of Data Lineage in Amazon DataZone and next generation of Amazon SageMaker, a capability that automatically captures lineage from AWS Glue and Amazon Redshift to visualize lineage events from source to consumption. Being OpenLineage compatible, this feature allows data producers to augment the automated lineage with lineage events captured from OpenLineage-enabled systems or through the API, to provide a comprehensive data movement view to data consumers.

This feature automates lineage capture of schema and transformations of data assets and columns from AWS Glue, Amazon Redshift, and Spark executions, maintaining consistency and reducing errors. With built-in automation, domain administrators and data producers can automate the capture and storage of lineage events when data is configured for sharing in the business data catalog. Data consumers can gain confidence in an asset's origin from the comprehensive view of its lineage, while data producers can assess the impact of changes to an asset by understanding its consumption. Additionally, the data lineage feature versions lineage with each event, enabling users to visualize lineage at any point in time or compare transformations across an asset's or job's history. This historical lineage provides a deeper understanding of how data has evolved, which is essential for troubleshooting, auditing, and validating the integrity of data assets.
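
As an illustration of the OpenLineage compatibility, the following minimal Python (boto3) sketch posts a custom OpenLineage run event to a domain; it assumes the DataZone PostLineageEvent API accepts an OpenLineage RunEvent serialized as JSON, and the domain ID, job, and dataset names are placeholders.

import json
import boto3

datazone = boto3.client("datazone")

run_event = {
    "eventType": "COMPLETE",
    "eventTime": "2024-12-01T00:00:00Z",
    "run": {"runId": "a1b2c3d4-1111-2222-3333-444455556666"},
    "job": {"namespace": "example-pipeline", "name": "nightly-load"},
    "inputs": [{"namespace": "example", "name": "raw_orders"}],
    "outputs": [{"namespace": "example", "name": "curated_orders"}],
    "producer": "https://example.com/openlineage/producer",
    "schemaURL": "https://openlineage.io/spec/1-0-5/OpenLineage.json",
}

datazone.post_lineage_event(
    domainIdentifier="dzd_example1234",  # placeholder domain ID
    event=json.dumps(run_event),
)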

The data lineage feature is generally available in all AWS Regions where Amazon DataZone and next generation of Amazon SageMaker are available.

To learn more, visit Amazon DataZone and next generation of Amazon SageMaker.
 

Read more


Amazon DataZone now enhances data access governance with enforced metadata rules

Amazon DataZone now supports enforced metadata rules for data access workflows, providing organizations with enhanced capabilities to strengthen governance and compliance with their organizational needs. This new feature allows domain owners to define and enforce mandatory metadata requirements, ensuring data consumers provide essential information when requesting access to data assets in Amazon DataZone. By streamlining metadata governance, this capability helps organizations meet compliance standards, maintain audit readiness, and simplify access workflows for greater efficiency and control.

With enforced metadata rules, domain owners can establish consistent governance practices across all data subscriptions. For example, financial services organizations can mandate specific compliance-related metadata when data consumers request access to sensitive financial data. Similarly, healthcare providers can enforce metadata requirements to align with regulatory standards for patient data access. This feature simplifies the approval process by guiding data consumers through completing mandatory fields and enabling data owners to make informed decisions, ensuring data access requests meet organizational policies.

The feature is supported in all the AWS commercial regions where Amazon DataZone is currently available.

Check out this blog and video to learn more about how to set up metadata rules for subscription workflows. Get started with the technical documentation.

Read more


Amazon SageMaker now provides a new setup experience for Amazon DataZone projects

Amazon SageMaker now provides a new setup experience for Amazon DataZone projects, making it easier for customers to govern access to data and machine learning (ML) assets. With this capability, administrators can now set up Amazon DataZone projects by importing their existing authorized users, security configurations, and policies from Amazon SageMaker domains.

Today, Amazon SageMaker customers use domains to organize lists of authorized users and a variety of security, application, policy, and Amazon Virtual Private Cloud configurations. With this launch, administrators can now accelerate the process of setting up governance for data and ML assets in Amazon SageMaker. They can import users and configurations from existing SageMaker domains to Amazon DataZone projects, mapping SageMaker users to corresponding Amazon DataZone project members. This enables project members to search, discover, and consume ML and data assets within Amazon SageMaker capabilities such as Studio, Canvas, and notebooks. Also, project members can publish these assets from Amazon SageMaker to the DataZone business catalog, enabling other project members to discover and request access to them.

This capability is available in all Amazon Web Services regions where Amazon SageMaker and Amazon DataZone are currently available. To get started, see the Amazon SageMaker administrator guide.

Read more


Amazon DataZone introduces semantic search in its business data catalog

Amazon DataZone now supports meaning-based semantic search in its business data catalog, enhancing how data users search and discover assets. With this new capability, users can search by concept and related terms, in addition to the existing keyword-based search. Amazon DataZone is a data management service for customers to catalog, discover, share, and govern data at scale across organizational boundaries with governance and access controls.

As data users look to solve their analytics use cases, they start their journey with a search of the business data catalog to understand what data is available. With this launch, users can discover related datasets in Amazon DataZone based on the intent of the user’s query. For example, a search for “profit” now returns data assets related to sales, costs, and revenue, in addition to assets matching the keyword “profit”. This significantly improves the relevance and quality of the search results and helps support the desired analytics use case. Amazon DataZone’s semantic search feature is powered by a GenAI search engine, which uses an embedded language model to generate sparse vectors that enrich assets with semantically related terms.
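
Programmatically, catalog search goes through the SearchListings API, where the new semantic matching applies to the query text. The following minimal Python (boto3) sketch assumes a placeholder domain ID, and reads the response fields defensively since the exact listing shape varies by asset type.

import boto3

datazone = boto3.client("datazone")

response = datazone.search_listings(
    domainIdentifier="dzd_example1234",  # placeholder domain ID
    searchText="profit",  # also surfaces related concepts such as sales or revenue
    maxResults=10,
)

for item in response.get("items", []):
    listing = item.get("assetListing", {})
    print(listing.get("name"), "-", listing.get("description"))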

Semantic search is available in all AWS Regions where Amazon DataZone is available.

To learn more, visit Amazon DataZone and get started using the guide in documentation.

Read more


Amazon DataZone updates pricing and removes the user-level subscription fee

Today, Amazon DataZone has announced updates to its pricing, which make the service more accessible and cost-effective for customers. Customers will no longer be charged a monthly subscription fee for every configured user. Instead, Amazon DataZone now offers a pay-as-you-go model, where you are charged only for the resources you use. Additionally, DataZone has reduced the price for metadata storage from $0.417 per GB to $0.40 per GB. Finally, Amazon DataZone has also introduced free access to some of the core DataZone APIs that power key user experiences such as creating and managing domains, blueprints, and projects.

These price updates are part of Amazon's ongoing commitment to providing flexible, transparent, and cost-effective data management and data governance capabilities to customers. Customers can now scale their usage without being constrained by per-user costs, making the service accessible to a wider user base.

These pricing changes will be applicable starting Nov 1, 2024 in all AWS Regions where Amazon DataZone is available, including: US East (Ohio), US East (N. Virginia), US West (Oregon), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Asia Pacific (Seoul), Canada (Central), Europe (Frankfurt), Europe (Ireland), Europe (Stockholm), Europe (London), and South America (São Paulo).

Visit Amazon DataZone’s pricing page for more details.
 

Read more


Amazon DataZone Achieves HITRUST Certification

Amazon DataZone has achieved HITRUST certification, demonstrating it meets the requirements established by the Health Information Trust Alliance Common Security Framework (HITRUST CSF) for managing sensitive health data, as required by healthcare and life sciences customers.

This certification includes the testing of over 600 controls derived from multiple security frameworks such as ISO 27001 and NIST 800-53r5, providing a comprehensive set of baseline security and privacy controls. The 2024 AWS HITRUST certification is now available to AWS customers through AWS Artifact in the AWS Management Console. Customers can leverage the certification to meet applicable controls via HITRUST’s Inheritance Program as defined under the HITRUST Shared Responsibility Matrix (SRM).

Amazon DataZone is a data management service that makes it faster and easier for customers to catalog, discover, share, and govern data between data producers and consumers within their organization. For more information about Amazon DataZone and how to get started, refer to our product page and review the Amazon DataZone technical documentation.
 

Read more


amazon-documentdb

AWS Backup now supports resource type and multiple tag selections in backup policies

Today, AWS Backup announces additional options to assign resources to a backup policy in AWS Organizations. Customers can now select specific resources by resource type and exclude them based on resource type or tag. They can also use a combination of multiple tags within the same resource selection.

With additional options to select resources, customers can implement flexible backup strategies across their organizations by combining multiple resource types and/or tags. They can also exclude resources they do not want to back up using resource type or tag, optimizing cost on non-critical resources.

To get started, use your AWS Organizations' management account to create or edit an AWS Backup policy. Then, create or modify a resource selection using the AWS Organizations' API, CLI, or JSON editor in either the AWS Organizations or AWS Backup console.

AWS Backup support for enhanced resource selection in backup policies is available in all commercial regions where AWS Backup’s cross account management is available. For more information, visit our documentation and launch blog.

Read more


amazon-dynamodb

Amazon DynamoDB zero-ETL integration with Amazon SageMaker Lakehouse

Amazon DynamoDB zero-ETL integration with Amazon SageMaker Lakehouse automates the extracting and loading of data from a DynamoDB table into SageMaker Lakehouse, an open and secure lakehouse. You can run analytics and machine learning workloads on your DynamoDB data using SageMaker Lakehouse, without impacting production workloads running on DynamoDB. With this launch, you now have the option to enable analytics workloads using SageMaker Lakehouse, in addition to the previously available Amazon OpenSearch Service and Amazon Redshift zero-ETL integrations.

Using the no-code interface, you can maintain an up-to-date replica of your DynamoDB data in the data lake by quickly setting up your integration to handle the complete process of replicating data and updating records. This zero-ETL integration reduces the complexity and operational burden of data replication to let you focus on deriving insights from your data. You can create and manage integrations using the AWS Management Console, the AWS Command Line Interface (AWS CLI), or the SageMaker Lakehouse APIs.

DynamoDB zero-ETL integration with SageMaker Lakehouse is now available in the US East (N. Virginia), US East (Ohio), US West (Oregon), Asia Pacific (Tokyo), Asia Pacific (Hong Kong), Asia Pacific (Singapore), Asia Pacific (Sydney), Europe (Stockholm), Europe (Frankfurt), and Europe (Ireland) AWS Regions. 

To learn more, visit DynamoDB integrations and read the documentation.

Read more


Amazon DynamoDB global tables previews multi-Region strong consistency

Starting today in preview, Amazon DynamoDB global tables now supports multi-Region strong consistency. DynamoDB global tables is a fully managed, serverless, multi-Region, and multi-active database used by tens of thousands of customers. With this new capability, you can now build highly available multi-Region applications with a Recovery Point Objective (RPO) of zero, achieving the highest level of resilience. 

Multi-Region strong consistency ensures your applications can always read the latest version of data from any Region in a global table, removing the undifferentiated heavy lifting of managing consistency across multiple Regions. It is useful for building global applications with strict consistency requirements, such as user profile management, inventory tracking, and financial transaction processing. 
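
For example, with multi-Region strong consistency enabled on a global table, a strongly consistent read from any replica Region returns the latest committed write. Below is a minimal Python (boto3) sketch; the table name, key, and Regions are placeholders.

import boto3

# Write in one Region...
use1 = boto3.client("dynamodb", region_name="us-east-1")
use1.put_item(
    TableName="orders",
    Item={"pk": {"S": "order#42"}, "status": {"S": "SHIPPED"}},
)

# ...then read the latest version from another replica Region.
usw2 = boto3.client("dynamodb", region_name="us-west-2")
response = usw2.get_item(
    TableName="orders",
    Key={"pk": {"S": "order#42"}},
    ConsistentRead=True,  # strongly consistent read across Regions
)
print(response.get("Item"))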

The preview of DynamoDB global tables with multi-Region strong consistency is available in the following Regions: US East (N. Virginia), US East (Ohio), and US West (Oregon). DynamoDB global tables with multi-Region strong consistency is billed according to existing global tables pricing. To learn more about global tables multi-Region strong consistency, see the preview documentation. For information about DynamoDB global tables, see the global tables information page and the developer guide.  

Read more


Amazon DynamoDB announces general availability of attribute-based access control

Amazon DynamoDB is a serverless, NoSQL, fully managed database with single-digit millisecond performance at any scale. Today, we are announcing the general availability of attribute-based access control (ABAC) support for tables and indexes in all AWS Commercial Regions and the AWS GovCloud (US) Regions. ABAC is an authorization strategy that lets you define access permissions based on tags attached to users, roles, and AWS resources. Using ABAC with DynamoDB helps you simplify permission management with your tables and indexes as your applications and organizations scale.

ABAC uses tag-based conditions in your AWS Identity and Access Management (IAM) policies or other policies to allow or deny specific actions on your tables or indexes when IAM principals’ tags match the tags for the tables. Using tag-based conditions, you can also set more granular access permissions based on your organizational structures. ABAC automatically applies your tag-based permissions to new employees and changing resource structures, without rewriting policies as organizations grow.
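
A minimal Python (boto3) sketch of the pattern: an inline IAM policy that allows DynamoDB actions only when the caller's "project" principal tag matches the table's "project" resource tag. The role name, account ID, and tag key are placeholders.

import json
import boto3

iam = boto3.client("iam")

policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["dynamodb:GetItem", "dynamodb:PutItem", "dynamodb:Query"],
            "Resource": "arn:aws:dynamodb:*:123456789012:table/*",
            "Condition": {
                "StringEquals": {
                    "aws:ResourceTag/project": "${aws:PrincipalTag/project}"
                }
            },
        }
    ],
}

iam.put_role_policy(
    RoleName="app-role",  # placeholder role
    PolicyName="dynamodb-abac",
    PolicyDocument=json.dumps(policy),
)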

There is no additional cost to use ABAC. You can get started with ABAC using the AWS Management Console, AWS API, AWS CLI, AWS SDK, or AWS CloudFormation. Learn more at Using attribute-based access control with DynamoDB.

Read more


Amazon DynamoDB reduces prices for on-demand throughput and global tables

Amazon DynamoDB is a serverless, NoSQL, fully managed database with single-digit millisecond performance at any scale. Starting today, we have made Amazon DynamoDB even more cost-effective by reducing prices for on-demand throughput by 50% and global tables by up to 67%.

DynamoDB on-demand mode offers a truly serverless experience with pay-per-request pricing and automatic scaling without the need for capacity planning. Many customers prefer the simplicity of on-demand mode to build modern, serverless applications that can start small and scale to millions of requests per second. While on-demand was previously cost effective for spiky workloads, with this pricing change, most provisioned capacity workloads on DynamoDB will achieve a lower price with on-demand mode. This pricing change is transformative as it makes on-demand the default and recommended mode for most DynamoDB workloads.

Global tables provide a fully managed, multi-active, multi-Region data replication solution that delivers increased resiliency, improved business continuity, and 99.999% availability for globally distributed applications at any scale. DynamoDB has reduced pricing for multi-Region replicated writes to match the pricing of single-Region writes, simplifying cost modeling for multi-Region applications. For on-demand tables, this price change lowers replicated write pricing by 67%, and for tables using provisioned capacity, replicated write pricing has been reduced by 33%.

These pricing changes took effect on November 1, 2024 in all AWS Regions and are automatically reflected in your AWS bill. To learn more about the new price reductions, see the AWS Database Blog, or visit the Amazon DynamoDB Pricing page.
 

Read more


Amazon DynamoDB introduces warm throughput for tables and indexes

Amazon DynamoDB now supports a new warm throughput value and the ability to easily pre-warm DynamoDB tables and indexes. The warm throughput value provides visibility into the number of read and write operations your DynamoDB tables can readily handle, while pre-warming lets you proactively increase the value to meet future traffic demands.

DynamoDB automatically scales to support workloads of virtually any size. However, when you have peak events like product launches or shopping events, request rates can surge 10x or even 100x in a short period of time. You can now check your tables’ warm throughput value to assess if your table can handle large traffic spikes for peak events. If you expect an upcoming peak event to exceed the current warm throughput value for a given table, you can pre-warm that table in advance of the peak event to ensure it scales instantly to meet demand.
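
A minimal Python (boto3) sketch of checking and pre-warming a table; it assumes the WarmThroughput parameter on UpdateTable and the corresponding field in DescribeTable output, and the table name and values are placeholders.

import boto3

ddb = boto3.client("dynamodb")

# Inspect the table's current warm throughput value.
desc = ddb.describe_table(TableName="orders")
print(desc["Table"].get("WarmThroughput"))  # assumed response field

# Pre-warm the table ahead of an expected peak event.
ddb.update_table(
    TableName="orders",
    WarmThroughput={  # assumed parameter shape
        "ReadUnitsPerSecond": 150000,
        "WriteUnitsPerSecond": 50000,
    },
)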

Warm throughput values are available for all provisioned and on-demand tables and indexes at no cost. Pre-warming your table's throughput incurs a charge. See Amazon DynamoDB Pricing page for pricing details. This capability is now available in all AWS commercial Regions. See the Developer Guide to learn more.

Read more


Amazon DynamoDB announces user experience enhancements to organize your tables in the AWS GovCloud (US) Regions

Amazon DynamoDB now enables customers to easily find frequently used tables in the DynamoDB console in the AWS GovCloud (US) Regions. Customers can favorite their tables in the console’s tables page for quicker table access.

Customers can click the favorites icon to view their favorited tables in the console’s tables page. With this update, customers have a faster and more efficient way to find and work with tables that they often monitor, manage, and explore.

Customers can start using favorite tables at no additional cost. Get started with creating a DynamoDB table from the AWS Management Console.

Read more


AWS announces support for a new Apache Flink connector for Amazon DynamoDB

Today, AWS announced support for a new Apache Flink connector for Amazon DynamoDB. The new connector, contributed by AWS to the Apache Flink open source project, adds Amazon DynamoDB Streams as a new source for Apache Flink. You can now process DynamoDB streams events with Apache Flink, a popular framework and engine for processing and analyzing streaming data.

Amazon DynamoDB is a serverless, NoSQL database service that enables you to develop modern applications at any scale. DynamoDB Streams provides a time-ordered sequence of item-level changes (insert, update, and delete) in a DynamoDB table. With Amazon Managed Service for Apache Flink, you can transform and analyze DynamoDB streams data in real time using Apache Flink and integrate applications with other AWS services such as Amazon S3, Amazon OpenSearch, Amazon Managed Streaming for Apache Kafka, and more. Apache Flink connectors are software components that move data into and out of an Amazon Managed Service for Apache Flink application. You can use the new connector to read data from a DynamoDB stream starting with Apache Flink version 1.19. With Amazon Managed Service for Apache Flink there are no servers and clusters to manage, and there is no compute and storage infrastructure to set up.

The Apache Flink repo for AWS connectors can be found here. For detailed documentation and setup instructions, visit our Documentation Page.

Read more


amazon-ebs-snapshots-archive

Amazon EBS now supports detailed performance statistics on EBS volume health

Today, Amazon announced the availability of detailed performance statistics for Amazon Elastic Block Store (EBS) volumes. This new capability provides you with real-time visibility into the performance of your EBS volumes, making it easier to monitor the health of your storage resources and take actions sooner.

With detailed performance statistics, you can access 11 metrics at up to a per-second granularity to monitor input/output (I/O) statistics of your EBS volumes, including driven I/O and I/O latency histograms. The granular visibility provided by these metrics helps you quickly identify and proactively troubleshoot application performance bottlenecks that may be caused by factors such as reaching an EBS volume's provisioned IOPS or throughput limits, enabling you to enhance application performance and resiliency.

Detailed performance statistics for EBS volumes are available by default for all EBS volumes attached to a Nitro-based EC2 instance in all AWS Commercial, China, and the AWS GovCloud (US) Regions, at no additional charge.

To get started with EBS detailed performance statistics, please visit the documentation here to learn more about the available metrics and how to access them using NVMe-CLI tools.

Read more


amazon-ec2

Amazon EC2 Hpc6id instances are now available in Europe (Paris) region

Starting today, Amazon EC2 Hpc6id instances are available in the Europe (Paris) Region. These instances are optimized to efficiently run memory bandwidth-bound, data-intensive high performance computing (HPC) workloads, such as finite element analysis and seismic reservoir simulations. With EC2 Hpc6id instances, you can lower the cost of your HPC workloads while taking advantage of the elasticity and scalability of AWS.

EC2 Hpc6id instances are powered by 64 cores of 3rd Generation Intel Xeon Scalable processors with an all-core turbo frequency of 3.5 GHz, 1,024 GB of memory, and up to 15.2 TB of local NVMe solid state drive (SSD) storage. EC2 Hpc6id instances, built on the AWS Nitro System, offer 200 Gbps Elastic Fabric Adapter (EFA) networking for high-throughput inter-node communications that enable your HPC workloads to run at scale. The AWS Nitro System is a rich collection of building blocks that offloads many of the traditional virtualization functions to dedicated hardware and software. It delivers high performance, high availability, and high security while reducing virtualization overhead.

To learn more about EC2 Hpc6id instances, see the product detail page.

Read more


Amazon EC2 Hpc7a instances are now available in Europe (Paris) region

Starting today, Amazon EC2 Hpc7a instances are available in the Europe (Paris) Region. EC2 Hpc7a instances are powered by 4th generation AMD EPYC processors with up to 192 cores, and 300 Gbps of Elastic Fabric Adapter (EFA) network bandwidth for fast and low-latency internode communications. Hpc7a instances feature Double Data Rate 5 (DDR5) memory, which enables high-speed access to data in memory.

Hpc7a instances are ideal for compute-intensive, tightly coupled, latency-sensitive high performance computing (HPC) workloads, such as computational fluid dynamics (CFD), weather forecasting, and multiphysics simulations, helping you scale more efficiently on fewer nodes. To optimize HPC instances networking for tightly coupled workloads, you can access these instances in a single Availability Zone within a Region.

To learn more, see Amazon Hpc7a instances.

Read more


Amazon EC2 Trn2 instances are generally available

Today, AWS announces the general availability of Amazon Elastic Compute Cloud (Amazon EC2) Trn2 instances and preview of Trn2 UltraServers, powered by AWS Trainium2 chips. Available via EC2 Capacity Blocks, Trn2 instances and UltraServers are the most powerful EC2 compute solutions for deep learning and generative AI training and inference.

You can use Trn2 instances to train and deploy the most demanding foundation models, including large language models (LLMs), multi-modal models, diffusion transformers, and more, to build a broad set of AI applications. To reduce training times and deliver breakthrough response times (per-token latency) for the most capable, state-of-the-art models, you might need more compute and memory than a single instance can deliver. Trn2 UltraServers are a completely new EC2 offering that uses NeuronLink, a high-bandwidth, low-latency fabric, to connect 64 Trainium2 chips across 4 Trn2 instances into one node, unlocking unparalleled performance. For inference, UltraServers help deliver industry-leading response times to create the best real-time experiences. For training, UltraServers boost model training speed and efficiency with faster collective communication for model parallelism compared to standalone instances.

Trn2 instances feature 16 Trainium2 chips to deliver up to 20.8 petaflops of FP8 compute, 1.5 TB of high bandwidth memory with 46 TB/s of memory bandwidth, and 3.2 Tbps of EFA networking. Trn2 UltraServers feature 64 Trainium2 chips to deliver up to 83.2 petaflops of FP8 compute, 6 TB of total high bandwidth memory with 185 TB/s of total memory bandwidth, and 12.8 Tbps of EFA networking. Both are deployed in EC2 UltraClusters to provide non-blocking, petabit scale-out capabilities for distributed training. Trn2 instances are generally available in the trn2.48xlarge size in the US East (Ohio) AWS Region through EC2 Capacity Blocks for ML.

To learn more about Trn2 instances and to request access to Trn2 UltraServers, please visit the Trn2 instances page.

Read more


Amazon EC2 P5en instances, optimized for generative AI and HPC, are generally available

Today, AWS announces the general availability of Amazon Elastic Compute Cloud (Amazon EC2) P5en instances, powered by the latest NVIDIA H200 Tensor Core GPUs. These instances deliver the highest performance in Amazon EC2 for deep learning and high performance computing (HPC) applications.

You can use Amazon EC2 P5en instances for training and deploying increasingly complex large language models (LLMs) and diffusion models powering the most demanding generative AI applications. You can also use P5en instances to deploy demanding HPC applications at scale in pharmaceutical discovery, seismic analysis, weather forecasting, and financial modeling.

P5en instances feature up to 8 H200 GPUs, which have 1.7x the GPU memory size and 1.5x the GPU memory bandwidth of the H100 GPUs featured in P5 instances. P5en instances pair the H200 GPUs with high performance custom 4th Generation Intel Xeon Scalable processors, enabling Gen5 PCIe between CPU and GPU, which provides up to 4x the CPU-to-GPU bandwidth and boosts AI training and inference performance. With up to 3,200 Gbps of third-generation EFA networking using Nitro v5, P5en instances show up to 35% lower latency than P5 instances, which use the previous generations of EFA and Nitro. This helps improve collective communications performance for distributed training workloads such as deep learning, generative AI, real-time data processing, and high performance computing (HPC) applications. To address customer needs for large scale at low latency, P5en instances are deployed in Amazon EC2 UltraClusters and provide market-leading scale-out capabilities for distributed training and tightly coupled HPC workloads.

P5en instances are now available in the US East (Ohio), US West (Oregon), and Asia Pacific (Tokyo) AWS Regions and US East (Atlanta) Local Zone us-east-1-atl-2a in the p5en.48xlarge size.

To learn more about P5en instances, see Amazon EC2 P5en Instances.

Read more


Announcing Amazon EC2 I8g instances

AWS is announcing the general availability of Amazon Elastic Compute Cloud (Amazon EC2) storage optimized I8g instances. I8g instances offer the best performance in Amazon EC2 for storage-intensive workloads. I8g instances are powered by AWS Graviton4 processors that deliver up to 60% better compute performance compared to previous generation I4g instances. I8g instances use the latest third-generation AWS Nitro SSDs, local NVMe storage that delivers up to 65% better real-time storage performance per TB, up to 50% lower storage I/O latency, and up to 60% lower storage I/O latency variability. These instances are built on the AWS Nitro System, which offloads CPU virtualization, storage, and networking functions to dedicated hardware and software, enhancing the performance and security of your workloads.

I8g instances offer instance sizes up to 24xlarge, 768 GiB of memory, and 22.5 TB of instance storage. They are ideal for real-time applications like relational databases, non-relational databases, streaming databases, search, and data analytics.

I8g instances are available in the following AWS Regions: US East (N. Virginia) and US West (Oregon).

To learn more, see Amazon EC2 I8g instances. To explore how to migrate your workloads to Graviton-based instances, see AWS Graviton Fast Start program and Porting Advisor for Graviton. To get started, see the AWS Management Console, AWS Command Line Interface (AWS CLI), and AWS SDKs.

Read more


Amazon Web Services announces declarative policies

Today, AWS announces the general availability of declarative policies, a new management policy type within AWS Organizations. These policies simplify the way customers enforce durable intent, such as baseline configuration for AWS services within their organization. For example, customers can configure EC2 to allow instance launches using AMIs vended by specific providers and block public access in their VPC with a few simple clicks or commands for their entire organization using declarative policies.

Declarative policies are designed to prevent actions that are non-compliant with the policy. The configuration defined in the declarative policy is maintained even when services add new APIs or features, or when customers add new principals or accounts to their organization. With declarative policies, governance teams have access to the account status report, which provides insight into the current configuration for an AWS service across their organization. This helps them assess readiness to enforce configuration at scale. Administrators can provide additional transparency to end users by configuring custom error messages through declarative policies to redirect them to internal wikis or ticketing systems.

To get started, navigate to the AWS Organizations console to create and attach declarative policies. You can also use AWS Control Tower, the AWS CLI, or CloudFormation templates to configure these policies. Declarative policies today support EC2, EBS, and VPC configurations, with support for other services coming soon. To learn more, see the documentation and blog post.
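
As a sketch of the SDK path, the following Python (boto3) example creates and attaches a declarative policy for EC2; the DECLARATIVE_POLICY_EC2 policy type matches this launch, but the policy content shown is an illustrative schema rather than the authoritative syntax, and the root ID is a placeholder.

import json
import boto3

org = boto3.client("organizations")

policy_content = {
    "ec2_attributes": {  # illustrative schema; see the declarative policies docs
        "image_block_public_access": {
            "state": {"@@assign": "block_new_sharing"}
        }
    }
}

policy = org.create_policy(
    Name="ec2-baseline-configuration",
    Description="Baseline EC2 configuration for the organization",
    Type="DECLARATIVE_POLICY_EC2",
    Content=json.dumps(policy_content),
)

org.attach_policy(
    PolicyId=policy["Policy"]["PolicySummary"]["Id"],
    TargetId="r-examplerootid",  # placeholder organization root ID
)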

Read more


Introducing Amazon EC2 next generation high density Storage Optimized I7ie instances

Amazon Web Services is announcing general availability for next generation high density Storage Optimized I7ie instances. Designed for large storage I/O intensive workloads, I7ie instances are powered by 5th generation Intel Xeon Scalable processors with an all-core turbo frequency of 3.2 GHz, offering up to 40% better compute performance and 20% better price performance over existing I3en instances. I7ie instances have the highest local NVMe storage density in the cloud for storage optimized instances and offer up to twice as many vCPUs and memory compared to prior generation instances. Powered by 3rd generation AWS Nitro SSDs, I7ie instances deliver up to 65% better real-time storage performance, up to 50% lower storage I/O latency, and 65% lower storage I/O latency variability compared to I3en instances.

I7ie instances are high density storage optimized instances, ideal for workloads that require fast local storage with high random read/write performance and very consistent low latency when accessing large data sets. I7ie instances also deliver 40% better compute performance, allowing you to run more complex queries without increasing the storage density per vCPU. Additionally, the 16 KB torn write prevention feature enables customers to eliminate performance bottlenecks.

I7ie instances deliver up to 100 Gbps of network bandwidth and 60 Gbps of bandwidth for Amazon Elastic Block Store (Amazon EBS).

I7ie instances are available in the US East (N. Virginia) AWS Region today. Customers can use these instances with On-Demand and Savings Plans purchase options. To learn more, visit the I7ie instances page.

Read more


AWS Marketplace now offers EC2 Image Builder components from independent software vendors

AWS Marketplace now offers EC2 Image Builder components from independent software vendors (ISVs), helping you streamline your Amazon Machine Image (AMI) build processes. You can find and subscribe to Image Builder components from ISVs in AWS Marketplace or in the Image Builder console, and incorporate the components into your golden images through Image Builder. AWS Marketplace offers a catalog of Image Builder components from ISVs to help address the monitoring, security, governance, and compliance needs of your organization.

Previously, consolidating software from ISVs into golden images required you to go through a time-consuming procurement process and write custom code, resulting in unnecessary overhead. With the addition of Image Builder components in AWS Marketplace, you can now find, subscribe to, and incorporate software components from ISVs into your golden images on AWS. You can also configure your Image Builder pipelines to automatically update golden images as the latest version of components get released in AWS Marketplace, helping to keep your systems current and eliminating the need for custom code. You can continue sharing golden images within your organization by distributing the entitlements for subscribed components across AWS accounts. Your organization can then use the same golden images, maintaining your security and governance standards.

To learn more, access documentation for AWS Marketplace or EC2 Image Builder. Visit AWS Marketplace to view all supported EC2 Image Builder components, including software from popular providers such as Datadog, Dynatrace, Insight Technology, Inc., Fortinet, OpenVPN Inc, SIOS Technology Corp., Cisco, KeyFactor, Datamasque, Grafana, Kong, Wiz and more.

Read more


AWS simplifies the use of third-party block storage arrays with AWS Outposts

Starting today, customers can attach block data volumes backed by NetApp® on-premises enterprise storage arrays and Pure Storage® FlashArray™ to Amazon Elastic Compute Cloud (Amazon EC2) instances on AWS Outposts directly from the AWS Management Console. This makes it easier for customers to leverage third-party storage with Outposts. Outposts is a fully managed service that extends AWS infrastructure, AWS services, APIs, and tools to virtually any on-premises or edge location for a truly consistent hybrid experience.

With this enhancement, Outpost customers can combine the cloud capabilities offered by Outposts with advanced data management features, high density storage, and high performance offered by NetApp on-premises enterprise storage arrays and Pure Storage FlashArray. Today, customers can use Amazon Elastic Block Store (Amazon EBS) and Local Instance Store volumes to store and process data locally and comply with data residency requirements. Now, with this enhancement, they can do so while leveraging the external volumes backed by compatible third-party storage. By leveraging the new enhancement, customers can maximize value from their existing storage investments, while benefiting from the cloud operational model enabled by Outposts.

This enhancement is available on Outposts racks and Outposts 2U servers at no additional charge in all AWS Regions where Outposts is available, except the AWS GovCloud Regions. See the FAQs for Outposts servers and Outposts racks for the latest availability information.

You can use the AWS Management Console or CLI to attach the third-party block data volumes to Amazon EC2 instances on Outposts. To learn more, check out this blog post.

Read more


Amazon EC2 introduces Allowed AMIs to enhance AMI governance

Amazon EC2 introduces Allowed AMIs, a new account-wide setting that enables you to limit the discovery and use of Amazon Machine Images (AMIs) within your AWS accounts. You can now simply specify the AMI owner accounts or AMI owner aliases permitted within your account, and only AMIs from these owners will be visible and available to you to launch EC2 instances.

Prior to today, you could use any AMI explicitly shared with your account or any public AMI, regardless of its origin or trustworthiness, putting you at risk of accidentally using an AMI that didn’t meet your organization's compliance requirements. Now with Allowed AMIs, your administrators can specify the accounts or owner aliases whose AMIs are permitted for discovery and use within your AWS environment. This streamlined approach provides guardrails to reduce the risk of inadvertently using non-compliant or unauthorized AMIs. Allowed AMIs also supports an audit-mode functionality to identify EC2 instances launched using AMIs not permitted by this setting, helping you identify non-compliant instances before the setting is applied. You can apply this setting across AWS Organizations and Organizational Units using Declarative Policies, allowing you to manage and enforce this setting at scale.

The Allowed AMIs setting applies only to public AMIs and AMIs explicitly shared with your AWS accounts. By default, this setting is disabled for all AWS accounts. You can enable it by using the AWS CLI, SDKs, or Console. To learn more, please visit our documentation.
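
A minimal Python (boto3) sketch of enabling the setting in audit mode first; it assumes the ReplaceImageCriteriaInAllowedImagesSettings, EnableAllowedImagesSettings, and GetAllowedImagesSettings APIs, and the trusted account ID is a placeholder.

import boto3

ec2 = boto3.client("ec2")

# Restrict discoverable AMIs to Amazon-provided images and one trusted account.
ec2.replace_image_criteria_in_allowed_images_settings(
    ImageCriteria=[{"ImageProviders": ["amazon", "123456789012"]}]
)

# Start in audit mode to surface non-compliant instances before enforcing.
ec2.enable_allowed_images_settings(AllowedImagesSettingsState="audit-mode")

print(ec2.get_allowed_images_settings())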

Read more


Amazon EC2 R7g instances are now available in AWS Middle East (Bahrain) region

Starting today, Amazon Elastic Compute Cloud (Amazon EC2) R7g instances are available in the Middle East (Bahrain) region. These instances are powered by AWS Graviton3 processors that provide up to 25% better compute performance compared to AWS Graviton2 processors, and are built on the AWS Nitro System, a collection of AWS designed innovations that deliver efficient, flexible, and secure cloud services with isolated multi-tenancy, private networking, and fast local storage.

Amazon EC2 Graviton3 instances also use up to 60% less energy than comparable EC2 instances for the same performance, reducing your cloud carbon footprint. For increased scalability, these instances are available in 9 different instance sizes, including bare metal, and offer up to 30 Gbps networking bandwidth and up to 20 Gbps of bandwidth to Amazon Elastic Block Store (Amazon EBS).

To learn more, see Amazon EC2 R7g. To explore how to migrate your workloads to Graviton-based instances, see AWS Graviton Fast Start program and Porting Advisor for Graviton. To get started, see the AWS Management Console.

Read more


Amazon EC2 C7g instances are now available in additional regions

Starting today, Amazon Elastic Compute Cloud (Amazon EC2) C7g instances are available in the Europe (Paris) and Asia Pacific (Osaka) Regions. These instances are powered by AWS Graviton3 processors that provide up to 25% better compute performance compared to AWS Graviton2 processors, and are built on the AWS Nitro System, a collection of AWS designed innovations that deliver efficient, flexible, and secure cloud services with isolated multi-tenancy, private networking, and fast local storage.

Amazon EC2 Graviton3 instances also use up to 60% less energy than comparable EC2 instances for the same performance, reducing your cloud carbon footprint. For increased scalability, these instances are available in 9 different instance sizes, including bare metal, and offer up to 30 Gbps networking bandwidth and up to 20 Gbps of bandwidth to Amazon Elastic Block Store (Amazon EBS).

To learn more, see Amazon EC2 C7g. To explore how to migrate your workloads to Graviton-based instances, see AWS Graviton Fast Start program and Porting Advisor for Graviton. To get started, see the AWS Management Console.
 

Read more


Amazon EC2 R8g instances now available in AWS Asia Pacific (Mumbai)

Starting today, Amazon Elastic Compute Cloud (Amazon EC2) R8g instances are available in AWS Asia Pacific (Mumbai) region. These instances are powered by AWS Graviton4 processors and deliver up to 30% better performance compared to AWS Graviton3-based instances. Amazon EC2 R8g instances are ideal for memory-intensive workloads such as databases, in-memory caches, and real-time big data analytics. These instances are built on the AWS Nitro System, which offloads CPU virtualization, storage, and networking functions to dedicated hardware and software to enhance the performance and security of your workloads.

AWS Graviton4-based Amazon EC2 instances deliver the best performance and energy efficiency for a broad range of workloads running on Amazon EC2. AWS Graviton4-based R8g instances offer larger instance sizes with up to 3x more vCPU (up to 48xlarge) and memory (up to 1.5TB) than Graviton3-based R7g instances. These instances are up to 30% faster for web applications, 40% faster for databases, and 45% faster for large Java applications compared to AWS Graviton3-based R7g instances. R8g instances are available in 12 different instance sizes, including two bare metal sizes. They offer up to 50 Gbps enhanced networking bandwidth and up to 40 Gbps of bandwidth to the Amazon Elastic Block Store (Amazon EBS).

To learn more, see Amazon EC2 R8g Instances. To explore how to migrate your workloads to Graviton-based instances, see AWS Graviton Fast Start program and Porting Advisor for Graviton. To get started, see the AWS Management Console.

Read more


Amazon EC2 Capacity Blocks now supports instant start times and extensions

Today, Amazon Web Services announces three new features for Amazon Elastic Compute Cloud (Amazon EC2) Capacity Blocks for ML that enable you to get near-instantaneous access to GPU and ML chip instances through Capacity Blocks, extend the durations of your Capacity Blocks, and reserve Capacity Blocks for longer periods of up to six months. With these new features, you have more options to provision GPU and ML chip capacity to meet your machine learning (ML) workload needs.

With Capacity Blocks, you can reserve GPU and ML chip capacity in cluster sizes of one to 64 instances (512 GPUs, or 1,024 Trainium chips), giving you the flexibility to run a wide variety of ML workloads. Starting today, you can provision Capacity Blocks that begin in just minutes, enabling you to quickly access GPU and ML chip capacity. You can also extend your Capacity Block when your ML job takes longer than you anticipated, ensuring uninterrupted access to capacity. Finally, for projects that require GPU or ML chip capacity for longer durations, you can now provision Capacity Blocks for up to six months, allowing you to get capacity for just the amount of time you need.
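
A minimal Python (boto3) sketch of finding and purchasing a Capacity Block; the instance type, count, and dates are placeholders, and real code would compare the returned offerings before purchasing.

from datetime import datetime, timedelta, timezone
import boto3

ec2 = boto3.client("ec2")

start = datetime.now(timezone.utc) + timedelta(minutes=30)
offerings = ec2.describe_capacity_block_offerings(
    InstanceType="p5.48xlarge",
    InstanceCount=4,
    CapacityDurationHours=48,
    StartDateRange=start,
    EndDateRange=start + timedelta(days=7),
)

# Purchase the first matching offering (placeholder selection logic).
offering_id = offerings["CapacityBlockOfferings"][0]["CapacityBlockOfferingId"]
ec2.purchase_capacity_block(
    CapacityBlockOfferingId=offering_id,
    InstancePlatform="Linux/UNIX",
)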

EC2 Capacity Blocks are available for P5e, P5, P4d, and Trn1 instances in US East (N. Virginia), US East (Ohio), US West (Oregon), Asia Pacific (Tokyo), and Asia Pacific (Melbourne). See the User Guide for a detailed breakdown of instance availability by region.

To learn more, see the Amazon EC2 Capacity Blocks for ML User Guide.

Read more


Request future dated Amazon EC2 Capacity Reservations

Today, we are announcing that you can request Amazon EC2 Capacity Reservations to start on a future date. Capacity Reservations provide assurance for your critical workloads by allowing you to reserve compute capacity in a specific Availability Zone. You can now create Capacity Reservations that start on a future date, enabling you to secure capacity for your future needs and providing peace of mind for critical future scaling events.

You can create future dated Capacity Reservations by specifying the capacity you need, start date, and the minimum duration you commit to use the reservation. Once EC2 approves the request, your reservation will be scheduled to go active on the chosen start date and upon activation, you can immediately launch instances.
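
A minimal Python (boto3) sketch of requesting a future-dated reservation; it assumes the StartDate and CommitmentDuration parameters on CreateCapacityReservation (with the duration in seconds), and the instance details are placeholders.

from datetime import datetime, timedelta, timezone
import boto3

ec2 = boto3.client("ec2")

ec2.create_capacity_reservation(
    InstanceType="m7i.4xlarge",
    InstancePlatform="Linux/UNIX",
    AvailabilityZone="us-east-1a",
    InstanceCount=100,
    StartDate=datetime.now(timezone.utc) + timedelta(days=30),  # future start date
    CommitmentDuration=14 * 24 * 3600,  # assumed: minimum commitment, in seconds
)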

This new capability is available to all Capacity Reservations customers in all AWS commercial regions, AWS China regions, and the AWS GovCloud (US) Regions at no additional cost. To learn more about these features, please refer to the Capacity Reservations user guide.

Read more


Announcing static stability for Amazon EC2 instances backed by EC2 instance store on AWS Outposts

AWS Outposts now offers static stability for Amazon EC2 instances backed by EC2 instance store. This enables automatic recovery for workloads running on such EC2 instances from power failures or reboots, even when the connection to the parent AWS Region is temporarily unavailable. This means Outposts servers and Outposts racks can recover faster from power outages, minimizing downtime and data loss.

Outposts provides a consistent hybrid experience by bringing AWS services to customer premises and edge locations on fully managed AWS infrastructure. While Outposts typically runs connected to an AWS Region for resource management, access control, and software updates, the new static stability feature enables workloads running on EC2 instances backed by EC2 instance store to recover from power failures even when connectivity to the AWS Region is unavailable. Note that this capability is currently not available for EC2 instances backed by Amazon EBS volumes.

This capability is available in all AWS Regions where Outposts is supported. Check out the Outposts servers FAQs page and the Outposts rack FAQs page for the full list of supported Regions.

To get started, no customer specific action is required. Static stability is now enabled for all EC2 instances backed by EC2 instance store.

Read more


Amazon EC2 G6e instances now available in additional regions

Starting today, Amazon EC2 G6e instances powered by NVIDIA L40S Tensor Core GPUs are available in the Asia Pacific (Tokyo) and Europe (Frankfurt, Spain) Regions. G6e instances can be used for a wide range of machine learning and spatial computing use cases. G6e instances deliver up to 2.5x better performance compared to G5 instances and up to 20% lower inference costs than P4d instances.

Customers can use G6e instances to deploy large language models (LLMs) with up to 13B parameters and diffusion models for generating images, video, and audio. Additionally, the G6e instances will unlock customers’ ability to create larger, more immersive 3D simulations and digital twins for spatial computing workloads. G6e instances feature up to 8 NVIDIA L40S Tensor Core GPUs with 384 GB of total GPU memory (48 GB of memory per GPU) and third generation AMD EPYC processors. They also support up to 192 vCPUs, up to 400 Gbps of network bandwidth, up to 1.536 TB of system memory, and up to 7.6 TB of local NVMe SSD storage. Developers can run AI inference workloads on G6e instances using AWS Deep Learning AMIs, AWS Deep Learning Containers, or managed services such as Amazon Elastic Kubernetes Service (Amazon EKS) and AWS Batch, with Amazon SageMaker support coming soon.

Amazon EC2 G6e instances are available today in the AWS US East (N. Virginia, Ohio), US West (Oregon), Asia Pacific (Tokyo), and Europe (Frankfurt, Spain) regions. Customers can purchase G6e instances as On-Demand Instances, Reserved Instances, Spot Instances, or as part of Savings Plans.

To get started, visit the AWS Management Console, AWS Command Line Interface (CLI), and AWS SDKs. To learn more, visit the G6e instance page.

Read more


Amazon EC2 C7i-flex and M7i-flex instances are now available in AWS Asia Pacific (Malaysia) Region

Starting today, Amazon Elastic Compute Cloud (Amazon EC2) Flex (C7i-flex, M7i-flex) instances powered by custom 4th Gen Intel Xeon Scalable processors (code-named Sapphire Rapids) are available in Asia Pacific (Malaysia) region. These custom processors, available only on AWS, offer up to 15% better performance over comparable x86-based Intel processors utilized by other cloud providers.

Flex instances are the easiest way for you to get price-performance benefits for a majority of general-purpose and compute intensive workloads. C7i-flex and M7i-flex instances deliver up to 19% better price-performance compared to C6i and M6i instances respectively. These instances offer the most common sizes, from large to 8xlarge, and are a great first choice for applications that don't fully utilize all compute resources such as web and application servers, virtual desktops, batch-processing, microservices, databases, caches, and more. For workloads that need larger instance sizes (up to 192 vCPUs and 768 GiB memory) or continuous high CPU usage, you can leverage C7i and M7i instances.

C7i-flex instances are available in the following AWS Regions: US East (N. Virginia, Ohio), US West (N. California, Oregon), Europe (Frankfurt, Ireland, London, Paris, Spain, Stockholm), Canada (Central), Asia Pacific (Malaysia, Mumbai, Seoul, Singapore, Sydney, Tokyo), and South America (São Paulo).

M7i-flex instances are available in the following AWS Regions: US East (N. Virginia, Ohio), US West (N. California, Oregon), Europe (Frankfurt, Ireland, London, Paris, Spain, Stockholm), Canada (Central), Asia Pacific (Malaysia, Mumbai, Seoul, Singapore, Sydney, Tokyo), South America (São Paulo), and the AWS GovCloud (US-East, US-West).
 

Read more


Amazon EC2 R8g instances now available in AWS Europe (Stockholm)

Starting today, Amazon Elastic Compute Cloud (Amazon EC2) R8g instances are available in AWS Europe (Stockholm) region. These instances are powered by AWS Graviton4 processors and deliver up to 30% better performance compared to AWS Graviton3-based instances. Amazon EC2 R8g instances are ideal for memory-intensive workloads such as databases, in-memory caches, and real-time big data analytics. These instances are built on the AWS Nitro System, which offloads CPU virtualization, storage, and networking functions to dedicated hardware and software to enhance the performance and security of your workloads.

AWS Graviton4-based Amazon EC2 instances deliver the best performance and energy efficiency for a broad range of workloads running on Amazon EC2. AWS Graviton4-based R8g instances offer larger instance sizes with up to 3x more vCPU (up to 48xlarge) and memory (up to 1.5TB) than Graviton3-based R7g instances. These instances are up to 30% faster for web applications, 40% faster for databases, and 45% faster for large Java applications compared to AWS Graviton3-based R7g instances. R8g instances are available in 12 different instance sizes, including two bare metal sizes. They offer up to 50 Gbps enhanced networking bandwidth and up to 40 Gbps of bandwidth to the Amazon Elastic Block Store (Amazon EBS).

To learn more, see Amazon EC2 R8g Instances. To explore how to migrate your workloads to Graviton-based instances, see AWS Graviton Fast Start program and Porting Advisor for Graviton. To get started, see the AWS Management Console.

Read more


Amazon EC2 now provides lineage information for your AMIs

Amazon EC2 now provides source details for your Amazon Machine Images (AMIs). With this lineage information, you can easily trace any copied or derived AMI back to its original source AMI.

Prior to today, you had to maintain a list of AMIs, use tags, and create custom scripts to track the origins of an AMI. This approach was time-consuming, hard to scale, and resulted in operational overhead. Now with this capability, you can easily view details of the source AMI, making it easier for you to understand where a particular AMI originated. When copying AMIs across AWS Regions, the lineage information clearly links the copied AMIs to their original AMIs. This new capability provides a more streamlined and efficient way to manage and understand the lineage of AMIs within your AWS environment.
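
A minimal Python (boto3) sketch of reading an AMI's lineage; the source-AMI response fields (SourceImageId, SourceImageRegion) are assumed names, and the AMI ID is a placeholder.

import boto3

ec2 = boto3.client("ec2")

images = ec2.describe_images(ImageIds=["ami-0123456789abcdef0"])  # placeholder
for image in images["Images"]:
    print(
        image["ImageId"],
        "derived from",
        image.get("SourceImageId", "n/a"),      # assumed field name
        "in",
        image.get("SourceImageRegion", "n/a"),  # assumed field name
    )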

You can view these details by using the AWS CLI, SDKs, or Console. This capability is available at no additional cost in all AWS Regions, including AWS GovCloud (US) and AWS China Regions. To learn more, please visit our documentation here.

Read more


Amazon EC2 C6a and R6a instances now available in additional AWS region

Starting today, compute optimized Amazon EC2 C6a and memory optimized Amazon EC2 R6a instances are now available in Asia Pacific (Hyderabad) region. C6a and R6a instances are powered by third-generation AMD EPYC processors with a maximum frequency of 3.6 GHz. C6a instances deliver up to 15% better price performance than comparable C5a instances, and R6a deliver up to 35% better price performance than comparable R5a instances. These instances are built on the AWS Nitro System, a combination of dedicated hardware and lightweight hypervisor that delivers practically all of the compute and memory resources of the host hardware to your instances for better overall performance and security.

With this additional region, C6a instances are available in the following AWS Regions: US East (Northern Virginia, Ohio), US West (Oregon, N. California), Asia Pacific (Hong Kong, Mumbai, Singapore, Sydney, Tokyo, Hyderabad), Canada (Central), Europe (Frankfurt, Ireland, London), and South America (São Paulo). R6a instances are available in the following AWS Regions: US East (Northern Virginia, Ohio), US West (Oregon, N. California), Asia Pacific (Mumbai, Singapore, Sydney, Tokyo, Hyderabad), and Europe (Frankfurt, Ireland).

These instances can be purchased as Savings Plans, Reserved, On-Demand, and Spot instances. To get started, visit the AWS Management Console, AWS Command Line Interface (CLI), and AWS SDKs. To learn more, visit the C6a instances, and R6a instances pages.
 

Read more


Announcing customized delete protection for Amazon EBS Snapshots and EBS-backed AMIs

Customers can now further customize Recycle Bin rules to exclude EBS Snapshots and EBS-backed Amazon Machine Images (AMIs) based on tags. Customers use Recycle Bin to protect their resources from accidental deletion by retaining them for a period that they specify before permanent deletion. The newly launched feature helps customers save costs by customizing their Recycle Bin rules to protect only critical data, while excluding non-critical data that does not require delete protection.

Creating Region-level retention rules is a simple way to have peace of mind that all EBS Snapshots and EBS-backed AMIs in an AWS Region are protected from accidental deletion by Recycle Bin. However, in some cases, customers have security scanning workflows that create temporary EBS Snapshots that are not used for recovery. Customers may also have backup automation that does not require additional delete protection. The new ability to add resource exclusion tags to Recycle Bin rules can help you reduce storage costs by keeping resources that do not require deletion protection out of the Recycle Bin.
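
A minimal Python (boto3) sketch of a Region-level snapshot retention rule that skips temporary scan snapshots; the ExcludeResourceTags parameter name is assumed from this launch, and the tag key/value is a placeholder.

import boto3

rbin = boto3.client("rbin")

rbin.create_rule(
    Description="Protect snapshots for 7 days, excluding temporary scan snapshots",
    ResourceType="EBS_SNAPSHOT",
    RetentionPeriod={"RetentionPeriodValue": 7, "RetentionPeriodUnit": "DAYS"},
    ExcludeResourceTags=[  # assumed parameter name
        {"ResourceTagKey": "purpose", "ResourceTagValue": "security-scan"}
    ],
)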

This feature is now available in all AWS commercial Regions and the AWS GovCloud (US) Regions. Customers can add exclusion tags to their Recycle Bin rules via the EC2 Console, API/CLI, or SDKs.

To learn more about using Recycle Bin with exclusion tags, please refer to the technical documentation.
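As a hedged sketch, a Region-level retention rule with exclusion tags might be created with boto3; the ExcludeResourceTags parameter and its shape are assumptions based on this announcement:

    import boto3

    rbin = boto3.client("rbin", region_name="us-east-1")

    # Retain deleted snapshots for 7 days, but exclude anything tagged scan=temporary.
    rule = rbin.create_rule(
        ResourceType="EBS_SNAPSHOT",
        RetentionPeriod={"RetentionPeriodValue": 7,
                         "RetentionPeriodUnit": "DAYS"},
        Description="Region-level delete protection with exclusions",
        ExcludeResourceTags=[{"ResourceTagKey": "scan",            # assumed parameter
                              "ResourceTagValue": "temporary"}],
    )
    print(rule["Identifier"])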

Read more


Amazon EC2 X2iezn instances are now available in additional AWS region

Starting today, memory optimized Amazon EC2 X2iezn instances are available in Middle East (UAE). Amazon EC2 X2iezn instances are powered by 2nd generation Intel Xeon Scalable processors with an all-core turbo frequency of up to 4.5 GHz, the fastest in the cloud. These instances are a great fit for electronic design automation (EDA) workloads as well as relational databases that benefit from high single-threaded processor performance and a large memory footprint. The combination of high single-threaded compute performance and a 32:1 ratio of memory to vCPU makes X2iezn instances an ideal fit for EDA workloads, including physical verification, static timing analysis, power sign-off, and full-chip gate-level simulation, as well as license-bound database workloads. These instances are built on the AWS Nitro System, a rich collection of building blocks that offloads many of the traditional virtualization functions to dedicated hardware, delivering high performance, high availability, and highly secure cloud instances.

With this additional Region, X2iezn instances are now available in the AWS US West (Oregon), US East (Northern Virginia), Europe (Ireland), Asia Pacific (Tokyo), and Middle East (UAE) Regions. X2iezn instances are available for purchase with Savings Plans, Reserved Instances, Convertible Reserved Instances, On-Demand Instances, and Spot Instances, or as Dedicated Instances or Dedicated Hosts.

To get started with X2iezn instances, use the AWS Management Console, AWS Command Line Interface (CLI), or AWS SDKs. To learn more, visit the EC2 X2iezn instances page, visit the AWS forum for EC2, or connect with your usual AWS Support contacts.

Read more


Amazon Time Sync Service supports Microsecond-Accurate Time in Stockholm Region

The Amazon Time Sync Service now supports clock synchronization within microseconds of UTC on Amazon EC2 instances in the Europe (Stockholm) region.

The service is built on Amazon's proven network infrastructure and the AWS Nitro System, giving customers access to local, GPS-disciplined reference clocks on supported EC2 instances. These clocks can be used to more easily order application events, measure 1-way network latency, increase distributed application transaction speed, and incorporate in-region and cross-region scalability features, all while simplifying technical designs. This capability is an improvement over many on-premises time solutions, and it is the first microsecond-range time service offered by any cloud provider. Additionally, you can audit your clock accuracy from your instance to measure and monitor the expected microsecond-range accuracy. Customers already using the Amazon Time Sync Service on supported instances will see improved clock accuracy automatically, without needing to adjust their AMI or NTP client settings. Customers can also use standard PTP clients and configure a new PTP Hardware Clock (PHC) to get the best accuracy possible. Both NTP and PTP can be used without any updates to VPC configurations.

Amazon Time Sync’s microsecond-accurate time is available starting today in Europe (Stockholm), in addition to the previously supported Regions, on supported EC2 instance types. We will be expanding support to more AWS Regions and EC2 instance types. There is no additional charge for using this service.

Configuration instructions, and more information on the Amazon Time Sync Service, are available in the EC2 User Guide.

Read more


Amazon EC2 G6 instances now available in the AWS GovCloud (US-West) Region

Starting today, the Amazon Elastic Compute Cloud (Amazon EC2) G6 instances powered by NVIDIA L4 GPUs are now available in the AWS GovCloud (US-West) Region. G6 instances can be used for a wide range of graphics-intensive and machine learning use cases.

Customers can use G6 instances for deploying ML models for natural language processing, language translation, video and image analysis, speech recognition, and personalization as well as graphics workloads, such as creating and rendering real-time, cinematic-quality graphics and game streaming. G6 instances feature up to 8 NVIDIA L4 Tensor Core GPUs with 24 GB of memory per GPU and third generation AMD EPYC processors. They also support up to 192 vCPUs, up to 100 Gbps of network bandwidth, and up to 7.52 TB of local NVMe SSD storage.

Customers can purchase G6 instances as On-Demand Instances, Reserved Instances, Spot Instances, or as part of Savings Plans. To get started, visit the AWS Management Console, AWS Command Line Interface (CLI), and AWS SDKs. To learn more, visit the G6 instance page.

Read more


Amazon EC2 Mac instances now available in AWS Canada (Central) Region

Starting today, Amazon Elastic Compute Cloud (Amazon EC2) M2 Mac instances are generally available (GA) in the AWS Canada (Central) Region. This marks the first time Mac instances are available in a Canadian AWS Region, providing customers with even greater global access to Apple silicon hardware. Customers can now run their macOS workloads in the AWS Canada (Central) Region to satisfy data residency requirements, benefit from improved latency to end users, and integrate with their pre-existing AWS environment configurations within this Region.

M2 Mac instances deliver up to 10% faster performance than M1 Mac instances when building and testing applications for Apple platforms such as iOS, macOS, iPadOS, tvOS, watchOS, visionOS, and Safari. M2 Mac instances are powered by the AWS Nitro System and are built on Apple M2 Mac mini computers featuring an 8-core CPU, a 10-core GPU, 24 GiB of memory, and a 16-core Apple Neural Engine.

With this expansion, EC2 M2 Mac instances are available across the US East (N. Virginia, Ohio), US West (Oregon), Europe (Frankfurt), Asia Pacific (Sydney), and Canada (Central) Regions. To learn more or get started, see Amazon EC2 Mac Instances or visit the EC2 Mac documentation reference.

Read more


Amazon EC2 Capacity Blocks expands to new regions

Today, Amazon Web Services announces that Amazon Elastic Compute Cloud (Amazon EC2) Capacity Blocks for ML is available for P5 instances in two new regions: US West (Oregon) and Asia Pacific (Tokyo). You can use EC2 Capacity Blocks to reserve highly sought-after GPU instances in Amazon EC2 UltraClusters for a future date for the amount of time that you need to run your machine learning (ML) workloads.

EC2 Capacity Blocks enable you to reserve GPU capacity up to eight weeks in advance for durations up to 28 days in cluster sizes of one to 64 instances (512 GPUs), giving you the flexibility to run a broad range of ML workloads. They are ideal for short duration pre-training and fine-tuning workloads, rapid prototyping, and for handling surges in inference demand. EC2 Capacity Blocks deliver low-latency, high-throughput connectivity through colocation in Amazon EC2 UltraClusters.

With this expansion, EC2 Capacity Blocks for ML are available for the following instance types and AWS Regions: P5 instances in US East (N. Virginia), US East (Ohio), US West (Oregon), and Asia Pacific (Tokyo); P5e instances in US East (Ohio); P4d instances in US East (Ohio) and US West (Oregon); Trn1 instances in Asia Pacific (Melbourne).

To get started, use the AWS Management Console, AWS Command Line Interface (AWS CLI), or AWS SDKs. To learn more, see the Amazon EC2 Capacity Blocks for ML User Guide.
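As a hedged illustration of the reservation workflow, the sketch below searches for and purchases a Capacity Block with boto3; the instance type, count, duration, and dates are placeholders:

    import boto3

    ec2 = boto3.client("ec2", region_name="us-west-2")

    # Search for Capacity Block offerings that fit the workload window.
    offerings = ec2.describe_capacity_block_offerings(
        InstanceType="p5.48xlarge",
        InstanceCount=4,
        CapacityDurationHours=24 * 14,          # a 14-day block
        StartDateRange="2025-01-06T00:00:00Z",  # placeholder window
        EndDateRange="2025-01-20T00:00:00Z",
    )

    # Reserve the first matching offering.
    offering = offerings["CapacityBlockOfferings"][0]
    purchase = ec2.purchase_capacity_block(
        CapacityBlockOfferingId=offering["CapacityBlockOfferingId"],
        InstancePlatform="Linux/UNIX",
    )
    print(purchase["CapacityReservation"]["CapacityReservationId"])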

Read more


Amazon EC2 High Memory instances now available in South America (Sao Paulo) Region

Starting today, Amazon EC2 High Memory instances with 9 TiB of memory (u-9tb1.112xlarge) and 18 TiB of memory (u-18tb1.112xlarge) are available in the South America (Sao Paulo) Region. Customers can use these High Memory instances with On-Demand and Savings Plans purchase options.

Amazon EC2 High Memory instances are certified by SAP for running Business Suite on HANA, SAP S/4HANA, Data Mart Solutions on HANA, Business Warehouse on HANA, and SAP BW/4HANA in production environments. For details, see the Certified and Supported SAP HANA Hardware Directory.

For information on how to get started with your SAP HANA migration to EC2 High Memory instances, view the Migrating SAP HANA on AWS to an EC2 High Memory Instance documentation. To hear from Steven Jones, GM for SAP on AWS, about what this launch means for our SAP customers, read his launch blog.
 

Read more


Amazon EC2 High Memory instances now available in Asia Pacific (Mumbai) region

Starting today, Amazon EC2 High Memory instances with 9 TiB of memory (u-9tb1.112xlarge) are available in the Asia Pacific (Mumbai) Region. Customers can use these High Memory instances with On-Demand and Savings Plans purchase options.

Amazon EC2 High Memory instances are certified by SAP for running Business Suite on HANA, SAP S/4HANA, Data Mart Solutions on HANA, Business Warehouse on HANA, and SAP BW/4HANA in production environments. For details, see the Certified and Supported SAP HANA Hardware Directory.

For information on how to get started with your SAP HANA migration to EC2 High Memory instances, view the Migrating SAP HANA on AWS to an EC2 High Memory Instance documentation. To hear from Steven Jones, GM for SAP on AWS, about what this launch means for our SAP customers, read his launch blog.
 

Read more


AWS announces availability of Microsoft Windows Server 2025 images on Amazon EC2

Amazon EC2 now supports Microsoft Windows Server 2025 with License Included (LI) Amazon Machine Images (AMIs), providing customers with an easy and flexible way to launch the latest version of Windows Server. By running Windows Server 2025 on Amazon EC2, customers can take advantage of the security, performance, and reliability of AWS with the latest Windows Server features.

Amazon EC2 is the proven, reliable, and secure cloud for your Windows Server workloads. Amazon creates and manages Microsoft Windows Server 2025 AMIs, providing a reliable and quick way to launch Windows Server 2025 on EC2 instances. These images support Nitro-based instances with Unified Extensible Firmware Interface (UEFI) for enhanced security. They also come with features such as Amazon EBS gp3 as the default root volume and the AWS NVMe driver pre-installed, which give you faster throughput and maximize price performance. In addition, you can seamlessly use these images with pre-qualified services such as AWS Systems Manager, Amazon EC2 Image Builder, and AWS License Manager.

Windows Server 2025 AMIs are available in all commercial AWS Regions and the AWS GovCloud (US) Regions. You can find and launch instances directly from the Amazon EC2 console or through API or CLI commands. All instances running Windows Server 2025 AMIs are billed under the EC2 pricing for the Windows operating system (OS).

To learn more about the new AMIs, see AWS Windows AMI reference. To learn more about running Windows Server 2025 on Amazon EC2, visit the Windows Workloads on AWS page.
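As a hedged example, the latest Windows Server 2025 AMI ID can typically be resolved through an AWS Systems Manager public parameter; the exact parameter name below is an assumption that follows the naming pattern of earlier Windows Server AMIs:

    import boto3

    ssm = boto3.client("ssm", region_name="us-east-1")
    ec2 = boto3.client("ec2", region_name="us-east-1")

    # Public parameter path assumed from the established Windows AMI naming scheme.
    param = ssm.get_parameter(
        Name="/aws/service/ami-windows-latest/Windows_Server-2025-English-Full-Base"
    )

    # Launch a Windows Server 2025 instance from the resolved AMI.
    ec2.run_instances(ImageId=param["Parameter"]["Value"],
                      InstanceType="m6i.large",
                      MinCount=1, MaxCount=1)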

Read more


Amazon EC2 R8g instances now available in AWS Europe (Ireland)

Starting today, Amazon Elastic Compute Cloud (Amazon EC2) R8g instances are available in the AWS Europe (Ireland) Region. These instances are powered by AWS Graviton4 processors and deliver up to 30% better performance compared to AWS Graviton3-based instances. Amazon EC2 R8g instances are ideal for memory-intensive workloads such as databases, in-memory caches, and real-time big data analytics. These instances are built on the AWS Nitro System, which offloads CPU virtualization, storage, and networking functions to dedicated hardware and software to enhance the performance and security of your workloads.

AWS Graviton4-based Amazon EC2 instances deliver the best performance and energy efficiency for a broad range of workloads running on Amazon EC2. AWS Graviton4-based R8g instances offer larger instance sizes, with up to 3x more vCPUs (up to 48xlarge) and memory (up to 1.5 TB) than Graviton3-based R7g instances. These instances are up to 30% faster for web applications, 40% faster for databases, and 45% faster for large Java applications compared to AWS Graviton3-based R7g instances. R8g instances are available in 12 different instance sizes, including two bare metal sizes. They offer up to 50 Gbps enhanced networking bandwidth and up to 40 Gbps of bandwidth to the Amazon Elastic Block Store (Amazon EBS).

To learn more, see Amazon EC2 R8g Instances. To explore how to migrate your workloads to Graviton-based instances, see AWS Graviton Fast Start program and Porting Advisor for Graviton. To get started, see the AWS Management Console.

Read more


Introducing Amazon EC2 M8g instances in Dallas Local Zone

AWS announces the general availability of Amazon Elastic Compute Cloud (Amazon EC2) M8g instances in the Dallas Local Zone. These instances are powered by AWS Graviton4 processors and built for general-purpose workloads, such as application servers, microservices, gaming servers, midsize data stores, and caching fleets. AWS Graviton4-based Amazon EC2 instances deliver the best performance and energy efficiency for a broad range of workloads running on Amazon EC2. M8g instances are available in 12 different instance sizes, including two bare metal sizes. They offer up to 50 Gbps enhanced networking bandwidth and up to 40 Gbps of bandwidth to the Amazon Elastic Block Store (Amazon EBS).

AWS Local Zones are a type of AWS infrastructure deployment that places compute, storage, database, and other select services closer to large population, industry, and IT centers where no AWS Region exists. You can use Local Zones to run applications that require single-digit millisecond latency for use cases such as real-time gaming, hybrid migrations, media and entertainment content creation, live video streaming, engineering simulations, and AR/VR at the edge.

To get started, enable the AWS Dallas Local Zone us-east-1-dfw-2a in the Amazon EC2 Console or with the ModifyAvailabilityZoneGroup API, and then deploy M8g instances. To learn more, visit the AWS Local Zones overview page and see Amazon EC2 M8g instances.
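As a hedged sketch, opting in to the zone group with boto3 might look like the following; the group name is assumed from the zone name in this announcement:

    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")

    # Opt in to the Dallas Local Zone group (group name assumed from the zone name).
    ec2.modify_availability_zone_group(
        GroupName="us-east-1-dfw-2",
        OptInStatus="opted-in",
    )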

Read more


amazon-ec2-auto-scaling

Amazon EC2 Auto Scaling introduces highly responsive scaling policies

Today, we are launching two new capabilities for Amazon EC2 Auto Scaling that improve the responsiveness of Target Tracking scaling policies. Target Tracking now automatically adapts to the unique usage patterns of your individual applications, and it can be configured to monitor high-resolution CloudWatch metrics to make more timely scaling decisions. With this release, you can enhance your application performance while maintaining high utilization of your EC2 resources to save costs.

Scaling based on sub-minute CloudWatch metrics enables customers whose applications have volatile demand patterns, such as client-serving APIs, live streaming services, ecommerce websites, or on-demand data processing, to reduce the time it takes to detect and respond to changing demand. In addition, Target Tracking policies now self-tune their responsiveness, using historical usage data to determine the optimal balance between cost and performance for each application, saving customers time and effort.

Both of these new features are available in select commercial Regions, and Target Tracking policies will begin self-tuning once they have finished analyzing your application’s usage patterns. You can use the AWS Management Console, CLI, SDKs, and CloudFormation to update your Target Tracking configurations. Refer to the EC2 Auto Scaling user guide to learn more.
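Below is a hedged sketch of a Target Tracking policy on a high-resolution custom metric using boto3; the ASG name, namespace, and metric are placeholders, and the placement of the Period field on the metric data query is an assumption based on this announcement:

    import boto3

    autoscaling = boto3.client("autoscaling", region_name="us-east-1")

    # Target Tracking against a 10-second high-resolution custom metric.
    autoscaling.put_scaling_policy(
        AutoScalingGroupName="my-asg",            # placeholder
        PolicyName="fast-target-tracking",
        PolicyType="TargetTrackingScaling",
        TargetTrackingConfiguration={
            "TargetValue": 70.0,
            "CustomizedMetricSpecification": {
                "Metrics": [{
                    "Id": "m1",
                    "MetricStat": {
                        "Metric": {
                            "Namespace": "MyApp",             # placeholder namespace
                            "MetricName": "RequestsPerTask",  # placeholder metric
                            "Dimensions": [{"Name": "AutoScalingGroupName",
                                            "Value": "my-asg"}],
                        },
                        "Stat": "Average",
                    },
                    "Period": 10,   # assumed placement of the high-resolution period
                    "ReturnData": True,
                }],
            },
        },
    )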

Read more


Amazon EC2 introduces provisioning control to launch instances on On-Demand Capacity Reservations

Amazon EC2 introduces a new capability that makes it easy for customers to target instance launches on their On-Demand Capacity Reservations (ODCRs). On-Demand Capacity Reservations help you reserve compute capacity for your workloads in a specific Availability Zone for any duration. This new feature allows you to better utilize your On-Demand Capacity Reservations by ensuring that launches from the RunInstances EC2 API and EC2 Auto Scaling groups will only be fulfilled by your targeted or open Capacity Reservations.

To get started, customers simply specify that they want to launch only on ODCR capacity in their RunInstances EC2 API calls, Launch Templates, or Auto Scaling groups (ASGs).

This capability is now available in all AWS Regions, except the China Regions. To get started, please refer to the documentation for use with the RunInstances API and ASGs.
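As a hedged sketch with boto3; the AMI and instance type are placeholders, and the 'capacity-reservations-only' preference value is an assumption based on this announcement:

    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")

    # Launch only if ODCR capacity is available; never fall back to regular
    # On-Demand capacity ('capacity-reservations-only' is an assumed value).
    ec2.run_instances(
        ImageId="ami-0123456789abcdef0",    # placeholder AMI
        InstanceType="c6i.large",
        MinCount=1,
        MaxCount=1,
        CapacityReservationSpecification={
            "CapacityReservationPreference": "capacity-reservations-only",
        },
    )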
 

Read more


Amazon EC2 adds new CPU-performance attribute for instance type selection

Starting today, EC2 Auto Scaling and EC2 Fleet customers can express their EC2 instances’ CPU-performance requirements as part of the Attribute-Based Instance Type Selection (ABIS) configuration. With ABIS, customers can already choose a list of instance types by defining a set of desired resource requirements, such as the number of vCPU cores and the amount of memory per instance. Now, in addition to the quantitative resource requirements, customers can also identify an instance family that ABIS will use as a baseline to automatically select instance types that offer similar or better CPU performance, enabling customers to further optimize their instance type selection.

ABIS is a powerful tool for customers looking to leverage instance type diversification to meet their capacity requirements. For example, customers who use Spot Instances to launch into limited EC2 spare capacity at a discounted price can access multiple instance types to fulfill their larger capacity needs and experience fewer interruptions. With this release, customers can, for example, use ABIS in a launch request for instances in the C, M, and R instance classes that have a minimum of 4 vCPUs and provide CPU performance in line with the C6i instance family, or better.
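As a hedged sketch of that example using EC2 Fleet with boto3; the launch template name is a placeholder, and the BaselinePerformanceFactors shape is an assumption based on this announcement:

    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")

    # Spot launch using ABIS with a CPU-performance baseline of C6i or better.
    ec2.create_fleet(
        Type="instant",
        TargetCapacitySpecification={"TotalTargetCapacity": 4,
                                     "DefaultTargetCapacityType": "spot"},
        LaunchTemplateConfigs=[{
            "LaunchTemplateSpecification": {
                "LaunchTemplateName": "my-template",   # placeholder
                "Version": "$Latest",
            },
            "Overrides": [{
                "InstanceRequirements": {
                    "VCpuCount": {"Min": 4},
                    "MemoryMiB": {"Min": 8192},
                    "AllowedInstanceTypes": ["c*", "m*", "r*"],
                    "BaselinePerformanceFactors": {    # assumed attribute shape
                        "Cpu": {"References": [{"InstanceFamily": "c6i"}]},
                    },
                },
            }],
        }],
    )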

The feature is available in all AWS commercial Regions and the AWS GovCloud (US) Regions. You can use the AWS Management Console, CLI, SDKs, and CloudFormation to update your instance requirements. To get started, refer to the user guides for EC2 Auto Scaling and EC2 Fleet.

Read more


Amazon Application Recovery Controller zonal shift and zonal autoshift extend support for EC2 Auto Scaling

Amazon Application Recovery Controller (ARC) zonal shift and zonal autoshift have expanded their capabilities and now support EC2 Auto Scaling. ARC zonal shift helps you quickly recover an unhealthy application in an Availability Zone (AZ), and reduce the duration and severity of impact to the application due to events such as power outages and hardware or software failures. ARC zonal autoshift safely and automatically shifts your application’s traffic away from an AZ when AWS identifies a potential failure affecting that AZ.

EC2 Auto Scaling customers can now shift traffic away from an AZ in the event of a failure. Zonal shift works with EC2 Auto Scaling by stopping dynamic scale-in, so that capacity is not unnecessarily removed, and by launching new EC2 instances only in the healthy AZs. In addition, you can choose whether health checks remain enabled or are disabled in the impaired AZ; when disabled, unhealthy instance replacement is paused in the AZ that has an active zonal shift. Enable your EC2 Auto Scaling groups for zonal shift using the EC2 Auto Scaling console or API, and then trigger a zonal shift or enable autoshift via the ARC zonal shift console or API. To learn more, review the ARC documentation and read this launch blog.

There is no additional charge for using zonal shift or zonal autoshift. See the AWS Regional Services List for the most up-to-date availability information.
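As a hedged sketch of both steps with boto3; the ASG name and ARN are placeholders, and the AvailabilityZoneImpairmentPolicy shape is an assumption based on this announcement:

    import boto3

    autoscaling = boto3.client("autoscaling", region_name="us-east-1")
    arc = boto3.client("arc-zonal-shift", region_name="us-east-1")

    # Step 1: enable the ASG for zonal shift (policy shape assumed).
    autoscaling.update_auto_scaling_group(
        AutoScalingGroupName="my-asg",
        AvailabilityZoneImpairmentPolicy={
            "ZonalShiftEnabled": True,
            "ImpairedZoneHealthCheckBehavior": "IgnoreUnhealthy",  # assumed value
        },
    )

    # Step 2: shift traffic away from the impaired AZ for two hours.
    arc.start_zonal_shift(
        resourceIdentifier="arn:aws:autoscaling:us-east-1:123456789012:autoScalingGroup:...",  # placeholder
        awayFrom="use1-az1",
        expiresIn="2h",
        comment="AZ impairment",
    )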
 

Read more


EC2 Auto Scaling now supports Amazon Application Recovery Controller zonal shift and zonal autoshift

EC2 Auto Scaling now supports Amazon Application Recovery Controller (ARC) zonal shift and zonal autoshift to help you quickly recover an impaired application from failures in an Availability Zone (AZ). Starting today, you can shift the launches of EC2 instances in an Auto Scaling group (ASG) away from an impaired AZ to quickly recover your unhealthy application in another AZ, reducing the duration and severity of impact due to events such as power outages and hardware or software failures. This integration also brings support for ARC zonal autoshift, which automatically starts a zonal shift for enabled ASGs when AWS identifies a potential failure affecting an AZ.

You can initiate a zonal shift for an ASG from the Amazon EC2 Auto Scaling or Application Recovery Controller console. You can also use the AWS SDKs to start a zonal shift and programmatically shift the instances in your ASG away from an AZ, then shift them back once the affected AZ is healthy.

There is no additional charge for using zonal shift. Zonal shift is now available in all AWS Regions. To get started, read the launch blog, or refer to the documentation.
 

Read more


EC2 Auto Scaling introduces provisioning control on strict availability zone balance

Amazon EC2 Auto Scaling groups (ASGs) introduce a new capability that lets customers strictly balance their workloads across Availability Zones, enabling greater control over the provisioning and management of their EC2 instances.

Previously, customers that wanted to strictly balance an ASG’s EC2 instances across Availability Zones had to override the default behaviors of EC2 Auto Scaling, investing in custom code that modifies the ASG’s behavior with lifecycle hooks or maintaining multiple ASGs. With this feature, customers can now easily achieve strict Availability Zone balance and ensure higher levels of resiliency for their applications.

This capability is now available through the AWS Command Line Interface (CLI), AWS SDKs, or the AWS Console in all AWS Regions. To get started, please refer to the documentation.
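As a hedged sketch with boto3; the ASG name is a placeholder, and the 'balanced-only' strategy value is an assumption based on this announcement:

    import boto3

    autoscaling = boto3.client("autoscaling", region_name="us-east-1")

    # Require a strictly even instance distribution across AZs
    # ('balanced-only' is an assumed strategy name; best-effort is the default).
    autoscaling.update_auto_scaling_group(
        AutoScalingGroupName="my-asg",   # placeholder
        AvailabilityZoneDistribution={
            "CapacityDistributionStrategy": "balanced-only",
        },
    )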

Read more


amazon-ec2-trn2

Amazon EC2 Trn2 instances are generally available

Today, AWS announces the general availability of Amazon Elastic Compute Cloud (Amazon EC2) Trn2 instances and the preview of Trn2 UltraServers, powered by AWS Trainium2 chips. Available via EC2 Capacity Blocks, Trn2 instances and UltraServers are the most powerful EC2 compute solutions for deep learning and generative AI training and inference.

You can use Trn2 instances to train and deploy the most demanding foundation models, including large language models (LLMs), multi-modal models, diffusion transformers, and more, to build a broad set of AI applications. To reduce training times and deliver breakthrough response times (per-token latency) for the most capable, state-of-the-art models, you might need more compute and memory than a single instance can deliver. Trn2 UltraServers are a completely new EC2 offering that uses NeuronLink, a high-bandwidth, low-latency fabric, to connect 64 Trainium2 chips across four Trn2 instances into one node, unlocking unparalleled performance. For inference, UltraServers help deliver industry-leading response times to create the best real-time experiences. For training, UltraServers boost model training speed and efficiency with faster collective communication for model parallelism compared to standalone instances.

Trn2 instances feature 16 Trainium2 chips to deliver up to 20.8 petaflops of FP8 compute, 1.5 TB of high-bandwidth memory with 46 TB/s of memory bandwidth, and 3.2 Tbps of EFA networking. Trn2 UltraServers feature 64 Trainium2 chips to deliver up to 83.2 petaflops of FP8 compute, 6 TB of total high-bandwidth memory with 185 TB/s of total memory bandwidth, and 12.8 Tbps of EFA networking. Both are deployed in EC2 UltraClusters to provide non-blocking, petabit scale-out capabilities for distributed training. Trn2 instances are generally available in the trn2.48xlarge size in the US East (Ohio) AWS Region through EC2 Capacity Blocks for ML.

To learn more about Trn2 instances and to request access to Trn2 UltraServers, please visit the Trn2 instances page.

Read more


amazon-ecr

Amazon ECR announces 10x increase in repository limit to 100,000

Amazon Elastic Container Registry (ECR) now supports a 10x increase in the default limit for repositories per Region per account to 100,000, up from the previous limit of 10,000. This change better aligns with your growth needs and saves you time by eliminating limit-increase requests until you reach 100,000 repositories. You still have the flexibility to adjust the new limit and request additional increases if you require more than 100,000 repositories per registry.

The new limit increase is already applied to your current registries and is available in all AWS commercial and AWS GovCloud (US) Regions. To learn more about default ECR service limits, please visit our documentation. You can learn more about storing, managing, and deploying container images and artifacts with Amazon ECR, including how to get started, from our product page and user guide.

Read more


amazon-ecs

Amazon CloudWatch Container Insights launches enhanced observability for Amazon ECS

Amazon CloudWatch Container Insights introduces enhanced observability for Amazon Elastic Container Service (ECS) running on Amazon EC2 and AWS Fargate, with out-of-the-box detailed metrics from the cluster level down to the container level, to deliver faster problem isolation and troubleshooting.

Enhanced observability enables customers to visually drill up and down across container layers and directly spot issues like memory leaks in individual containers, reducing mean time to resolution. With enhanced observability, customers can now view their clusters, services, tasks, or containers sorted by resource consumption, quickly identify anomalies, and proactively mitigate risks before the end-user experience is impacted. Using Container Insights’ new landing page, customers can easily understand the overall health and performance of clusters across multiple accounts, identify the ones operating under high utilization, and pinpoint root causes by browsing directly to the related detailed dashboard views, saving time and effort.

You can get started with enhanced observability at the cluster level or account level by selecting the “Enhanced” radio button in the Amazon ECS console, or through the AWS CLI, CloudFormation, and CDK. You can also collect instance-level metrics from EC2 by launching the CloudWatch agent as a daemon service on your Container Insights enabled clusters.
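As a hedged sketch with boto3; the cluster name is a placeholder, and the 'enhanced' setting value follows this announcement:

    import boto3

    ecs = boto3.client("ecs", region_name="us-east-1")

    # Turn on enhanced Container Insights for one cluster.
    ecs.update_cluster(
        cluster="my-cluster",   # placeholder
        settings=[{"name": "containerInsights", "value": "enhanced"}],
    )

    # Or make it the account-level default for new clusters.
    ecs.put_account_setting(name="containerInsights", value="enhanced")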

Container Insights is available in all public AWS Regions, including the AWS GovCloud (US) Regions, China (Beijing, operated by Sinnet), and China (Ningxia, operated by NWCD). Container Insights with enhanced observability for ECS comes with a flat metric pricing – see pricing page for details. For further information, visit the Container Insights documentation.

Read more


AWS announces support for predictive scaling for Amazon ECS services

Today, AWS announces support for predictive scaling for Amazon Elastic Container Service (Amazon ECS). Predictive scaling leverages advanced machine learning algorithms to proactively scale your Amazon ECS services ahead of demand surges, reducing overprovisioning costs while improving application responsiveness and availability.

Amazon ECS offers a rich set of service auto scaling options, including target tracking and step scaling policies that automatically adjust task counts in response to observed load, as well as scheduled scaling to manually define rules that adjust capacity for routine demand patterns. Many applications observe recurring patterns of steep demand changes, such as early morning spikes when business resumes, where a reactive scaling policy can be slow to respond. Predictive scaling is a new capability that harnesses advanced machine learning algorithms, pre-trained on millions of data points, to proactively scale out ECS services ahead of anticipated demand surges. You can use predictive scaling alongside your existing auto scaling policies, such as target tracking or step scaling, so that your applications scale based on both real-time and historic patterns. You can also choose a “forecast only” mode to evaluate its accuracy and suitability before enabling it to “forecast and scale”. Predictive scaling enhances responsiveness and availability for applications with recurring demand patterns, while also reducing the operational effort of manually configuring scaling policies and the costs of overprovisioning.

You can use the AWS Management Console, SDKs, CLI, CloudFormation, and CDK to configure predictive scaling for your ECS services. For a list of supported AWS Regions, see the documentation. To learn more, visit the launch blog post and documentation.
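Below is a hedged sketch using the Application Auto Scaling API via boto3, starting in forecast-only mode; the resource ID is a placeholder, and the predefined metric-pair name is an assumption based on this announcement:

    import boto3

    aas = boto3.client("application-autoscaling", region_name="us-east-1")

    # Predictive scaling for an ECS service, starting in forecast-only mode.
    aas.put_scaling_policy(
        PolicyName="ecs-predictive",
        ServiceNamespace="ecs",
        ResourceId="service/my-cluster/my-service",   # placeholder
        ScalableDimension="ecs:service:DesiredCount",
        PolicyType="PredictiveScaling",
        PredictiveScalingPolicyConfiguration={
            "MetricSpecifications": [{
                "TargetValue": 70.0,
                "PredefinedMetricPairSpecification": {
                    "PredefinedMetricType": "ECSServiceCPUUtilization",  # assumed name
                },
            }],
            "Mode": "ForecastOnly",   # evaluate forecasts before letting it scale
        },
    )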

Read more


Amazon ECS announces AZ rebalancing that speeds up mean time to recovery after an infrastructure event

Amazon Web Services (AWS) has announced the launch of Availability Zone (AZ) rebalancing for Amazon Elastic Container Service (ECS), a new feature that automatically redistributes containerized workloads across AZs. This capability helps reduce the mean time to recovery after infrastructure events, enabling applications to maintain high availability without requiring manual intervention.

Customers spread tasks across multiple AZs to enhance application resilience and minimize the impact of AZ-level failures, following AWS best practices. However, infrastructure events (such as an AZ outage) can leave the task distribution for an ECS service in an uneven state, potentially causing an availability risk to customer applications. With AZ rebalancing, ECS now automatically adjusts task placement to maintain an even balance, ensuring your applications remain highly available even in the face of failure.

Starting today, customers can enable AZ rebalancing for new and existing ECS services through the AWS CLI or the ECS Console. The feature is available in all commercial and AWS GovCloud (US) Regions, and it supports the Fargate and Amazon EC2 launch types. To learn more about AZ rebalancing and how to get started, visit the Amazon ECS documentation page.
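As a hedged sketch with boto3; the cluster and service names are placeholders, and the availabilityZoneRebalancing parameter follows this announcement:

    import boto3

    ecs = boto3.client("ecs", region_name="us-east-1")

    # Enable automatic AZ rebalancing on an existing service.
    ecs.update_service(
        cluster="my-cluster",                  # placeholder
        service="my-service",                  # placeholder
        availabilityZoneRebalancing="ENABLED",
    )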
 

Read more


Amazon ECS now allows you to configure software version consistency

Amazon Elastic Container Service (Amazon ECS) now allows you to configure software version consistency for specific containers within your Amazon ECS services.

By default, Amazon ECS resolves container image tags to the image digest (the SHA256 hash of the image manifest) when you create a new Amazon ECS service or deploy an update to the service. This ensures that all tasks in the service are identical and launched with these image digests. However, for certain containers within the task (e.g., telemetry sidecars provided by a third party), customers may prefer not to enforce consistency and instead use a mutable container image tag (e.g., LATEST). Now, you can disable software version consistency for one or more containers in your ECS service by configuring the new versionConsistency attribute in the container definition. ECS applies changes to version consistency when you redeploy your ECS service with the new task definition revision.

You can disable software version consistency for your Amazon ECS services running on AWS Fargate platform version 1.4.0 or higher, or on version 1.70.0 or higher of the Amazon ECS Agent, in all commercial and AWS GovCloud (US) Regions. To learn more, please visit our documentation.
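As a hedged sketch of a task definition that opts one sidecar out of digest pinning; the images and names are placeholders, and the versionConsistency values are assumptions based on this announcement:

    import boto3

    ecs = boto3.client("ecs", region_name="us-east-1")

    # Opt a third-party sidecar out of digest pinning so it keeps resolving
    # its mutable tag at launch ('versionConsistency' values assumed).
    ecs.register_task_definition(
        family="web-app",
        requiresCompatibilities=["FARGATE"],
        networkMode="awsvpc",
        cpu="256",
        memory="512",
        containerDefinitions=[
            {"name": "app",
             "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/app:1.2.3"},
            {"name": "telemetry-sidecar",
             "image": "example/agent:latest",     # placeholder third-party image
             "versionConsistency": "disabled"},
        ],
    )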
 

Read more


Amazon VPC Lattice now supports Amazon Elastic Container Service (Amazon ECS)

Amazon VPC Lattice now provides native integration with Amazon ECS, Amazon's fully managed container orchestration service that makes it easy for you to deploy, manage, and scale containerized applications. This launch enables VPC Lattice to offer comprehensive support across all major AWS compute services, including Amazon Elastic Compute Cloud (Amazon EC2), Amazon Elastic Kubernetes Service (Amazon EKS), AWS Lambda, Amazon ECS, and AWS Fargate. VPC Lattice is a managed application networking service that simplifies the process of connecting, securing, and monitoring applications across AWS compute services, allowing developers to focus on building applications that matter to their business while reducing time and resources spent on network setup and maintenance.

With native ECS integration, you can now directly associate your ECS services with VPC Lattice target groups, eliminating the need for an intermediate Application Load Balancer (ALB). This streamlined integration reduces cost, operational overhead, and complexity, while enabling you to leverage the complete feature sets of both ECS and VPC Lattice. Organizations with diverse compute infrastructure, such as a mix of Amazon EC2, Amazon EKS, AWS Lambda, and Amazon ECS workloads, can benefit from this launch by unifying service-to-service connectivity, security, and observability across all compute platforms.

This new feature is available in all AWS Regions where Amazon VPC Lattice is available.

To get started, see the Amazon VPC Lattice documentation and the Amazon ECS Developer Guide.
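As a hedged sketch with boto3; the cluster, task definition, subnet, role, and target group ARNs are placeholders, and the vpcLatticeConfigurations shape is an assumption based on this announcement:

    import boto3

    ecs = boto3.client("ecs", region_name="us-east-1")

    # Attach an ECS service directly to a VPC Lattice target group, no ALB needed
    # (vpcLatticeConfigurations shape assumed).
    ecs.create_service(
        cluster="my-cluster",           # placeholder
        serviceName="checkout",
        taskDefinition="checkout:1",    # placeholder task definition
        desiredCount=2,
        launchType="FARGATE",
        networkConfiguration={"awsvpcConfiguration": {
            "subnets": ["subnet-0123456789abcdef0"],   # placeholder subnet
        }},
        vpcLatticeConfigurations=[{
            "roleArn": "arn:aws:iam::123456789012:role/ecsInfrastructureRole",
            "targetGroupArn": "arn:aws:vpc-lattice:us-east-1:123456789012:targetgroup/tg-0123",
            "portName": "http",   # must match a named port mapping in the task definition
        }],
    )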

Read more


AWS introduces service versioning and deployment history for Amazon ECS services

Amazon Elastic Container Service (Amazon ECS) now allows you to view the service revision and deployment history for your long-running applications deployed as Amazon ECS services. This capability makes it easier for you to track and view changes to applications deployed using Amazon ECS, monitor on-going deployments, and debug deployment failures.

Typically, customers deploy long-running applications as Amazon ECS services and roll out software updates using a rolling update mechanism, where tasks running the old software version are gradually replaced by tasks running the new version. With today’s release, you can now view the deployment history for your Amazon ECS services in the AWS Management Console as well as through the new listServiceDeployments API. You can look at the details of a specific deployment, including whether it succeeded, when it started and completed, and the service revision information before and after the deployment, using the Console and the describeServiceDeployment API. Furthermore, you can look at the immutable configuration for a specific service revision, including the task definition, container image digests, load balancer, and Service Connect configuration, using the Console and the describeServiceRevision API.

You can view the service revision and deployment history for services deployed on or after October 25, 2024 using the AWS Management Console, API, SDKs, and CLI in all AWS Regions. To learn more, visit this blog post and documentation.
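As a hedged sketch with boto3; the cluster and service names are placeholders, and the response field names are assumptions based on the API names in this announcement:

    import boto3

    ecs = boto3.client("ecs", region_name="us-east-1")

    # List recent deployments for a service, then inspect their outcomes
    # (response field names assumed).
    deployments = ecs.list_service_deployments(
        cluster="my-cluster",   # placeholder
        service="my-service",   # placeholder
    )

    arns = [d["serviceDeploymentArn"] for d in deployments["serviceDeployments"]]
    details = ecs.describe_service_deployments(serviceDeploymentArns=arns)
    for dep in details["serviceDeployments"]:
        print(dep["status"], dep.get("startedAt"), dep.get("finishedAt"))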

Read more


amazon-efs

Amazon EFS now supports up to 2.5 million IOPS per file system

Amazon EFS now supports up to 2.5 million read IOPS and up to 500,000 write IOPS per file system, a 10x increase over the previous limits, making it easier to power machine learning (ML) research, multi-tenant SaaS, genomics, and other data-intensive workloads on AWS.

Amazon EFS provides serverless, fully elastic file storage that makes it simple to set up and run file workloads on AWS. With this launch, Amazon EFS supports up to 2.5 million read IOPS and up to 500,000 write IOPS per file system. Now, applications that demand millions of IOPS and tens of GiB per second of throughput, such as analytics user shares supporting hundreds of data scientists, multi-tenant SaaS applications supporting thousands of customers, and distributed applications processing petabytes of genomics data, can easily scale to the highest required levels of performance.

The increased IOPS limits are available for all new EFS General Purpose file systems using the Elastic Throughput mode in all AWS commercial Regions, except the AWS China Regions. For new file systems, you can request an IOPS limit increase in the Amazon EFS Service Quotas console. To learn more, see the Amazon EFS documentation, or create a file system using the Amazon EFS Console, API, or AWS CLI.

Read more


amazon-eks

Announcing Amazon EKS Hybrid Nodes

Today, AWS announces the general availability of Amazon Elastic Kubernetes Service (Amazon EKS) Hybrid Nodes. With Amazon EKS Hybrid Nodes, you can use your on-premises and edge infrastructure as nodes in Amazon EKS clusters. Amazon EKS Hybrid Nodes unifies Kubernetes management across environments and offloads Kubernetes control plane management to AWS for your on-premises and edge applications.

You can now manage Kubernetes applications running on-premises and in edge environments to meet low-latency, local data processing, regulatory, or policy requirements using the same Amazon EKS clusters, features, and tools as applications running in AWS Cloud. Amazon EKS Hybrid Nodes works with any on-premises hardware or virtual machines, bringing the efficiency, scalability, and availability of Amazon EKS to wherever your applications need to run. You can use a wide range of Amazon EKS features with Amazon EKS Hybrid Nodes including Amazon EKS add-ons, EKS Pod Identity, cluster access management, cluster insights, and extended Kubernetes version support. Amazon EKS Hybrid Nodes is natively integrated with various AWS services including AWS Systems Manager, AWS IAM Roles Anywhere, Amazon Managed Service for Prometheus, Amazon CloudWatch, and Amazon GuardDuty for centralized monitoring, logging, and identity management.

Amazon EKS Hybrid Nodes is available in all AWS Regions, except the AWS GovCloud (US) Regions and the China Regions. Amazon EKS Hybrid Nodes is currently available for new Amazon EKS clusters. With Amazon EKS Hybrid Nodes, there are no upfront commitments or minimum fees, and you are charged per hour for the vCPU resources of your hybrid nodes when they are attached to your Amazon EKS clusters.

To get started and learn more about Amazon EKS Hybrid Nodes, see the Amazon EKS Hybrid Nodes User Guide, product webpage, pricing webpage, and AWS News Launch blog.
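As a hedged sketch of creating a hybrid-ready cluster with boto3; the role ARN, subnets, and CIDRs are placeholders, and the remoteNetworkConfig shape is an assumption based on this announcement:

    import boto3

    eks = boto3.client("eks", region_name="us-west-2")

    # New cluster that accepts hybrid nodes from an on-premises network
    # (remoteNetworkConfig shape assumed).
    eks.create_cluster(
        name="hybrid-cluster",
        roleArn="arn:aws:iam::123456789012:role/eksClusterRole",   # placeholder
        resourcesVpcConfig={"subnetIds": ["subnet-0123456789abcdef0",
                                          "subnet-0fedcba9876543210"]},
        accessConfig={"authenticationMode": "API_AND_CONFIG_MAP"},
        remoteNetworkConfig={
            "remoteNodeNetworks": [{"cidrs": ["10.100.0.0/16"]}],  # on-prem node CIDRs
            "remotePodNetworks": [{"cidrs": ["10.101.0.0/16"]}],   # on-prem pod CIDRs
        },
    )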

Read more


Announcing Amazon EKS Auto Mode

Today at re:Invent, AWS announced Amazon Elastic Kubernetes Service (Amazon EKS) Auto Mode, a new feature that fully automates compute, storage, and networking management for Kubernetes clusters. Amazon EKS Auto Mode simplifies running Kubernetes by offloading cluster operations to AWS, improves the performance and security of your applications, and helps optimize compute costs. 

You can use EKS Auto Mode to get Kubernetes conformant managed compute, networking, and storage for any new or existing EKS cluster. This makes it easier for you to leverage the security, scalability, availability, and efficiency of AWS for your Kubernetes applications. EKS Auto Mode removes the need for deep expertise, ongoing infrastructure management, or capacity planning by automatically selecting the best EC2 instances to run your application. It helps optimize compute costs while maintaining application availability by dynamically scaling EC2 instances based on demand. EKS Auto Mode provisions, operates, secures, and upgrades EC2 instances within your account using AWS-controlled access and lifecycle management. It handles OS patches and updates and limits security risks with ephemeral compute, which strengthens your security posture by default.

EKS Auto Mode is available today in all AWS Regions, except AWS GovCloud (US) and China Regions. You can enable EKS Auto Mode in any EKS cluster running Kubernetes 1.29 and above with no upfront fees or commitments—you pay for the management of the compute resources provisioned, in addition to your regular EC2 costs. 

To get started with EKS Auto Mode, use the EKS API, AWS Console, or your favorite infrastructure as code tooling to enable it in a new or existing EKS cluster. To learn more about EKS Auto Mode and how it can streamline your Kubernetes operations, visit the EKS Auto Mode feature page and see the AWS News launch blog.
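As a hedged sketch of enabling Auto Mode on an existing cluster with boto3; the cluster name and role ARN are placeholders, and the computeConfig, kubernetesNetworkConfig, and storageConfig shapes are assumptions based on this announcement:

    import boto3

    eks = boto3.client("eks", region_name="us-east-1")

    # Enable Auto Mode on an existing cluster (configuration shapes assumed).
    eks.update_cluster_config(
        name="my-cluster",   # placeholder
        computeConfig={
            "enabled": True,
            "nodePools": ["general-purpose", "system"],   # assumed built-in pools
            "nodeRoleArn": "arn:aws:iam::123456789012:role/eksAutoNodeRole",
        },
        kubernetesNetworkConfig={"elasticLoadBalancing": {"enabled": True}},
        storageConfig={"blockStorage": {"enabled": True}},
    )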

Read more


Amazon Application Recovery Controller zonal shift and zonal autoshift now support Amazon EKS in the GovCloud (US) Regions

Amazon Application Recovery Controller (ARC) zonal shift and zonal autoshift have expanded their capabilities and now support Amazon Elastic Kubernetes Service (Amazon EKS) in the GovCloud (US) Regions. ARC zonal shift helps customers quickly recover an unhealthy application in an Availability Zone (AZ), and reduce the duration and severity of impact to the application due to events such as power outages and hardware or software failures. ARC zonal autoshift safely and automatically shifts an application’s traffic away from an AZ when AWS identifies a potential failure affecting that AZ.

Amazon EKS customers can now shift traffic away from an AZ in the event of a failure. Zonal shift works with Amazon EKS by shifting in-cluster traffic to healthy AZs and ensuring Pods aren’t scheduled in the impaired AZ. You can enable EKS clusters for zonal shift using the EKS console or API.

There is no additional charge for using zonal shift or zonal autoshift. Amazon EKS support for zonal shift is now available in all commercial AWS Regions and the AWS GovCloud (US) Regions. To get started, read the documentation.
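As a hedged sketch with boto3; the cluster name is a placeholder, and the zonalShiftConfig field is an assumption based on this announcement:

    import boto3

    eks = boto3.client("eks", region_name="us-gov-west-1")

    # Enable zonal shift on the cluster (zonalShiftConfig field assumed);
    # shifts themselves are then started or automated through ARC.
    eks.update_cluster_config(
        name="my-cluster",                   # placeholder
        zonalShiftConfig={"enabled": True},
    )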
 

Read more


Amazon EKS managed node groups now support AWS Local Zones

Amazon Elastic Kubernetes Service (Amazon EKS) now supports using managed node groups for Kubernetes workloads running on AWS Local Zones. This enhancement allows you to leverage the node provisioning and lifecycle automation of EKS managed node groups for EC2 instances in Local Zones, bringing your Kubernetes applications closer to end-users for improved latency. With this update, you can simplify cluster operations and unify your Kubernetes practices across AWS Local Zones and Regions.

Amazon EKS managed node groups provide an easy-to-use abstraction on top of Amazon EC2 instances and Auto Scaling groups, enabling streamlined creation, upgrading, and termination of Kubernetes cluster nodes (EC2 instances). You can now create EKS managed node groups for AWS Local Zones in new or existing EKS clusters using the Amazon EKS APIs, AWS Management Console, or infrastructure-as-code tools such as AWS CloudFormation and Terraform. This feature comes at no additional cost – you only pay for the AWS resources you provision.

To learn more about using Amazon EKS managed node groups with AWS Local Zones, please consult the EKS documentation.

Read more


Amazon EKS enhances Kubernetes control plane monitoring

Amazon EKS enhances visibility into the Kubernetes control plane by offering new, intuitive dashboards in the EKS console and providing a broader set of Kubernetes control plane metrics. This enables cluster administrators to quickly detect, troubleshoot, and remediate issues. All EKS clusters on Kubernetes version 1.28 and above will now automatically display a curated set of dashboards visualizing key control plane metrics within the EKS console, making it easy to observe the health and performance of the control plane. Additionally, a broader set of control plane metrics is made available in Amazon CloudWatch and in a Prometheus endpoint, giving customers the flexibility to use their preferred monitoring solution, be it Amazon CloudWatch, Amazon Managed Service for Prometheus, or third-party monitoring tools.

Newly introduced pre-configured dashboards in the EKS console provide cluster administrators with visual representations of key control plane metrics, enabling rapid assessment of control plane health and performance. Additionally, the EKS console dashboards now integrate with Amazon CloudWatch Log Insights queries, surfacing critical insights from control plane logs directly within the console. Finally, customers now get access to Kubernetes control plane metrics from kube-scheduler and kube-controller-manager, in addition to the existing API server metrics.

The new set of dashboards and metrics are available at no additional charge in all AWS commercial regions and AWS GovCloud (US) Regions. To learn more, visit the launch blog post or EKS user guide.

Read more


Amazon EKS simplifies providing IAM permissions to EKS add-ons

Amazon Elastic Kubernetes Service (EKS) now offers a direct integration between EKS add-ons and EKS Pod Identity, streamlining the lifecycle management process for critical cluster operational software that needs to interact with AWS services outside the cluster.

EKS add-ons that integrate with underlying AWS resources need IAM permissions to interact with AWS services. EKS Pod Identities simplify how Kubernetes applications obtain AWS IAM permissions. With today’s launch, you can directly manage EKS Pod Identities using EKS add-ons operations through the EKS console, CLI, API, eksctl, and IaC tools like AWS CloudFormation, simplifying the use of Pod Identities for EKS add-ons. This integration expands the selection of Pod Identity compatible EKS add-ons from AWS and AWS Marketplace available for installation through the EKS console during cluster creation.

EKS add-ons integration with Pod Identities is generally available in all commercial AWS regions. To get started, see the EKS user guide.
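As a hedged sketch with boto3; the cluster, add-on, service account, and role are placeholders, and the podIdentityAssociations parameter follows this announcement:

    import boto3

    eks = boto3.client("eks", region_name="us-east-1")

    # Install an add-on and grant its service account IAM permissions through
    # Pod Identity in a single call.
    eks.create_addon(
        clusterName="my-cluster",            # placeholder
        addonName="aws-ebs-csi-driver",      # example Pod Identity compatible add-on
        podIdentityAssociations=[{
            "serviceAccount": "ebs-csi-controller-sa",
            "roleArn": "arn:aws:iam::123456789012:role/ebsCsiPodIdentityRole",
        }],
    )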

Read more


Easily troubleshoot Node.js applications with Amazon CloudWatch Application Signals

Today, AWS announces the general availability of Node.js application monitoring in Amazon CloudWatch Application Signals, an OpenTelemetry (OTel) compatible application performance monitoring (APM) feature in CloudWatch. Application Signals simplifies the process of automatically tracking application performance against key business or service level objectives (SLOs) for AWS applications. Service operators can access a pre-built, standardized dashboard for AWS application metrics through Application Signals.

Customers already use Application Signals to monitor their Java, Python, and .NET applications deployed on EKS, EC2, and other platforms. With this release, they can now easily onboard and troubleshoot issues in their Node.js applications with no additional code. Node.js application developers can quickly triage current operational health and determine whether their applications are meeting their longer-term performance goals. Customers can ensure high availability of their Node.js applications through Application Signals’ easy navigation flow, starting with an alert for a service level indicator (SLI) that has gone unhealthy and deep diving from there into an error or a spike in the auto-generated graphs for application metrics (latency, errors, requests). In a single-pane-of-glass view, they can correlate application metrics with traces, application logs, and infrastructure metrics to troubleshoot issues with their application in a few clicks.

Application Signals is available in all commercial AWS Regions except the CA West (Calgary) Region, Asia Pacific (Malaysia), the AWS GovCloud (US) Regions, and the China Regions. For pricing, see Amazon CloudWatch pricing.

To learn more, see the documentation to enable Amazon CloudWatch Application Signals for Amazon EKS, Amazon EC2, native Kubernetes, and custom instrumentation for other platforms.

Read more


amazon-elastic-block-store

Amazon EBS announces Time-based Copy for EBS Snapshots

Today, Amazon Elastic Block Store (Amazon EBS), a high-performance block storage service, announces the general availability of Time-based Copy. This new feature helps you meet your business and compliance requirements by ensuring that your EBS Snapshots are copied within and across AWS Regions within a specified timeframe.

Customers use EBS Snapshots to back up their EBS volumes, and copy them across multiple AWS Regions and accounts, for disaster recovery, data migration and compliance purposes. Time-based Copy gives you predictability when copying your snapshots across Regions. With this feature, you can specify a desired completion duration, ranging from 15 minutes to 48 hours, for individual copy requests, ensuring that your EBS Snapshots meet their duration requirements or Recovery Point Objectives (RPOs). You can now also monitor your Copy operations via EventBridge and the new SnapshotCopyBytesTransferred CloudWatch metric, available by default at a 1-minute frequency at no additional charge.

Amazon EBS Time-based Copy is available in all AWS commercial Regions and the AWS GovCloud (US) Regions, through the AWS Console, AWS Command Line Interface (CLI), and AWS SDKs. For pricing information, please visit the EBS pricing page. To learn more, see the technical documentation for Time-based Copy for Snapshots.
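As a hedged sketch with boto3, issued from the destination Region; the snapshot ID is a placeholder, and CompletionDurationMinutes follows this announcement (15 minutes to 48 hours):

    import boto3

    # Time-based copies are requested from the destination Region.
    ec2 = boto3.client("ec2", region_name="us-east-1")

    copy = ec2.copy_snapshot(
        SourceRegion="us-west-2",
        SourceSnapshotId="snap-0123456789abcdef0",   # placeholder
        CompletionDurationMinutes=360,               # must complete within 6 hours
        Description="Time-based copy for DR",
    )
    print(copy["SnapshotId"])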
 

Read more


Announcing customized delete protection for Amazon EBS Snapshots and EBS-backed AMIs

Customers can now further customize Recycle Bin rules to exclude EBS Snapshots and EBS-backed Amazon Machine Images (AMIs) based on tags. Customers use Recycle Bin to protect their resources from accidental deletion by retaining them for a period that they specify before the resources are permanently deleted. The newly launched feature helps customers save costs by limiting delete protection to critical data, while excluding non-critical data that does not require it.

Creating Region-level retention rules is a simple way to have peace of mind that all EBS Snapshots and EBS-backed AMIs in an AWS Region are protected from accidental deletion by Recycle Bin. However, in some cases customers have security scanning workflows that create temporary EBS Snapshots that are not used for recovery, or backup automation that does not require additional delete protection. The new resource exclusion tags in Recycle Bin can help you reduce storage costs by keeping resources that do not require deletion protection from moving to Recycle Bin.

This feature is now available in all AWS commercial Regions and the AWS GovCloud (US) Regions. Customers can add exclusion tags to their Recycle Bin rules via the EC2 Console, API/CLI, or SDKs.

To learn more about using Recycle Bin with exclusion tags, please refer to the technical documentation.

Read more


amazon-elastic-file-system

Amazon EFS now supports cross-account Replication

Amazon EFS now supports cross-account Replication, allowing customers to replicate file systems between AWS accounts. EFS Replication enables you to easily maintain an up-to-date replica of your file system in the AWS Region of your choice. With this launch, EFS Replication customers can meet business continuity, multi-account disaster recovery, and compliance requirements by automatically keeping replicas of their file data in separate accounts.

Customers often use multiple AWS accounts to help isolate and manage business applications and data for operational excellence, security, and reliability. Starting today, you can use EFS Replication to replicate your file system to another account in any AWS Region. This eliminates the need to set up custom processes to synchronize EFS data across accounts, enhancing resilience and reliability in distributed environments.

EFS cross-account Replication is available for all existing and new EFS file systems in all commercial AWS Regions. To learn more, visit the Amazon EFS Documentation and get started by configuring EFS Replication in just a few clicks using the Amazon EFS Console, AWS CLI, AWS CloudFormation, and APIs.
 

Read more


amazon-elastic-load-balancing

Cross-zone enabled Application Load Balancer now supports zonal shift and zonal autoshift

AWS Application Load Balancer (ALB) now supports Amazon Application Recovery Controller’s zonal shift and zonal autoshift features on load balancers with cross-zone load balancing enabled. Zonal shift allows you to quickly shift traffic away from an impaired Availability Zone (AZ) and recover from events such as bad application deployments and gray failures. Zonal autoshift safely and automatically shifts your traffic away from an AZ when AWS identifies potential impact to it.

Enabling cross-zone load balancing on ALBs is a popular configuration for customers that require an even distribution of traffic across application targets in multiple AZs. With this launch, customers can shift traffic away from an AZ in the event of a failure, just as they already can for cross-zone disabled load balancers. When zonal shift or autoshift is triggered, the ALB blocks all traffic to targets in the impacted AZ and removes the zonal IP address from DNS. You can configure this feature in two steps: first, enable the configuration that allows zonal shift to act on your load balancers using the ALB console or API; second, trigger a zonal shift or enable zonal autoshift for the chosen ALBs via the Amazon Application Recovery Controller console or API.

Zonal shift and zonal autoshift support on ALB is available in all commercial AWS Regions, including the AWS GovCloud (US) Regions. To learn more, please refer to the ALB zonal shift documentation.
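As a hedged sketch of both steps with boto3; the ARNs are placeholders, and the load balancer attribute key is an assumption based on this announcement:

    import boto3

    elbv2 = boto3.client("elbv2", region_name="us-east-1")
    arc = boto3.client("arc-zonal-shift", region_name="us-east-1")

    ALB_ARN = "arn:aws:elasticloadbalancing:us-east-1:123456789012:loadbalancer/app/my-alb/abc123"  # placeholder

    # Step 1: allow zonal shift to act on the load balancer (attribute key assumed).
    elbv2.modify_load_balancer_attributes(
        LoadBalancerArn=ALB_ARN,
        Attributes=[{"Key": "zonal_shift.config.enabled", "Value": "true"}],
    )

    # Step 2: shift traffic away from the impaired AZ via ARC.
    arc.start_zonal_shift(
        resourceIdentifier=ALB_ARN,
        awayFrom="use1-az2",
        expiresIn="1h",
        comment="gray failure in use1-az2",
    )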

Read more


AWS Application Load Balancer introduces header modification for enhanced traffic control and security

Application Load Balancer (ALB) now supports HTTP request and response header modification, giving you greater control over your application’s traffic and security posture without having to alter your application code.

This feature introduces three key capabilities: renaming specific load balancer generated headers, inserting specific response headers, and disabling the server response header. With header rename, you can now rename all ALB-generated Transport Layer Security (TLS) headers that the load balancer adds to requests, which include the six mTLS headers and two TLS headers (version and cipher). This capability enables seamless integration with existing applications that expect headers in a specific format, thereby minimizing changes to your backends while using TLS features on the ALB. With header insertion, you can insert custom headers related to Cross-Origin Resource Sharing (CORS) and critical security headers like HTTP Strict-Transport-Security (HSTS). Finally, the capability to disable the ALB-generated “Server” header in responses reduces exposure of server-specific information, adding an extra layer of protection to your application. These response header modification features give you the ability to centrally enforce your organization’s security posture at the load balancer, instead of enforcing it at individual applications, which can be prone to errors.

You can configure Header Modification feature using AWS APIs, AWS CLI, or the AWS Management Console. This feature is available for ALBs in all commercial AWS Regions, AWS GovCloud (US) Regions and China Regions. To learn more, refer to the ALB documentation.
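A sketch of configuring this through listener attributes with boto3; this assumes header modification is exposed as listener attributes, and the specific attribute keys below are assumptions, not confirmed names:

    import boto3

    elbv2 = boto3.client("elbv2", region_name="us-east-1")

    # Header modification configured per listener; the attribute keys below
    # are assumptions, not confirmed names.
    elbv2.modify_listener_attributes(
        ListenerArn="arn:aws:elasticloadbalancing:us-east-1:123456789012:listener/app/my-alb/abc/def",  # placeholder
        Attributes=[
            # Drop the ALB-generated Server header from responses (assumed key).
            {"Key": "routing.http.response.server.enabled", "Value": "false"},
            # Insert an HSTS response header (assumed key).
            {"Key": "routing.http.response.strict_transport_security.header_value",
             "Value": "max-age=31536000; includeSubDomains"},
        ],
    )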
 

Read more


Load Balancer Capacity Unit Reservation for Application and Network Load Balancers

Application Load Balancer (ALB) and Network Load Balancer (NLB) now support Load Balancer Capacity Unit (LCU) Reservation, which allows you to proactively set a minimum capacity for your load balancer, complementing its existing ability to auto-scale based on your traffic pattern.

With this feature, you can prepare for anticipated traffic surges by reserving a guaranteed minimum capacity in advance, providing customers increased scale and availability during high-demand events. LCU Reservation is ideal for scenarios such as event ticket sales, new product launches, or release of popular content. When using this feature, you pay only for the reserved LCUs and any additional usage above the reservation. You can easily configure this feature through the ELB console or API.

The feature is available for ALB in all commercial AWS Regions, including the AWS GovCloud (US) Regions and NLB in US East (N. Virginia), US East (Ohio), US West (Oregon), Asia Pacific (Hong Kong), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Europe (Frankfurt), Europe (Ireland), and Europe (Stockholm). To learn more, please refer to the ALB Documentation and NLB Documentation.
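A hedged sketch of reserving minimum capacity with boto3; the ARN and capacity value are placeholders, and the ModifyCapacityReservation call shape is an assumption based on this announcement:

    import boto3

    elbv2 = boto3.client("elbv2", region_name="us-east-1")

    # Reserve a minimum of 500 LCUs ahead of a high-demand event
    # (call name and shape assumed).
    elbv2.modify_capacity_reservation(
        LoadBalancerArn="arn:aws:elasticloadbalancing:us-east-1:123456789012:loadbalancer/app/my-alb/abc123",  # placeholder
        MinimumLoadBalancerCapacity={"CapacityUnits": 500},
    )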

Read more


amazon-elastic-vmware-service

Announcing Amazon Elastic VMware Service (Preview)

Today, AWS announces the preview of Amazon Elastic VMware Service (Amazon EVS). Amazon EVS is a new, native AWS service to run VMware Cloud Foundation (VCF) within your Amazon Virtual Private Cloud (Amazon VPC). 

Amazon EVS automates and simplifies deployments and provides a ready-to-use VMware Cloud Foundation (VCF) environment on AWS. This allows you to quickly migrate VMware-based virtual machines to AWS using the same VCF software and tools you already use in your on-premises environment. 

With Amazon EVS, you can now take advantage of the scale, resilience, and performance of AWS together with familiar VCF software and tools. You have the choice to self-manage or leverage AWS Partners to manage and operate your EVS deployments. With this, you keep complete control over your VMware architecture and can optimize your deployments to meet the unique demands of your applications. Amazon EVS provides the fastest path to migrate and operate VMware workloads on AWS.

Amazon EVS is currently available in preview for pre-selected customers and partners. To learn more about Amazon EVS and how it can help accelerate your VMware workload migration to AWS, visit the Amazon EVS product page or contact us.

Read more


amazon-elasticache

Valkey GLIDE 1.2 adds new features from Valkey 8.0, including AZ awareness

AWS adds support for Availability Zone (AZ) awareness in the open-source Valkey General Language Independent Driver for Enterprise (GLIDE) client library. Valkey GLIDE is a reliable, high-performance, and highly available client, and it’s pre-configured with best practices from over a decade of operating Amazon ElastiCache. Valkey GLIDE is compatible with versions 7.2 and 8.0 of Valkey, as well as versions 6.2, 7.0, and 7.2 of Redis OSS. With this update, Valkey GLIDE will direct requests to Valkey nodes within the same Availability Zone, minimizing cross-zone traffic and reducing response time. Java, Python, and Node.js are the currently supported languages for Valkey GLIDE, with further languages in development.

With this update, Valkey GLIDE 1.2 also supports Amazon ElastiCache and Amazon MemoryDB’s JavaScript Object Notation (JSON) data type, allowing customers to store and access JSON data within their clusters. In addition, it supports MemoryDB’s Vector Similarity Search, empowering customers to store, index, and search vectors for AI applications at single-digit millisecond speed.

Valkey GLIDE is open-source, uses the Apache 2.0 license, and works with any Valkey or Redis OSS datastore, including Amazon ElastiCache and Amazon MemoryDB. Learn more about it in this blog post and submit contributions to the Valkey GLIDE GitHub repository.
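
A minimal sketch of AZ-aware reads with the GLIDE Python client follows; the ReadFrom.AZ_AFFINITY setting and the client_az parameter are assumptions based on the GLIDE 1.2 release, so confirm the names in the Valkey GLIDE GitHub repository.

```python
# Sketch: AZ-affinity reads with Valkey GLIDE for Python (names assumed from
# the GLIDE 1.2 release; check the GLIDE repository for the exact API).
import asyncio

from glide import (
    GlideClusterClient,
    GlideClusterClientConfiguration,
    NodeAddress,
    ReadFrom,
)

async def main():
    config = GlideClusterClientConfiguration(
        addresses=[NodeAddress("my-cluster.xxxxxx.use1.cache.amazonaws.com", 6379)],
        read_from=ReadFrom.AZ_AFFINITY,  # prefer replicas in the client's own AZ
        client_az="us-east-1a",          # the AZ this client runs in
    )
    client = await GlideClusterClient.create(config)
    print(await client.get("my-key"))

asyncio.run(main())
```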

Read more


Amazon ElastiCache version 8.0 for Valkey brings faster scaling and improved memory efficiency

Today, Amazon ElastiCache introduces support for Valkey 8.0, the latest Valkey major version. This release brings faster scaling for ElastiCache Serverless for Valkey and improved memory efficiency on node-based ElastiCache, compared to previous versions of ElastiCache for Valkey and Redis OSS. Valkey is an open-source, high-performance key-value datastore stewarded by the Linux Foundation and is a drop-in replacement for Redis OSS. Backed by over 40 companies, Valkey has seen rapid adoption since its inception in March 2024.

Hundreds of thousands of customers use ElastiCache to scale their applications, improve performance, and optimize costs. ElastiCache Serverless version 8.0 for Valkey scales to 5 million requests per second (RPS) per cache in minutes, up to 5x faster than Valkey 7.2, with microsecond read latency. With node-based ElastiCache, you can benefit from improved memory efficiency, with 32 bytes less memory per key compared to ElastiCache version 7.2 for Valkey and ElastiCache for Redis OSS. AWS has made significant contributions to open source Valkey in the areas of performance, scalability, and memory optimizations, and we are bringing these benefits into ElastiCache version 8.0 for Valkey.

ElastiCache version 8.0 for Valkey is now available in all AWS regions. You can upgrade from ElastiCache version 7.2 for Valkey or any ElastiCache for Redis OSS version to ElastiCache version 8.0 for Valkey in a few clicks without downtime. You can get started using the AWS Management Console, Software Development Kit (SDK), or Command Line Interface (CLI). For more information, please visit the ElastiCache features page, blog and documentation.
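
For example, an in-place upgrade of a node-based cluster might look like the following Boto3 sketch; the Engine and EngineVersion values are assumptions drawn from the announcement, so verify them against the ElastiCache API reference before applying.

```python
# Sketch: upgrading an existing replication group to Valkey 8.0 in place.
# Engine/EngineVersion values are assumptions -- verify before applying.
import boto3

elasticache = boto3.client("elasticache")

elasticache.modify_replication_group(
    ReplicationGroupId="my-cache-group",
    Engine="valkey",        # cross-engine upgrade from Redis OSS to Valkey
    EngineVersion="8.0",    # target Valkey major version
    ApplyImmediately=True,
)
```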

Read more


amazon-emr

Introducing Advanced Scaling in Amazon EMR Managed Scaling

We are excited to announce Advanced Scaling, a new capability in Amazon EMR Managed Scaling which provides customers increased flexibility to control the performance and resource utilization of their Amazon EMR on EC2 clusters. With Advanced Scaling, customers can configure the desired resource utilization or performance levels for their cluster, and Amazon EMR Managed Scaling will use the customer's intent to intelligently scale the cluster and optimize cluster compute resources.

Customers appreciate the simplicity of Amazon EMR Managed Scaling. However, there are instances where the default Amazon EMR Managed Scaling algorithm can lead to cluster under-utilization for specific workloads. For instance, for clusters running many tasks of relatively short duration (task runtimes of 10 seconds or less), Amazon EMR Managed Scaling by default scales up the cluster aggressively and scales it down conservatively to avoid negatively impacting job run times. While this is the right approach for SLA-sensitive workloads, it might not be optimal for cost-sensitive workloads. With Advanced Scaling, customers can now configure Amazon EMR Managed Scaling behavior suitable for their workload type, and we will apply tailored optimizations to intelligently add or remove nodes from the clusters.

To get started with Advanced Scaling, you can set the ScalingStrategy and UtilizationPerformanceIndex parameters either when creating a new Managed Scaling policy, or updating an existing Managed Scaling policy. Advanced Scaling is available with Amazon EMR release 7.0 and later and is available in all regions where Amazon EMR Managed Scaling is available. For more details, please refer to our Advanced Scaling documentation.
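
A minimal Boto3 sketch follows, assuming the ScalingStrategy and UtilizationPerformanceIndex fields named above sit on the managed scaling policy; the field placement and value ranges are assumptions, so consult the Advanced Scaling documentation.

```python
# Sketch: enabling Advanced Scaling on an EMR on EC2 cluster. Field placement
# and value ranges are assumptions -- see the Advanced Scaling documentation.
import boto3

emr = boto3.client("emr")

emr.put_managed_scaling_policy(
    ClusterId="j-XXXXXXXXXXXXX",
    ManagedScalingPolicy={
        "ComputeLimits": {
            "UnitType": "Instances",
            "MinimumCapacityUnits": 2,
            "MaximumCapacityUnits": 100,
        },
        "ScalingStrategy": "ADVANCED",
        "UtilizationPerformanceIndex": 50,  # desired utilization/performance balance
    },
)
```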

Read more


Announcing Amazon EMR 7.4 Release

Today, we are excited to announce the general availability of Amazon EMR 7.4. Amazon EMR 7.4 supports Apache Spark 3.5.2, Apache Hadoop 3.4.0, Trino 446, Apache HBase 2.5.5, Apache Phoenix 5.2.0, Apache Flink 1.19.0, Presto 0.287 and Apache Zookeeper 3.9.2.

Amazon EMR 7.4 enables in-transit encryption for 7 additional endpoints used with distributed applications like Apache Livy, Hue, JupyterEnterpriseGateway, Apache Ranger, and Apache Zookeeper. This update builds on the previous release, Amazon EMR 7.3, which enabled in-transit encryption for 22 endpoints. In-transit encryption enables you to run workloads that meet strict regulatory or compliance requirements by protecting the confidentiality and integrity of your data.

Amazon EMR 7.4 is now available in all regions where Amazon EMR is available. To learn how to enable in-transit encryption for your Amazon EMR clusters, view the TLS documentation. See Regional Availability of Amazon EMR and our release notes for more detailed information.

Read more


amazon-eventBridge

Amazon EventBridge and AWS Step Functions announce integration with private APIs

Amazon EventBridge and AWS Step Functions now support integration with private APIs powered by AWS PrivateLink and Amazon VPC Lattice, making it easier for customers to accelerate innovation and simplify modernization of distributed applications across public and private networks, both on-premises and in the cloud. This allows customers to bring the capabilities of AWS cloud to new and existing workloads, achieving higher performance, agility, and lower costs.

Enterprises across industries are modernizing their applications to drive growth, reduce costs, and foster innovation. However, integrating applications across siloed VPCs and on-premises environments can be challenging, often requiring custom code and complex configurations. With fully-managed connectivity to private HTTPS-based APIs, customers can now securely integrate their legacy systems with cloud-native applications using event-driven architectures and workflow orchestration, allowing them to accelerate their innovation on AWS while driving higher security and regulatory compliance. These advancements allow customers to achieve faster time to market by eliminating the need to write and maintain custom networking or integration code, enabling developers to build extensible systems and add new capabilities easily.

Integration with private APIs in Amazon EventBridge and AWS Step Functions is now generally available in Asia Pacific (Hong Kong), Asia Pacific (Mumbai), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Europe (Frankfurt), Europe (Ireland), Europe (London), Europe (Paris), Europe (Stockholm), South America (Sao Paulo), US East (N. Virginia), US East (Ohio), US West (N. California), and US West (Oregon). You can start using private APIs with Amazon EventBridge and AWS Step Functions from the AWS Management Console or using the AWS CLI and SDK. To learn more, please read the launch blog, Amazon EventBridge user guide and AWS Step Functions documentation.
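
On the EventBridge side, the wiring looks roughly like the following Boto3 sketch: a connection that reaches the API through a VPC Lattice resource configuration, plus an API destination that rules can target. The parameter shapes are assumptions based on the launch materials; verify them in the EventBridge API reference.

```python
# Sketch: an EventBridge connection that invokes a private API over
# PrivateLink/VPC Lattice. Parameter shapes are assumptions -- verify
# against the EventBridge API reference.
import boto3

events = boto3.client("events")

conn = events.create_connection(
    Name="private-api-connection",
    AuthorizationType="API_KEY",
    AuthParameters={
        "ApiKeyAuthParameters": {"ApiKeyName": "x-api-key", "ApiKeyValue": "example-key"}
    },
    # Reach the endpoint through a VPC Lattice resource configuration
    # instead of the public internet (assumed field name)
    InvocationConnectivityParameters={
        "ResourceParameters": {
            "ResourceConfigurationArn": "arn:aws:vpc-lattice:us-east-1:123456789012:resourceconfiguration/rcfg-0123456789abcdef0"
        }
    },
)

events.create_api_destination(
    Name="private-api-destination",
    ConnectionArn=conn["ConnectionArn"],
    InvocationEndpoint="https://internal-api.example.corp/orders",
    HttpMethod="POST",
)
```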

Read more


AWS End User Messaging announces integration with Amazon EventBridge

Today, AWS End User Messaging announces an integration with Amazon EventBridge. AWS End User Messaging provides developers with a scalable and cost-effective messaging infrastructure without compromising the safety, security, or results of their communications.

Now your SMS, MMS, and voice delivery events, which contain information like message status, price, and carrier, are available in EventBridge. You can then send your SMS events to other AWS services and the many SaaS applications that EventBridge integrates with. EventBridge also allows you to create rules that filter and route your SMS events to event destinations you specify.

To learn more, visit the AWS End User Messaging SMS User Guide.
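
For example, a rule that routes SMS delivery events to a queue might look like the following sketch; the event source and event type strings are assumptions, so confirm them in the AWS End User Messaging SMS User Guide.

```python
# Sketch: routing SMS delivery events to an SQS queue with an EventBridge rule.
# The "aws.sms-voice" source and the eventType values are assumptions.
import json

import boto3

events = boto3.client("events")

events.put_rule(
    Name="sms-delivery-events",
    EventPattern=json.dumps({
        "source": ["aws.sms-voice"],  # assumed source for End User Messaging
        "detail": {"eventType": ["TEXT_DELIVERED", "TEXT_FAILURE"]},  # assumed values
    }),
)

events.put_targets(
    Rule="sms-delivery-events",
    Targets=[{"Id": "sms-events-queue",
              "Arn": "arn:aws:sqs:us-east-1:123456789012:sms-events"}],
)
```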

Read more


Amazon EventBridge event delivery latency metric now in the AWS GovCloud (US) Regions

The Amazon EventBridge Event Bus end-to-end event delivery latency metric in Amazon CloudWatch, which tracks the duration between event ingestion and successful delivery to the targets on your event bus, is now available in the AWS GovCloud (US-East) and AWS GovCloud (US-West) Regions. This new IngestionToInvocationSuccessLatency metric allows you to detect and respond to event processing delays caused by under-performing, under-scaled, or unresponsive targets.

Amazon EventBridge Event Bus is a serverless event router that enables you to create highly scalable event-driven applications by routing events between your own applications, third-party SaaS applications, and other AWS services. You can set up rules to determine where to send your events, allowing for applications to react to changes in your systems as they occur. With the new IngestionToInvocationSuccessLatency metric you can now better monitor and understand event delivery latency to your targets, increasing the observability of your event-driven architecture.

To learn more about the new IngestionToInvocationSuccessLatency metric for Amazon EventBridge Event Buses, please read our blog post and documentation.
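
For example, you could alarm on the new metric with a sketch like the following; the AWS/Events namespace and metric name come from the announcement, while the EventBusName dimension is an assumption to verify in the EventBridge monitoring documentation.

```python
# Sketch: alarming when end-to-end event delivery latency degrades.
# The EventBusName dimension is an assumption -- verify in the docs.
import boto3

cloudwatch = boto3.client("cloudwatch", region_name="us-gov-west-1")

cloudwatch.put_metric_alarm(
    AlarmName="event-bus-delivery-latency",
    Namespace="AWS/Events",
    MetricName="IngestionToInvocationSuccessLatency",
    Dimensions=[{"Name": "EventBusName", "Value": "default"}],
    Statistic="Average",
    Period=300,
    EvaluationPeriods=3,
    Threshold=5000,  # milliseconds
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws-us-gov:sns:us-gov-west-1:123456789012:ops-alerts"],
)
```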

Read more


Amazon EventBridge announces up to 94% improvement in end-to-end latency for Event Buses

Amazon EventBridge announces an up to 94% improvement in end-to-end latency for Event Buses since January 2023, enabling you to handle highly latency-sensitive applications, including fraud detection and prevention, industrial automation, and gaming applications. End-to-end latency is measured as the time from event ingestion to the first event invocation attempt. This lower latency enables you to build highly responsive and efficient event-driven architectures for your time-sensitive applications. You can now detect and respond to critical events more quickly, enabling rapid innovation, faster decision-making, and improved operational efficiency.

For latency-sensitive, mission-critical applications, even small delays can have a big impact. To address this, Amazon EventBridge Event Bus has significantly reduced its P99 latency, from 2235.23 ms measured in January 2023 to just 129.33 ms measured in August 2024. This significant improvement in latency allows EventBridge to deliver events in real time to your mission-critical applications.

Amazon EventBridge Event Bus' lower latency is applied by default across all AWS Regions where Amazon EventBridge is available, including the AWS GovCloud (US) Regions, at no additional cost to you. Customers can monitor these improvements through the IngestionToInvocationStartLatency or the end-to-end IngestionToInvocationSuccessLatency metrics, available in the EventBridge console dashboard or via Amazon CloudWatch. This benefits customers globally and ensures consistent low-latency event processing regardless of geographic location.

For more information on Amazon EventBridge Event Bus, please visit our documentation. To get started with Amazon EventBridge, visit the AWS Console and follow these instructions from the user guide.

Read more


amazon-fsx-for-lustre

Amazon FSx for Lustre now supports Elastic Fabric Adapter and NVIDIA GPUDirect Storage

Amazon FSx for Lustre, a service that provides high-performance, cost-effective, and scalable file storage for compute workloads, now supports Elastic Fabric Adapter (EFA) and NVIDIA GPUDirect Storage (GDS). With this launch, Amazon FSx for Lustre now provides the fastest storage performance for GPU instances in the cloud, delivering up to 12x higher throughput per client instance (1200 Gbps) compared to previous FSx for Lustre systems, so you can complete machine learning training jobs faster and reduce workload costs.

EFA improves workload performance by using the AWS Scalable Reliable Datagram (SRD) protocol to increase network throughput utilization and by bypassing the operating system during data transfer. For applications powered by high-performance computing instances such as Trn1 and Hpc7a, you can use EFA to achieve higher throughput per client instance. GDS support builds on EFA to further enhance performance by enabling direct data transfer between the file system and the GPU memory. This direct path eliminates memory copies and CPU involvement in data transfer operations. With the combination of EFA and GDS support, applications using P5 GPU instances and NVIDIA Compute Unified Device Architecture (CUDA) can achieve up to 12x higher throughput (up to 1200 Gbps) per client instance.

EFA and GDS support is available at no additional cost on new FSx for Lustre Persistent-2 file systems in all commercial AWS Regions where Persistent-2 file systems are available. For more information about this new feature, see the Amazon FSx for Lustre documentation and the AWS News Blog, Amazon FSx for Lustre increases throughput to GPU instances by up to 12x.
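
As a hedged sketch, enabling EFA at file system creation might look like the following; the EfaEnabled flag and its prerequisites (deployment type, Lustre version, and client instance support) are assumptions to verify in the CreateFileSystem API reference.

```python
# Sketch: creating a Persistent-2 FSx for Lustre file system with EFA enabled.
# The EfaEnabled flag and required companion settings are assumptions.
import boto3

fsx = boto3.client("fsx")

fsx.create_file_system(
    FileSystemType="LUSTRE",
    FileSystemTypeVersion="2.15",
    StorageCapacity=19200,  # GiB
    SubnetIds=["subnet-0123456789abcdef0"],
    LustreConfiguration={
        "DeploymentType": "PERSISTENT_2",
        "PerUnitStorageThroughput": 1000,  # MB/s per TiB
        "EfaEnabled": True,  # enables EFA (and GDS on supported GPU clients)
    },
)
```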

Read more


The next generation of Amazon FSx for Lustre file systems is now available in US West (N. California)

You can now create the next generation Amazon FSx for Lustre file systems in the US West (N. California) AWS Region.

The next generation of Amazon FSx for Lustre file systems is built on AWS Graviton processors and provides higher throughput per terabyte (up to 1 GB/s per terabyte) and lower cost of throughput compared to previous generation file systems. Using the next generation of FSx for Lustre file systems, you can accelerate execution of machine learning, high-performance computing, media & entertainment, and financial simulations workloads while reducing your cost of storage.

For more information, please visit the Amazon FSx for Lustre product page, and see the AWS region table for complete regional availability information.

Read more


amazon-fsx-for-openzfs

Announcing Amazon FSx Intelligent-Tiering, a new storage class for FSx

Today, AWS announces the general availability of Amazon FSx Intelligent-Tiering, a new storage class for Amazon FSx that costs up to 85% less than the FSx SSD storage class and up to 20% less than traditional HDD-based NAS storage on premises, and that brings full elasticity and intelligent tiering to network-attached storage (NAS). The new storage class is available today on Amazon FSx for OpenZFS.

Using Amazon FSx, customers can launch and run fully managed cloud file systems that have familiar NAS capabilities such as point-in-time snapshots, data clones, and user quotas. Before today, customers have been moving NAS data sets for mission-critical and performance-intensive workloads to FSx for OpenZFS, using the existing SSD storage class for predictable high performance. With the new FSx Intelligent-Tiering storage class, customers can now bring to FSx for OpenZFS a broad range of general-purpose data sets, including those with a large proportion of infrequently accessed data stored on low-cost HDD on premises. With FSx Intelligent-Tiering, customers no longer need to provision or manage storage, and they get automatic storage cost optimization as data access patterns change. There are no upfront costs or commitments to use the storage class, and customers pay only for the resources used.

FSx Intelligent-Tiering can be used when creating a new FSx for OpenZFS file system in the following AWS Regions: US East (N. Virginia, Ohio), US West (Oregon), Canada (Central), Europe (Frankfurt, Ireland), and Asia Pacific (Mumbai, Singapore, Sydney, Tokyo).

For more information about this feature, visit the FSx for OpenZFS documentation page.
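
A rough sketch of creating a file system on the new storage class follows; the INTELLIGENT_TIERING enum value and required fields are assumptions, and capacity-related settings differ from SSD file systems because the storage class is elastic, so verify against the FSx for OpenZFS documentation.

```python
# Sketch: creating an FSx for OpenZFS file system on Intelligent-Tiering.
# Enum values and required fields are assumptions -- verify in the docs.
import boto3

fsx = boto3.client("fsx")

fsx.create_file_system(
    FileSystemType="OPENZFS",
    StorageType="INTELLIGENT_TIERING",  # new storage class (assumed enum value)
    SubnetIds=["subnet-0123456789abcdef0"],
    OpenZFSConfiguration={
        "DeploymentType": "MULTI_AZ_1",
        "ThroughputCapacity": 1280,  # MB/s
    },
)
```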

Read more


amazon-gamelift

Amazon GameLift adds containers for faster dev iteration and simplified management

We are excited to announce that Amazon GameLift now supports containers for building, deploying, and running game server packages. Amazon GameLift is a fully managed service that allows developers to quickly manage and scale dedicated game servers for multiplayer games. With this new capability, Amazon GameLift supports end-to-end development of containerized workloads, including deployment and scaling on premises, in the cloud, or in hybrid configurations. This reduces the time it takes to deploy a new version to approximately 5 minutes, makes production updates faster, and removes the need to host separate customized development environments for quick iteration.

Containers package the entire runtime environment needed to run game servers, including code, dependencies, and configuration files. This allows developers to seamlessly move game server builds between local machines, staging, and production deployments without worrying about missing dependencies or configuration drift. Containers also enable efficient resource utilization by running multiple isolated game servers on the same host machine. Overall, containerization simplifies deployment, ensures consistent and secure environments, and optimizes resource usage for game servers. Containers integrate with AWS Graviton instances and Spot Instances, and run games designed for containerized environments, including those built with popular game engines like Unreal and Unity.

Amazon GameLift managed containers support is now generally available in all Amazon GameLift regions except AWS China. To get started with Amazon GameLift managed containers, visit the Amazon GameLift managed containers documentation.

Read more


amazon-guardduty

Amazon GuardDuty introduces GuardDuty Extended Threat Detection

Today, Amazon Web Services (AWS) announces the general availability of Amazon GuardDuty Extended Threat Detection. This new capability allows you to identify sophisticated, multi-stage attacks targeting your AWS accounts, workloads, and data. You can now use new attack sequence findings that cover multiple resources and data sources over an extensive time period, allowing you to spend less time on first-level analysis and more time responding to critical severity threats to minimize business impact.

GuardDuty Extended Threat Detection uses artificial intelligence and machine learning algorithms trained at AWS scale and automatically correlates security signals from across AWS services to detect critical threats. This capability allows for the identification of attack sequences, such as credential compromise followed by data exfiltration, and represents them as a single, critical-severity finding. The finding includes an incident summary, a detailed events timeline, mapping to MITRE ATT&CK® tactics and techniques, and remediation recommendations.

GuardDuty Extended Threat Detection is available in all AWS commercial Regions where GuardDuty is available. This new capability is automatically enabled for all new and existing GuardDuty customers at no additional cost. You do not need to enable all GuardDuty protection plans. However, enabling additional protection plans will increase the breadth of security signals, allowing for more comprehensive threat analysis and coverage of attack scenarios. You can take action on findings directly from the GuardDuty console or via its integrations with AWS Security Hub and Amazon EventBridge.
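
For example, the following sketch forwards critical-severity findings, including attack sequence findings, to an SNS topic via EventBridge; the numeric cutoff used for critical severity is an assumption to confirm in the GuardDuty documentation.

```python
# Sketch: forwarding critical-severity GuardDuty findings to an SNS topic.
# The numeric severity cutoff for "critical" is an assumption.
import json

import boto3

events = boto3.client("events")

events.put_rule(
    Name="guardduty-critical-findings",
    EventPattern=json.dumps({
        "source": ["aws.guardduty"],
        "detail-type": ["GuardDuty Finding"],
        "detail": {"severity": [{"numeric": [">=", 9]}]},  # assumed critical cutoff
    }),
)

events.put_targets(
    Rule="guardduty-critical-findings",
    Targets=[{"Id": "security-alerts",
              "Arn": "arn:aws:sns:us-east-1:123456789012:security-alerts"}],
)
```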

To get started, visit the Amazon GuardDuty product page or try GuardDuty free for 30 days on the AWS Free Tier.

Read more


AWS announces AWS Security Incident Response for general availability

Today, AWS announces the general availability of AWS Security Incident Response, a new service that helps you prepare for, respond to, and recover from security events. This service offers automated monitoring and investigation of security findings to free up your resources from routine tasks, communication and collaboration features to streamline response coordination, and direct 24/7 access to the AWS Customer Incident Response Team (CIRT).

Security Incident Response integrates with existing detection services, such as Amazon GuardDuty, and third-party tools through AWS Security Hub to rapidly review security alerts, escalate high-priority findings, and, with your permission, implement containment actions. It reduces the number of alerts your team needs to analyze, saving time and allowing your security personnel to focus on strategic initiatives. The service centralizes all incident-related communications, documentation, and actions, making coordinated incident response across internal and external stakeholders possible and reducing the time to coordinate from hours to minutes. You can preconfigure incident response team members, set up automatic notifications, manage case permissions, and use communication tools like video conferencing and in-console messaging during security events. By accessing the service through a single, centralized dashboard in the AWS Management Console, you can monitor active cases, review resolved security incident cases, and track key metrics, such as the number of triaged events and mean time to resolution, in real time. If you require specialized expertise, you can connect 24/7 to the AWS CIRT in only one step.

For more information about AWS Regions where Security Incident Response is available, refer to the following service documentation.

To get started, visit the Security Incident Response console, and explore the overview page to learn more. For configuration details, refer to the Security Incident Response User Guide.

Read more


amazon-ivs

Amazon IVS introduces Multitrack Video to save input costs

Today we are launching Multitrack Video, a new capability in Amazon Interactive Video Service (Amazon IVS) which can save you up to 75% on live video input costs with standard channels. With Multitrack Video, you send multiple video quality renditions directly from your own device instead of using Amazon IVS for transcoding.

Multitrack Video is supported in OBS Studio. Once you enable Multitrack Video on your IVS channels, your broadcasters can simply check a box in OBS to automatically send an optimal set of video qualities based on their hardware and network capabilities. This enables viewers to watch in the best quality for their connection, while you pay $0.50 an hour for standard channel input compared to $2.00 an hour without Multitrack Video. For more pricing information, visit the Amazon IVS pricing page.
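
Enabling multitrack on a channel might look like the following sketch; the containerFormat and multitrackInputConfiguration fields follow the IVS API at launch but should be treated as assumptions and verified in the IVS API reference.

```python
# Sketch: creating a standard IVS channel with Multitrack Video enabled.
# Field names and enum values are assumptions -- verify in the IVS API docs.
import boto3

ivs = boto3.client("ivs")

ivs.create_channel(
    name="my-multitrack-channel",
    type="STANDARD",
    containerFormat="FRAGMENTED_MP4",  # required for multitrack (assumed)
    multitrackInputConfiguration={
        "enabled": True,
        "maximumResolution": "FULL_HD",  # cap on ingested renditions
        "policy": "ALLOW",               # allow (not require) multitrack input
    },
)
```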

Amazon IVS is a managed live streaming solution that is designed to make low-latency or real-time video available to viewers around the world. Video ingest and delivery are available over a managed network of infrastructure optimized for live video. Visit the AWS region table for a full list of AWS Regions where the Amazon IVS console and APIs for control and creation of video streams are available.

To get started, see the Multitrack Video documentation.

Read more


amazon-kendra

Announcing GenAI Index in Amazon Kendra

Amazon Kendra is an AI-powered search service enabling organizations to build intelligent search experiences and retrieval augmented generation (RAG) systems to power generative AI applications. Starting today, AWS customers can use a new index - the GenAI Index for RAG and intelligent search. With the Kendra GenAI Index, customers get high out-of-the-box search accuracy powered by the latest information retrieval technologies and semantic models.

Kendra GenAI Index supports mobility across AWS generative AI services like Amazon Bedrock Knowledge Base and Amazon Q Business, giving customers the flexibility to use their indexed content across different use cases. It is available as a managed retriever in Bedrock Knowledge Bases, enabling customers to create a Knowledge Base powered by the Kendra GenAI Index. Customers can also integrate such Knowledge Bases with other Bedrock Services like Guardrails, Prompt Flows, and Agents to build advanced generative AI applications. The GenAI Index supports connectors for 43 different data sources, enabling customers to easily ingest content from a variety of sources.

Kendra GenAI Index is available in the US East (N. Virginia) and US West (Oregon) regions.

To learn more, see Kendra GenAI Index in the Amazon Kendra Developer Guide. For pricing, please refer to Kendra pricing page.

Read more


amazon-keyspaces

Amazon Keyspaces (for Apache Cassandra) now supports adding Regions to existing Keyspaces

Amazon Keyspaces (for Apache Cassandra) is a scalable, serverless, highly available, and fully managed Apache Cassandra-compatible database service that offers 99.999% availability.

Today, Amazon Keyspaces added the capability to add Regions to existing Keyspaces. With this launch, you can convert an existing single-Region Keyspace to a multi-Region Keyspace or add a new Region to an existing multi-Region Keyspace without recreating the existing Keyspaces. As your application traffic and business needs evolve over time, you can easily add new Regions closest to your application to achieve lower read and write latencies. You can also improve the availability and resiliency of your workloads by adding Regions. Keyspaces fully manages all aspects of creating a new Region and populating it with the latest data from other Regions, enabling you to focus your resources on adding value for your customers rather than managing operational tasks. You can still perform read and write operations on your tables in the existing Region during the addition of a new Region. With this capability, you get the flexibility and ease to manage the regional footprint of your application based on your changing needs.

Support for adding Regions to existing Keyspaces is available in all AWS Regions where Amazon Keyspaces offers multi-Region Replication. For more information on multi-Region Replication, see documentation. If you’re new to Amazon Keyspaces, the Getting Started guide shows you how to provision a Keyspace and explore the query and scaling capabilities of Amazon Keyspaces.
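
For example, converting a single-Region keyspace to multi-Region might look like this sketch, assuming the UpdateKeyspace operation and replicationSpecification shape; note that the region list includes both the existing and the new Regions, and the details should be verified in the Keyspaces API reference.

```python
# Sketch: adding a replica Region to an existing keyspace.
# The replicationSpecification shape is an assumption -- verify in the docs.
import boto3

keyspaces = boto3.client("keyspaces")

keyspaces.update_keyspace(
    keyspaceName="my_keyspace",
    replicationSpecification={
        "replicationStrategy": "MULTI_REGION",
        "regionList": ["us-east-1", "eu-west-1"],  # existing + newly added Region
    },
)
```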

Read more


Amazon Keyspaces (for Apache Cassandra) reduces prices by up to 75%

Amazon Keyspaces (for Apache Cassandra) is a scalable, serverless, highly available, and fully managed Apache Cassandra-compatible database service. Effective today, Amazon Keyspaces (for Apache Cassandra) is reducing prices by up to 75% across several pricing dimensions.

Amazon Keyspaces supports both on-demand and provisioned capacity modes for writing and reading data within a Region or across multiple Regions. Keyspaces’ on-demand mode provides a fully serverless experience with pay-as-you-go pricing and automatic scaling, eliminating the need for capacity planning. Many customers choose on-demand mode for its simplicity, enabling them to build modern, serverless applications that can start small and seamlessly scale to millions of requests per second.

Amazon Keyspaces has lowered prices for on-demand mode by up to 56% for single-Region and up to 65% for multi-Region usage, and for provisioned mode by up to 13% for single-Region and up to 20% for multi-Region usage. Additionally, to make data deletion more cost-effective, Keyspaces has lowered time-to-live (TTL) delete prices by 75%. Previously, on-demand was the cost-effective choice for spiky workloads, but with this pricing change, it now offers a lower cost for most provisioned capacity workloads as well. This change transforms on-demand mode into the recommended and default choice for the majority of Keyspaces workloads.

Together, these price reductions make Amazon Keyspaces even more cost-effective and simplify building, scaling, and managing Cassandra workloads. This pricing change is available in all AWS Regions where AWS offers Amazon Keyspaces. To learn more about the new price reductions, visit the Amazon Keyspaces pricing page.

Read more


amazon-kinesis

Amazon Kinesis Data Streams On-Demand mode supports streams writing up to 10GB/s

Amazon Kinesis Data Streams On-Demand mode now automatically scales to support streaming applications that write up to 10 GB/s per stream and consumers that read up to 20 GB/s per stream. This is a 5x increase from the previously supported limits of 2 GB/s per stream for writers and 4 GB/s for readers.

Amazon Kinesis Data Streams is a serverless data streaming service that allows customers to build decoupled applications that publish and consume real-time data streams. It includes integrations with 40+ AWS and third-party services, enabling customers to easily build real-time stream processing, analytics, and machine learning applications. Customers use Kinesis Data Streams On-Demand mode for workloads with unpredictable and variable traffic patterns, so they do not have to manage capacity, and they pay based on the amount of data streamed. Customers can now use On-Demand mode for high-throughput data streams.

There is no action required on your part to use this feature in the US East (N. Virginia), US West (Oregon), and Europe (Ireland) AWS Regions. When you write data to your Kinesis on-demand stream, it will automatically scale to write up to 10 GB/s. For other AWS Regions, you can reach out to AWS Support to raise the peak write throughput capacity of your on-demand streams to 10 GB/s. To learn more, see the Kinesis Data Streams Quotas and Limits documentation.

Read more


Amazon Kinesis Data Streams launches CloudFormation support for resource policies

Amazon Kinesis Data Streams now provides AWS CloudFormation support for managing resource policies for data streams and consumers. You can use CloudFormation templates to programmatically deploy resource policies in a secure, efficient, and repeatable way, reducing the risk of human error from manual configuration.

Kinesis Data Streams allows users to capture, process, and store data streams in real time at any scale. CloudFormation uses stacks to manage AWS resources, allowing you to track changes, apply updates automatically, and easily roll back changes when needed.

CloudFormation support for resource policies is available in all AWS regions where Amazon Kinesis Data Streams is offered, including the AWS GovCloud (US) Regions and China Regions. To learn more about Amazon Kinesis Data Streams resource policies, visit the developer guide.
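
As a sketch, the new resource type can be deployed like this; the AWS::Kinesis::ResourcePolicy type comes from the announcement, while the account IDs, stream ARN, and granted actions are placeholders.

```python
# Sketch: deploying a Kinesis resource policy with CloudFormation. The policy
# grants a second account read access (ARNs and principals are placeholders).
import json

import boto3

template = {
    "AWSTemplateFormatVersion": "2010-09-09",
    "Resources": {
        "StreamReadPolicy": {
            "Type": "AWS::Kinesis::ResourcePolicy",
            "Properties": {
                "ResourceArn": "arn:aws:kinesis:us-east-1:111122223333:stream/my-stream",
                "ResourcePolicy": {
                    "Version": "2012-10-17",
                    "Statement": [{
                        "Effect": "Allow",
                        "Principal": {"AWS": "arn:aws:iam::444455556666:root"},
                        "Action": ["kinesis:DescribeStreamSummary",
                                   "kinesis:GetRecords",
                                   "kinesis:GetShardIterator"],
                        "Resource": "arn:aws:kinesis:us-east-1:111122223333:stream/my-stream",
                    }],
                },
            },
        }
    },
}

boto3.client("cloudformation").create_stack(
    StackName="kinesis-resource-policy",
    TemplateBody=json.dumps(template),
)
```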

Read more


amazon-kinesis-firehose

Amazon Data Firehose supports continuous replication of database changes to Apache Iceberg Tables in Amazon S3

Amazon Data Firehose now enables capture and replication of database changes to Apache Iceberg Tables in Amazon S3 (Preview). This new feature allows customers to easily stream real-time data from MySQL and PostgreSQL databases directly into Apache Iceberg Tables.

Firehose is a fully managed, serverless streaming service that enables customers to capture, transform, and deliver data streams into Amazon S3, Amazon Redshift, OpenSearch, Splunk, Snowflake, and other destinations for analytics. With this functionality, Firehose performs an initial complete data copy from selected database tables, then continuously streams Change Data Capture (CDC) updates to reflect inserts, updates, and deletions in the Apache Iceberg Tables. This streamlined solution eliminates complex data pipeline setups while minimizing impact on database transaction performance.
Key capabilities include:
• Automatic creation of Apache Iceberg Tables matching source database schemas
• Automatic schema evolution in response to source changes
• Selective replication of specific databases, tables, and columns

This preview feature is available in all AWS Regions except the China, AWS GovCloud (US), and Asia Pacific (Malaysia) Regions. For terms and conditions, see Beta Service Participation in the AWS Service Terms.

To get started, visit Amazon Data Firehose documentation and console.

To learn more about this feature, visit this AWS blog post.

Read more


Amazon Data Firehose support for delivering data into Apache Iceberg tables is available in additional AWS Regions

Amazon Data Firehose support for delivering data streams into Apache Iceberg tables in Amazon S3 is now available in all AWS Regions except the AWS China Regions, the AWS GovCloud (US) Regions, and the Asia Pacific (Malaysia) Region (ap-southeast-5).

With this feature, Firehose integrates with Apache Iceberg, so customers can deliver data streams directly into Apache Iceberg tables in their Amazon S3 data lake. Firehose can acquire data streams from Kinesis Data Streams, Amazon MSK, or the Direct PUT API, and is also integrated to acquire streams from AWS services such as AWS WAF web ACL logs, Amazon CloudWatch Logs, Amazon VPC Flow Logs, AWS IoT, Amazon SNS, Amazon API Gateway access logs, and many others listed here. Customers can stream data from any of these sources directly into Apache Iceberg tables in Amazon S3 and avoid multi-step processes. Firehose is serverless, so customers can simply set up a stream by configuring the source and destination properties, and pay based on bytes processed.

The new feature also allows customers to route records in a data stream to different Apache Iceberg tables based on the content of the incoming record. To route records to different tables, customers can configure routing rules using JSON expressions. Additionally, customers can specify if the incoming record should apply a row-level update or delete operation in the destination Apache Iceberg table, and automate processing for data correction and right to forget scenarios.

To learn more and get started, visit Amazon Data Firehose documentation, pricing, and console.

Read more


amazon-kinesis-streams

Amazon Managed Service for Apache Flink now offers a new Apache Flink connector for Amazon Kinesis Data Streams

This open-source connector, contributed by AWS, supports Apache Flink 2.0 and provides several enhancements. It enables in-order reads during stream scale-up or scale-down, supports Apache Flink's native watermarking, and improves observability through unified connector metrics. Additionally, the connector uses the AWS SDK for Java 2.x, which supports enhanced performance and security features and a native retry strategy.

Amazon Kinesis Data Streams is a serverless data streaming service that enables customers to capture, process, and store data streams at any scale. Amazon Managed Service for Apache Flink makes it easier to transform and analyze streaming data in real time with Apache Flink without having to manage servers or clusters. You can use the new connector to consume data from a Kinesis Data Stream source for real-time processing in your Apache Flink application and can also send data back to a Kinesis Data Streams destination. You can use the new connector to read data from a Kinesis data stream starting with Apache Flink version 1.19.

To learn more about the Apache Flink connector for Amazon Kinesis Data Streams, visit the official Apache Flink documentation. You can also check the GitHub repositories for the Apache Flink AWS connectors.

Read more


Amazon Kinesis Data Streams On-Demand mode supports streams writing up to 10GB/s

Amazon Kinesis Data Streams On-Demand mode now automatically scales to support streaming applications that write up to 10 GB/s per stream and consumers that read up to 20 GB/s per stream. This is a 5x increase from the previously supported limits of 2 GB/s per stream for writers and 4 GB/s for readers.

Amazon Kinesis Data Streams is a serverless data streaming service that allows customers to build decoupled applications that publish and consume real-time data streams. It includes integrations with 40+ AWS and third-party services, enabling customers to easily build real-time stream processing, analytics, and machine learning applications. Customers use Kinesis Data Streams On-Demand mode for workloads with unpredictable and variable traffic patterns, so they do not have to manage capacity, and they pay based on the amount of data streamed. Customers can now use On-Demand mode for high-throughput data streams.

There is no action required on your part to use this feature in the US East (N. Virginia), US West (Oregon), and Europe (Ireland) AWS Regions. When you write data to your Kinesis on-demand stream, it will automatically scale to write up to 10 GB/s. For other AWS Regions, you can reach out to AWS Support to raise the peak write throughput capacity of your on-demand streams to 10 GB/s. To learn more, see the Kinesis Data Streams Quotas and Limits documentation.

Read more


Amazon OpenSearch Ingestion adds support for ingesting data from Amazon Kinesis Data Streams

Amazon OpenSearch Ingestion now allows you to ingest records from Amazon Kinesis Data Streams, enabling you to seamlessly index streaming data in Amazon OpenSearch Service managed clusters or serverless collections without the need for any third-party data connectors. With this integration, you can now use Amazon OpenSearch Ingestion to perform near-real-time aggregations, sampling, and anomaly detection on data ingested from Amazon Kinesis Data Streams, helping you to build efficient data pipelines to power your event-driven applications and real-time analytics use cases.

Amazon OpenSearch Ingestion pipelines can consume data records from one or more Amazon Kinesis Data Streams and transform the data before writing it to Amazon OpenSearch Service or Amazon S3. While reading data from Amazon Kinesis Data Streams via Amazon OpenSearch Ingestion, you have the option to use either enhanced fan-out or shared reads, giving you the flexibility to balance speed and cost. You can also check out this blog post to learn more about this feature.

This feature is available in all 15 AWS commercial Regions where Amazon OpenSearch Ingestion is currently available: US East (Ohio), US East (N. Virginia), US West (Oregon), US West (N. California), Europe (Ireland), Europe (London), Europe (Frankfurt), Asia Pacific (Tokyo), Asia Pacific (Sydney), Asia Pacific (Singapore), Asia Pacific (Mumbai), Asia Pacific (Seoul), Canada (Central), South America (Sao Paulo), and Europe (Stockholm).

To learn more, see the Amazon OpenSearch Ingestion webpage and the Amazon OpenSearch Service Developer Guide.

Read more


Amazon Kinesis Data Streams launches CloudFormation support for resource policies

Amazon Kinesis Data Streams now provides AWS CloudFormation support for managing resource policies for data streams and consumers. You can use CloudFormation templates to programmatically deploy resource policies in a secure, efficient, and repeatable way, reducing the risk of human error from manual configuration.

Kinesis Data Streams allows users to capture, process, and store data streams in real time at any scale. CloudFormation uses stacks to manage AWS resources, allowing you to track changes, apply updates automatically, and easily roll back changes when needed.

CloudFormation support for resource policies is available in all AWS regions where Amazon Kinesis Data Streams is offered, including the AWS GovCloud (US) Regions and China Regions. To learn more about Amazon Kinesis Data Streams resource policies, visit the developer guide.

Read more


New Kinesis Client Library 3.0 reduces stream processing compute costs by up to 33%

You can now reduce the compute costs of processing streaming data with Kinesis Client Library (KCL) 3.0 by up to 33% compared to previous KCL versions. KCL 3.0 introduces an enhanced load balancing algorithm that continuously monitors the resource utilization of stream processing workers and automatically redistributes load from over-utilized workers to underutilized ones. This ensures even CPU utilization across workers and removes the need to over-provision stream processing compute workers, which reduces cost. Additionally, KCL 3.0 is built with the AWS SDK for Java 2.x for improved performance and security features, fully removing the dependency on the AWS SDK for Java 1.x.

KCL is an open-source library that simplifies the development of stream processing applications with Amazon Kinesis Data Streams. It manages complex tasks associated with distributed computing, such as load balancing, fault tolerance, and service coordination, allowing you to focus solely on your core business logic. You can upgrade a stream processing application running on KCL 2.x by simply replacing the current library with KCL 3.0, without any changes in your application code. KCL 3.0 supports stream processing applications running on Amazon EC2 instances or in containers on Amazon ECS, Amazon EKS, or AWS Fargate.

KCL 3.0 is available with Amazon Kinesis Data Streams in all AWS regions. To learn more, see the Amazon Kinesis Data Streams developer guide, KCL 3.0 release notes, and launch blog.

Read more


amazon-location-service

Amazon Location Service launches Enhanced Places, Routes, and Maps

Amazon Location Service now offers enhanced Places, Routes, and Maps functionality, enabling developers to add advanced location capabilities into their applications more easily. These improvements introduce new capabilities and a new streamlined developer experience to support location-based use cases across industries such as healthcare, transportation & logistics, and retail.

The enhancements include powerful search functions like Geocode to search addresses, Search Nearby to find local businesses, and Autocomplete to predict typed addresses, as well as richer place details including opening hours and contact information. This release also introduces advanced route planning capabilities such as toll cost calculation, waypoint optimization for multi-stop delivery, isoline (serviceable area) calculation, and support for a variety of travel restrictions. For example, a food delivery app can use Search Nearby to find and recommend local restaurants, Optimize Waypoints to plan efficient driver routes for multiple orders, and Snap-to-Road to visualize the driver's traveled path on a map. These enhancements are accompanied by new standalone SDKs, making it easier for developers to start new mapping projects or migrate existing workloads to Amazon Location Service to benefit from cost reduction, privacy protection, and ease of integration with other AWS services.

Enhanced Places, Routes, and Maps are available in the following AWS Regions: US East (Ohio), US East (N. Virginia), US West (Oregon), Asia Pacific (Mumbai), Asia Pacific (Sydney), Asia Pacific (Tokyo), Canada (Central), Europe (Frankfurt), Europe (Ireland), Europe (London), Europe (Stockholm), Europe (Spain), and South America (São Paulo). To learn more, please visit the Developer Guide.
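
For example, geocoding an address and searching nearby might look like the following sketch; the geo-places client name and request fields follow the enhanced Places launch but are assumptions to verify in the Developer Guide.

```python
# Sketch: forward-geocoding an address, then finding nearby restaurants.
# Client name and request fields are assumptions -- verify in the docs.
import boto3

places = boto3.client("geo-places", region_name="us-east-1")

# Forward-geocode a street address
geocoded = places.geocode(QueryText="410 Terry Ave N, Seattle, WA", MaxResults=1)
position = geocoded["ResultItems"][0]["Position"]  # [longitude, latitude]

# Find restaurants near that position
nearby = places.search_nearby(
    QueryPosition=position,
    Filter={"IncludeCategories": ["restaurant"]},  # assumed filter shape
    MaxResults=5,
)
for item in nearby["ResultItems"]:
    print(item["Title"])
```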

Read more


amazon-machine-learning

AWS announces Amazon SageMaker Partner AI Apps

Today, Amazon Web Services, Inc. (AWS) announced the general availability of Amazon SageMaker partner AI apps, a new capability that enables customers to easily discover, deploy, and use best-in-class machine learning (ML) and generative AI (GenAI) development applications from leading app providers privately and securely, all without leaving Amazon SageMaker AI, so they can develop performant AI models faster.

Until today, integrating purpose-built GenAI and ML development applications that provide specialized capabilities for a variety of model development tasks required considerable effort. Beyond the need to invest time and effort in due diligence to evaluate existing offerings, customers had to perform undifferentiated heavy lifting in deploying, managing, upgrading, and scaling these applications. Furthermore, to adhere to rigorous security and compliance protocols, organizations need their data to stay within the confines of their security boundaries, without needing to move their data elsewhere, for example, to a Software as a Service (SaaS) application. Finally, the resulting developer experience is often fragmented, with developers having to switch back and forth between multiple disjointed interfaces. With SageMaker partner AI apps, you can quickly subscribe to a partner solution and seamlessly integrate the app with your SageMaker development environment. SageMaker partner AI apps are fully managed and run privately and securely in your SageMaker environment, reducing the risk of data and model exfiltration.

At launch, you can boost your team's productivity and reduce time to market with: Comet, to track, visualize, and manage experiments for AI model development; Deepchecks, to evaluate quality and compliance for AI models; Fiddler, to validate, monitor, analyze, and improve AI models in production; and Lakera, to protect AI applications from security threats such as prompt attacks, data loss, and inappropriate content.

SageMaker partner AI apps are available in all currently supported AWS Regions except the AWS GovCloud (US) Regions. To learn more, please visit the SageMaker partner AI apps developer guide.

Read more


Amazon SageMaker HyperPod now provides flexible training plans

Amazon SageMaker HyperPod announces flexible training plans, a new capability that allows you to train generative AI models within your timelines and budgets. Gain predictable model training timelines and run training workloads within your budget requirements, while continuing to benefit from features of SageMaker HyperPod such as resiliency, performance-optimized distributed training, and enhanced observability and monitoring. 

In a few quick steps, you can specify your preferred compute instances, desired amount of compute resources, duration of your workload, and preferred start date for your generative AI model training. SageMaker then helps you create the most cost-efficient training plans, reducing time to train your model by weeks. Once you create and purchase your training plans, SageMaker automatically provisions the infrastructure and runs the training workloads on these compute resources without requiring any manual intervention. SageMaker also automatically takes care of pausing and resuming training between gaps in compute availability, as the plan switches from one capacity block to another. If you wish to remove all the heavy lifting of infrastructure management, you can also create and run training plans using SageMaker fully managed training jobs.  

SageMaker HyperPod flexible training plans are available in the US East (N. Virginia), US East (Ohio), and US West (Oregon) AWS Regions. To learn more, visit SageMaker HyperPod, its documentation, and the announcement blog.
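
A rough sketch of searching for and purchasing a plan with Boto3 follows; the operation names and parameters are assumptions based on the launch, so verify them in the SageMaker API reference.

```python
# Sketch: finding and purchasing a flexible training plan. Operation and
# parameter names are assumptions -- verify in the SageMaker API reference.
from datetime import datetime, timedelta, timezone

import boto3

sm = boto3.client("sagemaker")

offerings = sm.search_training_plan_offerings(
    InstanceType="ml.p5.48xlarge",
    InstanceCount=8,
    TargetResources=["hyperpod-cluster"],
    StartTimeAfter=datetime.now(timezone.utc),
    EndTimeBefore=datetime.now(timezone.utc) + timedelta(days=14),
    DurationHours=96,
)

# Purchase the most cost-efficient plan returned by the search
sm.create_training_plan(
    TrainingPlanName="llama-pretraining-plan",
    TrainingPlanOfferingId=offerings["TrainingPlanOfferings"][0]["TrainingPlanOfferingId"],
)
```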

Read more


Amazon Bedrock Marketplace brings over 100 models to Amazon Bedrock

Amazon Bedrock Marketplace provides generative AI developers access to over 100 publicly available and proprietary foundation models (FMs), in addition to Amazon Bedrock’s industry-leading, serverless models. Customers deploy these models onto SageMaker endpoints where they can select their desired number of instances and instance types. Amazon Bedrock Marketplace models can be accessed through Bedrock’s unified APIs, and models which are compatible with Bedrock’s Converse APIs can be used with Amazon Bedrock’s tools such as Agents, Knowledge Bases, and Guardrails.

Amazon Bedrock Marketplace empowers generative AI developers to rapidly test and incorporate a diverse array of emerging, popular, and leading FMs of various types and sizes. Customers can choose from a variety of models tailored to their unique requirements, which can help accelerate the time-to-market, improve the accuracy, or reduce the cost of their generative AI workflows. For example, customers can incorporate models highly-specialized for finance or healthcare, or language translation models for Asian languages, all from a single place.

Amazon Bedrock Marketplace is supported in US East (N. Virginia), US East (Ohio), US West (Oregon), Asia Pacific (Mumbai), Asia Pacific (Seoul), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Canada (Central), Europe (Frankfurt), Europe (Ireland), Europe (London), Europe (Paris), and South America (São Paulo).

For more information, please refer to Amazon Bedrock Marketplace's announcement blog or documentation.

Read more


AWS Education Equity Initiative to boost education for underserved learners

Amazon announces a five-year commitment of cloud technology and technical support for organizations creating digital learning solutions that expand access for underserved learners worldwide through the AWS Education Equity Initiative. While the use of educational technologies continues to rise, many organizations lack access to cloud computing and AI resources needed to accelerate and scale their work to reach more learners in need.

Amazon is committing up to $100 million in AWS credits and technical advising to help socially minded organizations build and scale learning solutions that utilize cloud and AI technologies. This will reduce initial financial barriers and provide guidance on building and scaling AI-powered education solutions using AWS technologies.

Eligible recipients, including socially-minded edtechs, social enterprises, non-profits, governments, and corporate social responsibility teams, must demonstrate how their solution will benefit students from underserved communities. The initiative is now accepting applications.

To learn more and how to apply, visit the AWS Education Equity Initiative page.

Read more


Task governance is now generally available for Amazon SageMaker HyperPod

Amazon SageMaker HyperPod now provides you with centralized governance across all generative AI development tasks, such as training and inference. You get full visibility into and control over compute resource allocation, ensuring the most critical tasks are prioritized and compute resources are fully utilized, reducing model development costs by up to 40%.

With HyperPod task governance, administrators can more easily define priorities for different tasks and set limits on how many compute resources each team can use. At any given time, administrators can also monitor and audit the tasks that are running or waiting for compute resources through a visual dashboard. When data scientists create their tasks, HyperPod automatically runs them, adhering to the defined compute resource limits and priorities. For example, when training for a high-priority model needs to be completed as soon as possible but all compute resources are in use, HyperPod frees up resources from lower-priority tasks to support the training: it pauses the low-priority task, saves the checkpoint, and reallocates the freed-up compute resources. The preempted low-priority task resumes from the last saved checkpoint as resources become available again. And when a team is not fully using the resource limits the administrator has set up, HyperPod uses those idle resources to accelerate another team's tasks. Additionally, HyperPod is now integrated with Amazon SageMaker Studio, bringing task governance and other HyperPod capabilities into the Studio environment. Data scientists can now seamlessly interact with HyperPod clusters directly from Studio, allowing them to develop, submit, and monitor machine learning (ML) jobs on powerful accelerator-backed clusters.

Task governance for HyperPod is available in all AWS Regions where HyperPod is available: US East (N. Virginia), US West (N. California), US West (Oregon), Asia Pacific (Mumbai), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Europe (Frankfurt), Europe (Ireland), Europe (London), Europe (Stockholm), and South America (São Paulo).

To learn more, visit the SageMaker HyperPod webpage, the AWS News Blog, and the SageMaker AI documentation.

Read more


Announcing new AWS AI Service Cards to advance responsible generative AI

Today, AWS announces the availability of new AWS AI Service Cards for Amazon Nova Reel; Amazon Nova Canvas; Amazon Nova Micro, Lite, and Pro; Amazon Titan Image Generator; and Amazon Titan Text Embeddings. AI Service Cards are a resource designed to enhance transparency by providing customers with a single place to find information on the intended use cases and limitations, responsible AI design choices, and performance optimization best practices for AWS AI services.

AWS AI Service Cards are part of our comprehensive development process to build services in a responsible way. They focus on key aspects of AI development and deployment, including fairness, explainability, privacy and security, safety, controllability, veracity and robustness, governance, and transparency. By offering these cards, AWS aims to empower customers with the knowledge they need to make informed decisions about using AI services in their applications and workflows. Our AI Service Cards will continue to evolve and expand as we engage with our customers and the broader community to gather feedback and continually iterate on our approach.

For more information, see the AI Service Cards for each of the services listed above.

To learn more about AI Service Cards, as well as our broader approach to building AI in a responsible way, see our Responsible AI webpage.

Read more


Announcing Amazon SageMaker HyperPod recipes

Amazon SageMaker HyperPod recipes help you get started training and fine-tuning publicly available foundation models (FMs) in minutes with state-of-the-art performance. SageMaker HyperPod helps customers scale generative AI model development across hundreds or thousands of AI accelerators with built-in resiliency and performance optimizations, decreasing model training time by up to 40%. However, as FM sizes continue to grow to hundreds of billions of parameters, the process of customizing these models can take weeks of extensive experimenting and debugging. In addition, performing training optimizations to unlock better price performance is often infeasible for customers, as it often requires deep machine learning expertise that could cause further delays in time to market.

With SageMaker HyperPod recipes, customers of all skill sets can benefit from state-of-the-art performance while quickly getting started training and fine-tuning popular publicly available FMs, including Llama 3.1 405B, Mixtral 8x22B, and Mistral 7B. SageMaker HyperPod recipes include a training stack tested by AWS, removing weeks of tedious work experimenting with different model configurations. You can also quickly switch between GPU-based and AWS Trainium-based instances with a one-line recipe change and enable automated model checkpointing for improved training resiliency. Finally, you can run workloads in production on the SageMaker AI training service of your choice. 

SageMaker HyperPod recipes are available in all AWS Regions where SageMaker HyperPod and SageMaker training jobs are supported. To learn more and get started, visit the SageMaker HyperPod page and blog.

Read more


Announcing GenAI Index in Amazon Kendra

Amazon Kendra is an AI-powered search service enabling organizations to build intelligent search experiences and retrieval augmented generation (RAG) systems to power generative AI applications. Starting today, AWS customers can use a new index - the GenAI Index for RAG and intelligent search. With the Kendra GenAI Index, customers get high out-of-the-box search accuracy powered by the latest information retrieval technologies and semantic models.

Kendra GenAI Index supports mobility across AWS generative AI services like Amazon Bedrock Knowledge Base and Amazon Q Business, giving customers the flexibility to use their indexed content across different use cases. It is available as a managed retriever in Bedrock Knowledge Bases, enabling customers to create a Knowledge Base powered by the Kendra GenAI Index. Customers can also integrate such Knowledge Bases with other Bedrock Services like Guardrails, Prompt Flows, and Agents to build advanced generative AI applications. The GenAI Index supports connectors for 43 different data sources, enabling customers to easily ingest content from a variety of sources.

Kendra GenAI Index is available in the US East (N. Virginia) and US West (Oregon) regions.

To learn more, see Kendra GenAI Index in the Amazon Kendra Developer Guide. For pricing, please refer to the Amazon Kendra pricing page.
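
As a hedged sketch, creating a GenAI Index with boto3 might look like the following; the GEN_AI_ENTERPRISE_EDITION edition value and the role ARN are assumptions to verify against the Amazon Kendra API reference.

```python
# Minimal sketch: creating a Kendra GenAI Index with boto3. The edition enum
# value is an assumption based on this announcement.
import boto3

kendra = boto3.client("kendra", region_name="us-east-1")

response = kendra.create_index(
    Name="my-genai-index",
    Edition="GEN_AI_ENTERPRISE_EDITION",  # the new GenAI Index edition (assumed enum value)
    RoleArn="arn:aws:iam::123456789012:role/KendraIndexRole",
)
# The returned index ID can then back a Bedrock Knowledge Bases retriever.
print(response["Id"])
```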

Read more


Amazon Bedrock now supports multi-agent collaboration

Amazon Bedrock now supports multi-agent collaboration, allowing organizations to build and manage multiple AI agents that work together to solve complex workflows. This feature allows developers to create agents with specialized roles tailored for specific business needs, such as financial data collection, research, and decision-making. By enabling seamless agent collaboration, Amazon Bedrock empowers organizations to optimize performance across industries like finance, customer service, and healthcare.

With multi-agent collaboration on Amazon Bedrock, organizations can effortlessly master complex workflows, achieving highly accurate and scalable results across diverse applications. In financial services, for example, specialized agents coordinate to gather data, analyze trends, and provide actionable recommendations—working in parallel to improve response times and precision. This collaborative feature allows businesses to quickly build, deploy, and scale multi-agent setups, reducing development time while ensuring seamless integration and adaptability to evolving needs.

Multi-agent collaboration is currently available in the US East (N. Virginia), US West (Oregon), and Europe (Ireland) AWS Regions.

To learn more, visit Amazon Bedrock Agents.
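
The sketch below shows, under stated assumptions, how a supervisor agent might be wired to a collaborator with boto3; the agentCollaboration value, the AssociateAgentCollaborator parameters, and all ARNs are assumptions or placeholders to check against the Amazon Bedrock Agents API reference.

```python
# Hedged sketch: create a supervisor agent, then attach an existing
# specialist agent (via its alias) as a collaborator.
import boto3

client = boto3.client("bedrock-agent", region_name="us-east-1")

supervisor = client.create_agent(
    agentName="finance-supervisor",
    foundationModel="anthropic.claude-3-5-sonnet-20240620-v1:0",
    agentResourceRoleArn="arn:aws:iam::123456789012:role/BedrockAgentRole",
    instruction="Coordinate specialist agents to answer financial research questions.",
    agentCollaboration="SUPERVISOR",  # enables multi-agent collaboration (assumed enum value)
)

client.associate_agent_collaborator(
    agentId=supervisor["agent"]["agentId"],
    agentVersion="DRAFT",
    collaboratorName="market-data-collector",
    collaborationInstruction="Delegate market data gathering and summarization to this agent.",
    agentDescriptor={"aliasArn": "arn:aws:bedrock:us-east-1:123456789012:agent-alias/AGENTID/ALIASID"},
)
```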

Read more


Amazon Q Developer can now automate code reviews

Starting today, Amazon Q Developer can also perform code reviews, automatically providing comments on your code in the IDE, flagging suspicious code patterns, providing patches where available, and even assessing deployment risk so you can get feedback on your code quickly.

Q Developer is a generative AI-powered assistant for designing, building, testing, deploying, and maintaining software. Its agents for software development have a deep understanding of your entire code repos, so they can accelerate many tasks beyond coding. By automating the first round of code reviews and improving review consistency, Q Developer empowers code authors to fix issues faster, streamlining the process for both authors and reviewers. With this new capability, Q Developer can help you get immediate feedback for your code reviews and code fixes where available, so you can increase the speed of iteration and improve the quality of your code.

This capability is available in the integrated development environment (IDE) through a new chat command: /review. You can start automating code reviews via the Visual Studio Code and IntelliJ IDEA integrated development environments (IDEs) with either an Amazon Q Developer Free Tier or Pro Tier subscription. For more details on pricing, see Amazon Q Developer pricing.

This capability is available in all AWS Regions where Amazon Q Developer is available. To get started with automated code reviews, visit Amazon Q Developer or read the news blog.

Read more


Amazon Bedrock Model Distillation is now available in preview

With Amazon Bedrock Model Distillation, customers can use smaller, faster, more cost-effective models that deliver use-case specific accuracy that is comparable to the most capable models in Amazon Bedrock.

Today, fine-tuning a smaller cost-efficient model to increase its accuracy for a customer's use case is an iterative process in which customers need to write prompts and responses, refine the training dataset, ensure that the training dataset captures diverse examples, and adjust the training parameters.

Amazon Bedrock Model Distillation automates the process needed to generate synthetic data from the teacher model, trains and evaluates the student model, and then hosts the final distilled model for inference. To remove some of the burden of iteration, Model Distillation may choose to apply different data synthesis methods that are best suited for your use case to create a distilled model that approximately matches the advanced model for the specific use case. For example, Bedrock may expand the training dataset by generating similar prompts, or generate high-quality synthetic responses using customer-provided prompt-response pairs as golden examples.

Learn more in our documentation and blog.
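
As a hedged sketch, starting a distillation job with boto3 might look like the following; the nested customizationConfig field names and the model identifiers are assumptions to verify in the documentation linked above.

```python
# Hedged sketch: kick off a Bedrock Model Distillation job. The student and
# teacher model identifiers are placeholders.
import boto3

bedrock = boto3.client("bedrock", region_name="us-east-1")

bedrock.create_model_customization_job(
    jobName="distill-routing-assistant",
    customModelName="routing-assistant-distilled",
    roleArn="arn:aws:iam::123456789012:role/BedrockCustomizationRole",
    baseModelIdentifier="meta.llama3-1-8b-instruct-v1:0",  # student model (placeholder)
    customizationType="DISTILLATION",
    customizationConfig={
        "distillationConfig": {
            "teacherModelConfig": {
                "teacherModelIdentifier": "meta.llama3-1-405b-instruct-v1:0",  # teacher (placeholder)
                "maxResponseLengthForInference": 1000,
            }
        }
    },
    trainingDataConfig={"s3Uri": "s3://amzn-s3-demo-bucket/prompts/train.jsonl"},
    outputDataConfig={"s3Uri": "s3://amzn-s3-demo-bucket/distillation-output/"},
)
```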
 

Read more


Amazon Q Developer adds operational investigation capability (Preview)

Amazon Q Developer now helps you accelerate operational investigations across your AWS environment in a fraction of the time. With a deep understanding of your AWS cloud environment and resources, Amazon Q Developer looks for anomalies in your environment, surfaces related signals for you to explore, identifies potential root-cause hypotheses, and suggests next steps to help you remediate issues faster.

Amazon Q Developer works alongside you throughout your operational troubleshooting journey, from issue detection and triage through remediation. You can initiate an investigation by selecting the Investigate action on any Amazon CloudWatch data widget across the AWS Management Console. You can also configure Amazon Q to automatically investigate when a CloudWatch alarm is triggered. When an investigation starts, Amazon Q Developer sifts through various signals about your AWS environment including CloudWatch telemetry, AWS CloudTrail Logs, deployment information, changes to resource configuration, and AWS Health events.

CloudWatch now provides a dedicated investigation experience where teams can collaborate and add findings, view related signals and anomalies, and review suggestions for potential root cause hypotheses. This new capability also provides remediation suggestions for common operational issues across your AWS environment by surfacing relevant AWS Systems Manager Automation runbooks, AWS re:Post articles, and documentation. It also integrates with your existing operational workflows such as Slack via AWS Chatbot. 

The new operational investigation capability within Amazon Q Developer is available at no additional cost during preview in the US East (N. Virginia) Region. To learn more, see the getting started and best practice documentation.

Read more


Introducing Amazon SageMaker Data and AI Governance

Today, AWS announces Amazon SageMaker Data and AI Governance, a new capability that simplifies discovery, governance, and collaboration for data and AI across your lakehouse, AI models, and applications. Built on Amazon DataZone, SageMaker Data and AI Governance allows engineers, data scientists, and analysts to securely discover and access approved data and models using semantic search with generative AI–created metadata. This new offering helps organizations consistently define and enforce access policies using a single permission model with fine-grained access controls.

With SageMaker Data and AI Governance, you can accelerate data and AI discovery and collaboration at scale. You can enhance data discovery by automatically enriching your data and metadata with business context using generative AI, making it easier for all users to find, understand, and use data. You can share data, AI models, prompts, and other generative AI assets with filtering by table and column names or business glossary terms. SageMaker Data and AI Governance helps establish trust and drives transparency in your data pipelines and AI projects with built-in model monitoring to detect bias and report on how features contribute to your model predictions.

To learn more about how to govern your data and AI assets, visit SageMaker Data and AI Governance.

Read more


Amazon Q in QuickSight unifies insights from structured and unstructured data

Now generally available, Amazon Q in QuickSight provides users with unified insights from structured and unstructured data sources through integration with Amazon Q Business. While structured data is managed in conventional systems, unstructured data such as document libraries, webpages, images and more has remained largely untapped due to its diverse and distributed nature.

With Amazon Q in QuickSight, business users can now augment insights from traditional BI data sources, such as databases, data lakes, and data warehouses, with contextual information from unstructured sources. Users can get augmented insights within QuickSight's BI interface across multi-visual Q&A and data stories. Users can use multi-visual Q&A to ask questions in natural language and get visualizations and data summaries augmented with contextual insights from Amazon Q Business. With data stories in Amazon Q in QuickSight, users can upload documents or connect to unstructured data sources from Amazon Q Business to create richer narratives or presentations explaining their data with additional context. This integration enables organizations to harness insights from all their data without the need for manual collation, leading to more informed decision-making, time savings, and a significant competitive edge in the data-driven business landscape.

This new capability is generally available to all Amazon QuickSight Pro Users in US East (N. Virginia), and US West (Oregon) AWS Regions.

To learn more, visit the AWS Business Intelligence Blog and the Amazon Q Business What's New post, and try QuickSight free for 30 days.
 

Read more


Amazon Q Developer can now generate documentation within your source code

Starting today, Amazon Q Developer can document your code by automatically generating readme files and data-flow diagrams within your projects. 

Today, developers report they spend an average of just one hour per day coding. They spend most of their time on tedious, undifferentiated tasks such as learning codebases, writing and reviewing documentation, testing, managing deployments, troubleshooting issues or finding and fixing vulnerabilities. Q Developer is a generative AI-powered assistant for designing, building, testing, deploying, and maintaining software. Its agents for software development have a deep understanding of your entire code repos, so they can accelerate many tasks beyond coding. With this new capability, Q Developer can help you understand your existing code bases faster, or quickly document new features, so you can focus on shipping features for your customers.

This capability is available in the integrated development environment (IDE) through a new chat command: /doc. You can get started generating documentation within the Visual Studio Code and IntelliJ IDEA IDEs with an Amazon Q Developer Free Tier or Pro Tier subscription. For more details on pricing, see Amazon Q Developer pricing.

This capability is available in all AWS Regions where Amazon Q Developer is available. To get started with generating documentation, visit Amazon Q Developer or read the news blog.

Read more


Announcing Amazon Bedrock IDE in preview as part of Amazon SageMaker Unified Studio

Today we are announcing the preview launch of Amazon Bedrock IDE, a governed collaborative environment integrated within Amazon SageMaker Unified Studio (preview) that enables developers to swiftly build and tailor generative AI applications. It provides an intuitive interface for developers across various skill levels to access Amazon Bedrock's high-performing foundation models (FMs) and advanced customization capabilities in order to collaboratively build custom generative AI applications.

Amazon Bedrock IDE's integration into Amazon SageMaker Unified Studio removes barriers between data, tools, and builders, for generative AI development. Teams can now access their preferred analytics and ML tools alongside Amazon Bedrock IDE's specialized tools for building generative AI applications. Developers can leverage Retrieval Augmented Generation (RAG) to create Knowledge Bases from their proprietary data sources, Agents for complex task automation, and Guardrails for responsible AI development. This unified workspace reduces complexity, accelerating the prototyping, iteration, and deployment of production-ready, responsible generative AI apps aligned with business needs.

Amazon Bedrock IDE is now available in Amazon SageMaker Unified Studio and supported in five AWS Regions. For more information on supported Regions, please refer to the Amazon SageMaker Unified Studio regions guide.

Learn more about Amazon Bedrock IDE and its features by visiting the Amazon Bedrock IDE user guide and get started with Bedrock IDE by enabling a “Generative AI application development” project profile using this admin guide.
 

Read more


Amazon Q Developer transformation capabilities for mainframe modernization are now available (Preview)

Today, AWS announces new generative AI–powered capabilities of Amazon Q Developer in public preview to help customers and partners accelerate large-scale assessment and modernization of mainframe applications.

Amazon Q Developer is enterprise-ready, offering a unified web experience tailored for large-scale modernization, federated identity, and easier collaboration. Keeping you in the loop, Amazon Q Developer agents analyze and document your code base, identify missing assets, decompose monolithic applications into business domains, plan modernization waves, and refactor code. You can chat with Amazon Q Developer in natural language to share high-level transformation objectives, source repository access, and project context. Amazon Q Developer agents autonomously classify and organize application assets and create comprehensive code documentation to understand and expand the knowledge base of your organization. The agents combine goal-driven reasoning using generative AI and modernization expertise to develop modernization plans customized for your code base and transformation objectives. You can then collaboratively review, adjust, and approve the plans through iterative engagement with the agents. Once you approve the proposed plan, Amazon Q Developer agents autonomously refactor the COBOL code into cloud-optimized Java code while preserving business logic.

By delegating tedious tasks to autonomous Amazon Q Developer agents with your review and approvals, you and your team can collaboratively drive faster modernization, larger project scale, and better transformation quality and performance using generative AI large language models. You can enhance governance and compliance by maintaining a well-documented and explainable trail of transformation decisions.

To learn more, read the blog and visit Amazon Q Developer transformation capabilities webpage and documentation.

Read more


Introducing latency-optimized inference for foundation models in Amazon Bedrock

Latency-optimized inference for foundation models in Amazon Bedrock is now available in public preview, delivering faster response times and improved responsiveness for AI applications. Currently, these new inference options support Anthropic's Claude 3.5 Haiku model and Meta's Llama 3.1 405B and 70B models, offering reduced latency compared to standard models without compromising accuracy. As verified by Anthropic, with latency-optimized inference in Amazon Bedrock, Claude 3.5 Haiku runs faster on AWS than anywhere else. Additionally, with latency-optimized inference in Bedrock, Llama 3.1 405B and 70B run faster on AWS than on any other major cloud provider.

As more customers move their generative AI applications to production, optimizing the end-user experience becomes crucial, particularly for latency-sensitive applications such as real-time customer service chatbots and interactive coding assistants. Using purpose-built AI chips like AWS Trainium2 and advanced software optimizations in Amazon Bedrock, customers can access more options to optimize their inference for a particular use case. Accessing these capabilities requires no additional setup or model fine-tuning, allowing for immediate enhancement of existing applications with faster response times.

Latency-optimized inference is available for Anthropic’s Claude 3.5 Haiku and Meta’s Llama 3.1 405B and 70B in the US East (Ohio) Region via cross-region inference. To get started, visit the Amazon Bedrock console. For more information about Amazon Bedrock and its capabilities, visit the Amazon Bedrock product page, pricing page, and documentation.
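
As a minimal sketch, opting in to latency-optimized inference through the Converse API looks like the following; the inference profile ID is a placeholder.

```python
# Minimal sketch: request latency-optimized inference by setting the
# performance configuration to "optimized" on a Converse call.
import boto3

runtime = boto3.client("bedrock-runtime", region_name="us-east-2")  # US East (Ohio)

response = runtime.converse(
    modelId="us.anthropic.claude-3-5-haiku-20241022-v1:0",  # cross-region inference profile (placeholder)
    messages=[{"role": "user", "content": [{"text": "Summarize our return policy in one sentence."}]}],
    performanceConfig={"latency": "optimized"},  # default is "standard"
)
print(response["output"]["message"]["content"][0]["text"])
```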

Read more


Amazon Bedrock Knowledge Bases now supports RAG evaluation (Preview)

Today, we are announcing RAG evaluation support in Amazon Bedrock Knowledge Bases. This capability allows you to evaluate your retrieval-augmented generation (RAG) applications built on Amazon Bedrock Knowledge Bases. You can evaluate either information retrieval alone or retrieval plus content generation. Evaluations are powered by LLM-as-a-Judge technology, with customers having a choice of several judge models to use. For retrieval evaluation, you can select from metrics such as context relevance and coverage. For retrieve-plus-generation evaluation, you can select from quality metrics such as correctness, completeness, and faithfulness (hallucination detection), as well as responsible AI metrics such as harmfulness, answer refusal, and stereotyping. You can also compare evaluation jobs side by side to assess Knowledge Bases with different settings, such as chunking strategy or vector length, or with different content-generating models.

Evaluating RAG applications can be difficult, as there are many components in retrieval and generation that need to be optimized. Now, the RAG evaluation tool in Amazon Bedrock Knowledge Bases allows customers to evaluate their Knowledge Base-powered applications conveniently and quickly, where their data and LLMs already live. Additionally, you can incorporate Amazon Bedrock Guardrails directly into your evaluation for even more thorough testing. Using these RAG evaluation tools on Amazon Bedrock can save costs as well as weeks of time compared to a full offline human-based evaluation, allowing you to improve your application faster and more easily.

To learn more, including region availability, read the AWS News blog and visit the Amazon Bedrock Evaluations page. To get started, log into the Amazon Bedrock Console or use the Amazon Bedrock APIs.
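
As a heavily hedged sketch, creating a RAG evaluation job with boto3 might look like the following; CreateEvaluationJob is existing Bedrock API surface, but the nested field names for judge models and ragConfigs are assumptions drawn from this announcement and should be checked against the Amazon Bedrock Evaluations documentation.

```python
# Hedged sketch: evaluate a Knowledge Base's retrieve-and-generate quality
# with an LLM-as-a-Judge evaluation job. IDs, ARNs, and metric names are
# placeholders or assumptions.
import boto3

bedrock = boto3.client("bedrock", region_name="us-east-1")

bedrock.create_evaluation_job(
    jobName="kb-rag-eval-chunking-v2",
    roleArn="arn:aws:iam::123456789012:role/BedrockEvalRole",
    evaluationConfig={
        "automated": {
            "datasetMetricConfigs": [{
                "taskType": "QuestionAndAnswer",
                "dataset": {
                    "name": "my-eval-set",
                    "datasetLocation": {"s3Uri": "s3://amzn-s3-demo-bucket/eval/prompts.jsonl"},
                },
                "metricNames": ["Builtin.Correctness", "Builtin.Faithfulness"],
            }],
            # Choice of judge model for LLM-as-a-Judge scoring.
            "evaluatorModelConfig": {
                "bedrockEvaluatorModels": [
                    {"modelIdentifier": "anthropic.claude-3-5-sonnet-20240620-v1:0"}
                ]
            },
        }
    },
    inferenceConfig={
        "ragConfigs": [{
            "knowledgeBaseConfig": {
                "retrieveAndGenerateConfig": {
                    "type": "KNOWLEDGE_BASE",
                    "knowledgeBaseConfiguration": {
                        "knowledgeBaseId": "KB123EXAMPLE",
                        "modelArn": "arn:aws:bedrock:us-east-1::foundation-model/anthropic.claude-3-haiku-20240307-v1:0",
                    },
                }
            }
        }]
    },
    outputDataConfig={"s3Uri": "s3://amzn-s3-demo-bucket/eval/results/"},
)
```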

Read more


Amazon Bedrock Knowledge Bases now supports custom connectors and ingestion of streaming data

Amazon Bedrock Knowledge Bases now supports custom connectors and ingestion of streaming data, allowing developers to add, update, or delete data in their knowledge base through direct API calls. Amazon Bedrock Knowledge Bases offers fully-managed, end-to-end Retrieval-Augmented Generation (RAG) workflows to create highly accurate, low latency, secure, and custom GenAI applications by incorporating contextual information from your company's data sources. With this new capability, customers can easily ingest specific documents from custom data sources or Amazon S3 without requiring a full sync, and ingest streaming data without the need for intermediary storage.

This enhancement enables customers to ingest specific documents from any custom data source and reduce latency and operational costs for intermediary storage while ingesting streaming data. For instance, a financial services firm can now keep its knowledge base continuously updated with the latest market data, ensuring that their GenAI applications deliver the most relevant information to end-users. By eliminating time-consuming full syncs and storage steps, customers gain faster access to data, reducing latency, and improving application performance.

Customers can start using this feature either through the console or programmatically via the APIs. In the console, users can select a custom connector as the data source, then add documents, text, or base64 encoded text strings.

This capability is available in all regions where Amazon Bedrock Knowledge Bases is supported. There is no additional cost for using this new custom connector capability.

To learn more, visit Amazon Bedrock Knowledge Bases product documentation.
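
As a hedged sketch, ingesting a document directly into a Knowledge Base from a custom data source might look like the following; the nested content field names are assumptions to verify in the Knowledge Bases API reference.

```python
# Hedged sketch: push a single document into a Knowledge Base without a full
# data source sync, using the direct-ingestion API introduced here.
import boto3

client = boto3.client("bedrock-agent", region_name="us-east-1")

client.ingest_knowledge_base_documents(
    knowledgeBaseId="KB123EXAMPLE",
    dataSourceId="DS456EXAMPLE",  # a data source configured with the custom connector
    documents=[{
        "content": {
            "dataSourceType": "CUSTOM",
            "custom": {
                "customDocumentIdentifier": {"id": "market-update-2024-12-04"},
                "sourceType": "IN_LINE",
                "inlineContent": {
                    "type": "TEXT",
                    "textContent": {"data": "Market close: index up 1.2% on tech earnings."},
                },
            },
        }
    }],
)
```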
 

Read more


Amazon Bedrock Knowledge Bases now supports streaming responses

Amazon Bedrock Knowledge Bases offers fully-managed, end-to-end Retrieval-Augmented Generation (RAG) workflows to create highly accurate, low latency, secure, and custom GenAI applications by incorporating contextual information from your company's data sources. Today, we are announcing support for the RetrieveAndGenerateStream API in Bedrock Knowledge Bases. This new streaming API allows Bedrock Knowledge Bases customers to receive the response as it is being generated by the Large Language Model (LLM), rather than waiting for the complete response.

A RAG workflow involves several steps, including querying the data store, gathering relevant context, and then sending the query to an LLM for response summarization. This final step of response generation can take a few seconds, depending on the latency of the underlying model used in response generation. To reduce this latency when building latency-sensitive applications, we're now offering the RetrieveAndGenerateStream API, which provides the response as a stream as it is being generated by the model. This results in reduced latency for the first response, providing users with a more seamless and responsive experience when interacting with Bedrock Knowledge Bases.

This new capability is currently supported in all existing Amazon Bedrock Knowledge Base regions. To learn more, visit the documentation.
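
A minimal sketch of calling the new API with boto3 follows; the knowledge base ID and model ARN are placeholders, and the exact event structure should be confirmed in the API reference.

```python
# Minimal sketch: stream a Knowledge Base answer token-by-token instead of
# waiting for the full response.
import boto3

runtime = boto3.client("bedrock-agent-runtime", region_name="us-east-1")

response = runtime.retrieve_and_generate_stream(
    input={"text": "What is our parental leave policy?"},
    retrieveAndGenerateConfiguration={
        "type": "KNOWLEDGE_BASE",
        "knowledgeBaseConfiguration": {
            "knowledgeBaseId": "KB123EXAMPLE",
            "modelArn": "arn:aws:bedrock:us-east-1::foundation-model/anthropic.claude-3-haiku-20240307-v1:0",
        },
    },
)

# Print generated text as the model produces it.
for event in response["stream"]:
    if "output" in event:
        print(event["output"]["text"], end="", flush=True)
```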
 

Read more


Amazon Bedrock now supports Rerank API to improve accuracy of RAG applications

Amazon Bedrock announces support for reranker models through the Rerank API, enabling developers to improve the relevance of responses in Retrieval-Augmented Generation (RAG) applications. The reranker models rank a set of retrieved documents based on their relevance to the user's query, helping to prioritize the most relevant content to be passed to the foundation model (FM) for response generation. Amazon Bedrock Knowledge Bases offers fully-managed, end-to-end RAG workflows to create custom generative AI applications by incorporating contextual information from various data sources. For Amazon Bedrock Knowledge Bases users, the reranker can be enabled through a setting available in the Retrieve and RetrieveAndGenerate APIs.

Semantic search in RAG systems can improve document retrieval relevance but may struggle with complex or ambiguous queries. For example, a customer service chatbot asked about returning an online purchase might retrieve documents on both return policies and shipping guidelines. Without proper ranking, the generated response could focus on shipping instead of returns, missing the user's intent. Now, Amazon Bedrock provides access to reranking models which will address this by reordering retrieved documents based on their relevance to the user query. This ensures the most useful information is sent to the foundation model for response generation, optimizing the context window usage and potentially reducing costs.

The Rerank API supports Amazon Rerank 1.0 and Cohere Rerank 3.5 models. These models are available in US West (Oregon), Canada (Central), Europe (Frankfurt) and Asia Pacific (Tokyo).

Please visit the Amazon Bedrock product documentation. For details on pricing, please refer to the pricing page.
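
As a hedged sketch, the standalone Rerank API can be called as follows; the request shape and the reranker model ARN format are assumptions based on this announcement.

```python
# Hedged sketch: reorder candidate passages by relevance to a query before
# passing the top results to an FM for generation.
import boto3

runtime = boto3.client("bedrock-agent-runtime", region_name="us-west-2")

docs = [
    "Returns are accepted within 30 days with proof of purchase.",
    "Standard shipping takes 5 to 7 business days.",
]

response = runtime.rerank(
    queries=[{"type": "TEXT", "textQuery": {"text": "How do I return an online purchase?"}}],
    sources=[
        {"type": "INLINE",
         "inlineDocumentSource": {"type": "TEXT", "textDocument": {"text": d}}}
        for d in docs
    ],
    rerankingConfiguration={
        "type": "BEDROCK_RERANKING_MODEL",
        "bedrockRerankingConfiguration": {
            "numberOfResults": 1,
            "modelConfiguration": {
                "modelArn": "arn:aws:bedrock:us-west-2::foundation-model/amazon.rerank-v1:0"  # placeholder
            },
        },
    },
)
# Each result references a source document by index with a relevance score.
for result in response["results"]:
    print(result["index"], result["relevanceScore"])
```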
 

Read more


PartyRock improves app discovery and announces upcoming free daily use

Starting today, PartyRock supports improved app discovery using search, making it even easier to explore and build with generative AI. In addition, a new and improved daily free usage model will replace the current free trial grant in 2025 to further empower everyone to build AI apps on PartyRock with daily recurring free use.

Previously, AWS offered new PartyRock users a free trial for a limited time; starting in 2025, a free daily use grant will let you access and experiment with PartyRock apps without the worry of exhausting free trial credits. Since its launch in November 2023, more than half a million apps have been created by PartyRock users. Until now, discovering those apps required link or playlist sharing, or browsing featured apps on the PartyRock Discover page. Users can now use the search bar on the homepage to explore all publicly published PartyRock apps.

Discover how you can build apps to help improve your everyday individual productivity and experiment with these new features by trying PartyRock today. To learn more, read our AWS News Blog.
 

Read more


Amazon Q Developer can now provide more personalized chat answers based on console context

Today, AWS announces the general availability of console context awareness for the Amazon Q Developer chat within the AWS Management Console. This new capability allows Amazon Q Developer to dynamically understand and respond to inquiries based on the specific AWS service you are currently viewing or configuring and the region you are operating within. For example, if you are working within the Amazon Elastic Container Service (Amazon ECS) console, you can ask "How can I create a cluster?" and Amazon Q Developer will recognize the context and provide relevant guidance tailored to creating ECS clusters.

This update enables more natural conversations without providing repetitive context details, allowing you to arrive at the answers you seek faster. This capability is included at no additional cost in both the Amazon Q Developer Free Tier and the Pro Tier, which requires a paid subscription. For more information on pricing, please see the Amazon Q Developer Pricing page. You can access this feature in all Regions where Amazon Q Developer chat is available in the AWS Management Console. You can get started today by chatting with Amazon Q Developer in the AWS Management Console.
 

Read more


Amazon Bedrock Agents now supports custom orchestration

Amazon Bedrock Agents now supports custom orchestration, allowing developers to control how agents handle multistep tasks, make decisions, and execute complex workflows. This capability enables developers to define custom orchestration logic for their agents using AWS Lambda, providing the flexibility to tailor an agent's behavior to fit specific use cases.

With custom orchestration, developers can implement any customized orchestration strategy for their agents, including Plan and Solve, Tree of Thought, and Standard Operating Procedures (SOP). This ensures agents perform tasks in the desired order, manage states effectively, and integrate seamlessly with external tools. Whether handling complex business processes or automating intricate workflows, custom orchestration offers greater control, accuracy, and efficiency to meet business objectives.

Custom orchestration is now available in all AWS Regions where Amazon Bedrock Agents are supported. To learn more, visit the documentation.
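
As a hedged sketch, attaching a custom orchestration Lambda at agent creation might look like the following; the orchestrationType and customOrchestration fields are assumptions based on this announcement, and the Lambda itself must implement the orchestration contract described in the documentation.

```python
# Hedged sketch: create an agent whose orchestration loop is driven by your
# own Lambda function (e.g., implementing a plan-and-solve or SOP strategy).
import boto3

client = boto3.client("bedrock-agent", region_name="us-east-1")

client.create_agent(
    agentName="claims-processor",
    foundationModel="anthropic.claude-3-5-sonnet-20240620-v1:0",
    agentResourceRoleArn="arn:aws:iam::123456789012:role/BedrockAgentRole",
    instruction="Process insurance claims following our standard operating procedure.",
    orchestrationType="CUSTOM_ORCHESTRATION",  # assumed enum value; omit for default orchestration
    customOrchestration={
        "executor": {
            # Lambda that receives orchestration events and returns the next action.
            "lambda": "arn:aws:lambda:us-east-1:123456789012:function:sop-orchestrator"
        }
    },
)
```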
 

Read more


Amazon Q Java transformation launches Step-by-Step and Library Upgrades

Amazon Q Developer Java upgrade transformation now offers step-by-step upgrades and library upgrades for Java 17 applications. This new feature allows developers to review and accept code changes in multiple diffs, and to test proposed changes in each diff step by step. Additionally, Amazon Q can now upgrade libraries for applications already on Java 17, enabling continuous maintenance.

This launch significantly improves the code review and application modernization process. By allowing developers to review smaller amounts of code changes at a time, it makes errors easier to fix when manual completion is required. The ability to upgrade apps already on Java 17 to the latest reliable libraries helps organizations save time and effort in maintaining their applications across the board.

This capability is available within the Visual Studio Code and IntelliJ IDEs.

Learn more and get started with these new features here.

Read more


Amazon Q Developer now provides natural language cost analysis

Today, AWS announces the addition of cost analysis capabilities to Amazon Q Developer, allowing customers to retrieve and interpret their AWS cost data through natural language interactions. Amazon Q Developer is a generative AI-powered assistant that helps customers build, deploy, and operate applications on AWS. The cost analysis capability helps users of all skill levels to better understand and manage their AWS spending without previous knowledge of AWS Cost Explorer.

Customers can now ask Amazon Q Developer questions about their AWS costs such as "Which region had the largest cost increase last month?" or "What services cost me the most last quarter?". Q interprets these questions, analyzes the relevant cost data, and provides easy-to-understand responses. Each answer includes transparency on the Cost Explorer parameters used and a link to visualize the data in Cost Explorer.

This feature is now available in all AWS Regions where Amazon Q Developer is supported. Customers can access it via the Amazon Q icon in the AWS Management Console. To get started, see the AWS Cost Management user guide.
 

Read more


Amazon Q Developer now transforms embedded SQL from Oracle to PostgreSQL

When you use AWS Database Migration Service (DMS) and DMS Schema Conversion to migrate a database, you might need to convert the embedded SQL in your application to be compatible with your target database. Rather than converting it manually, you can use Amazon Q Developer in the IDE to automate the conversion.

Amazon Q Developer uses metadata from a DMS Schema Conversion to convert embedded SQL in your application to a version that is compatible with your target database. Amazon Q Developer will detect Oracle SQL statements in your application and convert them to PostgreSQL. You can review and accept the proposed changes, view a summary of the transformation, and follow the recommended next steps in the summary to verify and test the transformed code.

This capability is available within the Visual Studio Code and IntelliJ IDEs.

Learn more and get started here.
 

Read more


Amazon SageMaker introduces Scale Down to Zero for AI inference to help customers save costs

We are excited to announce Scale Down to Zero, a new capability in Amazon SageMaker Inference that allows endpoints to scale to zero instances during periods of inactivity. This feature can significantly reduce costs for running inference using AI models, making it particularly beneficial for applications with variable traffic patterns such as chatbots, content moderation systems, and other generative AI use cases.

With Scale Down to Zero, customers can configure their SageMaker inference endpoints to automatically scale to zero instances when not in use, then quickly scale back up when traffic resumes. This capability is effective for scenarios with predictable traffic patterns, intermittent inference traffic, and development/testing environments. Implementing Scale Down to Zero is simple with SageMaker Inference Components. Customers can configure auto-scaling policies through the AWS SDK for Python (Boto3), SageMaker Python SDK, or the AWS Command Line Interface (AWS CLI). The process involves setting up an endpoint with managed instance scaling enabled, configuring scaling policies, and creating CloudWatch alarms to trigger scaling actions.

Scale Down to Zero is now generally available in all AWS regions where Amazon SageMaker is supported. To learn more about implementing Scale Down to Zero and optimizing costs for generative AI deployments, please visit our documentation page.
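
As a minimal sketch under stated assumptions, the Application Auto Scaling configuration for an endpoint that uses inference components might look like the following; the component name, capacities, and target value are placeholders.

```python
# Minimal sketch: let an inference component scale between 0 and 4 copies.
# With managed instance scaling on the endpoint, zero copies means zero
# instances (and no instance charges) while the endpoint is idle.
import boto3

autoscaling = boto3.client("application-autoscaling", region_name="us-east-1")
resource_id = "inference-component/my-llm-component"  # your inference component

autoscaling.register_scalable_target(
    ServiceNamespace="sagemaker",
    ResourceId=resource_id,
    ScalableDimension="sagemaker:inference-component:DesiredCopyCount",
    MinCapacity=0,  # Scale Down to Zero
    MaxCapacity=4,
)

# Remove copies when traffic stops; add copies as invocations per copy rise.
autoscaling.put_scaling_policy(
    PolicyName="scale-on-invocations",
    ServiceNamespace="sagemaker",
    ResourceId=resource_id,
    ScalableDimension="sagemaker:inference-component:DesiredCopyCount",
    PolicyType="TargetTrackingScaling",
    TargetTrackingScalingPolicyConfiguration={
        "TargetValue": 10.0,
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "SageMakerInferenceComponentInvocationsPerCopy"
        },
        "ScaleInCooldown": 300,
    },
)
# Scaling back up from zero additionally requires a step scaling policy tied
# to a CloudWatch alarm (for example, on no-capacity invocation failures);
# see the documentation linked above.
```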
 

Read more


Amazon Q Developer Pro tier introduces a new, improved dashboard for user activity

Amazon Q Developer Pro tier now provides a detailed usage activity dashboard that gives administrators greater visibility into how their subscribed users are leveraging Amazon Q Developer features and improving their productivity. The dashboard offers insights into user activity metrics, including the number of AI-generated code lines and the acceptance rate of individual features such as inline code and chat suggestions in the developer's integrated development environment (IDE). This information enables administrators to monitor usage and evaluate productivity gains achieved through Amazon Q Developer.

New customers will have this usage dashboard enabled by default. Existing Amazon Q Developer administrators can activate the dashboard through the AWS Management Console to start tracking detailed usage metrics. Existing customers can also continue to view a copy of the previous set of metrics and usage data, in addition to the new detailed usage metrics dashboard. To learn more about this feature, visit Amazon Q Developer User Guide.

These improvements come in conjunction with the recently launched per-user activity report and last activity date features for Amazon Q Developer admins, further enhancing visibility and control over user activity.

To learn more about Amazon Q Developer Pro tier subscription management features, visit the AWS Console.

Read more


Announcing InlineAgents for Agents for Amazon Bedrock

Agents for Amazon Bedrock now offers InlineAgents, a new feature that allows developers to define and configure Bedrock Agents dynamically at runtime. This enhancement provides greater flexibility and control over agent capabilities, enabling users to specify foundation models, instructions, action groups, guardrails, and knowledge bases on-the-fly without relying on pre-configured control plane settings.

With InlineAgents, developers can easily customize their agents for specific tasks or user requirements without creating new agent versions or preparing the agent. This feature enables rapid experimentation with different AI configurations, trying out various agent features, and dynamically updating the tools available to an agent, all without creating separate agents.

InlineAgents is available through the new InvokeInlineAgent API in the Amazon Bedrock Agent Runtime service. This feature maintains full compatibility with existing Bedrock Agents while offering improved flexibility and ease of use. InlineAgents is now available in all AWS Regions where Amazon Bedrock Agents is supported.

To learn more about InlineAgents and how to get started, see the Amazon Bedrock Developer Guide and the AWS SDK documentation for the InvokeInlineAgent API and a code sample to create dynamic tooling.
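
A minimal sketch of an InvokeInlineAgent call with boto3 follows; parameter names track this announcement, and the event-stream handling is an assumption to verify in the SDK documentation.

```python
# Minimal sketch: define the agent's model and instructions at request time,
# with no pre-configured agent resource.
import boto3

runtime = boto3.client("bedrock-agent-runtime", region_name="us-east-1")

response = runtime.invoke_inline_agent(
    sessionId="session-001",
    foundationModel="anthropic.claude-3-5-sonnet-20240620-v1:0",
    instruction="You are a support agent. Answer questions about order status.",
    inputText="Where is order 12345?",
)

# The completion arrives as an event stream of text chunks.
for event in response["completion"]:
    if "chunk" in event:
        print(event["chunk"]["bytes"].decode("utf-8"), end="")
```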

Read more


Amazon SageMaker launches Multi-Adapter Model Inference

Today, Amazon SageMaker introduces new multi-adapter inference capabilities that unlock exciting possibilities for customers using pre-trained language models. This feature allows you to deploy hundreds of fine-tuned LoRA (Low-Rank Adaptation) model adapters behind a single endpoint, dynamically loading the appropriate adapters in milliseconds based on the request. This enables you to efficiently host many specialized LoRA adapters built on a common base model, delivering high throughput and cost-savings compared to deploying separate models.

With multi-adapter inference, you can quickly customize pre-trained models to meet diverse business needs. For example, marketing and SaaS companies can personalize AI/ML applications using each customer's unique images, communication style, and documents to generate tailored content in seconds. Similarly, enterprises in industries like healthcare and financial services can reuse a common LoRA-powered base model to tackle a variety of specialized tasks, from medical diagnosis to fraud detection, by simply swapping in the appropriate fine-tuned adapter. This flexibility and efficiency unlocks new opportunities to deploy powerful, adaptable AI across your organization.

The multi-adapter inference feature is generally available in: Asia Pacific (Tokyo, Seoul, Mumbai, Singapore, Sydney, Jakarta), Canada (Central), Europe (Frankfurt, Stockholm, Ireland, London), Middle East (UAE), South America (Sao Paulo), US East (N. Virginia, Ohio), and US West (Oregon).

To get started, refer to the Amazon SageMaker developer guide for information on using LoRA and managing model adapters.
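
As a minimal sketch of the invocation path, assuming the base model and each LoRA adapter have already been deployed as inference components on a shared endpoint, selecting an adapter is a matter of naming its inference component on the request; the names below are placeholders.

```python
# Minimal sketch: route a request to a specific LoRA adapter hosted behind a
# shared SageMaker endpoint by naming its inference component.
import boto3
import json

runtime = boto3.client("sagemaker-runtime", region_name="us-east-1")

response = runtime.invoke_endpoint(
    EndpointName="shared-llm-endpoint",
    InferenceComponentName="fraud-detection-adapter",  # the adapter to use for this request
    ContentType="application/json",
    Body=json.dumps({"inputs": "Flag anything unusual in this transaction: ..."}),
)
print(response["Body"].read().decode("utf-8"))
```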
 

Read more


Announcing Cross Account Data Store Read Access for AWS HealthOmics

We are excited to announce that AWS HealthOmics sequence stores now support cross-account read access to simplify data sharing and tool integration. AWS HealthOmics is a fully managed service that empowers healthcare and life science organizations to store, query, and analyze omics data to generate insights that improve health and drive scientific discoveries. With this release, customers can enable secure data sharing with partners, while maintaining auditability and compliance frameworks.

Cross-account read access through the S3 API enables customers to write resource policies to manage sharing and restrict data reading based on their needs. Through the use of tag propagation and tag-based access control, users can create policies that share read access beyond their account while having a scalable mechanism to granularly restrict files based on their compliance structures. In addition, S3 access logs can be used to audit and validate access, ensuring the data customers manage remains properly controlled.

Cross-account S3 API access is now supported in all regions where AWS HealthOmics is available: US East (N. Virginia), US West (Oregon), Europe (Frankfurt, Ireland, London), Asia Pacific (Singapore), and Israel (Tel Aviv).

To get started, see the AWS HealthOmics documentation.
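
As a hedged sketch, granting a partner account read access to a sequence store's S3 access point might look like the following; the PutS3AccessPolicy call and the ARN formats are assumptions to verify in the HealthOmics API reference.

```python
# Hedged sketch: attach a resource policy that grants another account
# read-only access to a sequence store's S3 access point.
import boto3
import json

omics = boto3.client("omics", region_name="us-east-1")

access_point_arn = "arn:aws:s3:us-east-1:111122223333:accesspoint/my-seq-store-ap"  # placeholder
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"AWS": "arn:aws:iam::222233334444:root"},  # partner account
        "Action": ["s3:GetObject", "s3:ListBucket"],
        "Resource": [access_point_arn, f"{access_point_arn}/object/*"],
    }],
}

omics.put_s3_access_policy(
    s3AccessPointArn=access_point_arn,
    s3AccessPolicy=json.dumps(policy),
)
```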
 

Read more


AWS Announces Amazon Q account resources chat in the AWS Console Mobile App

Today, Amazon Web Services (AWS) is announcing the general availability of Amazon Q Developer’s AWS account resources chat capability in the AWS Console Mobile Application. With this capability, you can use your device’s voice input and output capabilities along with natural language prompts to list resources in your AWS account, get specific resource details, and ask about related resources while on-the-go.

From the Amazon Q tab in the AWS Console Mobile App, you can ask Q to “list my running EC2 instances in us-east-1” or “list my S3 buckets” and Amazon Q returns a list of resource details, along with a summary. You can ask “what Amazon EC2 instances is Amazon CloudWatch alarm <name> monitoring” or ask “what related resources does my ec2 instance <id> have?” and Amazon Q will respond with specific resource details in a mobile friendly format.

The Console Mobile App lets users view and manage a select set of resources to stay informed and connected with their AWS resources while on-the-go. Visit the product page for more information about the Console Mobile Application.
 

Read more


Amazon Q Business now supports integrations to Asana (Preview)

Amazon Q Business now supports, in preview, a connector to Asana, a leading enterprise work management platform. This managed connector makes it easy for Amazon Q Business users to synchronize data from their Asana instance with their Amazon Q index. When connected, Amazon Q Business can help users answer questions and generate summaries with context from Asana projects.

Amazon Q Business is a generative AI–powered assistant that can answer questions, provide summaries, generate content, and securely complete tasks based on data and information in your enterprise systems. It empowers employees to be more creative, data-driven, efficient, prepared, and productive. The over 40 connectors supported by Amazon Q Business can be scheduled to automatically sync your index with your selected data sources, so you're always securely searching through the most up-to-date content.

To learn more about Amazon Q Business and its integration with Asana and Google Calendar, visit the Amazon Q Business connectors page here. These new connectors are available in all AWS Regions where Amazon Q Business is available.
 

Read more


Amazon Q Business now supports an integration to Google Calendar (Preview)

Amazon Q Business now supports a connector to Google Calendar. This expands Amazon Q Business's support of Google Workspace to include Google Drive, Gmail, and now Google Calendar. Each managed connector makes it easy to synchronize your data with your Amazon Q index.

Amazon Q Business is a generative AI–powered assistant that can answer questions, provide summaries, generate content, and securely complete tasks based on data and information in your enterprise systems. It empowers employees to be more creative, data-driven, efficient, prepared, and productive. The over 40 connectors supported by Amazon Q Business can be scheduled to automatically sync your index with your selected data sources, so you're always securely searching through the most up-to-date content.

To learn more about Amazon Q Business and its integration with Asana and Google Calendar, visit the Amazon Q Business connectors page here. These new connectors are available in all AWS Regions where Amazon Q Business is available.
 

Read more


Introducing Prompt Optimization in Preview in Amazon Bedrock

Today we are announcing the preview launch of Prompt Optimization in Amazon Bedrock. Prompt Optimization rewrites prompts for higher quality responses from foundation models.

Prompt engineering is the process of designing prompts that guide foundation models to generate relevant responses. These prompts need to be tailored for each specific foundation model, following best practices and guidelines for each model. Developers can now use Prompt Optimization in Amazon Bedrock to rewrite their prompts for improved performance on Claude 3.5 Sonnet, Claude Sonnet, Claude Opus, Claude Haiku, Llama 3 70B, Llama 3.1 70B, Mistral Large 2, and Titan Text Premier models. Developers can easily compare the performance of optimized prompts against the original prompts without the need for any deployment. All optimized prompts are saved as part of Prompt Builder for developers to use in their generative AI applications.

Amazon Bedrock Prompt Optimization is now available in preview. Learn more here.
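
As a heavily hedged sketch, calling Prompt Optimization programmatically might look like the following; the OptimizePrompt request shape and streamed event names are assumptions based on this announcement and should be verified against the API reference.

```python
# Hedged sketch: ask Bedrock to rewrite a prompt for a specific target model.
import boto3

runtime = boto3.client("bedrock-agent-runtime", region_name="us-east-1")

response = runtime.optimize_prompt(
    input={"textPrompt": {"text": "Summarize the customer complaint and propose a fix."}},
    targetModelId="anthropic.claude-3-5-sonnet-20240620-v1:0",
)

# The optimized prompt is returned as a stream of events.
for event in response["optimizedPrompt"]:
    if "optimizedPromptEvent" in event:
        print(event["optimizedPromptEvent"])
```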
 

Read more


Amazon Polly launches more synthetic generative voices

Today, we are excited to announce the general availability of seven highly expressive Amazon Polly Generative voices in English, French, Spanish, German, and Italian.

Amazon Polly is a fully-managed service that turns text into lifelike speech, allowing you to create applications that talk and to build engaging speech-enabled products depending on your business needs.

Amazon Polly releases two new female-sounding voices (Indian English Kajal and Italian Bianca) and five new male-sounding generative voices: US Spanish Pedro, Mexican Spanish Andrés, European Spanish Sergio, German Daniel, and French Rémi. This launch not only expands the Polly Generative engine to twenty voices, but also offers a unique feature: the five new male-sounding voices have the same voice identity as the US English voice Matthew. The polyglot capability of the voice combined with high expressivity will be useful for customers with global reach. The same voice identity can speak multiple languages natively so that end customers enjoy an accent-free switch from one language to another.

Kajal, Bianca, Pedro, Andrés, Sergio, Daniel, and Rémi generative voices are accessible in the US East (N. Virginia), Europe (Frankfurt), and US West (Oregon) regions and complement the other types of voices that are already available in the same regions.

To hear how Polly voices sound, go to Amazon Polly Features. For more details on the Polly offerings and use, please read the Amazon Polly documentation and visit our pricing page.
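
As a minimal sketch, synthesizing speech with one of the new generative voices looks like the following; the voice and language follow the list above.

```python
# Minimal sketch: synthesize a Spanish greeting with the new generative
# voice Pedro and save it as an MP3 file.
import boto3

polly = boto3.client("polly", region_name="us-east-1")

response = polly.synthesize_speech(
    Engine="generative",
    VoiceId="Pedro",        # new US Spanish male-sounding generative voice
    LanguageCode="es-US",
    OutputFormat="mp3",
    Text="Hola, gracias por llamar. ¿En qué puedo ayudarle hoy?",
)

with open("greeting.mp3", "wb") as f:
    f.write(response["AudioStream"].read())
```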
 

Read more


Accelerate AWS CloudFormation troubleshooting with Amazon Q Developer assistance

AWS CloudFormation now offers generative AI assistance powered by Amazon Q Developer to help troubleshoot unsuccessful CloudFormation deployments. This new capability provides easy-to-understand analysis and actionable steps to simplify the resolution of the most common resource provisioning errors encountered during CloudFormation deployments.

When creating or modifying a CloudFormation stack, CloudFormation can encounter errors in resource provisioning, such as missing required parameters for an EC2 instance or inadequate permissions. Previously, troubleshooting a failed stack operation could be a time-consuming process. After identifying the root cause of the failure, you had to search through blogs and documentation for solutions and determine the next steps, leading to longer resolution times. Now, when you review a failed stack operation in the CloudFormation Console, CloudFormation automatically highlights the likely root cause of the failure. You can click the "Diagnose with Q" button in the error alert box and Amazon Q Developer will provide a human-readable analysis of the error, helping you understand what went wrong. If you need further assistance, you can click the "Help me resolve" button to receive actionable resolution steps tailored to your specific failure scenario, helping you accelerate resolution of the error.

To get started, open the CloudFormation Console and navigate to the stack events tab for a provisioned stack. This feature is available in AWS Regions where AWS CloudFormation and Amazon Q Developer are available. Refer to the AWS Region table for service availability details. Visit our user guide to learn more about this feature.
 

Read more


Amazon Q generative SQL in Amazon Redshift Query Editor now available in additional AWS regions

Amazon Q generative SQL in Amazon Redshift Query Editor is now available in the AWS South America (Sao Paulo), Europe (London), and Canada (Central) Regions. Amazon Redshift Query Editor is an out-of-the-box web-based SQL editor for Amazon Redshift; Amazon Q generative SQL simplifies SQL query authoring and increases your productivity by allowing you to express SQL queries in natural language and receive SQL code recommendations. Furthermore, it allows you to get insights faster without extensive knowledge of your organization's complex Amazon Redshift database metadata.

Amazon Q generative SQL uses generative artificial intelligence (AI) to analyze user intent, SQL query patterns, and schema metadata to identify common SQL query patterns directly within Amazon Redshift, accelerating the SQL query authoring process for users and reducing the time required to derive actionable data insights. Amazon Q generative SQL provides a conversational interface where users can submit SQL queries in natural language, within the scope of their current data permissions. For example, when you submit a question such as 'Find total revenue by region,' Amazon Q generative SQL will recognize and suggest the appropriate SQL code for this frequent query pattern by joining multiple Amazon Redshift tables, thus saving time and decreasing the likelihood of errors. You can either accept the query or enhance your prior query by asking additional questions.

To learn more about pricing, visit the Amazon Q Developer pricing page. See the documentation to get started.
 

Read more


AWS App Studio is now generally available

AWS App Studio, a generative AI–powered app-building service that uses natural language to build enterprise-grade applications, is now generally available. App Studio helps technical professionals (such as IT project managers, data engineers, enterprise architects, and solution architects) build intelligent, secure, and scalable applications without requiring deep software development skills. App Studio handles deployments, operations, and maintenance, allowing users to focus on solving business challenges and boosting productivity.

App Studio is the fastest and easiest way to build enterprise-grade applications. Getting started is simple. Users describe the application they need in natural language, and App Studio’s generative AI–powered assistant creates an application with a multipage UI, a data model, and business logic. Builders can easily modify applications using natural language, or with App Studio’s visual canvas. They can also enhance their applications with generative AI using built-in components to generate content, summarize information, and analyze files. Applications can connect to existing data using built-in connectors for AWS (such as Amazon Aurora, Amazon DynamoDB, and Amazon S3) and Salesforce, and also hundreds of third-party services (such as HubSpot, Jira, Twilio, and Zendesk) using an API connector. Users can customize the look and feel of their applications to align with brand guidelines by selecting their logo and company color palette. With App Studio it’s free to build—you only pay for the time employees spend using the published applications, saving up to 80% compared to other comparable offerings.

App Studio is generally available in the following AWS Regions: US West (Oregon) and Europe (Ireland).

To learn more and get started, visit AWS App Studio, review the documentation, and read the announcement.

Read more


Three new Long-Form Voices

The Amazon Polly Long-Form engine now introduces two voices in Spanish and one in US English.

Amazon Polly is a service that turns text into lifelike speech, allowing our customers to build speech-enabled products matching their business needs. Today, we add three new long-form voices to our premium Polly Text-to-Speech (TTS) line of products that we offer for synthesizing speech for longer content, such as articles, stories, or training materials.

Male-sounding US English voice Patrick, female-sounding Spanish voice Alba, and male-sounding Spanish voice Raúl can now read long texts, such as blogs, articles, or learning materials. We trained them using cutting-edge technology that uses semantic cues to modify the voice's speaking style depending on the context. The result is natural-sounding, expressive voices that not only provide our customers with the ability to synthesize their content in human-like Spanish and English, but also expand their use cases to long content reading.

Patrick, Alba, and Raúl long-form voices are accessible in the US East (N. Virginia) region and complement the other long-form voices that are already available for developing speech products for a variety of use cases.

To hear how Polly voices sound, go to Amazon Polly Features. For more details on the Polly offerings and use, please read the Amazon Polly documentation and visit our pricing page.
 

Read more


SageMaker Model Registry now supports model lineage to improve model governance

Amazon SageMaker Model Registry now supports tracking machine learning (ML) model lineage, enabling you to automatically capture and retain information about the steps of an ML workflow, from data preparation and training to model registration and deployment.

Customers use Amazon SageMaker Model Registry as a purpose-built metadata store to manage the entire lifecycle of ML models. With this launch, data scientists and ML engineers can now easily capture and view the model lineage details such as datasets, training jobs, and deployment endpoints in Model Registry. When they register a model, Model Registry begins tracking the lineage of the model from development to deployment. This creates an audit trail that enables traceability and reproducibility, providing visibility across the model lifecycle to improve model governance.

This capability is available in all AWS Regions where Amazon SageMaker Model Registry is currently available, except the AWS GovCloud (US) Regions. To learn more, see View Model Lineage Details in Amazon SageMaker Studio.
 

Read more


Amazon Q Developer Pro tier adds enhanced administrator capabilities to view user activity

The Amazon Q Developer Pro tier now offers administrators greater visibility into the activity from subscribed users. Amazon Q Developer Pro tier administrators can now view user last activity information and enable daily user activity reports.

Organization administrators can now view the last activity information for each user's subscription and applications within that subscription, enabling better monitoring of usage. This allows inactive subscriptions to be easily identified through filtering and sorting across all associated applications. Member account administrators can view the last active date specific to the users, applications, and accounts they manage. The last active date is only shown for activity on or after October 30, 2024.

Additionally, member account administrators can enable detailed per-user activity reports in the Amazon Q Developer settings by specifying an Amazon S3 bucket where the reports should be published. When enabled, you will receive a daily report in Amazon S3 with detailed user activity metrics, such as the number of messages sent, and AI lines of code generated.

To learn more about Amazon Q Developer Pro tier subscription management features, visit the AWS Console.

Read more


Amazon Bedrock now available in the AWS GovCloud (US-East) Region

Beginning today, customers can use Amazon Bedrock in the AWS GovCloud (US-East) region to easily build and scale generative AI applications using a variety of foundation models (FMs) as well as powerful tools to build generative AI applications. Visit the Amazon Bedrock documentation pages for information about model availability and cross-region inferencing.

Amazon Bedrock is a fully managed service that offers a choice of high-performing large language models (LLMs) and other FMs from leading AI companies like AI21 Labs, Anthropic, Cohere, Meta, Mistral AI, Stability AI, as well as Amazon via a single API. Amazon Bedrock also provides a broad set of capabilities customers need to build generative AI applications with security, privacy, and responsible AI built in. These capabilities help you build tailored applications for multiple use cases across different industries, helping organizations unlock sustained growth from generative AI while ensuring customer trust and data governance.

To get started, visit the Amazon Bedrock page and see the Amazon Bedrock documentation for more details.

Read more


Amazon Bedrock Prompt Management is now generally available

Earlier this year, we launched Amazon Bedrock Prompt Management in preview to simplify the creation, testing, versioning, and sharing of prompts. Today, we're announcing its general availability and adding several new key features. First, we are introducing the ability to easily run prompts stored in your AWS account. The Amazon Bedrock Runtime APIs Converse and InvokeModel now support executing a prompt using a prompt identifier. Next, while creating and storing prompts, you can now specify a system prompt, multiple user/assistant messages, and tool configuration in addition to the model choice and inference configuration available in preview. This enables advanced prompt engineers to leverage function-calling capabilities provided by certain model families, such as the Anthropic Claude models. You can now store prompts for Bedrock Agents in addition to Foundation Models, and we have also introduced the ability to compare two versions of a prompt to quickly review the differences between versions. Finally, we now support custom metadata to be stored with prompts via the Bedrock SDK, enabling you to store metadata such as author, team, and department to meet your enterprise prompt management needs.

Amazon Bedrock is a fully managed service that offers a choice of high-performing large language models (LLMs) and other FMs from leading AI companies like AI21 Labs, Anthropic, Cohere, Meta, Mistral AI, Stability AI, as well as Amazon via a single API.

Learn more here and in our documentation. Read our blog here.
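
As a minimal sketch, running a stored prompt by its identifier might look like the following; the prompt ARN and variable name are placeholders, and passing a prompt ARN as the Converse modelId is an assumption based on this announcement.

```python
# Hedged sketch: execute a managed prompt directly, filling its template
# variables with promptVariables.
import boto3

runtime = boto3.client("bedrock-runtime", region_name="us-east-1")

response = runtime.converse(
    modelId="arn:aws:bedrock:us-east-1:123456789012:prompt/PROMPT12345:1",  # prompt ARN + version
    promptVariables={"topic": {"text": "serverless databases"}},
)
print(response["output"]["message"]["content"][0]["text"])
```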
 

Read more


Fine-tuning for Anthropic’s Claude 3 Haiku in Amazon Bedrock is now generally available

Fine-tuning for Anthropic's Claude 3 Haiku model in Amazon Bedrock is now generally available. Amazon Bedrock is the only fully managed service that provides you with the ability to fine-tune Claude models. Claude 3 Haiku is Anthropic's most compact model, and is one of the most affordable and fastest options on the market for its intelligence category, according to Anthropic. By providing your own task-specific training dataset, you can fine-tune and customize Claude 3 Haiku to boost model accuracy, quality, and consistency to further tailor generative AI for your business.

Fine-tuning allows Claude 3 Haiku to excel in areas crucial to your business compared to more general models by encoding company and domain knowledge. By fine-tuning Claude 3 Haiku within your secure AWS environment and adapting its knowledge to your exact business requirements, you can generate higher-quality results and create unique user experiences that reflect your company's proprietary information, brand, products, and more. You can also enhance performance for domain-specific actions such as classification, interactions with custom APIs, or industry-specific data interpretation. Amazon Bedrock makes a separate copy of the base foundation model that is accessible only by you and trains this private copy of the model.

Fine-tuning for Anthropic's Claude 3 Haiku in Amazon Bedrock is now generally available in the US West (Oregon) AWS Region. To learn more, read the launch blog, technical blog, and documentation. To get started with Claude 3 in Amazon Bedrock, visit the Amazon Bedrock console.

Read more


Today, AWS announced support for a new Apache Flink connector for Amazon Managed Service for Prometheus. The new connector, contributed by AWS for the Apache Flink open source project, adds Amazon Managed Service for Prometheus as a new destination for Apache Flink. You can now manage your Prometheus metrics data cardinality by pre-processing raw data with Apache Flink to build real-time observability with Amazon Managed Service for Prometheus and Grafana.

Amazon Managed Service for Prometheus is a secure, serverless, scalable, Prometheus-compatible monitoring service. You can use the same open-source Prometheus data model and query language that you use today to monitor the performance of your workloads without having to manage the underlying infrastructure. Apache Flink connectors are software components that move data into and out of an Amazon Managed Service for Apache Flink application. You can use the new connector to send processed data to an Amazon Managed Service for Prometheus destination starting with Apache Flink version 1.19. With Amazon Managed Service for Apache Flink you can transform and analyze data in real time. There are no servers and clusters to manage, and there is no compute and storage infrastructure to set up.

You can learn more about Amazon Managed Service for Apache Flink and Amazon Managed Service for Prometheus in our documentation. To learn more about open source Apache Flink connectors visit the official website. For Amazon Managed Service for Apache Flink and Amazon Managed Service for Prometheus region availability, refer to the AWS Region Table.

Read more


Today, AWS announced support for a new Apache Flink connector for Amazon Simple Queue Service. The new connector, contributed by AWS for the Apache Flink open source project, adds Amazon Simple Queue Service as a new destination for Apache Flink. You can use the new connector to send processed data from Amazon Managed Service for Apache Flink to Amazon Simple Queue Service as messages, using Apache Flink, a popular framework and engine for processing and analyzing streaming data.

Amazon Managed Service for Apache Flink makes it easier to transform and analyze streaming data in real time with Apache Flink. Apache Flink is an open source framework and engine for processing data streams. Amazon Managed Service for Apache Flink reduces the complexity of building and managing Apache Flink applications and integrates with Amazon Managed Streaming for Apache Kafka (Amazon MSK), Amazon Kinesis Data Streams, Amazon OpenSearch Service, Amazon DynamoDB streams, Amazon S3, custom integrations, and more using built-in connectors.

Amazon Simple Queue Service offers a secure, durable, and available hosted queue that lets you integrate and decouple distributed software systems and components. Amazon SQS offers common constructs such as dead-letter queues and cost allocation tags.

You can learn more about Amazon Managed Service for Apache Flink and Amazon Simple Queue Service in our documentation. To learn more about open source Apache Flink connectors visit the official website. For Amazon Managed Service for Apache Flink and Amazon Simple Queue Service region availability, refer to the AWS Region Table.

Read more


Amazon Managed Service for Apache Flink now offers a new Apache Flink connector for Amazon Kinesis Data Streams. This open-source connector, contributed by AWS, supports Apache Flink 2.0 and provides several enhancements. It enables in-order reads during stream scale-up or scale-down, supports Apache Flink's native watermarking, and improves observability through unified connector metrics. Additionally, the connector uses the AWS SDK for Java 2.x, which provides enhanced performance and security features and a native retry strategy.

Amazon Kinesis Data Streams is a serverless data streaming service that enables customers to capture, process, and store data streams at any scale. Amazon Managed Service for Apache Flink makes it easier to transform and analyze streaming data in real time with Apache Flink without having to manage servers or clusters. You can use the new connector to consume data from a Kinesis Data Stream source for real-time processing in your Apache Flink application and can also send data back to a Kinesis Data Streams destination. You can use the new connector to read data from a Kinesis data stream starting with Apache Flink version 1.19.

To learn more about the Apache Flink Amazon Kinesis Data Streams connector, visit the official Apache Flink documentation. You can also check the GitHub repositories for the Apache Flink AWS connectors.
 

Read more


Today, AWS announced support for a new Apache Flink connector for Amazon DynamoDB. The new connector, contributed by AWS for the Apache Flink open source project, adds Amazon DynamoDB Streams as a new source for Apache Flink. You can now process DynamoDB streams events with Apache Flink, a popular framework and engine for processing and analyzing streaming data.

Amazon DynamoDB is a serverless, NoSQL database service that enables you to develop modern applications at any scale. DynamoDB Streams provides a time-ordered sequence of item-level changes (insert, update, and delete) in a DynamoDB table. With Amazon Managed Service for Apache Flink, you can transform and analyze DynamoDB streams data in real time using Apache Flink and integrate applications with other AWS services such as Amazon S3, Amazon OpenSearch, Amazon Managed Streaming for Apache Kafka, and more. Apache Flink connectors are software components that move data into and out of an Amazon Managed Service for Apache Flink application. You can use the new connector to read data from a DynamoDB stream starting with Apache Flink version 1.19. With Amazon Managed Service for Apache Flink there are no servers and clusters to manage, and there is no compute and storage infrastructure to set up.

The Apache Flink repo for AWS connectors can be found here. For detailed documentation and setup instructions, visit our Documentation Page.

Read more


Starting today, customers can use Amazon Managed Service for Apache Flink in Asia Pacific (Kuala Lumpur) Region to build real-time stream processing applications.

Amazon Managed Service for Apache Flink makes it easier to transform and analyze streaming data in real time with Apache Flink. Apache Flink is an open source framework and engine for processing data streams. Amazon Managed Service for Apache Flink reduces the complexity of building and managing Apache Flink applications and integrates with Amazon Managed Streaming for Apache Kafka (Amazon MSK), Amazon Kinesis Data Streams, Amazon OpenSearch Service, Amazon DynamoDB streams, Amazon Simple Storage Service (Amazon S3), custom integrations, and more using built-in connectors.

For a list of the AWS Regions where Amazon Managed Service for Apache Flink is available, please see the AWS Region Table.

You can learn more about Amazon Managed Service for Apache Flink here.

Read more


amazon-memorydb

Announcing the general availability of Amazon MemoryDB Multi-Region

Today, AWS announces the general availability of Amazon MemoryDB Multi-Region, a fully managed, active-active, multi-Region database that lets you build multi-Region applications with up to 99.999% availability and microsecond read and single-digit millisecond write latencies. MemoryDB is a fully managed, Valkey- and Redis OSS-compatible database service providing multi-AZ durability, microsecond read and single-digit millisecond write latency, and high throughput. Valkey is an open-source, high-performance key-value data store stewarded by the Linux Foundation, and is a drop-in replacement for Redis OSS.

With MemoryDB Multi-Region, you can build highly available multi-Region applications for increased resiliency. It offers active-active replication so you can serve reads and writes locally from the Regions closest to your customers with microsecond read and single-digit millisecond write latency. MemoryDB Multi-Region asynchronously replicates data between Regions and typically propagates data within a second. It automatically resolves update conflicts and corrects data divergence issues, so you can focus on building your application.       

Get started with MemoryDB Multi-Region from the AWS Management Console or using the latest AWS SDK or AWS Command Line Interface (AWS CLI). First, you need to identify the set of AWS Regions where you want to replicate your data. Then choose an AWS Region to create a new multi-Region cluster and a regional cluster. Once the first regional cluster is created, you can add up to four additional Regions to the multi-Region cluster.  
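
As a rough sketch of those steps with boto3 (the API and parameter names follow the launch documentation as we understand it; engine version, node type, names, and the generated multi-Region cluster name are all placeholders):

    import boto3

    memorydb = boto3.client("memorydb", region_name="us-east-1")

    # Step 1: create the multi-Region cluster in the first Region.
    memorydb.create_multi_region_cluster(
        MultiRegionClusterNameSuffix="demo",
        Engine="valkey",
        NodeType="db.r7g.xlarge",
        NumShards=2,
    )

    # Step 2: in each chosen Region, create a regional cluster that joins it,
    # referencing the generated multi-Region cluster name from step 1.
    memorydb.create_cluster(
        ClusterName="demo-us-east-1",
        MultiRegionClusterName="abcdef-demo",  # placeholder: name returned by step 1
        NodeType="db.r7g.xlarge",
        ACLName="open-access",
    )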

MemoryDB Multi-Region is available for Valkey in the following AWS Regions: US East (N. Virginia), US East (Ohio), US West (N. California), US West (Oregon), Asia Pacific (Mumbai), Asia Pacific (Seoul), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Europe (Frankfurt), Europe (Ireland), and Europe (London). To learn more, please visit the MemoryDB features page, getting started blog, and documentation. For pricing, please refer to the MemoryDB pricing page.

Read more


amazon-mq

Amazon MQ is now available in the AWS Asia Pacific (Malaysia) region

Amazon MQ is now available in the AWS Asia Pacific (Malaysia) region. With this launch, Amazon MQ is now available in 34 regions.

Amazon MQ is a managed message broker service for Apache ActiveMQ and RabbitMQ that makes it easier to set up and operate message brokers on AWS. Amazon MQ reduces your operational responsibilities by managing the provisioning, setup, and maintenance of message brokers. Because Amazon MQ connects to your current applications with industry-standard APIs and protocols, you can more easily migrate to AWS without having to rewrite or modify your applications.

For more information, please visit the Amazon MQ product page, and see the AWS Region Table for complete regional availability.

Read more


amazon-msk

Express brokers for Amazon MSK is now generally available

Today, AWS announces the general availability of Express brokers for Amazon Managed Streaming for Apache Kafka (Amazon MSK). Express brokers are a new broker type for Amazon MSK Provisioned designed to deliver up to 3x more throughput per broker, scale up to 20x faster, and reduce recovery time by 90% as compared to standard Apache Kafka brokers. Express brokers come preconfigured with Kafka best practices by default, support all Kafka APIs, and provide the same low-latency performance that Amazon MSK customers expect, so they can continue using existing client applications without any changes.

With Express brokers, customers can provision, scale up, and scale down Kafka cluster capacity in minutes, offload storage management with virtually unlimited pay-as-you-go storage, and build highly resilient applications. Customers can also continue using all of the key Amazon MSK features, including security, connectivity, and observability options, as well as popular integrations, including Amazon MSK Connect, Amazon Simple Storage Service (Amazon S3), AWS Glue Schema Registry, and more. Express brokers are currently available on Kafka version 3.6 and come in three sizes of Graviton3-based M7g instances: large, 4xlarge, and 16xlarge. Each broker is charged at an hourly rate, with storage and ingested data charged separately on a pay-as-you-go basis.
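
For illustration, provisioning an Express-broker cluster looks much like a standard MSK Provisioned cluster, with an Express instance type. A hedged boto3 sketch (the express.m7g.large type name is our reading of the launch naming; subnets and security groups are placeholders):

    import boto3

    kafka = boto3.client("kafka", region_name="us-east-1")

    kafka.create_cluster_v2(
        ClusterName="express-demo",
        Provisioned={
            "KafkaVersion": "3.6.0",
            "NumberOfBrokerNodes": 3,
            "BrokerNodeGroupInfo": {
                # Express broker size; assumption based on the launch description.
                "InstanceType": "express.m7g.large",
                "ClientSubnets": ["subnet-aaa", "subnet-bbb", "subnet-ccc"],
                "SecurityGroups": ["sg-0123456789abcdef0"],
            },
        },
    )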

Express brokers are available in the following AWS Regions: US East (N. Virginia), US East (Ohio), US West (Oregon), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Europe (Frankfurt), Europe (Ireland), and Europe (Stockholm).

To learn more, check out the Amazon MSK overview page, pricing page, and developer guide.

To learn more about Express brokers, visit this AWS blog post.

 

Read more


Amazon MSK now supports vector embedding generation using Amazon Bedrock

Amazon MSK (Managed Streaming for Apache Kafka) now supports new Amazon Managed Service for Apache Flink blueprints to generate vector embeddings using Amazon Bedrock, making it easier to build real-time AI applications powered by up-to-date, contextual data. This blueprint simplifies the process of incorporating the latest data from your Amazon MSK streaming pipelines into your generative AI models, eliminating the need to write custom code to integrate real-time data streams, vector databases, and large language models.

With just a few clicks, customers can configure the blueprint to continuously generate vector embeddings for their Amazon MSK data streams using Bedrock's embedding models, then index those embeddings in Amazon OpenSearch. This allows customers to combine the context from real-time data with Bedrock's powerful large language models to generate accurate, up-to-date AI responses without writing custom code. Customers can also choose to improve the efficiency of data retrieval using built-in support for data chunking techniques from LangChain, an open-source library, supporting high-quality inputs for model ingestion. The blueprint manages the data integration and processing between MSK, the chosen embedding model, and the OpenSearch vector store, allowing customers to focus on building their AI applications rather than managing the underlying integration.

Real-time vector embedding blueprint is generally available in the US East (N. Virginia), US East (Ohio), US West (Oregon), Europe (Paris), Europe (London), Europe (Ireland) and South America (Sao Paulo) AWS Regions. Visit the Amazon MSK documentation for the list of additional Regions, which will be supported over the next few weeks. To learn more about how to use the blueprint to generate real-time vector embeddings from your Amazon MSK data, visit the AWS blog.

Read more


amazon-mwaa

Amazon MWAA adds smaller environment size

Amazon Managed Workflows for Apache Airflow (MWAA) now offers a micro environment size, giving customers of the managed service the ability to create multiple, independent environments for development and data isolation at a lower cost.

Amazon MWAA is a managed orchestration service for Apache Airflow that makes it easier to set up and operate end-to-end data pipelines in the cloud. With Amazon MWAA micro environments, customers can now create smaller, cost-effective environments that are more efficient for development use, as well as for teams that require data isolation with lightweight workflow requirements.
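
A micro environment is requested through the same CreateEnvironment API as other sizes, just with the new environment class. A minimal boto3 sketch (the bucket, role, network values, and Airflow version are placeholders; mw1.micro is the class name introduced by this launch):

    import boto3

    mwaa = boto3.client("mwaa", region_name="us-east-1")

    mwaa.create_environment(
        Name="dev-micro",
        EnvironmentClass="mw1.micro",  # new micro size; other classes start at mw1.small
        AirflowVersion="2.10.1",       # placeholder; use a version MWAA supports
        SourceBucketArn="arn:aws:s3:::my-mwaa-bucket",
        DagS3Path="dags",
        ExecutionRoleArn="arn:aws:iam::111122223333:role/MwaaExecutionRole",
        NetworkConfiguration={
            "SubnetIds": ["subnet-aaa", "subnet-bbb"],
            "SecurityGroupIds": ["sg-0123456789abcdef0"],
        },
    )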

You can create a micro size Amazon MWAA environment with just a few clicks in the AWS Management Console in all currently supported Amazon MWAA regions. To learn more about Amazon MWAA environment sizes, visit the Launch Blog. To learn more about Amazon MWAA, visit the Amazon MWAA documentation.


Apache, Apache Airflow, and Airflow are either registered trademarks or trademarks of the Apache Software Foundation in the United States and/or other countries.
 

Read more


amazon-neptune

Amazon Bedrock Knowledge Bases now supports GraphRAG (preview)

Today, we are announcing support for GraphRAG, a new capability in Amazon Bedrock Knowledge Bases that enhances generative AI applications by providing more comprehensive, relevant, and explainable responses using RAG techniques combined with graph data. Amazon Bedrock Knowledge Bases offers fully managed, end-to-end Retrieval-Augmented Generation (RAG) workflows to create highly accurate, low-latency, custom generative AI applications by incorporating contextual information from your company's data sources. Amazon Bedrock Knowledge Bases now offers a fully managed GraphRAG capability with Amazon Neptune Analytics.

Previously, customers faced challenges in conducting exhaustive, multi-step searches across disparate content. By identifying key entities across documents, GraphRAG delivers insights that leverage relationships within the data, enabling improved responses to end users. For example, users can ask a travel application for family-friendly beach destinations with direct flights and good seafood restaurants. Developers building Generative AI applications can enable GraphRAG in just a few clicks by specifying their data sources and choosing Amazon Neptune Analytics as their vector store when creating a knowledge base. This will automatically generate and store vector embeddings in Amazon Neptune Analytics, along with a graph representation of entities and their relationships.
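
In SDK terms, this amounts to creating a knowledge base whose storage configuration points at a Neptune Analytics graph. The sketch below is an assumption-heavy rendering of that call with boto3; the NEPTUNE_ANALYTICS storage type, configuration keys, ARNs, and field names are our guesses at the launch API, not values confirmed by this announcement:

    import boto3

    bedrock_agent = boto3.client("bedrock-agent", region_name="us-east-1")

    bedrock_agent.create_knowledge_base(
        name="travel-graphrag-kb",
        roleArn="arn:aws:iam::111122223333:role/BedrockKbRole",
        knowledgeBaseConfiguration={
            "type": "VECTOR",
            "vectorKnowledgeBaseConfiguration": {
                "embeddingModelArn": "arn:aws:bedrock:us-east-1::foundation-model/amazon.titan-embed-text-v2:0",
            },
        },
        # Choosing Neptune Analytics as the vector store enables GraphRAG.
        storageConfiguration={
            "type": "NEPTUNE_ANALYTICS",
            "neptuneAnalyticsConfiguration": {
                "graphArn": "arn:aws:neptune-graph:us-east-1:111122223333:graph/g-abc123",
                "fieldMapping": {"textField": "text", "metadataField": "metadata"},
            },
        },
    )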

GraphRAG with Amazon Neptune is built right into Amazon Bedrock Knowledge Bases, offering an integrated experience with no additional setup or additional charges beyond the underlying services. GraphRAG is available in AWS Regions where Amazon Bedrock Knowledge Bases and Amazon Neptune Analytics are both available (see current list of supported regions). To learn more, visit the Amazon Bedrock User Guide.

Read more


Today, we’re introducing a new feature for Neptune Analytics that allows customers to easily provision Amazon VPC interface endpoints (interface endpoints) in their Virtual Private Cloud (Amazon VPC). These endpoints provide direct access from on-premises applications over VPN or AWS Direct Connect, and across AWS Regions via VPC peering. With this feature, network engineers can create and manage VPC resources centrally. By leveraging AWS PrivateLink and interface endpoints, development teams can now establish private, secure network connectivity from their applications to Neptune Analytics with simplified configuration.

Previously, development teams had to manually configure complex network settings, leading to operational overhead and potential misconfigurations that could affect security and connectivity. With AWS PrivateLink support for Neptune Analytics, customers can now streamline private connectivity between VPCs, Neptune Analytics, and on-premises data centers using interface endpoints and private IP addresses. Central teams can create and manage PrivateLink endpoints, while development teams use those endpoints for their graphs without needing to manage them directly. This launch allows developers to concentrate on their graph workloads, reducing time-to-value and simplifying overall management.

Please see AWS PrivateLink pricing for the cost details. You can get started with the feature by using AWS API, AWS CLI, or AWS SDK.
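
For example, a central networking team might provision the interface endpoint with the EC2 API. A sketch, assuming the usual com.amazonaws.<region>.<service> PrivateLink service-name pattern for Neptune Analytics (all IDs are placeholders):

    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")

    ec2.create_vpc_endpoint(
        VpcEndpointType="Interface",
        VpcId="vpc-0123456789abcdef0",
        # Service name is an assumption based on the standard naming pattern.
        ServiceName="com.amazonaws.us-east-1.neptune-graph",
        SubnetIds=["subnet-aaa", "subnet-bbb"],
        SecurityGroupIds=["sg-0123456789abcdef0"],
        PrivateDnsEnabled=True,
    )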
 

Read more


Neptune Analytics Adds Support for Seamless Graph Data Import and Export

Today, we’re launching a new feature that enables customers to easily import Parquet data and export Parquet/CSV data to and from their Neptune Analytics graphs. This new capability simplifies the process of loading Parquet data into Neptune Analytics for graph queries and analysis, while also allowing customers to export graph data as Parquet or CSV files. Exported data can then be moved seamlessly to Neptune DB, data lakes, or ML platforms for further exploration and analysis.

Previously, customers faced limited integration options, vendor lock-in concerns, restricted cross-platform flexibility, and difficulty sharing graph data for collaborative analysis. This new export functionality addresses these pain points by providing a seamless, end-to-end experience. The data extraction occurs from a snapshot, ensuring that database performance remains unaffected. With the ability to import and export graph data via APIs, customers can leverage Neptune Analytics to run graph algorithms, update their graphs, and export the data for use in other databases like Neptune, data processing frameworks like Apache Spark, or query services like Amazon Athena. This enhanced flexibility empowers customers to gain deeper insights from their graph data and use it across various tools and environments.
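
As a hedged sketch of the export side with boto3 (the StartExportTask parameters follow the neptune-graph API as we understand it; the graph identifier, role, bucket, and KMS key are placeholders):

    import boto3

    neptune_graph = boto3.client("neptune-graph", region_name="us-east-1")

    # Export the graph as Parquet files to S3; extraction runs from a snapshot.
    neptune_graph.start_export_task(
        graphIdentifier="g-abc123",
        roleArn="arn:aws:iam::111122223333:role/NeptuneAnalyticsExportRole",
        format="PARQUET",  # or "CSV"
        destination="s3://my-bucket/exports/",
        kmsKeyIdentifier="arn:aws:kms:us-east-1:111122223333:key/1234abcd-12ab-34cd-56ef-1234567890ab",
    )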

To learn more about Neptune Analytics and the native export capability, visit the features page and user guide.
 

Read more


AWS Backup now supports Amazon Neptune in three new Regions

Today, we are announcing the availability of AWS Backup support for Amazon Neptune in the Asia Pacific (Jakarta, Osaka) and Africa (Cape Town) Regions. AWS Backup is a policy-based, fully managed and cost-effective solution that enables you to centralize and automate data protection of Amazon Neptune along with other AWS services (spanning compute, storage, and databases) and third-party applications. Together with AWS Organizations, AWS Backup enables you to centrally deploy policies to configure, manage, and govern your data protection activity.

With this launch, AWS Backup support for Amazon Neptune is available in the following Regions: US East (N. Virginia, Ohio), US West (N. California, Oregon), Canada (Central), Europe (Frankfurt, Ireland, London, Paris, Stockholm), Asia Pacific (Hong Kong, Jakarta, Mumbai, Osaka, Seoul, Singapore, Sydney, Tokyo), Middle East (Bahrain, UAE), Africa (Cape Town), Israel (Tel Aviv), South America (Sao Paulo), AWS GovCloud (US-East, US-West), and China (Beijing, Ningxia). For more information on Region availability, feature availability, and pricing, see the AWS Backup pricing page and the AWS Backup feature availability page.

To learn more about AWS Backup support for Amazon Neptune, visit AWS Backup’s technical documentation. To get started, visit the AWS Backup console.

Read more


Amazon Neptune Serverless is now available in 6 additional AWS Regions

Amazon Neptune Serverless is now available in the Europe (Paris), South America (Sao Paulo), Asia Pacific (Jakarta), Asia Pacific (Mumbai), Asia Pacific (Hong Kong), and Asia Pacific (Seoul) AWS Regions.

Amazon Neptune is a fast, reliable, and fully managed graph database service for building and running applications with highly connected datasets, such as knowledge graphs, fraud graphs, identity graphs, and security graphs. If you have unpredictable and variable workloads, Neptune Serverless automatically determines and provisions the compute and memory resources to run the graph database. Database capacity scales up and down based on the application’s changing requirements to maintain consistent performance, saving up to 90% in database costs compared to provisioning at peak capacity.

With today’s launch, Neptune Serverless is available in 19 AWS Regions. For pricing and region availability, please visit the Neptune pricing page.

You can create a Neptune Serverless cluster from the AWS Management console, AWS Command Line Interface (CLI), or SDK. To learn more about Neptune Serverless visit the product page, or the documentation.

Read more


amazon-nova

Announcing Amazon Nova foundation models available today in Amazon Bedrock

We’re excited to announce Amazon Nova, a new generation of state-of-the-art (SOTA) foundation models (FMs) that deliver frontier intelligence and industry-leading price performance. The Amazon Nova models available today on Amazon Bedrock are:

  • Amazon Nova Micro, a text-only model that delivers the lowest latency responses at very low cost.
  • Amazon Nova Lite, a very low-cost multimodal model that is lightning fast for processing image, video, and text inputs.
  • Amazon Nova Pro, a highly capable multimodal model with the best combination of accuracy, speed, and cost for a wide range of tasks.
  • Amazon Nova Canvas, a state-of-the-art image generation model.
  • Amazon Nova Reel, a state-of-the-art video generation model.

Amazon Nova Micro, Amazon Nova Lite, and Amazon Nova Pro are among the fastest and most cost-effective models in their respective intelligence classes. These models have also been optimized to make them easy to use and effective in RAG and agentic applications. With text and vision fine-tuning on Amazon Bedrock, you can customize Amazon Nova Micro, Lite, and Pro to deliver the optimal intelligence, speed, and cost for your needs. With Amazon Nova Canvas and Amazon Nova Reel, you get access to production-grade visual content generation, with built-in controls for safe and responsible AI use like watermarking and content moderation. You can see the latest benchmarks and examples of these models on the Amazon Nova product page.

Amazon Nova foundation models are available in Amazon Bedrock in the US East (N. Virginia) region. Amazon Nova Micro, Lite, and Pro models are also available in the US West (Oregon), and US East (Ohio) regions via cross-region inference. Learn more about Amazon Nova at the AWS News Blog, the Amazon Nova product page, or the Amazon Nova user guide. You can get started with Amazon Nova foundation models in Amazon Bedrock from the Amazon Bedrock console.

Read more


amazon-omics

Announcing Cross Account Data Store Read Access for AWS HealthOmics

We are excited to announce that AWS HealthOmics sequence stores now support cross-account read access to simplify data sharing and tool integration. AWS HealthOmics is a fully managed service that empowers healthcare and life science organizations to store, query, and analyze omics data to generate insights that improve health and drive scientific discoveries. With this release, customers can enable secure data sharing with partners while maintaining auditability and compliance frameworks.

Cross-account read access through the S3 API enables customers to write resource policies to manage sharing and restrict data reading based on their needs. Through the use of tag propagation and tag-based access control, users can create policies that share read access beyond their account while having a scalable mechanism to granularly restrict files based on their compliance structures. In addition, S3 access logs can be used to audit and validate access, ensuring the data customers manage remains properly controlled.
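
As a sketch of what such a resource policy could look like when attached to a sequence store's S3 access point (this assumes the HealthOmics PutS3AccessPolicy API; the account IDs and access point ARN are placeholders):

    import json

    import boto3

    omics = boto3.client("omics", region_name="us-east-1")

    # Allow a partner account (444455556666) read access to the store's objects.
    policy = {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Principal": {"AWS": "arn:aws:iam::444455556666:root"},
            "Action": ["s3:GetObject", "s3:ListBucket"],
            "Resource": [
                "arn:aws:s3:us-east-1:111122223333:accesspoint/omics-ap",
                "arn:aws:s3:us-east-1:111122223333:accesspoint/omics-ap/object/*",
            ],
        }],
    }

    omics.put_s3_access_policy(
        s3AccessPointArn="arn:aws:s3:us-east-1:111122223333:accesspoint/omics-ap",
        s3AccessPolicy=json.dumps(policy),
    )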

Cross-account S3 API access is now supported in all regions where AWS HealthOmics is available: US East (N. Virginia), US West (Oregon), Europe (Frankfurt, Ireland, London), Asia Pacific (Singapore), and Israel (Tel Aviv).

To get started, see the AWS HealthOmics documentation.
 

Read more


AWS HealthOmics workflows now support call caching and intermediate file access

We are excited to announce that AWS HealthOmics workflows now support the ability to reuse task results from previous runs, saving time and compute costs for customers. AWS HealthOmics is a fully managed service that empowers healthcare and life science organizations to store, query, and analyze omics data to generate insights that improve health and drive scientific discoveries. With this release, customers can accelerate development of new pipelines by resuming runs from a previous point of failure or code change.

Call caching, or the ability to resume runs, enables customers to restart runs from the point where new code changes are introduced, skipping unchanged tasks that have already been computed to enable faster iterative workflow development cycles. In addition, task intermediate files are stored in a run cache, enabling advanced debugging and troubleshooting of workflow errors during development. In production workflows, call caching saves partial results from failed runs so that customers can rerun the sample from the point of failure, rather than computing successfully completed tasks again, shortening reprocessing times.
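
A hedged boto3 sketch of the two calls involved (the parameter names follow the launch documentation as we understand it; the workflow ID, role, and S3 URIs are placeholders):

    import boto3

    omics = boto3.client("omics", region_name="us-east-1")

    # Create a run cache backed by an S3 location; CACHE_ON_FAILURE keeps
    # results from failed runs so reruns can resume from the failure point.
    cache = omics.create_run_cache(
        name="dev-cache",
        cacheS3Location="s3://my-omics-bucket/run-cache/",
        cacheBehavior="CACHE_ON_FAILURE",
    )

    # Reference the cache when starting a run so unchanged tasks are reused.
    omics.start_run(
        workflowId="1234567",
        roleArn="arn:aws:iam::111122223333:role/OmicsRunRole",
        outputUri="s3://my-omics-bucket/results/",
        runCacheId=cache["id"],
    )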

Call caching is now supported for Nextflow, WDL, and CWL workflow languages in all regions where AWS HealthOmics is available: US East (N. Virginia), US West (Oregon), Europe (Frankfurt, Ireland, London), Asia Pacific (Singapore), and Israel (Tel Aviv). To get started with call caching, see the AWS HealthOmics documentation.

Read more


amazon-opensearch-service

Amazon OpenSearch Service zero-ETL integration with Amazon Security Lake

Amazon OpenSearch Service now offers a zero-ETL integration with Amazon Security Lake, enabling you to query and analyze security data in-place directly through OpenSearch. This integration allows you to efficiently explore voluminous data sources that were previously cost-prohibitive to analyze, helping you streamline security investigations and obtain comprehensive visibility of your security landscape. By offering the flexibility to selectively ingest data and eliminating the need to manage complex data pipelines, you can now focus on effective security operations while potentially lowering your analytics costs.

Using the powerful analytics and visualization capabilities in OpenSearch Service, you can perform deeper investigations, enhance threat hunting, and proactively monitor your security posture. Pre-built queries and dashboards using the Open Cybersecurity Schema Framework (OCSF) can further accelerate your analysis. The built-in query accelerator boosts performance and enables fast-loading dashboards, enhancing your overall experience. This integration empowers you to accelerate investigations, uncover insights from previously inaccessible data sources, and optimize analytics efficiency and costs, all with minimal data migration.

OpenSearch Service zero-ETL integration with Security Lake is now generally available in 13 regions globally: Asia Pacific (Mumbai), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Europe (Frankfurt), Europe (Ireland), Europe (London), US East (Ohio), US East (N. Virginia), US West (Oregon), South America (São Paulo), Europe (Paris), and Canada (Central).

To learn more on using this capability, see the OpenSearch Service Integrations page and the OpenSearch Service Developer Guide. To learn more about how to configure and share Security Lake, see the Get Started Guide.
 

Read more


Amazon OpenSearch Ingestion now supports writing security data to Amazon Security Lake

Amazon OpenSearch Ingestion now allows you to write data into Amazon Security Lake in real time, allowing you to ingest security data from both AWS and custom sources and uncover valuable insights into potential security issues in near real time. Amazon Security Lake centralizes security data from AWS environments, SaaS providers, and on-premises sources into a purpose-built data lake. With this integration, customers can now seamlessly ingest and normalize security data from all popular custom sources before writing it into Amazon Security Lake.

Amazon Security Lake uses the Open Cybersecurity Schema Framework (OCSF) to normalize and combine security data from a broad range of enterprise security data sources in the Apache Parquet format. With this feature, you can now use Amazon OpenSearch Ingestion to ingest and transform security data from popular third-party sources like Palo Alto, CrowdStrike, and SentinelOne into OCSF format before writing the data into Security Lake. Once the data is written to Security Lake, it is available in the AWS Glue Data Catalog and AWS Lake Formation tables for the respective source.

This feature is available in all the 15 AWS commercial regions where Amazon OpenSearch Ingestion is currently available: US East (Ohio), US East (N. Virginia), US West (Oregon), US West (N. California), Europe (Ireland), Europe (London), Europe (Frankfurt), Asia Pacific (Tokyo), Asia Pacific (Sydney), Asia Pacific (Singapore), Asia Pacific (Mumbai), Asia Pacific (Seoul), Canada (Central), South America (Sao Paulo), and Europe (Stockholm).

To learn more, see the Amazon OpenSearch Ingestion webpage and the Amazon OpenSearch Service Developer Guide.

Read more


Amazon OpenSearch Ingestion now supports AWS Lambda for custom data transformation

Amazon OpenSearch Ingestion now allows you to leverage AWS Lambda for event processing and routing, enabling complex transformation and enrichment of your streaming data. Customers can now define custom Lambda functions in their OpenSearch Ingestion pipelines for use cases like generating vector embeddings and performing lookups in external databases to power advanced search use cases.

OpenSearch Ingestion gives you the option of either using only Lambda functions or chaining Lambda functions with native Data Prepper processors when transforming data. You can also batch events into a single payload based on event count and size before invoking Lambda, optimizing the number of Lambda invocations to reduce costs and improve throughput. Furthermore, you can use this feature with the built-in conditional expressions in Amazon OpenSearch Ingestion to enable use cases like sending out emails and notifications for real-time alerting.
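
As an illustration, a pipeline with a Lambda processor can be created with the OSIS API. The pipeline body below is a sketch: the aws_lambda processor options mirror the Data Prepper processor as we understand it, and the function name, role, endpoints, and index are placeholders:

    import textwrap

    import boto3

    osis = boto3.client("osis", region_name="us-east-1")

    pipeline_body = textwrap.dedent("""\
        version: "2"
        lambda-pipeline:
          source:
            http:
              path: /ingest
          processor:
            - aws_lambda:
                function_name: my-enrichment-function
                invocation_type: request-response
                response_events_match: true
                aws:
                  region: us-east-1
                  sts_role_arn: arn:aws:iam::111122223333:role/OsisLambdaRole
          sink:
            - opensearch:
                hosts: ["https://search-my-domain.us-east-1.es.amazonaws.com"]
                index: enriched-events
    """)

    osis.create_pipeline(
        PipelineName="lambda-pipeline",
        MinUnits=1,
        MaxUnits=4,
        PipelineConfigurationBody=pipeline_body,
    )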

This feature is available in all the 15 AWS commercial regions where Amazon OpenSearch Ingestion is currently available: US East (Ohio), US East (N. Virginia), US West (Oregon), US West (N. California), Europe (Ireland), Europe (London), Europe (Frankfurt), Asia Pacific (Tokyo), Asia Pacific (Sydney), Asia Pacific (Singapore), Asia Pacific (Mumbai), Asia Pacific (Seoul), Canada (Central), South America (Sao Paulo), and Europe (Stockholm).

To learn more, see the Amazon OpenSearch Ingestion webpage and the Amazon OpenSearch Service Developer Guide.

Read more


Amazon OpenSearch Service now supports Custom Plugins

Amazon OpenSearch Service introduces Custom Plugins, a new plugin management option that allows you to extend OpenSearch functionality and deliver personalized experiences for applications such as website search, log analytics, application monitoring, and observability. OpenSearch provides a rich set of search and analysis capabilities, and with custom plugins, you can extend these further to meet your business needs.

Until now, you had to build and operate your own search infrastructure to support applications that required customization in areas like language analysis, custom filtering, ranking, and more. With this launch, you can run custom plugins on Amazon OpenSearch Service that extend the search and analysis functions of OpenSearch. You can use the OpenSearch Service console or APIs to upload and associate search and analysis plugins with your domains. OpenSearch Service validates the plugin package for version compatibility, security, and permitted plugin operations.

Custom plugins are now supported on all OpenSearch Service domains running OpenSearch version 2.15 or later, and are available in 14 regions globally: US West (Oregon), US East (Ohio), US East (N. Virginia), South America (Sao Paulo), Europe (Paris), Europe (London), Europe (Ireland), Europe (Frankfurt), Canada (Central), Asia Pacific (Tokyo), Asia Pacific (Sydney), Asia Pacific (Singapore), Asia Pacific (Seoul) and Asia Pacific (Mumbai).

To get started with custom plugins, visit our documentation. To learn more about Amazon OpenSearch Service, please visit the product page.
 

Read more


OpenSearch’s vector engine adds support for UltraWarm on Amazon OpenSearch Service

UltraWarm is a fully managed, warm storage tier designed to deliver cost savings on Amazon OpenSearch Service. With OpenSearch 2.17+ domains, you can now store k-NN (vector) indexes on UltraWarm storage, reducing the cost of serving infrequently accessed k-NN indexes through warm and cold storage tiers. With UltraWarm storage, you can further cost-optimize vector search workloads on the OpenSearch vector engine. To learn more, refer to the documentation.

Read more


Amazon OpenSearch Serverless Includes SQL API Support

Amazon OpenSearch Serverless now enables you to query your data using OpenSearch SQL and OpenSearch Piped Processing Language (PPL) through REST API, Java Database Connectivity (JDBC), and Command Line Interface (CLI). Amazon OpenSearch Serverless is a serverless option that makes it easy to run search and analytics workloads without having to think about infrastructure management. This new SQL and PPL API support addresses the need for familiar query syntax and improved integration with existing analytics tools, benefiting data analysts and developers who work with OpenSearch Serverless collections.

SQL API support in OpenSearch Serverless allows you to leverage your existing SQL skills and tools to analyze data stored in your collections. You can now use the AWS CLI to run SQL queries directly from your terminal, connect your preferred business intelligence tools through JDBC drivers, and integrate SQL and PPL queries into your Java applications. This feature is particularly useful for organizations looking to streamline their analytics workflows or those transitioning from traditional relational databases to OpenSearch Serverless.
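
For example, the SQL endpoint can be called directly over REST with SigV4 signing. A sketch, assuming the collection exposes the standard _plugins/_sql path; the collection endpoint, index name, and region are placeholders:

    import json

    import boto3
    import requests
    from botocore.auth import SigV4Auth
    from botocore.awsrequest import AWSRequest

    endpoint = "https://abc123.us-east-1.aoss.amazonaws.com"  # placeholder
    url = endpoint + "/_plugins/_sql"
    body = json.dumps({"query": "SELECT status, COUNT(*) FROM `web-logs` GROUP BY status"})

    # Sign the request with SigV4 for the "aoss" service.
    credentials = boto3.Session().get_credentials()
    request = AWSRequest(method="POST", url=url, data=body,
                         headers={"Content-Type": "application/json"})
    SigV4Auth(credentials, "aoss", "us-east-1").add_auth(request)

    response = requests.post(url, data=body, headers=dict(request.headers))
    print(response.json())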

SQL API support on OpenSearch Serverless is now available in 15 regions globally: US East (Ohio), US East (N. Virginia), US West (Oregon), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Europe (Frankfurt), Europe (Ireland), Europe (Paris), Europe (London), Asia Pacific (Mumbai), South America (Sao Paulo), Canada (Central), Asia Pacific (Seoul), and Europe (Zurich). Please refer to the AWS Regional Services List for more information about Amazon OpenSearch Service availability.

To learn more about SQL API support in OpenSearch Serverless, see the documentation.

Read more


Disk-optimized vector engine now available on the Amazon OpenSearch Service

Amazon OpenSearch's vector engine can now run modern search applications at a third of the cost on OpenSearch 2.17 domains. When you configure a k-NN (vector) index for disk mode, it becomes optimized for operating in a low memory environment. With disk mode on, the index is compressed using techniques like binary quantization and search quality (recall) is retained through a disk-optimized rescoring mechanism using full-precision vectors. Disk-mode is an excellent option for vector search workloads that require high accuracy, cost efficiency and are satisfied by low hundreds-of-milliseconds latency. It provides customers with a lower cost alternative to the existing in-memory mode when single-digit latency is unnecessary. To learn more, refer to the documentation.

Read more


Amazon OpenSearch Serverless has added support for Point in Time (PIT) search, enabling you to run multiple queries against a dataset fixed at a specific moment. This feature allows you to maintain consistent search results even as your data continues to change, making it particularly useful for applications that require deep pagination or need to preserve a stable view of data across multiple queries.

Point in time search supports both forward and backward navigation through search results, ensuring consistency even during ongoing data ingestion. This feature is ideal for e-commerce applications, content management systems, and analytics platforms that require reliable and consistent search capabilities across large datasets.

Point in time search on Amazon OpenSearch Serverless is now available in 15 regions globally: US East (Ohio), US East (N. Virginia), US West (Oregon), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Europe (Frankfurt), Europe (Ireland), Europe (Paris), Europe (London), Asia Pacific (Mumbai), South America (Sao Paulo), Canada (Central), Asia Pacific (Seoul), and Europe (Zurich). Please refer to the AWS Regional Services List for more information about Amazon OpenSearch Service availability. To learn more about OpenSearch Serverless, see the documentation.

Read more


Amazon OpenSearch Service now scales to 1000 data nodes on a single cluster

Amazon OpenSearch Service now enables you to scale a single cluster to 1000 data nodes (1000 hot nodes and/or 750 warm nodes) and manage up to 25 petabytes of data (10 petabytes in hot nodes and a further 15 petabytes in warm nodes). You no longer need to set up multiple clusters for workloads that require more than 200 data nodes or more than 3 petabytes of data.

Today, for workloads of more than 3 to 4 petabytes of data, you need to create multiple clusters in OpenSearch Service. This may have required you to refactor your applications or business logic to work with your workload split across multiple clusters. In addition, every cluster requires its own configuration, management, and monitoring, adding to the operational overhead. With this launch, you can scale a single cluster up to 1000 nodes, or 25 petabytes of data, removing the operational overhead that comes with managing multiple clusters.

To scale a cluster beyond 200 nodes, you must request an increase through Service Quotas, after which you can modify your cluster configuration using the AWS Console, AWS CLI, or the AWS SDK. Depending on the size of the cluster, OpenSearch Service will recommend configuration prerequisites across data nodes, cluster manager nodes, and coordinator nodes. For more information, refer to the documentation.

The new limits are available to all OpenSearch Service clusters running OpenSearch 2.17 and above in all AWS regions where Amazon OpenSearch Service is available. Please refer to the AWS Region Table for more information about Amazon OpenSearch Service availability.

Read more


Amazon OpenSearch Serverless now supports Binary Vector and FP16 cost savings features

We are excited to announce that Amazon OpenSearch Serverless now supports binary vector and FP16 compression, helping reduce costs by lowering memory requirements. These options also lower latency and improve performance with an acceptable accuracy trade-off. OpenSearch Serverless is a serverless deployment option for Amazon OpenSearch Service that makes it simple to run search and analytics workloads without the complexities of infrastructure management. OpenSearch Serverless’ compute capacity used for data ingestion, search, and query is measured in OpenSearch Compute Units (OCUs).

These features are now available in OpenSearch Serverless in 17 regions globally: US East (Ohio), US East (N. Virginia), US West (Oregon), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Europe (Frankfurt), Europe (Ireland), Europe (Paris), Europe (London), Asia Pacific (Mumbai), South America (Sao Paulo), Canada (Central), Asia Pacific (Seoul), Europe (Zurich), AWS GovCloud (US-West), and AWS GovCloud (US-East). Please refer to the AWS Regional Services List for more information about Amazon OpenSearch Service availability. To learn more about OpenSearch Serverless, see the documentation.

Read more


Amazon OpenSearch Service now supports OpenSearch version 2.17

You can now run OpenSearch version 2.17 in Amazon OpenSearch Service. With OpenSearch 2.17, we have made several improvements in the areas of vector search, query performance, and the machine learning (ML) toolkit to help accelerate application development and enable generative AI workloads.

This launch introduces disk-optimized vector search, a new option for the vector engine that's designed to run efficiently with less memory to deliver accurate, economical vector search at scale. In addition, OpenSearch’s FAISS engine now supports byte vectors, lowering cost and latency by compressing k-NN indexes with minimal recall degradation. You can now encode numeric terms as a roaring bitmap, which enables you to perform aggregations, filtering, and more, with lower retrieval latency and reduced memory usage.

This launch also includes key features to help you build ML-powered applications. First, with ML inference search processors, you can now run model predictions while executing search queries. In addition, you can perform high-volume ML tasks, such as generating embeddings for large datasets and ingesting them into k-NN indexes, using asynchronous batch ingestion. Finally, this launch adds threat intelligence capabilities to the Security Analytics solution. This enables you to use customized Structured Threat Information Expression (STIX)-compliant threat intelligence feeds to provide insights that support decision-making and remediation.

For information on upgrading to OpenSearch 2.17, please see the documentation. OpenSearch 2.17 is now available in all AWS Regions where Amazon OpenSearch Service is available.

Read more


Amazon OpenSearch Service adds support for two new third-party plugins

Amazon OpenSearch Service now supports two new third-party plugins: an encryption plugin from Portal26.ai and a Name Match plugin from Babel Street. These are optional plugins that you can choose to associate with your OpenSearch Service clusters.

The encryption plugin from Portal26.ai uses NIST FIPS 140-2 certified encryption to encrypt data as it gets indexed by Amazon OpenSearch Service. This plugin includes a Bring Your Own Key (BYOK) capability, allowing you to set up separate encryption keys per index and making it easy to support multi-tenant use cases.

The Babel Street Match Plugin for OpenSearch accurately matches names, organizations, addresses, and dates in over 24 languages, enhancing security operations and regulatory compliance while reducing false positives and increasing operational efficiency.

You can use the AWS Management Console and AWS CLI to associate, disassociate, and list third-party plugins in your domain. Customers can now use the "CreatePackage" and "AssociatePackage" APIs to upload and associate a plugin with an Amazon OpenSearch Service cluster. The "PACKAGE-CONFIG" and "PACKAGE-LICENSE" package types are supported for uploading the plugin configuration and license files, which you can procure directly from Portal26.ai for the encryption plugin and from Babel Street for the Name Match plugin.
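
For example, a sketch built on the package APIs named above (the bucket, keys, names, and domain are placeholders):

    import boto3

    opensearch = boto3.client("opensearch", region_name="us-east-1")

    # Upload the license file procured from the plugin vendor.
    package = opensearch.create_package(
        PackageName="portal26-encryption-license",
        PackageType="PACKAGE-LICENSE",
        PackageSource={
            "S3BucketName": "my-plugin-artifacts",
            "S3Key": "portal26/license.txt",
        },
    )

    # Associate the uploaded package with a domain.
    opensearch.associate_package(
        PackageID=package["PackageDetails"]["PackageID"],
        DomainName="my-domain",
    )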

These third-party plugins are available for Amazon OpenSearch Service domains running OpenSearch version 2.15 and above, and are available in all AWS Regions where Amazon OpenSearch Service is available, except the AWS GovCloud (US) Regions.

For more information about third party plugins, please see the documentation. To learn more about Amazon OpenSearch Service, please visit the product page.
 

Read more


Amazon OpenSearch Service now supports 4th generation Intel (C7i, M7i, R7i) instances

Amazon OpenSearch Service now supports compute-optimized (C7i), general-purpose (M7i), and memory-optimized (R7i) instances based on 4th generation Intel Xeon Scalable processors. These instances deliver up to 15% better price performance over the 3rd generation Intel C6i, M6i, and R6i instances, respectively. You can update your domain to the new instances seamlessly through the OpenSearch Service console or APIs.

These instances support the new Intel Advanced Matrix Extensions (AMX), which accelerate matrix multiplication operations for applications such as CPU-based ML. The 4th generation Intel instances also support the latest DDR5 memory, offering higher bandwidth than 3rd generation Intel processors. To learn more about the 4th generation Intel improvements, please see the C7i, M7i, and R7i blogs.

One or more 4th generation Intel instance types are now available on Amazon OpenSearch Service across 22 regions globally: US East (N. Virginia), US East (Ohio), US West (N. California), US West (Oregon), Canada (Central), Asia Pacific (Jakarta), Asia Pacific (Malaysia), Asia Pacific (Mumbai), Asia Pacific (Seoul), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Europe (Frankfurt), Europe (Ireland), Europe (London), Europe (Milan), Europe (Paris), Europe (Spain), Europe (Stockholm), South America (Sao Paulo), AWS GovCloud (US-East) and AWS GovCloud (US-West).

To learn more about region specific instance type availability and their pricing, visit our pricing page. To learn more about Amazon OpenSearch Service, please visit the product page.

Read more


Amazon OpenSearch Service launches next-gen UI for enhanced data exploration and collaboration

Amazon OpenSearch Service launches a modernized operational analytics experience that enables users to gain insights across data spanning managed domains and serverless collections from a single endpoint. The launch also includes Workspaces to enhance collaboration and productivity, allowing teams to create dedicated spaces. Discover has been revamped to provide a unified log exploration experience supporting languages such as SQL and Piped Processing Language (PPL), in addition to DQL and Lucene. Discover now features a data selector to support multiple sources, a new visual design, and query autocomplete for improved usability. This experience ensures users can access the latest UI enhancements regardless of the version of the underlying managed cluster or collection.

The new OpenSearch analytics experience helps users gain insights from their operational data by providing purpose-built features for observability, security analytics, essentials and search use cases. With the enhanced Discover interface, users can now analyze data from multiple sources without switching tools, improving efficiency. Workspaces enable better collaboration by creating dedicated environments for teams to work on dashboards, saved queries, and other relevant content. Availability of the latest UI updates across all versions ensures uninterrupted access to the newest features and tools.

The new OpenSearch user interface can connect to OpenSearch domains (above version 1.3) and serverless collections. It is now available in 13 AWS commercial regions. To get started, create an OpenSearch application in the AWS Management Console. Learn more in the Amazon OpenSearch Service Developer Guide.

Read more


Amazon OpenSearch Service announces Extended Support for engine versions

Today, we announce the end of Standard Support and the Extended Support timelines for legacy Elasticsearch versions and OpenSearch versions. Standard Support ends on Nov 7, 2025, for legacy Elasticsearch versions up to 6.7, Elasticsearch versions 7.1 through 7.8, OpenSearch versions 1.0 through 1.2, and OpenSearch versions 2.3 through 2.9. With Extended Support, for an incremental flat fee over regular instance pricing, you continue to get critical security updates beyond the end of Standard Support. For more information, see the blog.

All Elasticsearch versions will receive at least 12 months of Extended Support, with Elasticsearch v5.6 receiving 36 months of Extended Support. OpenSearch versions running on OpenSearch Service will get at least 12 months of Standard Support after the end-of-support date for the corresponding upstream open-source OpenSearch version, or at least 12 months of Standard Support after the release of the next minor version on OpenSearch Service, whichever is longer. For support timelines by version, please see the documentation. While running a version in Extended Support, you will be charged an additional flat fee per Normalized Instance Hour (NIH) (e.g., $0.0065/NIH for US East (N. Virginia)). NIH is computed as a factor of instance size (e.g., medium, large) and the number of instance hours. For more information on Extended Support charges, please see the pricing page.

End-of-support and Extended Support dates are applicable to all OpenSearch Service clusters running OpenSearch or Elasticsearch versions, in all AWS regions where Amazon OpenSearch Service is available. Please refer to the AWS Region Table for more information about Amazon OpenSearch Service availability.

Read more


amazon-polly

Three new Long-Form Voices

The Amazon Polly long-form engine now offers two new Spanish voices and one new US English voice.

Amazon Polly is a service that turns text into lifelike speech, allowing our customers to build speech-enabled products matching their business needs. Today, we add three new long-form voices to our premium Polly Text-to-Speech (TTS) line of products that we offer for synthesizing speech for longer content, such as articles, stories, or training materials.

The male-sounding US English voice Patrick, female-sounding Spanish voice Alba, and male-sounding Spanish voice Raúl can now read long texts, such as blogs, articles, or learning materials. We trained them using cutting-edge technology that uses semantic cues to adapt a voice’s speaking style to the context. The result is natural-sounding, expressive voices that not only let our customers synthesize their content in human-like Spanish and English, but also expand their use cases to long content reading.
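
For instance, synthesizing with one of the new voices only requires selecting the long-form engine. A short boto3 sketch (the sample text and output file name are arbitrary):

    import boto3

    polly = boto3.client("polly", region_name="us-east-1")

    # Request long-form synthesis with the new Patrick voice.
    response = polly.synthesize_speech(
        Engine="long-form",
        VoiceId="Patrick",
        LanguageCode="en-US",
        OutputFormat="mp3",
        Text="Welcome back. In today's article, we look at three new voices.",
    )

    with open("patrick-sample.mp3", "wb") as f:
        f.write(response["AudioStream"].read())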

The Patrick, Alba, and Raúl long-form voices are accessible in the US East (N. Virginia) region and complement the other long-form voices that are already available for developing speech products for a variety of use cases.

To hear how Polly voices sound, go to Amazon Polly Features. For more details on the Polly offerings and use, please read the Amazon Polly documentation and visit our pricing page.
 

Read more


Six new synthetic generative voices for Amazon Polly

Today, we are excited to announce the general availability of six highly expressive Amazon Polly generative voices in English, French, Spanish, and German.
Amazon Polly is a managed service that turns text into lifelike speech, allowing you to create applications that talk and build speech-enabled products that match your business needs.

The generative engine is Amazon Polly's most advanced text-to-speech (TTS) model. Today, we release six new synthetic female-sounding generative voices: Ayanda (South African English), Léa (French), Lucia (European Spanish), Lupe (American Spanish), Mía (Mexican Spanish), and Vicki (German). This launch increases the number of generative Polly voices from seven to thirteen and expands our footprint from three to nine locales. Leveraging the same generative AI technology that powered the English generative voices, Polly now supports German, Spanish, and French to provide our customers with more options for highly expressive and engaging voices.

The Ayanda, Léa, Lucia, Lupe, Mía, and Vicki generative voices are accessible in the US East (N. Virginia), Europe (Frankfurt), and US West (Oregon) regions and complement the other types of voices that are already available in the same regions.

To hear how Polly voices sound, go to Amazon Polly Features. For more details on the Polly offerings and use, please read the Amazon Polly documentation and visit our pricing page.

Read more


amazon-q

Amazon Q Developer can now guide SageMaker Canvas users through ML development

Starting today, you can build ML models using natural language with Amazon Q Developer, now available in Amazon SageMaker Canvas in preview. You can now get generative AI-powered assistance through the ML lifecycle, from data preparation to model deployment. With Amazon Q Developer, users of all skill levels can use natural language to access expert guidance to build high-quality ML models, accelerating innovation and time to market.

Amazon Q Developer will break down your objective into specific ML tasks, define the appropriate ML problem type, and apply data preparation techniques to your data. Amazon Q Developer then guides you through the process of building, evaluating, and deploying custom ML models. ML models produced in SageMaker Canvas with Amazon Q Developer are production ready, can be registered in SageMaker Studio, and the code can be shared with data scientists for integration into downstream MLOps workflows.

Amazon Q Developer is available in SageMaker Canvas in preview in the following AWS Regions: US East (N. Virginia), US West (Oregon), Europe (Frankfurt), Europe (Paris), Asia Pacific (Tokyo), and Asia Pacific (Seoul). To learn more about using Amazon Q Developer with SageMaker Canvas, visit the website, read the AWS News blog, or view the technical documentation.

Read more


Announcing the scenario analysis capability of Amazon Q in QuickSight (preview)

A new scenario analysis capability of Amazon Q in QuickSight is now available in preview. This new capability provides an AI-assisted data analysis experience that helps you make better decisions, faster. Amazon Q in QuickSight simplifies in-depth analysis with step-by-step guidance, saving hours of manual data manipulation and unlocking data-driven decision-making across your organization.

Amazon Q in QuickSight helps business users perform complex scenario analysis up to 10x faster than spreadsheets. You can ask a question or state your goal in natural language, and Amazon Q in QuickSight guides you through every step of advanced data analysis, suggesting analytical approaches, automatically analyzing data, surfacing relevant insights, and summarizing findings with suggested actions. This agentic approach breaks down data analysis into a series of easy-to-understand, executable steps, helping you find solutions to complex problems without specialized skills or tedious, error-prone data manipulation in spreadsheets. Working on an expansive analysis canvas, you can intuitively iterate your way to solutions by directly interacting with data, refining analysis steps, or exploring multiple analysis paths side by side. This scenario analysis capability is accessible from any Amazon QuickSight dashboard, so you can move seamlessly from visualizing data to modeling solutions. With Amazon Q in QuickSight, you can easily modify, extend, and reuse previous analyses, helping you quickly adapt to changing business needs.

Amazon Q in QuickSight Pro users can use this new capability in preview in the following AWS regions: US East (N. Virginia) and US West (Oregon). To learn more, visit the Amazon Q in QuickSight documentation and read the AWS News Blog.

Read more


Amazon Q Developer now provides transformation capabilities for .NET porting (Preview)

Today, AWS announces new generative AI–powered transformation capabilities of Amazon Q Developer in public preview to accelerate porting of .NET Framework applications to cross-platform .NET. Using these capabilities, you can modernize your Windows .NET applications to be Linux-ready up to four times faster than traditional methods and realize up to 40% savings in licensing costs.

With this launch, Amazon Q Developer is now equipped with agentic capabilities for transformation that allow you to port hundreds of .NET Framework applications running on Windows to Linux-ready cross-platform .NET. Using Amazon Q Developer, you can delegate your tedious manual porting tasks and help free up your team’s precious time to focus on innovation.

You can chat with Amazon Q Developer in natural language to share high-level transformation objectives and connect it to your source code repositories. Amazon Q Developer then starts the transformation process with an assessment of your application code to identify .NET versions, supported project types, and their dependencies, and then ports the assessed application code along with its accompanying unit tests to cross-platform .NET. You and your team can collaboratively review, adjust, and approve the transformation process. Additionally, Amazon Q Developer provides a detailed work log as a documented trail of transformation decisions to support your organizational compliance objectives.

The transformation capabilities of Amazon Q Developer are available in public preview via a web experience and in your Visual Studio integrated development environment (IDE). To learn more, read the blogs on the web experience and the IDE experience, and visit the Amazon Q Developer transformation capabilities webpage and documentation.
 

Read more


Amazon Q Developer announces automatic unit test generation to accelerate feature development

Today, Amazon Q Developer announces the general availability of a new agent that automates the process of generating unit tests. This agent can be easily initiated by using a simple prompt: “/test”. Once prompted, Amazon Q will use its knowledge of your project to automatically generate and add tests to your project, helping you improve code quality quickly.

Amazon Q Developer will also ask you to provide consent before adding tests, allowing you to always stay in the loop so that no unintended changes are made. Automation saves the time and effort needed to write comprehensive unit tests, allowing you to focus on building innovative features. With the ability to quickly add unit tests and increase coverage across code, organizations can safely and more reliably ship code, accelerating feature development across the software development lifecycle.

Automatic unit test generation is generally available within the Visual Studio Code and JetBrains integrated development environments (IDEs) or in public preview as part of the new GitLab Duo with Amazon Q offering, in all AWS Regions where Amazon Q Developer is available. Learn more about unit test generation.

Read more


Amazon Q Developer can now automate code reviews

Starting today, Amazon Q Developer can also perform code reviews, automatically providing comments on your code in the IDE, flagging suspicious code patterns, providing patches where available, and even assessing deployment risk so you can get feedback on your code quickly.

Q Developer is a generative AI-powered assistant for designing, building, testing, deploying, and maintaining software. Its agents for software development have a deep understanding of your entire code repos, so they can accelerate many tasks beyond coding. By automating the first round of code reviews and improving review consistency, Q Developer empowers code authors to fix issues faster, streamlining the process for both authors and reviewers. With this new capability, Q Developer can help you get immediate feedback for your code reviews and code fixes where available, so you can increase the speed of iteration and improve the quality of your code.

This capability is available in the integrated development environment (IDE) through a new chat command: /review. You can start automating code reviews in the Visual Studio Code and IntelliJ IDEA IDEs with either an Amazon Q Developer Free Tier or Pro Tier subscription. For more details on pricing, see Amazon Q Developer pricing.

This capability is available in all AWS Regions where Amazon Q Developer is available. To get started with automated code reviews, visit Amazon Q Developer or read the news blog.

Read more


Amazon Q Business now offers a library of over 50 ready-to-use plugin actions

Today, we are excited to announce that Amazon Q Business, including Amazon Q Apps, has expanded its capabilities with a ready-to-use library of over 50 actions spanning plugins across popular business applications and platforms. This enhancement allows Amazon Q Business users to complete tasks in other applications without leaving the Amazon Q Business interface, improving the user experience and operational efficiency.

The new plugins cover a wide range of widely used business tools, including PagerDuty, Salesforce, Jira, Smartsheet, and ServiceNow. These integrations enable users to perform tasks such as creating and updating tickets, managing incidents, and accessing project information directly from within Amazon Q Business. With Amazon Q Apps, users can further automate their everyday tasks by leveraging the newly introduced actions directly within their purpose-built apps.

The new plugins are available in all AWS Regions where Amazon Q Business is available.

To get started with the new plugins, customers can access them directly from their Amazon Q Business interface. To learn more about Amazon Q Business plugins and how they can enhance your organization's productivity, visit the Amazon Q Business product page or explore the Amazon Q Business plugin documentation.
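As a rough sketch of how an administrator might register one of these plugins programmatically, the boto3 call below uses the Amazon Q Business CreatePlugin API; the plugin type value, identifiers, and ARNs are illustrative assumptions and should be verified against the current API reference.

    import boto3

    qbusiness = boto3.client("qbusiness", region_name="us-east-1")

    # Hypothetical identifiers; substitute your own application, secret, and role.
    response = qbusiness.create_plugin(
        applicationId="app-1234567890",
        displayName="jira-plugin",
        type="JIRA",
        serverUrl="https://example-company.atlassian.net",
        authConfiguration={
            "oAuth2ClientCredentialConfiguration": {
                "secretArn": "arn:aws:secretsmanager:us-east-1:111122223333:secret:jira-oauth",
                "roleArn": "arn:aws:iam::111122223333:role/QBusinessPluginRole",
            }
        },
    )
    print(response["pluginId"])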

Read more


Amazon Q Developer adds operational investigation capability (Preview)

Amazon Q Developer now helps you accelerate operational investigations across your AWS environment in just a fraction of the time. With a deep understanding of your AWS cloud environment and resources, Amazon Q Developer looks for anomalies in your environment, surfaces related signals for you to explore, identifies potential root-cause hypotheses, and suggests next steps to help you remediate issues faster.

Amazon Q Developer works alongside you throughout your operational troubleshooting journey, from issue detection and triage through remediation. You can initiate an investigation by selecting the Investigate action on any Amazon CloudWatch data widget across the AWS Management Console. You can also configure Amazon Q to automatically investigate when a CloudWatch alarm is triggered. When an investigation starts, Amazon Q Developer sifts through various signals about your AWS environment, including CloudWatch telemetry, AWS CloudTrail logs, deployment information, changes to resource configuration, and AWS Health events.

CloudWatch now provides a dedicated investigation experience where teams can collaborate and add findings, view related signals and anomalies, and review suggestions for potential root cause hypotheses. This new capability also provides remediation suggestions for common operational issues across your AWS environment by surfacing relevant AWS Systems Manager Automation runbooks, AWS re:Post articles, and documentation. It also integrates with your existing operational workflows such as Slack via AWS Chatbot. 

The new operational investigation capability within Amazon Q Developer is available at no additional cost during preview in the US East (N. Virginia) Region. To learn more, see the getting started and best practices documentation.

Read more


Announcing GitLab Duo with Amazon Q (Preview)

Today, AWS announces a preview of GitLab Duo with Amazon Q, embedding advanced agent capabilities for software development and workload transformation directly in GitLab's enterprise DevSecOps platform. With this launch, GitLab Duo with Amazon Q delivers a seamless development experience across tasks and teams, automating complex, multi-step tasks for software development, security, and transformation, all using the familiar GitLab workflows developers already know.

Using GitLab Duo, developers can delegate issues to Amazon Q agents using quick actions to build new features faster, maximize quality and security with AI-assisted code reviews, create and execute unit tests, and upgrade legacy Java codebases. GitLab’s unified data store across the software development life cycle (SDLC) gives Amazon Q project context to accelerate and automate end-to-end workflows for software development, simplifying the complex toolchains historically required for collaboration across teams.

  • Streamline software development: Go from a new feature idea in an issue to merge-ready code in minutes. Iterate directly from GitLab, using feedback in comments to accelerate development workflows end-to-end.
  • Optimize code: Generate unit tests for new merge requests to save developer time and ensure consistent quality assurance practices are enforced across teams.
  • Maximize quality and security: Provide AI-driven code quality and security reviews with generated fixes to accelerate feedback cycles.
  • Transform enterprise workloads: Starting with Java 8 or 11 codebases, developers can upgrade to Java 17 directly from a GitLab project to improve application security and performance, and remove technical debt.

Visit the Amazon Q Developer integrations page to learn more.

Read more


Amazon Q in QuickSight unifies insights from structured and unstructured data

Now generally available, Amazon Q in QuickSight provides users with unified insights from structured and unstructured data sources through integration with Amazon Q Business. While structured data is managed in conventional systems, unstructured data such as document libraries, webpages, images and more has remained largely untapped due to its diverse and distributed nature.

With Amazon Q in QuickSight, business users can now augment insights from traditional BI data sources such as databases, data lakes, and data warehouses with contextual information from unstructured sources. They can get augmented insights within QuickSight's BI interface across multi-visual Q&A and data stories. With multi-visual Q&A, users ask questions in natural language and get visualizations and data summaries augmented with contextual insights from Amazon Q Business. With data stories in Amazon Q in QuickSight, users can upload documents or connect to unstructured data sources from Amazon Q Business to create richer narratives or presentations explaining their data with additional context. This integration enables organizations to harness insights from all their data without manual collation, leading to more informed decision-making, time savings, and a significant competitive edge in the data-driven business landscape.

This new capability is generally available to all Amazon QuickSight Pro users in the US East (N. Virginia) and US West (Oregon) AWS Regions.

To learn more, visit the AWS Business Intelligence Blog and the Amazon Q Business What’s New post, and try QuickSight free for 30 days.
 

Read more


Amazon Q Developer can now generate documentation within your source code

Starting today, Amazon Q Developer can document your code by automatically generating readme files and data-flow diagrams within your projects. 

Today, developers report they spend an average of just one hour per day coding. They spend most of their time on tedious, undifferentiated tasks such as learning codebases, writing and reviewing documentation, testing, managing deployments, troubleshooting issues or finding and fixing vulnerabilities. Q Developer is a generative AI-powered assistant for designing, building, testing, deploying, and maintaining software. Its agents for software development have a deep understanding of your entire code repos, so they can accelerate many tasks beyond coding. With this new capability, Q Developer can help you understand your existing code bases faster, or quickly document new features, so you can focus on shipping features for your customers.

This capability is available in the integrated development environment (IDE) through a new chat command: /doc. You can get started generating documentation within the Visual Studio Code and IntelliJ IDEA IDEs with an Amazon Q Developer Free Tier or Pro Tier subscription. For more details on pricing, see Amazon Q Developer pricing.

This capability is available in all AWS Regions where Amazon Q Developer is available. To get started with generating documentation, visit Amazon Q Developer or read the news blog.

Read more


Amazon Q Business now provides insights from your databases and data warehouses (preview)

Today, AWS announces the public preview of the integration between Amazon Q Business and Amazon QuickSight, delivering a transformative capability that unifies answers from structured data sources (databases, warehouses) and unstructured data (documents, wikis, emails) in a single application.

Amazon Q Business is a generative AI–powered assistant that can answer questions, provide summaries, generate content, and securely complete tasks based on data and information in your enterprise systems. Amazon QuickSight is a business intelligence (BI) tool that helps you visualize and understand your structured data through interactive dashboards, reports, and analytics. While organizations want to leverage generative AI for business insights, they experience fragmented access to unstructured and structured data.

With the QuickSight integration, customers can now link their structured sources to Amazon Q Business through QuickSight’s extensive set of data source connectors. Amazon Q Business responds in real time, combining the QuickSight answer from your structured sources with any other relevant information found in documents. For example, users could ask about revenue comparisons, and Amazon Q Business will return an answer from PDF financial reports along with real-time charts and metrics from QuickSight. This integration unifies insights across knowledge sources, helping organizations make more informed decisions while reducing the time and complexity traditionally required to gather insights.

This integration is available to all Amazon Q Business Pro, Amazon QuickSight Reader Pro, and Author Pro users in the US East (N. Virginia) and US West (Oregon) AWS Regions. To learn more, visit the Amazon Q Business documentation site.

Read more


Amazon Q Developer transformation capabilities for mainframe modernization are now available (Preview)

Today, AWS announces new generative AI–powered capabilities of Amazon Q Developer in public preview to help customers and partners accelerate large-scale assessment and modernization of mainframe applications.

Amazon Q Developer is enterprise-ready, offering a unified web experience tailored for large-scale modernization, federated identity, and easier collaboration. Keeping you in the loop, Amazon Q Developer agents analyze and document your code base, identify missing assets, decompose monolithic applications into business domains, plan modernization waves, and refactor code. You can chat with Amazon Q Developer in natural language to share high-level transformation objectives, source repository access, and project context. Amazon Q Developer agents autonomously classify and organize application assets and create comprehensive code documentation to understand and expand the knowledge base of your organization. The agents combine goal-driven reasoning using generative AI and modernization expertise to develop modernization plans customized for your code base and transformation objectives. You can then collaboratively review, adjust, and approve the plans through iterative engagement with the agents. Once you approve the proposed plan, Amazon Q Developer agents autonomously refactor the COBOL code into cloud-optimized Java code while preserving business logic.

By delegating tedious tasks to autonomous Amazon Q Developer agents with your review and approvals, you and your team can collaboratively drive faster modernization, larger project scale, and better transformation quality and performance using generative AI large language models. You can enhance governance and compliance by maintaining a well-documented and explainable trail of transformation decisions.

To learn more, read the blog and visit the Amazon Q Developer transformation capabilities webpage and documentation.

Read more


Amazon Q Business adds support to extract insights from visual elements within documents

Amazon Q Business is a fully managed, generative AI–powered assistant that can answer questions, provide summaries, generate content, and securely complete tasks based on data and information in your enterprise systems. Amazon Q Business now offers capabilities to answer questions and extract insights from visual elements embedded within documents.

This new feature enables users to query information embedded in various types of visuals, including diagrams, infographics, charts, and image-based content. With this launch, customers can now uncover valuable insights captured within visual content embedded in documents including PDF, Microsoft PowerPoint and Word, and Google Docs and Google Slides. Amazon Q Business provides transparency by surfacing the specific images used to generate the responses, enabling users to contextualize the extracted information.

The new visual analysis feature is available in all AWS Regions where Amazon Q Business is available. To learn more, visit the Amazon Q Business product page.

Read more


Announcing Amazon Q Developer transformation capabilities for VMware (Preview)

Today, AWS announces the preview of Amazon Q Developer transformation capabilities for VMware, the first generative AI–powered assistant that can simplify and accelerate the migration and modernization of VMware workloads to Amazon Elastic Compute Cloud (EC2). These new capabilities help you streamline complex VMware transformation tasks, reducing the time and effort required to move VMware workloads to the cloud. Using advanced AI techniques to automate critical steps in the migration process, Amazon Q Developer helps accelerate your cloud journey, reduce costs, and drive innovation.

Amazon Q Developer transformation agents simplify and automate VMware transformation tasks including on-premises application data discovery, wave planning, network translation and deployment, and orchestration of the overall migration process. Two of the most challenging aspects of VMware transformations, wave planning and network translation, are now automated using VMware domain-expert agents and large language models (LLMs). These AI-powered tools convert VMware networking configurations and firewall rules into native AWS network constructs, significantly reducing complexity and potential errors. Importantly, Amazon Q Developer maintains a balance between automation and human oversight, proactively prompting for user input at key decision points to ensure accuracy and control throughout the migration and modernization process.

The preview of Amazon Q Developer transformation capabilities for VMware is available in the US East (N. Virginia) AWS Region. To learn more about Amazon Q Developer and how it can accelerate your migration to AWS, visit Amazon Q Developer.

Read more


The Amazon Q index enhances software vendors’ AI experiences

Independent software vendors (ISVs) like Asana, Miro, PagerDuty, Zoom, and more are integrating the Amazon Q index into their applications to enrich their generative AI experiences with enterprise knowledge and user context spanning multiple applications. End customers remain in control of which applications can access their data, and the index retains user-level permissions.

The Amazon Q index is a canonical source of content and data that unifies data from more than 40 supported connectors. Amazon Q Business customers create an index based on their enterprise data so that generated responses, insights, and actions are most relevant to employees. Software providers register their application with Amazon Q Business, and then their customers permit them to access their indexed data. Once connected, the software vendor uses the additional data to enrich their native generative AI features to deliver more personalized responses back to the customer. This new feature inherits the same security, privacy, and guardrails as Amazon Q Business, accelerating an ISV’s generative AI roadmap so they can focus their efforts on innovative, differentiated features for their end users.
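The sketch below illustrates, under stated assumptions, how an ISV's backend might query a customer's index once access has been granted; it uses the Amazon Q Business SearchRelevantContent API via boto3, and the identifiers and response fields shown are assumptions to confirm against the API reference.

    import boto3

    qbusiness = boto3.client("qbusiness", region_name="us-east-1")

    # Hypothetical application and retriever IDs supplied by the customer.
    response = qbusiness.search_relevant_content(
        applicationId="app-1234567890",
        contentSource={"retriever": {"retrieverId": "ret-0987654321"}},
        queryText="What are the open action items for the Q3 launch?",
        maxResults=5,
    )

    # Each result carries its source document, so user-level permissions stay intact.
    for item in response["relevantContent"]:
        print(item["documentTitle"], item["documentUri"])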

ISVs can use the Amazon Q index in all AWS Regions where Amazon Q Business is available. 

Learn more about the Amazon Q index for software providers.

Read more


Amazon Q Developer launches Java upgrade transformation CLI (Public Preview)

Amazon Q Developer launches the public preview of its Java upgrade transformation CLI (command line interface). The CLI allows you to invoke transformations from the command line and perform transformations at scale.

The CLI provides the following capabilities:

  • Transform your Java applications from Java 8 or Java 11 to Java 17 (available in the IDE and now in the CLI)
  • Custom transformations (new and only in the CLI): The CLI allows you to perform custom transformations that you define for your organization’s codebases. Prior to this launch, Amazon Q Developer would upgrade open-source libraries in your Java applications. With custom transformations in the CLI, you can define your own transformations specific to your codebases and internal libraries. You can define custom transformations using ast-grep, a code tool for structural search and replace. Amazon Q Developer can perform your custom transformations and leverage Q’s AI debugging capabilities.
  • Build on local environment (new and only in the CLI): The CLI performs the verification build on your local environment, which ensures that unit tests and integration tests run during build verification.

This capability is available in the command line on Linux and macOS. You can learn more about the Code Transformation CLI and get started here.

Read more


Amazon Q Developer for the Eclipse IDE is now in public preview

The Amazon Q Developer plugin for the Eclipse IDE is now in public preview. With this launch, developers can leverage the power of Q Developer, the most capable generative AI-powered assistant for software development, within the Eclipse IDE.

Eclipse developers can now chat with Amazon Q Developer about their project, and code faster with inline code suggestions within the IDE. Developers can also leverage Amazon Q Developer customization to receive tailored responses and code recommendations that conform to their team's internal libraries, proprietary algorithmic techniques, and enterprise code style. This helps users build faster while enhancing productivity across the entire software development lifecycle.

The Amazon Q Developer plugin for the Eclipse IDE public preview is available in all AWS Regions where Q Developer is supported. Learn more and download the free Amazon Q Developer plugin for Eclipse to get started.

Read more


Amazon Q Developer can now provide more personalized chat answers based on console context

Today, AWS announces the general availability of console context awareness for the Amazon Q Developer chat within the AWS Management Console. This new capability allows Amazon Q Developer to dynamically understand and respond to inquiries based on the specific AWS service you are currently viewing or configuring and the region you are operating within. For example, if you are working within the Amazon Elastic Container Service (Amazon ECS) console, you can ask "How can I create a cluster?" and Amazon Q Developer will recognize the context and provide relevant guidance tailored to creating ECS clusters.

This update enables more natural conversations without repetitive context details, allowing you to arrive at the answers you seek faster. This capability is included at no additional cost in both the Amazon Q Developer Free Tier and the paid Pro Tier. For more information on pricing, please see the Amazon Q Developer Pricing page. This feature is available in all Regions where Amazon Q Developer chat is available in the AWS Management Console. You can get started today by chatting with Amazon Q Developer in the AWS Management Console.
 

Read more


Introducing Amazon Q Apps with private sharing

Amazon Q Apps, a capability within Amazon Q Business to create lightweight, generative AI-powered apps, now supports private sharing. This new feature enables app creators to restrict app access to select Amazon Q Business users, providing more granular control over app visibility and usage within organizations.

Previously, Amazon Q Apps could only be kept private for individual use or published to all users of the Amazon Q Business environment through the Amazon Q Apps library. Now app creators can share their apps with specific individuals, allowing for more targeted collaboration and controlled access. App users with access to shared apps can find these apps in the Amazon Q Apps library and run them. Apps shown in the library respect the access settings defined by the app creator, so they are visible only to selected users. Private sharing enables new functional use cases. For instance, a messaging-compliant document generation app may be shared company-wide for anyone in the organization to use, while a customer outreach app could be restricted to members of the sales team only. Private sharing also opens up possibilities for app creators to gather early feedback from a small group of users before wider distribution of their app.

Amazon Q Apps with private sharing is now available in the same regions where Amazon Q Business is available.

To learn more about private sharing in Amazon Q Apps, visit the Q Apps documentation.

Read more


Amazon Q Apps introduces data collection (Preview)

Amazon Q Apps, the generative AI-powered app creation capability of Amazon Q Business, now offers a new data collection feature in public preview. This enhancement enables users to collate data across multiple users within their organization, further enhancing the collaborative quality of Amazon Q Apps for various business needs.

With the new ability to collect data through form cards, app creators can design apps to gather information for a diverse set of business use cases, such as conducting team surveys, compiling questions for company-wide meetings, tracking new hire onboarding progress, or running a project retrospective. These apps can further leverage generative AI to analyze the collected data, identify common themes, summarize ideas, and provide actionable insights. A shared data collection app can be instantiated into different data collections by app users, each with its own unique, shareable link. App users can participate in an ongoing data collection to submit responses, or start their own data collection without the need to duplicate the app.

Amazon Q Apps with data collection is available in the regions where Amazon Q Business is available.

To learn more about data collection in Amazon Q Apps and how it can benefit your organization, visit the Q Apps documentation.

Read more


Amazon Q Java transformation launches Step-by-Step and Library Upgrades

Amazon Q Developer Java upgrade transformation now offers step-by-step upgrades and library upgrades for Java 17 applications. This new feature allows developers to review and accept code changes in multiple diffs, and to test proposed changes in each diff step-by-step. Additionally, Amazon Q can now upgrade libraries for applications already on Java 17, enabling continuous maintenance.

This launch significantly improves the code review and application modernization process. By allowing developers to review a smaller set of code changes at a time, it makes errors easier to fix when manual completion is required. The ability to upgrade apps already on Java 17 to the latest reliable libraries helps organizations save time and effort in maintaining their applications across the board.

This capability is available within the Visual Studio Code and IntelliJ IDEs.

Learn more and get started with these new features here.

Read more


Amazon Q Developer now provides natural language cost analysis

Today, AWS announces the addition of cost analysis capabilities to Amazon Q Developer, allowing customers to retrieve and interpret their AWS cost data through natural language interactions. Amazon Q Developer is a generative AI-powered assistant that helps customers build, deploy, and operate applications on AWS. The cost analysis capability helps users of all skill levels to better understand and manage their AWS spending without previous knowledge of AWS Cost Explorer.

Customers can now ask Amazon Q Developer questions about their AWS costs such as "Which region had the largest cost increase last month?" or "What services cost me the most last quarter?". Q interprets these questions, analyzes the relevant cost data, and provides easy-to-understand responses. Each answer includes transparency on the Cost Explorer parameters used and a link to visualize the data in Cost Explorer.

This feature is now available in all AWS Regions where Amazon Q Developer is supported. Customers can access it via the Amazon Q icon in the AWS Management Console. To get started, see the AWS Cost Management user guide.
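For readers curious about what runs behind such a question, the sketch below shows a plausible Cost Explorer query that Amazon Q Developer might construct for "What services cost me the most last quarter?"; the dates and the exact parameters Q chooses are assumptions, but the GetCostAndUsage call itself is the standard Cost Explorer API.

    import boto3

    ce = boto3.client("ce")  # AWS Cost Explorer

    # Group last quarter's unblended cost by service (example dates).
    response = ce.get_cost_and_usage(
        TimePeriod={"Start": "2024-07-01", "End": "2024-10-01"},
        Granularity="MONTHLY",
        Metrics=["UnblendedCost"],
        GroupBy=[{"Type": "DIMENSION", "Key": "SERVICE"}],
    )

    # Aggregate the monthly groups and list the five most expensive services.
    totals = {}
    for period in response["ResultsByTime"]:
        for group in period["Groups"]:
            service = group["Keys"][0]
            totals[service] = totals.get(service, 0.0) + float(
                group["Metrics"]["UnblendedCost"]["Amount"]
            )
    for service, amount in sorted(totals.items(), key=lambda kv: -kv[1])[:5]:
        print(f"{service}: ${amount:,.2f}")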
 

Read more


Amazon Q Developer now transforms embedded SQL from Oracle to PostgreSQL

When you use AWS Database Migration Service (DMS) and DMS Schema Conversion to migrate a database, you might need to convert the embedded SQL in your application to be compatible with your target database. Rather than converting it manually, you can use Amazon Q Developer in the IDE to automate the conversion.

Amazon Q Developer uses metadata from a DMS Schema Conversion to convert embedded SQL in your application to a version that is compatible with your target database. Amazon Q Developer will detect Oracle SQL statements in your application and convert them to PostgreSQL. You can review and accept the proposed changes, view a summary of the transformation, and follow the recommended next steps in the summary to verify and test the transformed code.
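To make the nature of the conversion concrete, here is an illustrative before-and-after pair of embedded SQL statements as they might appear in application code; the table and column names are invented, but the mappings shown (NVL to COALESCE, SYSDATE to CURRENT_TIMESTAMP, ROWNUM to LIMIT) are typical of Oracle-to-PostgreSQL rewrites.

    # Embedded SQL as it might appear in the application before transformation
    # (invented table and column names).
    oracle_sql = """
        SELECT order_id, NVL(ship_date, SYSDATE)
        FROM orders
        WHERE ROWNUM <= 10
    """

    # The PostgreSQL-compatible version Amazon Q Developer could propose:
    # NVL -> COALESCE, SYSDATE -> CURRENT_TIMESTAMP, ROWNUM -> LIMIT.
    postgresql_sql = """
        SELECT order_id, COALESCE(ship_date, CURRENT_TIMESTAMP)
        FROM orders
        LIMIT 10
    """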

This capability is available within the Visual Studio Code and IntelliJ IDEs.

Learn more and get started here.
 

Read more


Amazon Q Developer Pro tier introduces a new, improved dashboard for user activity

Amazon Q Developer Pro tier now provides a detailed usage activity dashboard that gives administrators greater visibility into how their subscribed users are leveraging Amazon Q Developer features and improving their productivity. The dashboard offers insights into user activity metrics, including the number of AI-generated code lines and the acceptance rate of individual features, such as inline code and chat suggestions, in developers’ integrated development environments (IDEs). This information enables administrators to monitor usage and evaluate productivity gains achieved through Amazon Q Developer.

New customers will have this usage dashboard enabled by default. Existing Amazon Q Developer administrators can activate the dashboard through the AWS Management Console to start tracking detailed usage metrics. Existing customers can also continue to view a copy of the previous set of metrics and usage data, in addition to the new detailed usage metrics dashboard. To learn more about this feature, visit Amazon Q Developer User Guide.

These improvements come in conjunction with the recently launched per-user activity report and last activity date features for Amazon Q Developer admins, further enhancing visibility and control over user activity.

To learn more about Amazon Q Developer Pro tier subscription management features, visit the AWS Console.

Read more


Amazon Q Business now available as a browser extension

Today, Amazon Web Services announces the general availability of Amazon Q Business browser extensions for Google Chrome, Mozilla Firefox, and Microsoft Edge. Users can now supercharge their browsers’ intelligence and receive context-aware, generative AI assistance, making it easy to get on-the-go help for their daily tasks.

The Amazon Q Business browser extension makes it easy for users to summarize web pages, ask questions about web content or uploaded files, and leverage large language model knowledge directly within their browser. With the browser extension, users can maximize reading productivity, streamline their research and analysis of complex information, and get instant help when creating content.

The Amazon Q Business browser extension is now available in the following AWS Regions: US East (N. Virginia), US East (Ohio), US West (N. California), and US West (Oregon).

Learn how to boost your productivity with AI-powered assistance within your browser by visiting the Amazon Q Business product page and the Amazon Q Business documentation site.

Read more


Amazon Q Developer Chat Customizations is now generally available

Today, Amazon Web Services (AWS) is excited to announce the general availability of customizable chat responses generated by Amazon Q Developer in the IDE. With this capability, you can securely connect Q Developer to your private codebases to receive more precise chat responses that take into account your organization’s internal APIs, libraries, classes, and methods. Readmes and best practices demonstrated within your code repositories are also utilized within your customization. You can use a customized version of Q Developer chat in the IDE to ask questions about how your internal codebase is structured, and where and how certain functions or libraries are used. With these capabilities, Q Developer can boost productivity by reducing the time builders spend examining previously written code and deciphering internal APIs, documentation, and other resources.

To get started, you first need to add your organization’s private repositories to Q Developer through the AWS Management Console, and then create and activate your customization. You can easily manage access to a customization from the AWS Management Console so that only specific developers have access. Each customization is isolated from other customers, and none of the customizations built with these new capabilities will be used to train the foundation models underlying Q Developer.

These capabilities are available as part of the Amazon Q Developer Pro subscription. To learn more about pricing, please visit Amazon Q Developer Pricing.

To learn more, see the Amazon Q Developer webpage.
 

Read more


Smartsheet connector for Amazon Q Business is now generally available

Today, AWS announces the general availability of the Smartsheet connector for Amazon Q Business. Smartsheet is a modern enterprise work management platform. This connector makes it easy to synchronize data from your Smartsheet instance with your Amazon Q Business index. Once configured, your employees can use Amazon Q Business to ask their intelligent assistant about their Smartsheet projects and tasks.

Amazon Q Business is a generative AI–powered assistant that can answer questions, provide summaries, generate content, and securely complete tasks based on data and information in your enterprise systems. It empowers employees to be more creative, data-driven, efficient, prepared, and productive. The over 40 connectors supported by Amazon Q Business can be scheduled to automatically sync your index with your selected data sources, so you're always securely searching through the most up-to-date content.

To learn more about Amazon Q Business and its integration with Smartsheet, visit our Amazon Q Business connectors webpage and documentation. The new connector with Smartsheet is available in all AWS Regions where Amazon Q Business is available.

Read more


Amazon Q Business introduces ability to reuse recently uploaded files in a conversation

Amazon Q Business is a fully managed, generative AI–powered assistant that can answer questions, provide summaries, generate content, and securely complete tasks based on data and information in your enterprise systems. Users can upload files, and Amazon Q can help summarize them or answer questions about them. Starting today, users can drag and drop files to upload and reuse any recently uploaded files in new conversations without uploading the files again.

With the recent documents list, users save time searching for and re-uploading frequently used files to Amazon Q Business. The list is only viewable by the individual who uploaded the file, and they can clear the cached list by deleting the conversation in which the file was used. Along with the recent documents list, users can now drag and drop files they want to upload directly into any conversation inside Amazon Q Business.

The ability to attach from recent files is available in all AWS Regions where Amazon Q Business is available.

You can enable attach from recent files for your team by following steps in the AWS Documentation. To learn more about Amazon Q Business, visit the Amazon Q homepage.

Read more


AWS Announces Amazon Q account resources chat in the AWS Console Mobile App

Today, Amazon Web Services (AWS) is announcing the general availability of Amazon Q Developer’s AWS account resources chat capability in the AWS Console Mobile Application. With this capability, you can use your device’s voice input and output capabilities along with natural language prompts to list resources in your AWS account, get specific resource details, and ask about related resources while on-the-go.

From the Amazon Q tab in the AWS Console Mobile App, you can ask Q to “list my running EC2 instances in us-east-1” or “list my S3 buckets” and Amazon Q returns a list of resource details, along with a summary. You can ask “what Amazon EC2 instances is Amazon CloudWatch alarm <name> monitoring” or ask “what related resources does my ec2 instance <id> have?” and Amazon Q will respond with specific resource details in a mobile friendly format.

The Console Mobile App lets users view and manage a select set of resources to stay informed and connected with their AWS resources while on-the-go. Visit the product page for more information about the Console Mobile Application.
 

Read more


Amazon Q Business now supports an integration with Asana (Preview)

Amazon Q Business now supports, in preview, a connector to Asana, a leading enterprise work management platform. This managed connector makes it easy for Amazon Q Business users to synchronize data from their Asana instance with their Amazon Q index. When connected, Amazon Q Business can help users answer questions and generate summaries with context from Asana projects.

Amazon Q Business is a generative AI–powered assistant that can answer questions, provide summaries, generate content, and securely complete tasks based on data and information in your enterprise systems. It empowers employees to be more creative, data-driven, efficient, prepared, and productive. The over 40 connectors supported by Amazon Q Business can be scheduled to automatically sync your index with your selected data sources, so you're always securely searching through the most up-to-date content.

To learn more about Amazon Q Business and its integration with Asana, visit the Amazon Q Business connectors page here. This new connector is available in all AWS Regions where Amazon Q Business is available.
 

Read more


Amazon Q Business now supports an integration with Google Calendar (Preview)

Amazon Q Business now supports a connector to Google Calendar. This expands Amazon Q Business’s support of Google Workspace to include Google Drive, Gmail, and now Google Calendar. Each managed connector makes it easy to synchronize your data with your Amazon Q index.

Amazon Q Business is a generative AI–powered assistant that can answer questions, provide summaries, generate content, and securely complete tasks based on data and information in your enterprise systems. It empowers employees to be more creative, data-driven, efficient, prepared, and productive. The over 40 connectors supported by Amazon Q Business can be scheduled to automatically sync your index with your selected data sources, so you're always securely searching through the most up-to-date content.

To learn more about Amazon Q Business and its integration with Google Calendar, visit the Amazon Q Business connectors page here. This new connector is available in all AWS Regions where Amazon Q Business is available.
 

Read more


Amazon Q Business now supports answers from tables embedded in documents

Amazon Q Business is a generative AI–powered assistant that can answer questions, provide summaries, generate content, and securely complete tasks based on data and information in your enterprise systems. A large portion of that information is found in text narratives stored in various document formats such as PDFs, Word files, and HTML pages. Some information is also stored in tables (e.g. price or product specification tables) embedded in those same document types, CSVs, or spreadsheets. Although Amazon Q Business can provide accurate answers from narrative text, getting answers from these tables requires special handling of more structured information.

Today, we are happy to announce support for tabular search in Amazon Q Business, enabling end users to extract answers from tables embedded in documents ingested in Amazon Q Business. With tabular search in Amazon Q Business, users can ask questions like “what’s the credit card with the lowest APR and no annual fees?” or “which credit cards offer travel insurance?” where the answers may be found in a product-comparison table, inside a marketing PDF stored in an internal repository, or on a website. Answers are returned as tables, lists, or text narratives depending on the context. Tabular search is an out-of-the-box feature in Amazon Q Business that works seamlessly across many domains, with no setup required from admins or end users. The feature supports tables embedded in HTML, PDF, Word, Excel, CSV, and Smartsheet (via the Smartsheet connector) formats.

Amazon Q Business tabular search is available in all AWS Regions where Amazon Q Business is available. To explore Amazon Q Business, visit the website.

Read more


Accelerate AWS CloudFormation troubleshooting with Amazon Q Developer assistance

AWS CloudFormation now offers generative AI assistance powered by Amazon Q Developer to help troubleshoot unsuccessful CloudFormation deployments. This new capability provides easy-to-understand analysis and actionable steps to simplify the resolution of the most common resource provisioning errors encountered during CloudFormation deployments.

When creating or modifying a CloudFormation stack, CloudFormation can encounter errors in resource provisioning, such as missing required parameters for an EC2 instance or inadequate permissions. Previously, troubleshooting a failed stack operation could be a time-consuming process. After identifying the root cause of the failure, you had to search through blogs and documentation for solutions and determine the next steps, leading to longer resolution times. Now, when you review a failed stack operation in the CloudFormation Console, CloudFormation automatically highlights the likely root cause of the failure. You can click the "Diagnose with Q" button in the error alert box and Amazon Q Developer will provide a human-readable analysis of the error, helping you understand what went wrong. If you need further assistance, you can click the "Help me resolve" button to receive actionable resolution steps tailored to your specific failure scenario, helping you accelerate resolution of the error.

To get started, open the CloudFormation Console and navigate to the stack events tab for a provisioned stack. This feature is available in AWS Regions where AWS CloudFormation and Amazon Q Developer are available. Refer to the AWS Region table for service availability details. Visit our user guide to learn more about this feature.
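The console flow above surfaces the likely root cause for you; for comparison, the sketch below shows the manual equivalent of locating the first failed resource event with boto3, which is the step the "Diagnose with Q" experience now short-circuits. The stack name is a placeholder.

    import boto3

    cfn = boto3.client("cloudformation")

    # Stack events are returned newest first; the oldest failure is
    # typically the root cause of the failed operation.
    events = cfn.describe_stack_events(StackName="my-stack")["StackEvents"]
    failures = [
        e for e in events
        if e["ResourceStatus"] in ("CREATE_FAILED", "UPDATE_FAILED")
    ]
    if failures:
        root_cause = failures[-1]
        print(root_cause["LogicalResourceId"], root_cause.get("ResourceStatusReason"))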
 

Read more


AWS Chatbot adds support for chatting about AWS resources with Amazon Q Developer in Microsoft Teams and Slack

We are excited to announce the general availability of Amazon Q Developer in AWS Chatbot, which provides answers to customers’ AWS resource related queries in Microsoft Teams and Slack.

When issues occur, customers need to quickly find relevant resources to troubleshoot them. Customers can now ask questions in natural language in chat channels to list resources in AWS accounts, get specific resource details, and ask about related resources using Amazon Q Developer.

With Amazon Q Developer in AWS Chatbot, customers can find AWS resources by typing “@aws show ec2 instances in running state in us-east-1” or “@aws what is the size of the auto scaling group XX in us-east-2?”

Get started with AWS Chatbot by visiting the Chatbot Console and by downloading the AWS Chatbot app from the Microsoft Teams marketplace or Slack App Directory. To get started chatting with Amazon Q in AWS Chatbot, see Asking Amazon Q questions in the AWS Chatbot documentation.

Read more


Amazon Q generative SQL in Amazon Redshift Query Editor now available in additional AWS regions

Amazon Q generative SQL in Amazon Redshift Query Editor is now available in the South America (São Paulo), Europe (London), and Canada (Central) AWS Regions. Amazon Q generative SQL is available in Amazon Redshift Query Editor, an out-of-the-box web-based SQL editor for Amazon Redshift, to simplify SQL query authoring and increase your productivity by allowing you to express SQL queries in natural language and receive SQL code recommendations. Furthermore, it allows you to get insights faster without extensive knowledge of your organization’s complex Amazon Redshift database metadata.

Amazon Q generative SQL uses generative artificial intelligence (AI) to analyze user intent, SQL query patterns, and schema metadata to identify common SQL query patterns directly within Amazon Redshift, accelerating the SQL query authoring process for users and reducing the time required to derive actionable data insights. Amazon Q generative SQL provides a conversational interface where users can submit SQL queries in natural language, within the scope of their current data permissions. For example, when you submit a question such as “Find total revenue by region,” Amazon Q generative SQL will recognize and suggest the appropriate SQL code for this frequent query pattern by joining multiple Amazon Redshift tables, thus saving time and decreasing the likelihood of errors. You can either accept the query or enhance your prior query by asking additional questions.
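As an illustration, the snippet below pairs the kind of SQL that generative SQL might suggest for "Find total revenue by region" with a Redshift Data API call that could execute the accepted query; the schema, workgroup name, and suggested query are all assumptions made for the sake of the example.

    import boto3

    client = boto3.client("redshift-data")

    # A plausible suggestion for "Find total revenue by region" (invented schema).
    suggested_sql = """
        SELECT r.region_name, SUM(s.amount) AS total_revenue
        FROM sales s
        JOIN regions r ON s.region_id = r.region_id
        GROUP BY r.region_name
        ORDER BY total_revenue DESC;
    """

    # Run the accepted query via the Redshift Data API (serverless workgroup shown;
    # use ClusterIdentifier for a provisioned cluster instead).
    response = client.execute_statement(
        WorkgroupName="my-serverless-workgroup",
        Database="dev",
        Sql=suggested_sql,
    )
    print(response["Id"])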

To learn more about pricing, visit the Amazon Q Developer pricing page. See the documentation to get started.
 

Read more


Amazon Q Developer in the AWS Management Console now uses the service you’re viewing as context for your chat

Amazon Q Developer in the AWS Management Console now provides context-aware assistance for your questions about resources in your account. This feature allows you to ask questions directly related to the console page you're viewing, eliminating the need to specify the service or resource in your query. Q Developer uses the current page as additional context to provide more accurate and relevant responses, streamlining your interaction with AWS services and resources. When the service or resource cannot be inferred, Q Developer now prompts for clarification about the specific resource in question. It presents a list of potentially relevant resources, allowing you to select the appropriate one.

Customers use AWS Management Console's curated experiences to investigate and act on their resources. Q Developer chat in the console allows customers to ask questions about AWS services and resources. Now, Q Developer uses the resource you're currently viewing as context, reducing the need to specify resource identifiers to Q. For example, if you are viewing an EC2 instance and ask Amazon Q, “what is the ami of this instance?” you will not need to specify the instance you are referring to. For ambiguous questions without clear context, Q Developer offers potentially relevant resource options. Q can now count up to 500 resources of a specific type to assist with quantification.

Start gaining deeper insight into your resources using the AWS resource inspection capabilities with Amazon Q in the AWS console. Learn more about Amazon Q Developer here.
 

Read more


Amazon Q Developer plugins for Datadog and Wiz now generally available

Today's launch extends the abilities of Q Developer to access trusted AWS partner services that customers know and love. Administrators on the Q Developer Pro Tier can enable plugins in the AWS Management Console by configuring the credentials to access these third-party services. Builders can now easily query and interact with Datadog and Wiz services directly in the console using Q Developer, helping them find information faster and stay in the flow longer. Customers can access a subset of information from Datadog and Wiz using natural language by asking “@datadog are there any active alerts?” or “@wiz what are my top 3 security issues today?”

Datadog, an AWS Advanced Technology Partner and the observability and security platform for cloud applications, provides AWS customers with unified, real-time observability and security across their entire technology stack.

With Wiz, organizations can democratize security across the development lifecycle, empowering them to build fast and securely. As an AWS Security Competency Partner, Wiz is committed to effectively reducing risk for AWS customers by seamlessly integrating into AWS services.

When starting a new conversation with Q Developer, use the commands @datadog or @wiz to quickly learn more about these services in the context of your AWS resources. Q Developer will call out to these service APIs, assemble a natural language response, and return a summary with deep links to the Datadog and Wiz resources.

To learn more about Amazon Q Developer, visit the service overview page.

Read more


Amazon Q Developer Pro tier adds enhanced administrator capabilities to view user activity

The Amazon Q Developer Pro tier now offers administrators greater visibility into the activity from subscribed users. Amazon Q Developer Pro tier administrators can now view user last activity information and enable daily user activity reports.

Organization administrators can now view the last activity information for each user's subscription and applications within that subscription, enabling better monitoring of usage. This allows inactive subscriptions to be easily identified through filtering and sorting across all associated applications. Member account administrators can view the last active date specific to the users, applications, and accounts they manage. The last active date is only shown for activity on or after October 30, 2024.

Additionally, member account administrators can enable detailed per-user activity reports in the Amazon Q Developer settings by specifying an Amazon S3 bucket where the reports should be published. When enabled, you will receive a daily report in Amazon S3 with detailed user activity metrics, such as the number of messages sent, and AI lines of code generated.
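Once the reports are flowing, retrieving them is ordinary S3 access; the sketch below lists the report objects under an assumed bucket and prefix, since the exact object layout should be confirmed in the Amazon Q Developer User Guide.

    import boto3

    s3 = boto3.client("s3")

    # Hypothetical bucket and prefix configured in the Q Developer settings.
    bucket, prefix = "my-qdev-activity-reports", "q-developer/"

    paginator = s3.get_paginator("list_objects_v2")
    for page in paginator.paginate(Bucket=bucket, Prefix=prefix):
        for obj in page.get("Contents", []):
            print(obj["Key"], obj["LastModified"])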

To learn more about Amazon Q Developer Pro tier subscription management features, visit the AWS Console.

Read more


Amazon Q Business adds simplified setup and new web app experience

Amazon Q Business is a fully managed, generative AI–powered assistant that can answer questions, provide summaries, generate content, and securely complete tasks based on data and information in your enterprise systems. Amazon Q Business now offers a simplified onboarding that helps administrators deliver a secure AI assistant quickly and a web app experience that allows end users to start using generative AI for their work immediately.

With this launch, administrators can provide end users with the web app even before indexing their internal corporate knowledge for use with Amazon Q Business. This allows end users to ask questions based on local files or world knowledge right away, providing immediate value for their jobs. As administrators index corporate data sources like wikis, documentation, and other information into Amazon Q Business, end users gain even richer insights from their generative AI assistant.

The new setup and web experience are available in all AWS Regions where Amazon Q Business is available.

You can get started with the new express setup and web experience in the Amazon Q Business console. To explore Amazon Q Business, visit the Amazon Q homepage.

Read more


amazon-quicksight

Announcing scenario analysis capability of Amazon Q in QuickSight (preview)

A new scenario analysis capability of Amazon Q in QuickSight is now available in preview. This new capability provides an AI-assisted data analysis experience that helps you make better decisions, faster. Amazon Q in QuickSight simplifies in-depth analysis with step-by-step guidance, saving hours of manual data manipulation and unlocking data-driven decision-making across your organization.

Amazon Q in QuickSight helps business users perform complex scenario analysis up to 10x faster than spreadsheets. You can ask a question or state your goal in natural language and Amazon Q in QuickSight guides you through every step of advanced data analysis—suggesting analytical approaches, automatically analyzing data, surfacing relevant insights, and summarizing findings with suggested actions. This agentic approach breaks down data analysis into a series of easy-to-understand, executable steps, helping you find solutions to complex problems without specialized skills or tedious, error-prone data manipulation in spreadsheets. Working on an expansive analysis canvas, you can intuitively iterate your way to solutions by directly interacting with data, refining analysis steps, or exploring multiple analysis paths side-by-side. This scenario analysis capability is accessible from any Amazon QuickSight dashboard, so you can move seamlessly from visualizing data to modeling solutions. With Amazon Q in QuickSight, you can easily modify, extend, and reuse previous analyses, helping you quickly adapt to changing business needs.

Amazon Q in QuickSight Pro users can use this new capability in preview in the following AWS regions: US East (N. Virginia) and US West (Oregon). To learn more, visit the Amazon Q in QuickSight documentation and read the AWS News Blog.

Read more


Amazon Q in QuickSight unifies insights from structured and unstructured data

Now generally available, Amazon Q in QuickSight provides users with unified insights from structured and unstructured data sources through integration with Amazon Q Business. While structured data is managed in conventional systems, unstructured data such as document libraries, webpages, images and more has remained largely untapped due to its diverse and distributed nature.

With Amazon Q in QuickSight, business users can now augment insights from traditional BI data sources such as databases, data lakes, and data warehouses with contextual information from unstructured sources. They can get augmented insights within QuickSight's BI interface across multi-visual Q&A and data stories. With multi-visual Q&A, users ask questions in natural language and get visualizations and data summaries augmented with contextual insights from Amazon Q Business. With data stories in Amazon Q in QuickSight, users can upload documents or connect to unstructured data sources from Amazon Q Business to create richer narratives or presentations explaining their data with additional context. This integration enables organizations to harness insights from all their data without manual collation, leading to more informed decision-making, time savings, and a significant competitive edge in the data-driven business landscape.

This new capability is generally available to all Amazon QuickSight Pro Users in US East (N. Virginia), and US West (Oregon) AWS Regions.

To learn more, visit the AWS Business Intelligence Blog and the Amazon Q Business What’s New post, and try QuickSight free for 30 days.
 

Read more


Amazon Q Business now provides insights from your databases and data warehouses (preview)

Today, AWS announces the public preview of the integration between Amazon Q Business and Amazon QuickSight, delivering a transformative capability that unifies answers from structured data sources (databases, warehouses) and unstructured data (documents, wikis, emails) in a single application.

Amazon Q Business is a generative AI–powered assistant that can answer questions, provide summaries, generate content, and securely complete tasks based on data and information in your enterprise systems. Amazon QuickSight is a business intelligence (BI) tool that helps you visualize and understand your structured data through interactive dashboards, reports, and analytics. While organizations want to leverage generative AI for business insights, they experience fragmented access to unstructured and structured data.

With the QuickSight integration, customers can now link their structured sources to Amazon Q Business through QuickSight’s extensive set of data source connectors. Amazon Q Business responds in real time, combining the QuickSight answer from your structured sources with any other relevant information found in documents. For example, users could ask about revenue comparisons, and Amazon Q Business will return an answer from PDF financial reports along with real-time charts and metrics from QuickSight. This integration unifies insights across knowledge sources, helping organizations make more informed decisions while reducing the time and complexity traditionally required to gather insights.

This integration is available to all Amazon Q Business Pro, Amazon QuickSight Reader Pro, and Author Pro users in the US East (N. Virginia) and US West (Oregon) AWS Regions. To learn more, visit the Amazon Q Business documentation site.

Read more


Amazon QuickSight now supports prompted reports and reader scheduling for pixel-perfect reports

We are enabling Amazon QuickSight readers to generate filtered views of pixel-perfect reports and create schedules to deliver reports via email. Readers can create up to five schedules per dashboard for themselves. Previously, only dashboard owners could create schedules and only on the default (author published) view of the dashboard. Now, if an author has added controls to the pixel-perfect report, schedules can be created or updated to respect selections on the filter control.

These features empower each user to create the view of a pixel-perfect report they are interested in and receive it as a scheduled report. Authors can create filter controls (prompts) so that different audiences can customize the view they need. Readers can use the prompts to filter data and schedule the filtered view as a report. This ensures that customers receive the reports they are interested in, when they are interested in them.

Prompted Reports and Reader Scheduling are now available in all supported Amazon QuickSight regions - see here for QuickSight regional endpoints.

To learn how to set this up, see our documentation for reader scheduling and documentation for prompted reports.

Read more


Amazon QuickSight launches Highcharts visual (preview)

Amazon QuickSight now offers Highcharts visuals, enabling authors to create custom visualizations using the Highcharts Core library. This new feature extends your visualization capabilities beyond QuickSight's standard chart offerings, allowing you to create bespoke charts such as sunburst charts, network graphs, 3D charts and many more.

Using declarative JSON syntax, authors can configure charts with greater flexibility and granular customization. You can easily reference QuickSight fields and themes in the JSON using QuickSight expressions. The integrated code editor includes contextual assistance features, providing autocomplete and real-time validation to ensure proper configuration. To maintain security, the Highcharts visual editor prevents the injection of CSS and JavaScript. Refer to the documentation for the supported list of JSON options and QuickSight expressions.

Highcharts visual is now available in all supported Amazon QuickSight regions - US East (Ohio and N. Virginia), US West (Oregon), Africa (Cape Town), Asia Pacific (Jakarta, Mumbai, Seoul, Singapore, Sydney and Tokyo), Canada (Central), Europe (Frankfurt, Ireland, London, Milan, Paris, Stockholm, Zurich), South America (São Paulo) and AWS GovCloud (US-West). To learn more about the Highcharts visual and how to leverage its capabilities in your QuickSight dashboards, visit our documentation.

Read more


Amazon QuickSight now supports import visual capability (preview)

Amazon QuickSight introduces the ability to import visuals from an existing dashboard or analysis on which you have ownership privileges into your current analysis. This feature streamlines dashboard and report creation by allowing you to transfer associated dependencies such as datasets, parameters, calculated fields, filter definitions, and visual properties, including conditional formatting rules.

Authors can boost productivity by importing visuals instead of recreating them, facilitating collaboration across teams. The feature intelligently resolves conflicts, eliminates duplicates, rescopes filter definitions, and adjusts visuals to match the destination sheet type and theme. Imported visuals are forked from the source, ensuring independent customization. To learn more, click here.

The Import Visuals feature is available in all supported Amazon QuickSight regions - US East (Ohio and N. Virginia), US West (Oregon), Africa (Cape Town), Asia Pacific (Jakarta, Mumbai, Seoul, Singapore, Sydney and Tokyo), Canada (Central), Europe (Frankfurt, Ireland, London, Milan, Paris, Stockholm, Zurich), South America (São Paulo) and AWS GovCloud (US-West).

Read more


Amazon QuickSight launches Layer Map

Amazon QuickSight launches Layer Map, a new geospatial visual with shape layer support. With Layer Maps you can visualize data using custom geographic boundaries, such as congressional districts, sales territories, or user-defined regions. For example, sales managers can visualize sales performance by custom sales territories, and operations analysts can map package delivery volumes across different zip code formats (zip 2, zip 3).

Authors can add a shape layer over a base map by uploading a GeoJSON file and joining it with their data to visualize values. You can also style the shape layer by adjusting color, border, and opacity, as well as add interactivity through tooltips and actions. To learn more, click here.

Layer Map is now available in the following Amazon QuickSight regions - US East (Ohio and N. Virginia), US West (Oregon), Africa (Cape Town), Asia Pacific (Jakarta, Mumbai, Seoul, Singapore, Sydney and Tokyo), Canada (Central), Europe (Frankfurt, Ireland, London, Milan, Paris, Stockholm, Zurich), South America (São Paulo).

Read more


Amazon QuickSight launches Image component

Amazon QuickSight now includes the Image component, which gives authors greater flexibility to incorporate static images into their QuickSight dashboards, analyses, reports, and stories.

With the Image component, authors can upload images directly from their local desktop to QuickSight for a variety of use cases, such as adding company logos and branding, including background images with free-form layout, and creating captivating story covers. It also supports tooltips and alt text, providing additional context and accessibility for readers. Furthermore, it offers navigation and URL actions, enabling authors to make their images interactive, such as triggering specific dashboard actions when the image is clicked. For more details, refer to the documentation.

Image component is now available in all supported Amazon QuickSight regions - US East (Ohio and N. Virginia), US West (Oregon), Africa (Cape Town), Asia Pacific (Jakarta, Mumbai, Seoul, Singapore, Sydney and Tokyo), Canada (Central), Europe (Frankfurt, Ireland, London, Milan, Paris, Stockholm, Zurich), South America (São Paulo) and AWS GovCloud (US-West).

Read more


Amazon QuickSight now supports font customization for visuals

Amazon QuickSight now supports the ability to customize fonts across specific visuals. Authors can now fully customize fonts for Table and Pivot table visuals, while for the remaining visuals they can customize fonts for specific properties, including the title, subtitle, legend title, and legend values.

Authors can set the font size (in pixels), font family, color, and styling options like bold, italics, and underline across analyses, including dashboards, reports, and embedded scenarios. With this update, you can align a dashboard's fonts with your organization's branding guidelines, creating a cohesive and visually appealing experience. Additionally, the font customization options can help improve readability and meet accessibility standards, especially when viewing visuals on a large screen.

Font customization for the visuals listed above is now available in all supported Amazon QuickSight regions - US East (Ohio and N. Virginia), US West (Oregon), Africa (Cape Town), Asia Pacific (Jakarta, Mumbai, Seoul, Singapore, Sydney and Tokyo), Canada (Central), Europe (Frankfurt, Ireland, London, Milan, Paris, Stockholm, Zurich), South America (São Paulo) and AWS GovCloud (US-West).
 

Read more


Amazon QuickSight supports fine grained permissions for capabilities with APIs for IAM Identity Center users

Amazon QuickSight now supports user-level custom permissions profile assignment for IAM Identity Center users. Custom permissions profiles enable administrators to restrict access to capabilities in the QuickSight application by adding the profile to a user. A custom permissions profile defines which capabilities are disabled for a user or role. For example, administrators can restrict specific users from exporting data to Excel and CSV and prevent users from sharing QuickSight assets.

Custom permissions profiles are managed with the following APIs: CreateCustomPermissions, ListCustomPermissions, DescribeCustomPermissions, UpdateCustomPermissions and DeleteCustomPermissions. Custom permissions assignment to users is managed with the following APIs: UpdateUserCustomPermission and DeleteUserCustomPermission. These APIs are supported with all identity types in QuickSight.
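
For teams automating this, a minimal sketch of the new APIs from Python (boto3) might look like the following; the capability key names, account ID, and user name are illustrative, so confirm the supported capability set against the CreateCustomPermissions API reference:

```python
import boto3

quicksight = boto3.client("quicksight")
account_id = "111122223333"  # example account ID

# Create a custom permissions profile that blocks exports and sharing.
# The capability keys below are illustrative; see the API reference
# for the exact supported set.
quicksight.create_custom_permissions(
    AwsAccountId=account_id,
    CustomPermissionsName="restricted-analysts",
    Capabilities={
        "ExportToCsv": "DENY",
        "ExportToExcel": "DENY",
        "ShareDashboards": "DENY",
    },
)

# Assign the profile to a (hypothetical) IAM Identity Center user.
quicksight.update_user_custom_permission(
    UserName="analyst-jane",
    AwsAccountId=account_id,
    Namespace="default",
    CustomPermissionsName="restricted-analysts",
)
```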

This feature is available in all AWS Regions where Amazon QuickSight is available. To learn more, see Customizing access to Amazon QuickSight capabilities.

Read more


Amazon QuickSight launches self-serve brand customization

Amazon QuickSight launches self-serve brand customization, which allows QuickSight admins with relevant AWS Identity and Access Management (IAM) permissions to align QuickSight’s user interface with their organization’s brand by modifying visual elements like brand colors and logo. This creates a cohesive look and feel that aligns with their organization’s identity. Brand customization includes customization of the logo, favorite icon, and color scheme used for QuickSight screen elements. Admins can configure and apply a custom brand through the public API or UI. Once a brand is applied to the account, it is reflected across all non-admin pages in the QuickSight console, embedded components, as well as schedules, alerts, and share emails. For more information and to see the list of all QuickSight components which can be customized, click here.

The self-serve brand customization is available with the Amazon QuickSight Enterprise Edition in all supported Amazon QuickSight regions - US East (Ohio and N. Virginia), US West (Oregon), Africa (Cape Town), Asia Pacific (Jakarta, Mumbai, Seoul, Singapore, Sydney and Tokyo), Canada (Central), China (Beijing), Europe (Frankfurt, Ireland, London, Milan, Paris, Stockholm, Zurich), South America (São Paulo) and AWS GovCloud (US-West).
 

Read more


Amazon QuickSight now supports Client Credentials OAuth for Snowflake through API/CLI

Today, Amazon QuickSight is announcing the general availability of Client Credentials flow-based OAuth through the API/CLI to connect to Snowflake data sources. This launch enables you to create Snowflake connections as part of your Infrastructure as Code (IaC) efforts, with full support for AWS CloudFormation.

This type of OAuth solution is used to obtain an access token for machine-to-machine communication. This flow is suitable for scenarios where a client (e.g., a server-side application or a script) needs to access resources hosted on a server without the involvement of a user. The launch includes support for Token (Client Secrets based OAuth) and X509 (Client Private Key JWT) based OAuth. This launch also includes support for Role Based Access Control (RBAC). RBAC is used to display the corresponding schema/table information tied to that role during dataset creation by QuickSight authors.
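
A hedged boto3 sketch of what such an IaC-friendly call might look like; the AuthenticationType and OAuthParameters field names are assumptions to verify against the current CreateDataSource API reference, and all hosts, URLs, and identifiers are hypothetical:

```python
import boto3

quicksight = boto3.client("quicksight")

# Register a Snowflake data source that authenticates with the
# Client Credentials (client-secret token) OAuth flow. Field names
# inside SnowflakeParameters beyond Host/Database/Warehouse are
# assumptions; check the CreateDataSource API reference.
quicksight.create_data_source(
    AwsAccountId="111122223333",
    DataSourceId="snowflake-oauth-demo",
    Name="Snowflake via OAuth",
    Type="SNOWFLAKE",
    DataSourceParameters={
        "SnowflakeParameters": {
            "Host": "myaccount.snowflakecomputing.com",  # hypothetical
            "Database": "ANALYTICS",
            "Warehouse": "REPORTING_WH",
            "AuthenticationType": "TOKEN",  # assumed: 'TOKEN' or 'X509'
            "OAuthParameters": {
                "TokenProviderUrl": "https://idp.example.com/oauth2/token",
                "OAuthScope": "session:role:REPORTING",
            },
        }
    },
)
```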

This feature is now available in all supported Amazon QuickSight regions, listed here. For more details, click here.

Read more


Amazon QuickSight now supports Client Credentials OAuth for Starburst through API/CLI

Today, Amazon QuickSight is announcing the general availability of Client Credentials flow-based OAuth through the API/CLI to connect to Starburst data sources. This launch enables you to create Starburst connections as part of your Infrastructure as Code (IaC) efforts, with full support for AWS CloudFormation.

This type of OAuth solution is used to obtain an access token for machine-to-machine communication. This flow is suitable for scenarios where a client (e.g., a server-side application or a script) needs to access resources hosted on a server without the involvement of a user. The launch includes support for Token (Client Secrets based OAuth) and X509 (Client Private Key JWT) based OAuth. This launch also includes support for Role Based Access Control (RBAC). RBAC is used to display the corresponding schema/table information tied to that role during dataset creation by QuickSight authors.

This feature is now available in all supported Amazon QuickSight regions, listed here. For more details, click here.

Read more


amazon-rds

Amazon RDS Performance Insights extends On-demand Analysis to new regions

Amazon RDS (Relational Database Service) Performance Insights expands the availability of its on-demand analysis experience to 15 new regions. This feature is available for Aurora MySQL, Aurora PostgreSQL, and RDS for PostgreSQL engines.

This on-demand analysis experience, which was previously available in only 15 regions, is now available in all commercial regions. This feature allows you to analyze Performance Insights data for a time period of your choice. You can learn how the selected time period differs from normal, what went wrong, and get advice on corrective actions. Through simple-to-understand graphs and explanations, you can identify the chief contributors to performance issues. You will also get guidance on the next steps to act on these issues. This can reduce the mean-time-to-diagnosis for database performance issues from hours to minutes.

Amazon RDS Performance Insights is a database performance tuning and monitoring feature of RDS that allows you to visually assess the load on your database and determine when and where to take action. With one click in the Amazon RDS Management Console, you can add a fully-managed performance monitoring solution to your Amazon RDS database.

To learn more about RDS Performance Insights, read the Amazon RDS User Guide and visit Performance Insights pricing for pricing details and region availability.
 

Read more


Amazon RDS for PostgreSQL, MySQL, and MariaDB now supports M8g and R8g database instances

AWS Graviton4-based M8g and R8g database (DB) instances are now generally available for Amazon Relational Database Service (RDS) for PostgreSQL, MySQL, and MariaDB. Graviton4-based instances provide up to a 40% performance improvement and up to 29% price/performance improvement for on-demand pricing over Graviton3-based instances of equivalent sizes on Amazon RDS open source databases, depending on database engine, version, and workload.

AWS Graviton4 processors are the latest generation of custom-designed AWS Graviton processors built on the AWS Nitro System. Both M8g and R8g DB instances are available with new 24xlarge and 48xlarge sizes. With these new sizes, M8g and R8g DB instances offer up to 192 vCPU, up to 50Gbps enhanced networking bandwidth, and up to 40Gbps of bandwidth to the Amazon Elastic Block Store (Amazon EBS).

These instances are now available in the US East (N. Virginia, Ohio), US West (Oregon), and Europe (Frankfurt) Regions. For complete information on pricing and regional availability, please refer to the Amazon RDS pricing page. For information on specific engine versions that support these DB instance types, please see the Amazon RDS documentation.
 

Read more


Amazon RDS for SQL Server Supports Minor Versions in November 2024

New minor versions of Microsoft SQL Server are now available on Amazon RDS for SQL Server, providing performance enhancements and security fixes. Amazon RDS for SQL Server now supports these latest minor versions of SQL Server 2016, 2017, 2019 and 2022 across the Express, Web, Standard, and Enterprise editions.

We encourage you to upgrade your Amazon RDS for SQL Server database instances at your convenience. You can upgrade with just a few clicks in the Amazon RDS Management Console or by using the AWS CLI. Learn more about upgrading your database instances from the Amazon RDS User Guide. The new minor versions include:
 

  • SQL Server 2016 GDR for SP3 - 13.0.6455.2
  • SQL Server 2017 CU31 GDR - 14.0.3485.1
  • SQL Server 2019 CU29 GDR - 15.0.4410.1
  • SQL Server 2022 CU16 - 16.0.4165.4


These minor versions are available in all AWS commercial regions where Amazon RDS for SQL Server databases are available, as well as the AWS GovCloud (US) Regions.

Amazon RDS for SQL Server makes it simple to set up, operate, and scale SQL Server deployments in the cloud. See Amazon RDS for SQL Server Pricing for pricing details and regional availability.

Read more


Amazon RDS Blue/Green Deployments support minor version upgrade for RDS for PostgreSQL

Amazon Relational Database Service (Amazon RDS) Blue/Green Deployments now supports safer, simpler, and faster minor version upgrades for your Amazon RDS for PostgreSQL databases using physical replication. The use of PostgreSQL physical replication for database change management, such as minor version upgrades, simplifies your RDS Blue/Green Deployments upgrade experience by overcoming the limitations of PostgreSQL community logical replication.

You can now use Amazon RDS Blue/Green Deployments for deploying multiple database changes to production such as minor version upgrades, shrink storage volume, maintenance updates, and scaling instances in a single switchover event using physical replication. RDS Blue/Green Deployments for PostgreSQL relies on logical replication for major version upgrades.

Blue/Green Deployments for PostgreSQL creates a fully managed staging environment using physical replication for minor version upgrades, which allows you to deploy and test production changes while keeping your current production database safer. With a few clicks, you can switch over the staging environment to be the new production system in as fast as a minute, with no data loss and no changes to your application for database endpoint management.
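
As a rough sketch, a minor version upgrade through Blue/Green Deployments can be driven from the AWS SDK for Python (boto3); the instance ARN, deployment name, and target version below are hypothetical:

```python
import boto3

rds = boto3.client("rds")

# Create a Blue/Green deployment that stages a minor version upgrade;
# for RDS for PostgreSQL minor upgrades, the staging (Green) environment
# is kept in sync via physical replication.
response = rds.create_blue_green_deployment(
    BlueGreenDeploymentName="pg-minor-upgrade",
    Source="arn:aws:rds:us-east-1:111122223333:db:prod-postgres",
    TargetEngineVersion="16.6",  # desired minor version
)

# After validating the Green environment, promote it in one switchover.
rds.switchover_blue_green_deployment(
    BlueGreenDeploymentIdentifier=response["BlueGreenDeployment"][
        "BlueGreenDeploymentIdentifier"
    ],
    SwitchoverTimeout=300,  # seconds
)
```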

Amazon RDS Blue/Green Deployments is now available for Amazon RDS for PostgreSQL using physical replication for all minor versions of major versions 11 and higher, in all applicable AWS Regions. In a few clicks, update your databases using Amazon RDS Blue/Green Deployments via the Amazon RDS Console. Learn more about Blue/Green Deployments on the Amazon RDS features page.
 

Read more


Amazon RDS for PostgreSQL supports pgvector 0.8.0

Amazon Relational Database Service (RDS) for PostgreSQL now supports pgvector 0.8.0, an open-source extension for PostgreSQL for storing and efficiently querying vector embeddings in your database, letting you use retrieval-augmented generation (RAG) when building your generative AI applications. The pgvector 0.8.0 release includes improvements to the PostgreSQL query planner's index selection when filters are present, which can deliver better query performance and improve search result quality.

The pgvector 0.8.0 release includes a variety of improvements to how pgvector filters data using conditions in WHERE clauses and joins, which can improve query performance and usability. Additionally, iterative index scans help prevent 'overfiltering', ensuring generation of sufficient results to satisfy the conditions of a query. If an initial index scan doesn't satisfy the query conditions, pgvector will continue to search the index until it hits a configurable threshold. This release also has performance improvements for searching and building HNSW indexes.
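
For example, a filtered nearest-neighbor query that opts in to iterative index scans might look like the following Python (psycopg2) sketch; the table, column, and connection details are placeholders, and the GUC names follow the pgvector 0.8.0 release notes:

```python
import psycopg2

# Connect to an RDS for PostgreSQL instance with pgvector 0.8.0 installed.
# Connection details are placeholders.
conn = psycopg2.connect(host="mydb.example.us-east-1.rds.amazonaws.com",
                        dbname="vectors", user="app", password="...")

with conn, conn.cursor() as cur:
    # Opt in to iterative index scans so a filtered HNSW search keeps
    # scanning until enough rows satisfy the WHERE clause.
    cur.execute("SET hnsw.iterative_scan = 'relaxed_order';")
    cur.execute("SET hnsw.max_scan_tuples = 40000;")  # configurable threshold
    cur.execute(
        """
        SELECT id FROM items
        WHERE category = %s
        ORDER BY embedding <-> %s::vector
        LIMIT 10;
        """,
        ("books", "[0.1, 0.2, 0.3]"),
    )
    print(cur.fetchall())
```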

pgvector 0.8.0 is available on database instances in Amazon RDS running PostgreSQL 17.1 and higher, 16.5 and higher, 15.9 and higher, 14.14 and higher, and 13.17 and higher in all applicable AWS Regions.

Amazon RDS for PostgreSQL makes it simple to set up, operate, and scale PostgreSQL deployments in the cloud. See Amazon RDS for PostgreSQL Pricing for pricing details and regional availability. Create or update a fully managed Amazon RDS database in the Amazon RDS Management Console.

Read more


Amazon RDS Blue/Green Deployments Green storage fully performant prior to switchover

Amazon Relational Database Service (Amazon RDS) Blue/Green Deployments now support managed initialization of Green storage volumes that accelerates the loading of storage blocks from Amazon S3. This ensures that the volumes are fully performant prior to switchover of the Green databases. Blue/Green Deployments create a fully managed staging environment, or Green database, by restoring the Blue database snapshot. The Green database allows you to deploy and test production changes, keeping your current production database, or Blue database, safer.

Previously, you had to manually initialize the storage volumes of the Green databases. With this launch, RDS Blue/Green Deployments will proactively manage and accelerate the storage initialization for your Green database instances. You can view the progress of storage initialization using the RDS Console and command line interface (CLI). Managed storage initialization of the Green databases is supported for Blue/Green Deployments created for the RDS for PostgreSQL, RDS for MySQL, and RDS for MariaDB engines.

Amazon RDS Blue/Green Deployments are available for Amazon RDS for PostgreSQL major versions 12 and higher, RDS for MySQL major versions 5.7 and higher, and Amazon RDS for MariaDB major versions 10.4 and higher.

In a few clicks, update your databases using Amazon RDS Blue/Green Deployments via the Amazon RDS Console. Learn more about RDS Blue/Green Deployments and the supported engine versions here.
 

Read more


Amazon RDS for PostgreSQL supports minor versions 17.2, 16.6, 15.10, 14.15, 13.18, and 12.22

Amazon Relational Database Service (RDS) for PostgreSQL now supports the latest minor versions 17.2, 16.6, 15.10, 14.15, 13.18, and 12.22. We recommend that you upgrade to the latest minor versions to fix known security vulnerabilities in prior versions of PostgreSQL, and to benefit from the bug fixes added by the PostgreSQL community.

You can leverage automatic minor version upgrades to automatically upgrade your databases to more recent minor versions during a scheduled maintenance window. Learn more about upgrading your database instances in the Amazon RDS User Guide.
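
As a minimal sketch, opting a hypothetical instance in to automatic minor version upgrades with the AWS SDK for Python (boto3) looks roughly like this:

```python
import boto3

rds = boto3.client("rds")

# Enable automatic minor version upgrades so RDS applies new PostgreSQL
# minors (for example, 17.2) during the maintenance window. The instance
# identifier and window are hypothetical.
rds.modify_db_instance(
    DBInstanceIdentifier="prod-postgres",
    AutoMinorVersionUpgrade=True,
    PreferredMaintenanceWindow="sun:06:00-sun:07:00",
    ApplyImmediately=True,
)
```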

Additionally, starting with PostgreSQL major version 18, Amazon RDS for PostgreSQL will deprecate the plcoffee and plls PostgreSQL extensions. We recommend that you stop using CoffeeScript and LiveScript in your applications to ensure you have an upgrade path for the future.

Amazon RDS for PostgreSQL makes it simple to set up, operate, and scale PostgreSQL deployments in the cloud. See Amazon RDS for PostgreSQL Pricing for pricing details and regional availability. Create or update a fully managed Amazon RDS database in the Amazon RDS Management Console.
 

Read more


Amazon RDS for MySQL now supports MySQL 8.4 LTS release

Amazon RDS for MySQL now supports MySQL major version 8.4, the latest long-term support (LTS) release from the MySQL community. RDS for MySQL 8.4 is integrated with AWS Libcrypto (AWS-LC) FIPS module (Certificate #4816), and includes support for multi-source replication plugin for analytics, Group Replication plugin for continuous availability, as well as several performance and feature improvements added by the MySQL community. Learn more about the community enhancements in the MySQL 8.4 release notes.

You can leverage Amazon RDS Managed Blue/Green deployments to upgrade your databases from MySQL 8.0 to MySQL 8.4. Learn more about RDS for MySQL 8.4 features and upgrade options, including Managed Blue/Green deployments in the Amazon RDS User Guide.

Amazon RDS for MySQL 8.4 is now available in all AWS Commercial and the AWS GovCloud (US) Regions.

Amazon RDS for MySQL makes it straightforward to set up, operate, and scale MySQL deployments in the cloud. Learn more about pricing details and regional availability at Amazon RDS for MySQL. Create or update a fully managed Amazon RDS for MySQL 8.4 database in the Amazon RDS Management Console.
 

Read more


Announcing auto migration of EC2 databases to Amazon RDS using AWS Database Migration Service

AWS announces a “1-click move to managed” feature for Amazon Relational Database Service (Amazon RDS) that enables you to easily and seamlessly migrate your self-managed MySQL, PostgreSQL, or MariaDB databases to an equivalent Amazon RDS or Amazon Aurora database.

Using the 1-click move to managed functionality on the Amazon RDS console, you can migrate your self-managed databases running on an Amazon EC2 server to a managed Amazon RDS or Aurora database. This feature eliminates the infrastructure setup burden and makes it easy and seamless to re-platform your application’s database workload to Amazon RDS. Amazon RDS leverages AWS Database Migration Service (DMS) homogeneous migration APIs to abstract and automate the entire process, including the networking and system configuration required to initiate and complete the migration. The process is flexible, scalable, and cost effective because the entire migration is performed using a temporary environment and native database tools.

The RDS 1-click move to managed feature is now available on the RDS console in AWS commercial regions where homogeneous data migrations are supported. Get started today by visiting the Amazon RDS Console. Refer to the RDS user guide or the Aurora user guide to learn more.

Read more


Amazon RDS Blue/Green Deployments support storage volume shrink

Amazon Relational Database Service (Amazon RDS) Blue/Green Deployments now supports the ability to shrink the storage volumes for your RDS database instances, allowing you to better utilize your storage resources and manage their costs. You can now increase and decrease your storage volume size based on anticipated application demands.

Previously, to shrink a storage volume, you had to manually create a new database instance with a smaller volume size, manually migrate the data from your current database to the newly created database instance, and switch database endpoints, often resulting in extended downtime. Blue/Green Deployments create a fully managed staging environment, or Green databases, with your specified storage size, and keep the Blue and Green databases in sync. With a few clicks, you can promote the Green databases to be the new production system in as fast as a minute, with no data loss and no changes to your application to switch database endpoints.
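
A minimal boto3 sketch of a storage shrink, assuming a hypothetical source instance; the Green environment is created with the smaller TargetAllocatedStorage and promoted at switchover:

```python
import boto3

rds = boto3.client("rds")

# Stage a smaller storage volume via Blue/Green: the Green environment
# is created with the reduced allocated storage (in GiB) and can be
# promoted once it is in sync. The source ARN is hypothetical.
rds.create_blue_green_deployment(
    BlueGreenDeploymentName="shrink-storage",
    Source="arn:aws:rds:us-east-1:111122223333:db:prod-mysql",
    TargetAllocatedStorage=200,  # smaller than the Blue volume
)
```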

Amazon RDS Blue/Green Deployments support for storage volume shrink is available for Amazon RDS for PostgreSQL major versions 12 and higher, RDS for MySQL major versions 5.7 and higher, and Amazon RDS for MariaDB major versions 10.4 and higher.

In a few clicks, update your databases using Amazon RDS Blue/Green Deployments via the Amazon RDS Console. Learn more about RDS Blue/Green Deployments and the supported engine versions here.

Read more


Amazon Aurora now supports PostgreSQL 17.0 in the Amazon RDS Database preview environment

Amazon Aurora PostgreSQL-Compatible Edition now supports PostgreSQL version 17.0 in the Amazon RDS Database Preview Environment, allowing you to evaluate PostgreSQL 17.0 on Amazon Aurora PostgreSQL. PostgreSQL 17.0 was released by the PostgreSQL community on September 26, 2024. PostgreSQL 17 adds new features like a new memory management system for VACUUM and new SQL/JSON capabilities, including constructors, identity functions, and the JSON_TABLE() function. To learn more about PostgreSQL 17, read here.

Database instances in the RDS Database Preview Environment allow testing of a new database engine without the hassle of having to self-install, provision, and manage a preview version of the Aurora PostgreSQL database software. Clusters are retained for a maximum period of 60 days and are automatically deleted after this retention period. Amazon RDS Database Preview Environment database instances are priced the same as production Aurora instances created in the US East (Ohio) Region.
 

Amazon Aurora is designed for unparalleled high performance and availability at global scale with full MySQL and PostgreSQL compatibility. It provides built-in security, continuous backups, serverless compute, up to 15 read replicas, automated multi-Region replication, and integrations with other AWS services. To get started with Amazon Aurora, take a look at our getting started page.

Read more


Amazon RDS for PostgreSQL now supports major version 17

Amazon RDS for PostgreSQL now supports major version 17, starting with PostgreSQL version 17.1. The release includes support for the latest minor versions 16.5, 15.9, 14.14, 13.17, and 12.21. RDS for PostgreSQL comes with support for 94 PostgreSQL extensions, such as pgvector 0.8.0, pg_tle 1.4.0, pgactive 2.1.4, and hypopg 1.4.1, that are updated to support PostgreSQL 17. This release also includes support for a new SQL function for monitoring autovacuum, providing insights to prevent transaction ID wraparound.

PostgreSQL 17 community updates include support for vacuuming that reduces memory usage, improves time to finish vacuuming, and shows progress of vacuuming indexes. With PostgreSQL 17, you no longer need to drop logical replication slots when performing a major version upgrade. PostgreSQL 17 continues to build on the SQL/JSON standard, adding support for JSON_TABLE features that can convert JSON to a standard PostgreSQL table. PostgreSQL 17 also includes general improvements to query performance and adds more flexibility to partition management with the ability to SPLIT/MERGE partitions.
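
For instance, JSON_TABLE() can flatten a JSON array into rows and columns; here is a small Python (psycopg2) sketch with placeholder connection details:

```python
import psycopg2

# Demonstrates the PostgreSQL 17 JSON_TABLE() function, which projects a
# JSON document into a regular row set. Connection details are placeholders.
conn = psycopg2.connect(host="mydb.example.us-east-1.rds.amazonaws.com",
                        dbname="appdb", user="app", password="...")

with conn, conn.cursor() as cur:
    cur.execute(
        """
        SELECT *
        FROM JSON_TABLE(
            '[{"name": "widget", "qty": 3}, {"name": "gadget", "qty": 7}]'::jsonb,
            '$[*]'
            COLUMNS (name text PATH '$.name', qty int PATH '$.qty')
        ) AS jt;
        """
    )
    print(cur.fetchall())  # [('widget', 3), ('gadget', 7)]
```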

You can upgrade your database using several options, including RDS Blue/Green Deployments, an in-place upgrade, or restoring from a snapshot. Learn more about upgrading your database instances in the Amazon RDS User Guide.

Amazon RDS for PostgreSQL makes it simple to set up, operate, and scale PostgreSQL deployments in the cloud. See Amazon RDS for PostgreSQL Pricing for pricing details and regional availability. Create or update a fully managed Amazon RDS database in the Amazon RDS Management Console.

Read more


Amazon RDS for MySQL supports new minor version 8.0.40

Amazon Relational Database Service (Amazon RDS) for MySQL now supports MySQL minor version 8.0.40. We recommend that you upgrade to the latest minor versions to fix known security vulnerabilities in prior versions of MySQL, and to benefit from the bug fixes, performance improvements, and new functionality added by the MySQL community. Learn more about the enhancements in RDS for MySQL 8.0.40 in the Amazon RDS user guide.

You can leverage automatic minor version upgrades to automatically upgrade your databases to more recent minor versions during scheduled maintenance windows. You can also leverage Amazon RDS Managed Blue/Green deployments for safer, simpler, and faster updates to your MySQL instances. Learn more about upgrading your database instances, including automatic minor version upgrades and Blue/Green Deployments, in the Amazon RDS User Guide.

Amazon RDS for MySQL makes it simple to set up, operate, and scale MySQL deployments in the cloud. Learn more about pricing details and regional availability at Amazon RDS for MySQL. Create or update a fully managed Amazon RDS for MySQL database in the Amazon RDS Management Console.

Read more


Amazon RDS announces cross-region automated backups in Asia Pacific (Hyderabad) and Africa (Cape Town)

Cross-Region Automated Backup replication for Amazon RDS is now available in the Asia Pacific (Hyderabad) and Africa (Cape Town) Regions. This launch allows you to set up automated backup replication between Asia Pacific (Hyderabad) and Asia Pacific (Mumbai), and between Africa (Cape Town) and the Europe (Ireland), Europe (London), or Europe (Frankfurt) Regions.

Automated Backups enable recovery capability for mission-critical databases by providing you the ability to restore your database to a specific point in time within your backup retention period. With Cross-Region Automated Backup replication, RDS will replicate snapshots and transaction logs to the chosen destination AWS Region. In the event that your primary AWS Region becomes unavailable, you can restore the automated backup to a point in time in the secondary AWS Region and quickly resume operations. As transaction logs are uploaded to the target AWS Region frequently, you can achieve a Recovery Point Objective (RPO) of within the last few minutes.

You can set up Cross-Region Automated Backup replication with just a few clicks on the Amazon RDS Management Console or using the AWS SDK or CLI. Cross-Region Automated Backup replication is available on Amazon RDS for PostgreSQL, Amazon RDS for MariaDB, Amazon RDS for MySQL, Amazon RDS for Oracle, and Amazon RDS for Microsoft SQL Server. For more information, including instructions on getting started, read the Amazon RDS documentation.
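
As a sketch, replication is started from the destination Region with the AWS SDK for Python (boto3); the source ARN and account ID are hypothetical:

```python
import boto3

# Run this in the destination Region (Asia Pacific (Hyderabad),
# ap-south-2) to replicate automated backups from a source instance
# in Asia Pacific (Mumbai), ap-south-1.
rds = boto3.client("rds", region_name="ap-south-2")

rds.start_db_instance_automated_backups_replication(
    SourceDBInstanceArn="arn:aws:rds:ap-south-1:111122223333:db:prod-mysql",
    BackupRetentionPeriod=7,    # days to retain replicated backups
    SourceRegion="ap-south-1",  # lets boto3 presign the source-Region call
    # KmsKeyId="arn:aws:kms:ap-south-2:...",  # required for encrypted sources
)
```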

Read more


Amazon RDS Performance Insights now supports Data API for Aurora MySQL

Amazon RDS (Relational Database Service) Performance Insights now allows customers to monitor queries run through the RDS Data API for Aurora MySQL clusters. The RDS Data API provides an HTTP endpoint to run SQL statements on an Amazon Aurora DB cluster.

With this launch, customers are now able to use Performance Insights to monitor the impact of the queries run through the RDS Data API on their database performance. Additionally, customers can identify these queries and their related statistics by slicing the database load metric using the host name dimension, and filtering for 'RDS Data API'.
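
A hedged boto3 sketch of that slicing, assuming a hypothetical resource identifier; the dimension names follow the Performance Insights db.host dimension group:

```python
import boto3
from datetime import datetime, timedelta

pi = boto3.client("pi")

# Slice database load by host and filter for load generated by queries
# that arrived through RDS Data API.
now = datetime.utcnow()
response = pi.get_resource_metrics(
    ServiceType="RDS",
    Identifier="db-ABCDEFGHIJKL1234",  # hypothetical DbiResourceId
    StartTime=now - timedelta(hours=1),
    EndTime=now,
    PeriodInSeconds=60,
    MetricQueries=[{
        "Metric": "db.load.avg",
        "GroupBy": {"Group": "db.host"},
        "Filter": {"db.host.name": "RDS Data API"},
    }],
)
print(response["MetricList"])
```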

Amazon RDS Performance Insights is a database performance tuning and monitoring feature of RDS that allows you to visually assess the load on your database and determine when and where to take action. With one click in the Amazon RDS Management Console, you can add a fully-managed performance monitoring solution to your Amazon RDS database.

To learn more about RDS Performance Insights, read the Amazon RDS User Guide and visit Performance Insights pricing for pricing details and region availability.
 

Read more


amazon-rds-for-mysql

Amazon RDS Blue/Green Deployments support storage volume shrink

Amazon Relational Database Service (Amazon RDS) Blue/Green Deployments now supports the ability to shrink the storage volumes for your RDS database instances, allowing you to better utilize your storage resources and manage their costs. You can now increase and decrease your storage volume size based on anticipated application demands.

Previously, to shrink a storage volume, you had to manually create a new database instance with a smaller volume size, manually migrate the data from your current database to the newly created database instance, and switch database endpoints, often resulting in extended downtime. Blue/Green Deployments create a fully managed staging environment, or Green databases, with your specified storage size, and keep the Blue and Green databases in sync. With a few clicks, you can promote the Green databases to be the new production system in as fast as a minute, with no data loss and no changes to your application to switch database endpoints.

Amazon RDS Blue/Green Deployments support for storage volume shrink is available for Amazon RDS for PostgreSQL major versions 12 and higher, RDS for MySQL major versions 5.7 and higher, and Amazon RDS for MariaDB major versions 10.4 and higher.

In a few clicks, update your databases using Amazon RDS Blue/Green Deployments via the Amazon RDS Console. Learn more about RDS Blue/Green Deployments and the supported engine versions here.

Read more


amazon-rds-for-oracle

Amazon RDS for Oracle now supports M7i and R7i instance types

Amazon Relational Database Service (RDS) for Oracle now supports M7i and R7i database instance types. M7i and R7i are the latest Intel-based offerings and are available with a new maximum instance size of 48xlarge, which brings 50% more vCPU and memory than the maximum size of the M6i and R6i instance types.

M7i and R7i instances are available for Amazon RDS for Oracle in the Bring Your Own License model for both the Oracle Database Enterprise Edition (EE) and Oracle Database Standard Edition 2 (SE2) editions. You can launch the new database instances in the Amazon RDS Management Console or using the AWS CLI.

Amazon RDS for Oracle is a fully managed commercial database that makes it easy to set up, operate, and scale Oracle deployments in the cloud. To learn more about Amazon RDS for Oracle, read RDS for Oracle User Guide and visit Amazon RDS for Oracle Pricing for available instance configurations, pricing details, and region availability.
 

Read more


Amazon RDS for Oracle now supports October 2024 Release Update

Amazon Relational Database Service (Amazon RDS) for Oracle now supports the October 2024 Release Update (RU) for Oracle Database versions 19c and 21c.

To learn more about Oracle RUs supported on Amazon RDS for each engine version, see the Amazon RDS for Oracle Release notes. If the auto minor version upgrade (AmVU) option is enabled, your DB instance is upgraded to the latest quarterly RU six to eight weeks after it is made available by Amazon RDS for Oracle in your AWS Region. These upgrades will happen during the maintenance window. To learn more, see the Amazon RDS maintenance window documentation.

For more information about the AWS Regions where Amazon RDS for Oracle is available, see the AWS Region table.

Read more


amazon-rds-for-sql-server

Amazon RDS for SQL Server Supports Minor Versions in November 2024

New minor versions of Microsoft SQL Server are now available on Amazon RDS for SQL Server, providing performance enhancements and security fixes. Amazon RDS for SQL Server now supports these latest minor versions of SQL Server 2016, 2017, 2019 and 2022 across the Express, Web, Standard, and Enterprise editions.

We encourage you to upgrade your Amazon RDS for SQL Server database instances at your convenience. You can upgrade with just a few clicks in the Amazon RDS Management Console or by using the AWS CLI. Learn more about upgrading your database instances from the Amazon RDS User Guide. The new minor versions include:
 

  • SQL Server 2016 GDR for SP3 - 13.0.6455.2
  • SQL Server 2017 CU31 GDR - 14.0.3485.1
  • SQL Server 2019 CU29 GDR - 15.0.4410.1
  • SQL Server 2022 CU16 - 16.0.4165.4


These minor versions are available in all AWS commercial regions where Amazon RDS for SQL Server databases are available, as well as the AWS GovCloud (US) Regions.

Amazon RDS for SQL Server makes it simple to set up, operate, and scale SQL Server deployments in the cloud. See Amazon RDS for SQL Server Pricing for pricing details and regional availability.

Read more


Amazon RDS for SQL Server supports minor versions in October 2024

New minor versions of Microsoft SQL Server are now available on Amazon RDS for SQL Server, providing performance enhancements and security fixes. Amazon RDS for SQL Server now supports these latest minor versions of SQL Server 2016, 2017, 2019 and 2022 across the Express, Web, Standard, and Enterprise editions.

We encourage you to upgrade your Amazon RDS for SQL Server database instances at your convenience. You can upgrade with just a few clicks in the Amazon RDS Management Console or by using the AWS CLI. Learn more about upgrading your database instances from the Amazon RDS User Guide. The new minor versions include:

  • SQL Server 2016 SP3 GDR - 13.0.6450.1
  • SQL Server 2017 CU31 - 14.0.3480.1
  • SQL Server 2019 CU28 - 15.0.4395.2
  • SQL Server 2022 CU15 - 16.0.4150.1


These minor versions are available in all AWS commercial regions where Amazon RDS for SQL Server databases are available, as well as the AWS GovCloud (US) Regions.

Amazon RDS for SQL Server makes it simple to set up, operate, and scale SQL Server deployments in the cloud. See Amazon RDS for SQL Server Pricing for pricing details and regional availability.

Read more


amazon-redshift

Amazon SageMaker Lakehouse and Amazon Redshift support for zero-ETL integrations from eight applications

Amazon SageMaker Lakehouse and Amazon Redshift now support zero-ETL integrations from applications, automating the extraction and loading of data from eight applications, including Salesforce, SAP, ServiceNow, and Zendesk. As an open, unified, and secure lakehouse for your analytics and AI initiatives, Amazon SageMaker Lakehouse enhances these integrations to streamline your data management processes.

These zero-ETL integrations are fully managed by AWS and minimize the need to build ETL data pipelines. With this new zero-ETL integration, you can efficiently extract and load valuable data from your customer support, relationship management, and ERP applications into your data lake and data warehouse for analysis. Zero-ETL integration reduces users' operational burden and saves the weeks of engineering effort needed to design, build, and test data pipelines. By selecting a few settings in the no-code interface, you can quickly set up your zero-ETL integration to automatically ingest and continually maintain an up-to-date replica of your data in the data lake and data warehouse. Zero-ETL integrations help you focus on deriving insights from your application data, breaking down data silos in your organization and improving operational efficiency. You can now run enhanced analysis on your application data using Apache Spark and Amazon Redshift for analytics or machine learning, optimizing your data ingestion processes so you can focus instead on analysis and gaining insights.

Amazon SageMaker Lakehouse and Amazon Redshift support for zero-ETL integrations from eight applications is now generally available in the US East (N. Virginia), US East (Ohio), US West (Oregon), Asia Pacific (Hong Kong), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Europe (Frankfurt), Europe (Ireland), and Europe (Stockholm) Regions.

You can create and manage integrations using either the AWS Glue console, the AWS Command Line Interface (AWS CLI), or the AWS Glue APIs. To learn more, visit What is zero-ETL and What is AWS Glue.

Read more


Amazon Redshift multi-data warehouse writes through data sharing is now generally available

AWS announces the general availability of Amazon Redshift multi-data warehouse writes through data sharing. You can now start writing to Amazon Redshift databases from multiple Amazon Redshift data warehouses in just a few clicks. The written data is available to all Amazon Redshift warehouses as soon as it is committed. This allows your teams to flexibly scale compute by adding warehouses of different types and sizes based on their write workloads’ price-performance needs, isolate compute to more easily meet your workload performance requirements, and easily and securely collaborate with other teams.

With Amazon Redshift multi-data warehouse writes through data sharing, you can easily keep extract, load and transform (ETL) jobs more predictable by splitting workloads between multiple warehouses, helping you meet your workload performance requirements with less time and effort. You can track usage and control costs as each team or application can write using its own warehouse, regardless of where the data is stored. You can use different types of RA3 and Serverless warehouses across different sizes to meet each individual workload's price-performance needs. Your data is immediately available across AWS accounts and regions once committed, enabling better collaboration across your organization.

Amazon Redshift multi-warehouse writes through data sharing is available for RA3 provisioned clusters and Serverless workgroups in all AWS regions where Amazon Redshift data sharing is supported. To get started with Amazon Redshift multi-warehouse writes through data sharing, visit the documentation page.

Read more


Amazon Redshift announces support for Confluent Cloud and Apache Kafka

Amazon Redshift now supports streaming ingestion from Confluent Managed Cloud and self-managed Apache Kafka clusters on Amazon EC2 instances, expanding its capabilities beyond Amazon Kinesis Data Streams (KDS) and Amazon Managed Streaming for Apache Kafka (MSK).

With this update, customers can ingest data from a wider range of streaming sources directly into their Amazon Redshift data warehouses. Amazon Redshift introduces mTLS (mutual Transport Layer Security) as the authentication protocol for secure communication between Amazon Redshift and the newly supported Kafka streaming sources. This ensures that data ingestion from these new sources maintains the high security standards expected in enterprise data workflows. Additionally, a new SQL identifier 'KAFKA' has been introduced to simplify the identification of these newly supported Kafka sources in Amazon Redshift External Schema definitions.
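
As a sketch of what registering a self-managed Kafka cluster might look like from Python, using the 'KAFKA' identifier with mTLS authentication; the exact DDL clauses should be confirmed against the Redshift streaming ingestion documentation, and all identifiers, hosts, and ARNs here are hypothetical:

```python
import boto3

redshift_data = boto3.client("redshift-data")

# Hypothetical external schema DDL for a self-managed Apache Kafka
# cluster on EC2; the URI and AUTHENTICATION clauses are assumptions
# to verify against the streaming ingestion documentation.
ddl = """
CREATE EXTERNAL SCHEMA clickstream
FROM KAFKA
IAM_ROLE 'arn:aws:iam::111122223333:role/RedshiftStreamingRole'
URI 'b-1.mykafka.example.com:9094'
AUTHENTICATION mtls
AUTHENTICATION_ARN 'arn:aws:acm:us-east-1:111122223333:certificate/abc-123';
"""

redshift_data.execute_statement(
    WorkgroupName="analytics-wg",  # hypothetical Serverless workgroup
    Database="dev",
    Sql=ddl,
)
```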

You can start using this expanded streaming ingestion capability immediately, to build more comprehensive and flexible data pipelines that ingest data from various Kafka sources — those offered by AWS (Amazon MSK), those available from partners (Confluent Cloud) or those that are self-managed (Apache Kafka) on Amazon EC2.

To learn more and get started with streaming data into Amazon Redshift from any Kafka source, refer to the Amazon Redshift streaming documentation.

Read more


Amazon Redshift Query Editor V2 Increases Maximum Result Set and Export size to 100MB

AWS announces that Amazon Redshift Query Editor V2 now supports an increased maximum result set and export size of 100MB, with no row limit. Prior to this, the limit on query result sets was 5MB or 100,000 rows. This enhancement provides greater flexibility for you and your team to work with large datasets, enabling you to generate, analyze, and export more comprehensive data without previous constraints.

If you work with large datasets, such as security logs, gaming data, and other big data workloads, that require in-depth analysis, the previous 5MB or 100,000-row limit on result sets and exports often fell short of your needs, forcing you to piece together insights from multiple queries and downloads. With the new 100MB result set size and export capabilities in Amazon Redshift Query Editor, you can now generate a single, more complete view of your data, export it directly as a CSV or JSON file, and conduct richer analysis to drive better-informed business decisions.

The increased 100MB result set and export size capabilities for Amazon Redshift Query Editor V2 are available in all AWS commercial Regions. For more information about the AWS Regions where Redshift is available, please refer to the AWS Regions table.

To learn more, see the Amazon Redshift documentation.
 

Read more


Amazon Q generative SQL in Amazon Redshift Query Editor now available in additional AWS regions

Amazon Q generative SQL in Amazon Redshift Query Editor is now available in the AWS South America (São Paulo), Europe (London), and Canada (Central) Regions. Amazon Q generative SQL is available in Amazon Redshift Query Editor, an out-of-the-box web-based SQL editor for Amazon Redshift, to simplify SQL query authoring and increase your productivity by allowing you to express SQL queries in natural language and receive SQL code recommendations. Furthermore, it allows you to get insights faster without extensive knowledge of your organization’s complex Amazon Redshift database metadata.

Amazon Q generative SQL uses generative Artificial Intelligence (AI) to analyze user intent, SQL query patterns, and schema metadata to identify common SQL query patterns directly within Amazon Redshift, accelerating the SQL query authoring process for users, and reducing the time required to derive actionable data insights. Amazon Q generative SQL provides a conversational interface where users can submit SQL queries in natural language, within the scope of their current data permissions. For example, when you submit a question such as 'Find total revenue by region,' Amazon Q generative SQL will recognize and suggest the appropriate SQL code for this frequent query pattern by joining multiple Amazon Redshift tables, thus saving time and decreasing the likelihood of errors. You can either accept the query or enhance your prior query by asking additional questions.

To learn more about pricing, visit the Amazon Q Developer pricing page. See the documentation to get started.
 

Read more


Amazon Redshift to enhance security by changing default behavior

Security is the top priority at Amazon Web Services (AWS). To that end, Amazon Redshift is introducing enhanced security defaults that help you adhere to best practices in data security and reduce the risk of potential misconfigurations.

Three default security changes will take effect after January 10, 2025. First, public accessibility will be disabled by default for all newly created provisioned clusters and clusters restored from snapshots. By default, connections to clusters will only be permitted from client applications within the same Virtual Private Cloud (VPC). Second, database encryption will be enabled by default for provisioned clusters. When creating a provisioned cluster without specifying a KMS key, the cluster will automatically be encrypted with an AWS-owned key. Third, Amazon Redshift will enforce SSL connections by default for clients connecting to newly created provisioned and restored data warehouses. This default change will also apply to new serverless workgroups.

Please review your data warehouse creation configurations, scripts, and tools to make the changes needed to align with the new default settings before January 10, 2025, to avoid any potential disruption. You will still have the ability to modify cluster or workgroup settings to change the default behavior. Your existing data warehouses will not be impacted by these security enhancements. However, we recommend that you review and update your configurations to align with the new default security settings to further strengthen your security posture.

These new default changes will be implemented in all AWS regions where Amazon Redshift is available. For more information, please refer to our documentation.
 

Read more


Amazon S3 Access Grants now integrate with Amazon Redshift

Amazon S3 Access Grants now integrate with Amazon Redshift. S3 Access Grants map identities from your Identity Provider (IdP), such as Entra ID and Okta, to datasets stored in Amazon S3, helping you to easily manage data permissions at scale. This integration gives you the ability to manage S3 permissions for AWS IAM Identity Center users and groups when using Redshift, without the need to write and maintain bucket policies or individual IAM roles.

Using S3 Access Grants, you can grant permissions to buckets or prefixes in S3 to users and groups in your IdP by connecting S3 with IAM Identity Center. Then, when you use Identity Center authentication for Redshift, end users in the appropriate user groups will automatically have permission to read and write data in S3 using COPY, UNLOAD, and CREATE LIBRARY SQL commands. S3 Access Grants then automatically update S3 permissions as users are added and removed from user groups in the IdP.

Amazon S3 Access Grants with Amazon Redshift are available for users federated via IdP in all AWS Regions where AWS IAM Identity Center is available. For pricing details, visit Amazon S3 pricing and Amazon Redshift pricing. To learn more about S3 Access Grants, refer to the documentation.

Read more


Amazon Redshift Serverless higher base capacity of 1024 RPUs is now available in additional AWS regions

Amazon Redshift Serverless higher base capacity of up to 1024 Redshift Processing Units (RPUs) is now available in the AWS Europe (Frankfurt) and Europe (Ireland) regions. Amazon Redshift Serverless measures data warehouse capacity in RPUs, and you pay only for the duration of workloads run in RPU-hours on a per-second basis. Previously, the highest base capacity was 512 RPUs. With the new higher base capacity of 1024 RPUs, you now have even more flexibility to support workloads of large complexity, processing terabytes or petabytes in size to accelerate data loading and querying based on your price performance requirements. You now have a base capacity range from 8 to 1024 RPUs in the two additional AWS regions.

The larger base capacity of Amazon Redshift Serverless can improve performance for workloads serving use cases such as complex and long-running queries, queries with large numbers of columns, queries with joins and aggregations requiring high memory, data lake queries scanning large amounts of data, and ingestion of large datasets into the data warehouse.

To get started, see the Amazon Redshift Serverless feature page, user documentation, and API Reference.

Read more


Announcing AWS DMS Serverless improved Oracle to S3 full load throughput

AWS Database Migration Service Serverless (AWS DMSS) now offers improved throughput for Oracle to Amazon S3 full load migrations. With this enhancement, you can now migrate data from Oracle databases to S3 up to two times faster than previously possible with AWS DMSS.

AWS DMSS Oracle to Amazon S3 Full Load performance enhancements will be applied automatically whenever AWS DMSS detects a full load migration between an Oracle database and Amazon S3. For detailed information on these improvements, refer to the AWS DMSS enhanced throughput documentation.

To learn more, see the AWS DMS Full Load for Oracle databases documentation. For AWS DMS regional availability, please refer to the AWS Region Table.

Read more


Amazon Redshift Multi-AZ is generally available for RA3 clusters in 3 additional AWS regions

Amazon Redshift is announcing the general availability of Multi-AZ deployments for RA3 clusters in the Asia Pacific (Malaysia), Europe (London) and South America (Sao Paulo) AWS regions. Redshift Multi-AZ deployments support running your data warehouse in multiple AWS Availability Zones (AZ) simultaneously and continue operating in unforeseen failure scenarios. A Multi-AZ deployment raises the Amazon Redshift Service Level Agreement (SLA) to 99.99% and delivers a highly available data warehouse for the most demanding mission-critical workloads.

Enterprise customers with mission critical workloads require a data warehouse with fast failover times and simplified operations that minimizes impact to applications. Redshift Multi-AZ deployment helps meet these demands by reducing recovery time and automatically recovering in another AZ during an unlikely event such as an AZ failure. A Redshift Multi-AZ data warehouse also maximizes query processing throughput by operating in multiple AZs and using compute resources from both AZs to process read and write queries.

Amazon Redshift Multi-AZ is now generally available for RA3 clusters through the Redshift Console, API and CLI. For all regions where Multi-AZ is available, see the supported AWS regions.

To learn more about Amazon Redshift Multi-AZ, see the Amazon Redshift Reliability page and Amazon Redshift Multi-AZ documentation page.

Read more


AWS announces CSV result format support for Amazon Redshift Data API

Amazon Redshift Data API enables you to access data efficiently from Amazon Redshift data warehouses by eliminating the need to manage database drivers, connections, network configurations, data buffering, and more. Data API now supports the comma-separated values (CSV) result format, which provides flexibility in how you access and process data, allowing you to choose between JSON and CSV formats based on your application needs.

With CSV result format, you can now specify whether you want your query results formatted as JSON or CSV through the --result-format parameter when calling ExecuteStatement and BatchExecuteStatement APIs. To retrieve CSV results, use the new GetStatementResultV2 API which supports CSV results, while GetStatementResult API continues to support only JSON. If not specified, the default format remains JSON.
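
For example, a minimal sketch with Boto3, assuming a Redshift Serverless workgroup named "default"; polling of query status is elided:

    import boto3

    rsd = boto3.client("redshift-data")
    stmt = rsd.execute_statement(
        WorkgroupName="default",
        Database="dev",
        Sql="SELECT venueid, venuename FROM venue LIMIT 10",
        ResultFormat="CSV",  # request CSV instead of the default JSON
    )
    # ... wait until describe_statement reports FINISHED, then:
    result = rsd.get_statement_result_v2(Id=stmt["Id"])
    for record in result["Records"]:  # records arrive CSV-formatted
        print(record)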

CSV support with Data API is now generally available for both Amazon Redshift Provisioned and Amazon Redshift Serverless data warehouses in all AWS commercial Regions and the AWS GovCloud (US) Regions that support the Data API. To get started and learn more, visit the Amazon Redshift Database Developer Guide.

Read more


amazon-route-53

Introducing Amazon Route 53 Resolver DNS Firewall Advanced

Today, AWS announced Amazon Route 53 Resolver DNS Firewall Advanced, a new set of capabilities on Route 53 Resolver DNS Firewall that allow you to monitor and block suspicious DNS traffic associated with advanced DNS threats, such as DNS tunneling and Domain Generation Algorithms (DGAs). These threats are designed to avoid detection by threat intelligence feeds, or are difficult for threat intelligence feeds alone to track and block in time.

Today, Route 53 Resolver DNS Firewall helps you block DNS queries made for domains identified as low-reputation or suspected to be malicious, and to allow queries for trusted domains. With DNS Firewall Advanced, you can now enforce additional protections that monitor and block your DNS traffic in real-time based on anomalies identified in the domain names being queried from your VPCs. To get started, you can configure one or more DNS Firewall Advanced rules, specifying the type of threat (DGA or DNS tunneling) to inspect for. You can add the rules to a DNS Firewall rule group and enforce them on your VPCs by associating the rule group with each desired VPC directly or by using AWS Firewall Manager, AWS Resource Access Manager (RAM), AWS CloudFormation, or Route 53 Profiles.
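
As a hedged sketch with Boto3: the DnsThreatProtection and ConfidenceThreshold parameter names below are assumptions based on the feature description, so consult the Route 53 Resolver API reference for the exact request shape:

    import boto3

    r53r = boto3.client("route53resolver")
    r53r.create_firewall_rule(
        CreatorRequestId="dga-block-rule-1",
        FirewallRuleGroupId="rslvr-frg-EXAMPLE",  # placeholder rule group ID
        Priority=10,
        Action="BLOCK",
        BlockResponse="NODATA",
        Name="block-dga-domains",
        DnsThreatProtection="DGA",   # assumed parameter; or "DNS_TUNNELING"
        ConfidenceThreshold="HIGH",  # assumed parameter
    )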

Route 53 Resolver DNS Firewall Advanced is available in all AWS Regions, including the AWS GovCloud (US) Regions. To learn more about the new capabilities and the pricing, visit the Route 53 Resolver DNS Firewall webpage and the Route 53 pricing page. To get started, visit the Route 53 documentation.

Read more


amazon-s3

Amazon S3 Access Grants now integrate with AWS Glue

Amazon S3 Access Grants now integrate with AWS Glue for analytics, machine learning (ML), and application development workloads in AWS. S3 Access Grants map identities from your Identity Provider (IdP), such as Entra ID or Okta, or AWS Identity and Access Management (IAM) principals, to datasets stored in Amazon S3. This integration gives you the ability to manage S3 permissions for end users running jobs with Glue 5.0 or later, without the need to write and maintain bucket policies or individual IAM roles.

AWS Glue provides a data integration service that simplifies data exploration, preparation, and integration from multiple sources, including S3. Using S3 Access Grants, you can grant permissions to buckets or prefixes in S3 to users and groups in an existing corporate directory, or to IAM users and roles. When end users in the appropriate user groups access S3 using Glue ETL for Apache Spark, they will then automatically have the necessary permissions to read and write data. S3 Access Grants also automatically update S3 permissions as users are added and removed from user groups in the IdP.
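
For illustration, a sketch of creating a grant with Boto3, assuming an Access Grants instance and registered location already exist; the IDs are placeholders:

    import boto3

    s3control = boto3.client("s3control")
    s3control.create_access_grant(
        AccountId="111122223333",
        AccessGrantsLocationId="a1b2c3d4-5678-90ab-cdef-EXAMPLE11111",
        AccessGrantsLocationConfiguration={"S3SubPrefix": "analytics/*"},
        Grantee={
            "GranteeType": "DIRECTORY_GROUP",  # a group from your IdP
            "GranteeIdentifier": "identity-center-group-id",
        },
        Permission="READWRITE",
    )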

Amazon S3 Access Grants support is available when using AWS Glue 5.0 and later, and is available in all commercial AWS Regions where AWS Glue 5.0 and AWS IAM Identity Center are available. For pricing details, visit Amazon S3 pricing and AWS Glue pricing. To learn more about S3 Access Grants, refer to the S3 User Guide.
 

Read more


Announcing Amazon S3 Metadata (Preview) – Easiest and fastest way to manage your metadata

Amazon S3 Metadata is the easiest and fastest way to help you instantly discover and understand your S3 data with automated, easily queried metadata that updates in near real-time. This helps you to curate, identify, and use your S3 data for business analytics, real-time inference applications, and more. S3 Metadata supports object metadata, which includes system-defined details like the size and source of the object, and custom metadata, which allows you to use tags to annotate your objects with information like product SKU, transaction ID, or content rating.

S3 Metadata is designed to automatically capture metadata from objects as they are uploaded into a bucket, and to make that metadata queryable in a read-only table. As data in your bucket changes, S3 Metadata updates the table within minutes to reflect the latest changes. These metadata tables are stored in S3 Tables, the new S3 storage offering optimized for tabular data. S3 Tables integration with AWS Glue Data Catalog is in preview, allowing you to stream, query, and visualize data—including S3 Metadata tables—using AWS Analytics services such as Amazon Data Firehose, Athena, Redshift, EMR, and QuickSight. Additionally, S3 Metadata integrates with Amazon Bedrock, allowing for the annotation of AI-generated videos with metadata that specifies their AI origin, creation timestamp, and the specific model used for generation.
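
A hedged sketch of enabling a metadata table on a bucket with Boto3; the method and parameter shapes follow the preview API and may change, and all names and ARNs are placeholders:

    import boto3

    s3 = boto3.client("s3")
    s3.create_bucket_metadata_table_configuration(
        Bucket="amzn-s3-demo-bucket",
        MetadataTableConfiguration={
            "S3TablesDestination": {
                "TableBucketArn": "arn:aws:s3tables:us-east-1:111122223333:bucket/my-table-bucket",
                "TableName": "demo_bucket_metadata",
            }
        },
    )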

Amazon S3 Metadata is currently available in preview in the US East (N. Virginia), US East (Ohio), and US West (Oregon) Regions, and coming soon to additional Regions. For pricing details, visit the S3 pricing page. To learn more, visit the product page, documentation, and AWS News Blog.

Read more


Announcing Amazon S3 Tables – Fully managed Apache Iceberg tables optimized for analytics workloads

Amazon S3 Tables deliver the first cloud object store with built-in Apache Iceberg support, and the easiest way to store tabular data at scale. S3 Tables are specifically optimized for analytics workloads, resulting in up to 3x faster query throughput and up to 10x higher transactions per second compared to self-managed tables. With S3 Tables support for the Apache Iceberg standard, your tabular data can be easily queried by popular AWS and third-party query engines. Additionally, S3 Tables are designed to perform continual table maintenance to automatically optimize query efficiency and storage cost over time, even as your data lake scales and evolves. S3 Tables integration with AWS Glue Data Catalog is in preview, allowing you to stream, query, and visualize data—including S3 Metadata tables—using AWS Analytics services such as Amazon Data Firehose, Athena, Redshift, EMR, and QuickSight.

S3 Tables introduce table buckets, a new bucket type that is purpose-built to store tabular data. With table buckets, you can quickly create tables and set up table-level permissions to manage access to your data lake. You can then load and query data in your tables with standard SQL, and take advantage of Apache Iceberg’s advanced analytics capabilities such as row-level transactions, queryable snapshots, schema evolution, and more. Table buckets also provide policy-driven table maintenance, helping you to automate operational tasks such as compaction, snapshot management, and unreferenced file removal.
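
For illustration, a sketch of the table-bucket workflow with Boto3; bucket, namespace, and table names are placeholders:

    import boto3

    s3tables = boto3.client("s3tables")
    bucket = s3tables.create_table_bucket(name="analytics-tables")
    s3tables.create_namespace(
        tableBucketARN=bucket["arn"],
        namespace=["sales"],
    )
    s3tables.create_table(
        tableBucketARN=bucket["arn"],
        namespace="sales",
        name="daily_orders",
        format="ICEBERG",  # S3 Tables store data as Apache Iceberg tables
    )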

Amazon S3 Tables are now available in the US East (N. Virginia), US East (Ohio), and US West (Oregon) Regions, and coming soon to additional Regions. For pricing details, visit the S3 pricing page. To learn more, visit the product page, documentation, and AWS News Blog.

Read more


Amazon S3 adds new default data integrity protections

Amazon S3 updates the default behavior of object upload requests with new data integrity protections that build upon S3’s existing durability posture. The latest AWS SDKs now automatically calculate CRC-based checksums for uploads as data is transmitted over the network. S3 independently verifies these checksums and accepts objects after confirming that data integrity was maintained in transit over the public internet. Additionally, S3 now stores a CRC-based whole-object checksum in object metadata, even for multipart uploads, which helps you to verify the integrity of an object stored in S3 at any time.

S3 has always validated the integrity of object uploads from the S3 API to storage by calculating MD5 checksums, and has allowed customers to provide their own pre-calculated MD5 checksums for integrity validation. S3 also supports five additional checksum algorithms (CRC64NVME, CRC32, CRC32C, SHA-1, and SHA-256) for integrity validation on upload and download. Using checksums for data validation is a best practice for data durability, and this new default behavior adds additional data integrity protections with no changes to your applications and at no additional cost.
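
For example, with Boto3 you can request a specific algorithm on upload and re-verify the stored checksum later; bucket and key are placeholders:

    import boto3

    s3 = boto3.client("s3")
    s3.put_object(
        Bucket="amzn-s3-demo-bucket",
        Key="reports/2024-12.csv",
        Body=b"col1,col2\n1,2\n",
        ChecksumAlgorithm="CRC32C",  # S3 verifies this checksum on receipt
    )
    # Retrieve the stored checksum at any time to re-verify integrity.
    attrs = s3.get_object_attributes(
        Bucket="amzn-s3-demo-bucket",
        Key="reports/2024-12.csv",
        ObjectAttributes=["Checksum"],
    )
    print(attrs["Checksum"])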

Default checksum protections are rolling out across all AWS Regions in the next few weeks. To get started, you can use the AWS Management Console or the latest AWS SDKs to upload objects. To learn more about checksums in S3, visit the AWS News Blog and the S3 User Guide.

Read more


Storage Browser for Amazon S3 is now generally available

Amazon S3 is announcing the general availability of Storage Browser for S3, an open source component that you can add to your web applications to provide your end users with a simple interface for data stored in S3. With Storage Browser for S3, you can provide authorized end users, such as customers, partners, and employees, with access to easily browse, download, and upload data in S3 directly from your own applications. Storage Browser for S3 is available in the AWS Amplify React and JavaScript client libraries.

With the general availability of Storage Browser for S3, your end users can now search for their data based on file name and can copy and delete data they have access to. Additionally, Storage Browser for S3 now automatically calculates checksums of the data your end users upload and blocks requests that do not pass these durability checks.

We welcome your contributions and feedback on our roadmap, which outlines the plan for adding new capabilities to Storage Browser for S3. Storage Browser for S3 is backed by AWS Support, which means customers with AWS Business and Enterprise Support plans get 24/7 access to cloud support engineers. To learn more and get started, visit the AWS News Blog and the UI documentation.
 

Read more


Announcing AWS Transfer Family web apps

AWS Transfer Family web apps are a new resource that you can use to create a simple interface for accessing your data in Amazon S3 through a web browser. With Transfer Family web apps, you can provide your workforce with a fully managed, branded, and secure portal for your end users to browse, upload, and download data in S3.

Transfer Family offers fully managed file transfers over SFTP, FTPS, FTP, and AS2, enabling seamless workload migrations with no need to change your third-party clients or their configurations. Now, you can also enable browser-based transfers for non-technical users in your organization through a user-friendly interface. Transfer Family web apps are integrated with AWS IAM Identity Center and S3 Access Grants, enabling fine-grained access controls that map corporate identities in your existing directories directly to S3 datasets. With a few clicks in the Transfer Family console, you can generate a shareable URL for your web app. Then, your authenticated users can start accessing data you authorize them to access through their web browsers.

Transfer Family web apps are available in select AWS Regions. You can get started with Transfer Family web apps in the Transfer Family console. For pricing, visit the Transfer Family pricing page. To learn more, read the AWS News Blog or visit the Transfer Family User Guide.
 

Read more


Amazon S3 launches storage classes for AWS Dedicated Local Zones

You can now use the Amazon S3 Express One Zone and S3 One Zone-Infrequent Access storage classes in AWS Dedicated Local Zones. Dedicated Local Zones are a type of AWS infrastructure that is fully managed by AWS, built for exclusive use by you or your community, and placed in a location or data center specified by you to help you comply with regulatory requirements.

In Dedicated Local Zones, these storage classes are purpose-built to store data in a specific data perimeter, helping to support your data isolation and data residency use cases. To learn more, visit the S3 User Guide.

Read more


Amazon S3 now supports enforcement of conditional write operations for S3 general purpose buckets

Amazon S3 now supports enforcement of conditional write operations for S3 general purpose buckets using bucket policies. With enforcement of conditional writes, you can now mandate that S3 check the existence of an object before creating it in your bucket. Similarly, you can also mandate that S3 check the state of the object’s content before updating it in your bucket. This helps you to simplify distributed applications by preventing unintentional data overwrites, especially in high-concurrency, multi-writer scenarios.

To enforce conditional write operations, you can now use s3:if-none-match or s3:if-match condition keys to write a bucket policy that mandates the use of HTTP if-none-match or HTTP if-match conditional headers in S3 PutObject and CompleteMultipartUpload API requests. With this bucket policy in place, any attempt to write an object to your bucket without the required conditional header will be rejected. You can use this to centrally enforce the use of conditional writes across all the applications that write to your bucket.
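
As a sketch of one way such a policy might look, using the Null condition operator to deny PutObject requests that omit the if-none-match header; the bucket name is a placeholder:

    import boto3, json

    policy = {
        "Version": "2012-10-17",
        "Statement": [{
            "Sid": "RequireConditionalWrites",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:PutObject",
            "Resource": "arn:aws:s3:::amzn-s3-demo-bucket/*",
            # Deny any PutObject request that lacks the conditional header.
            "Condition": {"Null": {"s3:if-none-match": "true"}},
        }],
    }
    boto3.client("s3").put_bucket_policy(
        Bucket="amzn-s3-demo-bucket", Policy=json.dumps(policy)
    )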

You can use bucket policies to enforce conditional writes at no additional charge in all AWS Regions. You can use the AWS SDK, API, or CLI to perform conditional writes. To learn more about conditional writes, visit the S3 User Guide.

Read more


Amazon S3 adds new functionality for conditional writes

Amazon S3 can now perform conditional writes that evaluate if an object is unmodified before updating it. This helps you coordinate simultaneous writes to the same object and prevents multiple concurrent writers from unintentionally overwriting the object without knowing the state of its content. You can use this capability by providing the ETag of an object using S3 PutObject or CompleteMultipartUpload API requests in both S3 general purpose and directory buckets.

Conditional writes simplify how distributed applications with multiple clients concurrently update data across shared datasets. Similar to using the HTTP if-none-match conditional header to check for the existence of an object before creating it, clients can now perform conditional-write checks on an object’s ETag, which reflects changes to the object, by specifying it via the HTTP if-match header in the API request. S3 then evaluates if the object's ETag matches the value provided in the API request before committing the write and prevents your clients from overwriting the object until the condition is satisfied. This new conditional header can help improve the efficiency of your large-scale analytics, distributed machine learning, and other highly parallelized workloads by reliably offloading compare and swap operations to S3.
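
For illustration, a compare-and-swap sketch with Boto3: write only if the object is unchanged since it was last read; bucket and key are placeholders:

    import boto3
    from botocore.exceptions import ClientError

    s3 = boto3.client("s3")
    head = s3.head_object(Bucket="amzn-s3-demo-bucket", Key="state.json")
    try:
        s3.put_object(
            Bucket="amzn-s3-demo-bucket",
            Key="state.json",
            Body=b'{"version": 2}',
            IfMatch=head["ETag"],  # commit only if the ETag still matches
        )
    except ClientError as err:
        if err.response["Error"]["Code"] == "PreconditionFailed":
            pass  # another writer won the race; re-read and retry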

This new conditional-write functionality is available at no additional charge in all AWS Regions. You can use the AWS SDK, API, or CLI to perform conditional writes. To learn more about conditional writes, visit the S3 User Guide.

Read more


Amazon S3 Express One Zone now supports conditional deletes

Amazon S3 Express One Zone, a high-performance S3 storage class for latency-sensitive applications, can now evaluate whether an object is unchanged before deleting it. This conditional delete capability helps you improve data durability and reduce errors from accidental deletions in high-concurrency, multiple-writer scenarios.

Conditional writes simplify how distributed applications with multiple clients concurrently update data across shared datasets, helping to prevent unintentional overwrites. Now, in directory buckets, clients can perform conditional delete checks on an object’s last modified time, size, and ETag using the x-amz-if-match-last-modified-time, x-amz-if-match-size, and HTTP if-match headers, respectively, in the DeleteObject and DeleteObjects API. S3 Express One Zone then evaluates if each of these object attributes matches the value provided in these headers and prevents your clients from deleting the object until the condition is satisfied. You can use these headers together or individually in a delete request to reliably offload object-state evaluation to S3 Express One Zone and efficiently secure your distributed and highly parallelized workloads against unintended deletions.
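
As a sketch with Boto3, assuming a directory bucket (names are placeholders), the delete succeeds only if the object's state still matches all three attributes:

    import boto3

    s3 = boto3.client("s3")
    bucket = "amzn-s3-demo-bucket--usw2-az1--x-s3"  # a directory bucket
    head = s3.head_object(Bucket=bucket, Key="logs/segment-01")
    s3.delete_object(
        Bucket=bucket,
        Key="logs/segment-01",
        IfMatch=head["ETag"],
        IfMatchLastModifiedTime=head["LastModified"],
        IfMatchSize=head["ContentLength"],
    )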

S3 Express One Zone support for conditional deletes is available at no additional charge in all AWS Regions where the storage class is available. You can use the S3 API, SDKs, and CLI to perform conditional deletes. To learn more, visit the S3 documentation.
 

Read more


Amazon S3 Express One Zone is now available in three additional AWS Regions

The Amazon S3 Express One Zone storage class is now available in three additional AWS Regions: Asia Pacific (Mumbai), Europe (Ireland), and US East (Ohio).

S3 Express One Zone is a high-performance, single-Availability Zone storage class purpose-built to deliver consistent single-digit millisecond data access for your most frequently accessed data and latency-sensitive applications. S3 Express One Zone delivers data access speed up to 10x faster and request costs up to 50% lower than S3 Standard. It enables workloads such as machine learning training, interactive analytics, and media content creation to achieve single-digit millisecond data access speed with high durability and availability.

S3 Express One Zone is now generally available in seven AWS Regions. For information on AWS service and AWS Partner integrations with S3 Express One Zone, visit the S3 Express One Zone integrations page. To learn more about S3 Express One Zone, visit the S3 User Guide.

Read more


Amazon S3 Express One Zone now supports the ability to append data to an object

Amazon S3 Express One Zone now supports the ability to append data to an object. For the first time, applications can add data to an existing object in S3.

Applications that continuously receive data over a period of time need the ability to add data to existing objects. For example, log-processing applications continuously add new log entries to the end of existing log files. Similarly, media-broadcasting applications add new video segments to video files as they are transcoded and then immediately stream the video to viewers. Previously, these applications needed to combine data in local storage before copying the final object to S3. Now, applications can directly append new data to existing objects and then immediately read the object, all within S3 Express One Zone.
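
For illustration, a sketch of appending with Boto3 by writing at the offset equal to the object's current size; the directory bucket name is a placeholder:

    import boto3

    s3 = boto3.client("s3")
    bucket = "amzn-s3-demo-bucket--usw2-az1--x-s3"  # a directory bucket
    head = s3.head_object(Bucket=bucket, Key="app.log")
    s3.put_object(
        Bucket=bucket,
        Key="app.log",
        Body=b"new log entry\n",
        WriteOffsetBytes=head["ContentLength"],  # append at the current end
    )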

You can append data to objects in S3 Express One Zone in all AWS Regions where the storage class is available. You can get started using the AWS SDK, the AWS CLI, or Mountpoint for Amazon S3 (version 1.12.0 or higher). To learn more, visit the S3 User Guide.

Read more


Amazon S3 Express One Zone now supports S3 Lifecycle expirations

Amazon S3 Express One Zone, a high-performance S3 storage class for latency-sensitive applications, now supports object expiration using S3 Lifecycle. S3 Lifecycle can expire objects based on age to help you automatically optimize storage costs.

Now, you can configure S3 Lifecycle rules for S3 Express One Zone to expire objects on your behalf. You can configure an S3 Lifecycle expiration rule either for your entire bucket or for a subset of objects by filtering by prefix or object size. For example, you can create an S3 Lifecycle rule that expires all objects smaller than 512 KB after 3 days and another rule that expires all objects in a prefix after 10 days. Additionally, S3 Lifecycle logs S3 Express One Zone object expirations in AWS CloudTrail, giving you the ability to monitor, set alerts for, and audit them.
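
The two example rules above might be expressed with Boto3 as follows; the bucket name is a placeholder:

    import boto3

    boto3.client("s3").put_bucket_lifecycle_configuration(
        Bucket="amzn-s3-demo-bucket--usw2-az1--x-s3",
        LifecycleConfiguration={
            "Rules": [
                {   # expire objects smaller than 512 KB after 3 days
                    "ID": "expire-small-objects",
                    "Status": "Enabled",
                    "Filter": {"ObjectSizeLessThan": 512 * 1024},
                    "Expiration": {"Days": 3},
                },
                {   # expire everything under a prefix after 10 days
                    "ID": "expire-staging-prefix",
                    "Status": "Enabled",
                    "Filter": {"Prefix": "staging/"},
                    "Expiration": {"Days": 10},
                },
            ]
        },
    )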

Amazon S3 Express One Zone support for S3 Lifecycle expiration is generally available in all AWS Regions where the storage class is available. You can get started with S3 Lifecycle using the Amazon S3 REST API, AWS Command Line Interface (CLI), or AWS Software Development Kit (SDK) client. To learn more about S3 Lifecycle, visit the S3 User Guide.

Read more


Mountpoint for Amazon S3 now supports a high performance shared cache

You can now use Amazon S3 Express One Zone as a high performance read cache with Mountpoint for Amazon S3. The cache can be shared by multiple compute instances and can elastically scale to any dataset size. Mountpoint for S3 is a file client that translates local file system API calls to REST API calls on S3 objects. With this launch, Mountpoint for S3 can cache data in S3 Express One Zone after it’s read, making subsequent read requests up to 7x faster compared to reading data from S3 Standard.

Previously, Mountpoint for S3 could cache recently accessed data in Amazon EC2 instance storage, EC2 instance memory, or an Amazon EBS volume. This improved performance for repeated read access from the same compute instance for dataset sizes up to the size of the available local storage. Starting today, you can also opt in to caching data in S3 Express One Zone, benefiting applications that repeatedly read a shared dataset across multiple compute instances, without any limits on the total dataset size. Once you opt in, Mountpoint for S3 retains objects with sizes up to one megabyte in S3 Express One Zone. This is ideal for compute-intensive use cases such as machine learning training for computer vision models where applications repeatedly read millions of small images from multiple instances.

Mountpoint for Amazon S3 is an open source project backed by AWS support, which means customers with AWS Business and Enterprise Support plans get 24/7 access to cloud support engineers. To get started, visit the GitHub page and product page.

Read more


AWS Backup for Amazon S3 adds new restore parameter

AWS Backup introduces a new restore parameter for Amazon S3 backups, offering you the ability to choose how many versions of an object to restore.

By default, AWS Backup restores only the latest version of objects from the version stack at any point in time. The new parameter now allows you to recover all versions of your data by restoring the entire version stack. You can also recover just the latest versions of an object without the overhead of restoring all older versions. With this feature, you have more flexibility to control the data recovery process for Amazon S3 buckets or prefixes from your Amazon S3 backups, tailoring restore jobs to your requirements.

This feature is available in all Regions where AWS Backup for Amazon S3 is available. For more information on Regional availability and pricing, see the AWS Backup pricing page.

To learn more about AWS Backup for Amazon S3, visit the product page and technical documentation. To get started, visit the AWS Backup console.
 

Read more


AWS Lambda supports Amazon S3 as a failed-event destination for asynchronous and stream event sources

AWS Lambda now supports Amazon S3 as a failed-event destination for asynchronous invocations, and for Amazon Kinesis and Amazon DynamoDB event source mappings (ESMs). This enables customers to route the failed batch of records and function execution results to S3 using a simple configuration, without the overhead of writing and managing additional code.

Customers building event-driven applications with asynchronous event sources or stream event sources for Lambda can configure services like Amazon Simple Queue Service (SQS) and Amazon Simple Notification Service (SNS) as failed-event destinations to store the results of failed invocations. However, in scenarios where existing failed-event destinations do not support the payload size requirements for the failed events, customers need to write custom logic to retrieve and redrive event payload data. With today’s announcement, customers can configure S3 as a failed-event destination for Lambda functions invoked via asynchronous invocations, Kinesis ESMs, and DynamoDB ESMs. This enables customers to deliver complete event payload data to the failed-event destination, and helps reduce the overhead of managing custom logic to reliably retrieve and redrive failed event data.
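
For example, a sketch of routing failed asynchronous invocations of a function to an S3 bucket with Boto3; the function name and bucket ARN are placeholders:

    import boto3

    boto3.client("lambda").put_function_event_invoke_config(
        FunctionName="order-processor",
        MaximumRetryAttempts=2,
        DestinationConfig={
            # The full failed-event payload is delivered to this bucket.
            "OnFailure": {"Destination": "arn:aws:s3:::amzn-s3-demo-failed-events"}
        },
    )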

This feature is generally available in all AWS Commercial Regions where AWS Lambda and the configured event source or event destination are available.

To enable S3 as a failed-event destination, refer to our documentation for configuring destinations with asynchronous invocations, Kinesis ESMs, and DynamoDB ESMs. This feature incurs no additional charge to use. You pay for charges associated with Amazon S3 usage.

Read more


Amazon Data Firehose supports continuous replication of database changes to Apache Iceberg Tables in Amazon S3

Amazon Data Firehose now enables capture and replication of database changes to Apache Iceberg Tables in Amazon S3 (Preview). This new feature allows customers to easily stream real-time data from MySQL and PostgreSQL databases directly into Apache Iceberg Tables.

Firehose is a fully managed, serverless streaming service that enables customers to capture, transform, and deliver data streams into Amazon S3, Amazon Redshift, OpenSearch, Splunk, Snowflake, and other destinations for analytics. With this functionality, Firehose performs an initial complete data copy from selected database tables, then continuously streams Change Data Capture (CDC) updates to reflect inserts, updates, and deletions in the Apache Iceberg Tables. This streamlined solution eliminates complex data pipeline setups while minimizing impact on database transaction performance.

Key capabilities include:

• Automatic creation of Apache Iceberg Tables matching source database schemas
• Automatic schema evolution in response to source changes
• Selective replication of specific databases, tables, and columns

This preview feature is available in all AWS regions except China, AWS GovCloud (US), and Asia Pacific (Malaysia) Regions. For terms and conditions, see Beta Service Participation in AWS Service Terms.

To get started, visit Amazon Data Firehose documentation and console.

To learn more about this feature, visit this AWS blog post.

Read more


AWS Organizations member accounts can now regain access to accidentally locked Amazon S3 buckets

AWS Organizations member accounts can now use a simple process through AWS Identity and Access Management (IAM) to regain access to accidentally locked Amazon S3 buckets. With this capability, you can repair misconfigured S3 bucket policies while improving your organization’s security and compliance posture.

IAM now provides centralized management of long-term root credentials, helping you prevent unintended access and improving your account security at scale in your organization. You can also perform a curated set of root-only tasks, using short-lived and privileged root sessions. For example, you can centrally delete an S3 bucket policy in just a few steps. First, navigate to the Root access management page in the IAM console, select an account, and choose Take privileged action. Next, select Delete bucket policy and select your chosen S3 bucket.
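
The same steps can be scripted; a hedged sketch with Boto3, where the account ID, bucket name, and the exact root-task policy name are placeholders to verify against the IAM documentation:

    import boto3

    sts = boto3.client("sts")
    session = sts.assume_root(
        TargetPrincipal="111122223333",  # the locked member account
        TaskPolicyArn={"arn": "arn:aws:iam::aws:policy/root-task/S3UnlockBucketPolicy"},
    )
    creds = session["Credentials"]
    s3 = boto3.client(
        "s3",
        aws_access_key_id=creds["AccessKeyId"],
        aws_secret_access_key=creds["SecretAccessKey"],
        aws_session_token=creds["SessionToken"],
    )
    s3.delete_bucket_policy(Bucket="amzn-s3-demo-locked-bucket")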

AWS Organizations member accounts can use this capability in all AWS Regions, including the AWS GovCloud (US) Regions and AWS China Regions. Customers can use this new capability via the IAM console or programmatically using the AWS CLI or SDK. For more information, visit the AWS News Blog and IAM documentation.

Read more


Amazon S3 Access Grants now integrate with Amazon Redshift

Amazon S3 Access Grants now integrate with Amazon Redshift. S3 Access Grants map identities from your Identity Provider (IdP), such as Entra ID and Okta, to datasets stored in Amazon S3, helping you to easily manage data permissions at scale. This integration gives you the ability to manage S3 permissions for AWS IAM Identity Center users and groups when using Redshift, without the need to write and maintain bucket policies or individual IAM roles.

Using S3 Access Grants, you can grant permissions to buckets or prefixes in S3 to users and groups in your IdP by connecting S3 with IAM Identity Center. Then, when you use Identity Center authentication for Redshift, end users in the appropriate user groups will automatically have permission to read and write data in S3 using COPY, UNLOAD, and CREATE LIBRARY SQL commands. S3 Access Grants then automatically update S3 permissions as users are added and removed from user groups in the IdP.

Amazon S3 Access Grants with Amazon Redshift are available for users federated via IdP in all AWS Regions where AWS IAM Identity Center is available. For pricing details, visit Amazon S3 pricing and Amazon Redshift pricing. To learn more about S3 Access Grants, refer to the documentation.

Read more


Amazon S3 now supports up to 1 million buckets per AWS account

Amazon S3 has increased the default bucket quota from 100 to 10,000 per AWS account. Additionally, any customer can request a quota increase up to 1 million buckets. As a result, customers can create new buckets for individual datasets that they store in S3 to more easily take advantage of capabilities such as default encryption, security policies, S3 Replication, and more to remove barriers to scaling and optimize their S3 storage architecture.

Amazon S3’s new default bucket quota of 10,000 buckets is now applied to all AWS accounts and requires no action by customers. To increase your bucket quota from 10,000 to up to 1 million buckets, simply request a quota increase via Service Quotas. You can create your first 2,000 buckets at no cost. Above 2,000 buckets, you are charged a small monthly fee.

The increased default general purpose bucket limit per account now applies to all AWS Regions. To learn more about general purpose bucket quotas, visit the S3 User Guide.
 

Read more


Amazon S3 Access Grants is now available in the AWS Canada West (Calgary) Region

You can now create Amazon S3 Access Grants in the AWS Canada West (Calgary) Region.

Amazon S3 Access Grants map identities in directories such as Microsoft Entra ID, or AWS Identity and Access Management (IAM) principals, to datasets in S3. This helps you manage data permissions at scale by automatically granting S3 access to end users based on their corporate identity.

To learn more about Amazon S3 Access Grants, visit our product detail page, and see the S3 Access Grants Region Table for complete regional availability information.
 

Read more


amazon-sagemaker

SageMaker SDK enhances training and inference workflows

Today, we are introducing the new ModelTrainer class and enhancing the ModelBuilder class in the SageMaker Python SDK. These updates streamline training workflows and simplify inference deployments.

The ModelTrainer class enables customers to easily set up and customize distributed training strategies on Amazon SageMaker. This new feature accelerates model training times, optimizes resource utilization, and reduces costs through efficient parallel processing. Customers can smoothly transition their custom entry points and containers from a local environment to SageMaker, eliminating the need to manage infrastructure. ModelTrainer simplifies configuration by reducing parameters to just a few core variables and providing user-friendly classes for intuitive SageMaker service interactions. Additionally, with the enhanced ModelBuilder class, customers can now easily deploy HuggingFace models, switch between developing in a local environment and on SageMaker, and customize inference using their own pre- and post-processing scripts. Importantly, customers can now pass trained model artifacts from the ModelTrainer class directly to the ModelBuilder class, enabling a seamless transition from training to inference on SageMaker.
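
For illustration, a hedged sketch of the ModelTrainer workflow; the container image, paths, and job name are placeholders, and exact module locations may vary by SDK version:

    from sagemaker.modules.train import ModelTrainer
    from sagemaker.modules.configs import SourceCode

    trainer = ModelTrainer(
        # A placeholder training container image URI.
        training_image="763104351884.dkr.ecr.us-west-2.amazonaws.com/pytorch-training:2.4-gpu-py311",
        source_code=SourceCode(source_dir="./src", entry_script="train.py"),
        base_job_name="my-model-trainer",
    )
    trainer.train()  # launches the training job on SageMaker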

You can learn more about the ModelTrainer class here, the ModelBuilder enhancements here, and get started using the ModelTrainer and ModelBuilder sample notebooks.

Read more


Amazon SageMaker introduces new capabilities to accelerate scaling of Generative AI Inference

We are excited to announce two new capabilities in SageMaker Inference that significantly enhance the deployment and scaling of generative AI models: Container Caching and Fast Model Loader. These innovations address critical challenges in scaling large language models (LLMs) efficiently, enabling faster response times to traffic spikes and more cost-effective scaling. By reducing model loading times and accelerating autoscaling, these features allow customers to improve the responsiveness of their generative AI applications as demand fluctuates, particularly benefiting services with dynamic traffic patterns.

Container Caching dramatically reduces the time required to scale generative AI models for inference by pre-caching container images. This eliminates the need to download them when scaling up, resulting in significant reduction in scaling time for generative AI model endpoints. Fast Model Loader streams model weights directly from Amazon S3 to the accelerator, loading models much faster compared to traditional methods. These capabilities allow customers to create more responsive auto-scaling policies, enabling SageMaker to add new instances or model copies quickly when defined thresholds are reached, thus maintaining optimal performance during traffic spikes while at the same time managing costs effectively.

These new capabilities are available in all AWS Regions where Amazon SageMaker Inference is available. To learn more, see our documentation for detailed implementation guidance.
 

Read more


AWS announces Amazon SageMaker Partner AI Apps

Today Amazon Web Services, Inc. (AWS) announced the general availability of Amazon SageMaker partner AI apps, a new capability that enables customers to easily discover, deploy, and use best-in-class machine learning (ML) and generative AI (GenAI) development applications from leading app providers privately and securely, all without leaving Amazon SageMaker AI so they can develop performant AI models faster.

Until today, integrating purpose-built GenAI and ML development applications that provide specialized capabilities for a variety of model development tasks required a considerable amount of effort. Beyond the need to invest time and effort in due diligence to evaluate existing offerings, customers had to perform undifferentiated heavy lifting in deploying, managing, upgrading, and scaling these applications. Furthermore, to adhere to rigorous security and compliance protocols, organizations need their data to stay within the confines of their security boundaries without needing to move their data elsewhere, for example, to a Software as a Service (SaaS) application. Finally, the resulting developer experience is often fragmented, with developers having to switch back and forth between multiple disjointed interfaces. With SageMaker partner AI apps, you can quickly subscribe to a partner solution and seamlessly integrate the app with your SageMaker development environment. SageMaker partner AI apps are fully managed and run privately and securely in your SageMaker environment, reducing the risk of data and model exfiltration.

At launch, you will be able to boost your team’s productivity and reduce time to market by enabling: Comet, to track, visualize, and manage experiments for AI model development; Deepchecks, to evaluate quality and compliance for AI models; Fiddler, to validate, monitor, analyze, and improve AI models in production; and, Lakera, to protect AI applications from security threats such as prompt attacks, data loss and inappropriate content.

SageMaker partner AI apps are available in all currently supported Regions except the AWS GovCloud (US) Regions. To learn more, please visit the SageMaker partner AI apps developer guide.
 

Read more


Amazon SageMaker HyperPod now provides flexible training plans

Amazon SageMaker HyperPod announces flexible training plans, a new capability that allows you to train generative AI models within your timelines and budgets. Gain predictable model training timelines and run training workloads within your budget requirements, while continuing to benefit from features of SageMaker HyperPod such as resiliency, performance-optimized distributed training, and enhanced observability and monitoring. 

In a few quick steps, you can specify your preferred compute instances, desired amount of compute resources, duration of your workload, and preferred start date for your generative AI model training. SageMaker then helps you create the most cost-efficient training plans, reducing time to train your model by weeks. Once you create and purchase your training plans, SageMaker automatically provisions the infrastructure and runs the training workloads on these compute resources without requiring any manual intervention. SageMaker also automatically takes care of pausing and resuming training between gaps in compute availability, as the plan switches from one capacity block to another. If you wish to remove all the heavy lifting of infrastructure management, you can also create and run training plans using SageMaker fully managed training jobs.  
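
A hedged sketch of that flow with Boto3, using the SearchTrainingPlanOfferings and CreateTrainingPlan APIs; the instance type, count, and duration values are placeholders:

    import boto3

    sm = boto3.client("sagemaker")
    offerings = sm.search_training_plan_offerings(
        InstanceType="ml.p5.48xlarge",
        InstanceCount=16,
        DurationHours=720,
        TargetResources=["hyperpod-cluster"],
    )
    offering_id = offerings["TrainingPlanOfferings"][0]["TrainingPlanOfferingId"]
    sm.create_training_plan(
        TrainingPlanName="llm-pretraining-plan",
        TrainingPlanOfferingId=offering_id,
    )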

SageMaker HyperPod flexible training plans are available in the US East (N. Virginia), US East (Ohio), and US West (Oregon) AWS Regions. To learn more, visit SageMaker HyperPod, the documentation, and the announcement blog.

Read more


Task governance is now generally available for Amazon SageMaker HyperPod

Amazon SageMaker HyperPod now provides you with centralized governance across all generative AI development tasks, such as training and inference. You have full visibility and control over compute resource allocation, ensuring the most critical tasks are prioritized and maximizing compute resource utilization, reducing model development costs by up to 40%.

With HyperPod task governance, administrators can more easily define priorities for different tasks and set up limits for how many compute resources each team can use. At any given time, administrators can also monitor and audit the tasks that are running or waiting for compute resources through a visual dashboard. When data scientists create their tasks, HyperPod automatically runs them, adhering to the defined compute resource limits and priorities. For example, when training for a high-priority model needs to be completed as soon as possible but all compute resources are in use, HyperPod frees up resources from lower-priority tasks to support the training. HyperPod pauses the low-priority task, saves the checkpoint, and reallocates the freed-up compute resources. The preempted low-priority task will resume from the last saved checkpoint as resources become available again. And when a team is not fully using the resource limits the administrator has set up, HyperPod uses those idle resources to accelerate another team’s tasks. Additionally, HyperPod is now integrated with Amazon SageMaker Studio, bringing task governance and other HyperPod capabilities into the Studio environment. Data scientists can now seamlessly interact with HyperPod clusters directly from Studio, allowing them to develop, submit, and monitor machine learning (ML) jobs on powerful accelerator-backed clusters.

Task governance for HyperPod is available in all AWS Regions where HyperPod is available: US East (N. Virginia), US West (N. California), US West (Oregon), Asia Pacific (Mumbai), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Europe (Frankfurt), Europe (Ireland), Europe (London), Europe (Stockholm), and South America (São Paulo).

To learn more, visit SageMaker HyperPod webpage, AWS News Blog, and SageMaker AI documentation.

Read more


Announcing Amazon SageMaker HyperPod recipes

Amazon SageMaker HyperPod recipes help you get started training and fine-tuning publicly available foundation models (FMs) in minutes with state-of-the-art performance. SageMaker HyperPod helps customers scale generative AI model development across hundreds or thousands of AI accelerators with built-in resiliency and performance optimizations, decreasing model training time by up to 40%. However, as FM sizes continue to grow to hundreds of billions of parameters, the process of customizing these models can take weeks of extensive experimenting and debugging. In addition, performing training optimizations to unlock better price performance is often unfeasible for customers, as they often require deep machine learning expertise that could cause further delays in time to market. 

With SageMaker HyperPod recipes, customers of all skill levels can benefit from state-of-the-art performance while quickly getting started training and fine-tuning popular publicly available FMs, including Llama 3.1 405B, Mixtral 8x22B, and Mistral 7B. SageMaker HyperPod recipes include a training stack tested by AWS, removing weeks of tedious work experimenting with different model configurations. You can also quickly switch between GPU-based and AWS Trainium-based instances with a one-line recipe change and enable automated model checkpointing for improved training resiliency. Finally, you can run workloads in production on the SageMaker AI training service of your choice.

SageMaker HyperPod recipes are available in all AWS Regions where SageMaker HyperPod and SageMaker training jobs are supported. To learn more and get started, visit the SageMaker HyperPod page and blog.

Read more


Amazon DynamoDB zero-ETL integration with Amazon SageMaker Lakehouse

Amazon DynamoDB zero-ETL integration with Amazon SageMaker Lakehouse automates the extracting and loading of data from a DynamoDB table into SageMaker Lakehouse, an open and secure lakehouse. You can run analytics and machine learning workloads on your DynamoDB data using SageMaker Lakehouse, without impacting production workloads running on DynamoDB. With this launch, you now have the option to enable analytics workloads using SageMaker Lakehouse, in addition to the previously available Amazon OpenSearch Service and Amazon Redshift zero-ETL integrations.

Using the no-code interface, you can maintain an up-to-date replica of your DynamoDB data in the data lake by quickly setting up your integration to handle the complete process of replicating data and updating records. This zero-ETL integration reduces the complexity and operational burden of data replication to let you focus on deriving insights from your data. You can create and manage integrations using the AWS Management Console, the AWS Command Line Interface (AWS CLI), or the SageMaker Lakehouse APIs.

DynamoDB zero-ETL integration with SageMaker Lakehouse is now available in the US East (N. Virginia), US East (Ohio), US West (Oregon), Asia Pacific (Tokyo), Asia Pacific (Hong Kong), Asia Pacific (Singapore), Asia Pacific (Sydney), Europe (Stockholm), Europe (Frankfurt), and Europe (Ireland) AWS Regions. 

To learn more, visit DynamoDB integrations and read the documentation.

Read more


AWS expands data connectivity for Amazon SageMaker Lakehouse and AWS Glue

Amazon SageMaker Lakehouse announces unified data connectivity capabilities to streamline the creation, management, and usage of connections to data sources across databases, data lakes and enterprise applications. SageMaker Lakehouse unified data connectivity provides a connection configuration template, support for standard authentication methods like basic authentication and OAuth 2.0, connection testing, metadata retrieval, and data preview. Customers can create SageMaker Lakehouse connections through SageMaker Unified Studio (preview), AWS Glue console, or custom-built application using APIs under AWS Glue.

With SageMaker Lakehouse unified data connectivity, a data connection is configured once and can be reused by SageMaker Unified Studio, AWS Glue and Amazon Athena for use cases in data integration, data analytics and data science. You will gain confidence in the established connection by validating credentials with connection testing. With the ability to browse metadata, you can understand the structure and schema of the data source and identify relevant tables and fields. Lastly, the data preview capability supports mapping source fields to target schemas, identifying needed data transformation, and receiving immediate feedback on the source data queries.

SageMaker Lakehouse unified connectivity is available where Amazon SageMaker Lakehouse or AWS Glue is available. To get started, visit AWS Glue connection documentation or the Amazon SageMaker Lakehouse data connection documentation.

Read more


AWS announces Amazon SageMaker Lakehouse

AWS announces Amazon SageMaker Lakehouse, a unified, open, and secure data lakehouse that simplifies your analytics and artificial intelligence (AI). Amazon SageMaker Lakehouse unifies all your data across Amazon S3 data lakes and Amazon Redshift data warehouses, helping you build powerful analytics and AI/ML applications on a single copy of data.

SageMaker Lakehouse gives you the flexibility to access and query your data in-place with Apache Iceberg open standard. All data in SageMaker Lakehouse can be queried from SageMaker Unified Studio (preview) and engines such as Amazon EMR, AWS Glue, Amazon Redshift or Apache Spark. You can secure your data in the lakehouse by defining fine-grained permissions, which are consistently applied across all analytics and ML tools and engines. With SageMaker Lakehouse, you can use your existing investments. You can seamlessly make data from your Redshift data warehouses available for analytics and AI/ML. In addition, you can now create data lakes by leveraging the analytics optimized Redshift Managed Storage (RMS). Bringing data into the lakehouse is easy. You can use zero-ETL to bring data from operational databases, streaming services, and applications, or query in-place data via federated query.

SageMaker Lakehouse is available in US East (N. Virginia), US East (Ohio), Europe (Ireland), US West (Oregon), Canada (Central), Europe (Frankfurt), Europe (Stockholm), Europe (London), Asia Pacific (Sydney), Asia Pacific (Hong Kong), Asia Pacific (Tokyo), Asia Pacific (Singapore), Asia Pacific (Seoul), and South America (Sao Paulo).

SageMaker Lakehouse is accessible directly from SageMaker Unified Studio. In addition, you can access SageMaker Lakehouse from the AWS Console, AWS Glue APIs, and CLIs. To learn more, visit SageMaker Lakehouse and read the launch blog. For pricing information, please visit the pricing page.

Read more


Introducing the next generation of Amazon SageMaker

Today, AWS announces the next generation of Amazon SageMaker, a unified platform for data, analytics, and AI. This launch brings together widely adopted AWS machine learning and analytics capabilities and provides an integrated experience for analytics and AI with unified access to data and built-in governance. Teams can collaborate and build faster from a single development environment using familiar AWS tools for model development, generative AI application development, data processing, and SQL analytics, accelerated by Amazon Q Developer, the most capable generative AI assistant for software development.

The next generation of SageMaker also introduces new capabilities, including Amazon SageMaker Unified Studio (preview), Amazon SageMaker Lakehouse, and Amazon SageMaker Data and AI Governance. Within the new SageMaker Unified Studio, users can discover their data and put it to work using the best tool for the job across data and AI use cases. SageMaker Unified Studio brings together functionality and tools from the range of standalone studios, query editors, and visual tools available today in Amazon EMR, AWS Glue, Amazon Redshift, Amazon Bedrock, and the existing Amazon SageMaker Studio. SageMaker Lakehouse provides an open data architecture that reduces data silos and unifies data across Amazon Simple Storage Service (Amazon S3) data lakes, Amazon Redshift data warehouses, and third party and federated data sources. SageMaker Lakehouse offers the flexibility to access and query data with Apache Iceberg–compatible tools and engines. SageMaker Data and AI Governance, including Amazon SageMaker Catalog built on Amazon DataZone, empowers users to securely discover, govern, and collaborate on data and AI workflows.  

For more information on AWS Regions where the next generation of Amazon SageMaker is available, see Supported Regions.

To learn more and get started, visit the following resources:

Read more


Amazon SageMaker Lakehouse integrated access controls now available in Amazon Athena federated queries

Amazon SageMaker now supports connectivity, discovery, querying, and enforcing fine-grained data access controls on federated sources when querying data with Amazon Athena. Athena is a query service that makes it simple to analyze your data lake and federated data sources such as Amazon Redshift, Amazon DynamoDB, or Snowflake using SQL without extract, transform, and load (ETL) scripts. Now, data workers can connect to and unify these data sources within SageMaker Lakehouse. Federated source metadata is unified in SageMaker Lakehouse, where you apply fine-grained policies in one place, helping to streamline analytics workflows and secure your data.

Log in to Amazon SageMaker Unified Studio, connect to a federated data source in SageMaker Lakehouse, and govern data with column- and tag-based permissions that are enforced when querying federated data sources with Athena. In addition to SageMaker Unified Studio, you can connect to these data sources through the Athena console and API. To help you automate and streamline connector setup, the new user experiences allow you to create and manage connections to data sources with ease.

Now, organizations can extract insights from a unified set of data sources while strengthening security posture, wherever your data is stored. The unification and fine-grained access controls on federated sources are available in all AWS Regions where SageMaker Lakehouse is available. To learn more, visit SageMaker Lakehouse documentation.

Read more


Introducing Amazon SageMaker Data and AI Governance

Today, AWS announces Amazon SageMaker Data and AI Governance, a new capability that simplifies discovery, governance, and collaboration for data and AI across your lakehouse, AI models, and applications. Built on Amazon DataZone, SageMaker Data and AI Governance allows engineers, data scientists, and analysts to securely discover and access approved data and models using semantic search with generative AI–created metadata. This new offering helps organizations consistently define and enforce access policies using a single permission model with fine-grained access controls.

With SageMaker Data and AI Governance, you can accelerate data and AI discovery and collaboration at scale. You can enhance data discovery by automatically enriching your data and metadata with business context using generative AI, making it easier for all users to find, understand, and use data. You can share data, AI models, prompts, and other generative AI assets with filtering by table and column names or business glossary terms. SageMaker Data and AI Governance helps establish trust and drives transparency in your data pipelines and AI projects with built-in model monitoring to detect bias and report on how features contribute to your model predictions.

To learn more about how to govern your data and AI assets, visit SageMaker Data and AI Governance.

Read more


Announcing the preview of Amazon SageMaker Unified Studio

Today, AWS announces the next generation of Amazon SageMaker, including the preview launch of Amazon SageMaker Unified Studio, an integrated data and AI development environment that enables collaboration and helps teams build data products faster. SageMaker Unified Studio brings together familiar tools from AWS analytics and AI/ML services for data processing, SQL analytics, machine learning model development, and generative AI application development. Amazon SageMaker Lakehouse, which is accessible through SageMaker Unified Studio, provides open source compatibility and access to data stored across Amazon Simple Storage Service (Amazon S3) data lakes, Amazon Redshift data warehouses, and third-party and federated data sources. Enhanced governance features are built in to help you meet enterprise security requirements.

SageMaker Unified Studio allows you to find, access, and query data and AI assets across your organization, then work together in projects to securely build and share analytics and AI artifacts, including data, models, and generative AI applications. SageMaker Unified Studio offers the capabilities to build integrated data pipelines with visual extract, transform, and load (ETL), develop ML models, and create custom generative AI applications. New unified Jupyter Notebooks enable seamless work across different compute resources and clusters, while an integrated SQL editor lets you query your data stored in various sources—all within a single, collaborative environment. Amazon Bedrock IDE, formerly Amazon Bedrock Studio, is now part of the SageMaker Unified Studio in public preview, offering the capabilities to rapidly build and customize generative AI applications. Amazon Q Developer, the most capable generative AI assistant for software development, is integrated into SageMaker Unified Studio to accelerate and streamline tasks across the development lifecycle.

For more information on AWS Regions where SageMaker Unified Studio is available in preview, see Supported Regions.

To get started, see the following resources:

Read more


Data Lineage is now generally available in Amazon DataZone and the next generation of Amazon SageMaker

AWS announces the general availability of Data Lineage in Amazon DataZone and the next generation of Amazon SageMaker, a capability that automatically captures lineage from AWS Glue and Amazon Redshift to visualize lineage events from source to consumption. Being OpenLineage compatible, this feature allows data producers to augment the automated lineage with lineage events captured from OpenLineage-enabled systems or through API, to provide a comprehensive data movement view to data consumers.

This feature automates lineage capture of schema and transformations of data assets and columns from AWS Glue, Amazon Redshift, and Spark executions in tools to maintain consistency and reduce errors. With in-built automation, domain administrators and data producers can automate capture and storage of lineage events when data is configured for data sharing in the business data catalog. Data consumers can gain confidence in an asset's origin from the comprehensive view of its lineage while data producers can assess the impact of changes to an asset by understanding its consumption. Additionally, the data lineage feature versions lineage with each event, enabling users to visualize lineage at any point in time or compare transformations across an asset's or job's history. This historical lineage provides a deeper understanding of how data has evolved, essential for troubleshooting, auditing, and validating the integrity of data assets.

The data lineage feature is generally available in all AWS Regions where Amazon DataZone and the next generation of Amazon SageMaker are available.

To learn more, visit Amazon DataZone and the next generation of Amazon SageMaker.
 

Read more


Amazon SageMaker introduces Scale Down to Zero for AI inference to help customers save costs

We are excited to announce Scale Down to Zero, a new capability in Amazon SageMaker Inference that allows endpoints to scale to zero instances during periods of inactivity. This feature can significantly reduce costs for running inference using AI models, making it particularly beneficial for applications with variable traffic patterns such as chatbots, content moderation systems, and other generative AI use cases.

With Scale Down to Zero, customers can configure their SageMaker inference endpoints to automatically scale to zero instances when not in use, then quickly scale back up when traffic resumes. This capability is effective for scenarios with predictable traffic patterns, intermittent inference traffic, and development/testing environments. Implementing Scale Down to Zero is simple with SageMaker Inference Components. Customers can configure auto-scaling policies through the AWS SDK for Python (Boto3), SageMaker Python SDK, or the AWS Command Line Interface (AWS CLI). The process involves setting up an endpoint with managed instance scaling enabled, configuring scaling policies, and creating CloudWatch alarms to trigger scaling actions.
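
For example, a sketch of the scaling-policy side with Boto3: register the inference component as a scalable target whose copy count may drop to zero (the component name is a placeholder):

    import boto3

    boto3.client("application-autoscaling").register_scalable_target(
        ServiceNamespace="sagemaker",
        ResourceId="inference-component/my-llm-component",
        ScalableDimension="sagemaker:inference-component:DesiredCopyCount",
        MinCapacity=0,  # permit scale-in to zero copies when idle
        MaxCapacity=4,
    )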

Scale Down to Zero is now generally available in all AWS regions where Amazon SageMaker is supported. To learn more about implementing Scale Down to Zero and optimizing costs for generative AI deployments, please visit our documentation page.
 

Read more


Amazon SageMaker launches Multi-Adapter Model Inference

Today, Amazon SageMaker introduces new multi-adapter inference capabilities that unlock exciting possibilities for customers using pre-trained language models. This feature allows you to deploy hundreds of fine-tuned LoRA (Low-Rank Adaptation) model adapters behind a single endpoint, dynamically loading the appropriate adapters in milliseconds based on the request. This enables you to efficiently host many specialized LoRA adapters built on a common base model, delivering high throughput and cost-savings compared to deploying separate models.
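
To make the routing concrete: each adapter is hosted as its own inference component on a shared endpoint, and a request selects an adapter by naming that component. A hedged Boto3 sketch, with hypothetical endpoint and component names:

```python
import json

import boto3

smr = boto3.client("sagemaker-runtime")

# Naming the inference component routes the request to that LoRA adapter;
# SageMaker loads the adapter dynamically if it is not already resident.
# Endpoint and component names here are hypothetical.
response = smr.invoke_endpoint(
    EndpointName="my-multi-adapter-endpoint",
    InferenceComponentName="medical-summarization-adapter",
    ContentType="application/json",
    Body=json.dumps({"inputs": "Summarize the patient notes: ..."}),
)
print(response["Body"].read().decode())
```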

With multi-adapter inference, you can quickly customize pre-trained models to meet diverse business needs. For example, marketing and SaaS companies can personalize AI/ML applications using each customer's unique images, communication style, and documents to generate tailored content in seconds. Similarly, enterprises in industries like healthcare and financial services can reuse a common LoRA-powered base model to tackle a variety of specialized tasks, from medical diagnosis to fraud detection, by simply swapping in the appropriate fine-tuned adapter. This flexibility and efficiency unlocks new opportunities to deploy powerful, adaptable AI across your organization.

The multi-adapter inference feature is generally available in: Asia Pacific (Tokyo, Seoul, Mumbai, Singapore, Sydney, Jakarta), Canada (Central), Europe (Frankfurt, Stockholm, Ireland, London), Middle East (UAE), South America (Sao Paulo), US East (N. Virginia, Ohio), and US West (Oregon).

To get started, refer to the Amazon SageMaker developer guide for information on using LoRA and managing model adapters.
 

Read more


Amazon SageMaker Notebook Instances now support Trainium1 and Inferentia2 based instances

We are pleased to announce general availability of Trainium1 and Inferentia2 based EC2 instances on SageMaker Notebook Instances.

Amazon EC2 Trn1 instances, powered by AWS Trainium chips, and Inf2 instances, powered by AWS Inferentia chips, are purpose-built for high-performance deep learning training and inference, respectively. Trn1 instances offer cost savings over other comparable Amazon EC2 instances for training 100B+ parameter generative AI models like large language models (LLMs) and latent diffusion. Inf2 instances deliver low-cost, high-performance inference for generative AI including LLMs and vision transformers. You can use Trn1 and Inf2 instances across a broad set of applications, such as text summarization, code generation, question answering, image and video generation, recommendation, and fraud detection.

Amazon EC2 Trn1 instances are available for SageMaker Notebook Instances in the US East (N. Virginia and Ohio) and US West (Oregon) Regions. Amazon EC2 Trn1n instances are available for SageMaker Notebook Instances in US East (N. Virginia and Ohio). Amazon EC2 Inf2 instances are available for SageMaker Notebook Instances in US West (Oregon), US East (N. Virginia and Ohio), Europe (Ireland), Europe (Frankfurt), Asia Pacific (Tokyo), Asia Pacific (Sydney), Asia Pacific (Mumbai), Europe (London), Asia Pacific (Singapore), Europe (Stockholm), Europe (Paris), and South America (São Paulo).

Visit the developer guide for instructions on setting up and using SageMaker Notebook Instances.
 

Read more


Amazon SageMaker now provides a new setup experience for Amazon DataZone projects

Amazon SageMaker now provides a new setup experience for Amazon DataZone projects, making it easier for customers to govern access to data and machine learning (ML) assets. With this capability, administrators can now set up Amazon DataZone projects by importing their existing authorized users, security configurations, and policies from Amazon SageMaker domains.

Today, Amazon SageMaker customers use domains to organize lists of authorized users and a variety of security, application, policy, and Amazon Virtual Private Cloud configurations. With this launch, administrators can now accelerate the process of setting up governance for data and ML assets in Amazon SageMaker. They can import users and configurations from existing SageMaker domains to Amazon DataZone projects, mapping SageMaker users to corresponding Amazon DataZone project members. This enables project members to search, discover, and consume ML and data assets within Amazon SageMaker capabilities such as Studio, Canvas, and notebooks. Also, project members can publish these assets from Amazon SageMaker to the DataZone business catalog, enabling other project members to discover and request access to them.

This capability is available in all Amazon Web Services regions where Amazon SageMaker and Amazon DataZone are currently available. To get started, see the Amazon SageMaker administrator guide.

Read more


SageMaker Model Registry now supports model lineage to improve model governance

Amazon SageMaker Model Registry now supports tracking machine learning (ML) model lineage, enabling you to automatically capture and retain information about the steps of an ML workflow, from data preparation and training to model registration and deployment.

Customers use Amazon SageMaker Model Registry as a purpose-built metadata store to manage the entire lifecycle of ML models. With this launch, data scientists and ML engineers can now easily capture and view the model lineage details such as datasets, training jobs, and deployment endpoints in Model Registry. When they register a model, Model Registry begins tracking the lineage of the model from development to deployment. This creates an audit trail that enables traceability and reproducibility, providing visibility across the model lifecycle to improve model governance.

This capability is available in all AWS regions where Amazon SageMaker Model Registry is currently available except GovCloud regions. To learn more, see View Model Lineage Details in Amazon SageMaker Studio.
 

Read more


Amazon SageMaker Model Registry now supports defining machine learning model lifecycle stages

Today, we are excited to announce that Amazon SageMaker Model Registry now supports custom machine learning (ML) model lifecycle stages. This capability further improves model governance by enabling data scientists and ML engineers to define and control the progression of their models across various stages, from development to production.

Customers use Amazon SageMaker Model Registry as a purpose-built metadata store to manage the entire lifecycle of ML models. With this launch, data scientists and ML engineers can now define custom stages such as development, testing, and production for ML models in the model registry. This makes it easy to track and manage models as they transition across different stages in the model lifecycle from training to inference. They can also track stage approval status such as Pending Approval, Approved, and Rejected to check when the model is ready to move to the next stage. These custom stages and approval status help data scientists and ML engineers define and enforce model approval workflows, ensuring that models meet specific criteria before advancing to the next stage. By implementing these custom stages and approval processes, customers can standardize their model governance practices across their organization, maintain better oversight of model progression, and ensure that only approved models reach production environments.

This capability is available in all AWS regions where Amazon SageMaker Model Registry is currently available except GovCloud regions. To learn more, see Staging Construct for your Model Lifecycle.

Read more


Amazon SageMaker Notebook Instances now support JupyterLab 4 notebooks

We're excited to announce the availability of JupyterLab 4 on Amazon SageMaker Notebook Instances, providing you with a powerful and modern interactive development environment (IDE) for your data science and machine learning (ML) workflows.

With this update, you can now leverage the latest features and improvements in JupyterLab 4, including faster performance and notebook windowing, making working with large notebooks much more efficient. The Extension Manager now includes both prebuilt Python extensions and extensions from PyPI, making it easier to discover and install the tools you need. The Search and Replace functionality has been improved with new features, including highlighting matches in rendered Markdown cells, searching in the current selection, and regular expression support for replacements. By providing JupyterLab 4 on Amazon SageMaker Notebook Instances, we're empowering you with a cutting-edge development environment to boost your productivity and efficiency when building ML models and exploring data.

JupyterLab 4 notebooks are available in all commercial AWS regions where SageMaker Notebook Instances are available. Visit the developer guide for instructions on setting up and using SageMaker notebook instances.

Read more


amazon-sagemaker-canvas

Amazon Q Developer can now guide SageMaker Canvas users through ML development

Starting today, you can build ML models using natural language with Amazon Q Developer, now available in Amazon SageMaker Canvas in preview. You can now get generative AI-powered assistance through the ML lifecycle, from data preparation to model deployment. With Amazon Q Developer, users of all skill levels can use natural language to access expert guidance to build high-quality ML models, accelerating innovation and time to market.

Amazon Q Developer will break down your objective into specific ML tasks, define the appropriate ML problem type, and apply data preparation techniques to your data. Amazon Q Developer then guides you through the process of building, evaluating, and deploying custom ML models. ML models produced in SageMaker Canvas with Amazon Q Developer are production ready, can be registered in SageMaker Studio, and the code can be shared with data scientists for integration into downstream MLOps workflows.

Amazon Q Developer is available in SageMaker Canvas in preview in the following AWS Regions: US East (N. Virginia), US West (Oregon), Europe (Frankfurt), Europe (Paris), Asia Pacific (Tokyo), and Asia Pacific (Seoul). To learn more about using Amazon Q Developer with SageMaker Canvas, visit the website, read the AWS News blog, or view the technical documentation.

Read more


amazon-sagemaker-hyperpod

Amazon SageMaker HyperPod now provides flexible training plans

Amazon SageMaker HyperPod announces flexible training plans, a new capability that allows you to train generative AI models within your timelines and budgets. Gain predictable model training timelines and run training workloads within your budget requirements, while continuing to benefit from features of SageMaker HyperPod such as resiliency, performance-optimized distributed training, and enhanced observability and monitoring. 

In a few quick steps, you can specify your preferred compute instances, desired amount of compute resources, duration of your workload, and preferred start date for your generative AI model training. SageMaker then helps you create the most cost-efficient training plans, reducing time to train your model by weeks. Once you create and purchase your training plans, SageMaker automatically provisions the infrastructure and runs the training workloads on these compute resources without requiring any manual intervention. SageMaker also automatically takes care of pausing and resuming training between gaps in compute availability, as the plan switches from one capacity block to another. If you wish to remove all the heavy lifting of infrastructure management, you can also create and run training plans using SageMaker fully managed training jobs.  
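
Programmatically, creating a plan is a two-step flow: search the available offerings for your compute, duration, and start window, then purchase one. The Boto3 sketch below is illustrative; the instance type, counts, and plan name are hypothetical, and parameter details should be verified against the SearchTrainingPlanOfferings and CreateTrainingPlan API references.

```python
from datetime import datetime, timedelta, timezone

import boto3

sm = boto3.client("sagemaker")

# Search for offerings matching the desired compute, duration, and start
# window (all values hypothetical).
offerings = sm.search_training_plan_offerings(
    InstanceType="ml.p5.48xlarge",
    InstanceCount=8,
    StartTimeAfter=datetime.now(timezone.utc) + timedelta(days=1),
    DurationHours=72,
    TargetResources=["hyperpod-cluster"],
)

# Purchase the first matching offering; in practice you would compare the
# returned options on price and timeline before choosing.
offering_id = offerings["TrainingPlanOfferings"][0]["TrainingPlanOfferingId"]
sm.create_training_plan(
    TrainingPlanName="llm-pretraining-plan",
    TrainingPlanOfferingId=offering_id,
)
```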

SageMaker HyperPod flexible training plans are available in the US East (N. Virginia), US East (Ohio), and US West (Oregon) AWS Regions. To learn more, visit the SageMaker HyperPod page, the documentation, and the announcement blog.

Read more


Task governance is now generally available for Amazon SageMaker HyperPod

Amazon SageMaker HyperPod now provides you with centralized governance across all generative AI development tasks, such as training and inference. You have full visibility into and control over compute resource allocation, ensuring that the most critical tasks are prioritized and that compute utilization is maximized, reducing model development costs by up to 40%.

With HyperPod task governance, administrators can more easily define priorities for different tasks and set up limits for how many compute resources each team can use. At any given time, administrators can also monitor and audit the tasks that are running or waiting for compute resources through a visual dashboard. When data scientists create their tasks, HyperPod automatically runs them, adhering to the defined compute resource limits and priorities. For example, when training for a high-priority model needs to be completed as soon as possible but all compute resources are in use, HyperPod frees up resources from lower-priority tasks to support the training. HyperPod pauses the low-priority task, saves the checkpoint, and reallocates the freed-up compute resources. The preempted low-priority task resumes from the last saved checkpoint as resources become available again. And when a team is not fully using the resource limits the administrator has set up, HyperPod uses those idle resources to accelerate another team’s tasks. Additionally, HyperPod is now integrated with Amazon SageMaker Studio, bringing task governance and other HyperPod capabilities into the Studio environment. Data scientists can now seamlessly interact with HyperPod clusters directly from Studio, allowing them to develop, submit, and monitor machine learning (ML) jobs on powerful accelerator-backed clusters.

Task governance for HyperPod is available in all AWS Regions where HyperPod is available: US East (N. Virginia), US West (N. California), US West (Oregon), Asia Pacific (Mumbai), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Europe (Frankfurt), Europe (Ireland), Europe (London), Europe (Stockholm), and South America (São Paulo).

To learn more, visit the SageMaker HyperPod webpage, the AWS News Blog, and the SageMaker AI documentation.

Read more


Announcing Amazon SageMaker HyperPod recipes

Amazon SageMaker HyperPod recipes help you get started training and fine-tuning publicly available foundation models (FMs) in minutes with state-of-the-art performance. SageMaker HyperPod helps customers scale generative AI model development across hundreds or thousands of AI accelerators with built-in resiliency and performance optimizations, decreasing model training time by up to 40%. However, as FM sizes continue to grow to hundreds of billions of parameters, the process of customizing these models can take weeks of extensive experimenting and debugging. In addition, performing training optimizations to unlock better price performance is often unfeasible for customers, as they often require deep machine learning expertise that could cause further delays in time to market. 

With SageMaker HyperPod recipes, customers of all skill sets can benefit from state-of-the-art performance while quickly getting started training and fine-tuning popular publicly available FMs, including Llama 3.1 405B, Mixtral 8x22B, and Mistral 7B. SageMaker HyperPod recipes include a training stack tested by AWS, removing weeks of tedious work experimenting with different model configurations. You can also quickly switch between GPU-based and AWS Trainium-based instances with a one-line recipe change and enable automated model checkpointing for improved training resiliency. Finally, you can run workloads in production on the SageMaker AI training service of your choice. 

SageMaker HyperPod recipes are available in all AWS Regions where SageMaker HyperPod and SageMaker training jobs are supported. To learn more and get started, visit the SageMaker HyperPod page and blog.

Read more


amazon-sagemaker-lakehouse

AWS announces Amazon SageMaker Lakehouse

AWS announces Amazon SageMaker Lakehouse, a unified, open, and secure data lakehouse that simplifies your analytics and artificial intelligence (AI). Amazon SageMaker Lakehouse unifies all your data across Amazon S3 data lakes and Amazon Redshift data warehouses, helping you build powerful analytics and AI/ML applications on a single copy of data.

SageMaker Lakehouse gives you the flexibility to access and query your data in place with the Apache Iceberg open standard. All data in SageMaker Lakehouse can be queried from SageMaker Unified Studio (preview) and engines such as Amazon EMR, AWS Glue, Amazon Redshift, or Apache Spark. You can secure your data in the lakehouse by defining fine-grained permissions, which are consistently applied across all analytics and ML tools and engines. With SageMaker Lakehouse, you can also use your existing investments: you can seamlessly make data from your Redshift data warehouses available for analytics and AI/ML, and you can now create data lakes by leveraging the analytics-optimized Redshift Managed Storage (RMS). Bringing data into the lakehouse is easy. You can use zero-ETL to bring in data from operational databases, streaming services, and applications, or query data in place via federated query.

SageMaker Lakehouse is available in US East (N. Virginia), US East (Ohio), Europe (Ireland), US West (Oregon), Canada (Central), Europe (Frankfurt), Europe (Stockholm), Europe (London), Asia Pacific (Sydney), Asia Pacific (Hong Kong), Asia Pacific (Tokyo), Asia Pacific (Singapore), Asia Pacific (Seoul), and South America (Sao Paulo).

SageMaker Lakehouse is accessible directly from SageMaker Unified Studio. In addition, you can access SageMaker Lakehouse from the AWS Console and through the AWS Glue APIs and CLIs. To learn more, visit SageMaker Lakehouse and read the launch blog. For pricing information, please visit the pricing page.

Read more


amazon-security-lake

Introducing the Amazon Security Lake Ready Specialization

We are excited to announce the new Amazon Security Lake Ready Specialization, which recognizes AWS Partners whose software solutions have been technically validated by AWS Partner Solutions Architects for sound architecture and integration with Amazon Security Lake, and who have demonstrated successful customer deployments. Security Lake Ready software solutions can either contribute data to Security Lake or consume this data and provide analytics, delivering a cohesive security solution for AWS customers.

Amazon Security Lake automates data management tasks for customers, reducing costs and consolidating security data that customers own. It uses the Open Cybersecurity Schema Framework (OCSF), an open standard that helps customers address the challenges of data normalization and schema mapping across multiple log sources. With Amazon Security Lake Ready software solutions, customers now have a single place with verified partner solutions where security data can be stored in an open-source format, ready for identifying potential threats and vulnerabilities, and for security investigations and analytics.

Explore Amazon Security Lake Ready software solutions that can help your organization improve the protection of workloads, applications, and data by significantly reducing the operational overhead of managing security data. To learn more about how to become an Amazon Security Lake Ready Partner, visit the AWS Service Ready Program webpage.
 

Read more


Amazon OpenSearch Service zero-ETL integration with Amazon Security Lake

Amazon OpenSearch Service now offers a zero-ETL integration with Amazon Security Lake, enabling you to query and analyze security data in-place directly through OpenSearch. This integration allows you to efficiently explore voluminous data sources that were previously cost-prohibitive to analyze, helping you streamline security investigations and obtain comprehensive visibility of your security landscape. By offering the flexibility to selectively ingest data and eliminating the need to manage complex data pipelines, you can now focus on effective security operations while potentially lowering your analytics costs.

Using the powerful analytics and visualization capabilities in OpenSearch Service, you can perform deeper investigations, enhance threat hunting, and proactively monitor your security posture. Pre-built queries and dashboards using the Open Cybersecurity Schema Framework (OCSF) can further accelerate your analysis. The built-in query accelerator boosts performance and enables fast-loading dashboards, enhancing your overall experience. This integration empowers you to accelerate investigations, uncover insights from previously inaccessible data sources, and optimize analytics efficiency and costs, all with minimal data migration.

OpenSearch Service zero-ETL integration with Security Lake is now generally available in 13 regions globally: Asia Pacific (Mumbai), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Europe (Frankfurt), Europe (Ireland), Europe (London), US East (Ohio), US East (N. Virginia), US West (Oregon), South America (São Paulo), Europe (Paris), and Canada (Central).

To learn more on using this capability, see the OpenSearch Service Integrations page and the OpenSearch Service Developer Guide. To learn more about how to configure and share Security Lake, see the Get Started Guide.
 

Read more


amazon-ses

SES Mail Manager adds delivery of email to Amazon Q Business applications

Amazon SES announces that Mail Manager now offers a “Deliver to Q Business” rule action, which allows customers to specify an Amazon Q Business application resource and submit email messages to it for indexing and queries. This simplifies setup and allows granular control over which messages are selected by the rule conditions, and it enables multiple parallel configurations when customers want to index different messages into entirely separate Q Business applications.

Amazon Q Business is a generative AI–powered assistant that can answer questions, provide summaries, generate content, and securely complete tasks based on data and information in your enterprise systems. Customers submitting email content will be able to identify patterns of discussion, activities around specific themes, and other content which is not an explicit cybersecurity attack but may still be of interest to managers, risk officers, or compliance teams. Mail Manager and Q Business offer an additional dimension for email risk management, with full flexibility around which messages are retained, in which locations, and for what duration.
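
As a rough illustration of how such a rule might be defined with Boto3, the sketch below creates a rule set whose single rule selects messages by recipient and delivers them to a Q Business application. The application ID, index ID, and role are hypothetical placeholders, and the exact request shape should be confirmed against the Mail Manager API reference.

```python
import boto3

mailmanager = boto3.client("mailmanager")

# All identifiers below are hypothetical placeholders.
mailmanager.create_rule_set(
    RuleSetName="index-compliance-mail",
    Rules=[
        {
            "Name": "deliver-compliance-mail-to-q",
            # Select only messages addressed to the compliance mailbox.
            "Conditions": [
                {
                    "StringExpression": {
                        "Evaluate": {"Attribute": "TO"},
                        "Operator": "CONTAINS",
                        "Values": ["compliance@example.com"],
                    }
                }
            ],
            "Actions": [
                {
                    "DeliverToQBusiness": {
                        "ApplicationId": "a1b2c3d4-5678-90ab-cdef-example11111",
                        "IndexId": "b2c3d4e5-6789-01bc-def0-example22222",
                        "RoleArn": "arn:aws:iam::123456789012:role/mailmanager-qbusiness",
                    }
                }
            ],
        }
    ],
)
```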

The Mail Manager rule action to deliver to Amazon Q Business is available in all AWS commercial Regions where both Q Business and Mail Manager are already available. To learn more about Mail Manager, click here.

Read more


Amazon SES adds inline template support to send email APIs

Amazon Simple Email Service (SES) now allows customers to provide email templates directly within the SendBulkEmail or SendEmail API request. SES will use the provided inline template content to render and assemble the email content for delivery, reducing the need to manage template resources in your SES account.

Previously, Amazon Simple Email Service (SES) customers had to pre-create and store email templates in their SES account to use them for sending emails. This added complexity and friction to the email sending process, as customers had to manage the lifecycle of these templates. The new inline template support simplifies the integration process by allowing you to include the template content directly in your send API request, without having to create and maintain separate template resources.
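
A minimal sketch of inline templating with the SES v2 API via Boto3 is shown below; the addresses and template fields are hypothetical. The template content travels with the request itself, so no template resource needs to be created beforehand.

```python
import json

import boto3

ses = boto3.client("sesv2")

ses.send_email(
    FromEmailAddress="sender@example.com",
    Destination={"ToAddresses": ["recipient@example.com"]},
    Content={
        "Template": {
            # Template content supplied inline with the request, instead of
            # referencing a pre-created template by name.
            "TemplateContent": {
                "Subject": "Hello {{name}}",
                "Text": "Hi {{name}}, your order {{order}} has shipped.",
                "Html": "<p>Hi {{name}}, your order {{order}} has shipped.</p>",
            },
            "TemplateData": json.dumps({"name": "Jane", "order": "12345"}),
        }
    },
)
```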

Support for inline templates is available in all AWS Regions where Amazon SES is offered.

To learn more, see the documentation for using templates to send personalized email with the Amazon SES API.

Read more


amazon-sns

Amazon SNS delivers to Amazon Data Firehose endpoints in six new regions

Amazon Simple Notification Service (Amazon SNS) now delivers to Amazon Data Firehose endpoints in Asia Pacific (Hong Kong), Asia Pacific (Hyderabad), Asia Pacific (Melbourne), Europe (Zurich), Europe (Spain), and Middle East (UAE).

You can now use Amazon SNS to deliver notifications to Amazon Data Firehose (Firehose) endpoints for archiving and analysis. Through Firehose delivery streams, you can deliver events to AWS destinations such as Amazon Simple Storage Service (Amazon S3), Amazon Redshift, and Amazon OpenSearch Service, or to third-party destinations such as Datadog, New Relic, MongoDB, and Splunk. For more information, see Fanout to Firehose delivery streams.
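
Setting up the fanout is a single Subscribe call using the firehose protocol and a subscription role that lets SNS write to the delivery stream. A minimal Boto3 sketch, with hypothetical ARNs:

```python
import boto3

sns = boto3.client("sns")

# Hypothetical ARNs -- replace with your topic, delivery stream, and an IAM
# role that allows SNS to write to Firehose.
topic_arn = "arn:aws:sns:me-central-1:123456789012:orders"
stream_arn = "arn:aws:firehose:me-central-1:123456789012:deliverystream/orders-archive"
role_arn = "arn:aws:iam::123456789012:role/sns-firehose-delivery"

sns.subscribe(
    TopicArn=topic_arn,
    Protocol="firehose",
    Endpoint=stream_arn,
    Attributes={"SubscriptionRoleArn": role_arn},
)
```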

To get started, see the following resources:

Read more


Amazon SNS delivers to Amazon Data Firehose endpoints in the AWS GovCloud (US) Regions

Amazon Simple Notification Service (Amazon SNS) now delivers to Amazon Data Firehose endpoints in the AWS GovCloud (US-East) and AWS GovCloud (US-West) Regions.

You can now use Amazon SNS to deliver notifications to Amazon Data Firehose (Firehose) endpoints for archiving and analysis. Through Firehose delivery streams, you can deliver events to AWS destinations such as Amazon Simple Storage Service (Amazon S3), Amazon Redshift, and Amazon OpenSearch Service, or to third-party destinations such as Datadog, New Relic, MongoDB, and Splunk. For more information, see Fanout to Firehose delivery streams.

To get started, see the following resources:

Read more


Amazon SNS supports message archiving and replay for FIFO topics in the AWS GovCloud (US) Regions

Amazon SNS now supports in-place message archiving and replay for SNS FIFO topics in the AWS GovCloud (US-East) and AWS GovCloud (US-West) Regions.

Topic owners can now set an archive policy, which defines a retention period for the messages published to their topic. Subscribers can then set a replay policy to an individual subscription, which triggers a replay of select messages from the archive, from a starting point until an ending point. Subscribers can also set a filter policy on their subscription to further select the messages in-scope for a replay.
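
In practice, both policies are plain topic and subscription attributes. The Boto3 sketch below sets a 30-day archive on a FIFO topic and replays from a chosen timestamp; the ARNs are hypothetical, and the exact policy JSON shape should be confirmed against the SNS documentation.

```python
import json

import boto3

sns = boto3.client("sns")

# Hypothetical ARNs for a FIFO topic and one of its subscriptions.
topic_arn = "arn:aws-us-gov:sns:us-gov-west-1:123456789012:orders.fifo"
subscription_arn = topic_arn + ":12345678-1234-1234-1234-123456789012"

# Topic owner: retain published messages for 30 days.
sns.set_topic_attributes(
    TopicArn=topic_arn,
    AttributeName="ArchivePolicy",
    AttributeValue=json.dumps({"MessageRetentionPeriod": "30"}),
)

# Subscriber: replay archived messages from a chosen starting point.
sns.set_subscription_attributes(
    SubscriptionArn=subscription_arn,
    AttributeName="ReplayPolicy",
    AttributeValue=json.dumps(
        {"PointType": "Timestamp", "StartingPoint": "2024-12-01T00:00:00Z"}
    ),
)
```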

To get started, see the following resources:

Read more


amazon-sqs

Apache Flink connector for Amazon Simple Queue Service now available

Today, AWS announced support for a new Apache Flink connector for Amazon Simple Queue Service. The new connector, contributed by AWS to the Apache Flink open source project, adds Amazon Simple Queue Service as a new destination for Apache Flink, a popular framework and engine for processing and analyzing streaming data. You can use the new connector to send processed data from Amazon Managed Service for Apache Flink applications to Amazon Simple Queue Service queues.

Amazon Managed Service for Apache Flink makes it easier to transform and analyze streaming data in real time with Apache Flink. Apache Flink is an open source framework and engine for processing data streams. Amazon Managed Service for Apache Flink reduces the complexity of building and managing Apache Flink applications and integrates with Amazon Managed Streaming for Apache Kafka (Amazon MSK), Amazon Kinesis Data Streams, Amazon OpenSearch Service, Amazon DynamoDB streams, Amazon S3, custom integrations, and more using built-in connectors.

Amazon Simple Queue Service offers a secure, durable, and available hosted queue that lets you integrate and decouple distributed software systems and components. Amazon SQS offers common constructs such as dead-letter queues and cost allocation tags.

You can learn more about Amazon Managed Service for Apache Flink and Amazon Simple Queue Service in our documentation. To learn more about open source Apache Flink connectors visit the official website. For Amazon Managed Service for Apache Flink and Amazon Simple Queue Service region availability, refer to the AWS Region Table.

Read more


Amazon SQS increases in-flight limit for FIFO queues from 20K to 120K

Amazon SQS increases the in-flight limit for FIFO queues from 20K to 120K messages. When a message is sent to an SQS FIFO queue, it is added to the queue backlog. Once you invoke a receive request on the FIFO queue, the message is marked as in-flight and remains in-flight until a delete message request is invoked.

With this change to the in-flight limit, your receivers can now process a maximum of 120K messages concurrently, increased from 20K previously, via SQS FIFO queues. If you have sufficient publish throughput and were constrained by the 20K in-flight limit, you can now process up to 120K messages at a time by scaling your receivers.
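
The in-flight count is simply the number of messages received but not yet deleted. A minimal Boto3 receive loop, with a hypothetical queue URL and handler, makes the accounting concrete:

```python
import boto3

sqs = boto3.client("sqs")
queue_url = "https://sqs.us-east-1.amazonaws.com/123456789012/orders.fifo"  # hypothetical

def process(body: str) -> None:
    """Placeholder for your message handler."""
    print(body)

# Every message received but not yet deleted counts toward the in-flight
# limit, now 120K for FIFO queues. Running more copies of this loop raises
# concurrent processing up to that limit.
resp = sqs.receive_message(
    QueueUrl=queue_url,
    MaxNumberOfMessages=10,  # up to 10 messages per call
    WaitTimeSeconds=20,      # long polling
)
for msg in resp.get("Messages", []):
    process(msg["Body"])
    # Deleting the message removes it from the in-flight count.
    sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=msg["ReceiptHandle"])
```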

The increased in-flight limit is available in all commercial and the AWS GovCloud (US) Regions where SQS FIFO queues are available.

To get started, see the following resources:

Read more


amazon-timestream

AWS Backup now supports Amazon Timestream in Asia Pacific (Mumbai)

Today, we are announcing the availability of AWS Backup support for Amazon Timestream for LiveAnalytics in the Asia Pacific (Mumbai) Region. AWS Backup is a policy-based, fully managed and cost-effective solution that enables you to centralize and automate data protection of Amazon Timestream for LiveAnalytics along with other AWS services (spanning compute, storage, and databases) and third-party applications. Together with AWS Organizations, AWS Backup enables you to centrally deploy policies to configure, manage, and govern your data protection activity.

With this launch, AWS Backup support for Amazon Timestream for LiveAnalytics is available in the following Regions: US East (N. Virginia), US East (Ohio), US West (Oregon), Asia Pacific (Mumbai), Asia Pacific (Sydney), Asia Pacific (Tokyo), Europe (Frankfurt), and Europe (Ireland). For more information on regional availability, feature availability, and pricing, see the AWS Backup pricing page and the AWS Backup Feature Availability page.

To learn more about AWS Backup support for Amazon Timestream for LiveAnalytics, visit AWS Backup’s technical documentation. To get started, visit the AWS Backup console.
 

Read more


Announcing Provisioned Timestream Compute Units (TCUs) for Amazon Timestream for LiveAnalytics

Today, Amazon Timestream for LiveAnalytics announces the launch of Provisioned Timestream Compute Units (TCUs), a new feature that allows you to provision dedicated compute resources for your queries, providing predictable and cost-effective query performance.

Amazon Timestream for LiveAnalytics is a serverless time-series database that automatically scales to ingest and analyze gigabytes of time-series data, and Provisioned TCUs provide an additional layer of control and flexibility for your query workloads. With Provisioned TCUs, you can provision dedicated compute resources for your queries, guaranteeing consistent performance and predictable costs. As your workload evolves, you can easily adjust compute resources to maintain optimal performance and cost control, and accurately allocate resources to match your query needs. To get started with Provisioned TCUs, use the Amazon Timestream for LiveAnalytics console, AWS SDK, or CLI to provision the desired number of TCUs for your account. You can provision TCUs in multiples of 4, with a minimum of 4 TCUs and a maximum of 1000 TCUs.

Provisioned Timestream Compute Units are currently supported in Asia Pacific (Mumbai) only. To learn more about pricing, visit the Amazon Timestream for LiveAnalytics pricing page. For more information about Provisioned TCUs, see the Amazon Timestream for LiveAnalytics Developer Guide.

Read more


Amazon Timestream for InfluxDB is now available in China regions

You can now use Amazon Timestream for InfluxDB in the Amazon Web Services China (Beijing) Region, operated by Sinnet, and the Amazon Web Services China (Ningxia) Region, operated by NWCD. Timestream for InfluxDB makes it easy for application developers and DevOps teams to run fully managed InfluxDB databases on Amazon Web Services for real-time time-series applications using open-source APIs.

Timestream for InfluxDB offers the full feature set available in the InfluxDB 2.7 release of the open-source version, and adds deployment options with Multi-AZ high availability and enhanced durability. For high availability, Timestream for InfluxDB allows you to automatically create a primary database instance and synchronously replicate the data to an instance in a different Availability Zone. When it detects a failure, Timestream for InfluxDB automatically fails over to a standby instance without manual intervention.

With the latest release, customers can use Amazon Timestream for InfluxDB in the following regions: US East (Ohio), US East (N. Virginia), US West (Oregon), Canada (Central), Asia Pacific (Mumbai), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Asia Pacific (Jakarta), Europe (Paris), Europe (Frankfurt), Europe (Ireland), Europe (London), Europe (Milan), Europe (Stockholm), Europe (Spain), Middle East (UAE), Amazon Web Services China (Beijing) Region, operated by Sinnet, and Amazon Web Services China (Ningxia) Region, operated by NWCD. To get started with Amazon Timestream, visit our product page.

Read more


amazon-verified-permissions

Amazon Verified Permissions launches new API to get multiple policies

Amazon Verified Permissions has launched a new API called batchGetPolicies. Customers can now make a single API call that returns multiple policies, for example, to populate a list of policies that apply to a specific principal or resource. Amazon Verified Permissions is a permissions management and fine-grained authorization service for the applications that you build. It uses the Cedar policy language to enable developers and admins to define policy-based access controls based on roles and attributes. For example, a patient management application might call Amazon Verified Permissions (AVP) to determine whether Alice is permitted to access Bob’s patient records.

The new API accepts up to 100 policy IDs and returns the corresponding set of policies from across one or more policy stores. This simplifies integration and reduces latency by cutting the number of calls an application needs to make to Verified Permissions. For example, when building a permissions management UX that lists Cedar policies, the application now needs to make only one call to get 50 policies, rather than making 50 calls.
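
A minimal Boto3 sketch of the batch call is below; the policy store and policy IDs are hypothetical, and the response field names should be checked against the API reference.

```python
import boto3

avp = boto3.client("verifiedpermissions")

# Hypothetical policy store and policy IDs.
requests = [
    {"policyStoreId": "PSEXAMPLEabcd1234", "policyId": policy_id}
    for policy_id in ["SPEXAMPLE1111", "SPEXAMPLE2222", "SPEXAMPLE3333"]
]

# One call returns up to 100 policies, instead of one GetPolicy call each.
response = avp.batch_get_policy(requests=requests)
for result in response["results"]:
    print(result["policyId"], result["policyType"])
```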

This feature is available in all regions where Verified Permissions is available. Pricing is based on the number of policies requested. For more information on pricing, visit the Amazon Verified Permissions pricing page. For more information on the service, visit the Amazon Verified Permissions page.
 

Read more


amazon-virtual-private-cloud

Amazon VPC IP Address Manager is now available in Asia Pacific (Malaysia) Region

Amazon Virtual Private Cloud IP Address Manager (Amazon VPC IPAM), which makes it easier for you to plan, track, and monitor IP addresses for your AWS workloads, is now available in the Asia Pacific (Malaysia) Region.

Amazon VPC IPAM allows you to easily organize your IP addresses based on your routing and security needs, and set simple business rules to govern IP address assignments. Using VPC IPAM, you can automate IP address assignment to Amazon VPCs and subnets, eliminating the need to use spreadsheet-based or homegrown IP address planning applications, which can be hard to maintain and time-consuming. VPC IPAM automatically tracks critical IP address information, eliminating the need to manually track or do bookkeeping for IP addresses. VPC IPAM keeps your IP address monitoring data (up to a maximum of three years), which you can use to do retrospective analysis and audits for your network security and routing policies.

With this Region expansion, Amazon VPC IPAM is available in all AWS Regions, including the AWS GovCloud (US) Regions and China Regions.

To learn more about IPAM, view the IPAM documentation. For details on pricing, refer to the IPAM tab on the Amazon VPC Pricing Page.

Read more


AWS announces Block Public Access for Amazon Virtual Private Cloud

Today, AWS announced Virtual Private Cloud (VPC) Block Public Access (BPA), a new centralized declarative control that enables network and security administrators to authoritatively block Internet traffic for their VPCs. VPC BPA supersedes any other setting and ensures your VPC resources are protected from unfettered Internet access in compliance with your organization's security and governance policies.

Amazon VPC allows customers to launch AWS resources in a logically isolated virtual network. Oftentimes, customers have thousands of AWS accounts and VPCs that are owned by multiple business units or application developer teams. Central administrators have the critical responsibility of ensuring that resources in their VPCs are accessible from the public Internet only in a highly controlled fashion. VPC BPA offers a single declarative control that allows admins to easily block Internet access to VPCs via the Internet Gateway or the Egress-only Internet Gateway, ensuring that there is no unintended public exposure of their AWS resources regardless of their routing and security configuration. Admins can apply BPA across all or select VPCs in their account, block bi-directional or ingress-only Internet connectivity, and exclude select subnets for resources that need Internet access. VPC BPA is integrated with AWS Network Access Analyzer and VPC Flow Logs to support impact analysis, provide advanced visibility, and help customers meet audit and compliance requirements.
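
In Boto3, this amounts to one account-level call plus optional per-subnet exclusions. The sketch below is illustrative; the subnet ID is hypothetical, and the mode names should be verified against the EC2 API reference.

```python
import boto3

ec2 = boto3.client("ec2")

# Block both ingress and egress Internet traffic for all VPCs in the account
# and Region (use "block-ingress" for ingress-only blocking).
ec2.modify_vpc_block_public_access_options(
    InternetGatewayBlockMode="block-bidirectional"
)

# Exclude a single subnet that legitimately needs Internet access
# (hypothetical subnet ID).
ec2.create_vpc_block_public_access_exclusion(
    SubnetId="subnet-0abc1234def567890",
    InternetGatewayExclusionMode="allow-bidirectional",
)
```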

VPC BPA is available in all AWS Regions where Amazon VPC is offered. There is no additional charge for using this feature. For additional information, visit the Amazon VPC documentation and blog post.
 

Read more


amazon-vpc

VPC Lattice now includes TCP support with VPC Resources

With the launch of VPC Resources for Amazon VPC Lattice, you can now access all of your application dependencies through a VPC Lattice service network. You're able to connect to your application dependencies hosted in different VPCs, accounts, and on-premises using additional protocols, including TLS, HTTP, HTTPS, and now TCP. This new feature expands upon the existing HTTP-based services support, enabling you to share a wider range of resources across your organization.

With VPC Resource support, you can add your TCP resources, such as Amazon RDS databases, custom DNS, or IP endpoints, to a VPC Lattice service network. Now, you can share and connect to all your application dependencies, such as HTTP APIs and TCP databases, across thousands of VPCs, simplifying network management and providing centralized visibility with built-in access controls.

VPC Resources are generally available with VPC Lattice in Africa (Cape Town), Asia Pacific (Mumbai), Asia Pacific (Singapore), Asia Pacific (Sydney), Europe (Frankfurt), Europe (Ireland), Europe (Paris), Europe (Stockholm), Middle East (Bahrain), South America (Sao Paulo), US East (N. Virginia), US East (Ohio), US West (Oregon).

To get started, read the VPC Resources launch blog, architecture blog, and VPC Lattice User Guide. To learn more about VPC Lattice, visit Amazon VPC Lattice Getting Started.
 

Read more


AWS PrivateLink now supports private access to VPC resources

AWS PrivateLink customers can now use VPC endpoints (powered by AWS PrivateLink) to privately and securely access VPC resources. These resources, such as databases or clusters, can be in your VPC or on-premises network, need not be load-balanced, and can be shared with other teams in your organization or with external independent software vendor (ISV) partners.

AWS PrivateLink is a highly available and scalable technology that enables your VPCs to have private unidirectional connection to VPC endpoint services, including supported AWS services and AWS Marketplace services, and now to VPC resources. Prior to this launch, customers could only access or share services that use Network Load Balancer or Gateway Load Balancer. Now, customers can share any VPC resource using AWS Resource Access Manager (AWS RAM). This resource can be an AWS-native resource such as an RDS database, a domain name, or an IP address in another VPC or on-premises environment. Once shared, the intended users can access these resources privately using VPC endpoints. They can use a resource VPC endpoint to access one resource or pool multiple resources in an Amazon VPC Lattice service network, and access the service network using a service network VPC endpoint. There are standard charges for sharing and accessing VPC resources — please see the pricing pages for AWS PrivateLink and VPC Lattice.

This capability is available in the following AWS Regions: US East (N. Virginia), US East (Ohio), US West (N. California), US West (Oregon), Africa (Cape Town), Asia Pacific (Hong Kong), Asia Pacific (Mumbai), Asia Pacific (Osaka), Asia Pacific (Seoul), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Canada (Central), Europe (Frankfurt), Europe (Ireland), Europe (London), Europe (Milan), Europe (Paris), Europe (Stockholm), Middle East (Bahrain), and South America (Sao Paulo).

To learn more about this capability and get started, please read our launch blog or refer to the AWS PrivateLink documentation.

Read more


Amazon VPC IPAM now supports enabling IPAM for organizational units within AWS Organizations

Today, AWS announced the ability for Amazon VPC IP Address Manager (IPAM) to be enabled and used for specific organizational units (OUs) within AWS Organizations. This allows you to enable IPAM for specific types of workloads, such as production workloads, or for specific business subsidiaries that are grouped as OUs in your organization.

VPC IPAM makes it easier for you to plan, track, and monitor IP addresses for your AWS workloads. Typically, you would enable IPAM for the entire organization, giving you a unified view of all your IP addresses. In some cases, you may want to enable IPAM for only parts of your organization. For example, you may want to enable IPAM for all types of workloads except sandbox, which is isolated from your core network and contains only experimental workloads. Or, you may want to onboard selected business subsidiaries that need IPAM ahead of others in the organization. In such cases, you can use this new feature to enable IPAM for specific parts of your organization that are grouped as OUs.

Amazon VPC IPAM is available in all AWS Regions, including China (Beijing, operated by Sinnet), and China (Ningxia, operated by NWCD), and the AWS GovCloud (US) Regions.

To learn more about this feature, view the service documentation. For details on IPAM pricing, refer to the IPAM tab on the Amazon VPC Pricing page.

Read more


Amazon VPC Lattice now supports Amazon Elastic Container Service (Amazon ECS)

Amazon VPC Lattice now provides native integration with Amazon ECS, Amazon's fully managed container orchestration service that makes it easy for you to deploy, manage, and scale containerized applications. This launch enables VPC Lattice to offer comprehensive support across all major AWS compute services, including Amazon Elastic Compute Cloud (Amazon EC2), Amazon Elastic Kubernetes Service (Amazon EKS), AWS Lambda, Amazon ECS, and AWS Fargate. VPC Lattice is a managed application networking service that simplifies the process of connecting, securing, and monitoring applications across AWS compute services, allowing developers to focus on building applications that matter to their business while reducing time and resources spent on network setup and maintenance.

With native ECS integration, you can now directly associate your ECS services with VPC Lattice target groups, eliminating the need for an intermediate Application Load Balancer (ALB). This streamlined integration reduces cost, operational overhead, and complexity, while enabling you to leverage the complete feature sets of both ECS and VPC Lattice. Organizations with diverse compute infrastructure, such as a mix of Amazon EC2, Amazon EKS, AWS Lambda, and Amazon ECS workloads, can benefit from this launch by unifying service-to-service connectivity, security, and observability across all compute platforms.

This new feature is available in all AWS Regions where Amazon VPC Lattice is available.

To get started, see the following resources:

Read more


amazon-workspaces

Announcing Idle Disconnect Timeout for Amazon WorkSpaces

Amazon WorkSpaces now supports Idle Disconnect Timeout for Windows WorkSpaces Personal with the Amazon DCV protocol. WorkSpaces administrators can now configure how long a user can be inactive while connected to a personal WorkSpace, before they are disconnected. This setting is already available for WorkSpaces Pools, but this launch includes end user notifications for idle users, warning that their session will be disconnected soon, for both Personal and Pools.

Idle Disconnect Timeout helps Amazon WorkSpaces administrators better optimize costs and resources for their fleet. This feature helps ensure that customers who pay for their resources hourly are only paying for the WorkSpaces that are actually in use. The notifications also provide improved overall user experience for both Personal and Pools end users, by warning them about the pending disconnection and giving them a chance to continue or save their work beforehand.

Idle Disconnect Timeout is available at no additional cost for Windows WorkSpaces running DCV, in all the AWS Regions where WorkSpaces is currently available. To get started with Amazon WorkSpaces, see Getting Started with Amazon WorkSpaces.

To enable this feature, you must be using Windows WorkSpaces Personal DCV host agent version 2.1.0.1554 or later. Your users must be on WorkSpaces Windows or macOS client versions 5.24 or later, WorkSpaces Linux client version 2024.7 or later, or on Web Access. Refer to the client version release notes for more details. To learn more, visit Manage your Windows WorkSpaces in the Amazon WorkSpaces Administrator Guide.

Read more


Amazon WorkSpaces Secure Browser now supports inline data redaction

Today, AWS End User Computing Services announced that customers can now redact specified data fields in web content accessed with Amazon WorkSpaces Secure Browser. With inline data redaction, administrators can create policies that help detect and redact certain data (e.g., Social Security numbers, credit card numbers, etc.) before it is displayed on the screen.

Inline data redaction helps customers raise the security bar for accessing certain data by automatically redacting data from strings of text displayed in web pages. Using the AWS Management Console, administrators can create redaction policies by choosing from 30 built-in data types (e.g., Social Security numbers, credit card numbers), or create their own custom data types. Administrators can set policies governing the strictness of enforcement and define the URLs where redaction should be enforced. For example, you can define redaction policies for your support agents to help prevent the visual display of credit card numbers from web-based payment systems. This way, you can help ensure that the credit card number field is redacted without restricting access to other data necessary to provide support.

Inline data redaction is available for your portal at no additional charge, in all the AWS Regions where WorkSpaces Secure Browser is available.

If you are new to WorkSpaces Secure Browser, you can get started by visiting the pricing page and adding the Free Trial offer to your AWS account. Then, go to the Amazon WorkSpaces Secure Browser management console and create a portal today.

Read more


Amazon WorkSpaces introduces support for Rocky Linux

Amazon Web Services today announced support for Rocky Linux from CIQ on Amazon WorkSpaces Personal, a fully managed virtual desktop offering. With this launch, organizations can provide their end users with an RPM Package Manager compatible environment, optimized for running compute-intensive applications, while helping to improve IT agility and reduce costs. Now WorkSpaces Personal customers have the flexibility to choose from a wider range of Linux distributions including Rocky Linux, Red Hat Enterprise Linux, and Ubuntu Desktop.

With Rocky Linux on WorkSpaces Personal, IT organizations can enable developers to work in an environment that is consistent with their production environment, and provide power users like engineers and data scientists with on-demand access to Rocky Linux environments as needed, quickly spinning up and tearing down instances and managing the entire fleet through the AWS Console, without the burden of capacity planning or license management. WorkSpaces Personal offers a wide range of high-performance, license-included, fully managed virtual desktop bundles—enabling organizations to only pay for the resources they use.

Rocky Linux on WorkSpaces Personal is available in all AWS Regions where WorkSpaces Personal is available, except for AWS China Regions. Depending on the WorkSpaces Personal running mode, you will be charged hourly or monthly for your virtual desktops. For more details on pricing, refer to Amazon WorkSpaces Pricing.

To get started with Rocky Linux on WorkSpaces Personal, sign in to the AWS Management Console and open the Amazon WorkSpaces console.  For more information, see the Amazon WorkSpaces Administration Guide.
 

Read more


Amazon WorkSpaces WSP enables desktop traffic over TCP/UDP port 443

Amazon WorkSpaces now supports Amazon DCV-enabled desktop traffic over both TCP and UDP on port 443. This feature is used automatically and requires no configuration changes. Customers using port 4195 can continue to do so. The WorkSpaces client application prioritizes UDP (QUIC) for optimal performance, but will fall back to TCP if UDP is blocked. The WorkSpaces web client will connect over either TCP port 4195 or 443. If port 4195 is blocked, the client will exclusively use port 443.

The organization managing WorkSpaces may not be the same as the organization managing the client networks from which users connect to WorkSpaces. Because each network is managed independently, changing outbound access rules can involve administrative challenges, delays, or roadblocks. By carrying WorkSpaces DCV desktop traffic over TCP/UDP port 443, with fallback to TCP if UDP is not available, customers no longer need to open the unique TCP/UDP port 4195.

WorkSpaces DCV-enabled desktop traffic over TCP/UDP port 443 is supported in all AWS Regions where Amazon WorkSpaces is available. There is no additional charge for this feature. Please see the Amazon WorkSpaces Administration Guide for more information.

Read more


analytics

SageMaker SDK enhances training and inference workflows

Today, we are introducing the new ModelTrainer class and enhancing the ModelBuilder class in the SageMaker Python SDK. These updates streamline training workflows and simplify inference deployments.

The ModelTrainer class enables customers to easily set up and customize distributed training strategies on Amazon SageMaker. This new feature accelerates model training times, optimizes resource utilization, and reduces costs through efficient parallel processing. Customers can smoothly transition their custom entry points and containers from a local environment to SageMaker, eliminating the need to manage infrastructure. ModelTrainer simplifies configuration by reducing parameters to just a few core variables and providing user-friendly classes for intuitive SageMaker service interactions. Additionally, with the enhanced ModelBuilder class, customers can now easily deploy HuggingFace models, switch between developing in a local environment and on SageMaker, and customize their inference using their own pre- and post-processing scripts. Importantly, customers can now pass trained model artifacts from the ModelTrainer class directly to the ModelBuilder class, enabling a seamless transition from training to inference on SageMaker.
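
A minimal sketch of the new interface is below, assuming the module paths and configuration classes described in the SageMaker Python SDK documentation; the image URI, script paths, and instance settings are illustrative placeholders.

```python
from sagemaker.modules.configs import Compute, SourceCode
from sagemaker.modules.train import ModelTrainer

# A few core variables replace the long estimator signature; all values
# below are hypothetical.
trainer = ModelTrainer(
    training_image="763104351884.dkr.ecr.us-east-1.amazonaws.com/pytorch-training:2.4-gpu-py311",
    source_code=SourceCode(source_dir="./src", entry_script="train.py"),
    compute=Compute(instance_type="ml.g5.2xlarge", instance_count=1),
)

# Launches the training job on SageMaker with the configuration above.
trainer.train()
```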

You can learn more about the ModelTrainer class and the ModelBuilder enhancements in the documentation, and get started using the ModelTrainer and ModelBuilder sample notebooks.

Read more


Announcing scenarios analysis capability of Amazon Q in QuickSight (preview)

A new scenario analysis capability of Amazon Q in QuickSight is now available in preview. This new capability provides an AI-assisted data analysis experience that helps you make better decisions, faster. Amazon Q in QuickSight simplifies in-depth analysis with step-by-step guidance, saving hours of manual data manipulation and unlocking data-driven decision-making across your organization.

Amazon Q in QuickSight helps business users perform complex scenario analysis up to 10x faster than spreadsheets. You can ask a question or state your goal in natural language and Amazon Q in QuickSight guides you through every step of advanced data analysis—suggesting analytical approaches, automatically analyzing data, surfacing relevant insights, and summarizing findings with suggested actions. This agentic approach breaks down data analysis into a series of easy-to-understand, executable steps, helping you find solutions to complex problems without specialized skills or tedious, error-prone data manipulation in spreadsheets. Working on an expansive analysis canvas, you can intuitively iterate your way to solutions by directly interacting with data, refining analysis steps, or exploring multiple analysis paths side-by-side. This scenario analysis capability is accessible from any Amazon QuickSight dashboard, so you can move seamlessly from visualizing data to modeling solutions. With Amazon Q in QuickSight, you can easily modify, extend, and reuse previous analyses, helping you quickly adapt to changing business needs.

Amazon Q in QuickSight Pro users can use this new capability in preview in the following AWS regions: US East (N. Virginia) and US West (Oregon). To learn more, visit the Amazon Q in QuickSight documentation and read the AWS News Blog.

Read more


Amazon DynamoDB zero-ETL integration with Amazon SageMaker Lakehouse

Amazon DynamoDB zero-ETL integration with Amazon SageMaker Lakehouse automates the extracting and loading of data from a DynamoDB table into SageMaker Lakehouse, an open and secure lakehouse. You can run analytics and machine learning workloads on your DynamoDB data using SageMaker Lakehouse, without impacting production workloads running on DynamoDB. With this launch, you now have the option to enable analytics workloads using SageMaker Lakehouse, in addition to the previously available Amazon OpenSearch Service and Amazon Redshift zero-ETL integrations.

Using the no-code interface, you can maintain an up-to-date replica of your DynamoDB data in the data lake by quickly setting up your integration to handle the complete process of replicating data and updating records. This zero-ETL integration reduces the complexity and operational burden of data replication to let you focus on deriving insights from your data. You can create and manage integrations using the AWS Management Console, the AWS Command Line Interface (AWS CLI), or the SageMaker Lakehouse APIs.
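
For a programmatic setup, the zero-ETL integration APIs under AWS Glue can create the integration in a single call. The sketch below is a rough illustration with hypothetical ARNs; consult the documentation for the exact source and target configuration expected by the API.

```python
import boto3

glue = boto3.client("glue")

# Hypothetical ARNs: a DynamoDB table as the source and a SageMaker
# Lakehouse (Glue catalog database) as the target.
glue.create_integration(
    IntegrationName="orders-to-lakehouse",
    SourceArn="arn:aws:dynamodb:us-east-1:123456789012:table/Orders",
    TargetArn="arn:aws:glue:us-east-1:123456789012:database/lakehouse_db",
)
```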

DynamoDB zero-ETL integration with SageMaker Lakehouse is now available in the US East (N. Virginia), US East (Ohio), US West (Oregon), Asia Pacific (Tokyo), Asia Pacific (Hong Kong), Asia Pacific (Singapore), Asia Pacific (Sydney), Europe (Stockholm), Europe (Frankfurt), and Europe (Ireland) AWS Regions. 

To learn more, visit DynamoDB integrations and read the documentation.

Read more


Amazon S3 Access Grants now integrate with AWS Glue

Amazon S3 Access Grants now integrate with AWS Glue for analytics, machine learning (ML), and application development workloads in AWS. S3 Access Grants map identities from your Identity Provider (IdP), such as Entra ID or Okta, or AWS Identity and Access Management (IAM) principals, to datasets stored in Amazon S3. This integration gives you the ability to manage S3 permissions for end users running jobs with Glue 5.0 or later, without the need to write and maintain bucket policies or individual IAM roles.

AWS Glue provides a data integration service that simplifies data exploration, preparation, and integration from multiple sources, including S3. Using S3 Access Grants, you can grant permissions to buckets or prefixes in S3 to users and groups in an existing corporate directory, or to IAM users and roles. When end users in the appropriate user groups access S3 using Glue ETL for Apache Spark, they automatically have the necessary permissions to read and write data. S3 Access Grants also automatically update S3 permissions as users are added to or removed from user groups in the IdP.
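
For instance, a grant mapping an IAM Identity Center directory group to an S3 prefix might be created as follows; the account ID, location ID, and group identifier below are placeholders for illustration.

```python
# Sketch: granting a directory group read/write access to an S3 prefix with
# S3 Access Grants. IDs and the prefix are placeholders.
import boto3

s3control = boto3.client("s3control")

s3control.create_access_grant(
    AccountId="111122223333",
    AccessGrantsLocationId="default",  # registered location covering the bucket
    AccessGrantsLocationConfiguration={"S3SubPrefix": "analytics-data/*"},
    Grantee={
        "GranteeType": "DIRECTORY_GROUP",        # IdP group via IAM Identity Center
        "GranteeIdentifier": "<identity-center-group-id>",
    },
    Permission="READWRITE",
)
```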

Amazon S3 Access Grants support is available when using AWS Glue 5.0 and later, and is available in all commercial AWS Regions where AWS Glue 5.0 and AWS IAM Identity Center are available. For pricing details, visit Amazon S3 pricing and AWS Glue pricing. To learn more about S3 Access Grants, refer to the S3 User Guide.
 

Read more


AWS expands data connectivity for Amazon SageMaker Lakehouse and AWS Glue

Amazon SageMaker Lakehouse announces unified data connectivity capabilities to streamline the creation, management, and usage of connections to data sources across databases, data lakes, and enterprise applications. SageMaker Lakehouse unified data connectivity provides a connection configuration template, support for standard authentication methods like basic authentication and OAuth 2.0, connection testing, metadata retrieval, and data preview. Customers can create SageMaker Lakehouse connections through SageMaker Unified Studio (preview), the AWS Glue console, or custom-built applications using the AWS Glue APIs.

With SageMaker Lakehouse unified data connectivity, a data connection is configured once and can be reused by SageMaker Unified Studio, AWS Glue and Amazon Athena for use cases in data integration, data analytics and data science. You will gain confidence in the established connection by validating credentials with connection testing. With the ability to browse metadata, you can understand the structure and schema of the data source and identify relevant tables and fields. Lastly, the data preview capability supports mapping source fields to target schemas, identifying needed data transformation, and receiving immediate feedback on the source data queries.

SageMaker Lakehouse unified connectivity is available where Amazon SageMaker Lakehouse or AWS Glue is available. To get started, visit AWS Glue connection documentation or the Amazon SageMaker Lakehouse data connection documentation.

Read more


Introducing AWS Glue 5.0

Today, we are excited to announce the general availability of AWS Glue 5.0. With AWS Glue 5.0, you get improved performance, enhanced security, support for Amazon SageMaker Unified Studio and SageMaker Lakehouse, and more. AWS Glue 5.0 enables you to develop, run, and scale your data integration workloads and get insights faster.

AWS Glue is a serverless, scalable data integration service that makes it simple to discover, prepare, move, and integrate data from multiple sources. AWS Glue 5.0 upgrades the engines to Apache Spark 3.5.2, Python 3.11, and Java 17, with new performance and security improvements. Glue 5.0 updates open table format support to Apache Hudi 0.15.0, Apache Iceberg 1.6.1, and Delta Lake 3.2.0 so you can solve advanced use cases around performance, cost, governance, and privacy in your data lakes. AWS Glue 5.0 adds Spark-native fine-grained access control with AWS Lake Formation so you can apply table-, column-, row-, and cell-level permissions on Amazon S3 data lakes. Finally, Glue 5.0 adds support for SageMaker Lakehouse to unify all your data across Amazon S3 data lakes and Amazon Redshift data warehouses.
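
As a minimal sketch, moving a job onto the new runtime is a matter of setting the Glue version when creating it; the role, script location, and names below are placeholders.

```python
# Sketch: creating a Spark job pinned to Glue 5.0 (Spark 3.5.2 / Python 3.11).
# The role ARN, script path, and job name are placeholders.
import boto3

glue = boto3.client("glue")

glue.create_job(
    Name="sales-etl-glue5",
    Role="arn:aws:iam::111122223333:role/GlueJobRole",
    GlueVersion="5.0",  # selects the upgraded engine versions
    Command={
        "Name": "glueetl",
        "ScriptLocation": "s3://my-bucket/scripts/sales_etl.py",
        "PythonVersion": "3",
    },
    WorkerType="G.1X",
    NumberOfWorkers=10,
)
```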

AWS Glue 5.0 is generally available today in US East (N. Virginia), US East (Ohio), US West (Oregon), Europe (Ireland), Europe (London), Europe (Stockholm), Europe (Frankfurt), Asia Pacific (Hong Kong), Asia Pacific (Seoul), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Canada (Central), and South America (São Paulo) regions.

To learn more, visit the AWS Glue product page and documentation.

Read more


Amazon SageMaker Lakehouse and Amazon Redshift support for zero-ETL integrations from eight applications

Amazon SageMaker Lakehouse and Amazon Redshift now support zero-ETL integrations from applications, automating the extraction and loading of data from eight applications, including Salesforce, SAP, ServiceNow, and Zendesk. As an open, unified, and secure lakehouse for your analytics and AI initiatives, Amazon SageMaker Lakehouse enhances these integrations to streamline your data management processes.

These zero-ETL integrations are fully managed by AWS and minimize the need to build ETL data pipelines. With this new zero-ETL integration, you can efficiently extract and load valuable data from your customer support, relationship management, and ERP applications into your data lake and data warehouse for analysis. Zero-ETL integration reduces users' operational burden and saves the weeks of engineering effort needed to design, build, and test data pipelines. By selecting a few settings in the no-code interface, you can quickly set up your zero-ETL integration to automatically ingest and continually maintain an up-to-date replica of your data in the data lake and data warehouse. Zero-ETL integrations help you focus on deriving insights from your application data, breaking down data silos in your organization and improving operational efficiency. Now run enhanced analysis on your application data using Apache Spark and Amazon Redshift for analytics or machine learning. Optimize your data ingestion processes and focus instead on analysis and gaining insights. 

Amazon SageMaker Lakehouse and Amazon Redshift support for zero-ETL integrations from eight applications is now generally available in the US East (N. Virginia), US East (Ohio), US West (Oregon), Asia Pacific (Hong Kong), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Europe (Frankfurt), Europe (Ireland), and Europe (Stockholm) AWS Regions.

You can create and manage integrations using either the AWS Glue console, the AWS Command Line Interface (AWS CLI), or the AWS Glue APIs. To learn more, visit What is zero-ETL and What is AWS Glue.

Read more


AWS announces Amazon SageMaker Lakehouse

AWS announces Amazon SageMaker Lakehouse, a unified, open, and secure data lakehouse that simplifies your analytics and artificial intelligence (AI). Amazon SageMaker Lakehouse unifies all your data across Amazon S3 data lakes and Amazon Redshift data warehouses, helping you build powerful analytics and AI/ML applications on a single copy of data.

SageMaker Lakehouse gives you the flexibility to access and query your data in place with the Apache Iceberg open standard. All data in SageMaker Lakehouse can be queried from SageMaker Unified Studio (preview) and engines such as Amazon EMR, AWS Glue, Amazon Redshift, or Apache Spark. You can secure your data in the lakehouse by defining fine-grained permissions, which are consistently applied across all analytics and ML tools and engines. With SageMaker Lakehouse, you can use your existing investments. You can seamlessly make data from your Redshift data warehouses available for analytics and AI/ML. In addition, you can now create data lakes by leveraging the analytics-optimized Redshift Managed Storage (RMS). Bringing data into the lakehouse is easy: you can use zero-ETL to bring data from operational databases, streaming services, and applications, or query in-place data via federated query.

SageMaker Lakehouse is available in the US East (N. Virginia), US East (Ohio), Europe (Ireland), US West (Oregon), Canada (Central), Europe (Frankfurt), Europe (Stockholm), Europe (London), Asia Pacific (Sydney), Asia Pacific (Hong Kong), Asia Pacific (Tokyo), Asia Pacific (Singapore), Asia Pacific (Seoul), and South America (São Paulo) AWS Regions.

SageMaker Lakehouse is accessible directly from SageMaker Unified Studio. In addition, you can access SageMaker Lakehouse from the AWS Management Console, AWS Glue APIs, and CLI. To learn more, visit SageMaker Lakehouse and read the launch blog. For pricing information, please visit here.

Read more


Announcing Amazon S3 Metadata (Preview) – Easiest and fastest way to manage your metadata

Amazon S3 Metadata is the easiest and fastest way to help you instantly discover and understand your S3 data with automated, easily-queried metadata that updates in near real-time. This helps you to curate, identify, and use your S3 data for business analytics, real-time inference applications, and more. S3 Metadata supports object metadata, which includes system-defined details like size and the source of the object, and custom metadata, which allows you to use tags to annotate your objects with information like product SKU, transaction ID, or content rating, for example.

S3 Metadata is designed to automatically capture metadata from objects as they are uploaded into a bucket, and to make that metadata queryable in a read-only table. As data in your bucket changes, S3 Metadata updates the table within minutes to reflect the latest changes. These metadata tables are stored in S3 Tables, the new S3 storage offering optimized for tabular data. S3 Tables integration with AWS Glue Data Catalog is in preview, allowing you to stream, query, and visualize data—including S3 Metadata tables—using AWS analytics services such as Amazon Data Firehose, Athena, Redshift, EMR, and QuickSight. Additionally, S3 Metadata integrates with Amazon Bedrock, allowing for the annotation of AI-generated videos with metadata that specifies their AI origin, creation timestamp, and the specific model used to generate them.
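
For example, once a metadata table is surfaced through the Glue Data Catalog integration, a query along these lines could list recently modified large objects; the catalog, namespace, table, and column names here are illustrative, not a confirmed schema.

```python
# Illustrative sketch: querying an S3 Metadata table through Athena. Catalog,
# namespace, table, and column names are assumptions for illustration.
import boto3

athena = boto3.client("athena")

athena.start_query_execution(
    QueryString="""
        SELECT key, size, last_modified_date
        FROM "s3tablescatalog"."aws_s3_metadata"."my_bucket_metadata"
        WHERE size > 1048576                 -- objects larger than 1 MiB
        ORDER BY last_modified_date DESC
        LIMIT 100
    """,
    ResultConfiguration={"OutputLocation": "s3://my-query-results/"},
)
```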

Amazon S3 Metadata is currently available in preview in the US East (N. Virginia), US East (Ohio), and US West (Oregon) Regions, and coming soon to additional Regions. For pricing details, visit the S3 pricing page. To learn more, visit the product page, documentation, and AWS News Blog.

Read more


Introducing the next generation of Amazon SageMaker

Today, AWS announces the next generation of Amazon SageMaker, a unified platform for data, analytics, and AI. This launch brings together widely adopted AWS machine learning and analytics capabilities and provides an integrated experience for analytics and AI with unified access to data and built-in governance. Teams can collaborate and build faster from a single development environment using familiar AWS tools for model development, generative AI application development, data processing, and SQL analytics, accelerated by Amazon Q Developer, the most capable generative AI assistant for software development.

The next generation of SageMaker also introduces new capabilities, including Amazon SageMaker Unified Studio (preview), Amazon SageMaker Lakehouse, and Amazon SageMaker Data and AI Governance. Within the new SageMaker Unified Studio, users can discover their data and put it to work using the best tool for the job across data and AI use cases. SageMaker Unified Studio brings together functionality and tools from the range of standalone studios, query editors, and visual tools available today in Amazon EMR, AWS Glue, Amazon Redshift, Amazon Bedrock, and the existing Amazon SageMaker Studio. SageMaker Lakehouse provides an open data architecture that reduces data silos and unifies data across Amazon Simple Storage Service (Amazon S3) data lakes, Amazon Redshift data warehouses, and third party and federated data sources. SageMaker Lakehouse offers the flexibility to access and query data with Apache Iceberg–compatible tools and engines. SageMaker Data and AI Governance, including Amazon SageMaker Catalog built on Amazon DataZone, empowers users to securely discover, govern, and collaborate on data and AI workflows.  

For more information on AWS Regions where the next generation of Amazon SageMaker is available, see Supported Regions.

To learn more and get started, visit the following resources:

Read more


AWS Glue Data Catalog now automates generating statistics for new tables

AWS Glue Data Catalog now automates generating statistics for new tables. These statistics are integrated with the cost-based optimizer (CBO) in Amazon Redshift and Amazon Athena, resulting in improved query performance and potential cost savings.

Table statistics are used by query engines, such as Amazon Redshift and Amazon Athena, to determine the most efficient way to execute a query. Previously, creating statistics for Apache Iceberg tables in the AWS Glue Data Catalog required you to continuously monitor and update configurations for your tables. Now, the AWS Glue Data Catalog lets you generate statistics automatically for new tables with a one-time catalog configuration. You can get started by selecting the default catalog in the Lake Formation console and enabling table statistics in the table optimization configuration tab. As new tables are created or existing tables are updated, statistics are generated using a sample of rows for all columns and are refreshed periodically. For Apache Iceberg tables, these statistics include the number of distinct values (NDVs). For other file formats like Parquet, additional statistics are collected, such as the number of nulls, maximum and minimum values, and average length. Amazon Redshift and Amazon Athena use the updated statistics to optimize queries, applying optimizations such as optimal join order or cost-based aggregation pushdown. The Glue Catalog console provides visibility into the updated statistics and statistics generation runs.

Support for automated AWS Glue Data Catalog statistics generation is generally available in the following AWS Regions: US East (N. Virginia, Ohio), US West (N. California, Oregon), Europe (Ireland), and Asia Pacific (Tokyo). Read the blog post and visit the AWS Glue Catalog documentation to learn more.
 

Read more


Amazon SageMaker Lakehouse integrated access controls now available in Amazon Athena federated queries

Amazon SageMaker now supports connectivity, discovery, querying, and enforcing fine-grained data access controls on federated sources when querying data with Amazon Athena. Athena is a query service that makes it simple to analyze your data lake and federated data sources such as Amazon Redshift, Amazon DynamoDB, or Snowflake using SQL without extract, transform, and load (ETL) scripts. Now, data workers can connect to and unify these data sources within SageMaker Lakehouse. Federated source metadata is unified in SageMaker Lakehouse, where you apply fine-grained policies in one place, helping to streamline analytics workflows and secure your data.

Log in to Amazon SageMaker Unified Studio, connect to a federated data source in SageMaker Lakehouse, and govern data with column- and tag-based permissions that are enforced when querying federated data sources with Athena. In addition to SageMaker Unified Studio, you can connect to these data sources through the Athena console and API. To help you automate and streamline connector setup, the new user experiences allow you to create and manage connections to data sources with ease.
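
As a sketch, once a connection is in place, a federated query can be issued like any other Athena query; the catalog and table names below are placeholders for whatever the connection defines, with the fine-grained permissions enforced transparently.

```python
# Sketch: issuing a federated query from Athena against a connected source.
# "dynamo_catalog" is a placeholder for the connection's catalog name.
import boto3

athena = boto3.client("athena")

athena.start_query_execution(
    QueryString="""
        SELECT customer_id, region, lifetime_value
        FROM "dynamo_catalog"."default"."customers"
        WHERE region = 'EMEA'
        LIMIT 50
    """,
    ResultConfiguration={"OutputLocation": "s3://my-athena-results/"},
)
```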

Now, organizations can extract insights from a unified set of data sources while strengthening security posture, wherever your data is stored. The unification and fine-grained access controls on federated sources are available in all AWS Regions where SageMaker Lakehouse is available. To learn more, visit SageMaker Lakehouse documentation.

Read more


Introducing Amazon SageMaker Data and AI Governance

Today, AWS announces Amazon SageMaker Data and AI Governance, a new capability that simplifies discovery, governance, and collaboration for data and AI across your lakehouse, AI models, and applications. Built on Amazon DataZone, SageMaker Data and AI Governance allows engineers, data scientists, and analysts to securely discover and access approved data and models using semantic search with generative AI–created metadata. This new offering helps organizations consistently define and enforce access policies using a single permission model with fine-grained access controls.

With SageMaker Data and AI Governance, you can accelerate data and AI discovery and collaboration at scale. You can enhance data discovery by automatically enriching your data and metadata with business context using generative AI, making it easier for all users to find, understand, and use data. You can share data, AI models, prompts, and other generative AI assets with filtering by table and column names or business glossary terms. SageMaker Data and AI Governance helps establish trust and drives transparency in your data pipelines and AI projects with built-in model monitoring to detect bias and report on how features contribute to your model predictions.

To learn more about how to govern your data and AI assets, visit SageMaker Data and AI Governance.

Read more


Announcing the preview of Amazon SageMaker Unified Studio

Today, AWS announces the next generation of Amazon SageMaker, including the preview launch of Amazon SageMaker Unified Studio, an integrated data and AI development environment that enables collaboration and helps teams build data products faster. SageMaker Unified Studio brings together familiar tools from AWS analytics and AI/ML services for data processing, SQL analytics, machine learning model development, and generative AI application development. Amazon SageMaker Lakehouse, which is accessible through SageMaker Unified Studio, provides open source compatibility and access to data stored across Amazon Simple Storage Service (Amazon S3) data lakes, Amazon Redshift data warehouses, and third-party and federated data sources. Enhanced governance features are built in to help you meet enterprise security requirements.

SageMaker Unified Studio allows you to find, access, and query data and AI assets across your organization, then work together in projects to securely build and share analytics and AI artifacts, including data, models, and generative AI applications. SageMaker Unified Studio offers the capabilities to build integrated data pipelines with visual extract, transform, and load (ETL), develop ML models, and create custom generative AI applications. New unified Jupyter Notebooks enable seamless work across different compute resources and clusters, while an integrated SQL editor lets you query your data stored in various sources—all within a single, collaborative environment. Amazon Bedrock IDE, formerly Amazon Bedrock Studio, is now part of the SageMaker Unified Studio in public preview, offering the capabilities to rapidly build and customize generative AI applications. Amazon Q Developer, the most capable generative AI assistant for software development, is integrated into SageMaker Unified Studio to accelerate and streamline tasks across the development lifecycle.

For more information on AWS Regions where SageMaker Unified Studio is available in preview, see Supported Regions.

To get started, see the following resources:

Read more


Data Lineage is now generally available in Amazon DataZone and the next generation of Amazon SageMaker

AWS announces the general availability of Data Lineage in Amazon DataZone and the next generation of Amazon SageMaker, a capability that automatically captures lineage from AWS Glue and Amazon Redshift to visualize lineage events from source to consumption. Being OpenLineage compatible, this feature allows data producers to augment the automated lineage with lineage events captured from OpenLineage-enabled systems or through the API, to provide a comprehensive data movement view to data consumers.

This feature automates lineage capture of schema and transformations of data assets and columns from AWS Glue, Amazon Redshift, and Apache Spark executions, helping maintain consistency and reduce errors. With built-in automation, domain administrators and data producers can automate the capture and storage of lineage events when data is configured for data sharing in the business data catalog. Data consumers can gain confidence in an asset's origin from the comprehensive view of its lineage, while data producers can assess the impact of changes to an asset by understanding its consumption. Additionally, the data lineage feature versions lineage with each event, enabling users to visualize lineage at any point in time or compare transformations across an asset's or job's history. This historical lineage provides a deeper understanding of how data has evolved, essential for troubleshooting, auditing, and validating the integrity of data assets.

The data lineage feature is generally available in all AWS Regions where Amazon DataZone and the next generation of Amazon SageMaker are available.

To learn more, visit Amazon DataZone and the next generation of Amazon SageMaker.
 

Read more


Amazon Q in QuickSight unifies insights from structured and unstructured data

Now generally available, Amazon Q in QuickSight provides users with unified insights from structured and unstructured data sources through integration with Amazon Q Business. While structured data is managed in conventional systems, unstructured data such as document libraries, webpages, images and more has remained largely untapped due to its diverse and distributed nature.

With Amazon Q in QuickSight, business users can now augment insights from traditional BI data sources, such as databases, data lakes, and data warehouses, with contextual information from unstructured sources. Users can get augmented insights within QuickSight's BI interface across multi-visual Q&A and data stories. Users can use multi-visual Q&A to ask questions in natural language and get visualizations and data summaries augmented with contextual insights from Amazon Q Business. With data stories in Amazon Q in QuickSight, users can upload documents or connect to unstructured data sources from Amazon Q Business to create richer narratives or presentations explaining their data with additional context. This integration enables organizations to harness insights from all their data without the need for manual collation, leading to more informed decision-making, time savings, and a significant competitive edge in the data-driven business landscape.

This new capability is generally available to all Amazon QuickSight Pro Users in US East (N. Virginia), and US West (Oregon) AWS Regions.

To learn more, visit the AWS Business Intelligence Blog, the Amazon Q Business What’s New post, and try QuickSight free for 30 days.
 

Read more


Amazon Q Business now provides insights from your databases and data warehouses (preview)

Today, AWS announces the public preview of the integration between Amazon Q Business and Amazon QuickSight, delivering a transformative capability that unifies answers from structured data sources (databases, warehouses) and unstructured data (documents, wikis, emails) in a single application.

Amazon Q Business is a generative AI–powered assistant that can answer questions, provide summaries, generate content, and securely complete tasks based on data and information in your enterprise systems. Amazon QuickSight is a business intelligence (BI) tool that helps you visualize and understand your structured data through interactive dashboards, reports, and analytics. While organizations want to leverage generative AI for business insights, they experience fragmented access to unstructured and structured data.

With the QuickSight integration, customers can now link their structured sources to Amazon Q Business through QuickSight’s extensive set of data source connectors. Amazon Q Business responds in real time, combining the QuickSight answer from your structured sources with any other relevant information found in documents. For example, users could ask about revenue comparisons, and Amazon Q Business will return an answer from PDF financial reports along with real-time charts and metrics from QuickSight. This integration unifies insights across knowledge sources, helping organizations make more informed decisions while reducing the time and complexity traditionally required to gather insights.

This integration is available to all Amazon Q Business Pro, Amazon QuickSight Reader Pro, and Author Pro users in the US East (N. Virginia) and US West (Oregon) AWS Regions. To learn more, visit the Amazon Q Business documentation site.

Read more


Amazon OpenSearch Service zero-ETL integration with Amazon Security Lake

Amazon OpenSearch Service now offers a zero-ETL integration with Amazon Security Lake, enabling you to query and analyze security data in-place directly through OpenSearch. This integration allows you to efficiently explore voluminous data sources that were previously cost-prohibitive to analyze, helping you streamline security investigations and obtain comprehensive visibility of your security landscape. By offering the flexibility to selectively ingest data and eliminating the need to manage complex data pipelines, you can now focus on effective security operations while potentially lowering your analytics costs.

Using the powerful analytics and visualization capabilities in OpenSearch Service, you can perform deeper investigations, enhance threat hunting, and proactively monitor your security posture. Pre-built queries and dashboards using the Open Cybersecurity Schema Framework (OCSF) can further accelerate your analysis. The built-in query accelerator boosts performance and enables fast-loading dashboards, enhancing your overall experience. This integration empowers you to accelerate investigations, uncover insights from previously inaccessible data sources, and optimize analytics efficiency and costs, all with minimal data migration.

OpenSearch Service zero-ETL integration with Security Lake is now generally available in 13 regions globally: Asia Pacific (Mumbai), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Europe (Frankfurt), Europe (Ireland), Europe (London), US East (Ohio), US East (N. Virginia), US West (Oregon), South America (São Paulo), Europe (Paris), and Canada (Central).

To learn more on using this capability, see the OpenSearch Service Integrations page and the OpenSearch Service Developer Guide. To learn more about how to configure and share Security Lake, see the Get Started Guide.
 

Read more


AWS Clean Rooms now supports multiple clouds and data sources

Today, AWS Clean Rooms announces support for collaboration with datasets from multiple clouds and data sources. This launch allows companies and their partners to easily collaborate with data stored in Snowflake and Amazon Athena, without having to move or share their underlying data among collaborators.

With AWS Clean Rooms expanded data sources and clouds support, organizations can seamlessly collaborate with any company leveraging datasets across AWS and Snowflake, without any party having to move, reveal, or copy their underlying datasets. This launch enables companies to collaborate on the most up-to-date data with zero extract, transform, and load (zero-ETL), eliminating the cost and complexity associated with migrating datasets out of existing environments. For example, a media publisher with data stored in Amazon S3 and an advertiser with data stored in Snowflake can analyze their collective datasets to evaluate the advertiser's spend without having to build ETL data pipelines, or share underlying data with one another. We are just getting started, and will continue to expand the ways in which customers can securely collaborate in AWS Clean Rooms while maintaining control of their records and information.

With AWS Clean Rooms, customers can create a secure data clean room in minutes and collaborate with any company on AWS or Snowflake, to generate unique insights about advertising campaigns, investment decisions, and research and development. For more information about the AWS Regions where AWS Clean Rooms is available, see the AWS Regions table. To learn more about collaborating with AWS Clean Rooms, visit AWS Clean Rooms.
 

Read more


Introducing Advanced Scaling in Amazon EMR Managed Scaling

We are excited to announce Advanced Scaling, a new capability in Amazon EMR Managed Scaling that provides customers increased flexibility to control the performance and resource utilization of their Amazon EMR on EC2 clusters. With Advanced Scaling, customers can configure the desired resource utilization or performance level for their cluster, and Amazon EMR Managed Scaling will use that intent to intelligently scale the cluster and optimize cluster compute resources.

Customers appreciate the simplicity of Amazon EMR Managed Scaling. However, there are instances where the default Amazon EMR Managed Scaling algorithm might lead to cluster under-utilization for specific workloads. For instance, for clusters running multiple tasks of relatively short duration (task runtime of 10 seconds or less), Amazon EMR Managed Scaling by default scales up the cluster aggressively and scales it down conservatively to avoid negative impact on job run times. While this is the right approach for SLA-sensitive workloads, it might not be optimal for cost-sensitive workloads. With Advanced Scaling, customers can now configure Amazon EMR Managed Scaling behavior suitable for their workload type, and tailored optimizations will be applied to intelligently add or remove nodes from the cluster.

To get started with Advanced Scaling, you can set the ScalingStrategy and UtilizationPerformanceIndex parameters either when creating a new Managed Scaling policy, or updating an existing Managed Scaling policy. Advanced Scaling is available with Amazon EMR release 7.0 and later and is available in all regions where Amazon EMR Managed Scaling is available. For more details, please refer to our Advanced Scaling documentation.
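
As a rough sketch of what that configuration might look like with boto3, the snippet below attaches a Managed Scaling policy carrying the new fields; the exact field names, allowed strategy values, and the index scale are assumptions to confirm against the EMR API reference.

```python
# Hedged sketch: attaching an Advanced Scaling policy to an EMR on EC2
# cluster. The new field names and values are assumptions, not confirmed API.
import boto3

emr = boto3.client("emr")

emr.put_managed_scaling_policy(
    ClusterId="j-1ABCDEFGHIJKL",  # placeholder cluster ID
    ManagedScalingPolicy={
        "ComputeLimits": {
            "UnitType": "Instances",
            "MinimumCapacityUnits": 2,
            "MaximumCapacityUnits": 20,
        },
        "ScalingStrategy": "ADVANCED",      # opt in to Advanced Scaling
        "UtilizationPerformanceIndex": 50,  # utilization/performance trade-off
    },
)
```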

Read more


Amazon Redshift multi-data warehouse writes through data sharing is now generally available

AWS announces the general availability of Amazon Redshift multi-data warehouse writes through data sharing. You can now start writing to Amazon Redshift databases from multiple Amazon Redshift data warehouses in just a few clicks. The written data is available to all Amazon Redshift warehouses as soon as it is committed. This allows your teams to flexibly scale compute by adding warehouses of different types and sizes based on their write workloads’ price-performance needs, isolate compute to more easily meet your workload performance requirements, and easily and securely collaborate with other teams.

With Amazon Redshift multi-data warehouse writes through data sharing, you can easily keep extract, transform, and load (ETL) jobs more predictable by splitting workloads between multiple warehouses, helping you meet your workload performance requirements with less time and effort. You can track usage and control costs as each team or application can write using its own warehouse, regardless of where the data is stored. You can use different types of RA3 and Serverless warehouses across different sizes to meet each individual workload's price-performance needs. Your data is immediately available across AWS accounts and regions once committed, enabling better collaboration across your organization.
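
The sketch below, using the Redshift Data API, suggests how a consumer warehouse might mount a share for writes and insert into it; the producer-side grants are omitted, and the namespace IDs, object names, and exact SQL (notably the WITH PERMISSIONS clause) are placeholders to check against the data sharing documentation.

```python
# Hedged sketch: a consumer warehouse mounts a datashare with write
# permissions and inserts into it via the Redshift Data API.
import boto3

rsd = boto3.client("redshift-data")

rsd.batch_execute_statement(
    WorkgroupName="etl-consumer-wg",  # Serverless workgroup sized for ETL
    Database="dev",
    Sqls=[
        "CREATE DATABASE sales_db FROM DATASHARE sales_share "
        "OF NAMESPACE '<producer-namespace-id>' WITH PERMISSIONS;",
        "INSERT INTO sales_db.sales.orders SELECT * FROM staging.orders;",
    ],
)
```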

Amazon Redshift multi-warehouse writes through data sharing is available for RA3 provisioned clusters and Serverless workgroups in all AWS regions where Amazon Redshift data sharing is supported. To get started with Amazon Redshift multi-warehouse writes through data sharing, visit the documentation page.

Read more


Amazon QuickSight now supports prompted reports and reader scheduling for pixel-perfect reports

We are enabling Amazon QuickSight readers to generate filtered views of pixel-perfect reports and create schedules to deliver reports via email. Readers can create up to five schedules per dashboard for themselves. Previously, only dashboard owners could create schedules, and only on the default (author-published) view of the dashboard. Now, if an author has added controls to the pixel-perfect report, schedules can be created or updated to respect selections on the filter control.

These features empower each user to create the view of a pixel-perfect report that they are interested in and receive it as a scheduled report. Authors can create filter controls (prompts) for different audiences to customize the view they are looking for. Readers can use the prompts to filter data and schedule it as a report. This ensures that customers receive the reports they are interested in, when they are interested in them.

Prompted Reports and Reader Scheduling are now available in all supported Amazon QuickSight regions - see here for QuickSight regional endpoints.

To learn how to set this up, see our documentation for reader scheduling and for prompted reports.

Read more


Amazon DataZone now enhances data access governance with enforced metadata rules

Amazon DataZone now supports enforced metadata rules for data access workflows, providing organizations with enhanced capabilities to strengthen governance and compliance with their organizational needs. This new feature allows domain owners to define and enforce mandatory metadata requirements, ensuring data consumers provide essential information when requesting access to data assets in Amazon DataZone. By streamlining metadata governance, this capability helps organizations meet compliance standards, maintain audit readiness, and simplify access workflows for greater efficiency and control.

With enforced metadata rules, domain owners can establish consistent governance practices across all data subscriptions. For example, financial services organizations can mandate specific compliance-related metadata when data consumers request access to sensitive financial data. Similarly, healthcare providers can enforce metadata requirements to align with regulatory standards for patient data access. This feature simplifies the approval process by guiding data consumers through completing mandatory fields and enabling data owners to make informed decisions, ensuring data access requests meet organizational policies.

The feature is supported in all the AWS commercial regions where Amazon DataZone is currently available.

Check out this blog and video to learn more about how to set up metadata rules for subscription workflows. Get started with the technical documentation.

Read more


AWS announces a new Apache Flink connector for Amazon Managed Service for Prometheus

Today, AWS announced support for a new Apache Flink connector for Amazon Managed Service for Prometheus. The new connector, contributed by AWS to the Apache Flink open source project, adds Amazon Managed Service for Prometheus as a new destination for Apache Flink. You can now manage your Prometheus metrics data cardinality by pre-processing raw data with Apache Flink to build real-time observability with Amazon Managed Service for Prometheus and Grafana.

Amazon Managed Service for Prometheus is a secure, serverless, scalable, Prometheus-compatible monitoring service. You can use the same open-source Prometheus data model and query language that you use today to monitor the performance of your workloads without having to manage the underlying infrastructure. Apache Flink connectors are software components that move data into and out of an Amazon Managed Service for Apache Flink application. You can use the new connector to send processed data to an Amazon Managed Service for Prometheus destination starting with Apache Flink version 1.19. With Amazon Managed Service for Apache Flink you can transform and analyze data in real time. There are no servers and clusters to manage, and there is no compute and storage infrastructure to set up.

You can learn more about Amazon Managed Service for Apache Flink and Amazon Managed Service for Prometheus in our documentation. To learn more about open source Apache Flink connectors visit the official website. For Amazon Managed Service for Apache Flink and Amazon Managed Service for Prometheus region availability, refer to the AWS Region Table.

Read more


AWS announces a new Apache Flink connector for Amazon Simple Queue Service

Today, AWS announced support for a new Apache Flink connector for Amazon Simple Queue Service. The new connector, contributed by AWS to the Apache Flink open source project, adds Amazon Simple Queue Service as a new destination for Apache Flink. You can use the new connector to send processed data from Amazon Managed Service for Apache Flink to Amazon Simple Queue Service as messages, using Apache Flink, a popular framework and engine for processing and analyzing streaming data.

Amazon Managed Service for Apache Flink makes it easier to transform and analyze streaming data in real time with Apache Flink. Apache Flink is an open source framework and engine for processing data streams. Amazon Managed Service for Apache Flink reduces the complexity of building and managing Apache Flink applications and integrates with Amazon Managed Streaming for Apache Kafka (Amazon MSK), Amazon Kinesis Data Streams, Amazon OpenSearch Service, Amazon DynamoDB streams, Amazon S3, custom integrations, and more using built-in connectors.

Amazon Simple Queue Service offers a secure, durable, and available hosted queue that lets you integrate and decouple distributed software systems and components. Amazon SQS offers common constructs such as dead-letter queues and cost allocation tags.

You can learn more about Amazon Managed Service for Apache Flink and Amazon Simple Queue Service in our documentation. To learn more about open source Apache Flink connectors visit the official website. For Amazon Managed Service for Apache Flink and Amazon Simple Queue Service region availability, refer to the AWS Region Table.

Read more


Amazon OpenSearch Ingestion now supports writing security data to Amazon Security Lake

Amazon OpenSearch Ingestion now allows you to write data into Amazon Security Lake in real time, allowing you to ingest security data from both AWS and custom sources and uncover valuable insights into potential security issues in near real time. Amazon Security Lake centralizes security data from AWS environments, SaaS providers, and on-premises sources into a purpose-built data lake. With this integration, customers can now seamlessly ingest and normalize security data from all popular custom sources before writing it into Amazon Security Lake.

Amazon Security Lake uses the Open Cybersecurity Schema Framework (OCSF) to normalize and combine security data from a broad range of enterprise security data sources in the Apache Parquet format. With this feature, you can now use Amazon OpenSearch Ingestion to ingest and transform security data from popular third-party sources like Palo Alto, CrowdStrike, and SentinelOne into OCSF format before writing the data into Security Lake. Once the data is written to Security Lake, it is available in the AWS Glue Data Catalog and AWS Lake Formation tables for the respective source.

This feature is available in all the 15 AWS commercial regions where Amazon OpenSearch Ingestion is currently available: US East (Ohio), US East (N. Virginia), US West (Oregon), US West (N. California), Europe (Ireland), Europe (London), Europe (Frankfurt), Asia Pacific (Tokyo), Asia Pacific (Sydney), Asia Pacific (Singapore), Asia Pacific (Mumbai), Asia Pacific (Seoul), Canada (Central), South America (Sao Paulo), and Europe (Stockholm).

To learn more, see the Amazon OpenSearch Ingestion webpage and the Amazon OpenSearch Service Developer Guide.

Read more


Amazon QuickSight launches Highcharts visual (preview)

Amazon QuickSight now offers Highcharts visuals, enabling authors to create custom visualizations using the Highcharts Core library. This new feature extends your visualization capabilities beyond QuickSight's standard chart offerings, allowing you to create bespoke charts such as sunburst charts, network graphs, 3D charts and many more.

Using declarative JSON syntax, authors can configure charts with greater flexibility and granular customization. You can easily reference QuickSight fields and themes in the JSON using QuickSight expressions. The integrated code editor includes contextual assistance features, providing autocomplete and real-time validation to ensure proper configuration. To maintain security, the Highcharts visual editor prevents the injection of CSS and JavaScript. Refer to the documentation for the supported list of JSON and QuickSight expressions.

Highcharts visual is now available in all supported Amazon QuickSight regions - US East (Ohio and N. Virginia), US West (Oregon), Africa (Cape Town), Asia Pacific (Jakarta, Mumbai, Seoul, Singapore, Sydney and Tokyo), Canada (Central), Europe (Frankfurt, Ireland, London, Milan, Paris, Stockholm, Zurich), South America (São Paulo) and AWS GovCloud (US-West). To learn more about the Highcharts visual and how to leverage its capabilities in your QuickSight dashboards, visit our documentation.

Read more


Amazon QuickSight now supports import visual capability (preview)

Amazon QuickSight introduces the ability to import visuals from an existing dashboard or analysis into your current analysis where authors have ownership privileges. This feature streamlines dashboard and report creation by allowing you to transfer associated dependencies such as datasets, parameters, calculated fields, filter definitions, and visual properties, including conditional formatting rules.

Authors can boost productivity by importing visuals instead of recreating them, facilitating collaboration across teams. The feature intelligently resolves conflicts, eliminates duplicates, rescopes filter definitions, and adjusts visuals to match the destination sheet type and theme. Imported visuals are forked from the source, ensuring independent customization. To learn more, click here.

The Import Visuals feature is available in all supported Amazon QuickSight regions - US East (Ohio and N. Virginia), US West (Oregon), Africa (Cape Town), Asia Pacific (Jakarta, Mumbai, Seoul, Singapore, Sydney and Tokyo), Canada (Central), Europe (Frankfurt, Ireland, London, Milan, Paris, Stockholm, Zurich), South America (São Paulo) and AWS GovCloud (US-West).

Read more


Amazon QuickSight launches Layer Map

Amazon QuickSight launches Layer Map, a new geospatial visual with shape layer support. With Layer Maps you can visualize data using custom geographic boundaries, such as congressional districts, sales territories, or user-defined regions. For example, sales managers can visualize sales performance by custom sales territories, and operations analysts can map package delivery volumes across different zip code formats (zip 2, zip 3).

Authors can add a shape layer over a base map by uploading a GeoJSON file and joining it with their data to visualize values. You can also style the shape layer by adjusting color, border, and opacity, as well as add interactivity through tooltips and actions. To learn more, click here.

Layer Map is now available in the following Amazon QuickSight regions - US East (Ohio and N. Virginia), US West (Oregon), Africa (Cape Town), Asia Pacific (Jakarta, Mumbai, Seoul, Singapore, Sydney and Tokyo), Canada (Central), Europe (Frankfurt, Ireland, London, Milan, Paris, Stockholm, Zurich), South America (São Paulo).

Read more


Amazon QuickSight launches Image component

Amazon QuickSight now includes the Image component. This provides authors greater flexibility to incorporate static images into their QuickSight dashboards, analyses, reports, and stories.

With the Image component, authors can upload images directly from their local desktop to QuickSight for a variety of use cases, such as adding company logos and branding, including background images with free-form layout, and creating captivating story covers. It also supports tooltips and alt text, providing additional context and accessibility for readers. Furthermore, it offers navigation and URL actions, enabling authors to make their images interactive, such as triggering specific dashboard actions when the image is clicked. For more details, refer to the documentation.

Image component is now available in all supported Amazon QuickSight regions - US East (Ohio and N. Virginia), US West (Oregon), Africa (Cape Town), Asia Pacific (Jakarta, Mumbai, Seoul, Singapore, Sydney and Tokyo), Canada (Central), Europe (Frankfurt, Ireland, London, Milan, Paris, Stockholm, Zurich), South America (São Paulo) and AWS GovCloud (US-West).

Read more


Amazon QuickSight now supports font customization for visuals

Amazon QuickSight now supports the ability to customize fonts across specific visuals. Authors can now completely customize fonts for Table and Pivot table visuals, while for the remaining visuals they can customize fonts for specific properties, including title, subtitle, legend title, and legend values.

Authors can set the font size (in pixels), font family, color, and styling options like bold, italics, and underline across analyses, dashboards, reports, and embedded scenarios. With this update, you can align the dashboard's fonts with your organization's branding guidelines, creating a cohesive and visually appealing experience. Additionally, the font customization options can help improve readability and meet accessibility standards, especially when viewing visuals on a large screen.

Font customization for the visuals listed above is now available in all supported Amazon QuickSight regions - US East (Ohio and N. Virginia), US West (Oregon), Africa (Cape Town), Asia Pacific (Jakarta, Mumbai, Seoul, Singapore, Sydney and Tokyo), Canada (Central), Europe (Frankfurt, Ireland, London, Milan, Paris, Stockholm, Zurich), South America (São Paulo) and AWS GovCloud (US-West).
 

Read more


Announcing generative AI troubleshooting for Apache Spark in AWS Glue (Preview)

AWS Glue announces generative AI troubleshooting for Apache Spark, a new capability that helps data engineers and scientists quickly identify and resolve issues in their Spark jobs. Spark Troubleshooting uses machine learning and generative AI technologies to provide automated root cause analysis for Spark job issues, along with actionable recommendations to fix identified issues.

AWS Glue is a serverless, scalable data integration service that makes it easier to discover, prepare, and combine data for analytics, machine learning, and application development. With Spark troubleshooting, you can initiate automated analysis of failed jobs with a single click in the AWS Glue console. This feature provides root cause analysis and remediation steps for hard-to-diagnose Spark issues like memory errors, data skew problems, and resource not found exceptions. This helps you reduce downtime in critical data pipelines. Powered by Amazon Bedrock, Spark troubleshooting reduces debugging time from days to minutes.

The generative AI troubleshooting for Apache Spark preview is available for jobs running on AWS Glue 4.0, and in the following AWS Regions: US East (N. Virginia), US West (Oregon), Europe (Ireland), US East (Ohio), and more. To learn more, visit the AWS Glue website, read the Launch blog, or read the documentation.
 

Read more


Amazon Managed Service for Apache Flink offers a new Apache Flink connector for Amazon Kinesis Data Streams

Amazon Managed Service for Apache Flink now offers a new Apache Flink connector for Amazon Kinesis Data Streams. This open-source connector, contributed by AWS, supports Apache Flink 2.0 and provides several enhancements. It enables in-order reads during stream scale-up or scale-down, supports Apache Flink's native watermarking, and improves observability through unified connector metrics. Additionally, the connector uses the AWS SDK for Java 2.x, which supports enhanced performance and security features, and a native retry strategy.

Amazon Kinesis Data Streams is a serverless data streaming service that enables customers to capture, process, and store data streams at any scale. Amazon Managed Service for Apache Flink makes it easier to transform and analyze streaming data in real time with Apache Flink without having to manage servers or clusters. You can use the new connector to consume data from a Kinesis Data Stream source for real-time processing in your Apache Flink application and can also send data back to a Kinesis Data Streams destination. You can use the new connector to read data from a Kinesis data stream starting with Apache Flink version 1.19.

To learn more about Apache Flink Amazon Kinesis Data Streams connector, visit the official Apache Flink documentation. You can also check the GitHub repositories for Apache AWS connectors.
 

Read more


Amazon Redshift announces support for Confluent Cloud and Apache Kafka

Amazon Redshift now supports streaming ingestion from Confluent Cloud and self-managed Apache Kafka clusters on Amazon EC2 instances, expanding its capabilities beyond Amazon Kinesis Data Streams (KDS) and Amazon Managed Streaming for Apache Kafka (MSK).

With this update, customers can ingest data from a wider range of streaming sources directly into their Amazon Redshift data warehouses. Amazon Redshift introduces mTLS (mutual Transport Layer Security) as the authentication protocol for secure communication between Amazon Redshift and the newly supported Kafka streaming sources. This ensures that data ingestion from these new sources maintains the high security standards expected in enterprise data workflows. Additionally, a new SQL identifier 'KAFKA' has been introduced to simplify the identification of these newly supported Kafka sources in Amazon Redshift External Schema definitions.

You can start using this expanded streaming ingestion capability immediately, to build more comprehensive and flexible data pipelines that ingest data from various Kafka sources — those offered by AWS (Amazon MSK), those available from partners (Confluent Cloud) or those that are self-managed (Apache Kafka) on Amazon EC2.
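
A hypothetical sketch of the new source type is shown below via the Redshift Data API; the exact CREATE EXTERNAL SCHEMA clauses for the KAFKA identifier, the mTLS authentication option, and the broker URI format are assumptions to verify in the streaming ingestion documentation.

```python
# Hypothetical sketch: registering a Kafka source with the new KAFKA
# identifier and mTLS, then materializing a topic. Clause names and the URI
# format are assumptions, not confirmed syntax.
import boto3

rsd = boto3.client("redshift-data")

rsd.batch_execute_statement(
    ClusterIdentifier="analytics-cluster",  # placeholder provisioned cluster
    Database="dev",
    DbUser="awsuser",
    Sqls=[
        "CREATE EXTERNAL SCHEMA kafka_src "
        "FROM KAFKA "
        "IAM_ROLE 'arn:aws:iam::111122223333:role/RedshiftStreamingRole' "
        "AUTHENTICATION mtls "
        "URI 'b-1.my-kafka.example.com:9094';",
        'CREATE MATERIALIZED VIEW clickstream_mv AUTO REFRESH YES AS '
        'SELECT * FROM kafka_src."clickstream-topic";',
    ],
)
```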

To learn more and get started with streaming data into Amazon Redshift from any Kafka source, refer to the Amazon Redshift streaming documentation.

Read more


Amazon Redshift Query Editor V2 Increases Maximum Result Set and Export size to 100MB

AWS announces that Amazon Redshift Query Editor V2 now supports an increased maximum result set and export size of 100MB, with no row limit. Previously, query result sets were limited to 5MB or 100,000 rows. This enhancement provides greater flexibility for you and your team to work with large datasets, enabling you to generate, analyze, and export more comprehensive data without the previous constraints.

If you work with large datasets, such as security logs, gaming data, and other big data workloads, that require in-depth analysis, the previous 5MB or 100,000-row limit on result sets and exports often fell short of your needs, forcing you to piece together insights from multiple queries and downloads. With the new 100MB result set size and export capabilities in Amazon Redshift Query Editor, you can now generate a single, more complete view of your data, export it directly as a CSV or JSON file, and conduct richer analysis to drive better-informed business decisions.

The increased 100MB result set and export size capabilities for Amazon Redshift Query Editor V2 are available in all AWS commercial Regions. For more information about the AWS Regions where Redshift is available, please refer to the AWS Regions table.

To learn more, see the Amazon Redshift documentation.
 

Read more


Announcing generative AI upgrades for Apache Spark in AWS Glue (preview)

AWS Glue announces generative AI upgrades for Apache Spark, a new generative AI capability that enables data practitioners to quickly upgrade and modernize their existing Spark jobs. Powered by Amazon Bedrock, this feature automates the analysis and updating of Spark scripts and configurations, reducing the time and effort required for Spark upgrades from weeks to minutes.

AWS Glue is a serverless, scalable data integration service that makes it easier to discover, prepare, and combine data for analytics, machine learning, and application development. With Spark Upgrades, you can initiate automated upgrades with a single click in the AWS Glue console to modernize your Spark jobs from an older version to AWS Glue version 4.0. This feature analyzes your Python-based Spark jobs and generates upgrade plans detailing code changes and configuration modifications. It leverages generative AI to iteratively improve and validate the upgraded code by executing test runs as Glue jobs. Once validation is successful, you receive a detailed summary of all changes for review, enabling confident deployment of your upgraded Spark jobs. This automated approach reduces the complexity of Spark upgrades while maintaining the reliability of your data pipelines.

The generative AI upgrades for Apache Spark preview is available for AWS Glue in the following AWS Regions: US East (Ohio), US East (N. Virginia), US West (Oregon), Asia Pacific (Tokyo), and Asia Pacific (Sydney). To learn more, visit the AWS Glue website, read the Launch blog, or read the documentation.
 

Read more


Amazon OpenSearch Ingestion now supports AWS Lambda for custom data transformation

Amazon OpenSearch Ingestion now allows you to leverage AWS Lambda for event processing and routing, enabling complex transformation and enrichment of your streaming data. Customers can now define custom Lambda functions in their OpenSearch Ingestion pipelines for use cases like generating vector embeddings and performing lookups in external databases to power advanced search use cases.

OpenSearch Ingestion gives you the option of either using only Lambda functions or chaining Lambda functions with native Data Prepper processors when transforming data. You can also batch events into a single payload based on event count and size before invoking Lambda, to optimize the number of Lambda invocations, reduce costs, and improve throughput. Furthermore, you can use this feature with the built-in conditional expressions in Amazon OpenSearch Ingestion to enable use cases like sending out emails and notifications for real-time alerting.
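
To make this concrete, the sketch below creates a pipeline whose processor chain invokes a Lambda function before indexing; the Data Prepper processor name and options (aws_lambda, the batching threshold) and the function itself are assumptions for illustration, so check the Data Prepper reference before relying on them.

```python
# Hedged sketch: an OpenSearch Ingestion pipeline that routes events through
# a Lambda function before indexing. Processor options are assumptions.
import boto3

pipeline_yaml = """
version: "2"
enrich-pipeline:
  source:
    http:
      path: /ingest
  processor:
    - aws_lambda:
        function_name: embed-and-enrich        # hypothetical function
        invocation_type: request-response
        batch:
          threshold:
            event_count: 50
  sink:
    - opensearch:
        hosts: ["https://my-domain.us-east-1.es.amazonaws.com"]
        index: enriched-events
"""

boto3.client("osis").create_pipeline(
    PipelineName="lambda-enrich",
    MinUnits=1,
    MaxUnits=4,
    PipelineConfigurationBody=pipeline_yaml,
)
```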

This feature is available in all the 15 AWS commercial regions where Amazon OpenSearch Ingestion is currently available: US East (Ohio), US East (N. Virginia), US West (Oregon), US West (N. California), Europe (Ireland), Europe (London), Europe (Frankfurt), Asia Pacific (Tokyo), Asia Pacific (Sydney), Asia Pacific (Singapore), Asia Pacific (Mumbai), Asia Pacific (Seoul), Canada (Central), South America (Sao Paulo), and Europe (Stockholm).

To learn more, see the Amazon OpenSearch Ingestion webpage and the Amazon OpenSearch Service Developer Guide.

Read more


AWS Lake Formation now supports named LF-Tag expressions

Today, AWS announces the general availability of named LF-Tag expressions in AWS Lake Formation. With this launch, customers can create and manage named combinations of LF-Tags. With named LF-Tag expressions, customers can now create permission expressions that better represent complex business requirements.

Customers use LF-Tags to create complex data grants based on attributes and want to manage combinations of LF-Tags. Now, when customers want to grant the same combination of LF-Tags to multiple users, they can create a named LF-Tag expression and grant that expression to multiple users rather than providing the full expression for every grant. Additionally, when a customer’s LF-Tag ontology changes, for example due to changed business requirements, they can update a single expression instead of all the permissions that used the changed LF-Tags.
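
A hedged boto3 sketch of the flow might look like the following; the create_lf_tag_expression call reflects this launch, while the grant shape referencing the expression by name is an assumption to check against the Lake Formation API reference.

```python
# Hedged sketch: define a reusable named LF-Tag expression, then grant it.
# The ExpressionName field on the grant is an assumption, not confirmed API.
import boto3

lf = boto3.client("lakeformation")

lf.create_lf_tag_expression(
    Name="pii-finance-readers",
    Expression=[
        {"TagKey": "domain", "TagValues": ["finance"]},
        {"TagKey": "classification", "TagValues": ["pii"]},
    ],
)

lf.grant_permissions(
    Principal={"DataLakePrincipalIdentifier": "arn:aws:iam::111122223333:role/Analysts"},
    Resource={"LFTagPolicy": {"ResourceType": "TABLE", "ExpressionName": "pii-finance-readers"}},
    Permissions=["SELECT"],
)
```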

Named LF-Tag expressions are generally available in commercial AWS Regions where AWS Lake Formation is available and the AWS GovCloud (US) Regions.

To get started with this feature, visit the AWS Lake Formation documentation.
 

Read more


Amazon OpenSearch Service now supports Custom Plugins

Amazon OpenSearch Service introduces Custom Plugins, a new plugin management option that allows you to extend OpenSearch functionality and deliver personalized experiences for applications such as website search, log analytics, application monitoring, and observability. OpenSearch provides a rich set of search and analysis capabilities, and with custom plugins, you can extend these further to meet your business needs.

Until now, you had to build and operate your own search infrastructure to support applications that required customization in areas like language analysis, custom filtering, ranking, and more. With this launch, you can run custom plugins on Amazon OpenSearch Service that allow you to extend the search and analysis functions of OpenSearch. You can use the OpenSearch Service console or APIs to upload and associate search and analysis plugins with your domains. OpenSearch Service validates the plugin package for version compatibility, security, and permitted plugin operations.
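
A hedged boto3 sketch of that upload-and-associate flow follows; bucket, key, and domain names are placeholders, and the ZIP-PLUGIN package type and engine-version pinning are inferred from this launch's description.

    import boto3

    opensearch = boto3.client("opensearch")

    # Upload a plugin zip staged in S3 as a custom package.
    pkg = opensearch.create_package(
        PackageName="my-custom-analyzer",
        PackageType="ZIP-PLUGIN",
        PackageSource={"S3BucketName": "my-plugin-bucket", "S3Key": "my-analyzer.zip"},
        EngineVersion="OpenSearch_2.15",
    )

    # Associate the validated package with a domain.
    opensearch.associate_package(
        PackageID=pkg["PackageDetails"]["PackageID"],
        DomainName="my-domain",
    )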

Custom plugins are now supported on all OpenSearch Service domains running OpenSearch version 2.15 or later, and are available in 14 regions globally: US West (Oregon), US East (Ohio), US East (N. Virginia), South America (Sao Paulo), Europe (Paris), Europe (London), Europe (Ireland), Europe (Frankfurt), Canada (Central), Asia Pacific (Tokyo), Asia Pacific (Sydney), Asia Pacific (Singapore), Asia Pacific (Seoul) and Asia Pacific (Mumbai).

To get started with custom plugins, visit our documentation. To learn more about Amazon OpenSearch Service, please visit the product page.
 

Read more


AWS Glue Data Catalog now supports Apache Iceberg automatic table optimization through Amazon VPC

AWS Glue Data Catalog now supports automatic optimization of Apache Iceberg tables that can only be accessed from a specific Amazon Virtual Private Cloud (VPC) environment. You can enable automatic optimization by providing a VPC configuration to optimize storage and improve query performance while keeping your tables secure.

AWS Glue Data Catalog supports compaction, snapshot retention, and unreferenced file management, which help you reduce metadata overhead, control storage costs, and improve query performance. Customers whose governance and security configurations require an Amazon S3 bucket to reside within a specific VPC can now use automatic optimization with the Glue Data Catalog. This gives you broader capabilities for automatic management of your Apache Iceberg data, regardless of where it's stored on Amazon S3.

Automatic optimization for Iceberg tables through Amazon VPC is available in 13 AWS Regions: US East (N. Virginia, Ohio), US West (Oregon), Europe (Ireland, London, Frankfurt, Stockholm), Asia Pacific (Tokyo, Seoul, Mumbai, Singapore, Sydney), and South America (São Paulo). Customers can enable this through the AWS Console, AWS CLI, or AWS SDKs.

To get started, you can now provide the Glue network connection as an additional configuration along with optimization settings such as default retention period and days to keep unreferenced files. The AWS Glue Data Catalog will use the VPC information in the Glue connection to access Amazon S3 buckets and optimize Apache Iceberg tables.
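
As a sketch under those assumptions, the boto3 call below enables compaction with a Glue network connection; the vpcConfiguration/glueConnectionName shape is inferred from this description rather than quoted from the API reference.

    import boto3

    glue = boto3.client("glue")

    # Enable compaction for an Iceberg table whose S3 bucket is reachable
    # only through a specific VPC (names and ARNs are placeholders).
    glue.create_table_optimizer(
        CatalogId="111122223333",
        DatabaseName="analytics",
        TableName="events_iceberg",
        Type="compaction",
        TableOptimizerConfiguration={
            "roleArn": "arn:aws:iam::111122223333:role/GlueOptimizerRole",
            "enabled": True,
            "vpcConfiguration": {"glueConnectionName": "my-vpc-connection"},
        },
    )
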
To learn more, read the blog, and visit the AWS Glue Data Catalog documentation.
 

Read more


Amazon MWAA adds smaller environment size

Amazon Managed Workflows for Apache Airflow (MWAA) now offers a micro environment size, giving customers of the managed service the ability to create multiple, independent environments for development and data isolation at a lower cost.

Amazon MWAA is a managed orchestration service for Apache Airflow that makes it easier to set up and operate end-to-end data pipelines in the cloud. With Amazon MWAA micro environments, customers can now create smaller, cost-effective environments that are more efficient for development use, as well as for teams that require data isolation with lightweight workflow requirements.

You can create a micro size Amazon MWAA environment with just a few clicks in the AWS Management Console in all currently supported Amazon MWAA regions. To learn more about Amazon MWAA micro environments, visit the Launch Blog. To learn more about Amazon MWAA, visit the Amazon MWAA documentation.
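
For teams automating environment creation, a minimal boto3 sketch follows; the Airflow version, bucket, role, and network values are placeholders for your own resources.

    import boto3

    mwaa = boto3.client("mwaa")

    # Create a micro-sized environment by selecting the mw1.micro class.
    mwaa.create_environment(
        Name="dev-micro",
        EnvironmentClass="mw1.micro",
        AirflowVersion="2.10.1",  # illustrative; pick a supported version
        SourceBucketArn="arn:aws:s3:::my-dags-bucket",
        DagS3Path="dags",
        ExecutionRoleArn="arn:aws:iam::111122223333:role/MwaaExecutionRole",
        NetworkConfiguration={
            "SecurityGroupIds": ["sg-0123456789abcdef0"],
            "SubnetIds": ["subnet-aaa111", "subnet-bbb222"],
        },
    )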


Apache, Apache Airflow, and Airflow are either registered trademarks or trademarks of the Apache Software Foundation in the United States and/or other countries.
 

Read more


Amazon CloudWatch Internet Monitor adds AWS Local Zones support for VPC subnets

Today, Amazon CloudWatch Internet Monitor introduces support for select AWS Local Zones. Now, you can monitor internet traffic performance for VPC subnets deployed in Local Zones.

With this new feature, you can also view optimization suggestions that include Local Zones. On the Optimize tab in the Internet Monitor console, select the toggle to include Local Zones in traffic optimization suggestions for your application. Additionally, you can compare your current configuration with other supported Local Zones. Select the option to see more optimization suggestions, and then choose specific Local Zones to compare. By comparing latency differences, you can determine the proposed best configuration for your traffic.

At launch, CloudWatch Internet Monitor supports the following Local Zones: us-east-1-dfw-2a, us-east-1-mia-2a, us-east-1-qro-1a, us-east-1-lim-1a, us-east-1-atl-2a, us-east-1-bue-1a, us-east-1-mci-1a, us-west-2-lax-1a, us-west-2-lax-1b, and af-south-1-los-1a.

To learn more, visit the Internet Monitor user guide documentation.

Read more


OpenSearch’s vector engine adds support for UltraWarm on Amazon OpenSearch Service

UltraWarm is a fully managed, warm storage tier designed to deliver cost savings on Amazon OpenSearch Service. With OpenSearch 2.17+ domains, you can now store k-NN (vector) indexes on UltraWarm storage, reducing the cost of serving infrequently accessed k-NN indexes through warm and cold storage tiers. With UltraWarm storage, you can further cost-optimize vector search workloads on the OpenSearch vector engine. To learn more, refer to the documentation.
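
As an illustration, migrating an existing k-NN index to UltraWarm uses the domain's _ultrawarm migration API; the endpoint and credentials below are placeholders, and requests must be SigV4-signed or authenticated per your domain's access policy.

    import requests

    endpoint = "https://my-domain.us-east-1.es.amazonaws.com"  # placeholder

    # Move a hot k-NN index to the UltraWarm tier.
    resp = requests.post(
        f"{endpoint}/_ultrawarm/migration/my-knn-index/_warm",
        auth=("master-user", "master-password"),  # or a SigV4-signed request
        timeout=30,
    )
    print(resp.json())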

Read more


Amazon QuickSight supports fine-grained permissions for capabilities with APIs for IAM Identity Center users

Amazon QuickSight now supports user-level custom permissions profile assignment for IAM Identity Center users. Custom permissions profiles enable administrators to restrict access to capabilities in the QuickSight application by adding the profile to a user. A custom permissions profile defines which capabilities are disabled for a user or role. For example, administrators can restrict specific users from exporting data to Excel and CSV, and prevent users from sharing QuickSight assets.

Custom permissions profiles are managed with the following APIs: CreateCustomPermissions, ListCustomPermissions, DescribeCustomPermissions, UpdateCustomPermissions and DeleteCustomPermissions. Custom permissions assignment to users is managed with the following APIs: UpdateUserCustomPermission and DeleteUserCustomPermission. These APIs are supported with all identity types in QuickSight.
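
A hedged boto3 sketch of the flow follows, assuming the APIs above surface in boto3 as snake_case methods; the Capabilities keys shown are illustrative of the capabilities named in this post, not an exhaustive or verified list.

    import boto3

    qs = boto3.client("quicksight")
    account_id = "111122223333"

    # Define a profile that blocks exports and sharing.
    qs.create_custom_permissions(
        AwsAccountId=account_id,
        CustomPermissionsName="restricted-analyst",
        Capabilities={
            "ExportToCsv": "DENY",
            "ExportToExcel": "DENY",
            "ShareDashboards": "DENY",
        },
    )

    # Assign the profile to an IAM Identity Center user.
    qs.update_user_custom_permission(
        UserName="analyst@example.com",
        AwsAccountId=account_id,
        Namespace="default",
        CustomPermissionsName="restricted-analyst",
    )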

This feature is available in all AWS Regions where Amazon QuickSight is available. To learn more, see Customizing access to Amazon QuickSight capabilities.

Read more


Amazon Kinesis Data Streams On-Demand mode supports streams writing up to 10 GB/s

Amazon Kinesis Data Streams On-Demand mode now automatically scales to support streaming applications that write up to 10 GB/s per stream and consumers that read up to 20 GB/s per stream. This is a 5x increase from the previously supported limits of 2 GB/s per stream for writers and 4 GB/s for readers.

Amazon Kinesis Data Streams is a serverless data streaming service that allows customers to build decoupled applications that publish and consume real-time data streams. It includes integrations with 40+ AWS and third-party services, enabling customers to easily build real-time stream processing, analytics, and machine learning applications. Customers use Kinesis Data Streams On-Demand mode for workloads with unpredictable and variable traffic patterns, so they do not have to manage capacity, and they pay based on the amount of data streamed. Customers can now use On-Demand mode for high-throughput data streams.
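
Creating and writing to an on-demand stream needs no shard management; a minimal boto3 example follows (stream name and payload are placeholders).

    import boto3

    kinesis = boto3.client("kinesis")

    # Create the stream in on-demand mode; capacity scales automatically.
    kinesis.create_stream(
        StreamName="clickstream",
        StreamModeDetails={"StreamMode": "ON_DEMAND"},
    )
    kinesis.get_waiter("stream_exists").wait(StreamName="clickstream")

    kinesis.put_record(
        StreamName="clickstream",
        Data=b'{"event": "page_view"}',
        PartitionKey="user-123",
    )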

There is no action required on your part to use this feature in the US East (N. Virginia), US West (Oregon), and Europe (Ireland) AWS Regions. When you write data to your Kinesis On-Demand stream, it will automatically scale to write up to 10 GB/s. For other AWS Regions, you can reach out to AWS Support to raise the peak write throughput of your On-Demand streams to 10 GB/s. To learn more, see the Kinesis Data Streams Quotas and Limits documentation.

Read more


Announcing Amazon EMR 7.4 Release

Today, we are excited to announce the general availability of Amazon EMR 7.4. Amazon EMR 7.4 supports Apache Spark 3.5.2, Apache Hadoop 3.4.0, Trino 446, Apache HBase 2.5.5, Apache Phoenix 5.2.0, Apache Flink 1.19.0, Presto 0.287, and Apache ZooKeeper 3.9.2.

Amazon EMR 7.4 enables in-transit encryption for 7 additional endpoints used with distributed applications like Apache Livy, Apache Hue, JupyterEnterpriseGateway, Apache Ranger, and Apache ZooKeeper. This update builds on the previous release, Amazon EMR 7.3, which enabled in-transit encryption for 22 endpoints. In-transit encryption enables you to run workloads that meet strict regulatory or compliance requirements by protecting the confidentiality and integrity of your data.
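
To pin a cluster to this release in automation, a minimal boto3 sketch follows; instance types, counts, and roles are placeholders.

    import boto3

    emr = boto3.client("emr")

    # Launch a cluster on the emr-7.4.0 release label.
    emr.run_job_flow(
        Name="spark-3.5.2-cluster",
        ReleaseLabel="emr-7.4.0",
        Applications=[{"Name": "Spark"}, {"Name": "Hadoop"}],
        Instances={
            "InstanceGroups": [
                {"InstanceRole": "MASTER", "InstanceType": "m5.xlarge", "InstanceCount": 1},
                {"InstanceRole": "CORE", "InstanceType": "m5.xlarge", "InstanceCount": 2},
            ],
            "KeepJobFlowAliveWhenNoSteps": True,
        },
        ServiceRole="EMR_DefaultRole",
        JobFlowRole="EMR_EC2_DefaultRole",
    )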

Amazon EMR 7.4 is now available in all regions where Amazon EMR is available. To learn how to enable in-transit encryption for your Amazon EMR clusters, view the TLS documentation. See Regional Availability of Amazon EMR, and our release notes for more detailed information.

Read more


Amazon OpenSearch Serverless Includes SQL API Support

Amazon OpenSearch Serverless now enables you to query your data using OpenSearch SQL and OpenSearch Piped Processing Language (PPL) through REST API, Java Database Connectivity (JDBC), and Command Line Interface (CLI). Amazon OpenSearch Serverless is a serverless option that makes it easy to run search and analytics workloads without having to think about infrastructure management. This new SQL and PPL API support addresses the need for familiar query syntax and improved integration with existing analytics tools, benefiting data analysts and developers who work with OpenSearch Serverless collections.

SQL API support in OpenSearch Serverless allows you to leverage your existing SQL skills and tools to analyze data stored in your collections. You can now use the AWS CLI to run SQL queries directly from your terminal, connect your preferred business intelligence tools via JDBC drivers, and integrate SQL and PPL queries into your Java applications. This feature is particularly useful for organizations looking to streamline their analytics workflows or those transitioning from traditional relational databases to OpenSearch Serverless.
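
As a hedged sketch, the snippet below posts a SQL statement to a collection endpoint, assuming the serverless data plane exposes the OpenSearch SQL plugin's _plugins/_sql path; the collection endpoint is a placeholder, and requests are SigV4-signed for the "aoss" service.

    import boto3
    import requests
    from requests_aws4auth import AWS4Auth

    creds = boto3.Session().get_credentials()
    auth = AWS4Auth(creds.access_key, creds.secret_key, "us-east-1", "aoss",
                    session_token=creds.token)

    resp = requests.post(
        "https://<collection-id>.us-east-1.aoss.amazonaws.com/_plugins/_sql",
        auth=auth,
        json={"query": "SELECT status, COUNT(*) FROM web_logs GROUP BY status"},
        timeout=30,
    )
    print(resp.json())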

SQL API support on OpenSearch Serverless is now available in 15 regions globally: US East (Ohio), US East (N. Virginia), US West (Oregon), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Europe (Frankfurt), Europe (Ireland), Europe (Paris), Europe (London), Asia Pacific (Mumbai), South America (Sao Paulo), Canada (Central), Asia Pacific (Seoul), and Europe (Zurich). Please refer to the AWS Regional Services List for more information about Amazon OpenSearch Service availability.

To learn more about SQL API support in OpenSearch Serverless, see the documentation.

Read more


Disk-optimized vector engine now available on the Amazon OpenSearch Service

Amazon OpenSearch's vector engine can now run modern search applications at a third of the cost on OpenSearch 2.17 domains. When you configure a k-NN (vector) index for disk mode, it becomes optimized for operating in a low-memory environment. With disk mode on, the index is compressed using techniques like binary quantization, and search quality (recall) is retained through a disk-optimized rescoring mechanism using full-precision vectors. Disk mode is an excellent option for vector search workloads that require high accuracy and cost efficiency and can tolerate latencies in the low hundreds of milliseconds. It provides customers with a lower-cost alternative to the existing in-memory mode when single-digit latency is unnecessary. To learn more, refer to the documentation.
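
Enabling disk mode is an index-mapping choice; a hedged example follows (domain endpoint, credentials, and dimension are placeholders, and the mode field follows OpenSearch 2.17's k-NN conventions).

    import requests

    index_body = {
        "settings": {"index.knn": True},
        "mappings": {
            "properties": {
                "embedding": {
                    "type": "knn_vector",
                    "dimension": 768,
                    # on_disk trades single-digit-ms latency for lower memory
                    # via compression plus full-precision rescoring.
                    "mode": "on_disk",
                }
            }
        },
    }
    requests.put(
        "https://my-domain.us-east-1.es.amazonaws.com/my-vectors",
        json=index_body,
        auth=("user", "password"),  # or a SigV4-signed request
        timeout=30,
    )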

Read more


AWS Glue expands connectivity to 19 native connectors for Enterprise applications

AWS Glue announces 19 new connectors for Enterprise applications to expand its connectivity portfolio. Now, customers can use AWS Glue native connectors to ingest data from Facebook Ads, Google Ads, Google Analytics 4, Google Sheets, HubSpot, Instagram Ads, Intercom, Jira Cloud, Marketo, Oracle NetSuite, SAP OData, Salesforce Marketing Cloud, Salesforce Marketing Cloud Account Engagement, ServiceNow, Slack, Snapchat Ads, Stripe, Zendesk, and Zoho CRM.

As enterprises increasingly rely on data-driven decisions, they are looking for services that make it easier to integrate data from various Enterprise applications. With these 19 new connectors, customers can easily establish a connection to their Enterprise applications using the AWS console or AWS Glue APIs, without the need to learn application-specific APIs. These connectors are scalable and performant with the AWS Glue Spark engine and support standard authorization and authentication methods like OAuth 2.0. With these connectors, customers can test connections, validate connection credentials, browse metadata, and preview data.

AWS Glue native connectors to Facebook Ads, Google Ads, Google Analytics 4, Google Sheets, HubSpot, Instagram Ads, Intercom, Jira Cloud, Marketo, Oracle NetSuite, SAP OData, Salesforce Marketing Cloud, Salesforce Marketing Cloud Account Engagement, ServiceNow, Slack, Snapchat Ads, Stripe, Zendesk, and Zoho CRM are available in all AWS commercial regions.

To get started, create new AWS Glue connections with these connectors and use them as a source in AWS Glue Studio. To learn more, visit the AWS Glue documentation for connectors.

Read more


Amazon OpenSearch Serverless now supports point in time (PIT) search

Amazon OpenSearch Serverless has added support for Point in Time (PIT) search, enabling you to run multiple queries against a dataset fixed at a specific moment. This feature allows you to maintain consistent search results even as your data continues to change, making it particularly useful for applications that require deep pagination or need to preserve a stable view of data across multiple queries.

Point in time search supports both forward and backward navigation through search results, ensuring consistency even during ongoing data ingestion. This feature is ideal for e-commerce applications, content management systems, and analytics platforms that require reliable and consistent search capabilities across large datasets.
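
A hedged sketch of the flow: pin a point in time, then page through it for consistent deep pagination; endpoint, index, and field names are placeholders, and requests are SigV4-signed for the "aoss" service.

    import boto3
    import requests
    from requests_aws4auth import AWS4Auth

    creds = boto3.Session().get_credentials()
    auth = AWS4Auth(creds.access_key, creds.secret_key, "us-east-1", "aoss",
                    session_token=creds.token)
    endpoint = "https://<collection-id>.us-east-1.aoss.amazonaws.com"

    # Create the PIT, keeping it alive for ten minutes.
    pit = requests.post(f"{endpoint}/logs/_search/point_in_time?keep_alive=10m",
                        auth=auth, timeout=30).json()

    # Query against the pinned snapshot; repeat with search_after to paginate.
    page = requests.post(
        f"{endpoint}/_search",
        auth=auth,
        json={
            "size": 100,
            "pit": {"id": pit["pit_id"], "keep_alive": "10m"},
            "sort": [{"timestamp": "asc"}],
        },
        timeout=30,
    ).json()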

Point in time search on Amazon OpenSearch Serverless is now available in 15 regions globally: US East (Ohio), US East (N. Virginia), US West (Oregon), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Europe (Frankfurt), Europe (Ireland), Europe (Paris), Europe (London), Asia Pacific (Mumbai), South America (Sao Paulo), Canada (Central), Asia Pacific (Seoul), and Europe (Zurich). Please refer to the AWS Regional Services List for more information about Amazon OpenSearch Service availability. To learn more about OpenSearch Serverless, see the documentation.

Read more


Amazon OpenSearch Service now scales to 1000 data nodes on a single cluster

Amazon OpenSearch Service now enables you to scale a single cluster to 1000 data nodes (1000 hot nodes and/or 750 warm nodes) and manage 25 petabytes of data (10 petabytes on hot nodes and a further 15 petabytes on warm nodes). You no longer need to set up multiple clusters for workloads that require more than 200 data nodes or more than 3 petabytes of data.

Previously, for workloads of more than 3 to 4 petabytes of data, you needed to create multiple clusters in OpenSearch Service. This may have required you to refactor your applications or business logic to work with your workload split across multiple clusters. In addition, every cluster requires its own configuration, management, and monitoring, adding to the operational overhead. With this launch, you can scale a single cluster up to 1000 nodes, or 25 petabytes of data, removing the operational overhead that comes with managing multiple clusters.

To scale a cluster beyond 200 nodes, you have to request an increase through Service Quotas, after which you can modify your cluster configuration using the AWS Console, AWS CLI, or the AWS SDK. Depending on the size of the cluster, OpenSearch Service will recommend configuration prerequisites across data nodes, cluster manager nodes, and coordinator nodes. For more information, refer to the documentation.
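
After the quota increase is approved, scaling is a configuration update; a boto3 sketch follows (domain name, instance types, and counts are illustrative).

    import boto3

    opensearch = boto3.client("opensearch")

    opensearch.update_domain_config(
        DomainName="my-large-domain",
        ClusterConfig={
            "InstanceType": "r6g.2xlarge.search",
            "InstanceCount": 400,  # beyond 200 requires the quota increase
            "DedicatedMasterEnabled": True,
            "DedicatedMasterType": "m6g.8xlarge.search",
            "DedicatedMasterCount": 3,
        },
    )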

The new limits are available to all OpenSearch Service clusters running OpenSearch 2.17 and above in all AWS regions where Amazon OpenSearch Service is available. Please refer to the AWS Region Table for more information about Amazon OpenSearch Service availability.

Read more


Amazon OpenSearch Serverless now supports Binary Vector and FP16 cost savings features

We are excited to announce that Amazon OpenSearch Serverless now supports binary vector and FP16 compression, helping reduce costs by lowering memory requirements. It also lowers latency and improves performance with an acceptable accuracy trade-off. OpenSearch Serverless is a serverless deployment option for Amazon OpenSearch Service that makes it simple to run search and analytics workloads without the complexities of infrastructure management. OpenSearch Serverless’ compute capacity used for data ingestion, search, and query is measured in OpenSearch Compute Units (OCUs).
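
As a hedged illustration, a binary-vector index can be declared in the k-NN mapping; the collection endpoint is a placeholder (SigV4 signing for the "aoss" service omitted for brevity), and the engine/space_type pairing follows OpenSearch k-NN conventions for binary vectors.

    import requests

    index_body = {
        "settings": {"index.knn": True},
        "mappings": {
            "properties": {
                "embedding": {
                    "type": "knn_vector",
                    "dimension": 768,
                    "data_type": "binary",  # bit-packed vectors cut memory sharply
                    "method": {
                        "name": "hnsw",
                        "engine": "faiss",
                        "space_type": "hamming",
                    },
                }
            }
        },
    }
    # Add SigV4 auth (service "aoss") before running against a collection.
    requests.put("https://<collection-id>.us-east-1.aoss.amazonaws.com/my-vectors",
                 json=index_body, timeout=30)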

Binary vector and FP16 support on OpenSearch Serverless is now available in 17 regions globally: US East (Ohio), US East (N. Virginia), US West (Oregon), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Europe (Frankfurt), Europe (Ireland), Europe (Paris), Europe (London), Asia Pacific (Mumbai), South America (Sao Paulo), Canada (Central), Asia Pacific (Seoul), Europe (Zurich), AWS GovCloud (US-West), and AWS GovCloud (US-East). Please refer to the AWS Regional Services List for more information about Amazon OpenSearch Service availability. To learn more about OpenSearch Serverless, see the documentation.

Read more


Amazon Q generative SQL in Amazon Redshift Query Editor now available in additional AWS regions

Amazon Q generative SQL in Amazon Redshift Query Editor is now available in the South America (Sao Paulo), Europe (London), and Canada (Central) AWS Regions. Amazon Q generative SQL is available in Amazon Redshift Query Editor, an out-of-the-box web-based SQL editor for Amazon Redshift, to simplify SQL query authoring and increase your productivity by allowing you to express SQL queries in natural language and receive SQL code recommendations. Furthermore, it allows you to get insights faster without extensive knowledge of your organization’s complex Amazon Redshift database metadata.

Amazon Q generative SQL uses generative Artificial Intelligence (AI) to analyze user intent, SQL query patterns, and schema metadata to identify common SQL query patterns directly within Amazon Redshift, accelerating the SQL query authoring process for users, and reducing the time required to derive actionable data insights. Amazon Q generative SQL provides a conversational interface where users can submit SQL queries in natural language, within the scope of their current data permissions. For example, when you submit a question such as 'Find total revenue by region,' Amazon Q generative SQL will recognize and suggest the appropriate SQL code for this frequent query pattern by joining multiple Amazon Redshift tables, thus saving time and decreasing the likelihood of errors. You can either accept the query or enhance your prior query by asking additional questions.

To learn more about pricing, visit the Amazon Q Developer pricing page. See the documentation to get started.
 

Read more


Amazon Redshift to enhance security by changing default behavior

Security is the top priority at Amazon Web Services (AWS). To that end, Amazon Redshift is introducing enhanced security defaults, which help you adhere to best practices in data security and reduce the risk of potential misconfigurations.

Three default security changes will take effect after January 10, 2025. First, public accessibility will be disabled by default for all newly created provisioned clusters and clusters restored from snapshots. By default, connections to clusters will only be permitted from client applications within the same Virtual Private Cloud (VPC). Second, database encryption will be enabled by default for provisioned clusters. When creating a provisioned cluster without specifying a KMS key, the cluster will automatically be encrypted with an AWS-owned key. Third, Amazon Redshift will enforce SSL connections by default for clients connecting to newly created provisioned and restored data warehouses. This default change will also apply to new serverless workgroups.

Please review your data warehouse creation configurations, scripts, and tools to make necessary changes to align with the new default settings before January 10, 2025, to avoid any potential disruption. You will still have the ability to modify cluster or workgroup settings to change the default behavior. Your existing data warehouses will not be impacted by these security enhancements. However, it is recommended that you review and update your configurations to align with the new default security settings in order to further strengthen your security posture.
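
One way to stay aligned is to make the new defaults explicit in provisioning code; a boto3 sketch follows (identifiers and credentials are placeholders, and the parameter group is assumed to set require_ssl to true).

    import boto3

    redshift = boto3.client("redshift")

    redshift.create_cluster(
        ClusterIdentifier="analytics-cluster",
        NodeType="ra3.xlplus",
        NumberOfNodes=2,
        MasterUsername="awsuser",
        MasterUserPassword="Str0ngPassw0rd!",  # placeholder; store secrets securely
        PubliclyAccessible=False,  # matches the new default: VPC-only access
        Encrypted=True,            # AWS-owned key is used if no KmsKeyId is given
        ClusterParameterGroupName="require-ssl-group",
    )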

These new default changes will be implemented in all AWS regions where Amazon Redshift is available. For more information, please refer to our documentation.
 

Read more


AWS Lake Formation is now available in the Asia Pacific (Malaysia) Region

AWS Lake Formation is a service that allows you to set up a secure data lake in days. A data lake is a centralized, curated, and secured repository that stores your data, both in its original form and prepared for analysis. A data lake enables you to break down data silos and combine different types of analytics to gain insights and guide better business decisions.

Creating a data lake with Lake Formation allows you to define where your data resides and what data access and security policies you want to apply. Your users can then access the centralized AWS Glue Data Catalog, which describes available data sets and their appropriate usage. They can then use these data sets with their choice of analytics and machine learning services, like Amazon EMR for Apache Spark, Amazon Redshift Spectrum, AWS Glue, Amazon QuickSight, and Amazon Athena.

For a list of regions where AWS Lake Formation is available, see the AWS Region Table.
 

Read more


Amazon Data Firehose supports continuous replication of database changes to Apache Iceberg Tables in Amazon S3

Amazon Data Firehose now enables capture and replication of database changes to Apache Iceberg Tables in Amazon S3 (Preview). This new feature allows customers to easily stream real-time data from MySQL and PostgreSQL databases directly into Apache Iceberg Tables.

Firehose is a fully managed, serverless streaming service that enables customers to capture, transform, and deliver data streams into Amazon S3, Amazon Redshift, OpenSearch, Splunk, Snowflake, and other destinations for analytics. With this functionality, Firehose performs an initial complete data copy from selected database tables, then continuously streams Change Data Capture (CDC) updates to reflect inserts, updates, and deletions in the Apache Iceberg Tables. This streamlined solution eliminates complex data pipeline setups while minimizing impact on database transaction performance.

Key capabilities include:
• Automatic creation of Apache Iceberg Tables matching source database schemas
• Automatic schema evolution in response to source changes
• Selective replication of specific databases, tables, and columns

This preview feature is available in all AWS regions except China, AWS GovCloud (US), and Asia Pacific (Malaysia) Regions. For terms and conditions, see Beta Service Participation in AWS Service Terms.

To get started, visit Amazon Data Firehose documentation and console.

To learn more about this feature, visit this AWS blog post.

Read more


Amazon QuickSight launches self-serve brand customization

Amazon QuickSight launches self-serve brand customization, which allows QuickSight admins with relevant AWS Identity and Access Management (IAM) permissions to align QuickSight’s user interface with their organization’s brand by modifying visual elements like brand colors and logo. This creates a cohesive look and feel that aligns with their organization’s identity. Brand customization includes customization of the logo, favicon, and color scheme used for QuickSight screen elements. Admins can configure and apply a custom brand through the public API or UI. Once a brand is applied to the account, it is materialized across all non-admin pages in the QuickSight console, embedded components, as well as schedules, alerts, and share emails. For more information and to see the list of all QuickSight components which can be customized, click here.

Self-serve brand customization is available with the Amazon QuickSight Enterprise Edition in all supported Amazon QuickSight regions - US East (Ohio and N. Virginia), US West (Oregon), Africa (Cape Town), Asia Pacific (Jakarta, Mumbai, Seoul, Singapore, Sydney and Tokyo), Canada (Central), China (Beijing), Europe (Frankfurt, Ireland, London, Milan, Paris, Stockholm, Zurich), South America (São Paulo), and AWS GovCloud (US-West).
 

Read more


Amazon OpenSearch Service now supports OpenSearch version 2.17

You can now run OpenSearch version 2.17 in Amazon OpenSearch Service. With OpenSearch 2.17, we have made several improvements in the areas of vector search, query performance and machine learning (ML) toolkit to help accelerate application development and enable generative AI workloads.

This launch introduces disk-optimized vector search, a new option for the vector engine that's designed to run efficiently with less memory to deliver accurate, economical vector search at scale. In addition to this, OpenSearch’s FAISS engine now supports byte vectors, lowering cost and latency by compressing k-NN indexes with minimal recall degradation. You can now encode numeric terms as a roaring bitmap, which enables you to perform aggregations, filtering, and more, with lower retrieval latency and reduced memory usage.

This launch also includes key features to help you build ML-powered applications. Firstly, with ML inference search processors, you can now run model predictions while executing search queries. In addition to this, you can also perform high-volume ML tasks, such as generating embeddings for large datasets and ingesting them into k-NN indexes, using asynchronous batch ingestion. Finally, this launch adds threat intelligence capabilities to the Security Analytics solution. This enables you to use customized Structured Threat Information Expression (STIX)-compliant threat intelligence feeds to provide insights to support decision-making and remediation.

For information on upgrading to OpenSearch 2.17, please see the documentation. OpenSearch 2.17 is now available in all AWS Regions where Amazon OpenSearch Service is available.

Read more


AWS Glue is now available in Asia Pacific (Malaysia)

We are happy to announce that AWS Glue, a serverless data integration service, is now available in the AWS Asia Pacific (Malaysia) Region.

AWS Glue is a serverless data integration service that makes it simple to discover, prepare, and combine data for analytics, machine learning, and application development. AWS Glue provides both visual and code-based interfaces to make data integration simpler so you can analyze your data and put it to use in minutes instead of months.

To learn more, visit the AWS Glue product page and our documentation. For AWS Glue region availability, please see the AWS Region table.
 

Read more


Amazon OpenSearch Service adds support for two new third-party plugins

Amazon OpenSearch Service now supports two new third-party plugins: an encryption plugin from Portal26.ai and a Name Match plugin from Babel Street. These are optional plugins that you can choose to associate with your OpenSearch Service clusters.

The encryption plugin from Portal26.ai uses NIST FIPS 140-2 certified encryption to encrypt the data as it gets indexed by the Amazon OpenSearch Service. This plugin includes a Bring Your Own Key (BYOK) capability allowing you to set up separate encryption keys per index, making it easier to support multi-tenant use cases.

The Babel Street Match Plugin for OpenSearch accurately matches names, organizations, addresses, and dates in over 24 languages, enhancing security operations and regulatory compliance while reducing false positives and increasing operational efficiency.

You can use the AWS Management Console and AWS CLI to associate, disassociate, and list third-party plugins in your domain. Customers can now use the "CreatePackage" and "AssociatePackage" APIs to upload and associate the plugin with the Amazon OpenSearch Service cluster. "PACKAGE-CONFIG" and "PACKAGE-LICENSE" package types are supported for uploading the plugin configuration and license files that you can procure directly from Portal26.ai for the encryption plugin, and Babel Street for the name match plugin.

These third-party plugins are available for Amazon OpenSearch domains running OpenSearch version 2.15 and above, in all AWS regions where Amazon OpenSearch Service is available except the AWS GovCloud (US) Regions.

For more information about third party plugins, please see the documentation. To learn more about Amazon OpenSearch Service, please visit the product page.
 

Read more


Amazon OpenSearch Service now supports 4th generation Intel (C7i, M7i, R7i) instances

Amazon OpenSearch Service now supports compute optimized (C7i), general purpose (M7i), and memory optimized (R7i) instances based on 4th Generation Intel Xeon Scalable processors. These instances deliver up to 15% better price performance over the 3rd generation Intel C6i, M6i, and R6i instances, respectively. You can update your domain to the new instances seamlessly through the OpenSearch Service console or APIs.

These instances support the new Intel Advanced Matrix Extensions (AMX), which accelerate matrix multiplication operations for applications such as CPU-based ML. The 4th generation Intel instances support the latest DDR5 memory, offering higher bandwidth compared to 3rd generation Intel processors. To learn more about 4th generation Intel improvements, please see the C7i blog, M7i blog, and R7i blog.

At least one 4th generation Intel instance type is now available on Amazon OpenSearch Service across 22 regions globally: US East (N. Virginia), US East (Ohio), US West (N. California), US West (Oregon), Canada (Central), Asia Pacific (Jakarta), Asia Pacific (Malaysia), Asia Pacific (Mumbai), Asia Pacific (Seoul), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Europe (Frankfurt), Europe (Ireland), Europe (London), Europe (Milan), Europe (Paris), Europe (Spain), Europe (Stockholm), South America (Sao Paulo), AWS GovCloud (US-East), and AWS GovCloud (US-West).

To learn more about region specific instance type availability and their pricing, visit our pricing page. To learn more about Amazon OpenSearch Service, please visit the product page.

Read more


AWS Glue Data Catalog now supports scheduled generation of column level statistics

AWS Glue Data Catalog now supports the scheduled generation of column-level statistics for Apache Iceberg tables and file formats such as Parquet, JSON, CSV, XML, ORC, and ION. With this launch, you can simplify and automate the generation of statistics by creating a recurring schedule in the Glue Data Catalog. These statistics are integrated with the cost-based optimizer (CBO) from Amazon Redshift Spectrum and Amazon Athena, resulting in improved query performance and potential cost savings.

Previously, to set up a recurring statistics generation schedule, you had to call AWS services using a combination of AWS Lambda and Amazon EventBridge Scheduler. With this new feature, you can now provide the recurring schedule as an additional configuration to the Glue Data Catalog, along with a sampling percentage. For each scheduled run, the number of distinct values (NDVs) is collected for Apache Iceberg tables, and additional statistics such as the number of nulls, maximum, minimum, and average length are collected for other file formats. As the statistics are updated, Amazon Redshift and Amazon Athena use them to optimize queries, using optimizations such as optimal join order or cost-based aggregation pushdown. You have visibility into the status and timing of each statistics generation run, as well as the updated statistics values.
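
A hedged boto3 sketch of scheduling statistics generation follows; the method name and parameter shapes are assumptions based on this launch (a snake_case rendering of the new settings API), so verify them against the Glue API reference.

    import boto3

    glue = boto3.client("glue")

    # Generate statistics every Monday at 03:00 UTC with 50% sampling.
    glue.create_column_statistics_task_settings(
        DatabaseName="analytics",
        TableName="events",
        Role="arn:aws:iam::111122223333:role/GlueStatsRole",
        Schedule="cron(0 3 ? * MON *)",
        SampleSize=50.0,
    )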

To get started, you can schedule statistics generation using the AWS Glue Data Catalog Console or AWS Glue APIs. The support for scheduled generation of AWS Glue Catalog statistics is generally available in all regions where Amazon EventBridge Scheduler is available. Visit AWS Glue Catalog documentation to learn more.

Read more


Apache Flink connector for Amazon DynamoDB adds DynamoDB Streams as a source

Today, AWS announced support for a new Apache Flink connector for Amazon DynamoDB. The new connector, contributed by AWS to the Apache Flink open source project, adds Amazon DynamoDB Streams as a new source for Apache Flink. You can now process DynamoDB streams events with Apache Flink, a popular framework and engine for processing and analyzing streaming data.

Amazon DynamoDB is a serverless, NoSQL database service that enables you to develop modern applications at any scale. DynamoDB Streams provides a time-ordered sequence of item-level changes (insert, update, and delete) in a DynamoDB table. With Amazon Managed Service for Apache Flink, you can transform and analyze DynamoDB streams data in real time using Apache Flink and integrate applications with other AWS services such as Amazon S3, Amazon OpenSearch, Amazon Managed Streaming for Apache Kafka, and more. Apache Flink connectors are software components that move data into and out of an Amazon Managed Service for Apache Flink application. You can use the new connector to read data from a DynamoDB stream starting with Apache Flink version 1.19. With Amazon Managed Service for Apache Flink, there are no servers or clusters to manage, and there is no compute and storage infrastructure to set up.

The Apache Flink repo for AWS connectors can be found here. For detailed documentation and setup instructions, visit our Documentation Page.

Read more


Amazon Managed Service for Apache Flink is now available in the Asia Pacific (Kuala Lumpur) Region

Starting today, customers can use Amazon Managed Service for Apache Flink in the Asia Pacific (Kuala Lumpur) Region to build real-time stream processing applications.

Amazon Managed Service for Apache Flink makes it easier to transform and analyze streaming data in real time with Apache Flink. Apache Flink is an open source framework and engine for processing data streams. Amazon Managed Service for Apache Flink reduces the complexity of building and managing Apache Flink applications and integrates with Amazon Managed Streaming for Apache Kafka (Amazon MSK), Amazon Kinesis Data Streams, Amazon OpenSearch Service, Amazon DynamoDB streams, Amazon Simple Storage Service (Amazon S3), custom integrations, and more using built-in connectors.

For a list of the AWS Regions where Amazon Managed Service for Apache Flink is available, please see the AWS Region Table.

You can learn more about Amazon Managed Service for Apache Flink here.

Read more


Amazon OpenSearch Ingestion adds support for ingesting data from Amazon Kinesis Data Streams

Amazon OpenSearch Ingestion now allows you to ingest records from Amazon Kinesis Data Streams, enabling you to seamlessly index streaming data in Amazon OpenSearch Service managed clusters or serverless collections without the need for any third-party data connectors. With this integration, you can now use Amazon OpenSearch Ingestion to perform near-real-time aggregations, sampling, and anomaly detection on data ingested from Amazon Kinesis Data Streams, helping you to build efficient data pipelines to power your event-driven applications and real-time analytics use cases.

Amazon OpenSearch Ingestion pipelines can consume data records from one or more Amazon Kinesis Data Streams and transform the data before writing it to Amazon OpenSearch Service or Amazon S3. While reading data from Amazon Kinesis Data Streams via Amazon OpenSearch Ingestion, you have the option to use either enhanced fan-out or shared reads, giving you the flexibility to balance speed and cost. You can also check out this blog post to learn more about this feature.

This feature is available in all 15 AWS commercial Regions where Amazon OpenSearch Ingestion is currently available: US East (Ohio), US East (N. Virginia), US West (Oregon), US West (N. California), Europe (Ireland), Europe (London), Europe (Frankfurt), Asia Pacific (Tokyo), Asia Pacific (Sydney), Asia Pacific (Singapore), Asia Pacific (Mumbai), Asia Pacific (Seoul), Canada (Central), South America (Sao Paulo), and Europe (Stockholm).

To learn more, see the Amazon OpenSearch Ingestion webpage and the Amazon OpenSearch Service Developer Guide.

Read more


Amazon Kinesis Data Streams launches CloudFormation support for resource policies

Amazon Kinesis Data Streams now provides AWS CloudFormation support for managing resource policies for data streams and consumers. You can use CloudFormation templates to programmatically deploy resource policies in a secure, efficient, and repeatable way, reducing the risk of human error from manual configuration.

Kinesis Data Streams allows users to capture, process, and store data streams in real time at any scale. CloudFormation uses stacks to manage AWS resources, allowing you to track changes, apply updates automatically, and easily roll back changes when needed.
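
A minimal template sketch using the AWS::Kinesis::ResourcePolicy resource, deployed with boto3; ARNs, account IDs, and the policy statement are placeholders.

    import json

    import boto3

    template = {
        "Resources": {
            "StreamSharePolicy": {
                "Type": "AWS::Kinesis::ResourcePolicy",
                "Properties": {
                    "ResourceArn": "arn:aws:kinesis:us-east-1:111122223333:stream/clickstream",
                    "ResourcePolicy": {
                        "Version": "2012-10-17",
                        "Statement": [{
                            "Effect": "Allow",
                            "Principal": {"AWS": "arn:aws:iam::444455556666:root"},
                            "Action": [
                                "kinesis:DescribeStreamSummary",
                                "kinesis:GetShardIterator",
                                "kinesis:GetRecords",
                            ],
                            "Resource": "arn:aws:kinesis:us-east-1:111122223333:stream/clickstream",
                        }],
                    },
                },
            }
        }
    }

    boto3.client("cloudformation").create_stack(
        StackName="kinesis-resource-policy",
        TemplateBody=json.dumps(template),
    )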

CloudFormation support for resource policies is available in all AWS regions where Amazon Kinesis Data Streams is offered, including the AWS GovCloud (US) Regions and China Regions. To learn more about Amazon Kinesis Data Streams resource policies, visit the developer guide.

Read more


Amazon DataZone now supports meaning-based Semantic search

Amazon DataZone now supports meaning-based Semantic search in its business data catalog, enhancing how data users search and discover assets. With this new capability, users can search by concept and related terms, in addition to the existing keyword-based search. Amazon DataZone is a data management service for customers to catalog, discover, share, and govern data at scale across organizational boundaries with governance and access controls.

As data users look to solve their analytics use cases, they start their journey with a search in the business data catalog to understand what data is available. With this launch, users can discover related datasets in Amazon DataZone based on the intent of the user’s query. For example, a search for “profit” now returns data assets related to sales, costs, and revenue, in addition to assets matching the keyword “profit”. This significantly improves the relevance and quality of the search results and helps support the desired analytics use case. Amazon DataZone’s semantic search feature is powered by a GenAI search engine. This search engine uses an embedded language model to generate sparse vectors which enrich assets with semantically related terms.

Semantic search is available in all AWS Regions where Amazon DataZone is available.

To learn more, visit Amazon DataZone and get started using the guide in documentation.

Read more


Amazon QuickSight now supports Client Credentials OAuth for Snowflake through API/CLI

Today, Amazon QuickSight is announcing the general availability of Client Credentials flow based OAuth through API/CLI to connect to Snowflake data sources. This launch enables you to create Snowflake connections as part of your Infrastructure as Code (IaC) efforts with full support for AWS CloudFormation.

This type of OAuth solution is used to obtain an access token for machine-to-machine communication. This flow is suitable for scenarios where a client (e.g., a server-side application or a script) needs to access resources hosted on a server without the involvement of a user. The launch includes support for Token (Client Secrets Based OAuth) and X509 (Client Private Key JWT) based OAuth. This launch also includes support for Role-Based Access Control (RBAC). RBAC is used to display the corresponding schema/table information tied to that role during dataset creation by QuickSight authors.

This feature is now available in all supported Amazon QuickSight Regions, listed here. For more details, click here.

Read more


Amazon QuickSight now supports Client Credentials OAuth for Starburst through API/CLI

Today, Amazon QuickSight is announcing the general availability of Client Credentials flow based OAuth through API/CLI to connect to Starburst data sources. This launch enables you to create Starburst connections as part of your Infrastructure as Code (IaC) efforts with full support for AWS CloudFormation.

This type of OAuth solution is used to obtain an access token for machine-to-machine communication. This flow is suitable for scenarios where a client (e.g., a server-side application or a script) needs to access resources hosted on a server without the involvement of a user. The launch includes support for Token (Client Secrets Based OAuth) and X509 (Client Private Key JWT) based OAuth. This launch also includes support for Role-Based Access Control (RBAC). RBAC is used to display the corresponding schema/table information tied to that role during dataset creation by QuickSight authors.

This feature is now available in all supported Amazon QuickSight Regions, listed here. For more details, click here.

Read more


Amazon Redshift Serverless higher base capacity of 1024 RPUs is now available in additional AWS regions

Amazon Redshift Serverless higher base capacity of up to 1024 Redshift Processing Units (RPUs) is now available in the AWS Europe (Frankfurt) and Europe (Ireland) regions. Amazon Redshift Serverless measures data warehouse capacity in RPUs, and you pay only for the duration of workloads run in RPU-hours on a per-second basis. Previously, the highest base capacity was 512 RPUs. With the new higher base capacity of 1024 RPUs, you now have even more flexibility to support large, complex workloads that process terabytes or petabytes of data, and to accelerate data loading and querying based on your price-performance requirements. You now have a base capacity range from 8 to 1024 RPUs in the two additional AWS regions.

The larger base capacity of Amazon Redshift Serverless can improve performance for workloads such as complex and long-running queries, queries with large numbers of columns, queries with joins and aggregations requiring high memory, data lake queries scanning large amounts of data, and ingestion of large datasets into the data warehouse.
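
Raising an existing workgroup to the new maximum is a one-call change; a boto3 sketch follows (the workgroup name is a placeholder).

    import boto3

    rs = boto3.client("redshift-serverless")

    # Set the workgroup's base capacity to the new 1024 RPU maximum.
    rs.update_workgroup(
        workgroupName="analytics-wg",
        baseCapacity=1024,
    )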

To get started, see the Amazon Redshift Serverless feature page, user documentation, and API Reference.

Read more


Configure Route 53 CIDR block rules based on Internet Monitor suggestions

With Amazon CloudWatch Internet Monitor’s new traffic optimization suggestions feature, you can configure your Amazon Route 53 CIDR blocks to map your application’s client users to an optimal AWS Region based on network behavior.

Internet Monitor now provides actionable suggestions to help you optimize your Route 53 IP-based routing configurations. By leveraging the new traffic insights for your application, you can easily identify the optimal AWS Regions for routing your end user traffic, and then configure your Route 53 IP-based routing based on these recommendations.

Internet Monitor collects performance data and measures latency for your client subnets behind each DNS resolver. This enables Internet Monitor to recommend the AWS Region that will provide the lowest latency for your users, based on their locations, so that you can fine-tune your DNS routing to provide the best performance for users.
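
Acting on a suggestion means mapping client subnets to locations in a Route 53 CIDR collection that your IP-based routing records reference; a boto3 sketch with placeholder CIDRs and names follows.

    import uuid

    import boto3

    r53 = boto3.client("route53")

    # Create a CIDR collection for Internet Monitor-informed routing.
    coll = r53.create_cidr_collection(
        Name="internet-monitor-optimized",
        CallerReference=str(uuid.uuid4()),
    )

    # Map client subnets to a named location used by IP-based routing records.
    r53.change_cidr_collection(
        Id=coll["Collection"]["Id"],
        Changes=[{
            "LocationName": "us-west-clients",
            "Action": "PUT",
            "CidrList": ["203.0.113.0/24", "198.51.100.0/24"],
        }],
    )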

To learn more, visit the CloudWatch Internet Monitor user guide documentation.

Read more


Amazon OpenSearch Service launches next-gen UI for enhanced data exploration and collaboration

Amazon OpenSearch Service launches a modernized operational analytics experience that enables users to gain insights across data spanning managed domains and serverless collections from a single endpoint. The launch also includes Workspaces to enhance collaboration and productivity, allowing teams to create dedicated spaces. Discover has been revamped to provide a unified log exploration experience supporting languages such as SQL and Piped Processing Language (PPL), in addition to DQL and Lucene. Discover now features a data selector to support multiple sources, a new visual design, and query autocomplete for improved usability. This experience ensures users can access the latest UI enhancements, regardless of the version of the underlying managed cluster or collection.

The new OpenSearch analytics experience helps users gain insights from their operational data by providing purpose-built features for observability, security analytics, essentials and search use cases. With the enhanced Discover interface, users can now analyze data from multiple sources without switching tools, improving efficiency. Workspaces enable better collaboration by creating dedicated environments for teams to work on dashboards, saved queries, and other relevant content. Availability of the latest UI updates across all versions ensures uninterrupted access to the newest features and tools.

The new OpenSearch user interface can connect to OpenSearch domains (above version 1.3) and serverless collections. It is now available in 13 AWS commercial regions. To get started, create an OpenSearch application in AWS Management Console. Learn more at Amazon OpenSearch Service Developer Guide.

Read more


AWS Clean Rooms ML supports privacy-enhanced model training and inference

Today, AWS announces AWS Clean Rooms ML custom modeling, which enables organizations to generate predictive insights with their partners running their own machine-learning (ML) models and using their data in a clean rooms collaboration. With this launch, companies and their partners can train ML models and run inference on collective datasets without having to share sensitive data or proprietary models.

For example, advertisers can bring their proprietary model and data into a Clean Rooms collaboration, and invite publishers to join their data to train and deploy a custom ML model that helps them increase campaign effectiveness—all without sharing their custom model and data with one another. Similarly, financial institutions can use historical transaction records to train a custom ML model, and invite partners into a Clean Rooms collaboration to detect potential fraudulent transactions, without having to share underlying data and model among collaborators. With AWS Clean Rooms ML custom modeling, you can gain valuable insights with your partners while applying privacy-enhancing controls when running model training and inferencing by specifying the datasets to be used in a Clean Rooms environment. This allows you and your partners to approve the datasets used, and removes the need to share sensitive data or proprietary models with one another. AWS Clean Rooms ML also offers an AWS-authored lookalike modeling capability that can help you improve lookalike segment accuracy by up to 36% compared to industry baselines.

AWS Clean Rooms ML is available as a capability of AWS Clean Rooms in these AWS Regions. To learn more, visit AWS Clean Rooms ML.

Read more


Amazon OpenSearch Service announces Extended Support for engine versions

Today, we are announcing the end of Standard Support and the Extended Support timelines for legacy Elasticsearch and OpenSearch versions. Standard Support ends on Nov 7, 2025, for legacy Elasticsearch versions up to 6.7, Elasticsearch versions 7.1 through 7.8, OpenSearch versions 1.0 through 1.2, and OpenSearch versions 2.3 through 2.9. With Extended Support, for an incremental flat fee over regular instance pricing, you continue to get critical security updates beyond the end of Standard Support. For more information, see the blog.

All Elasticsearch versions will receive at least 12 months of Extended Support, with Elasticsearch v5.6 receiving 36 months of Extended Support. OpenSearch versions running on OpenSearch Service will get at least 12 months of Standard Support after the end-of-support date for the corresponding upstream open-source OpenSearch version, or at least 12 months of Standard Support after the release of the next minor version on OpenSearch Service, whichever is longer. For support timelines by version, please see the documentation. While running a version in Extended Support, you will be charged an additional flat fee per Normalized Instance Hour (NIH) (e.g., $0.0065/NIH for US East (N. Virginia)). NIH is computed as a factor of instance size (e.g., medium, large) and number of instance hours. For more information on Extended Support charges, please see the pricing page.

End-of-support and Extended Support dates are applicable to all OpenSearch Service clusters running OpenSearch or Elasticsearch versions, in all AWS regions where Amazon OpenSearch Service is available. Please refer to the AWS Region Table for more information about Amazon OpenSearch Service availability.

Read more


Express brokers for Amazon MSK is now generally available

Today, AWS announces the general availability of Express brokers for Amazon Managed Streaming for Apache Kafka (Amazon MSK). Express brokers are a new broker type for Amazon MSK Provisioned designed to deliver up to 3x more throughput per broker, scale up to 20x faster, and reduce recovery time by 90% as compared to standard Apache Kafka brokers. Express brokers come preconfigured with Kafka best practices by default, support all Kafka APIs, and provide the same low-latency performance that Amazon MSK customers expect, so they can continue using existing client applications without any changes.

With Express brokers, customers can provision, scale up, and scale down Kafka cluster capacity in minutes, offload storage management with virtually unlimited pay-as-you-go storage, and build highly resilient applications. Customers can also continue using all of the Amazon MSK key features, including security, connectivity, and observability options, as well as popular integrations, including Amazon MSK Connect, Amazon Simple Storage Service (Amazon S3), AWS Glue Schema Registry, and more. Express brokers are currently available on Kafka version 3.6 and come in three different sizes of Graviton3-based M7g instances: large, 4xlarge, and 16xlarge. Each broker is charged an hourly rate with storage and data ingested charged separately on a pay-as-you-go basis. 

Express brokers are available in the following AWS Regions: US East (N. Virginia), US East (Ohio), US West (Oregon), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Europe (Frankfurt), Europe (Ireland), and Europe (Stockholm).

To learn more, check out the Amazon MSK overview page, pricing page, and developer guide.

To learn more about Express brokers, visit this AWS blog post.

 

Read more


Amazon Data Firehose support for delivering data into Apache Iceberg tables is available in additional AWS Regions

Amazon Data Firehose support for delivering data streams into Apache Iceberg tables in Amazon S3 is now available in all AWS Regions except the AWS China Regions, the AWS GovCloud (US) Regions, and the Asia Pacific (Malaysia) Region (ap-southeast-5).

With this feature, Firehose integrates with Apache Iceberg, so customers can deliver data streams directly into Apache Iceberg tables in their Amazon S3 data lake. Firehose can acquire data streams from Kinesis Data Streams, Amazon MSK, or the Direct PUT API, and is also integrated to acquire streams from AWS services such as AWS WAF web ACL logs, Amazon CloudWatch Logs, Amazon VPC Flow Logs, AWS IoT, Amazon SNS, Amazon API Gateway access logs, and many others listed here. Customers can stream data from any of these sources directly into Apache Iceberg tables in Amazon S3, and avoid multi-step processes. Firehose is serverless, so customers can simply set up a stream by configuring the source and destination properties, and pay based on bytes processed.

The new feature also allows customers to route records in a data stream to different Apache Iceberg tables based on the content of the incoming record. To route records to different tables, customers can configure routing rules using JSON expressions. Additionally, customers can specify if the incoming record should apply a row-level update or delete operation in the destination Apache Iceberg table, and automate processing for data correction and right to forget scenarios.

To learn more and get started, visit Amazon Data Firehose documentation, pricing, and console.

Read more


Amazon Redshift Multi-AZ is generally available for RA3 clusters in 3 additional AWS regions

Amazon Redshift is announcing the general availability of Multi-AZ deployments for RA3 clusters in the Asia Pacific (Malaysia), Europe (London), and South America (Sao Paulo) AWS regions. Redshift Multi-AZ deployments support running your data warehouse in multiple AWS Availability Zones (AZs) simultaneously, continuing to operate in unforeseen failure scenarios. A Multi-AZ deployment raises the Amazon Redshift Service Level Agreement (SLA) to 99.99% and delivers a highly available data warehouse for the most demanding mission-critical workloads.

Enterprise customers with mission critical workloads require a data warehouse with fast failover times and simplified operations that minimizes impact to applications. Redshift Multi-AZ deployment helps meet these demands by reducing recovery time and automatically recovering in another AZ during an unlikely event such as an AZ failure. A Redshift Multi-AZ data warehouse also maximizes query processing throughput by operating in multiple AZs and using compute resources from both AZs to process read and write queries.

Amazon Redshift Multi-AZ is now generally available for RA3 clusters through the Redshift Console, API and CLI. For all regions where Multi-AZ is available, see the supported AWS regions.

To learn more about Amazon Redshift Multi-AZ, see the Amazon Redshift Reliability page and Amazon Redshift Multi-AZ documentation page.

Read more


Amazon DataZone Achieves HITRUST Certification

Amazon DataZone has achieved HITRUST certification, demonstrating it meets the requirements established by the Health Information Trust Alliance Common Security Framework (HITRUST CSF) for managing sensitive health data, as required by healthcare and life sciences customers.

This certification includes the testing of over 600 controls derived from multiple security frameworks such as ISO 27001 and NIST 800-53r5, providing a comprehensive set of baseline security and privacy controls. The 2024 AWS HITRUST certification is now available to AWS customers through AWS Artifact in the AWS Management Console. Customers can leverage the certification to meet applicable controls via HITRUST’s Inheritance Program as defined under the HITRUST Shared Responsibility Matrix (SRM).

Amazon DataZone is a data management service that makes it faster and easier for customers to catalog, discover, share, and govern data between data producers and consumers within their organization. For more information about Amazon DataZone and how to get started, refer to our product page and review the Amazon DataZone technical documentation.
 

Read more


New Kinesis Client Library 3.0 reduces stream processing compute costs by up to 33%

You can now reduce compute costs to process streaming data with Kinesis Client Library (KCL) 3.0 by up to 33% compared to previous KCL versions. KCL 3.0 introduces an enhanced load balancing algorithm that continuously monitors resource utilization of the stream processing workers and automatically redistributes the load from over-utilized workers to other underutilized workers. This ensures even CPU utilization across workers and removes the need to over-provision the stream processing compute workers which reduces cost. Additionally, KCL 3.0 is built with the AWS SDK for Java 2.x for improved performance and security features, fully removing the dependency on the AWS SDK for Java 1.x.

KCL is an open-source library that simplifies the development of stream processing applications with Amazon Kinesis Data Streams. It manages complex tasks associated with distributed computing such as load balancing, fault tolerance, and service coordination, allowing you to focus solely on your core business logic. You can upgrade your stream processing application running on KCL 2.x by simply replacing the current library with KCL 3.0, without any changes in your application code. KCL 3.0 supports stream processing applications running on Amazon EC2 instances or on container services such as Amazon ECS, Amazon EKS, or AWS Fargate.

KCL 3.0 is available with Amazon Kinesis Data Streams in all AWS regions. To learn more, see the Amazon Kinesis Data Streams developer guide, KCL 3.0 release notes, and launch blog.

Read more


Amazon MSK now supports vector embedding generation using Amazon Bedrock

Amazon MSK (Managed Streaming for Apache Kafka) now supports new Amazon Managed Service for Apache Flink blueprints to generate vector embeddings using Amazon Bedrock, making it easier to build real-time AI applications powered by up-to-date, contextual data. This blueprint simplifies the process of incorporating the latest data from your Amazon MSK streaming pipelines into your generative AI models, eliminating the need to write custom code to integrate real-time data streams, vector databases, and large language models.

With just a few clicks, customers can configure the blueprint to continuously generate vector embeddings for their Amazon MSK data streams using Bedrock's embedding models, then index those embeddings in Amazon OpenSearch Service. This allows customers to combine the context from real-time data with Bedrock's powerful large language models to generate accurate, up-to-date AI responses without writing custom code. Customers can also choose to improve the efficiency of data retrieval using built-in support for data chunking techniques from LangChain, an open-source library, supporting high-quality inputs for model ingestion. The blueprint manages the data integration and processing between MSK, the chosen embedding model, and the OpenSearch vector store, allowing customers to focus on building their AI applications rather than managing the underlying integration.

The real-time vector embedding blueprint is generally available in the US East (N. Virginia), US East (Ohio), US West (Oregon), Europe (Paris), Europe (London), Europe (Ireland) and South America (Sao Paulo) AWS Regions. Visit the Amazon MSK documentation for the list of additional Regions that will be supported over the next few weeks. To learn more about how to use the blueprint to generate real-time vector embeddings from your Amazon MSK data, visit the AWS blog.

Read more


Amazon RDS announces cross-region automated backups in Asia Pacific (Hyderabad) and Africa (Cape Town)

Cross-Region Automated Backup replication for Amazon RDS is now available in the Asia Pacific (Hyderabad) and Africa (Cape Town) Regions. This launch allows you to set up automated backup replication between Asia Pacific (Hyderabad) and Asia Pacific (Mumbai), and between Africa (Cape Town) and the Europe (Ireland), Europe (London), or Europe (Frankfurt) Regions.

Automated Backups enable recovery capability for mission-critical databases by providing you the ability to restore your database to a specific point in time within your backup retention period. With Cross-Region Automated Backup replication, RDS will replicate snapshots and transaction logs to the chosen destination AWS Region. In the event that your primary AWS Region becomes unavailable, you can restore the automated backup to a point in time in the secondary AWS Region and quickly resume operations. As transaction logs are uploaded to the target AWS Region frequently, you can achieve a Recovery Point Objective (RPO) of a few minutes.

You can set up Cross-Region Automated Backup replication with just a few clicks on the Amazon RDS Management Console or using the AWS SDK or CLI. Cross-Region Automated Backup replication is available on Amazon RDS for PostgreSQL, Amazon RDS for MariaDB, Amazon RDS for MySQL, Amazon RDS for Oracle, and Amazon RDS for Microsoft SQL Server. For more information, including instructions on getting started, read the Amazon RDS documentation.
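
For example, here is a minimal boto3 sketch of starting replication. Replication is initiated from the destination Region; the source instance ARN and retention period are placeholders.

```python
import boto3

# Replication is initiated from the destination Region (illustrative:
# Mumbai as the destination for a Hyderabad source instance).
rds = boto3.client("rds", region_name="ap-south-1")

rds.start_db_instance_automated_backups_replication(
    SourceDBInstanceArn="arn:aws:rds:ap-south-2:123456789012:db:mydb",  # placeholder
    BackupRetentionPeriod=7,  # days to retain the replicated backups
)
```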

Read more


AWS announces CSV result format support for Amazon Redshift Data API

Amazon Redshift Data API enables you to access data efficiently from Amazon Redshift data warehouses by eliminating the need to manage database drivers, connections, network configurations, data buffering, and more. Data API now supports a comma-separated values (CSV) result format, which provides flexibility in how you access and process data, allowing you to choose between JSON and CSV formats based on your application needs.

With CSV result format, you can now specify whether you want your query results formatted as JSON or CSV through the --result-format parameter when calling ExecuteStatement and BatchExecuteStatement APIs. To retrieve CSV results, use the new GetStatementResultV2 API which supports CSV results, while GetStatementResult API continues to support only JSON. If not specified, the default format remains JSON.

CSV support with Data API is now generally available for both Redshift Provisioned and Amazon Redshift Serverless data warehouses in all AWS commercial and the AWS GovCloud (US) Regions which support Data API. To get started and learn more, visit the Amazon Redshift database developer guide.
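
As a minimal boto3 sketch, assuming a Redshift Serverless workgroup named my-serverless-wg and a sample query:

```python
import boto3

client = boto3.client("redshift-data")

# Request CSV-formatted results via the ResultFormat parameter.
resp = client.execute_statement(
    WorkgroupName="my-serverless-wg",  # placeholder workgroup
    Database="dev",
    Sql="SELECT venueid, venuename FROM venue LIMIT 10",
    ResultFormat="CSV",
)

# Poll DescribeStatement until the query reaches FINISHED in real code,
# then fetch results with the new V2 API; GetStatementResult still
# returns JSON-formatted results only.
rows = client.get_statement_result_v2(Id=resp["Id"])
```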

Read more


application-services

Amazon SageMaker Lakehouse and Amazon Redshift support for zero-ETL integrations from eight applications

Amazon SageMaker Lakehouse and Amazon Redshift now support zero-ETL integrations from applications, automating the extraction and loading of data from eight applications, including Salesforce, SAP, ServiceNow, and Zendesk. As an open, unified, and secure lakehouse for your analytics and AI initiatives, Amazon SageMaker Lakehouse enhances these integrations to streamline your data management processes.

These zero-ETL integrations are fully managed by AWS and minimize the need to build ETL data pipelines. With this new zero-ETL integration, you can efficiently extract and load valuable data from your customer support, relationship management, and ERP applications into your data lake and data warehouse for analysis. Zero-ETL integration reduces users' operational burden and saves the weeks of engineering effort needed to design, build, and test data pipelines. By selecting a few settings in the no-code interface, you can quickly set up your zero-ETL integration to automatically ingest and continually maintain an up-to-date replica of your data in the data lake and data warehouse. Zero-ETL integrations help you focus on deriving insights from your application data, breaking down data silos in your organization and improving operational efficiency. Now run enhanced analysis on your application data using Apache Spark and Amazon Redshift for analytics or machine learning. Optimize your data ingestion processes and focus instead on analysis and gaining insights. 

Amazon SageMaker Lakehouse and Amazon Redshift support for zero-ETL integrations from eight applications is now generally available in the US East (N. Virginia), US East (Ohio), US West (Oregon), Asia Pacific (Hong Kong), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Europe (Frankfurt), Europe (Ireland), and Europe (Stockholm) AWS Regions.

You can create and manage integrations using the AWS Glue console, the AWS Command Line Interface (AWS CLI), or the AWS Glue APIs. To learn more, visit What is zero-ETL and What is AWS Glue.
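
A rough sketch of the API path with boto3 follows; the CreateIntegration parameters shown and both ARNs are assumptions for illustration, and the source must reference an already-configured application connection.

```python
import boto3

glue = boto3.client("glue")

# Sketch: create a zero-ETL integration from a configured Salesforce
# connection into a Redshift Serverless namespace (ARNs are placeholders).
glue.create_integration(
    IntegrationName="salesforce-to-redshift",
    SourceArn="arn:aws:glue:us-east-1:123456789012:connection/salesforce-conn",
    TargetArn="arn:aws:redshift-serverless:us-east-1:123456789012:namespace/abc-123",
    Description="Zero-ETL replication of Salesforce objects",
)
```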

Read more


Amazon Bedrock now supports multi-agent collaboration

Amazon Bedrock now supports multi-agent collaboration, allowing organizations to build and manage multiple AI agents that work together to solve complex workflows. This feature allows developers to create agents with specialized roles tailored for specific business needs, such as financial data collection, research, and decision-making. By enabling seamless agent collaboration, Amazon Bedrock empowers organizations to optimize performance across industries like finance, customer service, and healthcare.

With multi-agent collaboration on Amazon Bedrock, organizations can effortlessly master complex workflows, achieving highly accurate and scalable results across diverse applications. In financial services, for example, specialized agents coordinate to gather data, analyze trends, and provide actionable recommendations—working in parallel to improve response times and precision. This collaborative feature allows businesses to quickly build, deploy, and scale multi-agent setups, reducing development time while ensuring seamless integration and adaptability to evolving needs.

Multi-agent collaboration is currently available in the US East (N. Virginia), US West (Oregon), and Europe (Ireland) AWS Regions.

To learn more, visit Amazon Bedrock Agents.
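
A hedged boto3 sketch of the setup is shown below: a supervisor agent is created with collaboration enabled, and an existing specialist agent is attached as a collaborator. The names, model ID, role ARN, and alias ARN are placeholders.

```python
import boto3

bedrock_agent = boto3.client("bedrock-agent")

# Create a supervisor agent that coordinates specialist agents.
supervisor = bedrock_agent.create_agent(
    agentName="portfolio-supervisor",
    foundationModel="anthropic.claude-3-5-sonnet-20241022-v2:0",
    instruction="Coordinate the data-collection and analysis agents.",
    agentResourceRoleArn="arn:aws:iam::123456789012:role/BedrockAgentRole",
    agentCollaboration="SUPERVISOR",
)

# Attach an existing specialist agent as a collaborator (alias ARN is a placeholder).
bedrock_agent.associate_agent_collaborator(
    agentId=supervisor["agent"]["agentId"],
    agentVersion="DRAFT",
    agentDescriptor={
        "aliasArn": "arn:aws:bedrock:us-east-1:123456789012:agent-alias/AGENTID/ALIASID"
    },
    collaboratorName="market-data-collector",
    collaborationInstruction="Gather the financial data the supervisor requests.",
)
```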

Read more


Amazon EventBridge and AWS Step Functions announce integration with private APIs

Amazon EventBridge and AWS Step Functions now support integration with private APIs powered by AWS PrivateLink and Amazon VPC Lattice, making it easier for customers to accelerate innovation and simplify modernization of distributed applications across public and private networks, both on-premises and in the cloud. This allows customers to bring the capabilities of AWS cloud to new and existing workloads, achieving higher performance, agility, and lower costs.

Enterprises across industries are modernizing their applications to drive growth, reduce costs, and foster innovation. However, integrating applications across siloed VPCs and on-premises environments can be challenging, often requiring custom code and complex configurations. With fully-managed connectivity to private HTTPS-based APIs, customers can now securely integrate their legacy systems with cloud-native applications using event-driven architectures and workflow orchestration, allowing them to accelerate their innovation on AWS while driving higher security and regulatory compliance. These advancements allow customers to achieve faster time to market by eliminating the need to write and maintain custom networking or integration code, enabling developers to build extensible systems and add new capabilities easily.

Integration with private APIs in Amazon EventBridge and AWS Step Functions is now generally available in Asia Pacific (Hong Kong), Asia Pacific (Mumbai), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Europe (Frankfurt), Europe (Ireland), Europe (London), Europe (Paris), Europe (Stockholm), South America (Sao Paulo), US East (N. Virginia), US East (Ohio), US West (N. California), and US West (Oregon). You can start using private APIs with Amazon EventBridge and AWS Step Functions from the AWS Management Console or using the AWS CLI and SDK. To learn more, please read the launch blog, Amazon EventBridge user guide and AWS Step Functions documentation.
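
A hedged boto3 sketch of the EventBridge side follows: a connection is pointed at a private HTTPS API through a VPC Lattice resource configuration, after which an API destination can target it as usual. The auth details and resource configuration ARN are placeholders.

```python
import boto3

events = boto3.client("events")

# Sketch: reach a private HTTPS API through a VPC Lattice resource
# configuration (the ARN and API key are placeholders).
events.create_connection(
    Name="private-erp-api",
    AuthorizationType="API_KEY",
    AuthParameters={
        "ApiKeyAuthParameters": {"ApiKeyName": "x-api-key", "ApiKeyValue": "REPLACE_ME"}
    },
    InvocationConnectivityParameters={
        "ResourceParameters": {
            "ResourceConfigurationArn": (
                "arn:aws:vpc-lattice:us-east-1:123456789012:"
                "resourceconfiguration/rcfg-0123456789abcdef0"
            )
        }
    },
)
```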
 

Read more


Amazon Q Developer for the Eclipse IDE is now in public preview

The Amazon Q Developer plugin for the Eclipse IDE is now in public preview. With this launch, developers can leverage the power of Q Developer, the most capable generative AI-powered assistant for software development, within the Eclipse IDE.

Eclipse developers can now chat with Amazon Q Developer about their project and code faster with inline code suggestions within the IDE. Developers can also leverage Amazon Q Developer customization to receive tailored responses and code recommendations that conform to their team's internal libraries, proprietary algorithmic techniques, and enterprise code style. This helps users build faster while enhancing productivity across the entire software development lifecycle.

The Amazon Q Developer plugin for the Eclipse IDE Public Preview is available in all AWS regions where Q Developer is supported. Learn more and download the free Amazon Q Developer plugin for Eclipse to get started.

Read more


Introducing Amazon Q Apps with private sharing

Amazon Q Apps, a capability within Amazon Q Business to create lightweight, generative AI-powered apps, now supports private sharing. This new feature enables app creators to restrict app access to select Amazon Q Business users, providing more granular control over app visibility and usage within organizations.

Previously, Amazon Q Apps could only be kept private for individual use or published to all users of the Amazon Q Business environment through the Amazon Q Apps library. Now app creators can share their apps with specific individuals, allowing for more targeted collaboration and controlled access. App users with access to shared apps can find these apps in the Amazon Q Apps library and run them. Apps shown in the library respect the access set by the app creator, so they are visible only to selected users. Private sharing enables new functional use cases. For instance, a messaging-compliant document generation app may be shared company-wide for anyone in the organization to use, while a customer outreach app could be restricted to members of the sales team only. Private sharing also opens up possibilities for app creators to gather early feedback from a small group of users before wider distribution of their app.

Amazon Q Apps with private sharing is now available in the same regions where Amazon Q Business is available.

To learn more about private sharing in Amazon Q Apps, visit the Q Apps documentation.

Read more


Amazon Q Apps introduces data collection (Preview)

Amazon Q Apps, the generative AI-powered app creation capability of Amazon Q Business, now offers a new data collection feature in public preview. This enhancement enables users to collate data across multiple users within their organization, further enhancing the collaborative quality of Amazon Q Apps for various business needs.

With the new ability to collect data through form cards, app creators can design apps to gather information for a diverse set of business use cases, such as conducting team surveys, compiling questions for company-wide meetings, tracking new hire onboarding progress, or running a project retrospective. These apps can further leverage generative AI to analyze the collected data, identify common themes, summarize ideas, and provide actionable insights. A shared data collection app can be instantiated into different data collections by app users, each with its own unique, shareable link. App users can participate in an ongoing data collection to submit responses, or start their own data collection without the need to duplicate the app.

Amazon Q Apps with data collection is available in the regions where Amazon Q Business is available.

To learn more about data collection in Amazon Q Apps and how it can benefit your organization, visit the Q Apps documentation.

Read more


AWS Lambda announces Provisioned Mode for Kafka event source mappings (ESMs)

AWS Lambda announces Provisioned Mode for event source mappings (ESMs) that subscribe to Apache Kafka event sources, a feature that allows you to optimize the throughput of your Kafka ESM by provisioning event polling resources that remain ready to handle sudden spikes in traffic. Provisioned Mode helps you build highly responsive and scalable event-driven Kafka applications with stringent performance requirements.

Customers building streaming data applications often use Kafka as an event source for Lambda functions, and use Lambda's fully-managed MSK ESM or self-managed Kafka ESM, which automatically scale polling resources in response to events. However, for event-driven Kafka applications that need to handle unpredictable bursts of traffic, lack of control over the throughput of the ESM can lead to delays in your users' experience. Provisioned Mode for Kafka ESM allows you to fine-tune the throughput of the ESM by provisioning and auto-scaling between a minimum and maximum number of polling resources called event pollers, and is ideal for real-time applications with stringent performance requirements.

This feature is generally available in all AWS Commercial Regions where AWS Lambda is available, except Israel (Tel Aviv), Asia Pacific (Malaysia), and Canada West (Calgary).

You can activate Provisioned Mode for MSK ESM or self-managed Kafka ESM by configuring a minimum and maximum number of event pollers in the ESM API, AWS Console, AWS CLI, AWS SDK, AWS CloudFormation, or AWS SAM. You pay for the usage of event pollers, measured in a billing unit called the Event Poller Unit (EPU). To learn more, read the Lambda ESM documentation and AWS Lambda pricing.
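
For instance, a minimal boto3 sketch of enabling Provisioned Mode on an existing Kafka ESM looks like the following; the mapping UUID and poller counts are placeholders.

```python
import boto3

lam = boto3.client("lambda")

# Enable Provisioned Mode on an existing Kafka event source mapping.
lam.update_event_source_mapping(
    UUID="14e0db71-xmpl-4eb5-b481-8945cf9d10c2",  # placeholder mapping UUID
    ProvisionedPollerConfig={
        "MinimumPollers": 2,   # pollers kept ready for baseline traffic
        "MaximumPollers": 10,  # upper bound for traffic bursts
    },
)
```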

Read more


Amazon Q Business now available as browser extension

Today, Amazon Web Services announces the general availability of Amazon Q Business browser extensions for Google Chrome, Mozilla Firefox, and Microsoft Edge. Users can now supercharge their browsers’ intelligence and receive context-aware, generative AI assistance, making it easy to get on-the-go help for their daily tasks.

The Amazon Q Business browser extension makes it easy for users to summarize web pages, ask questions about web content or uploaded files, and leverage large language model knowledge directly within their browser. With the browser extension, users can maximize reading productivity, streamline their research and analysis of complex information, and get instant help when creating content.

The Amazon Q Business browser extension is now available in the following AWS Regions: US East (N. Virginia), US East (Ohio), US West (N. California), and US West (Oregon).

Learn how to boost your productivity with AI-powered assistance within your browser by visiting the Amazon Q Business product page and the Amazon Q Business documentation site.

Read more


Cross-zone enabled Application Load Balancer now supports zonal shift and zonal autoshift

AWS Application Load Balancer (ALB) now supports Amazon Application Recovery Controller's zonal shift and zonal autoshift features on load balancers that have cross-zone load balancing enabled. Zonal shift allows you to quickly shift traffic away from an impaired Availability Zone (AZ) and recover from events such as a bad application deployment or gray failures. Zonal autoshift safely and automatically shifts your traffic away from an AZ when AWS identifies potential impact to it.

Enabling cross-zone load balancing on ALBs is a popular configuration for customers that require an even distribution of traffic across application targets in multiple AZs. With this launch, customers can shift traffic away from an AZ in the event of a failure just like they are able to for cross-zone disabled load balancers. When zonal shift or autoshift is triggered, the ALB will block all traffic to targets in the impacted AZ and remove the zonal IP from DNS. You can configure this feature in two steps: First, enable configuration to allow zonal shift to act on your load balancer(s) using the ALB console or API. Second, trigger zonal shift or enable zonal autoshift for the chosen ALBs via the Amazon Application Recovery Controller console or API.
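
The two steps might look like the following boto3 sketch; the load balancer ARN and zone ID are placeholders, and the zonal-shift attribute key is an assumption based on ALB's attribute naming scheme.

```python
import boto3

elbv2 = boto3.client("elbv2")
arc = boto3.client("arc-zonal-shift")

ALB_ARN = ("arn:aws:elasticloadbalancing:us-east-1:123456789012:"
           "loadbalancer/app/my-alb/50dc6c495c0c9188")  # placeholder

# Step 1: allow zonal shift to act on the load balancer.
elbv2.modify_load_balancer_attributes(
    LoadBalancerArn=ALB_ARN,
    Attributes=[{"Key": "zonal_shift.config.enabled", "Value": "true"}],
)

# Step 2: shift traffic away from an impaired AZ for one hour.
arc.start_zonal_shift(
    resourceIdentifier=ALB_ARN,
    awayFrom="use1-az1",  # zone ID of the impaired AZ
    expiresIn="1h",
    comment="Mitigating gray failure in use1-az1",
)
```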

Zonal shift and zonal autoshift support on ALB is available in all commercial AWS Regions, including the AWS GovCloud (US) Regions. To learn more, please refer to the ALB zonal shift documentation.

Read more


AWS Step Functions simplifies developer experience with Variables and JSONata transformations

AWS Step Functions announces support for two new capabilities: Variables and JSONata data transformations. Variables allow developers to assign data in one state and reference it in a subsequent state, simplifying state payload management, reducing the need to pass data through multiple intermediate states. With support for JSONata, an open source query and transformation language, customers can now perform advanced data manipulation and transformation such as date and time formatting, and mathematical operations. Additionally, when using JSONata, we have simplified input and output processing by reducing the number of JSON transformation fields required to call services and pass data to the next state.

AWS Step Functions is a visual workflow service capable of orchestrating over 14,000 API actions from over 220 AWS services to build distributed applications and data processing workloads. With support for Variables and JSONata, developers can build distributed serverless applications faster and more efficiently with enhanced payload management capabilities. These features also reduce the need for custom code, lowering costs and reducing the number of state transitions needed to construct and pass data between states.

Variables and JSONata are available at no additional cost in: US East (N. Virginia, Ohio), US West (Oregon), Canada (Central), Europe (Ireland and Frankfurt), and Asia Pacific (Tokyo, Seoul, Singapore, and Sydney), with the remaining regions to follow in the coming days. We have also partnered with LocalStack and Datadog to ensure that their local emulation and observability experiences are updated to support Variables and JSONata. To learn more, please visit the AWS Step Functions documentation.
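
To make the two capabilities concrete, here is a hedged sketch of a state machine definition that assigns a variable in one state and references it with a JSONata expression in the next, created via boto3; the state machine name, role ARN, and field values are illustrative.

```python
import json
import boto3

sfn = boto3.client("stepfunctions")

definition = {
    "QueryLanguage": "JSONata",  # opt the workflow into JSONata
    "StartAt": "RecordOrder",
    "States": {
        "RecordOrder": {
            "Type": "Pass",
            # Assign stores a value for use by later states.
            "Assign": {"orderId": "{% $states.input.orderId %}"},
            "Next": "FormatConfirmation",
        },
        "FormatConfirmation": {
            "Type": "Pass",
            # Reference the variable and apply a JSONata transformation.
            "Output": {
                "message": "{% 'Order ' & $orderId & ' received at ' & $now() %}"
            },
            "End": True,
        },
    },
}

sfn.create_state_machine(
    name="order-confirmation",
    definition=json.dumps(definition),
    roleArn="arn:aws:iam::123456789012:role/StepFunctionsRole",  # placeholder
)
```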

Read more


Announcing new Amazon CloudWatch Metrics for AWS Lambda Event Source Mappings (ESMs)

AWS Lambda announces new Amazon CloudWatch metrics for Lambda Event Source Mappings (ESMs), which provide customers visibility into the processing state of events read by ESMs that subscribe to Amazon SQS, Amazon Kinesis, and Amazon DynamoDB event sources. This enables customers to easily monitor issues or delays in event processing and take corrective actions.

Customers use ESMs to read events from event sources and invoke Lambda functions. Lack of visibility into processing state of events ingested by ESMs delays diagnosis of event processing issues. Customers can now use the following CloudWatch metrics to monitor the processing state of events ingested by ESMs — PolledEventCount, InvokedEventCount, FilteredOutEventCount, FailedInvokeEventCount, DeletedEventCount, DroppedEventCount, and OnFailureDestinationDeliveredEventCount. PolledEventCount counts the events read by an ESM, and InvokedEventCount counts the events that invoked a Lambda function. FilteredOutEventCount counts the events filtered out by an ESM. FailedInvokeEventCount counts the events that attempted to invoke a Lambda function, but encountered failure. DeletedEventCount counts the events that have been deleted from the SQS queue by Lambda upon successful processing. DroppedEventCount counts the events dropped due to event expiry or exhaustion of retry attempts. OnFailureDestinationDeliveredEventCount counts the events successfully sent to an on-failure destination.

This feature is generally available in all AWS Commercial Regions where AWS Lambda is available.

You can enable ESM metrics using Lambda event source mapping API, AWS Console, AWS CLI, AWS SDK, AWS CloudFormation, and AWS SAM. To learn more about these metrics, visit Lambda developer guide. These new metrics are charged at standard CloudWatch pricing for metrics.
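
Enabling the metrics on an existing ESM is a one-call change; a minimal boto3 sketch follows, with a placeholder mapping UUID.

```python
import boto3

lam = boto3.client("lambda")

# Opt an existing event source mapping into the new event count metrics.
lam.update_event_source_mapping(
    UUID="14e0db71-xmpl-4eb5-b481-8945cf9d10c2",  # placeholder mapping UUID
    MetricsConfig={"Metrics": ["EventCount"]},
)
```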

Read more


Amazon SQS increases in-flight limit for FIFO queues from 20K to 120K

Amazon SQS increases the in-flight limit for FIFO queues from 20K to 120K messages. When a message is sent to an SQS FIFO queue, it is added to the queue backlog. Once you invoke a receive request on the FIFO queue, the message is marked as in-flight and remains in-flight until a delete message request is invoked.

With this change to the in-flight limit, your receivers can now process a maximum of 120K messages concurrently, increased from 20K previously, via SQS FIFO queues. If you have sufficient publish throughput and were constrained by the 20K in-flight limit, you can now process up to 120K messages at a time by scaling your receivers.
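
The in-flight window is the span between receive and delete, as in this minimal boto3 sketch (the queue URL is a placeholder); each received-but-undeleted message counts against the 120K limit, so deleting promptly preserves headroom as you scale receivers.

```python
import boto3

sqs = boto3.client("sqs")
queue_url = "https://sqs.us-east-1.amazonaws.com/123456789012/orders.fifo"  # placeholder

# Receiving marks messages in-flight; deleting releases them.
messages = sqs.receive_message(
    QueueUrl=queue_url,
    MaxNumberOfMessages=10,
    WaitTimeSeconds=10,
).get("Messages", [])

for msg in messages:
    # ... process msg["Body"] ...
    sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=msg["ReceiptHandle"])
```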

The increased in-flight limit is available in all commercial and the AWS GovCloud (US) Regions where SQS FIFO queues are available.

To get started, see the following resources:

Read more


Amazon MQ is now available in the AWS Asia Pacific (Malaysia) region

Amazon MQ is now available in the AWS Asia Pacific (Malaysia) region. With this launch, Amazon MQ is now available in 34 regions.

Amazon MQ is a managed message broker service for Apache ActiveMQ and RabbitMQ that makes it easier to set up and operate message brokers on AWS. Amazon MQ reduces your operational responsibilities by managing the provisioning, setup, and maintenance of message brokers. Because Amazon MQ connects to your current applications with industry-standard APIs and protocols, you can more easily migrate to AWS without having to rewrite or modify your applications.

For more information, please visit the Amazon MQ product page, and see the AWS Region Table for complete regional availability.

Read more


AWS B2B Data Interchange now supports all X12 transaction sets

AWS B2B Data Interchange now supports all X12 transactions for versions 4010, 4030, 4050, 4060, and 5010. Versions 4050 and 4060 are new to the service and were not previously available. Each of these transactions and versions is supported for both inbound and outbound use cases, enabling you to migrate a greater number of your bi-directional EDI workloads to AWS.

This launch especially benefits customers in the manufacturing, logistics, and financial services industries by enabling them to validate, parse, and transform a wider range of X12 transactions exchanged with their trading partners. Among these new transaction sets supported are those used to reserve shipment capacity, apply for mortgage insurance benefits, and to acknowledge purchase orders, deliveries, and returns.

These new X12 transaction sets and versions are available in all AWS Regions that offer B2B Data Interchange. A full list of these transactions, along with their descriptions and categories, can be found in the documentation. To learn more about building and running your bi-directional EDI workflows with B2B Data Interchange, take the self-paced workshop.

Read more


AWS End User Messaging announces integration with Amazon EventBridge

Today, AWS End User Messaging announces an integration with Amazon EventBridge. AWS End User Messaging provides developers with a scalable and cost-effective messaging infrastructure without compromising the safety, security, or results of their communications.

Now your SMS, MMS, and voice delivery events, which contain information such as message status, price, and carrier, are available in EventBridge. You can then send your SMS events to other AWS services and the many SaaS applications that EventBridge integrates with. EventBridge also allows you to create rules that filter and route your SMS events to event destinations you specify.

To learn more, visit the AWS End User Messaging SMS User Guide.
 

Read more


Amazon EventBridge event delivery latency metric now in the AWS GovCloud (US) Regions

The Amazon EventBridge Event Bus end-to-end event delivery latency metric in Amazon CloudWatch, which tracks the duration between event ingestion and successful delivery to the targets on your Event Bus, is now available in the AWS GovCloud (US-East) and AWS GovCloud (US-West) Regions. This new IngestionToInvocationSuccessLatency metric allows you to detect and respond to event processing delays caused by under-performing, under-scaled, or unresponsive targets.

Amazon EventBridge Event Bus is a serverless event router that enables you to create highly scalable event-driven applications by routing events between your own applications, third-party SaaS applications, and other AWS services. You can set up rules to determine where to send your events, allowing for applications to react to changes in your systems as they occur. With the new IngestionToInvocationSuccessLatency metric you can now better monitor and understand event delivery latency to your targets, increasing the observability of your event-driven architecture.

To learn more about the new IngestionToInvocationSuccessLatency metric for Amazon EventBridge Event Buses, please read our blog post and documentation.
 

Read more


Announcing Infrastructure as Code template generation for AWS Step Functions

AWS Step Functions now supports exporting workflows as AWS CloudFormation or AWS Serverless Application Model (SAM) templates directly in the AWS console. This allows for centralized and repeatable provisioning and management of your workflow configurations. AWS Step Functions is a visual workflow service capable of orchestrating virtually any AWS service to automate business processes and data processing workloads.

Now, you can export and customize templates from existing workflows to easily provision them in other accounts or jump-start the creation of new workflows. When you combine the Step Functions templates you generate with those from other services, you can provision your entire application using AWS CloudFormation stacks. Additionally, you can export your workflows to the AWS Infrastructure Composer console to take advantage of the visual builder capabilities to create a new serverless application project. Using Infrastructure Composer, you can connect the workflow with other AWS resources and generate the resource configurations in an AWS SAM template.

For more information about the AWS Regions where AWS Step Functions is available, see the AWS Region table. You can get started in the AWS console. To learn more, see the AWS Step Functions Developer Guide.

Read more


AWS B2B Data Interchange introduces generative AI-assisted EDI mappings

AWS B2B Data Interchange now enables you to generate electronic data interchange (EDI) mapping code using generative AI. This new capability expedites the process of writing and testing bi-directional EDI mappings, reducing the time, effort, and costs associated with migrating your EDI workloads to AWS. AWS B2B Data Interchange is a fully managed service that automates the transformation of business-critical EDI transactions at scale, with elasticity and pay-as-you-go pricing.

With AWS B2B Data Interchange's new generative AI-assisted mapping capability, you can leverage your existing EDI documents and transactional data stored in your Amazon S3 buckets to generate mapping code using Amazon Bedrock. Once the mapping code is generated, it is managed within AWS B2B Data Interchange where it is used to automatically transform new EDI documents to and from custom data representations. Previously, you were required to write and test each EDI mapping manually, which was a time-consuming and difficult process that required niche EDI specialization. AWS B2B Data Interchange's new generative AI-assisted mapping capability increases developer productivity and reduces the technical expertise required to develop mapping code, so you can shift resources back to the value-added initiatives that drive meaningful business impact.

AWS B2B Data Interchange’s generative AI-assisted mapping capability is available in US East (N. Virginia) and US West (Oregon). To learn more about building and running your EDI workflows on AWS, visit the AWS B2B Data Interchange product page or review the documentation.

Read more


Amazon EventBridge announces up to 94% improvement in end-to-end latency for Event Buses

Amazon EventBridge announces an up to 94% improvement in end-to-end latency for Event Buses since January 2023, enabling you to handle highly latency-sensitive applications, including fraud detection and prevention, industrial automation, and gaming applications. End-to-end latency is measured as the time taken from event ingestion to the first event invocation attempt. This lower latency enables you to build highly responsive and efficient event-driven architectures for your time-sensitive applications. You can now detect and respond to critical events more quickly, enabling rapid innovation, faster decision-making, and improved operational efficiency.

For latency-sensitive, mission-critical applications, even small delays can have a big impact. To address this, Amazon EventBridge Event Bus has significantly reduced its P99 latency, from 2235.23 ms measured in January 2023 to just 129.33 ms measured in August 2024. This significant improvement in latency allows EventBridge to deliver events in real time to your mission-critical applications.

Amazon EventBridge Event Bus' lower latency is applied by default across all AWS Regions where Amazon EventBridge is available, including the AWS GovCloud (US) Regions, at no additional cost to you. Customers can monitor these improvements through the IngestionToInvocationStartLatency or the end-to-end IngestionToInvocationSuccessLatency metrics available in the EventBridge console dashboard or via Amazon CloudWatch. This benefits customers globally and ensures consistent low-latency event processing, regardless of geographic location.
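
As a sketch of how you might watch these metrics with boto3, assuming the metrics live in the AWS/Events namespace with an EventBusName dimension (check the EventBridge metrics documentation for the exact dimension set):

```python
import boto3
from datetime import datetime, timedelta, timezone

cw = boto3.client("cloudwatch")
now = datetime.now(timezone.utc)

# Pull average end-to-end delivery latency for a bus over the last 3 hours.
resp = cw.get_metric_statistics(
    Namespace="AWS/Events",
    MetricName="IngestionToInvocationSuccessLatency",
    Dimensions=[{"Name": "EventBusName", "Value": "my-event-bus"}],  # placeholder
    StartTime=now - timedelta(hours=3),
    EndTime=now,
    Period=300,
    Statistics=["Average"],
)
print(resp["Datapoints"])
```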

For more information on Amazon EventBridge Event Bus, please visit our documentation. To get started with Amazon EventBridge, visit the AWS Console and follow these instructions from the user guide.

Read more


Amazon SNS delivers to Amazon Data Firehose endpoints in six new regions

Amazon Simple Notification Service (Amazon SNS) now delivers to Amazon Data Firehose endpoints in the Asia Pacific (Hong Kong), Asia Pacific (Hyderabad), Asia Pacific (Melbourne), Europe (Zurich), Europe (Spain), and Middle East (UAE) Regions.

You can now use Amazon SNS to deliver notifications to Amazon Data Firehose (Firehose) endpoints for archiving and analysis. Through Firehose delivery streams, you can deliver events to AWS destinations such as Amazon Simple Storage Service (Amazon S3), Amazon Redshift, and Amazon OpenSearch Service, or to third-party destinations such as Datadog, New Relic, MongoDB, and Splunk. For more information, see Fanout to Firehose delivery streams.
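
Subscribing a Firehose delivery stream to a topic is a single call; here is a minimal boto3 sketch with placeholder ARNs. The subscription role must allow SNS to write to the delivery stream.

```python
import boto3

sns = boto3.client("sns", region_name="ap-east-1")  # Asia Pacific (Hong Kong)

# Fan out an SNS topic to a Firehose delivery stream (ARNs are placeholders).
sns.subscribe(
    TopicArn="arn:aws:sns:ap-east-1:123456789012:orders",
    Protocol="firehose",
    Endpoint="arn:aws:firehose:ap-east-1:123456789012:deliverystream/orders-archive",
    Attributes={
        "SubscriptionRoleArn": "arn:aws:iam::123456789012:role/SNSFirehoseRole"
    },
)
```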

To get started, see the following resources:

Read more


Amazon SNS delivers to Amazon Data Firehose endpoints in the AWS GovCloud (US) Regions

Amazon Simple Notification Service (Amazon SNS) now delivers to Amazon Data Firehose endpoints in the AWS GovCloud (US-East) and AWS GovCloud (US-West) Regions.

You can now use Amazon SNS to deliver notifications to Amazon Data Firehose (Firehose) endpoints for archiving and analysis. Through Firehose delivery streams, you can deliver events to AWS destinations such as Amazon Simple Storage Service (Amazon S3), Amazon Redshift, and Amazon OpenSearch Service, or to third-party destinations such as Datadog, New Relic, MongoDB, and Splunk. For more information, see Fanout to Firehose delivery streams.

To get started, see the following resources:

Read more


Amazon SNS supports message archiving and replay for FIFO topics in the AWS GovCloud (US) Regions

Amazon SNS now supports in-place message archiving and replay for SNS FIFO topics in the AWS GovCloud (US-East) and AWS GovCloud (US-West) Regions.

Topic owners can now set an archive policy, which defines a retention period for the messages published to their topic. Subscribers can then set a replay policy on an individual subscription, which triggers a replay of select messages from the archive, from a starting point until an ending point. Subscribers can also set a filter policy on their subscription to further select the messages in scope for a replay.
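
A hedged boto3 sketch of both policies follows; the ARNs are placeholders, and the policy JSON shapes are assumptions based on the SNS archive and replay documentation.

```python
import boto3

sns = boto3.client("sns", region_name="us-gov-west-1")
topic_arn = "arn:aws-us-gov:sns:us-gov-west-1:123456789012:orders.fifo"  # placeholder

# Topic owner: retain published messages for 30 days.
sns.set_topic_attributes(
    TopicArn=topic_arn,
    AttributeName="ArchivePolicy",
    AttributeValue='{"MessageRetentionPeriod": "30"}',
)

# Subscriber: replay archived messages from a starting timestamp.
sns.set_subscription_attributes(
    SubscriptionArn=topic_arn + ":11111111-2222-3333-4444-555555555555",
    AttributeName="ReplayPolicy",
    AttributeValue='{"PointType": "Timestamp", "StartingPoint": "2024-11-01T00:00:00Z"}',
)
```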

To get started, see the following resources:

Read more


applications

AWS Wickr is now available in the AWS Asia Pacific (Malaysia) Region

AWS Wickr now allows you to establish a network in the Asia Pacific (Malaysia) Region to help you meet data residency requirements and other obligations.

AWS Wickr is a security-first messaging and collaboration service with features designed to help keep your internal and external communications secure, private, and compliant. AWS Wickr protects one-to-one and group messaging, voice and video calling, file sharing, screen sharing, and location sharing with end-to-end encryption. Customers have full administrative control over data, which includes addressing information governance policies, configuring ephemeral messaging options, and deleting credentials for lost or stolen devices. You can log both internal and external conversations in an AWS Wickr network to a private data store that you manage, for data retention and auditing purposes.

AWS Wickr is available in the AWS US East (N. Virginia), AWS GovCloud (US-West), AWS Canada (Central), AWS Europe (London, Frankfurt, Stockholm, and Zurich), and AWS Asia Pacific (Singapore, Sydney, Tokyo and now Malaysia) Regions.

To learn more and get started, see the following resources:

Read more


Application Signals now supports burn rate for application performance goals

Amazon CloudWatch Application Signals, an application performance monitoring (APM) feature in CloudWatch, makes it easy to automatically instrument and track application performance against your most important business or service level objectives (SLOs). Customers can now receive alerts when these SLOs reach a critical burn rate. This new feature allows you to calculate how quickly your service is consuming its error budget relative to the SLO's attainment goal. Burn rate metrics provide a clear indication of whether you're meeting, exceeding, or at risk of failing your SLO goals.

Today, with burn rate metrics, you can configure CloudWatch alarms to notify you automatically when your error budget consumption exceeds specified thresholds. This allows for proactive management of service reliability, empowering your teams to take prompt action to achieve long-term performance targets. By setting multiple alarms with varying look-back windows, you can identify sudden error rate spikes and gradual shifts that could affect your error budget.

Burn rates are available in all regions where Application Signals is generally available: 28 commercial AWS Regions, excluding CA West (Calgary) and Asia Pacific (Malaysia). For pricing, see Amazon CloudWatch pricing. See the SLO documentation to learn more, or refer to the user guide and AWS One Observability Workshop to get started with Application Signals.

Read more


artificial-intelligence

Amazon SageMaker introduces new capabilities to accelerate scaling of Generative AI Inference

We are excited to announce two new capabilities in SageMaker Inference that significantly enhance the deployment and scaling of generative AI models: Container Caching and Fast Model Loader. These innovations address critical challenges in scaling large language models (LLMs) efficiently, enabling faster response times to traffic spikes and more cost-effective scaling. By reducing model loading times and accelerating autoscaling, these features allow customers to improve the responsiveness of their generative AI applications as demand fluctuates, particularly benefiting services with dynamic traffic patterns.

Container Caching dramatically reduces the time required to scale generative AI models for inference by pre-caching container images. This eliminates the need to download them when scaling up, resulting in significant reduction in scaling time for generative AI model endpoints. Fast Model Loader streams model weights directly from Amazon S3 to the accelerator, loading models much faster compared to traditional methods. These capabilities allow customers to create more responsive auto-scaling policies, enabling SageMaker to add new instances or model copies quickly when defined thresholds are reached, thus maintaining optimal performance during traffic spikes while at the same time managing costs effectively.

These new capabilities are accessible in all AWS regions where Amazon SageMaker Inference is available. To learn more, see our documentation for detailed implementation guidance.
 

Read more


AWS announces Amazon SageMaker Partner AI Apps

Today Amazon Web Services, Inc. (AWS) announced the general availability of Amazon SageMaker partner AI apps, a new capability that enables customers to easily discover, deploy, and use best-in-class machine learning (ML) and generative AI (GenAI) development applications from leading app providers privately and securely, all without leaving Amazon SageMaker AI so they can develop performant AI models faster.

Until today, integrating purpose-built GenAI and ML development applications that provide specialized capabilities for a variety of model development tasks required a considerable amount of effort. Beyond the need to invest time and effort in due diligence to evaluate existing offerings, customers had to perform undifferentiated heavy lifting in deploying, managing, upgrading, and scaling these applications. Furthermore, to adhere to rigorous security and compliance protocols, organizations need their data to stay within the confines of their security boundaries without needing to move their data elsewhere, for example, to a Software as a Service (SaaS) application. Finally, the resulting developer experience is often fragmented, with developers having to switch back and forth between multiple disjointed interfaces. With SageMaker partner AI apps, you can quickly subscribe to a partner solution and seamlessly integrate the app with your SageMaker development environment. SageMaker partner AI apps are fully managed and run privately and securely in your SageMaker environment, reducing the risk of data and model exfiltration.

At launch, you will be able to boost your team’s productivity and reduce time to market by enabling: Comet, to track, visualize, and manage experiments for AI model development; Deepchecks, to evaluate quality and compliance for AI models; Fiddler, to validate, monitor, analyze, and improve AI models in production; and, Lakera, to protect AI applications from security threats such as prompt attacks, data loss and inappropriate content.

SageMaker partner AI apps are available in all currently supported regions except the AWS GovCloud (US) Regions. To learn more, please visit the SageMaker partner AI apps developer guide.
 

Read more


Amazon SageMaker HyperPod now provides flexible training plans

Amazon SageMaker HyperPod announces flexible training plans, a new capability that allows you to train generative AI models within your timelines and budgets. Gain predictable model training timelines and run training workloads within your budget requirements, while continuing to benefit from features of SageMaker HyperPod such as resiliency, performance-optimized distributed training, and enhanced observability and monitoring. 

In a few quick steps, you can specify your preferred compute instances, desired amount of compute resources, duration of your workload, and preferred start date for your generative AI model training. SageMaker then helps you create the most cost-efficient training plans, reducing time to train your model by weeks. Once you create and purchase your training plans, SageMaker automatically provisions the infrastructure and runs the training workloads on these compute resources without requiring any manual intervention. SageMaker also automatically takes care of pausing and resuming training between gaps in compute availability, as the plan switches from one capacity block to another. If you wish to remove all the heavy lifting of infrastructure management, you can also create and run training plans using SageMaker fully managed training jobs.  

SageMaker HyperPod flexible training plans are available in the US East (N. Virginia), US East (Ohio), and US West (Oregon) AWS Regions. To learn more, visit SageMaker HyperPod, the documentation, and the announcement blog.

Read more


Amazon Bedrock Marketplace brings over 100 models to Amazon Bedrock

Amazon Bedrock Marketplace provides generative AI developers access to over 100 publicly available and proprietary foundation models (FMs), in addition to Amazon Bedrock’s industry-leading, serverless models. Customers deploy these models onto SageMaker endpoints where they can select their desired number of instances and instance types. Amazon Bedrock Marketplace models can be accessed through Bedrock’s unified APIs, and models which are compatible with Bedrock’s Converse APIs can be used with Amazon Bedrock’s tools such as Agents, Knowledge Bases, and Guardrails.

Amazon Bedrock Marketplace empowers generative AI developers to rapidly test and incorporate a diverse array of emerging, popular, and leading FMs of various types and sizes. Customers can choose from a variety of models tailored to their unique requirements, which can help accelerate the time-to-market, improve the accuracy, or reduce the cost of their generative AI workflows. For example, customers can incorporate models highly-specialized for finance or healthcare, or language translation models for Asian languages, all from a single place.

Amazon Bedrock Marketplace is supported in US East (N. Virginia), US East (Ohio), US West (Oregon), Asia Pacific (Mumbai), Asia Pacific (Seoul), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Canada (Central), Europe (Frankfurt), Europe (Ireland), Europe (London), Europe (Paris), and South America (São Paulo).

For more information, please refer to Amazon Bedrock Marketplace's announcement blog or documentation.

Read more


Task governance is now generally available for Amazon SageMaker HyperPod

Amazon SageMaker HyperPod now provides you with centralized governance across all generative AI development tasks, such as training and inference. You have full visibility and control over compute resource allocation, ensuring the most critical tasks are prioritized and maximizing compute resource utilization, reducing model development costs by up to 40%.

With HyperPod task governance, administrators can more easily define priorities for different tasks and set up limits for how many compute resources each team can use. At any given time, administrators can also monitor and audit the tasks that are running or waiting for compute resources through a visual dashboard. When data scientists create their tasks, HyperPod automatically runs them, adhering to the defined compute resource limits and priorities. For example, when training for a high-priority model needs to be completed as soon as possible but all compute resources are in use, HyperPod frees up resources from lower-priority tasks to support the training. HyperPod pauses the low-priority task, saves the checkpoint, and reallocates the freed-up compute resources. The preempted low-priority task will resume from the last saved checkpoint as resources become available again. And when a team is not fully using the resource limits the administrator has set up, HyperPod uses those idle resources to accelerate another team's tasks. Additionally, HyperPod is now integrated with Amazon SageMaker Studio, bringing task governance and other HyperPod capabilities into the Studio environment. Data scientists can now seamlessly interact with HyperPod clusters directly from Studio, allowing them to develop, submit, and monitor machine learning (ML) jobs on powerful accelerator-backed clusters.

Task governance for HyperPod is available in all AWS Regions where HyperPod is available: US East (N. Virginia), US West (N. California), US West (Oregon), Asia Pacific (Mumbai), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Europe (Frankfurt), Europe (Ireland), Europe (London), Europe (Stockholm), and South America (São Paulo).

To learn more, visit SageMaker HyperPod webpage, AWS News Blog, and SageMaker AI documentation.

Read more


Amazon Bedrock Guardrails supports multimodal toxicity detection for image content (Preview)

Organizations are increasingly using applications with multimodal data to drive business value, improve decision-making, and enhance customer experiences. Amazon Bedrock Guardrails now supports multimodal toxicity detection for image content, enabling organizations to apply content filters to images. This new capability with Guardrails, now in public preview, removes the heavy lifting required by customers to build their own safeguards for image data or spend cycles with manual evaluation that can be error-prone and tedious.

Bedrock Guardrails helps customers build and scale their generative AI applications responsibly for a wide range of use cases across industry verticals including healthcare, manufacturing, financial services, media and advertising, transportation, marketing, education, and much more. With this new capability, Amazon Bedrock Guardrails offers a comprehensive solution, enabling the detection and filtration of undesirable and potentially harmful image content while retaining safe and relevant visuals. Customers can now use content filters for both text and image data in a single solution with configurable thresholds to detect and filter undesirable content across categories such as hate, insults, sexual, and violence, and build generative AI applications based on their responsible AI policies.

This new capability in preview is available with all foundation models (FMs) on Amazon Bedrock that support images, including fine-tuned FMs, in 11 AWS regions globally: US East (N. Virginia), US East (Ohio), US West (Oregon), Europe (Ireland), Europe (Frankfurt), Europe (London), Asia Pacific (Seoul), Asia Pacific (Singapore), Asia Pacific (Tokyo), Asia Pacific (Mumbai), and AWS GovCloud (US-West).

To learn more, visit the Amazon Bedrock Guardrails product page, read the News blog, and documentation.
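
A hedged boto3 sketch of configuring an image-aware content filter follows; the guardrail name and messages are placeholders, and the modality field names are assumptions based on the preview announcement.

```python
import boto3

bedrock = boto3.client("bedrock")

# Sketch: a guardrail whose violence filter applies to text and images.
bedrock.create_guardrail(
    name="image-safety",
    blockedInputMessaging="This request was blocked by content filters.",
    blockedOutputsMessaging="This response was blocked by content filters.",
    contentPolicyConfig={
        "filtersConfig": [
            {
                "type": "VIOLENCE",
                "inputStrength": "HIGH",
                "outputStrength": "HIGH",
                "inputModalities": ["TEXT", "IMAGE"],
                "outputModalities": ["TEXT", "IMAGE"],
            }
        ]
    },
)
```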

Read more


Announcing new AWS AI Service Cards to advance responsible generative AI

Today, AWS announces the availability of new AWS AI Service Cards for Amazon Nova Reel; Amazon Nova Canvas; Amazon Nova Micro, Lite, and Pro; Amazon Titan Image Generator; and Amazon Titan Text Embeddings. AI Service Cards are a resource designed to enhance transparency by providing customers with a single place to find information on the intended use cases and limitations, responsible AI design choices, and performance optimization best practices for AWS AI services.

AWS AI Service Cards are part of our comprehensive development process to build services in a responsible way. They focus on key aspects of AI development and deployment, including fairness, explainability, privacy and security, safety, controllability, veracity and robustness, governance, and transparency. By offering these cards, AWS aims to empower customers with the knowledge they need to make informed decisions about using AI services in their applications and workflows. Our AI Service Cards will continue to evolve and expand as we engage with our customers and the broader community to gather feedback and continually iterate on our approach.

For more information, see the AI Service Cards for each of these services.

To learn more about AI Service Cards, as well as our broader approach to building AI in a responsible way, see our Responsible AI webpage.

Read more


Amazon Bedrock announces preview of prompt caching

Today, AWS announces that Amazon Bedrock now supports prompt caching. Prompt caching is a new capability that can reduce costs by up to 90% and latency by up to 85% for supported models by caching frequently used prompts across multiple API calls. It allows you to cache repetitive inputs and avoid reprocessing context, such as long system prompts and common examples that help guide the model's response. When the cache is used, fewer computing resources are needed to generate output. As a result, not only can we process your request faster, but we can also pass along the cost savings from using fewer resources.

Amazon Bedrock is a fully managed service that offers a choice of high-performing FMs from leading AI companies via a single API. Amazon Bedrock also provides a broad set of capabilities customers need to build generative AI applications with security, privacy, and responsible AI capabilities built in. These capabilities help you build tailored applications for multiple use cases across different industries, helping organizations unlock sustained growth from generative AI while providing tools to build customer trust and data governance.

Prompt caching is now available on Claude 3.5 Haiku and Claude 3.5 Sonnet v2 in US West (Oregon) and US East (N. Virginia) via cross-region inference, and Nova Micro, Nova Lite, and Nova Pro models in US East (N. Virginia). At launch, only a select number of customers will have access to this feature. To learn more about participating in the preview, see this page. To learn more about prompt caching, see our documentation and blog.
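
In practice, a cache checkpoint is inserted into the prompt; here is a hedged boto3 Converse API sketch, assuming access to the preview and using a placeholder model ID and system prompt.

```python
import boto3

runtime = boto3.client("bedrock-runtime", region_name="us-west-2")

long_system_prompt = "You are a contract-review assistant. ..."  # large, stable context

# Mark everything above the cachePoint as cacheable so later calls
# can skip reprocessing it.
response = runtime.converse(
    modelId="anthropic.claude-3-5-sonnet-20241022-v2:0",  # placeholder model ID
    system=[
        {"text": long_system_prompt},
        {"cachePoint": {"type": "default"}},
    ],
    messages=[
        {"role": "user", "content": [{"text": "Summarize clause 4.2."}]}
    ],
)
print(response["output"]["message"]["content"][0]["text"])
```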

Read more


Amazon Q Developer can now guide SageMaker Canvas users through ML development

Starting today, you can build ML models using natural language with Amazon Q Developer, now available in Amazon SageMaker Canvas in preview. You can now get generative AI-powered assistance through the ML lifecycle, from data preparation to model deployment. With Amazon Q Developer, users of all skill levels can use natural language to access expert guidance to build high-quality ML models, accelerating innovation and time to market.

Amazon Q Developer will break down your objective into specific ML tasks, define the appropriate ML problem type, and apply data preparation techniques to your data. Amazon Q Developer then guides you through the process of building, evaluating, and deploying custom ML models. ML models produced in SageMaker Canvas with Amazon Q Developer are production ready, can be registered in SageMaker Studio, and the code can be shared with data scientists for integration into downstream MLOps workflows.

Amazon Q Developer is available in SageMaker Canvas in preview in the following AWS Regions: US East (N. Virginia), US West (Oregon), Europe (Frankfurt), Europe (Paris), Asia Pacific (Tokyo), and Asia Pacific (Seoul). To learn more about using Amazon Q Developer with SageMaker Canvas, visit the website, read the AWS News blog, or view the technical documentation.

Read more


Amazon Bedrock Data Automation now available in preview

Today, we are announcing the preview launch of Amazon Bedrock Data Automation (BDA), a new feature of Amazon Bedrock that enables developers to automate the generation of valuable insights from unstructured multimodal content such as documents, images, video, and audio to build GenAI-based applications. These insights include video summaries of key moments, detection of inappropriate image content, automated analysis of complex documents, and much more. Developers can also customize BDA’s output to generate specific insights in consistent formats required by their systems and applications.

By leveraging BDA, developers can reduce development time and effort, making it easier to build intelligent document processing, media analysis, and other multimodal data-centric automation solutions. BDA offers high accuracy at lower cost than alternative solutions, along with features such as visual grounding with confidence scores for explainability and built-in hallucination mitigation. This ensures accurate insights from unstructured, multi-modal data content. Developers can get started with BDA on the Bedrock console, where they can configure and customize output using their sample data. They can then integrate BDA’s unified multi-modal inference API into their applications to process their unstructured content at scale with high accuracy and consistency. BDA is also integrated with Bedrock Knowledge Bases, making it easier for developers to generate meaningful information from their unstructured multi-modal content to provide more relevant responses for retrieval augmented generation (RAG).

Bedrock Data Automation is available in preview in the US West (Oregon) AWS Region.

To learn more, visit the Bedrock Data Automation page.

Read more


Amazon Bedrock Knowledge Bases now supports structured data retrieval

Amazon Bedrock Knowledge Bases now supports natural language querying to retrieve structured data from your data sources. With this launch, Bedrock Knowledge Bases offers an end-to-end managed workflow for customers to build custom generative AI applications that can access and incorporate contextual information from a variety of structured and unstructured data sources. Using advanced natural language processing, Bedrock Knowledge Bases can transform natural language queries into SQL queries, allowing users to retrieve data directly from the source without the need to move or preprocess the data.

Developers often face challenges integrating structured data into generative AI applications. These include difficulties training large language models (LLMs) to convert natural language queries to SQL queries based on complex database schemas, as well as ensuring appropriate data governance and security controls are in place. Bedrock Knowledge Bases eliminates these hurdles by providing a managed natural language to SQL (NL2SQL) module. A retail analyst can now simply ask "What were my top 5 selling products last month?", and Bedrock Knowledge Bases automatically translates that query into SQL, executes the query against the database, and returns the results, or even provides a summarized narrative response. To generate accurate SQL queries, Bedrock Knowledge Bases leverages the database schema, previous query history, and other contextual information provided about the data sources.

Bedrock Knowledge Bases supports structured data retrieval from Amazon Redshift and Amazon SageMaker Lakehouse at this time, and is available in all commercial AWS Regions where Bedrock Knowledge Bases is supported. To learn more, visit the documentation. For details on pricing, please refer to the pricing page.

Read more


Amazon Bedrock Knowledge Bases now supports GraphRAG (preview)

Today, we are announcing support for GraphRAG, a new capability in Amazon Bedrock Knowledge Bases that enhances generative AI applications by providing more comprehensive, relevant, and explainable responses using RAG techniques combined with graph data. Amazon Bedrock Knowledge Bases offers fully-managed, end-to-end Retrieval-Augmented Generation (RAG) workflows to create highly accurate, low-latency, and custom generative AI applications by incorporating contextual information from your company's data sources. Amazon Bedrock Knowledge Bases now offers a fully-managed GraphRAG capability with Amazon Neptune Analytics.

Previously, customers faced challenges in conducting exhaustive, multi-step searches across disparate content. By identifying key entities across documents, GraphRAG delivers insights that leverage relationships within the data, enabling improved responses to end users. For example, users can ask a travel application for family-friendly beach destinations with direct flights and good seafood restaurants. Developers building generative AI applications can enable GraphRAG in just a few clicks by specifying their data sources and choosing Amazon Neptune Analytics as their vector store when creating a knowledge base. This will automatically generate and store vector embeddings in Amazon Neptune Analytics, along with a graph representation of entities and their relationships.
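
For illustration, a sketch of creating such a knowledge base with boto3 follows; it assumes the Neptune Analytics storage type and field names exposed for this launch, and all ARNs are placeholders.

```python
import boto3

client = boto3.client("bedrock-agent")

# Sketch, assuming the NEPTUNE_ANALYTICS storage type added for GraphRAG;
# role, model, and graph ARNs are placeholders.
client.create_knowledge_base(
    name="travel-graphrag-kb",
    roleArn="arn:aws:iam::123456789012:role/BedrockKBRole",
    knowledgeBaseConfiguration={
        "type": "VECTOR",
        "vectorKnowledgeBaseConfiguration": {
            "embeddingModelArn": "arn:aws:bedrock:us-east-1::foundation-model/amazon.titan-embed-text-v2:0"
        },
    },
    storageConfiguration={
        "type": "NEPTUNE_ANALYTICS",
        "neptuneAnalyticsConfiguration": {
            "graphArn": "arn:aws:neptune-graph:us-east-1:123456789012:graph/g-abc123",
            "fieldMapping": {"textField": "text", "metadataField": "metadata"},
        },
    },
)
```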

GraphRAG with Amazon Neptune is built right into Amazon Bedrock Knowledge Bases, offering an integrated experience with no additional setup and no charges beyond the underlying services. GraphRAG is available in AWS Regions where Amazon Bedrock Knowledge Bases and Amazon Neptune Analytics are both available (see the current list of supported regions). To learn more, visit the Amazon Bedrock User Guide.

Read more


Announcing Amazon SageMaker HyperPod recipes

Amazon SageMaker HyperPod recipes help you get started training and fine-tuning publicly available foundation models (FMs) in minutes with state-of-the-art performance. SageMaker HyperPod helps customers scale generative AI model development across hundreds or thousands of AI accelerators with built-in resiliency and performance optimizations, decreasing model training time by up to 40%. However, as FM sizes continue to grow to hundreds of billions of parameters, the process of customizing these models can take weeks of extensive experimentation and debugging. In addition, performing training optimizations to unlock better price performance is often infeasible for customers, as it requires deep machine learning expertise that could cause further delays in time to market.

With SageMaker HyperPod recipes, customers of all skill sets can benefit from state-of-the-art performance while quickly getting started training and fine-tuning popular publicly available FMs, including Llama 3.1 405B, Mixtral 8x22B, and Mistral 7B. SageMaker HyperPod recipes include a training stack tested by AWS, removing weeks of tedious work experimenting with different model configurations. You can also quickly switch between GPU-based and AWS Trainium-based instances with a one-line recipe change and enable automated model checkpointing for improved training resiliency. Finally, you can run workloads in production on the SageMaker AI training service of your choice. 

SageMaker HyperPod recipes are available in all AWS Regions where SageMaker HyperPod and SageMaker training jobs are supported. To learn more and get started, visit the SageMaker HyperPod page and blog.

Read more


Amazon Bedrock Knowledge Bases now processes multimodal data

Amazon Bedrock Knowledge Bases now enables developers to build generative AI applications that can analyze and leverage insights from both textual and visual data, such as images, charts, diagrams, and tables. Bedrock Knowledge Bases offers an end-to-end managed Retrieval-Augmented Generation (RAG) workflow that enables customers to create highly accurate, low-latency, secure, and custom generative AI applications by incorporating contextual information from their own data sources. With this launch, Bedrock Knowledge Bases extracts content from both text and visual data, generates semantic embeddings using the selected embedding model, and stores them in the chosen vector store. This enables users to retrieve and generate answers to questions derived not only from text but also from visual data. Additionally, retrieved results now include source attribution for visual data, enhancing transparency and building trust in the generated outputs.

To get started, customers can choose between Amazon Bedrock Data Automation, a managed service that automatically extracts content from multimodal data (currently in preview), and FMs such as Claude 3.5 Sonnet or Claude 3 Haiku, with the flexibility to customize the default prompt.
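
As a sketch of the FM-based option, the following creates an S3 data source that parses documents with a foundation model via the parsingConfiguration setting; IDs, ARNs, and the prompt text are placeholders.

```python
import boto3

client = boto3.client("bedrock-agent")

# Sketch: use a foundation model (instead of the default text extractor)
# to parse documents that contain charts, tables, and images.
client.create_data_source(
    knowledgeBaseId="KB1234567",  # placeholder
    name="docs-with-figures",
    dataSourceConfiguration={
        "type": "S3",
        "s3Configuration": {"bucketArn": "arn:aws:s3:::my-docs-bucket"},
    },
    vectorIngestionConfiguration={
        "parsingConfiguration": {
            "parsingStrategy": "BEDROCK_FOUNDATION_MODEL",
            "bedrockFoundationModelConfiguration": {
                "modelArn": "arn:aws:bedrock:us-west-2::foundation-model/anthropic.claude-3-5-sonnet-20240620-v1:0",
                "parsingPrompt": {"parsingPromptText": "Transcribe the text and describe any charts or diagrams."},
            },
        }
    },
)
```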

Multimodal data processing with Bedrock Data Automation is available in the US West (Oregon) region in preview. FM-based parsing is supported in all regions where Bedrock Knowledge Bases is available. For details on pricing for using Bedrock Data Automation or FM as a parser, please refer to the pricing page.

To learn more, visit Amazon Bedrock Knowledge Bases product documentation.

Read more


Amazon Bedrock Intelligent Prompt Routing is now available in preview

Amazon Bedrock Intelligent Prompt Routing routes prompts to different foundation models within a model family, helping you optimize for quality of responses and cost. Using advanced prompt matching and model understanding techniques, Intelligent Prompt Routing predicts the performance of each model for each request and dynamically routes each request to the model that it predicts is most likely to give the desired response at the lowest cost. Customers can choose from two prompt routers in preview that route requests either between Claude 3.5 Sonnet and Claude Haiku, or between Llama 3.1 8B and Llama 3.1 70B.
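
In practice, a prompt router is addressed like any other model: you pass its ARN as the modelId in an InvokeModel or Converse call. The sketch below uses the Converse API; the router ARN format is illustrative, so list the routers available in your account from the Bedrock console first.

```python
import boto3

client = boto3.client("bedrock-runtime", region_name="us-east-1")

# Placeholder ARN for a default prompt router; the router decides per request
# whether the cheaper or the more capable model should answer.
router_arn = "arn:aws:bedrock:us-east-1:123456789012:default-prompt-router/anthropic.claude:1"

response = client.converse(
    modelId=router_arn,
    messages=[{"role": "user", "content": [{"text": "What is the capital of France?"}]}],
)
print(response["output"]["message"]["content"][0]["text"])
```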

Amazon Bedrock is a fully managed service that offers a choice of high-performing foundation models from leading AI companies via a single API. Amazon Bedrock also provides a broad set of capabilities customers need to build generative AI applications with security, privacy, and responsible AI capabilities built in. These capabilities help you build tailored applications for multiple use cases across different industries, helping organizations unlock sustained growth from generative AI while ensuring customer trust and data governance. With Intelligent Prompt Routing, Amazon Bedrock can help customers build cost-effective generative AI applications with a combination of foundation models to get better performance at lower cost than a single foundation model.

During preview, customers are charged regular on-demand pricing for the models that requests are routed to. Learn more in our documentation and blog.

Read more


Announcing GenAI Index in Amazon Kendra

Amazon Kendra is an AI-powered search service enabling organizations to build intelligent search experiences and retrieval augmented generation (RAG) systems to power generative AI applications. Starting today, AWS customers can use a new index, the Kendra GenAI Index, for RAG and intelligent search. With the Kendra GenAI Index, customers get high out-of-the-box search accuracy powered by the latest information retrieval technologies and semantic models.

Kendra GenAI Index supports mobility across AWS generative AI services like Amazon Bedrock Knowledge Bases and Amazon Q Business, giving customers the flexibility to use their indexed content across different use cases. It is available as a managed retriever in Bedrock Knowledge Bases, enabling customers to create a knowledge base powered by the Kendra GenAI Index. Customers can also integrate such knowledge bases with other Bedrock services like Guardrails, Prompt Flows, and Agents to build advanced generative AI applications. The GenAI Index supports connectors for 43 different data sources, enabling customers to easily ingest content from a variety of sources.

Kendra GenAI Index is available in the US East (N. Virginia) and US West (Oregon) regions.

To learn more, see Kendra GenAI Index in the Amazon Kendra Developer Guide. For pricing, please refer to Kendra pricing page.

Read more


Amazon SageMaker Lakehouse and Amazon Redshift support for zero-ETL integrations from eight applications

Amazon SageMaker Lakehouse and Amazon Redshift now support zero-ETL integrations from applications, automating the extraction and loading of data from eight applications, including Salesforce, SAP, ServiceNow, and Zendesk. As an open, unified, and secure lakehouse for your analytics and AI initiatives, Amazon SageMaker Lakehouse enhances these integrations to streamline your data management processes.

These zero-ETL integrations are fully managed by AWS and minimize the need to build ETL data pipelines. With this new zero-ETL integration, you can efficiently extract and load valuable data from your customer support, relationship management, and ERP applications into your data lake and data warehouse for analysis. Zero-ETL integration reduces users' operational burden and saves the weeks of engineering effort needed to design, build, and test data pipelines. By selecting a few settings in the no-code interface, you can quickly set up your zero-ETL integration to automatically ingest and continually maintain an up-to-date replica of your data in the data lake and data warehouse. Zero-ETL integrations help you focus on deriving insights from your application data, breaking down data silos in your organization, and improving operational efficiency. You can then run enhanced analysis on your application data using Apache Spark and Amazon Redshift for analytics or machine learning, optimizing your ingestion processes while you focus on analysis and insights.
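
For a rough idea of the programmatic path, the sketch below uses the AWS Glue CreateIntegration API via boto3; the parameter names follow our reading of the zero-ETL APIs, and both ARNs are placeholders for a registered source connection and a target Redshift namespace.

```python
import boto3

glue = boto3.client("glue")

# Sketch, assuming the Glue zero-ETL CreateIntegration request shape;
# source and target ARNs are placeholders.
glue.create_integration(
    IntegrationName="salesforce-to-redshift",
    SourceArn="arn:aws:glue:us-east-1:123456789012:connection/salesforce-conn",
    TargetArn="arn:aws:redshift-serverless:us-east-1:123456789012:namespace/ns-abc123",
    Description="Continuously replicate Salesforce objects into the warehouse",
)
```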

Amazon SageMaker Lakehouse and Amazon Redshift support for zero-ETL integrations from eight applications is now generally available in the US East (N. Virginia), US East (Ohio), US West (Oregon), Asia Pacific (Hong Kong), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Europe (Frankfurt), Europe (Ireland), and Europe (Stockholm) AWS Regions.

You can create and manage integrations using either the AWS Glue console, the AWS Command Line Interface (AWS CLI), or the AWS Glue APIs. To learn more, visit What is zero-ETL and What is AWS Glue.

Read more


Amazon Bedrock now supports multi-agent collaboration

Amazon Bedrock now supports multi-agent collaboration, allowing organizations to build and manage multiple AI agents that work together to solve complex workflows. This feature allows developers to create agents with specialized roles tailored for specific business needs, such as financial data collection, research, and decision-making. By enabling seamless agent collaboration, Amazon Bedrock empowers organizations to optimize performance across industries like finance, customer service, and healthcare.

With multi-agent collaboration on Amazon Bedrock, organizations can effortlessly master complex workflows, achieving highly accurate and scalable results across diverse applications. In financial services, for example, specialized agents coordinate to gather data, analyze trends, and provide actionable recommendations—working in parallel to improve response times and precision. This collaborative feature allows businesses to quickly build, deploy, and scale multi-agent setups, reducing development time while ensuring seamless integration and adaptability to evolving needs.

Multi-agent collaboration is currently available in the US East (N. Virginia), US West (Oregon), and Europe (Ireland) AWS Regions.

To learn more, visit Amazon Bedrock Agents.

Read more


Amazon Q Developer can now automate code reviews

Starting today, Amazon Q Developer can also perform code reviews, automatically providing comments on your code in the IDE, flagging suspicious code patterns, providing patches where available, and even assessing deployment risk so you can get feedback on your code quickly.

Q Developer is a generative AI-powered assistant for designing, building, testing, deploying, and maintaining software. Its agents for software development have a deep understanding of your entire code repos, so they can accelerate many tasks beyond coding. By automating the first round of code reviews and improving review consistency, Q Developer empowers code authors to fix issues faster, streamlining the process for both authors and reviewers. With this new capability, Q Developer can help you get immediate feedback for your code reviews and code fixes where available, so you can increase the speed of iteration and improve the quality of your code.

This capability is available in the integrated development environment (IDE) through a new chat command: /review. You can start automating code reviews in the Visual Studio Code and IntelliJ IDEA IDEs with either an Amazon Q Developer Free Tier or Pro Tier subscription. For more details on pricing, see Amazon Q Developer pricing.

This capability is available in all AWS Regions where Amazon Q Developer is available. To get started with automated code reviews, visit Amazon Q Developer or read the news blog.

Read more


Amazon Bedrock Model Distillation is now available in preview

With Amazon Bedrock Model Distillation, customers can use smaller, faster, more cost-effective models that deliver use-case specific accuracy that is comparable to the most capable models in Amazon Bedrock.

Today, fine-tuning a smaller, cost-efficient model to increase its accuracy for a customer's use case is an iterative process in which customers need to write prompts and responses, refine the training dataset, ensure that the training dataset captures diverse examples, and adjust the training parameters.

Amazon Bedrock Model Distillation automates the process needed to generate synthetic data from the teacher model, trains and evaluates the student model, and then hosts the final distilled model for inference. To remove some of the burden of iteration, Model Distillation may choose to apply different data synthesis methods that are best suited for your use case, creating a distilled model that approximately matches the advanced model for the specific use case. For example, Bedrock may expand the training dataset by generating similar prompts, or generate high-quality synthetic responses using customer-provided prompt-response pairs as golden examples.
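
A heavily hedged sketch of kicking off a distillation job follows; it assumes the DISTILLATION customization type and teacher-model fields added for this preview, with placeholder ARNs and S3 paths throughout.

```python
import boto3

bedrock = boto3.client("bedrock")

# Sketch: distill a large teacher model into a smaller student model.
# Field names are assumptions based on the preview; ARNs and paths are placeholders.
bedrock.create_model_customization_job(
    jobName="distill-llama-70b-to-8b",
    customModelName="support-router-distilled",
    roleArn="arn:aws:iam::123456789012:role/BedrockDistillationRole",
    customizationType="DISTILLATION",
    baseModelIdentifier="arn:aws:bedrock:us-west-2::foundation-model/meta.llama3-1-8b-instruct-v1:0",  # student
    customizationConfig={
        "distillationConfig": {
            "teacherModelConfig": {
                "teacherModelIdentifier": "arn:aws:bedrock:us-west-2::foundation-model/meta.llama3-1-70b-instruct-v1:0",
                "maxResponseLengthForInference": 1000,
            }
        }
    },
    trainingDataConfig={"s3Uri": "s3://my-bucket/distillation/prompts.jsonl"},
    outputDataConfig={"s3Uri": "s3://my-bucket/distillation/output/"},
)
```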

Learn more in our documentation and blog.
 

Read more


Amazon Q Business now offers over 50 actions across popular business applications

Today, we are excited to announce that Amazon Q Business, including Amazon Q Apps, has expanded its capabilities with a ready-to-use library of over 50 actions spanning plugins across popular business applications and platforms. This enhancement allows Amazon Q Business users to complete tasks in other applications without leaving the Amazon Q Business interface, improving the user experience and operational efficiency.

The new plugins cover a wide range of widely used business tools, including PagerDuty, Salesforce, Jira, Smartsheet, and ServiceNow. These integrations enable users to perform tasks such as creating and updating tickets, managing incidents, and accessing project information directly from within Amazon Q Business. With Amazon Q Apps, users can further automate their everyday tasks by leveraging the newly introduced actions directly within their purpose-built apps.

The new plugins are available in all AWS Regions where Amazon Q Business is available.

To get started with the new plugins, customers can access them directly from their Amazon Q Business interface. To learn more about Amazon Q Business plugins and how they can enhance your organization's productivity, visit the Amazon Q Business product page or explore the Amazon Q Business plugin documentation.

Read more


Amazon Bedrock Guardrails now supports Automated Reasoning checks (Preview)

With the launch of the Automated Reasoning checks safeguard in Amazon Bedrock Guardrails, AWS becomes the first and only major cloud provider to integrate automated reasoning into its generative AI offerings. Automated Reasoning checks help detect hallucinations and provide verifiable proof that a large language model (LLM) response is accurate. Automated Reasoning tools are not guessing or predicting accuracy. Instead, they rely on sound mathematical techniques to definitively verify compliance with expert-created Automated Reasoning Policies, consequently improving transparency. Organizations increasingly use LLMs to improve user experiences and reduce operational costs by enabling conversational access to relevant, contextualized information. However, LLMs are prone to hallucinations. Because LLMs generate compelling answers, these hallucinations are often difficult to detect. The possibility of hallucinations, and an inability to explain why they occurred, slows generative AI adoption for use cases where accuracy is critical.

With Automated Reasoning checks, domain experts can more easily build specifications called Automated Reasoning Policies that encapsulate their knowledge in fields such as operational workflows and HR policies. Users of Amazon Bedrock Guardrails can validate generated content against an Automated Reasoning Policy to identify inaccuracies and unstated assumptions, and explain why statements are accurate in a verifiable way. For example, you can configure Automated Reasoning checks to validate answers on topics defined in complex HR policies (which can include constraints on employee tenure, location, and performance) and explain why an answer is accurate with supporting evidence.

Contact your AWS account team to request access to Automated Reasoning checks in Amazon Bedrock Guardrails in US East (N. Virginia) and US West (Oregon) AWS regions. To learn more, visit Amazon Bedrock Guardrails and read the News blog.
 

Read more


Amazon Q Developer adds operational investigation capability (Preview)

Amazon Q Developer now helps you accelerate operational investigations across your AWS environment, completing them in a fraction of the time a manual investigation would take. With a deep understanding of your AWS cloud environment and resources, Amazon Q Developer looks for anomalies in your environment, surfaces related signals for you to explore, identifies potential root-cause hypotheses, and suggests next steps to help you remediate issues faster.

Amazon Q Developer works alongside you throughout your operational troubleshooting journey from issue detection and triaging, through remediation. You can initiate an investigation by selecting the Investigate action on any Amazon CloudWatch data widget across the AWS Management Console. You can also configure Amazon Q to automatically investigate when a CloudWatch alarm is triggered. When an investigation starts, Amazon Q Developer sifts through various signals about your AWS environment including CloudWatch telemetry, AWS CloudTrail Logs, deployment information, changes to resource configuration, and AWS Health events. 

CloudWatch now provides a dedicated investigation experience where teams can collaborate and add findings, view related signals and anomalies, and review suggestions for potential root cause hypotheses. This new capability also provides remediation suggestions for common operational issues across your AWS environment by surfacing relevant AWS Systems Manager Automation runbooks, AWS re:Post articles, and documentation. It also integrates with your existing operational workflows such as Slack via AWS Chatbot. 

The new operational investigation capability within Amazon Q Developer is available at no additional cost during preview in the US East (N. Virginia) Region. To learn more, see the getting started and best practices documentation.

Read more


Introducing Amazon SageMaker Data and AI Governance

Today, AWS announces Amazon SageMaker Data and AI Governance, a new capability that simplifies discovery, governance, and collaboration for data and AI across your lakehouse, AI models, and applications. Built on Amazon DataZone, SageMaker Data and AI Governance allows engineers, data scientists, and analysts to securely discover and access approved data and models using semantic search with generative AI–created metadata. This new offering helps organizations consistently define and enforce access policies using a single permission model with fine-grained access controls.

With SageMaker Data and AI Governance, you can accelerate data and AI discovery and collaboration at scale. You can enhance data discovery by automatically enriching your data and metadata with business context using generative AI, making it easier for all users to find, understand, and use data. You can share data, AI models, prompts, and other generative AI assets with filtering by table and column names or business glossary terms. SageMaker Data and AI Governance helps establish trust and drives transparency in your data pipelines and AI projects with built-in model monitoring to detect bias and report on how features contribute to your model predictions.

To learn more about how to govern your data and AI assets, visit SageMaker Data and AI Governance.

Read more


Data Lineage is now generally available in Amazon DataZone and next generation of Amazon SageMaker

AWS announces the general availability of Data Lineage in Amazon DataZone and the next generation of Amazon SageMaker, a capability that automatically captures lineage from AWS Glue and Amazon Redshift to visualize lineage events from source to consumption. Being OpenLineage-compatible, this feature allows data producers to augment the automated lineage with lineage events captured from OpenLineage-enabled systems or through the API, to provide a comprehensive data movement view to data consumers.

This feature automates the capture of lineage for schemas and transformations of data assets and columns from AWS Glue, Amazon Redshift, and Spark executions, maintaining consistency and reducing errors. With built-in automation, domain administrators and data producers can automate the capture and storage of lineage events when data is configured for data sharing in the business data catalog. Data consumers can gain confidence in an asset's origin from the comprehensive view of its lineage, while data producers can assess the impact of changes to an asset by understanding its consumption. Additionally, the data lineage feature versions lineage with each event, enabling users to visualize lineage at any point in time or compare transformations across an asset's or job's history. This historical lineage provides a deeper understanding of how data has evolved, which is essential for troubleshooting, auditing, and validating the integrity of data assets.

The data lineage feature is generally available in all AWS Regions where Amazon DataZone and next generation of Amazon SageMaker are available.

To learn more, visit Amazon DataZone and next generation of Amazon SageMaker.
 

Read more


Amazon Q in QuickSight unifies insights from structured and unstructured data

Now generally available, Amazon Q in QuickSight provides users with unified insights from structured and unstructured data sources through integration with Amazon Q Business. While structured data is managed in conventional systems, unstructured data such as document libraries, webpages, images and more has remained largely untapped due to its diverse and distributed nature.

With Amazon Q in QuickSight, business users can now augment insights from traditional BI data sources such as databases, data lakes, and data warehouses with contextual information from unstructured sources. Users can get augmented insights within QuickSight's BI interface across multi-visual Q&A and data stories. Users can use multi-visual Q&A to ask questions in natural language and get visualizations and data summaries augmented with contextual insights from Amazon Q Business. With data stories in Amazon Q in QuickSight, users can upload documents or connect to unstructured data sources from Amazon Q Business to create richer narratives or presentations explaining their data with additional context. This integration enables organizations to harness insights from all their data without manual collation, leading to more informed decision-making, time savings, and a significant competitive edge in the data-driven business landscape.

This new capability is generally available to all Amazon QuickSight Pro Users in US East (N. Virginia), and US West (Oregon) AWS Regions.

To learn more, visit the AWS Business Intelligence Blog and the Amazon Q Business What’s New post, and try QuickSight free for 30 days.
 

Read more


Amazon Q Developer can now generate documentation within your source code

Starting today, Amazon Q Developer can document your code by automatically generating readme files and data-flow diagrams within your projects. 

Today, developers report they spend an average of just one hour per day coding. They spend most of their time on tedious, undifferentiated tasks such as learning codebases, writing and reviewing documentation, testing, managing deployments, troubleshooting issues, or finding and fixing vulnerabilities. Q Developer is a generative AI-powered assistant for designing, building, testing, deploying, and maintaining software. Its agents for software development have a deep understanding of your entire code repos, so they can accelerate many tasks beyond coding. With this new capability, Q Developer can help you understand your existing codebases faster, or quickly document new features, so you can focus on shipping features for your customers.

This capability is available in the integrated development environment (IDE) through a new chat command: /doc. You can get started generating documentation within the Visual Studio Code and IntelliJ IDEA IDEs with an Amazon Q Developer Free Tier or Pro Tier subscription. For more details on pricing, see Amazon Q Developer pricing.

This capability is available in all AWS Regions where Amazon Q Developer is available. To get started with generating documentation, visit Amazon Q Developer or read the news blog.

Read more


Announcing Amazon Bedrock IDE in preview as part of Amazon SageMaker Unified Studio

Today we are announcing the preview launch of Amazon Bedrock IDE, a governed collaborative environment integrated within Amazon SageMaker Unified Studio (preview) that enables developers to swiftly build and tailor generative AI applications. It provides an intuitive interface for developers across various skill levels to access Amazon Bedrock's high-performing foundation models (FMs) and advanced customization capabilities in order to collaboratively build custom generative AI applications.

Amazon Bedrock IDE's integration into Amazon SageMaker Unified Studio removes barriers between data, tools, and builders, for generative AI development. Teams can now access their preferred analytics and ML tools alongside Amazon Bedrock IDE's specialized tools for building generative AI applications. Developers can leverage Retrieval Augmented Generation (RAG) to create Knowledge Bases from their proprietary data sources, Agents for complex task automation, and Guardrails for responsible AI development. This unified workspace reduces complexity, accelerating the prototyping, iteration, and deployment of production-ready, responsible generative AI apps aligned with business needs.

Amazon Bedrock IDE is now available in Amazon SageMaker Unified Studio and supported in five AWS Regions. For more information on supported regions, please refer to the Amazon SageMaker Unified Studio regions guide.

Learn more about Amazon Bedrock IDE and its features by visiting the Amazon Bedrock IDE user guide and get started with Bedrock IDE by enabling a “Generative AI application development” project profile using this admin guide.
 

Read more


Amazon Q Business now provides insights from your databases and data warehouses (preview)

Today, AWS announces the public preview of the integration between Amazon Q Business and Amazon QuickSight, delivering a transformative capability that unifies answers from structured data sources (databases, warehouses) and unstructured data (documents, wikis, emails) in a single application.

Amazon Q Business is a generative AI–powered assistant that can answer questions, provide summaries, generate content, and securely complete tasks based on data and information in your enterprise systems. Amazon QuickSight is a business intelligence (BI) tool that helps you visualize and understand your structured data through interactive dashboards, reports, and analytics. While organizations want to leverage generative AI for business insights, they experience fragmented access to unstructured and structured data.

With the QuickSight integration, customers can now link their structured sources to Amazon Q Business through QuickSight’s extensive set of data source connectors. Amazon Q Business responds in real time, combining the QuickSight answer from your structured sources with any other relevant information found in documents. For example, users could ask about revenue comparisons, and Amazon Q Business will return an answer from PDF financial reports along with real-time charts and metrics from QuickSight. This integration unifies insights across knowledge sources, helping organizations make more informed decisions while reducing the time and complexity traditionally required to gather insights.

This integration is available to all Amazon Q Business Pro, Amazon QuickSight Reader Pro, and Author Pro users in the US East (N. Virginia) and US West (Oregon) AWS Regions. To learn more, visit the Amazon Q Business documentation site.

Read more


Announcing Amazon Nova foundation models available today in Amazon Bedrock

We’re excited to announce Amazon Nova, a new generation of state-of-the-art (SOTA) foundation models (FMs) that deliver frontier intelligence and industry-leading price performance. Amazon Nova models available today on Amazon Bedrock are:

  • Amazon Nova Micro, a text-only model that delivers the lowest-latency responses at very low cost.
  • Amazon Nova Lite, a very low-cost multimodal model that is lightning fast for processing image, video, and text inputs.
  • Amazon Nova Pro, a highly capable multimodal model with the best combination of accuracy, speed, and cost for a wide range of tasks.
  • Amazon Nova Canvas, a state-of-the-art image generation model.
  • Amazon Nova Reel, a state-of-the-art video generation model.

Amazon Nova Micro, Amazon Nova Lite, and Amazon Nova Pro are among the fastest and most cost-effective models in their respective intelligence classes. These models have also been optimized to make them easy to use and effective in RAG and agentic applications. With text and vision fine-tuning on Amazon Bedrock, you can customize Amazon Nova Micro, Lite, and Pro to deliver the optimal intelligence, speed, and cost for your needs. With Amazon Nova Canvas and Amazon Nova Reel, you get access to production-grade visual content generation, with built-in controls for safe and responsible AI use like watermarking and content moderation. You can see the latest benchmarks and examples of these models on the Amazon Nova product page.
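
Because the Nova models are standard Bedrock models, you can call them through the Converse API; a minimal sketch with an illustrative model ID follows.

```python
import boto3

client = boto3.client("bedrock-runtime", region_name="us-east-1")

# Minimal sketch: one-shot text generation with Amazon Nova Lite.
response = client.converse(
    modelId="amazon.nova-lite-v1:0",  # illustrative; check the Nova user guide for IDs
    messages=[{"role": "user", "content": [{"text": "Write a two-sentence product description for a hiking boot."}]}],
    inferenceConfig={"maxTokens": 256, "temperature": 0.5},
)
print(response["output"]["message"]["content"][0]["text"])
```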

Amazon Nova foundation models are available in Amazon Bedrock in the US East (N. Virginia) region. Amazon Nova Micro, Lite, and Pro models are also available in the US West (Oregon), and US East (Ohio) regions via cross-region inference. Learn more about Amazon Nova at the AWS News Blog, the Amazon Nova product page, or the Amazon Nova user guide. You can get started with Amazon Nova foundation models in Amazon Bedrock from the Amazon Bedrock console.

Read more


Amazon Q Developer transformation capabilities for mainframe modernization are now available (Preview)

Today, AWS announces new generative AI–powered capabilities of Amazon Q Developer in public preview to help customers and partners accelerate large-scale assessment and modernization of mainframe applications.

Amazon Q Developer is enterprise-ready, offering a unified web experience tailored for large-scale modernization, federated identity, and easier collaboration. Keeping you in the loop, Amazon Q Developer agents analyze and document your code base, identify missing assets, decompose monolithic applications into business domains, plan modernization waves, and refactor code. You can chat with Amazon Q Developer in natural language to share high-level transformation objectives, source repository access, and project context. Amazon Q Developer agents autonomously classify and organize application assets and create comprehensive code documentation to understand and expand the knowledge base of your organization. The agents combine goal-driven reasoning using generative AI and modernization expertise to develop modernization plans customized for your code base and transformation objectives. You can then collaboratively review, adjust, and approve the plans through iterative engagement with the agents. Once you approve the proposed plan, Amazon Q Developer agents autonomously refactor the COBOL code into cloud-optimized Java code while preserving business logic.

By delegating tedious tasks to autonomous Amazon Q Developer agents with your review and approvals, you and your team can collaboratively drive faster modernization, larger project scale, and better transformation quality and performance using generative AI large language models. You can enhance governance and compliance by maintaining a well-documented and explainable trail of transformation decisions.

To learn more, read the blog and visit Amazon Q Developer transformation capabilities webpage and documentation.

Read more


Amazon EC2 P5en instances, optimized for generative AI and HPC, are generally available

Today, AWS announces the general availability of Amazon Elastic Compute Cloud (Amazon EC2) P5en instances, powered by the latest NVIDIA H200 Tensor Core GPUs. These instances deliver the highest performance in Amazon EC2 for deep learning and high performance computing (HPC) applications.

You can use Amazon EC2 P5en instances for training and deploying increasingly complex large language models (LLMs) and diffusion models powering the most demanding generative AI applications. You can also use P5en instances to deploy demanding HPC applications at scale in pharmaceutical discovery, seismic analysis, weather forecasting, and financial modeling.

P5en instances feature up to 8 H200 GPUs, which have 1.7x the GPU memory size and 1.5x the GPU memory bandwidth of the H100 GPUs featured in P5 instances. P5en instances pair the H200 GPUs with high-performance custom 4th Generation Intel Xeon Scalable processors, enabling Gen5 PCIe between CPU and GPU, which provides up to 4x the CPU-to-GPU bandwidth and boosts AI training and inference performance. With up to 3200 Gbps of third-generation Elastic Fabric Adapter (EFA) networking using Nitro v5, P5en shows up to a 35% improvement in latency compared to P5, which uses the previous generation of EFA and Nitro. This helps improve collective communications performance for distributed training workloads such as deep learning, generative AI, real-time data processing, and high-performance computing (HPC) applications. To address customer needs for large scale at low latency, P5en instances are deployed in Amazon EC2 UltraClusters and provide market-leading scale-out capabilities for distributed training and tightly coupled HPC workloads.

P5en instances are now available in the US East (Ohio), US West (Oregon), and Asia Pacific (Tokyo) AWS Regions and US East (Atlanta) Local Zone us-east-1-atl-2a in the p5en.48xlarge size.

To learn more about P5en instances, see Amazon EC2 P5en Instances.

Read more


Introducing latency-optimized inference for foundation models in Amazon Bedrock

Latency-optimized inference for foundation models in Amazon Bedrock is now available in public preview, delivering faster response times and improved responsiveness for AI applications. Currently, these new inference options support Anthropic's Claude 3.5 Haiku model and Meta's Llama 3.1 405B and 70B models, offering reduced latency compared to standard models without compromising accuracy. As verified by Anthropic, with latency-optimized inference in Amazon Bedrock, Claude 3.5 Haiku runs faster on AWS than anywhere else. Additionally, with latency-optimized inference in Bedrock, Llama 3.1 405B and 70B run faster on AWS than on any other major cloud provider.

As more customers move their generative AI applications to production, optimizing the end-user experience becomes crucial, particularly for latency-sensitive applications such as real-time customer service chatbots and interactive coding assistants. Using purpose-built AI chips like AWS Trainium2 and advanced software optimizations in Amazon Bedrock, customers can access more options to optimize their inference for a particular use case. Accessing these capabilities requires no additional setup or model fine-tuning, allowing for immediate enhancement of existing applications with faster response times.
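
A minimal sketch follows, assuming the performanceConfig parameter exposed through the Converse API for this preview; the model ID shown is an illustrative cross-region inference profile.

```python
import boto3

client = boto3.client("bedrock-runtime", region_name="us-east-2")

response = client.converse(
    modelId="us.anthropic.claude-3-5-haiku-20241022-v1:0",  # illustrative profile ID
    messages=[{"role": "user", "content": [{"text": "Classify this ticket: 'My order arrived damaged.'"}]}],
    # Selects the latency-optimized variant where supported; "standard" is the default.
    performanceConfig={"latency": "optimized"},
)
print(response["output"]["message"]["content"][0]["text"])
```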

Latency-optimized inference is available for Anthropic’s Claude 3.5 Haiku and Meta’s Llama 3.1 405B and 70B in the US East (Ohio) Region via cross-region inference. To get started, visit the Amazon Bedrock console. For more information about Amazon Bedrock and its capabilities, visit the Amazon Bedrock product page, pricing page, and documentation.

Read more


Amazon Bedrock Knowledge Bases now supports RAG evaluation (Preview)

Today, we are announcing RAG evaluation support in Amazon Bedrock Knowledge Bases. This capability allows you to evaluate your retrieval-augmented generation (RAG) applications built on Amazon Bedrock Knowledge Bases. You can evaluate either information retrieval alone or retrieval plus content generation. Evaluations are powered by LLM-as-a-Judge technology, with customers having a choice of several judge models to use. For retrieval evaluation, you can select from metrics such as context relevance and coverage. For retrieve-plus-generation evaluation, you can select from quality metrics such as correctness, completeness, and faithfulness (hallucination detection), as well as responsible AI metrics such as harmfulness, answer refusal, and stereotyping. You can also compare results across evaluation jobs to assess Knowledge Bases with different settings, like chunking strategy or vector length, or different content-generating models.

Evaluating RAG applications can be difficult, as there are many components in retrieval and generation that need to be optimized. Now, the Amazon Bedrock Knowledge Bases RAG evaluation tool allows customers to evaluate their Knowledge Base-powered applications conveniently and quickly, where their data and LLMs already live. Additionally, you can incorporate Amazon Bedrock Guardrails directly into your evaluation for even more thorough testing. Using these RAG evaluation tools on Amazon Bedrock can save cost, as well as weeks of time compared to a full offline human-based evaluation, allowing you to improve your application faster and more easily.

To learn more, including region availability, read the AWS News blog and visit the Amazon Bedrock Evaluations page. To get started, log into the Amazon Bedrock Console or use the Amazon Bedrock APIs.

Read more


Amazon Bedrock Model Evaluation now includes LLM-as-a-judge (Preview)

Amazon Bedrock Model Evaluation allows you to evaluate, compare, and select the best foundation models for your use case. Now, you can use a new evaluation capability: LLM-as-a-judge in Preview. This allows you to choose an LLM as your judge to ensure you have the right combination of evaluator models and models being evaluated. You can choose from several available judge LLMs on Amazon Bedrock. You can also select curated quality metrics such as correctness, completeness, and professional style and tone, as well as responsible AI metrics such as harmfulness and answer refusal. You can now also bring your own prompt dataset to ensure the evaluation is customized for your data, and you can compare results across evaluation jobs to make decisions faster.

Previously, you had a choice between human-based model evaluation and automatic evaluation with exact string matching and other traditional NLP metrics. These methods, while fast, did not provide a strong correlation with human evaluators. Now, with LLM-as-a-judge, you can get human-like evaluation quality at a much lower cost than full human-based evaluations, while saving weeks of time. You can use built-in metrics to evaluate objective facts or perform subjective evaluations of writing style and tone on your dataset.
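
A hedged sketch of creating such an evaluation job follows; the evaluator-model fields are assumptions based on the preview, and the role ARN, dataset, metric names, and S3 paths are placeholders.

```python
import boto3

bedrock = boto3.client("bedrock")

# Sketch: evaluate a candidate model with an LLM judge.
# evaluatorModelConfig and the Builtin.* metric names are assumptions.
bedrock.create_evaluation_job(
    jobName="judge-eval-demo",
    roleArn="arn:aws:iam::123456789012:role/BedrockEvalRole",
    evaluationConfig={
        "automated": {
            "datasetMetricConfigs": [{
                "taskType": "General",
                "dataset": {"name": "my-prompts", "datasetLocation": {"s3Uri": "s3://my-bucket/eval/prompts.jsonl"}},
                "metricNames": ["Builtin.Correctness", "Builtin.Completeness"],
            }],
            "evaluatorModelConfig": {
                "bedrockEvaluatorModels": [{"modelIdentifier": "anthropic.claude-3-5-sonnet-20240620-v1:0"}]
            },
        }
    },
    inferenceConfig={"models": [{"bedrockModel": {"modelIdentifier": "amazon.nova-lite-v1:0"}}]},
    outputDataConfig={"s3Uri": "s3://my-bucket/eval/results/"},
)
```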

To learn more about Amazon Bedrock Model Evaluation’s new LLM-as-a-judge capability, including available AWS Regions, read the AWS News Blog and visit the Amazon Bedrock Evaluations page. To get started, sign in to the AWS Management Console or use the Amazon Bedrock APIs.

Read more


Amazon Bedrock Knowledge Bases now provides auto-generated query filters for improved retrieval

Amazon Bedrock Knowledge Bases offers fully-managed, end-to-end Retrieval-Augmented Generation (RAG) workflows to create highly accurate, low latency, secure, and custom GenAI applications by incorporating contextual information from your data sources. Today, we are launching automatically generated query filters, which improve retrieval accuracy by ensuring the documents retrieved are relevant to the query. This feature extends the existing capability of manual metadata filtering by allowing customers to narrow down search results without the need to manually construct complex filter expressions.

RAG applications process user queries by searching across a large set of documents. However, in many situations you may need to retrieve documents with specific attributes or content. With automatically generated query filters enabled, you receive filtered search results based on document metadata without the need to manually construct complex filter expressions. For example, for a query like "How to file a claim in Washington", "Washington" is automatically applied as a state filter so that only documents pertaining to that state are retrieved.
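
A hedged sketch of a Retrieve call with implicit filtering enabled follows; the implicitFilterConfiguration field and attribute schema reflect our reading of the launch, and the knowledge base ID and model ARN are placeholders.

```python
import boto3

client = boto3.client("bedrock-agent-runtime")

# Sketch: describe the filterable metadata so the model can infer a filter
# (e.g., state = "Washington") from the query itself. Field names are assumptions.
response = client.retrieve(
    knowledgeBaseId="KB1234567",  # placeholder
    retrievalQuery={"text": "How to file a claim in Washington"},
    retrievalConfiguration={
        "vectorSearchConfiguration": {
            "numberOfResults": 5,
            "implicitFilterConfiguration": {
                "metadataAttributes": [
                    {"key": "state", "type": "STRING", "description": "US state the document applies to"}
                ],
                "modelArn": "arn:aws:bedrock:us-east-1::foundation-model/anthropic.claude-3-5-sonnet-20240620-v1:0",
            },
        }
    },
)
for result in response["retrievalResults"]:
    print(result["content"]["text"][:80])
```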

The capability is available in US East (N. Virginia), US West (Oregon), Asia Pacific (Tokyo), Asia Pacific (Seoul), Europe (Frankfurt), Europe (Zurich), and AWS GovCloud (US-West). To learn more, visit the documentation.

Read more


Amazon Bedrock Knowledge Bases now supports custom connectors and ingestion of streaming data

Amazon Bedrock Knowledge Bases now supports custom connectors and ingestion of streaming data, allowing developers to add, update, or delete data in their knowledge base through direct API calls. Amazon Bedrock Knowledge Bases offers fully-managed, end-to-end Retrieval-Augmented Generation (RAG) workflows to create highly accurate, low latency, secure, and custom GenAI applications by incorporating contextual information from your company's data sources. With this new capability, customers can easily ingest specific documents from custom data sources or Amazon S3 without requiring a full sync, and ingest streaming data without the need for intermediary storage.

This enhancement enables customers to ingest specific documents from any custom data source and reduce the latency and operational costs of intermediary storage while ingesting streaming data. For instance, a financial services firm can now keep its knowledge base continuously updated with the latest market data, ensuring that its GenAI applications deliver the most relevant information to end users. By eliminating time-consuming full syncs and storage steps, customers gain faster access to data, reducing latency and improving application performance.

Customers can start using this feature either through the console or programmatically via the APIs. In the console, users can select a custom connector as the data source, then add documents, text, or base64 encoded text strings.
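
As a sketch of the direct ingestion path, the call below pushes one inline text document into a knowledge base through its custom data source; the operation and document shape follow our reading of the new API, and the IDs are placeholders.

```python
import boto3

client = boto3.client("bedrock-agent")

# Sketch: add a single document without running a full data source sync.
# Request shape is an assumption based on the new direct-ingestion API.
client.ingest_knowledge_base_documents(
    knowledgeBaseId="KB1234567",   # placeholder
    dataSourceId="DSCUSTOM01",     # placeholder custom data source
    documents=[{
        "content": {
            "dataSourceType": "CUSTOM",
            "custom": {
                "customDocumentIdentifier": {"id": "market-update-2024-12-04"},
                "sourceType": "IN_LINE",
                "inlineContent": {
                    "type": "TEXT",
                    "textContent": {"data": "Closing summary for 2024-12-04: markets ended mixed..."},
                },
            },
        }
    }],
)
```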

This capability is available in all regions where Amazon Bedrock Knowledge Bases is supported. There is no additional cost for using this new custom connector capability.

To learn more, visit Amazon Bedrock Knowledge Bases product documentation.
 

Read more


Amazon Bedrock Knowledge Bases now supports streaming responses

Amazon Bedrock Knowledge Bases offers fully-managed, end-to-end Retrieval-Augmented Generation (RAG) workflows to create highly accurate, low latency, secure, and custom GenAI applications by incorporating contextual information from your company's data sources. Today, we are announcing support for the RetrieveAndGenerateStream API in Bedrock Knowledge Bases. This new streaming API allows Bedrock Knowledge Bases customers to receive the response as it is being generated by the large language model (LLM), rather than waiting for the complete response.

A RAG workflow involves several steps, including querying the data store, gathering relevant context, and then sending the query to an LLM for response generation. This final step could take a few seconds, depending on the latency of the underlying model used in response generation. To reduce this latency for latency-sensitive applications, we're now offering the RetrieveAndGenerateStream API, which provides the response as a stream as it is being generated by the model. This reduces the latency to first response, providing users with a more seamless and responsive experience when interacting with Bedrock Knowledge Bases.
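
A minimal sketch of consuming the stream follows; it assumes the streaming response carries incremental output events shaped like the non-streaming API, and the knowledge base ID and model ARN are placeholders.

```python
import boto3

client = boto3.client("bedrock-agent-runtime")

# Sketch: print the answer as it is generated instead of waiting for the
# full response. Event shape is an assumption mirroring RetrieveAndGenerate.
response = client.retrieve_and_generate_stream(
    input={"text": "What is our parental leave policy?"},
    retrieveAndGenerateConfiguration={
        "type": "KNOWLEDGE_BASE",
        "knowledgeBaseConfiguration": {
            "knowledgeBaseId": "KB1234567",  # placeholder
            "modelArn": "arn:aws:bedrock:us-east-1::foundation-model/anthropic.claude-3-5-sonnet-20240620-v1:0",
        },
    },
)
for event in response["stream"]:
    if "output" in event:
        print(event["output"]["text"], end="", flush=True)
```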

This new capability is currently supported in all existing Amazon Bedrock Knowledge Bases regions. To learn more, visit the documentation.
 

Read more


Amazon Q Business adds support to extract insights from visual elements within documents

Amazon Q Business is a fully managed, generative AI–powered assistant that can answer questions, provide summaries, generate content, and securely complete tasks based on data and information in your enterprise systems. Amazon Q Business now offers capabilities to answer questions and extract insights from visual elements embedded within documents.

This new feature enables users to query information embedded in various types of visuals, including diagrams, infographics, charts, and image-based content. With this launch, customers can now uncover valuable insights captured within visual content embedded in documents, including PDF, Microsoft PowerPoint and Word, and Google Docs and Google Slides files. Amazon Q Business provides transparency by surfacing the specific images used to generate the responses, enabling users to contextualize the extracted information.

The new visual analysis feature is available in all AWS Regions where Amazon Q Business is available. To learn more, visit the Amazon Q Business product page.

Read more


Amazon Bedrock now supports Rerank API to improve accuracy of RAG applications

Amazon Bedrock announces support for reranker models through the Rerank API, enabling developers to improve the relevance of responses in Retrieval-Augmented Generation (RAG) applications. Reranker models rank a set of retrieved documents based on their relevance to the user's query, helping to prioritize the most relevant content to pass to the foundation model (FM) for response generation. Amazon Bedrock Knowledge Bases offers fully-managed, end-to-end RAG workflows to create custom generative AI applications by incorporating contextual information from various data sources. For Amazon Bedrock Knowledge Bases users, the reranker can be enabled through a setting in the Retrieve and RetrieveAndGenerate APIs.

Semantic search in RAG systems can improve document retrieval relevance but may struggle with complex or ambiguous queries. For example, a customer service chatbot asked about returning an online purchase might retrieve documents on both return policies and shipping guidelines. Without proper ranking, the generated response could focus on shipping instead of returns, missing the user's intent. Now, Amazon Bedrock provides access to reranking models that address this by reordering retrieved documents based on their relevance to the user query. This ensures the most useful information is sent to the foundation model for response generation, optimizing context window usage and potentially reducing costs.
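
To illustrate, the sketch below reranks two inline passages against the return-policy query; the request shape and the Amazon Rerank model ARN follow our reading of the Rerank API and should be verified against the documentation.

```python
import boto3

client = boto3.client("bedrock-agent-runtime", region_name="us-west-2")

docs = [
    "Our return policy allows online purchases to be returned within 30 days...",
    "Standard shipping takes 5-7 business days; expedited options are available...",
]

# Sketch: score each passage for relevance to the query and reorder them.
response = client.rerank(
    queries=[{"type": "TEXT", "textQuery": {"text": "How do I return an online purchase?"}}],
    sources=[
        {"type": "INLINE", "inlineDocumentSource": {"type": "TEXT", "textDocument": {"text": d}}}
        for d in docs
    ],
    rerankingConfiguration={
        "type": "BEDROCK_RERANKING_MODEL",
        "bedrockRerankingConfiguration": {
            "modelConfiguration": {"modelArn": "arn:aws:bedrock:us-west-2::foundation-model/amazon.rerank-v1:0"},
            "numberOfResults": 2,
        },
    },
)
for r in response["results"]:
    print(r["index"], round(r["relevanceScore"], 3))
```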

The Rerank API supports Amazon Rerank 1.0 and Cohere Rerank 3.5 models. These models are available in US West (Oregon), Canada (Central), Europe (Frankfurt), and Asia Pacific (Tokyo).

Please visit the Amazon Bedrock product documentation. For details on pricing, please refer to the pricing page.
 

Read more


PartyRock improves app discovery and announces upcoming free daily use

Starting today, PartyRock supports improved app discovery using search, making it even easier to explore and build with generative AI. In addition, a new and improved daily free usage model will replace the current free trial grant in 2025, giving everyone recurring free daily use to build AI apps on PartyRock.

Previously, AWS offered new PartyRock users a free trial for a limited time; starting in 2025, a free daily use grant will let you access and experiment with PartyRock apps without the worry of exhausting free trial credits. Since its launch in November 2023, more than half a million apps have been created by PartyRock users. Until now, discovering those apps required link or playlist sharing, or browsing featured apps on the PartyRock Discover page. Users can now use the search bar on the homepage to explore all publicly published PartyRock apps.

Discover how you can build apps to help improve your everyday individual productivity and experiment with these new features by trying PartyRock today. To learn more, read our AWS News Blog.
 

Read more


Announcing Amazon Q Developer transformation capabilities for VMware (Preview)

Today, AWS announces the preview of Amazon Q Developer transformation capabilities for VMware, the first generative AI–powered assistant that can simplify and accelerate the migration and modernization of VMware workloads to Amazon Elastic Compute Cloud (EC2). These new capabilities help you streamline complex VMware transformation tasks, reducing the time and effort required to move VMware workloads to the cloud. Using advanced AI techniques to automate critical steps in the migration process, Amazon Q Developer helps accelerate your cloud journey, reduce costs, and drive innovation.

Amazon Q Developer transformation agents simplify and automate VMware transformation tasks including on-premises application data discovery, wave planning, network translation and deployment, and orchestration of the overall migration process. Two of the most challenging aspects of VMware transformations, wave planning and network translation, are now automated using VMware domain-expert agents and large language models (LLMs). These AI-powered tools convert VMware networking configurations and firewall rules into native AWS network constructs, significantly reducing complexity and potential errors. Importantly, Amazon Q Developer maintains a balance between automation and human oversight, proactively prompting for user input at key decision points to ensure accuracy and control throughout the migration and modernization process.

The preview of Amazon Q Developer transformation capabilities for VMware is available in US East (N. Virginia) AWS region. To learn more about Amazon Q Developer and how it can accelerate your migration to AWS, visit Amazon Q Developer.

Read more


The Amazon Q index enhances software vendors’ AI experiences

Independent software vendors (ISVs) like Asana, Miro, PagerDuty, Zoom, and more are integrating the Amazon Q index into their applications to enrich their generative AI experiences with enterprise knowledge and user context spanning multiple applications. End customers remain in control of which applications can access their data, and the index retains user-level permissions.

The Amazon Q index is a canonical source of content and data that unites data from more than 40 supported connectors. Amazon Q Business customers create an index based on their enterprise data so that generated responses, insights, and actions are most relevant to employees. Software providers register their application with Amazon Q Business, and their customers then permit them to access their indexed data. Once connected, the software vendor uses the additional data to enrich its native generative AI features to deliver more personalized responses back to the customer. This new feature inherits the same security, privacy, and guardrails as Amazon Q Business, accelerating an ISV's generative AI roadmap so they can focus their efforts on innovative, differentiated features for their end users.

ISVs can use the Amazon Q index in all AWS Regions where Amazon Q Business is available. 

Learn more about the Amazon Q index for software providers.

Read more


Amazon Q Developer can now provide more personalized chat answers based on console context

Today, AWS announces the general availability of console context awareness for the Amazon Q Developer chat within the AWS Management Console. This new capability allows Amazon Q Developer to dynamically understand and respond to inquiries based on the specific AWS service you are currently viewing or configuring and the region you are operating within. For example, if you are working within the Amazon Elastic Container Service (Amazon ECS) console, you can ask "How can I create a cluster?" and Amazon Q Developer will recognize the context and provide relevant guidance tailored to creating ECS clusters.

This update enables more natural conversations without repetitive context details, allowing you to arrive at the answers you seek faster. This capability is included at no additional cost in the Amazon Q Developer Free Tier, and it is also included in the Amazon Q Developer Pro Tier, which requires a paid subscription. For more information on pricing, please see the Amazon Q Developer Pricing page. You can access this feature in all AWS Regions where Amazon Q Developer chat is available in the AWS Management Console. You can get started today by chatting with Amazon Q Developer in the AWS Management Console.
 

Read more


Amazon Bedrock Agents now supports custom orchestration

Amazon Bedrock Agents now supports custom orchestration, allowing developers to control how agents handle multistep tasks, make decisions, and execute complex workflows. This capability enables developers to define custom orchestration logic for their agents using AWS Lambda, providing the flexibility to tailor an agent's behavior to fit specific use cases.

With Custom Orchestration, developers can implement any customized orchestration strategy for their agents, including Plan and Solve, Tree of Thought, and Standard Operating Procedures (SOP). This ensures agents perform tasks in the desired order, manage states effectively, and integrate seamlessly with external tools. Whether handling complex business processes or automating intricate workflows, custom orchestration offers greater control, accuracy, and efficiency to meet business objectives.
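
A hedged sketch of wiring this up at agent-creation time follows; the orchestrationType and customOrchestration fields reflect our reading of the launch, and the role, model, and Lambda identifiers are placeholders.

```python
import boto3

client = boto3.client("bedrock-agent")

# Sketch: point the agent at a Lambda function that implements your own
# orchestration loop (e.g., Plan and Solve). Field names are assumptions.
client.create_agent(
    agentName="claims-agent",
    agentResourceRoleArn="arn:aws:iam::123456789012:role/BedrockAgentRole",
    foundationModel="anthropic.claude-3-5-sonnet-20240620-v1:0",
    orchestrationType="CUSTOM_ORCHESTRATION",
    customOrchestration={
        "executor": {"lambda": "arn:aws:lambda:us-east-1:123456789012:function:my-orchestrator"}
    },
)
```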

Custom Orchestration is now available in all AWS Regions where Amazon Bedrock Agents are supported. To learn more, visit the documentation.
 

Read more


Introducing Amazon Q Apps with private sharing

Amazon Q Apps, a capability within Amazon Q Business to create lightweight, generative AI-powered apps, now supports private sharing. This new feature enables app creators to restrict app access to select Amazon Q Business users, providing more granular control over app visibility and usage within organizations.

Previously, Amazon Q Apps could only be kept private for individual use or published to all users of the Amazon Q Business environment through the Amazon Q Apps library. Now app creators can share their apps with specific individuals, allowing for more targeted collaboration and controlled access. App users with access to shared apps can find them in the Amazon Q Apps Library and run them. Apps shown in the library respect the access set by the app creator, so they are visible only to selected users. Private sharing enables new functional use cases. For instance, a messaging-compliant document generation app may be shared company-wide for anyone in the organization to use, while a customer outreach app could be restricted to members of the sales team only. Private sharing also opens up possibilities for app creators to gather early feedback from a small group of users before wider distribution of their app.

Amazon Q Apps with private sharing is now available in the same regions where Amazon Q Business is available.

To learn more about private sharing in Amazon Q Apps, visit the Q Apps documentation.

Read more


Amazon Q Apps introduces data collection (Preview)

Amazon Q Apps, the generative AI-powered app creation capability of Amazon Q Business, now offers a new data collection feature in public preview. This enhancement enables app creators to collect data from multiple users within their organization, further enhancing the collaborative quality of Amazon Q Apps for various business needs.

With the new ability to collect data through form cards, app creators can design apps to gather information for a diverse set of business use cases, such as conducting team surveys, compiling questions for company-wide meetings, tracking new hire onboarding progress, or running a project retrospective. These apps can further leverage generative AI to analyze the collected data, identify common themes, summarize ideas, and provide actionable insights. A shared data collection app can be instantiated into different data collections by app users, each with its own unique, shareable link. App users can participate in an ongoing data collection to submit responses, or start their own data collection without the need to duplicate the app.

Amazon Q Apps with data collection is available in the regions where Amazon Q Business is available.

To learn more about data collection in Amazon Q Apps and how it can benefit your organization, visit the Q Apps documentation.

Read more


Amazon Q Java transformation launches Step-by-Step and Library Upgrades

Amazon Q Developer Java upgrade transformation now offers step-by-step upgrades and library upgrades for Java 17 applications. This new feature allows developers to review and accept code changes in multiple diffs, and to test the proposed changes in each diff step by step. Additionally, Amazon Q can now upgrade libraries for applications already on Java 17, enabling continuous maintenance.

This launch significantly improves the code review and application modernization process. By allowing developers to review a smaller set of code changes at a time, it makes errors easier to fix when manual work is required. The ability to upgrade apps already on Java 17 to the latest reliable libraries helps organizations save time and effort in maintaining their applications across the board.

This capability is available within the Visual Studio Code and IntelliJ IDEs.

Learn more and get started with these new features here.

Read more


Amazon Q Developer now provides natural language cost analysis

Today, AWS announces the addition of cost analysis capabilities to Amazon Q Developer, allowing customers to retrieve and interpret their AWS cost data through natural language interactions. Amazon Q Developer is a generative AI-powered assistant that helps customers build, deploy, and operate applications on AWS. The cost analysis capability helps users of all skill levels better understand and manage their AWS spending, without prior knowledge of AWS Cost Explorer.

Customers can now ask Amazon Q Developer questions about their AWS costs such as "Which region had the largest cost increase last month?" or "What services cost me the most last quarter?". Q interprets these questions, analyzes the relevant cost data, and provides easy-to-understand responses. Each answer includes transparency on the Cost Explorer parameters used and a link to visualize the data in Cost Explorer.

This feature is now available in all AWS Regions where Amazon Q Developer is supported. Customers can access it via the Amazon Q icon in the AWS Management Console. To get started, see the AWS Cost Management user guide.
 

Read more


Amazon Q Developer now transforms embedded SQL from Oracle to PostgreSQL

When you use AWS Database Migration Service (DMS) and DMS Schema Conversion to migrate a database, you might need to convert the embedded SQL in your application to be compatible with your target database. Rather than converting it manually, you can use Amazon Q Developer in the IDE to automate the conversion.

Amazon Q Developer uses metadata from DMS Schema Conversion to convert embedded SQL in your application to a version that is compatible with your target database. Amazon Q Developer detects Oracle SQL statements in your application and converts them to PostgreSQL. You can review and accept the proposed changes, view a summary of the transformation, and follow the recommended next steps in the summary to verify and test the transformed code.
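
As an illustration, the rewrites typically map Oracle-specific constructs to their PostgreSQL equivalents. The following hypothetical before-and-after shows embedded SQL strings in a Python application (table and column names are invented):

  # Embedded Oracle SQL (before): NVL, SYSDATE, and ROWNUM are Oracle-specific.
  oracle_sql = """
      SELECT order_id, NVL(discount, 0) AS discount
      FROM orders
      WHERE order_date > SYSDATE - 30
        AND ROWNUM <= 10
  """

  # Converted PostgreSQL (after): COALESCE, CURRENT_TIMESTAMP, and LIMIT.
  postgres_sql = """
      SELECT order_id, COALESCE(discount, 0) AS discount
      FROM orders
      WHERE order_date > CURRENT_TIMESTAMP - INTERVAL '30 days'
      LIMIT 10
  """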

This capability is available within the Visual Studio Code and IntelliJ IDEs.

Learn more and get started here.
 

Read more


Amazon SageMaker introduces Scale Down to Zero for AI inference to help customers save costs

We are excited to announce Scale Down to Zero, a new capability in Amazon SageMaker Inference that allows endpoints to scale to zero instances during periods of inactivity. This feature can significantly reduce costs for running inference using AI models, making it particularly beneficial for applications with variable traffic patterns such as chatbots, content moderation systems, and other generative AI use cases.

With Scale Down to Zero, customers can configure their SageMaker inference endpoints to automatically scale to zero instances when not in use, then quickly scale back up when traffic resumes. This capability is effective for scenarios with predictable traffic patterns, intermittent inference traffic, and development/testing environments. Implementing Scale Down to Zero is simple with SageMaker Inference Components. Customers can configure auto-scaling policies through the AWS SDK for Python (Boto3), SageMaker Python SDK, or the AWS Command Line Interface (AWS CLI). The process involves setting up an endpoint with managed instance scaling enabled, configuring scaling policies, and creating CloudWatch alarms to trigger scaling actions.
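
The following is a minimal sketch of that flow using Boto3 and Application Auto Scaling, assuming an existing endpoint with an inference component named my-inference-component; verify the metric and dimension names against the documentation:

  import boto3

  autoscaling = boto3.client("application-autoscaling")
  resource_id = "inference-component/my-inference-component"  # placeholder name

  # 1. Allow the component to scale down to zero copies.
  autoscaling.register_scalable_target(
      ServiceNamespace="sagemaker",
      ResourceId=resource_id,
      ScalableDimension="sagemaker:inference-component:DesiredCopyCount",
      MinCapacity=0,
      MaxCapacity=4,
  )

  # 2. Track per-copy invocations so copies scale with load.
  autoscaling.put_scaling_policy(
      PolicyName="scale-on-invocations",
      ServiceNamespace="sagemaker",
      ResourceId=resource_id,
      ScalableDimension="sagemaker:inference-component:DesiredCopyCount",
      PolicyType="TargetTrackingScaling",
      TargetTrackingScalingPolicyConfiguration={
          "TargetValue": 5.0,
          "PredefinedMetricSpecification": {
              "PredefinedMetricType": "SageMakerInferenceComponentInvocationsPerCopy"
          },
      },
  )
  # 3. Per the announcement, a CloudWatch alarm on incoming request activity
  #    then triggers the scale-out from zero when traffic resumes.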

Scale Down to Zero is now generally available in all AWS regions where Amazon SageMaker is supported. To learn more about implementing Scale Down to Zero and optimizing costs for generative AI deployments, please visit our documentation page.
 

Read more


Amazon Q Developer Pro tier introduces a new, improved dashboard for user activity

Amazon Q Developer Pro tier now provides a detailed usage activity dashboard that gives administrators greater visibility into how their subscribed users are leveraging Amazon Q Developer features and improving their productivity. The dashboard offers insights into user activity metrics, including the number of AI-generated code lines and the acceptance rate of individual features such as inline code and chat suggestions in the developer’s integrated development environment (IDE). This information enables administrators to monitor usage and evaluate productivity gains achieved through Amazon Q Developer.

New customers will have this usage dashboard enabled by default. Existing Amazon Q Developer administrators can activate the dashboard through the AWS Management Console to start tracking detailed usage metrics. Existing customers can also continue to view a copy of the previous set of metrics and usage data, in addition to the new detailed usage metrics dashboard. To learn more about this feature, visit Amazon Q Developer User Guide.

These improvements come in conjunction with the recently launched per-user activity report and last activity date features for Amazon Q Developer admins, further enhancing visibility and control over user activity.

To learn more about Amazon Q Developer Pro tier subscription management features, visit the AWS Console.

Read more


Announcing InlineAgents for Agents for Amazon Bedrock

Agents for Amazon Bedrock now offers InlineAgents, a new feature that allows developers to define and configure Bedrock Agents dynamically at runtime. This enhancement provides greater flexibility and control over agent capabilities, enabling users to specify foundation models, instructions, action groups, guardrails, and knowledge bases on-the-fly without relying on pre-configured control plane settings.

With InlineAgents, developers can easily customize their agents for specific tasks or user requirements without creating new agent versions or preparing the agent. This enables rapid experimentation with different AI configurations: trying out various agent features and dynamically updating the tools available to an agent, all without creating separate agents.

InlineAgents is available through the new InvokeInlineAgent API in the Amazon Bedrock Agent Runtime service. This feature maintains full compatibility with existing Bedrock Agents while offering improved flexibility and ease of use. InlineAgents is now available in all AWS Regions where Agents for Amazon Bedrock is supported.
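
A minimal Boto3 sketch of the new API is shown below; the parameters reflect the InvokeInlineAgent operation, but treat the exact request shape as an assumption to verify against the SDK documentation:

  import boto3

  runtime = boto3.client("bedrock-agent-runtime")

  # Define and invoke an agent entirely at runtime; nothing is pre-configured.
  response = runtime.invoke_inline_agent(
      sessionId="demo-session-1",
      foundationModel="anthropic.claude-3-5-sonnet-20240620-v1:0",
      instruction="You are a support agent that answers order-status questions.",
      inputText="Where is order 12345?",
  )

  # The response is an event stream; concatenate the text chunks.
  completion = ""
  for event in response["completion"]:
      if "chunk" in event:
          completion += event["chunk"]["bytes"].decode("utf-8")
  print(completion)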

To learn more about InlineAgents and how to get started, see the Amazon Bedrock Developer Guide and the AWS SDK documentation for the InvokeInlineAgent API and a code sample to create dynamic tooling.

Read more


Amazon SageMaker launches Multi-Adapter Model Inference

Today, Amazon SageMaker introduces new multi-adapter inference capabilities that unlock exciting possibilities for customers using pre-trained language models. This feature allows you to deploy hundreds of fine-tuned LoRA (Low-Rank Adaptation) model adapters behind a single endpoint, dynamically loading the appropriate adapters in milliseconds based on the request. This enables you to efficiently host many specialized LoRA adapters built on a common base model, delivering high throughput and cost-savings compared to deploying separate models.
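
A hypothetical Boto3 sketch of the pattern: register a fine-tuned adapter as an inference component layered on a base-model component, then target it per request. The component names and the adapter artifact field are assumptions to verify in the developer guide:

  import boto3

  sm = boto3.client("sagemaker")
  smr = boto3.client("sagemaker-runtime")

  # Register a LoRA adapter on top of an existing base-model inference component.
  # (Field names here are illustrative assumptions, not the confirmed API shape.)
  sm.create_inference_component(
      InferenceComponentName="customer-a-adapter",
      EndpointName="my-multi-adapter-endpoint",
      Specification={
          "BaseInferenceComponentName": "base-llm-component",
          "Container": {"ArtifactUrl": "s3://my-bucket/adapters/customer-a/"},
      },
  )

  # Route a request to that adapter; SageMaker loads it dynamically.
  response = smr.invoke_endpoint(
      EndpointName="my-multi-adapter-endpoint",
      InferenceComponentName="customer-a-adapter",
      ContentType="application/json",
      Body=b'{"inputs": "Draft a renewal email for customer A."}',
  )
  print(response["Body"].read().decode("utf-8"))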

With multi-adapter inference, you can quickly customize pre-trained models to meet diverse business needs. For example, marketing and SaaS companies can personalize AI/ML applications using each customer's unique images, communication style, and documents to generate tailored content in seconds. Similarly, enterprises in industries like healthcare and financial services can reuse a common LoRA-powered base model to tackle a variety of specialized tasks, from medical diagnosis to fraud detection, by simply swapping in the appropriate fine-tuned adapter. This flexibility and efficiency unlocks new opportunities to deploy powerful, adaptable AI across your organization.

The multi-adapter inference feature is generally available in: Asia Pacific (Tokyo, Seoul, Mumbai, Singapore, Sydney, Jakarta), Canada (Central), Europe (Frankfurt, Stockholm, Ireland, London), Middle East (UAE), South America (Sao Paulo), US East (N. Virginia, Ohio), and US West (Oregon).

To get started, refer to the Amazon SageMaker developer guide for information on using LoRA and managing model adapters.
 

Read more


Amazon S3 Connector for PyTorch now supports Distributed Checkpoint

Amazon S3 Connector for PyTorch now supports Distributed Checkpoint (DCP), reducing the time needed to write checkpoints to Amazon S3. DCP is a PyTorch feature for saving and loading machine learning (ML) models from multiple training processes in parallel. PyTorch is an open source ML framework used to build and train ML models.

Distributed training jobs often run for several hours or even days, and checkpoints are written frequently to improve fault tolerance. For example, jobs training large foundation models often run for several days and generate checkpoints that are hundreds of gigabytes in size. Using DCP with Amazon S3 Connector for PyTorch helps you reduce the time to write these large checkpoints to Amazon S3, keeping your compute resources utilized and ultimately resulting in lower compute costs.
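
A short sketch of saving and loading a distributed checkpoint through the connector; the S3StorageWriter and S3StorageReader constructor arguments follow the project's README at the time of writing and may evolve, so treat them as assumptions:

  import torch
  import torch.distributed.checkpoint as dcp
  from s3torchconnector.dcp import S3StorageReader, S3StorageWriter

  model = torch.nn.Linear(1024, 1024)  # stand-in for a large model

  # Write checkpoint shards from all ranks in parallel, directly to S3.
  dcp.save(
      state_dict={"model": model.state_dict()},
      storage_writer=S3StorageWriter(region="us-east-1",
                                     path="s3://my-bucket/checkpoints/step-1000/"),
  )

  # Load the checkpoint back from S3 into an existing state dict.
  state_dict = {"model": model.state_dict()}
  dcp.load(
      state_dict=state_dict,
      storage_reader=S3StorageReader(region="us-east-1",
                                     path="s3://my-bucket/checkpoints/step-1000/"),
  )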

Amazon S3 Connector for PyTorch is an open source project. To get started, visit the GitHub page.

Read more


Amazon Q Business now available as browser extension

Today, Amazon Web Services announces the general availability of Amazon Q Business browser extensions for Google Chrome, Mozilla Firefox, and Microsoft Edge. Users can now supercharge their browsers’ intelligence and receive context-aware, generative AI assistance, making it easy to get on-the-go help for their daily tasks.

The Amazon Q Business browser extension makes it easy for users to summarize web pages, ask questions about web content or uploaded files, and leverage large language model knowledge directly within their browser. With the browser extension, users can maximize reading productivity, streamline their research and analysis of complex information, and get instant help when creating content.

The Amazon Q Business browser extension is now available in the following AWS Regions: US East (N. Virginia), US East (Ohio), US West (N. California), and US West (Oregon).

Learn how to boost your productivity with AI-powered assistance within your browser by visiting the Amazon Q Business product page and the Amazon Q Business documentation site.

Read more


Smartsheet connector for Amazon Q Business is now generally available

Today, AWS announces the general availability of the Smartsheet connector for Amazon Q Business. Smartsheet is a modern enterprise work management platform. This connector makes it easy to synchronize data from your Smartsheet instance with your Amazon Q Business index. Once implemented, your employees can use Amazon Q Business to ask their intelligent assistant about their Smartsheet projects and tasks.

Amazon Q Business is a generative AI–powered assistant that can answer questions, provide summaries, generate content, and securely complete tasks based on data and information in your enterprise systems. It empowers employees to be more creative, data-driven, efficient, prepared, and productive. The over 40 connectors supported by Amazon Q Business can be scheduled to automatically sync your index with your selected data sources, so you're always securely searching through the most up-to-date content.

To learn more about Amazon Q Business and its integration with Smartsheet, visit our Amazon Q Business connectors webpage and documentation. The new connector with Smartsheet is available in all AWS Regions where Amazon Q Business is available.

Read more


Amazon Bedrock Model Evaluation now available in Asia Pacific (Seoul)

Model Evaluation on Amazon Bedrock allows you to evaluate, compare, and select the best foundation models for your use case. Amazon Bedrock offers a choice of automatic evaluation and human evaluation. You can use automatic evaluation with predefined algorithms for metrics such as accuracy, robustness, and toxicity. For subjective or custom metrics, such as friendliness, style, and alignment to brand voice, you can set up a human evaluation workflow with a few clicks. Human evaluation workflows can leverage your own employees or an AWS-managed team as reviewers. Model evaluation provides built-in curated datasets, or you can bring your own datasets.

Now, customers can evaluate models in the Asia Pacific (Seoul) region.

Model Evaluation on Amazon Bedrock is now generally available in these commercial regions and the AWS GovCloud (US-West) Region.

To learn more about Model Evaluation on Amazon Bedrock, see the Amazon Bedrock developer experience web page. To get started, sign in to Amazon Bedrock on the AWS Management Console or use the Amazon Bedrock APIs.
 

Read more


Amazon Bedrock Flows is now generally available with two new capabilities

Today, we’re announcing the general availability of Amazon Bedrock Flows, previously known as Prompt Flows, and adding two key new capabilities. Bedrock Flows enables you to link the latest foundation models, Prompts, Agents, Knowledge Bases, and other AWS services together in an intuitive visual builder to accelerate the creation and execution of generative AI workflows. Bedrock Flows now also provides real-time visibility into workflow execution and safeguards through Amazon Bedrock Guardrails.

Authoring multi-step generative AI workflows is an iterative, time-consuming process that previously required manually adding output nodes to each step to validate the flow execution. With Bedrock Flows, you can now view the input and output of each step from the test window to quickly validate and debug the flow execution in real time. You can also configure the Amazon Bedrock Runtime InvokeFlow API to publish trace events to track the flow execution programmatically. Next, to safeguard your workflows from potentially harmful content, you can attach Bedrock Guardrails to Prompt and Knowledge Base nodes directly in the Flows builder. This seamless integration allows you to block unwanted topics and filter out harmful content or sensitive information in your flows.
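
A brief Boto3 sketch of invoking a flow and reading both output and trace events from the response stream; the flow and alias identifiers are placeholders, and the enableTrace flag name is an assumption to check against the API reference:

  import boto3

  runtime = boto3.client("bedrock-agent-runtime")

  response = runtime.invoke_flow(
      flowIdentifier="FLOW1234",        # placeholder flow ID
      flowAliasIdentifier="ALIAS5678",  # placeholder alias ID
      enableTrace=True,                 # assumed toggle for trace events
      inputs=[{
          "nodeName": "FlowInputNode",
          "nodeOutputName": "document",
          "content": {"document": "Summarize our Q3 sales results."},
      }],
  )

  # The stream interleaves node outputs with trace events.
  for event in response["responseStream"]:
      if "flowOutputEvent" in event:
          print("output:", event["flowOutputEvent"]["content"]["document"])
      elif "flowTraceEvent" in event:
          print("trace:", event["flowTraceEvent"]["trace"])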

Bedrock Flows with the new capabilities are now generally available in all regions where Amazon Bedrock is available, except for GovCloud regions. For pricing, visit the Amazon Bedrock Pricing page. To get started, see the following list of resources:

  1. Video demo
  2. Blog post
  3. AWS user guide

Read more


Amazon Bedrock Knowledge Bases now supports binary vector embeddings to build RAG applications

Amazon Bedrock Knowledge Bases now supports binary vector embeddings for building Retrieval Augmented Generation (RAG) applications. This feature is available with the Titan Text Embeddings V2 and Cohere Embed models. Amazon Bedrock Knowledge Bases offers fully managed RAG workflows to create highly accurate, low-latency, secure, and customizable RAG applications by incorporating contextual information from an organization's data sources.

Binary vector embeddings represent document embeddings as binary vectors, with each dimension encoded as a single binary digit (0 or 1). Binary embeddings in RAG applications offer significant benefits in storage efficiency, computational speed, and scalability. They are particularly useful for large-scale information retrieval, resource-constrained environments, and real-time applications.

This new capability is currently supported with Amazon OpenSearch Serverless as the vector store. It is supported in all Amazon Bedrock Knowledge Bases regions where Amazon OpenSearch Serverless and Amazon Titan Text Embeddings V2 or Cohere Embed are available.

For more information, please refer to the documentation.

Read more


Amazon Q Business introduces ability to reuse recently uploaded files in a conversation

Amazon Q Business is a fully managed, generative AI–powered assistant that can answer questions, provide summaries, generate content, and securely complete tasks based on data and information in your enterprise systems. Users can upload files, and Amazon Q can help summarize them or answer questions about them. Starting today, users can drag and drop files to upload them, and reuse any recently uploaded files in new conversations without uploading them again.

With the recent documents list, users save time searching for and re-uploading frequently used files in Amazon Q Business. The list is viewable only by the individual who uploaded the file, and users can clear the cached list by deleting the conversation in which the file was used. Along with the recent documents list, users can now drag and drop files they want to upload directly into any conversation in Amazon Q Business.

The ability to attach from recent files is available in all AWS Regions where Amazon Q Business is available.

You can enable attach from recent files for your team by following steps in the AWS Documentation. To learn more about Amazon Q Business, visit the Amazon Q homepage.

Read more


AWS Announces Amazon Q account resources chat in the AWS Console Mobile App

Today, Amazon Web Services (AWS) is announcing the general availability of Amazon Q Developer’s AWS account resources chat capability in the AWS Console Mobile Application. With this capability, you can use your device’s voice input and output capabilities along with natural language prompts to list resources in your AWS account, get specific resource details, and ask about related resources while on the go.

From the Amazon Q tab in the AWS Console Mobile App, you can ask Q to “list my running EC2 instances in us-east-1” or “list my S3 buckets,” and Amazon Q returns a list of resource details along with a summary. You can ask “which Amazon EC2 instances is Amazon CloudWatch alarm <name> monitoring?” or “what related resources does my EC2 instance <id> have?” and Amazon Q will respond with specific resource details in a mobile-friendly format.

The Console Mobile App lets users view and manage a select set of resources to stay informed and connected with their AWS resources while on the go. Visit the product page for more information about the Console Mobile Application.
 

Read more


Amazon Q Business now supports an integration to Asana (Preview)

Amazon Q Business now supports, in preview, a connector to Asana, a leading enterprise work management platform. This managed connector makes it easy for Amazon Q Business users to synchronize data from their Asana instance with their Amazon Q index. When connected, Amazon Q Business can help users answer questions and generate summaries with context from Asana projects.

Amazon Q Business is a generative AI–powered assistant that can answer questions, provide summaries, generate content, and securely complete tasks based on data and information in your enterprise systems. It empowers employees to be more creative, data-driven, efficient, prepared, and productive. The over 40 connectors supported by Amazon Q Business can be scheduled to automatically sync your index with your selected data sources, so you're always securely searching through the most up-to-date content.

To learn more about Amazon Q Business and its integrations with Asana and Google Calendar, visit the Amazon Q Business connectors page here. These new connectors are available in all AWS Regions where Amazon Q Business is available.
 

Read more


Amazon Q Business now supports an integration to Google Calendar (Preview)

Amazon Q Business now supports a connector to Google Calendar. This expands Amazon Q Business’s support of Google Workspace to include Google Drive, Gmail, and now Google Calendar. Each managed connector makes it easy to synchronize your data with your Amazon Q index.

Amazon Q Business is a generative AI–powered assistant that can answer questions, provide summaries, generate content, and securely complete tasks based on data and information in your enterprise systems. It empowers employees to be more creative, data-driven, efficient, prepared, and productive. The over 40 connectors supported by Amazon Q Business can be scheduled to automatically sync your index with your selected data sources, so you're always securely searching through the most up-to-date content.

To learn more about Amazon Q Business and its integrations with Asana and Google Calendar, visit the Amazon Q Business connectors page here. These new connectors are available in all AWS Regions where Amazon Q Business is available.
 

Read more


Amazon Q Business now supports answers from tables embedded in documents

Amazon Q Business is a generative AI–powered assistant that can answer questions, provide summaries, generate content, and securely complete tasks based on data and information in your enterprise systems. A large portion of that information is found in text narratives stored in various document formats such as PDFs, Word files, and HTML pages. Some information is also stored in tables (e.g. price or product specification tables) embedded in those same document types, CSVs, or spreadsheets. Although Amazon Q Business can provide accurate answers from narrative text, getting answers from these tables requires special handling of more structured information.

Today, we are happy to announce support for tabular search in Amazon Q Business, enabling end users to extract answers from tables embedded in documents ingested in Amazon Q Business. With tabular search in Amazon Q Business, users can ask questions like “what’s the credit card with the lowest APR and no annual fees?” or “which credit cards offer travel insurance?” where the answers may be found in a product-comparison table, inside a marketing PDF stored in an internal repository, or on a website. Answers are returned as tables, lists, or text narratives depending on the context. Tabular search is an out-of-the-box feature in Amazon Q Business that works seamlessly across many domains, with no setup required from admins or end users. The feature supports tables embedded in HTML, PDF, Word, Excel, CSV, and Smartsheet (via the Smartsheet connector) formats.

Amazon Q Business tabular search is available in all AWS Regions where Amazon Q Business is available. To explore Amazon Q Business, visit the website.

Read more


Amazon EC2 G6e instances now available in additional regions

Starting today, Amazon EC2 G6e instances powered by NVIDIA L40S Tensor Core GPUs are available in Asia Pacific (Tokyo) and Europe (Frankfurt, Spain). G6e instances can be used for a wide range of machine learning and spatial computing use cases. G6e instances deliver up to 2.5x better performance compared to G5 instances and up to 20% lower inference costs than P4d instances.

Customers can use G6e instances to deploy large language models (LLMs) with up to 13B parameters and diffusion models for generating images, video, and audio. Additionally, the G6e instances will unlock customers’ ability to create larger, more immersive 3D simulations and digital twins for spatial computing workloads. G6e instances feature up to 8 NVIDIA L40S Tensor Core GPUs with 384 GB of total GPU memory (48 GB of memory per GPU) and third generation AMD EPYC processors. They also support up to 192 vCPUs, up to 400 Gbps of network bandwidth, up to 1.536 TB of system memory, and up to 7.6 TB of local NVMe SSD storage. Developers can run AI inference workloads on G6e instances using AWS Deep Learning AMIs, AWS Deep Learning Containers, or managed services such as Amazon Elastic Kubernetes Service (Amazon EKS) and AWS Batch, with Amazon SageMaker support coming soon.

Amazon EC2 G6e instances are available today in the AWS US East (N. Virginia, Ohio), US West (Oregon), Asia Pacific (Tokyo), and Europe (Frankfurt, Spain) regions. Customers can purchase G6e instances as On-Demand Instances, Reserved Instances, Spot Instances, or as part of Savings Plans.

To get started, use the AWS Management Console, AWS Command Line Interface (CLI), or AWS SDKs. To learn more, visit the G6e instance page.

Read more


Introducing Prompt Optimization in Preview in Amazon Bedrock

Today we are announcing the preview launch of Prompt Optimization in Amazon Bedrock. Prompt Optimization rewrites prompts to elicit higher-quality responses from foundation models.

Prompt engineering is the process of designing prompts to guide foundation models toward generating relevant responses. These prompts need to be tailored to each specific foundation model, following that model's best practices and guidelines. Developers can now use Prompt Optimization in Amazon Bedrock to rewrite their prompts for improved performance on the Claude 3.5 Sonnet, Claude 3 Sonnet, Claude 3 Opus, Claude 3 Haiku, Llama 3 70B, Llama 3.1 70B, Mistral Large 2, and Titan Text Premier models. Developers can easily compare the performance of optimized prompts against the original prompts without any deployment. All optimized prompts are saved as part of Prompt Builder for developers to use in their generative AI applications.
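
A minimal sketch with Boto3, assuming the OptimizePrompt runtime API and the streaming response shape documented at launch:

  import boto3

  runtime = boto3.client("bedrock-agent-runtime")

  response = runtime.optimize_prompt(
      input={"textPrompt": {"text": "Tell me about foundation models"}},
      targetModelId="anthropic.claude-3-5-sonnet-20240620-v1:0",
  )

  # The result arrives as a small event stream; print the rewritten prompt.
  for event in response["optimizedPrompt"]:
      if "optimizedPromptEvent" in event:
          print(event["optimizedPromptEvent"])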

Amazon Bedrock Prompt Optimization is now available in preview. Learn more here.
 

Read more


Amazon Polly launches more synthetic generative voices

Today, we are excited to announce the general availability of seven highly expressive Amazon Polly Generative voices in English, French, Spanish, German, and Italian.

Amazon Polly is a fully-managed service that turns text into lifelike speech, allowing you to create applications that talk and to build engaging speech-enabled products depending on your business needs.

Amazon Polly releases two new female-sounding voices (Indian English Kajal and Italian Bianca) and five new male-sounding generative voices: US Spanish Pedro, Mexican Spanish Andrés, European Spanish Sergio, German Daniel, and French Rémi. This launch not only expands the Polly generative engine to twenty voices, but also offers a unique feature: the five new male-sounding voices share the same voice identity as the US English voice Matthew. This polyglot capability, combined with high expressivity, is useful for customers with a global reach, because the same voice identity can speak multiple languages natively and end customers enjoy an accentless switch from one language to another.
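
For example, synthesizing speech with one of the new voices through the generative engine looks like this with Boto3 (a sketch; confirm voice and Region availability in the Polly documentation):

  import boto3

  polly = boto3.client("polly")

  response = polly.synthesize_speech(
      Engine="generative",
      LanguageCode="es-US",
      VoiceId="Pedro",
      Text="Hola, gracias por llamar. ¿En qué puedo ayudarle hoy?",
      OutputFormat="mp3",
  )

  # Write the returned audio stream to a local MP3 file.
  with open("greeting.mp3", "wb") as audio_file:
      audio_file.write(response["AudioStream"].read())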

Kajal, Bianca, Pedro, Andrés, Sergio, Daniel, and Rémi generative voices are accessible in the US East (N. Virginia), Europe (Frankfurt), and US West (Oregon) regions and complement the other types of voices that are already available in the same regions.

To hear how Polly voices sound, go to Amazon Polly Features. For more details on the Polly offerings and use, please read the Amazon Polly documentation and visit our pricing page.
 

Read more


AWS HealthOmics workflows now support call caching and intermediate file access

We are excited to announce that AWS HealthOmics workflows now support the ability to reuse task results from previous runs, saving time and compute costs for customers. AWS HealthOmics is a fully managed service that empowers healthcare and life science organizations to store, query, and analyze omics data to generate insights that improve health and drive scientific discoveries. With this release, customers can accelerate the development of new pipelines by resuming runs from a previous point of failure or code change.

Call caching, or the ability to resume runs, enables customers to restart runs from the point where new code changes are introduced, skipping unchanged tasks that have already been computed to enable faster iterative workflow development cycles. In addition, task intermediate files are stored in a run cache, enabling advanced debugging and troubleshooting of workflow errors during development. In production workflows, call caching saves partial results from failed runs so that customers can rerun the sample from the point of failure, rather than computing successfully completed tasks again, shortening reprocessing times.

Call caching is now supported for Nextflow, WDL, and CWL workflow languages in all regions where AWS HealthOmics is available: US East (N. Virginia), US West (Oregon), Europe (Frankfurt, Ireland, London), Asia Pacific (Singapore), and Israel (Tel Aviv). To get started with call caching, see the AWS HealthOmics documentation.

Read more


Accelerate AWS CloudFormation troubleshooting with Amazon Q Developer assistance

AWS CloudFormation now offers generative AI assistance powered by Amazon Q Developer to help troubleshoot unsuccessful CloudFormation deployments. This new capability provides easy-to-understand analysis and actionable steps to simplify the resolution of the most common resource provisioning errors encountered during CloudFormation deployments.

When creating or modifying a CloudFormation stack, CloudFormation can encounter errors in resource provisioning, such as missing required parameters for an EC2 instance or inadequate permissions. Previously, troubleshooting a failed stack operation could be a time-consuming process. After identifying the root cause of the failure, you had to search through blogs and documentation for solutions and determine the next steps, leading to longer resolution times. Now, when you review a failed stack operation in the CloudFormation Console, CloudFormation automatically highlights the likely root cause of the failure. You can click the "Diagnose with Q" button in the error alert box and Amazon Q Developer will provide a human-readable analysis of the error, helping you understand what went wrong. If you need further assistance, you can click the "Help me resolve" button to receive actionable resolution steps tailored to your specific failure scenario, helping you accelerate resolution of the error.

To get started, open the CloudFormation Console and navigate to the stack events tab for a provisioned stack. This feature is available in AWS Regions where AWS CloudFormation and Amazon Q Developer are available. Refer to the AWS Region table for service availability details. Visit our user guide to learn more about this feature.
 

Read more


OpenSearch’s vector engine adds support for UltraWarm on Amazon OpenSearch Service

UltraWarm is a fully managed, warm storage tier designed to deliver cost savings on Amazon OpenSearch Service. With OpenSearch 2.17+ domains, you can now store k-NN (vector) indexes on UltraWarm storage, reducing the cost of serving infrequently accessed k-NN indexes through warm and cold storage tiers. With UltraWarm storage, you can further cost-optimize vector search workloads on the OpenSearch vector engine. To learn more, refer to the documentation.

Read more


Disk-optimized vector engine now available on the Amazon OpenSearch Service

Amazon OpenSearch Service's vector engine can now run modern search applications at a third of the cost on OpenSearch 2.17 domains. When you configure a k-NN (vector) index for disk mode, it becomes optimized for operating in a low-memory environment. With disk mode on, the index is compressed using techniques like binary quantization, and search quality (recall) is retained through a disk-optimized rescoring mechanism that uses full-precision vectors. Disk mode is an excellent option for vector search workloads that require high accuracy and cost efficiency and can tolerate latencies in the low hundreds of milliseconds. It provides customers with a lower-cost alternative to the existing in-memory mode when single-digit latency is unnecessary. To learn more, refer to the documentation.
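
As a sketch, disk mode is enabled in the k-NN field mapping when you create the index. The example below uses the opensearch-py client against a 2.17 domain (endpoint and authentication omitted); the on_disk mode name is taken from the OpenSearch k-NN documentation:

  from opensearchpy import OpenSearch

  client = OpenSearch(hosts=[{"host": "my-domain-endpoint", "port": 443}],
                      use_ssl=True)  # auth configuration omitted for brevity

  # Create a k-NN index whose vectors are quantized and served from disk.
  client.indices.create(
      index="products-vectors",
      body={
          "settings": {"index": {"knn": True}},
          "mappings": {
              "properties": {
                  "embedding": {
                      "type": "knn_vector",
                      "dimension": 1024,
                      "mode": "on_disk",  # binary quantization + rescoring
                  }
              }
          },
      },
  )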

Read more


Introducing Binary Embeddings for Titan Text Embeddings model in Amazon Bedrock

Amazon Titan Text Embeddings V2 now supports Binary Embeddings. With Binary Embeddings, customers can reduce the storage cost for their Retrieval Augmented Generation (RAG) applications while maintaining similar accuracy of regular embeddings.

The Amazon Titan Text Embeddings model generates semantic representations of documents, paragraphs, and sentences as 1,024- (default), 512-, or 256-dimensional vectors. With Binary Embeddings, Titan Text Embeddings V2 represents data as binary vectors, with each dimension encoded as a single binary digit (0 or 1). This binary representation converts high-dimensional data into a more efficient format for storage in Amazon OpenSearch Serverless within Bedrock Knowledge Bases, enabling cost-effective RAG applications.
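
A short Boto3 sketch requesting a binary embedding from Titan Text Embeddings V2; the embeddingTypes request field and embeddingsByType response field follow the model's documented schema, but verify them for your SDK version:

  import json

  import boto3

  bedrock = boto3.client("bedrock-runtime")

  response = bedrock.invoke_model(
      modelId="amazon.titan-embed-text-v2:0",
      body=json.dumps({
          "inputText": "Binary embeddings cut vector storage costs.",
          "dimensions": 1024,
          "embeddingTypes": ["binary"],
      }),
  )

  payload = json.loads(response["body"].read())
  binary_vector = payload["embeddingsByType"]["binary"]  # list of 0/1 values
  print(len(binary_vector), binary_vector[:16])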

Binary Embeddings is supported in Titan Text Embeddings V2, Amazon OpenSearch Serverless and Amazon Bedrock Knowledge Bases in all regions where Amazon Titan Text Embeddings V2 is supported. To learn more, visit the documentation for Binary Embeddings.

Read more


AWS Amplify launches the full-stack AI kit for Amazon Bedrock

Today, AWS announces the general availability of the AWS Amplify AI kit for Amazon Bedrock, the quickest way for fullstack developers to build web apps with AI capabilities such as chat, conversational search, and summarization. The Amplify AI kit allows developers to easily leverage their data to get customized responses from Amazon Bedrock AI models. The Amplify AI kit allows anyone with knowledge of JavaScript or TypeScript, and web frameworks like React or Next.js, to add AI experiences to their apps, without any prior machine learning expertise.

The AI kit offers the following capabilities:

  • A pre-built, fully customizable <AIConversation> React UI component that offers a real-time, streaming chat experience along with features like UI responses instead of plain text, chat history, and resumable conversations.
  • A type-safe client that provides secure server-side access to Amazon Bedrock.
  • Secure, built-in capabilities to share user context (e.g. data the user can access) with Amazon Bedrock models.
  • Tool definitions with additional context that the models can invoke.
  • A fullstack TypeScript developer experience layered on Amplify Gen 2 and AWS AppSync.


To get started with the AI kit, see our launch blog.

Read more


Amazon Q generative SQL in Amazon Redshift Query Editor now available in additional AWS regions

Amazon Q generative SQL in Amazon Redshift Query Editor is now available in the AWS South America (Sao Paulo), Europe (London), and Canada (Central) Regions. Amazon Q generative SQL is available in Amazon Redshift Query Editor, an out-of-the-box web-based SQL editor for Amazon Redshift, to simplify SQL query authoring and increase your productivity by allowing you to express SQL queries in natural language and receive SQL code recommendations. Furthermore, it allows you to get insights faster without extensive knowledge of your organization’s complex Amazon Redshift database metadata.

Amazon Q generative SQL uses generative artificial intelligence (AI) to analyze user intent, SQL query patterns, and schema metadata to identify common SQL query patterns directly within Amazon Redshift, accelerating the SQL query authoring process for users and reducing the time required to derive actionable data insights. Amazon Q generative SQL provides a conversational interface where users can submit SQL queries in natural language, within the scope of their current data permissions. For example, when you submit a question such as 'Find total revenue by region,' Amazon Q generative SQL recognizes and suggests the appropriate SQL code for this frequent query pattern by joining multiple Amazon Redshift tables, saving time and decreasing the likelihood of errors. You can either accept the query or enhance your prior query by asking additional questions.

To learn more about pricing, visit the Amazon Q Developer pricing page. See the documentation to get started.
 

Read more


AWS App Studio is now generally available

AWS App Studio, a generative AI–powered app-building service that uses natural language to build enterprise-grade applications, is now generally available. App Studio helps technical professionals (such as IT project managers, data engineers, enterprise architects, and solution architects) build intelligent, secure, and scalable applications without requiring deep software development skills. App Studio handles deployments, operations, and maintenance, allowing users to focus on solving business challenges and boosting productivity.

App Studio is the fastest and easiest way to build enterprise-grade applications. Getting started is simple. Users describe the application they need in natural language, and App Studio’s generative AI–powered assistant creates an application with a multipage UI, a data model, and business logic. Builders can easily modify applications using natural language or with App Studio’s visual canvas. They can also enhance their applications with generative AI using built-in components to generate content, summarize information, and analyze files. Applications can connect to existing data using built-in connectors for AWS services (such as Amazon Aurora, Amazon DynamoDB, and Amazon S3) and Salesforce, as well as hundreds of third-party services (such as HubSpot, Jira, Twilio, and Zendesk) using an API connector. Users can customize the look and feel of their applications to align with brand guidelines by selecting their logo and company color palette. With App Studio it’s free to build—you only pay for the time employees spend using the published applications, saving up to 80% compared to comparable offerings.

App Studio is generally available in the following AWS Regions: US West (Oregon) and Europe (Ireland).

To learn more and get started, visit AWS App Studio, review the documentation, and read the announcement.

Read more


Amazon SageMaker Notebook Instances now support Trainium1- and Inferentia2-based instances

We are pleased to announce the general availability of Trainium1- and Inferentia2-based EC2 instances on SageMaker Notebook Instances.

Amazon EC2 Trn1 instances, powered by AWS Trainium chips, and Inf2 instances, powered by AWS Inferentia chips, are purpose-built for high-performance deep learning training and inference, respectively. Trn1 instances offer cost savings over other comparable Amazon EC2 instances for training 100B+ parameter generative AI models like large language models (LLMs) and latent diffusion. Inf2 instances deliver low-cost, high-performance inference for generative AI including LLMs and vision transformers. You can use Trn1 and Inf2 instances across a broad set of applications, such as text summarization, code generation, question answering, image and video generation, recommendation, and fraud detection.
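
Provisioning a notebook instance on these chips is the same CreateNotebookInstance call with a Trainium or Inferentia2 instance type. A sketch with Boto3 follows, where the instance type string and role ARN are placeholders to verify against the developer guide:

  import boto3

  sm = boto3.client("sagemaker")

  sm.create_notebook_instance(
      NotebookInstanceName="trainium-notebook",
      InstanceType="ml.trn1.2xlarge",  # assumed type; use ml.inf2.* for Inferentia2
      RoleArn="arn:aws:iam::111122223333:role/SageMakerNotebookRole",
      VolumeSizeInGB=100,
  )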

Amazon EC2 Trn1 instances are available for SageMaker Notebook Instances (NBI) in the AWS US East (N. Virginia and Ohio) and US West (Oregon) regions. Amazon EC2 Trn1n instances are available for SageMaker NBI in AWS US East (N. Virginia and Ohio). Amazon EC2 Inf2 instances are available for SageMaker NBI in AWS US West (Oregon), AWS US East (N. Virginia and Ohio), AWS EU (Ireland), AWS EU (Frankfurt), AWS Asia Pacific (Tokyo), AWS Asia Pacific (Sydney), AWS Asia Pacific (Mumbai), AWS EU (London), AWS Asia Pacific (Singapore), AWS EU (Stockholm), AWS EU (Paris), and AWS South America (São Paulo).

Visit the developer guide for instructions on setting up and using SageMaker Notebook Instances.
 

Read more


Amazon SageMaker now provides a new setup experience for Amazon DataZone projects

Amazon SageMaker now provides a new setup experience for Amazon DataZone projects, making it easier for customers to govern access to data and machine learning (ML) assets. With this capability, administrators can now set up Amazon DataZone projects by importing their existing authorized users, security configurations, and policies from Amazon SageMaker domains.

Today, Amazon SageMaker customers use domains to organize lists of authorized users and a variety of security, application, policy, and Amazon Virtual Private Cloud configurations. With this launch, administrators can now accelerate the process of setting up governance for data and ML assets in Amazon SageMaker. They can import users and configurations from existing SageMaker domains to Amazon DataZone projects, mapping SageMaker users to corresponding Amazon DataZone project members. This enables project members to search, discover, and consume ML and data assets within Amazon SageMaker capabilities such as Studio, Canvas, and notebooks. Project members can also publish these assets from Amazon SageMaker to the DataZone business catalog, enabling other project members to discover and request access to them.

This capability is available in all Amazon Web Services regions where Amazon SageMaker and Amazon DataZone are currently available. To get started, see the Amazon SageMaker administrator guide.

Read more


Three new Long-Form Voices

The Amazon Polly Long-Form engine now introduces two voices in Spanish and one in US English.

Amazon Polly is a service that turns text into lifelike speech, allowing our customers to build speech-enabled products matching their business needs. Today, we add three new long-form voices to our premium Polly Text-to-Speech (TTS) line of products that we offer for synthesizing speech for longer content, such as articles, stories, or training materials.

The male-sounding US English voice Patrick, female-sounding Spanish voice Alba, and male-sounding Spanish voice Raúl can now read long texts, such as blogs, articles, or learning materials. We trained them using cutting-edge technology that uses semantic cues to modify the voice's speaking style depending on the context. The result is natural-sounding, expressive voices that not only give our customers the ability to synthesize content in human-like Spanish and English, but also expand their use cases to long-content reading.

Patrick, Alba, and Raúl long-form voices are accessible in the US East (N. Virginia) region and complement the other long-form voices that are already available for developing speech products for a variety of use cases.

To hear how Polly voices sound, go to Amazon Polly Features. For more details on the Polly offerings and use, please read the Amazon Polly documentation and visit our pricing page.
 

Read more


SageMaker Model Registry now supports model lineage to improve model governance

Amazon SageMaker Model Registry now supports tracking machine learning (ML) model lineage, enabling you to automatically capture and retain information about the steps of an ML workflow, from data preparation and training to model registration and deployment.

Customers use Amazon SageMaker Model Registry as a purpose-built metadata store to manage the entire lifecycle of ML models. With this launch, data scientists and ML engineers can now easily capture and view the model lineage details such as datasets, training jobs, and deployment endpoints in Model Registry. When they register a model, Model Registry begins tracking the lineage of the model from development to deployment. This creates an audit trail that enables traceability and reproducibility, providing visibility across the model lifecycle to improve model governance.

This capability is available in all AWS regions where Amazon SageMaker Model Registry is currently available, except the GovCloud regions. To learn more, see View Model Lineage Details in Amazon SageMaker Studio.
 

Read more


Amazon EC2 Capacity Blocks expands to new regions

Today, Amazon Web Services announces that Amazon Elastic Compute Cloud (Amazon EC2) Capacity Blocks for ML is available for P5 instances in two new regions: US West (Oregon) and Asia Pacific (Tokyo). You can use EC2 Capacity Blocks to reserve highly sought-after GPU instances in Amazon EC2 UltraClusters for a future date for the amount of time that you need to run your machine learning (ML) workloads.

EC2 Capacity Blocks enable you to reserve GPU capacity up to eight weeks in advance for durations up to 28 days in cluster sizes of one to 64 instances (512 GPUs), giving you the flexibility to run a broad range of ML workloads. They are ideal for short duration pre-training and fine-tuning workloads, rapid prototyping, and for handling surges in inference demand. EC2 Capacity Blocks deliver low-latency, high-throughput connectivity through colocation in Amazon EC2 UltraClusters.
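
For example, you can search for and purchase an offering programmatically with Boto3; the dates, sizes, and instance type below are placeholders:

  import boto3

  ec2 = boto3.client("ec2")

  # Find a 4-instance P5 block reserved for 7 days (duration is in hours).
  offerings = ec2.describe_capacity_block_offerings(
      InstanceType="p5.48xlarge",
      InstanceCount=4,
      StartDateRange="2024-12-01T00:00:00Z",
      EndDateRange="2024-12-15T00:00:00Z",
      CapacityDurationHours=168,
  )

  # Purchase the first matching offering.
  offering = offerings["CapacityBlockOfferings"][0]
  purchase = ec2.purchase_capacity_block(
      CapacityBlockOfferingId=offering["CapacityBlockOfferingId"],
      InstancePlatform="Linux/UNIX",
  )
  print(purchase["CapacityReservation"]["CapacityReservationId"])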

With this expansion, EC2 Capacity Blocks for ML are available for the following instance types and AWS Regions: P5 instances in US East (N. Virginia), US East (Ohio), US West (Oregon), and Asia Pacific (Tokyo); P5e instances in US East (Ohio); P4d instances in US East (Ohio) and US West (Oregon); Trn1 instances in Asia Pacific (Melbourne).

To get started, use the AWS Management Console, AWS Command Line Interface (AWS CLI), or AWS SDKs. To learn more, see the Amazon EC2 Capacity Blocks for ML User Guide.

Read more


Amazon SageMaker Model Registry now supports defining machine learning model lifecycle stages

Today, we are excited to announce that Amazon SageMaker Model Registry now supports custom machine learning (ML) model lifecycle stages. This capability further improves model governance by enabling data scientists and ML engineers to define and control the progression of their models across various stages, from development to production.

Customers use Amazon SageMaker Model Registry as a purpose-built metadata store to manage the entire lifecycle of ML models. With this launch, data scientists and ML engineers can now define custom stages such as development, testing, and production for ML models in the model registry. This makes it easy to track and manage models as they transition across different stages in the model lifecycle from training to inference. They can also track stage approval status such as Pending Approval, Approved, and Rejected to check when the model is ready to move to the next stage. These custom stages and approval status help data scientists and ML engineers define and enforce model approval workflows, ensuring that models meet specific criteria before advancing to the next stage. By implementing these custom stages and approval processes, customers can standardize their model governance practices across their organization, maintain better oversight of model progression, and ensure that only approved models reach production environments.
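
A hedged Boto3 sketch of moving a model version into a custom stage follows; the ModelLifeCycle field names are assumptions based on the launch and should be verified in the API reference:

  import boto3

  sm = boto3.client("sagemaker")

  # Move a registered model version into a custom stage with an approval status.
  # (ModelLifeCycle field names are assumptions to verify in the API reference.)
  sm.update_model_package(
      ModelPackageArn="arn:aws:sagemaker:us-east-1:111122223333:"
                      "model-package/my-models/3",
      ModelLifeCycle={
          "Stage": "Staging",
          "StageStatus": "PendingApproval",
          "StageDescription": "Awaiting QA sign-off before production.",
      },
  )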

This capability is available in all AWS regions where Amazon SageMaker Model Registry is currently available except GovCloud regions. To learn more, see Staging Construct for your Model Lifecycle.

Read more


Amazon Q Developer Pro tier adds enhanced administrator capabilities to view user activity

The Amazon Q Developer Pro tier now offers administrators greater visibility into the activity from subscribed users. Amazon Q Developer Pro tier administrators can now view user last activity information and enable daily user activity reports.

Organization administrators can now view the last activity information for each user's subscription and applications within that subscription, enabling better monitoring of usage. This allows inactive subscriptions to be easily identified through filtering and sorting across all associated applications. Member account administrators can view the last active date specific to the users, applications, and accounts they manage. The last active date is only shown for activity on or after October 30, 2024.

Additionally, member account administrators can enable detailed per-user activity reports in the Amazon Q Developer settings by specifying an Amazon S3 bucket where the reports should be published. When enabled, you will receive a daily report in Amazon S3 with detailed user activity metrics, such as the number of messages sent and the number of AI-generated lines of code.

To learn more about Amazon Q Developer Pro tier subscription management features, visit the AWS Console.

Read more


Amazon Bedrock Prompt Management is now generally available

Earlier this year, we launched Amazon Bedrock Prompt Management in preview to simplify the creation, testing, versioning, and sharing of prompts. Today, we’re announcing its general availability and adding several key new features. First, we are introducing the ability to easily run prompts stored in your AWS account. The Amazon Bedrock Runtime Converse and InvokeModel APIs now support executing a prompt using a Prompt identifier. Next, while creating and storing the prompts, you can now specify a system prompt, multiple user/assistant messages, and tool configuration in addition to the model choice and inference configuration available in preview — this enables advanced prompt engineers to leverage function calling capabilities provided by certain model families such as the Anthropic Claude models.

You can now store prompts for Bedrock Agents in addition to Foundation Models, and we have also introduced the ability to compare two versions of a prompt to quickly review the differences between versions. Finally, we now support custom metadata to be stored with the prompts via the Bedrock SDK, enabling you to store metadata such as author, team, and department to meet your enterprise prompt management needs.
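
For instance, running a stored prompt now only requires passing its identifier where a model ID normally goes. A minimal Boto3 sketch with a placeholder prompt ARN and prompt variable follows; the pattern of passing the prompt ARN as the model ID reflects the launch documentation:

  import boto3

  bedrock = boto3.client("bedrock-runtime")

  # Execute a managed prompt directly: pass the prompt ARN as the model ID.
  response = bedrock.converse(
      modelId="arn:aws:bedrock:us-east-1:111122223333:prompt/PROMPT12345:1",
      promptVariables={"topic": {"text": "serverless architectures"}},
  )
  print(response["output"]["message"]["content"][0]["text"])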

Amazon Bedrock is a fully managed service that offers a choice of high-performing large language models (LLMs) and other foundation models (FMs) from leading AI companies like AI21 Labs, Anthropic, Cohere, Meta, Mistral AI, Stability AI, as well as Amazon via a single API.

Learn more here and in our documentation. Read our blog here.
 

Read more


Amazon Bedrock now available in the Europe (Zurich) Region

Beginning today, customers can use Amazon Bedrock in the Europe (Zurich) Region to easily build and scale generative AI applications using a variety of foundation models (FMs) and other powerful tools.

Amazon Bedrock is a fully managed service that offers a choice of high-performing large language models (LLMs) and other FMs from leading AI companies like AI21 Labs, Anthropic, Cohere, Meta, Mistral AI, Stability AI, as well as Amazon via a single API. Amazon Bedrock also provides a broad set of capabilities customers need to build generative AI applications with security, privacy, and responsible AI built in. These capabilities help you build tailored applications for multiple use cases across different industries, helping organizations unlock sustained growth from generative AI while ensuring customer trust and data governance.

To get started, visit the Amazon Bedrock page and see the Amazon Bedrock documentation for more details.

Read more


AWS Clean Rooms ML supports privacy-enhanced model training and inference

Today, AWS announces AWS Clean Rooms ML custom modeling, which enables organizations and their partners to generate predictive insights by running their own machine learning (ML) models on their data in a clean rooms collaboration. With this launch, companies and their partners can train ML models and run inference on collective datasets without having to share sensitive data or proprietary models.

For example, advertisers can bring their proprietary model and data into a Clean Rooms collaboration, and invite publishers to join their data to train and deploy a custom ML model that helps them increase campaign effectiveness—all without sharing their custom model and data with one another. Similarly, financial institutions can use historical transaction records to train a custom ML model, and invite partners into a Clean Rooms collaboration to detect potential fraudulent transactions, without having to share underlying data and models among collaborators. With AWS Clean Rooms ML custom modeling, you can gain valuable insights with your partners while applying privacy-enhancing controls when running model training and inference by specifying the datasets to be used in a Clean Rooms environment. This allows you and your partners to approve the datasets used, and removes the need to share sensitive data or proprietary models with one another. AWS Clean Rooms ML also offers an AWS-authored lookalike modeling capability that can help you improve lookalike segment accuracy by up to 36% compared to industry baselines.

AWS Clean Rooms ML is available as a capability of AWS Clean Rooms in these AWS Regions. To learn more, visit AWS Clean Rooms ML.

Read more


Six new synthetic generative voices for Amazon Polly

Today, we are excited to announce the general availability of six highly expressive Amazon Polly generative voices in English, French, Spanish, and German.

Amazon Polly is a managed service that turns text into lifelike speech, allowing you to create applications that talk and to build speech-enabled products depending on your business needs.

The generative engine is Amazon Polly’s most advanced text-to-speech (TTS) model. Today, we release six new synthetic female-sounding generative voices: Ayanda (South African English), Léa (French), Lucia (European Spanish), Lupe (American Spanish), Mía (Mexican Spanish), and Vicki (German). This launch increases the number of generative Polly voices from seven to thirteen and expands our footprint from three to nine locales. Leveraging the same generative AI technology that powered the English generative voices, Polly now supports German, Spanish, and French to provide our customers with more options for highly expressive and engaging voices.

Ayanda, Léa, Lucia, Lupe, Mía, and Vicki generative voices are accessible in the US East (N. Virginia), Europe (Frankfurt), and US West (Oregon) regions and complement the other types of voices that are already available in the same regions.

To hear how Polly voices sound, go to Amazon Polly Features. For more details on the Polly offerings and use, please read the Amazon Polly documentation and visit our pricing page.

Read more


Amazon MSK now supports vector embedding generation using Amazon Bedrock

Amazon Managed Streaming for Apache Kafka (Amazon MSK) now supports new Amazon Managed Service for Apache Flink blueprints to generate vector embeddings using Amazon Bedrock, making it easier to build real-time AI applications powered by up-to-date, contextual data. This blueprint simplifies the process of incorporating the latest data from your Amazon MSK streaming pipelines into your generative AI models, eliminating the need to write custom code to integrate real-time data streams, vector databases, and large language models.

With just a few clicks, customers can configure the blueprint to continuously generate vector embeddings using Bedrock's embedding models and then index those embeddings in Amazon OpenSearch Service for their Amazon MSK data streams. This allows customers to combine the context from real-time data with Bedrock's powerful large language models to generate accurate, up-to-date AI responses without writing custom code. Customers can also choose to improve the efficiency of data retrieval using built-in support for data chunking techniques from LangChain, an open-source library, supporting high-quality inputs for model ingestion. The blueprint manages the data integration and processing between MSK, the chosen embedding model, and the OpenSearch vector store, allowing customers to focus on building their AI applications rather than managing the underlying integration.

The real-time vector embedding blueprint is generally available in the US East (N. Virginia), US East (Ohio), US West (Oregon), Europe (Paris), Europe (London), Europe (Ireland), and South America (Sao Paulo) AWS Regions. Visit the Amazon MSK documentation for the list of additional Regions that will be supported over the next few weeks. To learn more about how to use the blueprint to generate real-time vector embeddings from your Amazon MSK data, visit the AWS blog.

Read more


Anthropic’s Claude 3.5 Haiku model now available in Amazon Bedrock

Anthropic’s Claude 3.5 Haiku model is now available in Amazon Bedrock. Claude 3.5 Haiku is the next generation of Anthropic’s fastest model, combining rapid response times with improved reasoning capabilities, making it ideal for tasks that require both speed and intelligence. Claude 3.5 Haiku improves across every skill set and surpasses even Claude 3 Opus, the largest model in Anthropic’s previous generation, on many intelligence benchmarks—including coding.

With improved instruction following and more accurate tool use, Claude 3.5 Haiku is well suited for entry-level user-facing products, specialized sub-agent tasks, and generating personalized experiences from huge volumes of data—like purchase history, pricing, or inventory data. Claude 3.5 Haiku can help efficiently process and categorize large volumes of unstructured data in finance, healthcare, research, and other industries. Claude 3.5 Haiku can also help with use cases such as fast and accurate code suggestions, highly interactive customer service chatbots that require rapid response times, e-commerce solutions, and educational platforms. The new Claude 3.5 Haiku is currently available as a text-only model with support for image inputs to follow.
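
As a minimal sketch, assuming the cross-region inference profile ID shown below (verify the exact identifier in the Amazon Bedrock console), the model can be invoked with the Converse API:

    import boto3

    # Invoke Claude 3.5 Haiku through the Converse API; the inference
    # profile ID below is an assumption -- confirm it in the Bedrock console.
    bedrock = boto3.client("bedrock-runtime", region_name="us-west-2")
    resp = bedrock.converse(
        modelId="us.anthropic.claude-3-5-haiku-20241022-v1:0",
        messages=[
            {"role": "user", "content": [{"text": "Summarize this ticket: app crashes on login."}]}
        ],
        inferenceConfig={"maxTokens": 512},
    )
    print(resp["output"]["message"]["content"][0]["text"])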

The Claude 3.5 Haiku model is now available in Amazon Bedrock in the US West (Oregon) Region and in the US East (N. Virginia) Region via cross-region inference. To learn more, read the AWS News launch blog, Claude in Amazon Bedrock product page, and documentation. To get started with Claude, visit the Amazon Bedrock console.

Read more


Announcing enhanced support for medical imaging data with lossy compression in AWS HealthImaging

Today, HealthImaging launched enhancements that better handle lossy compressed medical imaging data. Some medical images, such as whole slide microscopy, ultrasound, and cardiology, utilize lossy image compression. With this feature launch, HealthImaging better supports lossy encoded data, and helps lower storage costs.

The HealthImaging import process encodes most image frames (pixel data) in the High-Throughput JPEG 2000 (HTJ2K) lossless format. With this launch, JPEG Baseline Lossy 8-bit, JPEG 2000 lossy, and High-Throughput JPEG 2000 lossy image compression will be persisted without transcoding. This means HealthImaging stores your lossy encoded data more efficiently, thereby reducing your storage costs.

With this launch, HealthImaging has also enhanced support for DICOM binary segmentation objects. Now image frames with Segmentation Type BINARY will be returned in the Explicit Little Endian (ELE) transfer syntax, as most applications expect.

AWS HealthImaging is a HIPAA-eligible service that empowers healthcare providers and their software partners to store, analyze, and share medical images at petabyte scale. With AWS HealthImaging, you can run your medical imaging applications at scale from a single, authoritative copy of each medical image in the cloud, while reducing total cost of ownership. To learn more about how HealthImaging import jobs work, see the AWS HealthImaging Developer Guide.
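
For reference, a minimal sketch of starting an import job with the AWS SDK for Python; the datastore ID, role ARN, and S3 URIs below are placeholders:

    import boto3

    # Start a DICOM import job; with this launch, lossy JPEG, JPEG 2000,
    # and HTJ2K pixel data is persisted without transcoding. All
    # identifiers below are placeholders.
    health_imaging = boto3.client("medical-imaging", region_name="us-east-1")
    job = health_imaging.start_dicom_import_job(
        jobName="lossy-ultrasound-import",
        datastoreId="12345678901234567890123456789012",
        dataAccessRoleArn="arn:aws:iam::111122223333:role/HealthImagingImportRole",
        inputS3Uri="s3://my-imaging-bucket/input/",
        outputS3Uri="s3://my-imaging-bucket/output/",
    )
    print(job["jobId"])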

AWS HealthImaging is generally available in the following AWS Regions: US East (N. Virginia), US West (Oregon), Asia Pacific (Sydney), and Europe (Ireland).
 

Read more


Amazon Q Business adds simplified setup and new web app experience

Amazon Q Business is a fully managed, generative AI-powered assistant that can answer questions, provide summaries, generate content, and securely complete tasks based on data and information in your enterprise systems. Amazon Q Business now offers simplified onboarding that helps administrators deliver a secure AI assistant quickly, and a web app experience that lets end users start using generative AI for their work immediately.

With this launch, administrators can provide end users with the web app even before indexing their internal corporate knowledge for use with Amazon Q Business. This allows end users to ask questions based on local files or world knowledge right away, providing immediate value for their jobs. As administrators index corporate data sources like wikis, documentation, and other information into Amazon Q Business, end users gain even richer insights from their generative AI assistant.

The new setup and web experience is available in all AWS Regions where Amazon Q Business is available.

You can get started with the new express setup and web experience in the Amazon Q Business console. To explore Amazon Q Business, visit the Amazon Q homepage.

Read more


Amazon SageMaker Notebook Instances now support JupyterLab 4 notebooks

We're excited to announce the availability of JupyterLab 4 on Amazon SageMaker Notebook Instances, providing you with a powerful and modern interactive development environment (IDE) for your data science and machine learning (ML) workflows.

With this update, you can now leverage the latest features and improvements in JupyterLab 4, including faster performance and notebook windowing, making working with large notebooks much more efficient. The Extension Manager now includes both prebuilt Python extensions and extensions from PyPI, making it easier to discover and install the tools you need. The Search and Replace functionality has been improved with new features, including highlighting matches in rendered Markdown cells, searching in the current selection, and regular expression support for replacements. By providing JupyterLab 4 on Amazon SageMaker Notebook Instances, we're empowering you with a cutting-edge development environment to boost your productivity and efficiency when building ML models and exploring data.
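
A minimal sketch of creating a notebook instance with the AWS SDK for Python; the PlatformIdentifier value that selects JupyterLab 4 is an assumption here, so confirm the exact string in the developer guide:

    import boto3

    # Create a notebook instance; PlatformIdentifier selects the notebook
    # platform (the value below is an assumed JupyterLab 4 identifier --
    # check the SageMaker developer guide for the exact string).
    sagemaker = boto3.client("sagemaker")
    sagemaker.create_notebook_instance(
        NotebookInstanceName="jl4-notebook",
        InstanceType="ml.t3.medium",
        RoleArn="arn:aws:iam::111122223333:role/SageMakerExecutionRole",
        PlatformIdentifier="notebook-al2-v3",  # assumed; verify in the docs
    )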

JupyterLab 4 notebooks are available in all commercial AWS Regions where SageMaker Notebook Instances are available. Visit the developer guides for instructions on setting up and using SageMaker notebook instances.

Read more


Amazon Bedrock announces support for cost allocation tags on inference profiles

Amazon Bedrock now enables customers to allocate and track on-demand foundation model usage. Customers can categorize their generative AI inference costs by department, team, or application using AWS cost allocation tags. You can leverage this feature by creating an application inference profile and tagging it.
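
A hedged sketch of that flow with the AWS SDK for Python, assuming the CreateInferenceProfile request shape shown below (the ARNs and tag values are placeholders):

    import boto3

    # Create an application inference profile that wraps a foundation
    # model, tagging it so on-demand usage can be allocated to a team.
    bedrock = boto3.client("bedrock", region_name="us-east-1")
    profile = bedrock.create_inference_profile(
        inferenceProfileName="claims-team-haiku",
        modelSource={
            "copyFrom": "arn:aws:bedrock:us-east-1::foundation-model/anthropic.claude-3-haiku-20240307-v1:0"
        },
        tags=[{"key": "CostCenter", "value": "claims-processing"}],
    )
    # Invoke through this profile's ARN so costs roll up to the tag.
    print(profile["inferenceProfileArn"])

After activating the CostCenter tag as a cost allocation tag in the Billing console, charges made through the profile can be filtered by that tag in Cost Explorer.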

Amazon Bedrock is a fully managed service that offers a choice of high-performing foundation models from leading AI companies via a single API. Amazon Bedrock also provides a broad set of capabilities customers need to build generative AI applications with security, privacy, and responsible AI capabilities built in. These capabilities help you build tailored applications for multiple use cases across different industries, helping organizations unlock sustained growth from generative AI while ensuring customer trust and data governance.

For more information about Amazon Bedrock, visit the Amazon Bedrock page and see the Amazon Bedrock documentation for more details. For more information about the AWS Regions where application inference profiles are available, see this page.

Read more


Fine-tuning for Anthropic’s Claude 3 Haiku in Amazon Bedrock is now generally available

Fine-tuning for Anthropic's Claude 3 Haiku model in Amazon Bedrock is now generally available. Amazon Bedrock is the only fully managed service that provides you with the ability to fine-tune Claude models. Claude 3 Haiku is Anthropic’s most compact model, and is one of the most affordable and fastest options on the market for its intelligence category, according to Anthropic. By providing your own task-specific training dataset, you can fine-tune and customize Claude 3 Haiku to boost model accuracy, quality, and consistency to further tailor generative AI for your business.

Fine-tuning allows Claude 3 Haiku to excel in areas crucial to your business compared to more general models by encoding company and domain knowledge. By fine-tuning Claude 3 Haiku within your secure AWS environment and adapting its knowledge to your exact business requirements, you can generate higher-quality results and create unique user experiences that reflect your company’s proprietary information, brand, products, and more. You can also enhance performance for domain-specific actions such as classification, interactions with custom APIs, or industry-specific data interpretation. Amazon Bedrock makes a separate copy of the base foundation model that is accessible only by you and trains this private copy of the model.
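
A minimal sketch of starting a fine-tuning job with the AWS SDK for Python; the role ARN, S3 locations, and hyperparameter values are illustrative placeholders:

    import boto3

    # Start a model customization (fine-tuning) job for Claude 3 Haiku.
    # The role, buckets, and hyperparameters below are placeholders.
    bedrock = boto3.client("bedrock", region_name="us-west-2")
    bedrock.create_model_customization_job(
        jobName="haiku-support-tuning",
        customModelName="support-claude-3-haiku",
        roleArn="arn:aws:iam::111122223333:role/BedrockCustomizationRole",
        baseModelIdentifier="anthropic.claude-3-haiku-20240307-v1:0",
        customizationType="FINE_TUNING",
        trainingDataConfig={"s3Uri": "s3://my-tuning-bucket/train.jsonl"},
        outputDataConfig={"s3Uri": "s3://my-tuning-bucket/output/"},
        hyperParameters={"epochCount": "2"},
    )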

Fine-tuning for Anthropic's Claude 3 Haiku in Amazon Bedrock is now generally available in the US West (Oregon) AWS Region. To learn more, read the launch blog, technical blog, and documentation. To get started with Claude 3 in Amazon Bedrock, visit the Amazon Bedrock console.

Read more


aws-account-billing

AWS announces Invoice Configuration

Today, AWS announces the general availability of Invoice Configuration, which enables you to customize your invoicing experience and receive separate AWS invoices based on your organizational structure. You can group AWS accounts according to your internal business entities, such as legal entities, subsidiaries, or cost centers, and receive a separate AWS invoice for each entity, all within the same AWS Organization. A separate invoice per business entity lets you track invoices independently, enabling faster invoice processing by removing the manual work of splitting a single AWS invoice across business entities.

With Invoice Configuration, you can create Invoice Units, groups of member accounts that best represent your business entities, and then designate a member or management account as the invoice receiver for each business entity. You can optionally associate a purchase order with each Invoice Unit and visualize charges by Invoice Unit using Cost Categories in Cost Explorer and the Cost and Usage Report.

You can use Invoice Configuration through the AWS Billing and Cost Management console, or access it through the AWS SDKs or AWS CLI to programmatically create and manage Invoice Units.
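
As a hedged example of the programmatic path, assuming the Invoicing API's CreateInvoiceUnit operation takes the shape shown below (the account IDs are placeholders; verify parameter names in the API Reference):

    import boto3

    # Create an Invoice Unit for one legal entity; the receiver account
    # gets a separate invoice covering the listed member accounts.
    invoicing = boto3.client("invoicing", region_name="us-east-1")
    invoicing.create_invoice_unit(
        Name="emea-subsidiary",
        InvoiceReceiver="111122223333",  # member or management account
        Description="Invoice unit for the EMEA legal entity",
        Rule={"LinkedAccounts": ["444455556666", "777788889999"]},
    )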

Invoice Configuration is available in all public AWS Regions, excluding the AWS GovCloud (US) Regions and the China (Beijing) and China (Ningxia) Regions. To learn more, visit the product page, blog post, or review the User Guide and API Reference.
 

Read more


AWS delivers enhanced root cause insights to help explain cost anomalies

Today, AWS announces new enhanced root cause analysis capabilities for AWS Cost Anomaly Detection, empowering you to better pinpoint and remediate underlying factors driving unplanned cost increases. By creating anomaly monitors, you can analyze spend across services, member accounts, Cost Allocation Tags, and Cost Categories. Once a cost anomaly is detected, Cost Anomaly Detection now analyzes and ranks all possible combinations of services, accounts, regions, and usage types by cost impact, surfacing up to the top 10 root causes with their corresponding cost contributions.

With more information on the key drivers behind an anomaly, you can better identify the specific factors that contributed the most to a cost spike, such as which combination of linked account, region, and usage type led to increased spend in a particular service. With the top root causes ranked by their cost impact, you can more easily take fast, targeted actions to address these key issues before unplanned costs accrue further.

The enhanced root cause analysis is available in all AWS Regions, except the AWS GovCloud (US) Regions and the China Regions. To learn more about this new feature, AWS Cost Anomaly Detection, and how to reduce your risk of spend surprises, visit the AWS Cost Anomaly Detection product page, documentation, and launch blog.

 

Read more


AWS Billing and Cost Management Data Exports for FOCUS 1.0 is now generally available

Today, AWS announces the general availability (GA) of Data Exports for FOCUS 1.0, which has been in public preview since June 2024. FOCUS 1.0 is an open-source cloud cost and usage specification that provides standardization to simplify cloud financial management across multiple sources. Data Exports for FOCUS 1.0 enables customers to export their AWS cost and usage data with the FOCUS 1.0 schema to Amazon S3. The GA release of FOCUS 1.0 is a new table in Data Exports in which key specification conformance gaps present in the preview table have been addressed.

With Data Exports for FOCUS 1.0 (GA), customers receive their costs in four standardized columns: ListCost, ContractedCost, BilledCost, and EffectiveCost. It provides a consistent treatment of discounts and amortization of Savings Plans and Reserved Instances. The standardized schema of FOCUS ensures data can be reliably referenced across sources.

Data Exports for FOCUS 1.0 (GA) is available in the US East (N. Virginia) Region, but includes cost and usage data covering all AWS Regions, except AWS GovCloud (US) Regions and AWS China (Beijing and Ningxia) Regions.

Learn more about Data Exports for FOCUS 1.0 in the User Guide, product details page, and at the FOCUS project webpage. Get started by visiting the Data Exports page in the AWS Billing and Cost Management console and creating an export of the new GA table named “FOCUS 1.0 with AWS columns”. After creating a FOCUS 1.0 GA export, you will no longer need your preview export. You can view the specification conformance of the GA release here.
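
A hedged sketch of creating the export programmatically with the AWS SDK for Python (service name bcm-data-exports); the table name, query, and S3 settings below are assumptions to confirm against the Data Exports user guide:

    import boto3

    # Create a FOCUS 1.0 export delivered to S3; the query and output
    # settings are illustrative and should be checked against the docs.
    exports = boto3.client("bcm-data-exports", region_name="us-east-1")
    exports.create_export(
        Export={
            "Name": "focus-1-0-ga",
            "DataQuery": {"QueryStatement": "SELECT * FROM FOCUS_1_0"},
            "DestinationConfigurations": {
                "S3Destination": {
                    "S3Bucket": "my-focus-exports",
                    "S3Prefix": "focus",
                    "S3Region": "us-east-1",
                    "S3OutputConfigurations": {
                        "OutputType": "CUSTOM",
                        "Format": "PARQUET",
                        "Compression": "PARQUET",
                        "Overwrite": "OVERWRITE_REPORT",
                    },
                }
            },
            "RefreshCadence": {"Frequency": "SYNCHRONOUS"},
        }
    )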
 

Read more


AWS Billing and Cost Management announces Savings Plans Purchase Analyzer

Today, AWS announces Savings Plans Purchase Analyzer, a new AWS Billing and Cost Management feature that enables you to quickly estimate the cost, coverage, and utilization impact of your planned Savings Plan purchases, so you can make informed purchase decisions in just a few clicks.

Savings Plans Purchase Analyzer enables you to interactively model a wide range of Savings Plan purchase scenarios with customizable parameters, including commitment amounts, custom lookback periods, and the option to exclude expiring Savings Plans. You can compare estimated savings percentage, coverage, and utilization across different purchase scenarios, and evaluate the hourly impact of recommended or custom commitments for renewals or new purchases of Savings Plans.

Savings Plans Purchase Analyzer is available in all AWS Regions, except the AWS GovCloud (US) Regions and the China Regions.

To get started with Savings Plans Purchase Analyzer, visit the product details page and user guide.

Read more


AWS End User Messaging announces cost allocation tags for SMS

Today, AWS End User Messaging announces cost allocation tags for SMS resources, allowing you to track spend for each tag associated with a resource. AWS End User Messaging provides developers with a scalable and cost-effective messaging infrastructure without compromising the safety, security, or results of their communications.

You can now assign tags to each resource and summarize that resource's spend using cost allocation tags in the AWS Billing and Cost Management console.
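
For example, a minimal sketch with the AWS SDK for Python (the phone number ARN is a placeholder); once a resource is tagged, activate the tag as a cost allocation tag in the Billing console:

    import boto3

    # Tag a phone number resource so its SMS spend can be tracked with
    # cost allocation tags (the ARN below is a placeholder).
    sms = boto3.client("pinpoint-sms-voice-v2")
    sms.tag_resource(
        ResourceArn="arn:aws:sms-voice:us-east-1:111122223333:phone-number/phone-abc123",
        Tags=[{"Key": "Project", "Value": "order-alerts"}],
    )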

To learn more, visit the AWS End User Messaging SMS User Guide.
 

Read more


AWS SDK now supports the ListBillingViews API for AWS Billing Conductor users

Today, AWS announces the general availability of the ListBillingViews API in the AWS SDKs, enabling AWS Billing Conductor (ABC) users to create pro forma Cost and Usage Reports (CUR) programmatically.

Today, the CUR PutReportDefinition API requires a BillingViewArn (the Amazon Resource Name for a billing view) to populate the CUR with pro forma data. Prior to this launch, customers had to manually construct the BillingViewArn by retrieving the payer account and primary account IDs and assembling the string arn:aws:billing::payer-account-id:billingview/billing-group-primary-account-id. ABC users can now eliminate these manual steps to retrieve the BillingViewArn and automate the end-to-end CUR file configuration journey, based on each pro forma billing view available. As a result, the ListBillingViews API enables ABC users to simplify ABC onboarding and accelerate setting up their rebilling operations.
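
A hedged sketch of the automated flow with the AWS SDK for Python; the parameter shapes for the billing client and the report settings below are assumptions to verify against the API Reference:

    from datetime import datetime, timedelta

    import boto3

    # List pro forma billing views, then create one CUR per view.
    billing = boto3.client("billing", region_name="us-east-1")
    cur = boto3.client("cur", region_name="us-east-1")

    views = billing.list_billing_views(
        activeTimeRange={
            "activeAfterInclusive": datetime.utcnow() - timedelta(days=30),
            "activeBeforeInclusive": datetime.utcnow(),
        }
    )["billingViews"]

    for view in views:
        cur.put_report_definition(
            ReportDefinition={
                "ReportName": f"proforma-{view['name']}",
                "TimeUnit": "DAILY",
                "Format": "Parquet",
                "Compression": "Parquet",
                "AdditionalSchemaElements": [],
                "S3Bucket": "my-cur-bucket",
                "S3Prefix": "proforma",
                "S3Region": "us-east-1",
                "RefreshClosedReports": True,
                "ReportVersioning": "OVERWRITE_REPORT",
                "BillingViewArn": view["arn"],
            }
        )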

ListBillingViews API is available in all commercial AWS Regions, excluding the Amazon Web Services China (Beijing) Region, operated by Sinnet and Amazon Web Services China (Ningxia) Region, operated by NWCD.

To learn more about this feature integration, visit the AWS Billing Conductor product page, or review the API Reference.

Read more


Announcing financing program for AWS Marketplace purchases for select US customers

Today, AWS announces the availability of a new financing program supported by PNC Vendor Finance, enabling select customers in the United States (US) to finance AWS Marketplace software purchases directly from the AWS Billing and Cost Management console. For the first time, select US customers can apply for, utilize, and manage financing within the console for AWS Marketplace software purchases.

AWS Marketplace helps customers find, try, buy, and launch third-party software, while consolidating billing and management with AWS. With thousands of software products available in AWS Marketplace, this financing program enables you to buy the software you need to drive innovation. With financing amounts ranging from $10,000 to $100,000,000, subject to credit approval, you have more options to pay for your AWS Marketplace purchases. If approved, you can use financing for AWS Marketplace software purchases that have contracts of at least 12 months. Financing can be applied to multiple purchases from multiple AWS Marketplace sellers. This financing program gives you the flexibility to better manage your cash flow by spreading payments over time, while paying financing costs only on what you use.

This new financing program supported by PNC Vendor Finance is available in the AWS Billing and Cost Management console for select AWS Marketplace customers in the US, excluding NV, NC, ND, TN, & VT.

To learn more about financing options for AWS Marketplace purchases and details about the financing program supported by PNC Vendor Finance, visit the AWS Marketplace financing page.
 

Read more


aws-amplify

Storage Browser for Amazon S3 is now generally available

Amazon S3 is announcing the general availability of Storage Browser for S3, an open source component that you can add to your web applications to provide your end users with a simple interface for data stored in S3. With Storage Browser for S3, you can provide authorized end users, such as customers, partners, and employees, with access to easily browse, download, and upload data in S3 directly from your own applications. Storage Browser for S3 is available in the AWS Amplify React and JavaScript client libraries.

With the general availability of Storage Browser for S3, your end users can now search for their data based on file name and can copy and delete data they have access to. Additionally, Storage Browser for S3 now automatically calculates checksums of the data your end users upload and blocks requests that do not pass these durability checks.

We welcome your contributions and feedback on our roadmap, which outlines the plan for adding new capabilities to Storage Browser for S3. Storage Browser for S3 is backed by AWS Support, which means customers with AWS Business and Enterprise Support plans get 24/7 access to cloud support engineers. To learn more and get started, visit the AWS News Blog and the UI documentation.
 

Read more


AWS Amplify introduces passwordless authentication with Amazon Cognito

AWS Amplify is excited to announce support for Amazon Cognito's new passwordless authentication features, enabling developers to implement secure sign-in methods using SMS one-time passwords, email one-time passwords, and WebAuthn passkeys in their applications with Amplify client libraries for JavaScript, Swift, and Android. This update simplifies the implementation of passwordless authentication flows, addressing the growing demand for more secure and user-friendly login experiences while reducing the risks associated with traditional password-based systems.

This new capability enhances application security and user experience by eliminating the need for traditional passwords, reducing the risk of credential-based attacks while streamlining the login process. Passwordless authentication is ideal for organizations aiming to strengthen security and increase user adoption across various sectors, including e-commerce, finance, and healthcare. By removing the frustration of remembering complex passwords, this feature can significantly improve user engagement and simplify account management for both users and organizations.

The passwordless authentication feature is now available in all AWS regions where Amazon Cognito is supported, enabling developers worldwide to leverage this functionality in their applications.

To get started with passwordless authentication in AWS Amplify, visit the AWS Amplify documentation for detailed guides and examples.

Read more


AWS Amplify launches the full-stack AI kit for Amazon Bedrock

Today, AWS announces the general availability of the AWS Amplify AI kit for Amazon Bedrock, the quickest way for fullstack developers to build web apps with AI capabilities such as chat, conversational search, and summarization. The Amplify AI kit allows developers to easily leverage their data to get customized responses from Amazon Bedrock AI models. The Amplify AI kit allows anyone with knowledge of JavaScript or TypeScript, and web frameworks like React or Next.js, to add AI experiences to their apps, without any prior machine learning expertise.

The AI kit offers the following capabilities:

  • A pre-built, fully customizable <AIConversation> React UI component that offers a real-time, streaming chat experience along with features like UI responses instead of plain-text, chat history, and resumable conversations.
  • A type-safe client that provides secure server-side access to Amazon Bedrock.
  • Secure, built-in capabilities to share user context (e.g. data the user can access) with Amazon Bedrock models.
  • The ability to define tools with additional context that can be invoked by the models.
  • A fullstack TypeScript developer experience layered on Amplify Gen 2 and AWS AppSync.


To get started with the AI kit, see our launch blog.

Read more


aws-appconfig

AWS AppConfig supports automatic rollback safety from third-party alerts

AWS AppConfig has added support for third-party monitors to trigger automatic rollbacks when there are problems with updates to feature flags, experimental flags, or configuration data. Customers can now connect AWS AppConfig to third-party application performance monitoring (APM) solutions; previously, monitoring required Amazon CloudWatch. This monitoring gives teams more confidence and additional safety controls when making changes in production.

Unexpected downtime or degraded performance can occur from faulty changes to feature flags or configuration data. AWS AppConfig provides safety guardrails to reduce this risk. One key safety guardrail for AWS AppConfig is the ability to have AWS AppConfig immediately roll back a change when a monitor alerts during the rollout of a feature flag or configuration change. This automation can typically remediate problems faster than a human operator can. Customers can use AWS AppConfig Extensions to connect to any API-enabled APM, including proprietary solutions.

Third-party alarm rollback for AWS AppConfig is available in all AWS Regions, including the AWS GovCloud (US) Regions. To get started, use the AWS AppConfig Getting Started Guide, or learn about AWS AppConfig automatic rollback.
 

Read more


aws-application-discovery-service

AWS Application Discovery Service now supports data from commercially available discovery tools

Today, AWS announces additional file support for AWS Application Discovery Service (ADS), which adds the ability to import VMware data generated by third-party data center tools. With today’s launch, you can now directly take an export from Dell Technologies’ RVTools and load it into ADS without any file manipulation.

ADS provides a system of record for configuration, performance, tags, network connections, and application grouping of your existing on-premises workloads. Now, with support for additional file formats, you have the option to kick off your migration journey using the data you already have. Later, you can deploy either ADS Discovery Agents or the ADS Agentless Collector, and the data will automatically be merged into a unified view of your data center.

These new capabilities are available in all AWS Regions where AWS Application Discovery Service is available.

To learn more, please see the user guide for AWS Application Discovery Service. For more information on using the ADS import action via the AWS SDK or CLI, please see the API reference.
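
A minimal sketch of the import action with the AWS SDK for Python, assuming the RVTools export has already been uploaded to S3 (the bucket and file names are placeholders):

    import boto3

    # Import an RVTools export that was uploaded to S3 into ADS.
    discovery = boto3.client("discovery", region_name="us-west-2")
    task = discovery.start_import_task(
        name="rvtools-december-2024",
        importUrl="s3://my-migration-bucket/rvtools-export.xlsx",
    )
    print(task["task"]["importTaskId"])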

Read more


AWS Application Discovery Service adds integration with AWS Application Migration Service

Today AWS announces an integration between AWS Application Discovery Service (ADS) and AWS Application Migration Service (MGN), which allows data collected about your on-premises workloads to directly feed into your migration execution plan. This new capability provides a one-click export of the on-premises server configuration, tags, application grouping, and Amazon EC2 recommendations gathered during planning in a format supported by MGN.

ADS provides a system of record for configuration, performance, tags, and application groupings of your existing on-premises workloads. Now when using the Amazon EC2 instance recommendations feature, you also are provided an MGN-ready inventory file. This file can then be directly imported into MGN, removing the need to rediscover your workloads.

This new no-cost capability is available in all AWS Regions where AWS Application Discovery Service is available.

To learn more, please see the user guides for AWS Application Discovery Service and AWS Application Migration Service.
 

Read more


AWS Application Discovery Service now supports AWS PrivateLink

AWS Application Discovery Service (ADS) now supports AWS PrivateLink, providing private connectivity between virtual private clouds (VPCs), on-premises networks, and ADS without exposing traffic to the public internet. With this integration, administrators can use VPC endpoints to seamlessly route their discovery data from either the ADS Agentless Collector or the ADS Discovery Agent directly into ADS for analysis and migration planning.

This new feature is available in all AWS Regions where AWS Application Discovery Service and AWS PrivateLink are available.

To get started, see the AWS PrivateLink section of AWS Application Discovery Service user guide.

Read more


Network connections are now discoverable with AWS Application Discovery Service Agentless Collector

Starting today, the AWS Application Discovery Service Agentless Collector supports the discovery of on-premises network connections, allowing you to understand your on-premises dependencies and plan your AWS migration. With the Agentless Collector, one virtual appliance deployed within your on-premises data center can discover and monitor the performance of VMware virtual machines, database metadata and utilization metrics, and now network connections.

Using network connection data to group servers into applications is an important step when building a migration plan to the AWS Cloud. By using AWS Migration Hub to explore the relationships and dependencies between servers, migration practitioners can be confident about which servers should be part of a migration wave or application.

The network connections capability is now generally available, and can be used in all AWS Regions where AWS Application Discovery Service is available. Customers already running the Agentless Collector with active auto-updates only need to provide read-only credentials to enable the feature.

To learn more, read the user guide. Accelerate your migration with AWS Application Discovery Service today.

Read more


aws-appsync

AWS AppSync now supports cross account sharing of GraphQL APIs

AWS AppSync is a fully managed API management service that connects applications to events, data, and AI models. AppSync now supports sharing GraphQL APIs across AWS accounts using AWS Resource Access Manager (RAM). This new feature allows customers to securely share their AppSync GraphQL APIs configured with IAM authorization, including private APIs, with other AWS accounts within their organization or with third parties.

Before today, customers had to set up additional networking infrastructure to share their private GraphQL APIs between their organization accounts. With this enhancement, customers can now centralize their GraphQL API management in a dedicated account and share access to these APIs with other accounts. For example, a central API team can create and manage private GraphQL APIs, then share them with different application or networking teams in different accounts. This approach simplifies API governance, improves security, and enables more flexible and scalable architectures for multi-account environments. Customers can optionally enable CloudTrail to capture API activities related to AWS AppSync GraphQL APIs as events for additional security and visibility.
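
As a minimal sketch with the AWS SDK for Python, a central API team could share an IAM-authorized GraphQL API with another account like this (the API ARN and account ID are placeholders):

    import boto3

    # Share an AppSync GraphQL API with another account via AWS RAM.
    ram = boto3.client("ram")
    ram.create_resource_share(
        name="shared-graphql-api",
        resourceArns=["arn:aws:appsync:us-east-1:111122223333:apis/abcdefghijklmnopqrstuvwxyz"],
        principals=["444455556666"],  # consuming account
        allowExternalPrincipals=False,  # restrict to your organization
    )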

This feature is now available in all AWS Regions where AWS AppSync is available.

To get started, refer to the AWS AppSync GraphQL documentation, and visit the AWS RAM console to start sharing your APIs. For more information about sharing resources with AWS RAM, see the AWS RAM User Guide.

Read more


AWS AppSync launches AI gateway capabilities with new Amazon Bedrock integration in AppSync GraphQL

AWS AppSync is a fully managed API management service that connects applications to events, data, and AI models. Today, customers use AppSync as an AI gateway to trigger generative AI workflows and use subscriptions, powered by WebSockets, to return progressive updates from long-running invocations. This allows them to implement asynchronous patterns. However, in some cases, customers need to make short synchronous invocations to their models. AWS AppSync now supports Amazon Bedrock runtime as a data source for GraphQL APIs, enabling seamless integration of generative AI capabilities. This new feature allows developers to make short synchronous invocations (10 seconds or less) to foundation models and inference profiles in Amazon Bedrock directly from their AppSync GraphQL APIs.

The integration supports calling the converse and invokeModel APIs. Developers can interact with Anthropic models like Claude 3.5 Haiku and Claude 3.5 Sonnet for data analysis and structured object generation tasks. They can also use Amazon Titan models to generate embeddings, create summaries, or extract action items from meeting minutes.

For longer-running invocations, customers can continue using AWS Lambda functions in event mode to interact with Bedrock models and send progressive updates to clients via subscriptions.

This new data source is available in all AWS Regions where AWS AppSync is available. To get started, customers can visit the AWS AppSync console and refer to the AWS AppSync documentation for more information.
 

Read more


AWS AppSync GraphQL APIs now support data plane logging to AWS CloudTrail

Today, AWS AppSync announced support for logging GraphQL data plane operations (query, mutation, and subscription operations and connect requests to your real-time WebSocket endpoint) using AWS CloudTrail, enabling customers to have greater visibility into GraphQL API activity in their AWS account for best practices in security and operational troubleshooting. AWS AppSync GraphQL is a serverless GraphQL service that gives application developers the ability to access data from multiple databases, micro-services, and AI models with a single GraphQL API request.

CloudTrail captures API activities related to AWS AppSync GraphQL APIs as events, including calls from the AWS console and calls made programmatically to the AWS AppSync GraphQL API endpoints. Using the information that CloudTrail collects, you can identify a specific request to an AWS AppSync GraphQL API, the IP address of the requester, the requester's identity, and the date and time of the request. Logging AWS AppSync GraphQL APIs using CloudTrail helps you enable operational and risk auditing, governance, and compliance of your AWS account.

To opt in to CloudTrail logging, configure data event logging for your GraphQL APIs using the AWS CloudTrail console or the CloudTrail APIs.
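
As a hedged sketch with the AWS SDK for Python, data event logging can be enabled on an existing trail with an advanced event selector; the resources.type value below is an assumption to verify in the CloudTrail documentation:

    import boto3

    # Log AppSync GraphQL data plane operations on an existing trail; the
    # resources.type string is an assumption -- verify it in the docs.
    cloudtrail = boto3.client("cloudtrail")
    cloudtrail.put_event_selectors(
        TrailName="my-trail",
        AdvancedEventSelectors=[
            {
                "Name": "AppSync GraphQL data plane operations",
                "FieldSelectors": [
                    {"Field": "eventCategory", "Equals": ["Data"]},
                    {"Field": "resources.type", "Equals": ["AWS::AppSync::GraphQLApi"]},
                ],
            }
        ],
    )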

Logging data plane AWS AppSync GraphQL APIs using AWS CloudTrail is now available in all commercial AWS Regions where AppSync is available. To learn more about logging data plane APIs using AWS CloudTrail, see AWS Documentation. For more information about CloudTrail, see the AWS CloudTrail User Guide.

Read more


aws-artifact

AWS Artifact enhances agreements with improved access control and tracking

We are excited to announce enhancements to the agreement functionality on AWS Artifact that will improve how you manage and track agreement execution.

You can now provide fine-grained access to agreements in AWS Artifact at the AWS Identity and Access Management (IAM) Action and Resource level. To make it easy for you to configure IAM permissions, we have introduced the “AWSArtifactAgreementsReadOnlyAccess” and “AWSArtifactAgreementsFullAccess” managed policies for AWS Artifact agreements, which provide read-only permissions and full permissions respectively. We have also implemented CloudTrail logging for agreement activities on AWS Artifact. This enables you to easily track and audit user activity and API calls related to agreements. To take advantage of the new features through the Artifact console, please update your IAM policies and opt in to the new fine-grained permissions by selecting that option on the Artifact Agreements console.

We also introduced a new API called listCustomerAgreements that allows you to list active customer agreements for each AWS Account. This API enables automation and efficient tracking of active agreements for customers, especially for those managing a large number of accounts or complex compliance requirements.
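
A hedged sketch of calling the new API with the AWS SDK for Python; the response field names printed below are assumptions to check against the API reference:

    import boto3

    # List active customer agreements for the calling account; the field
    # names used below are assumptions -- see the API reference.
    artifact = boto3.client("artifact", region_name="us-east-1")
    for agreement in artifact.list_customer_agreements()["customerAgreements"]:
        print(agreement["name"], agreement["state"])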

These features are available in all AWS commercial regions. To learn more about AWS Artifact and how to manage agreements, refer to the documentation and AWS Artifact API reference.
 

Read more


aws-b2b-data-interchange

AWS B2B Data Interchange now supports all X12 transaction sets

AWS B2B Data Interchange now supports all X12 transactions for versions 4010, 4030, 4050, 4060, and 5010. Versions 4050 and 4060 are new to the service and were not previously available. Each of these transactions and versions are supported for both inbound and outbound use cases, enabling you to migrate a greater number of your bi-directional EDI workloads to AWS.

This launch especially benefits customers in the manufacturing, logistics, and financial services industries by enabling them to validate, parse, and transform a wider range of X12 transactions exchanged with their trading partners. Among these new transaction sets supported are those used to reserve shipment capacity, apply for mortgage insurance benefits, and to acknowledge purchase orders, deliveries, and returns.

These new X12 transaction sets and versions are available in all AWS Regions that offer B2B Data Interchange. A full list of these transactions, along with their descriptions and categories, can be found in the documentation. To learn more about building and running your bi-directional EDI workflows with B2B Data Interchange, take the self-paced workshop.

Read more


AWS B2B Data Interchange introduces generative AI-assisted EDI mappings

AWS B2B Data Interchange now enables you to generate electronic data interchange (EDI) mapping code using generative AI. This new capability expedites the process of writing and testing bi-directional EDI mappings, reducing the time, effort, and costs associated with migrating your EDI workloads to AWS. AWS B2B Data Interchange is a fully managed service that automates the transformation of business-critical EDI transactions at scale, with elasticity and pay-as-you-go pricing.

With AWS B2B Data Interchange’s new generative AI-assisted mapping capability, you can leverage your existing EDI documents and transactional data stored in your Amazon S3 buckets to generate mapping code using Amazon Bedrock. Once the mapping code is generated, it is managed within AWS B2B Data Interchange, where it is used to automatically transform new EDI documents to and from custom data representations. Previously, you were required to write and test each EDI mapping manually, which was a time-consuming and difficult process that required niche EDI specialization. AWS B2B Data Interchange’s new generative AI-assisted mapping capability increases developer productivity and reduces the technical expertise required to develop mapping code, so you can shift resources back to the value-added initiatives that drive meaningful business impact.

AWS B2B Data Interchange’s generative AI-assisted mapping capability is available in US East (N. Virginia) and US West (Oregon). To learn more about building and running your EDI workflows on AWS, visit the AWS B2B Data Interchange product page or review the documentation.

Read more


aws-backup

AWS Backup now supports Amazon Timestream in Asia Pacific (Mumbai)

Today, we are announcing the availability of AWS Backup support for Amazon Timestream for LiveAnalytics in the Asia Pacific (Mumbai) Region. AWS Backup is a policy-based, fully managed and cost-effective solution that enables you to centralize and automate data protection of Amazon Timestream for LiveAnalytics along with other AWS services (spanning compute, storage, and databases) and third-party applications. Together with AWS Organizations, AWS Backup enables you to centrally deploy policies to configure, manage, and govern your data protection activity.

With this launch, AWS Backup support for Amazon Timestream for LiveAnalytics is available in the following Regions: US East (N. Virginia, Ohio), US West (Oregon), Asia Pacific (Mumbai, Sydney, Tokyo), and Europe (Frankfurt, Ireland). For more information on regional availability, feature availability, and pricing, see the AWS Backup pricing page and the AWS Backup Feature Availability page.

To learn more about AWS Backup support for Amazon Timestream for LiveAnalytics, visit AWS Backup’s technical documentation. To get started, visit the AWS Backup console.
 

Read more


AWS Backup for Amazon S3 adds new restore parameter

AWS Backup introduces a new restore parameter for Amazon S3 backups, offering you the ability to choose how many versions of an object to restore.

By default, AWS Backup restores only the latest version of objects from the version stack at any point in time. The new parameter allows you to recover all versions of your data by restoring the entire version stack. You can also recover just the latest version(s) of an object without the overhead of restoring all older versions. With this feature, you have more flexibility to control the data recovery process of Amazon S3 buckets/prefixes from your Amazon S3 backups, tailoring restore jobs to your requirements.
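
A heavily hedged sketch of a restore call with the AWS SDK for Python; StartRestoreJob and its RecoveryPointArn, IamRoleArn, and Metadata parameters are real, but the metadata key controlling how many object versions to restore is hypothetical here, so check the AWS Backup documentation for the exact name and allowed values:

    import boto3

    # Restore from an S3 recovery point. The Metadata keys below are
    # placeholders; in particular, "RestoreVersions" is a hypothetical
    # name for the new version-selection parameter.
    backup = boto3.client("backup")
    backup.start_restore_job(
        RecoveryPointArn="arn:aws:backup:us-east-1:111122223333:recovery-point:example-1234",
        IamRoleArn="arn:aws:iam::111122223333:role/BackupRestoreRole",
        Metadata={
            "DestinationBucketName": "my-restore-bucket",
            "NewBucket": "false",
            "RestoreVersions": "latest",  # hypothetical key and value
        },
    )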

This feature is available in all Regions where AWS Backup for Amazon S3 is available. For more information on Regional availability and pricing, see the AWS Backup pricing page.

To learn more about AWS Backup for Amazon S3, visit the product page and technical documentation. To get started, visit the AWS Backup console.
 

Read more


AWS Backup now supports resource type and multiple tag selections in backup policies

Today, AWS Backup announces additional options to assign resources to a backup policy in AWS Organizations. Customers can now select specific resources by resource type and exclude them based on resource type or tag. They can also combine multiple tags within the same resource selection.

With additional options to select resources, customers can implement flexible backup strategies across their organizations by combining multiple resource types and/or tags. They can also exclude resources they do not want to back up by resource type or tag, optimizing costs for non-critical resources.

To get started, use your AWS Organizations management account to create or edit an AWS Backup policy. Then, create or modify a resource selection using the AWS Organizations API, CLI, or the JSON editor in either the AWS Organizations or AWS Backup console.

AWS Backup support for enhanced resource selection in backup policies is available in all commercial regions where AWS Backup’s cross account management is available. For more information, visit our documentation and launch blog.

Read more


AWS Backup now supports copying Amazon S3 backups across Regions and accounts in opt-in Regions

AWS Backup for Amazon S3 adds support to copy your Amazon S3 backups across AWS Regions and accounts in AWS opt-in Regions (Regions that are disabled by default).

With support for Amazon S3 backup copies in multiple AWS Regions, you can maintain separate, protected copies of your backup data to help meet compliance requirements for data protection and disaster recovery. Support for copying Amazon S3 backups across accounts provides an additional layer of protection against inadvertent or unauthorized actions.

The ability to copy Amazon S3 backups across AWS Regions and accounts is now available in all commercial AWS Regions. For more information on regional availability and pricing, see AWS Backup pricing page.

To learn more about AWS Backup for Amazon S3, visit the product page and technical documentation. To get started, visit the AWS Backup console.
 

Read more


aws-batch

AWS Batch now supports multiple EC2 Launch Templates per Compute Environment

AWS Batch now supports the association of multiple Launch Templates (LTs) with an AWS Batch Compute Environment (CE). You no longer need to create separate AWS Batch CEs to apply different configurations based on the size and type of your Amazon Elastic Compute Cloud (EC2) instances. With support for multiple LTs per CE, you can dynamically choose a unique Amazon Machine Image (AMI), provision the right amount of storage, apply unique resource tags, and more by associating different EC2 launch templates with the different EC2 instance types used by a CE, enabling you to define flexible configurations for running your workloads using fewer CEs.

You can associate multiple LTs while creating a new CE, or update an existing CE to use multiple LTs for different instance types. AWS Batch allows you to define up to 10 LT overrides of the default LT per CE, keyed to different EC2 instance families or instance family and size combinations. For more information, see the Launch Templates page in the AWS Batch User Guide.
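
A hedged sketch with the AWS SDK for Python, following the overrides structure described on the Launch Templates page (the IDs, subnets, and roles are placeholders):

    import boto3

    # Create a CE where c5 instances use a storage-heavy launch template
    # override while all other instance types use the default template.
    batch = boto3.client("batch")
    batch.create_compute_environment(
        computeEnvironmentName="multi-lt-ce",
        type="MANAGED",
        computeResources={
            "type": "EC2",
            "minvCpus": 0,
            "maxvCpus": 256,
            "subnets": ["subnet-0abc123"],
            "securityGroupIds": ["sg-0abc123"],
            "instanceTypes": ["c5", "m5"],
            "instanceRole": "arn:aws:iam::111122223333:instance-profile/ecsInstanceRole",
            "launchTemplate": {
                "launchTemplateId": "lt-0default123",
                "overrides": [
                    {
                        "launchTemplateId": "lt-0bigdisk456",
                        "targetInstanceTypes": ["c5"],
                    }
                ],
            },
        },
        serviceRole="arn:aws:iam::111122223333:role/AWSBatchServiceRole",
    )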

AWS Batch supports developers, scientists, and engineers in running efficient batch processing for ML model training, simulations, and analysis at any scale. Support for multiple launch templates per compute environment is available in all AWS Regions where AWS Batch is available.
 

Read more


aws-chatbot

Announcing general availability of AWS Chatbot SDK

AWS announces general availability of AWS Chatbot SDKs. This launch provides developers access to AWS Chatbot’s control plane APIs by using the AWS SDK.

With this launch, customers can programmatically implement ChatOps workflows in their chat channels. They can now use the SDK to configure Microsoft Teams and Slack channels for monitoring and diagnosing issues. They can use the SDK to configure action buttons and command aliases so that channel members can fetch telemetry and diagnose issues quickly. They can also programmatically tag resources to enforce tag-based controls in their environments.
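
For example, a minimal sketch of configuring a Slack channel with the AWS SDK for Python (the team ID, channel ID, and ARNs are placeholders):

    import boto3

    # Configure a Slack channel for ChatOps notifications from an SNS
    # topic; all identifiers below are placeholders.
    chatbot = boto3.client("chatbot")
    chatbot.create_slack_channel_configuration(
        ConfigurationName="ops-alerts",
        SlackTeamId="T0123456789",
        SlackChannelId="C0123456789",
        IamRoleArn="arn:aws:iam::111122223333:role/ChatbotChannelRole",
        SnsTopicArns=["arn:aws:sns:us-east-1:111122223333:ops-alarms"],
    )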

AWS Chatbot SDKs are available at no additional cost in AWS Regions where AWS Chatbot is offered. Visit the AWS Chatbot product page and API guide in AWS Chatbot documentation to learn more.
 

Read more


AWS support case management is now available in AWS Chatbot for Microsoft Teams and Slack

AWS Chatbot announces the general availability of AWS Support case management in Microsoft Teams and Slack. AWS customers can now use AWS Chatbot to monitor AWS Support case updates and respond to them from chat channels.

When troubleshooting issues, customers need to stay up to date on the latest support case activity in the place where they are collaborating. Previously, customers had to install a separate app or navigate to the console to manage support cases. Now, customers can monitor and manage support cases from Microsoft Teams and Slack with AWS Chatbot.

To manage support cases from chat channels with AWS Chatbot, customers subscribe a chat channel to support case events published in EventBridge. As new case correspondences are added, AWS Chatbot sends support case update notifications to the configured chat channels. Channel members can then use action buttons on the notifications to view the latest case updates and respond to them without leaving the chat channel.
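
As a hedged sketch of the event plumbing with the AWS SDK for Python, an EventBridge rule can route AWS Support events to the SNS topic a Chatbot channel subscribes to; the event pattern below is a simplified assumption to refine per the documentation:

    import json

    import boto3

    # Route AWS Support case events to the SNS topic that the AWS Chatbot
    # channel configuration subscribes to (the topic ARN is a placeholder).
    events = boto3.client("events")
    events.put_rule(
        Name="support-case-updates",
        EventPattern=json.dumps({"source": ["aws.support"]}),
    )
    events.put_targets(
        Rule="support-case-updates",
        Targets=[
            {"Id": "chatbot-sns", "Arn": "arn:aws:sns:us-east-1:111122223333:chatbot-topic"}
        ],
    )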

To interact with support cases in chat channels, you must have a Business, Enterprise On-Ramp, or Enterprise Support plan. Case management in chat applications is available at no additional cost in AWS Regions where AWS Chatbot is offered. Get started with AWS Chatbot by visiting the AWS Chatbot console and by downloading the AWS Chatbot app from the Microsoft Teams marketplace or Slack App Directory. Visit the AWS Chatbot product page and Managing AWS Support cases from chat channels in the AWS Chatbot documentation to learn more.
 

Read more


AWS Chatbot adds support for chatting about AWS resources with Amazon Q Developer in Microsoft Teams and Slack

We are excited to announce the general availability of Amazon Q Developer in AWS Chatbot, which provides answers to customers’ AWS resource related queries in Microsoft Teams and Slack.

When issues occur, customers need to quickly find relevant resources to troubleshoot. Customers can now ask questions in natural language in chat channels to list resources in AWS accounts, get specific resource details, and ask about related resources using Amazon Q Developer.

With Amazon Q Developer in AWS Chatbot, customers can find AWS resources by typing "@aws show ec2 instances in running state in us-east-1" or “@aws what is the size of the auto scaling group XX in us-east-2?”

Get started with AWS Chatbot by visiting the AWS Chatbot console and by downloading the AWS Chatbot app from the Microsoft Teams marketplace or Slack App Directory. To start chatting with Amazon Q in AWS Chatbot, see Asking Amazon Q questions in the AWS Chatbot documentation.

Read more


aws-clean-rooms

AWS Clean Rooms now supports multiple clouds and data sources

Today, AWS Clean Rooms announces support for collaboration with datasets from multiple clouds and data sources. This launch allows companies and their partners to easily collaborate with data stored in Snowflake and Amazon Athena, without having to move or share their underlying data among collaborators.

With AWS Clean Rooms' expanded support for data sources and clouds, organizations can seamlessly collaborate with any company leveraging datasets across AWS and Snowflake, without any party having to move, reveal, or copy their underlying datasets. This launch enables companies to collaborate on the most up-to-date data with zero extract, transform, and load (zero-ETL), eliminating the cost and complexity associated with migrating datasets out of existing environments. For example, a media publisher with data stored in Amazon S3 and an advertiser with data stored in Snowflake can analyze their collective datasets to evaluate the advertiser's spend without having to build ETL data pipelines, or share underlying data with one another. We are just getting started, and will continue to expand the ways in which customers can securely collaborate in AWS Clean Rooms while maintaining control of their records and information.

With AWS Clean Rooms, customers can create a secure data clean room in minutes and collaborate with any company on AWS or Snowflake, to generate unique insights about advertising campaigns, investment decisions, and research and development. For more information about the AWS Regions where AWS Clean Rooms is available, see the AWS Regions table. To learn more about collaborating with AWS Clean Rooms, visit AWS Clean Rooms.
 

Read more


AWS Clean Rooms ML supports privacy-enhanced model training and inference

Today, AWS announces AWS Clean Rooms ML custom modeling, which enables organizations to generate predictive insights with their partners by running their own machine learning (ML) models on their data in a Clean Rooms collaboration. With this launch, companies and their partners can train ML models and run inference on collective datasets without having to share sensitive data or proprietary models.

For example, advertisers can bring their proprietary model and data into a Clean Rooms collaboration and invite publishers to join their data to train and deploy a custom ML model that helps increase campaign effectiveness, all without sharing their custom model and data with one another. Similarly, financial institutions can use historical transaction records to train a custom ML model, and invite partners into a Clean Rooms collaboration to detect potential fraudulent transactions, without having to share the underlying data and models among collaborators. With AWS Clean Rooms ML custom modeling, you can gain valuable insights with your partners while applying privacy-enhancing controls when running model training and inference by specifying the datasets to be used in a Clean Rooms environment. This allows you and your partners to approve the datasets used, and removes the need to share sensitive data or proprietary models with one another. AWS Clean Rooms ML also offers an AWS-authored lookalike modeling capability that can help you improve lookalike segment accuracy by up to 36% compared to industry baselines.

AWS Clean Rooms ML is available as a capability of AWS Clean Rooms in these AWS Regions. To learn more, visit AWS Clean Rooms ML.

Read more


aws-client-vpn

AWS Client VPN now supports the latest Ubuntu OS versions - 22.04 LTS and 24.04 LTS

AWS Client VPN now supports the Linux desktop client on Ubuntu versions 22.04 LTS and 24.04 LTS. You can now run the AWS-supplied VPN client on the latest Ubuntu OS versions. AWS Client VPN desktop clients are available free of charge and can be downloaded here.

AWS Client VPN is a managed service that securely connects your remote workforce to AWS or on-premises networks. It supports desktop clients for macOS, Windows, and Ubuntu Linux. With this release, Client VPN now supports the latest Ubuntu versions (22.04 LTS and 24.04 LTS). It already supports macOS versions 12.0, 13.0, and 14.0, and Windows 10 and 11.

These client versions are available at no additional cost in all Regions where AWS Client VPN is generally available.

To learn more about Client VPN, see the AWS Client VPN documentation.

Read more


aws-cloud-wan

AWS Cloud WAN simplifies on-premises connectivity via AWS Direct Connect

AWS Cloud WAN now supports native integration with AWS Direct Connect, simplifying connectivity between your on-premises networks and the AWS cloud. The new capability enables you to directly attach your Direct Connect gateways to Cloud WAN without the need for an intermediate AWS Transit Gateway, allowing seamless connectivity between your data centers or offices and AWS Virtual Private Clouds (VPCs) across AWS Regions globally.

Cloud WAN allows you to build, monitor, and manage a unified global network that interconnects your resources in the AWS cloud and your on-premises environments. Direct Connect allows you to create a dedicated network connection to AWS, bypassing the public internet. Until today, customers needed to deploy an intermediate transit gateway to interconnect their Direct Connect-based networks with Cloud WAN. Starting today, you can directly attach your Direct Connect gateway to a Cloud WAN core network, simplifying connectivity between your on-premises locations and VPCs. The new Cloud WAN Direct Connect attachment adds support for automatic route propagation between AWS and on-premises networks using Border Gateway Protocol (BGP). Direct Connect attachments also support existing Cloud WAN features such as central policy-based management, tag-based attachment automation, and segmentation for advanced security.
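
As a hedged sketch with the AWS SDK for Python, assuming the NetworkManager CreateDirectConnectGatewayAttachment operation takes the shape shown below (the IDs and ARN are placeholders; verify the request parameters in the API reference):

    import boto3

    # Attach an existing Direct Connect gateway to a Cloud WAN core
    # network; identifiers below are placeholders.
    nm = boto3.client("networkmanager")
    nm.create_direct_connect_gateway_attachment(
        CoreNetworkId="core-network-0abc123",
        DirectConnectGatewayArn="arn:aws:directconnect::111122223333:dx-gateway/abcd-1234",
        EdgeLocations=["us-east-1"],
    )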

The new Direct Connect attachment for Cloud WAN is initially available in eleven commercial regions. Pricing for Direct Connect attachment is the same as any other Cloud WAN attachment. For additional information, please visit Cloud WAN documentation, pricing page and blog post.

Read more


AWS Transit Gateway and AWS Cloud WAN enhance visibility metrics and Path MTU support

AWS Transit Gateway (TGW) and AWS Cloud WAN now support per availability zone (AZ) metrics delivered to CloudWatch. Furthermore, both services now support Path Maximum Transmission Unit Discovery (PMTUD) for effective mitigation against MTU mismatch issues in their global networks.

TGW and Cloud WAN allow customers to monitor their global network through performance and traffic metrics such as bytes in/out, packets in/out, and packets dropped. Until now, these metrics were available at the attachment level and at aggregate TGW and Core Network Edge (CNE) levels. With this launch, customers have more granular visibility into AZ-level metrics for VPC attachments. AZ-level metrics enable customers to rapidly troubleshoot AZ impairments and provide deeper visibility into AZ-level traffic patterns across TGW and Cloud WAN.
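
A hedged sketch of pulling a per-AZ metric with the AWS SDK for Python; the AvailabilityZone dimension name is an assumption, so confirm the exact dimension names in the Transit Gateway CloudWatch metrics documentation:

    from datetime import datetime, timedelta

    import boto3

    # Fetch per-AZ BytesIn for one VPC attachment over the last hour; the
    # "AvailabilityZone" dimension name is an assumption -- verify it.
    cw = boto3.client("cloudwatch")
    stats = cw.get_metric_statistics(
        Namespace="AWS/TransitGateway",
        MetricName="BytesIn",
        Dimensions=[
            {"Name": "TransitGatewayAttachment", "Value": "tgw-attach-0abc123"},
            {"Name": "AvailabilityZone", "Value": "us-east-1a"},  # assumed
        ],
        StartTime=datetime.utcnow() - timedelta(hours=1),
        EndTime=datetime.utcnow(),
        Period=300,
        Statistics=["Sum"],
    )
    print(stats["Datapoints"])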

TGW and Cloud WAN now also support the standard PMTUD mechanism for traffic ingressing on VPC attachments. Until now, jumbo-sized packets exceeding the TGW/CNE MTU (8500 bytes) were silently dropped on VPC attachments. With this launch, an Internet Control Message Protocol (ICMP) Fragmentation Needed response message is sent back to sender hosts, allowing them to remediate the packet MTU size and thus minimize packet loss due to MTU mismatches in their network. PMTUD support is available for both IPv4 and IPv6 packets.

The per-AZ CloudWatch metrics and PMTUD support are available within each service in all AWS Regions where TGW or Cloud WAN are available. For more information, see the AWS Transit Gateway and AWS Cloud WAN documentation pages.

Read more


aws-cloudformation

Accelerate AWS CloudFormation troubleshooting with Amazon Q Developer assistance

AWS CloudFormation now offers generative AI assistance powered by Amazon Q Developer to help troubleshoot unsuccessful CloudFormation deployments. This new capability provides easy-to-understand analysis and actionable steps to simplify the resolution of the most common resource provisioning errors encountered during CloudFormation deployments.

When creating or modifying a CloudFormation stack, CloudFormation can encounter errors in resource provisioning, such as missing required parameters for an EC2 instance or inadequate permissions. Previously, troubleshooting a failed stack operation could be a time-consuming process. After identifying the root cause of the failure, you had to search through blogs and documentation for solutions and determine the next steps, leading to longer resolution times. Now, when you review a failed stack operation in the CloudFormation Console, CloudFormation automatically highlights the likely root cause of the failure. You can click the "Diagnose with Q" button in the error alert box and Amazon Q Developer will provide a human-readable analysis of the error, helping you understand what went wrong. If you need further assistance, you can click the "Help me resolve" button to receive actionable resolution steps tailored to your specific failure scenario, helping you accelerate resolution of the error.

To get started, open the CloudFormation Console and navigate to the stack events tab for a provisioned stack. This feature is available in AWS Regions where AWS CloudFormation and Amazon Q Developer are available. Refer to the AWS Region table for service availability details. Visit our user guide to learn more about this feature.
 

Read more


AWS CloudFormation Hooks now allows AWS Cloud Control API resource configurations evaluation

AWS CloudFormation Hooks now allow you to evaluate resource configurations from AWS Cloud Control API (CCAPI) create and update operations. Hooks allow you to invoke custom logic to enforce security, compliance, and governance policies on your resource configurations. CCAPI is a set of common application programming interfaces (APIs) designed to make it easy for developers to manage their cloud infrastructure in a consistent manner and leverage the latest AWS capabilities faster. By extending Hooks to CCAPI, customers can now inspect resource configurations prior to CCAPI create and update operations, and block the operation or emit a warning if a non-compliant resource is found.

Before this launch, customers would publish Hooks that would only be invoked during CloudFormation operations. Now, customers can extend their resource Hook evaluations beyond CloudFormation to CCAPI-based operations. Customers with existing resource Hooks, or those using the recently launched pre-built Lambda and Guard hooks, simply need to specify “Cloud_Control” as a target in the hook’s configuration.
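As a rough illustration, the sketch below updates an already-activated hook's type configuration so it also targets Cloud Control API operations. SetTypeConfiguration is an existing CloudFormation API; the hook ARN is a placeholder, and the TargetOperations key and its values are assumptions based on this announcement, so verify the exact names against the Hooks user guide.

```python
# Sketch: extend an existing activated hook to Cloud Control API operations.
import json
import boto3

cfn = boto3.client("cloudformation")

hook_config = {
    "CloudFormationConfiguration": {
        "HookConfiguration": {
            "TargetStacks": "ALL",
            "FailureMode": "FAIL",
            # The announcement names "Cloud_Control" as the new target;
            # key name and exact casing are assumptions to verify.
            "TargetOperations": ["RESOURCE", "CLOUD_CONTROL"],
        }
    }
}

cfn.set_type_configuration(
    TypeArn="arn:aws:cloudformation:us-east-1:123456789012:type/hook/MyOrg-Compliance-Hook",
    Configuration=json.dumps(hook_config),
)
```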

Hooks is available in all AWS Commercial Regions. CCAPI support is available whether you use CCAPI directly or through third-party infrastructure-as-code (IaC) tools that support CCAPI providers.

To get started, refer to the Hooks user guide and the CCAPI user guide for more information. Learn more about the details of this feature in this AWS DevOps Blog post.
 

Read more


Author AWS CloudFormation Hooks using the CloudFormation Guard domain specific language

AWS CloudFormation Hooks now allows customers to author hooks using the AWS CloudFormation Guard domain-specific language. Customers use AWS CloudFormation Hooks to invoke custom logic that inspects resource configurations prior to a create, update, or delete AWS CloudFormation stack operation. If a non-compliant configuration is found, Hooks can block the operation or let it continue with a warning. With this launch, you can author a hook by simply pointing to a Guard rule set stored as an S3 object.

Prior to this launch, customers authored hooks using a programming language and registered the hooks as extensions on the CloudFormation registry using the cfn-cli. This pre-built hook simplifies this authoring process and provides customers the ability to extend their existing Guard rules used for static template validation. Now, you can store your Guard rules, either as individual or compressed files in an S3 bucket, and provide your S3 URI in your hooks configuration.
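As a minimal sketch of that workflow, the snippet below uploads a Guard rule file to S3 so a Guard hook configuration can reference it. The put_object call is a standard S3 API; the bucket, key, and the Guard rule itself are illustrative placeholders, and the rule syntax should be checked against the Guard User Guide.

```python
# Sketch: store a CloudFormation Guard rule in S3 for use by a Guard hook.
import boto3

guard_rule = """
# Illustrative rule: every S3 bucket in a template must declare encryption
rule s3_buckets_encrypted {
    Resources.*[ Type == 'AWS::S3::Bucket' ] {
        Properties {
            BucketEncryption exists
        }
    }
}
"""

s3 = boto3.client("s3")
s3.put_object(
    Bucket="my-guard-rules-bucket",                # placeholder bucket
    Key="hooks/s3-encryption.guard",
    Body=guard_rule.encode("utf-8"),
)
# The resulting URI (s3://my-guard-rules-bucket/hooks/s3-encryption.guard)
# is what you point the Guard hook's configuration at.
```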

The Guard hook is available at no additional charge in all AWS Commercial Regions. To get started, you can use the new Hooks console workflow within the CloudFormation console, the AWS CLI, or CloudFormation templates.

To learn more about the Guard hook, check out the AWS DevOps Blog or refer to the Guard Hook User Guide. Refer to Guard User Guide to learn more about Guard including how to write Guard rules.
 

Read more


Announcing AWS CloudFormation support for Recycle Bin rules

Today, AWS announces AWS CloudFormation support for Recycle Bin, a data recovery feature that enables restoration of accidentally deleted Amazon EBS Snapshots and EBS-backed AMIs. You can now use Recycle Bin rules as a resource in your AWS CloudFormation templates, stacks, and stack sets.

Using AWS CloudFormation, you can now create, edit, and delete Recycle Bin rules as part of your CloudFormation templates and incorporate Recycle Bin rules into your automated infrastructure deployments. For example, a region-level Recycle Bin rule protects all resources of the specified type in the AWS Region in which the rule is created. If you have a template that automates the provisioning of new accounts, you can now add a region-level Recycle Bin rule to it. This ensures that all EBS Snapshots and/or EBS-backed AMIs in those accounts are automatically protected from accidental deletions and stored in the Recycle Bin according to the region-level rule.
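A minimal sketch of such a template is below, deployed with the standard create_stack API. The property names mirror the Recycle Bin (Rbin) API and are assumptions to verify against the AWS::Rbin::Rule resource reference.

```python
# Sketch: a region-level Recycle Bin rule retaining deleted EBS snapshots
# for 7 days, created via CloudFormation.
import json
import boto3

template = {
    "AWSTemplateFormatVersion": "2010-09-09",
    "Resources": {
        "SnapshotRecycleBinRule": {
            "Type": "AWS::Rbin::Rule",
            "Properties": {  # property names assumed from the Rbin API
                "Description": "Retain deleted EBS snapshots for 7 days",
                "ResourceType": "EBS_SNAPSHOT",
                "RetentionPeriod": {
                    "RetentionPeriodValue": 7,
                    "RetentionPeriodUnit": "DAYS",
                },
            },
        }
    },
}

boto3.client("cloudformation").create_stack(
    StackName="recycle-bin-baseline",
    TemplateBody=json.dumps(template),
)
```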

This feature is now available in all AWS Commercial Regions and the AWS GovCloud (US) Regions.

To get started using Recycle Bin in AWS CloudFormation, visit the AWS CloudFormation console. Please refer to the AWS CloudFormation user guide for information on using Recycle Bin rules as a resource in your templates, stacks, and stack sets. Learn more about Recycle Bin here.
 

Read more


AWS CloudFormation Hooks introduces stack and change set target invocation points

AWS CloudFormation Hooks announces the general availability of new target invocation points: stack and change set. CloudFormation Hooks allows you to invoke custom logic to inspect resource configurations prior to CloudFormation operations to enforce organizational best practices and ensure only compliant resources are provisioned. Today’s launch extends this capability beyond resource properties, enabling expressive safety checks that consider the entire context of a stack and the planned CloudFormation operation changes.

Customers previously used Hooks to run validation checks on resource properties before provisioning. Now, by targeting the stack as the control point, you can run hooks against the entire template payload and target multiple resources at once. This allows you to examine resource relationships and their dependencies. Moreover, you can use the change set invocation point to run Hooks when a change set is created to evaluate the updated template and change set payload. This allows you to automate your change set review and reduce the end-to-end time to resolve issues. You can set Hooks to fail the deployment or warn about the operation if any non-compliant configurations are found.

The stack and change set target control points are now available in all AWS Commercial Regions. Refer to Hooks developer guide to learn more.

Read more


AWS CloudFormation Hooks now support custom AWS Lambda functions

AWS CloudFormation Hooks introduces a pre-built hook that allows you to simply point to an AWS Lambda function in your account. With CloudFormation Hooks, you can provide custom logic that proactively evaluates your resource configurations before provisioning. Today’s launch allows you to provide that custom logic as a Lambda function, giving you a simpler way to author a hook while gaining the flexibility of hosting the function in your account.

Prior to this launch, customers used the CloudFormation CLI (cfn-cli) to author and publish hooks to the CloudFormation registry. Now, customers can simply activate the Lambda hook and pass the Amazon Resource Name (ARN) of a Lambda function for the hook to invoke. This allows you to directly edit your Lambda function to make updates without re-configuring your hook. Additionally, you no longer have to register your custom logic with the CloudFormation registry.
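For a sense of what such a function looks like, here is a minimal sketch of a Lambda handler a Lambda hook could invoke. The request and response shapes shown are simplified assumptions; consult the Lambda Hook User Guide for the actual payload contract.

```python
# Sketch of a Lambda hook handler that rejects unencrypted S3 buckets.
def handler(event, context):
    # Hook invocations carry the target resource type and proposed properties
    # (field names here are assumptions).
    request_data = event.get("requestData", {})
    target_type = request_data.get("targetType", "")
    props = request_data.get("targetModel", {}).get("resourceProperties", {})

    # Example policy: S3 buckets must declare encryption settings.
    if target_type == "AWS::S3::Bucket" and "BucketEncryption" not in props:
        return {
            "hookStatus": "FAILED",
            "errorCode": "NonCompliant",
            "message": "S3 buckets must configure BucketEncryption.",
        }

    return {"hookStatus": "SUCCESS", "message": "Resource is compliant."}
```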

The Lambda hook is available at no additional charge in all AWS Commercial Regions. Customers will incur a charge for Lambda usage; refer to Lambda’s pricing guide for more information. To get started, you can use the new Hooks console workflow within the CloudFormation console, the AWS CLI, or CloudFormation templates.

To learn more about the Lambda hook, check out the detailed feature walkthrough on the AWS DevOps Blog or refer to the Lambda Hook User Guide. To get started with creating your Lambda function, visit AWS Lambda User Guide.
 

Read more


Get x-ray vision into AWS CloudFormation deployments with a timeline view

AWS CloudFormation now offers a capability called deployment timeline view that allows customers to monitor and visualize the sequence of actions CloudFormation takes in a stack operation. This capability provides visibility into the ordering and duration of resource provisioning actions for a stack operation. This empowers developers to optimize their CloudFormation templates and speed up troubleshooting of deployment issues.

When you create, update, or delete a stack, CloudFormation initiates resource-level provisioning actions based on a resource dependency graph. For example, if you submit a CloudFormation template with an EC2 instance, Security Group, and VPC, CloudFormation creates the VPC, Security Group, and EC2 instance in that order. Previously, you could only see the chronological list of stack operation events, which provided limited visibility into dependencies between resources and the ordering of provisioning actions. Now, you can see a graphical visualization that shows the order in which CloudFormation provisions resources within a stack, color-codes the status of each resource, and displays the duration of each provisioning action. If resource provisioning encounters an error, the view highlights the likely root cause. This allows you to determine the optimal grouping of resources into templates to minimize deployment times and improve maintainability.
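The timeline view itself is a console feature, but the per-resource events it visualizes remain available programmatically. A small sketch using the existing DescribeStackEvents API (stack name is a placeholder):

```python
# Print each stack event's timestamp, resource, and status in
# chronological order - the raw data behind the timeline view.
import boto3

cfn = boto3.client("cloudformation")
events = []
for page in cfn.get_paginator("describe_stack_events").paginate(StackName="my-stack"):
    events.extend(page["StackEvents"])

for e in sorted(events, key=lambda e: e["Timestamp"]):
    print(e["Timestamp"], e["LogicalResourceId"], e["ResourceStatus"])
```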

The new capability is available in all AWS Regions where CloudFormation is supported. Refer to the AWS Region table for service availability details.

Get started by initiating a stack operation and accessing the deployment timeline view from the stack events tab in the CloudFormation Console. To learn more about the deployment timeline view, visit the AWS CloudFormation User Guide.
 

Read more


aws-cloudtrail

AWS CloudTrail Lake launches enhanced analytics and cross-account data access

AWS announces two significant enhancements to CloudTrail Lake, a managed data lake that enables you to aggregate, immutably store, and analyze your activity logs at scale:

  • Comprehensive dashboard capabilities: A new "Highlights" dashboard provides an at-a-glance overview of your AWS activity logs including AI-powered insights (AI-powered insights is in preview). Additionally, we have added 14 new pre-built dashboards catering to various use cases such as security and operational monitoring. These dashboards provide a starting point to analyze trends, detect anomalies, and conduct efficient investigations across your AWS environments. For example, the security dashboard displays top access denied events, failed console login attempts, and more. You can also create custom dashboards with scheduled refreshes, tailoring your monitoring to specific needs.
  • Cross-account sharing of event data stores: This feature allows you to securely share your event data stores with select IAM identities using Resource-Based Policies (RBP). These identities can then query the shared event data store within the same AWS Region where the event data store was created, facilitating more comprehensive analysis across your organization while maintaining security.

These features are available in all AWS Regions where AWS CloudTrail Lake is supported, except AI-powered insights on the “Highlights” dashboard, which is in preview in the N. Virginia, Oregon, and Tokyo Regions. While these enhancements are available at no additional cost, standard CloudTrail Lake query charges apply when running queries to generate results or create visualizations for the CloudTrail Lake dashboards. To learn more, visit the AWS CloudTrail documentation or read our News Blog.
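For the cross-account sharing piece, the sketch below attaches a resource-based policy to an event data store so an analyst role in another account can query it. PutResourcePolicy is an existing CloudTrail API; the ARNs are placeholders and the policy actions shown are assumptions to verify in the CloudTrail documentation.

```python
# Sketch: share an event data store with a role in another account.
import json
import boto3

eds_arn = "arn:aws:cloudtrail:us-east-1:111111111111:eventdatastore/EXAMPLE-UUID"
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "AllowCrossAccountQuery",
        "Effect": "Allow",
        "Principal": {"AWS": "arn:aws:iam::222222222222:role/SecurityAnalyst"},
        "Action": ["cloudtrail:StartQuery", "cloudtrail:GetQueryResults"],
        "Resource": eds_arn,
    }],
}

boto3.client("cloudtrail").put_resource_policy(
    ResourceArn=eds_arn,
    ResourcePolicy=json.dumps(policy),
)
```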

Read more


AWS CloudTrail Lake enhances log analysis with AI-powered features

AWS announces two AI-powered enhancements to AWS CloudTrail Lake, a managed data lake that helps you capture, immutably store, access, and analyze your activity logs, as well as AWS Config configuration items. These new capabilities simplify log analysis, enabling deeper insights and quicker investigations across your AWS environments:

  • AI-powered natural language query generation in CloudTrail Lake is now generally available in seven AWS Regions: Mumbai, N. Virginia, London, Tokyo, Oregon, Sydney, and Canada (Central). This feature allows you to ask questions about your AWS activity in plain English, without writing complex SQL queries. For example, you can ask, "Which API events failed in the last week due to missing permissions?" CloudTrail Lake then generates the corresponding SQL query, streamlining your analysis of AWS activity logs (management and data events).
  • AI-powered query result summarization is now available in preview in the N. Virginia, Oregon, and Tokyo Regions. This feature provides natural language summaries of your query results, regardless of whether the query was generated through the natural language query generation feature or manually written in SQL. This capability significantly reduces the time and effort required to extract meaningful insights from your AWS activity logs (management, data, and network activity events). For example, after running a query to find users with the most access denied requests, you can click "Summarize" to get a concise overview of the key findings.

Please note that running queries will incur CloudTrail Lake query charges. Refer to CloudTrail pricing for details. To learn more, visit the AWS CloudTrail documentation.
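A minimal sketch of the natural-language flow via the API: GenerateQuery turns a prompt into SQL, which you then run with StartQuery. The parameter names are assumptions to check against the CloudTrail API reference, and the event data store ARN is a placeholder.

```python
# Sketch: generate a SQL query from a natural-language prompt and run it
# (standard CloudTrail Lake query charges apply).
import boto3

cloudtrail = boto3.client("cloudtrail")
generated = cloudtrail.generate_query(
    EventDataStores=[
        "arn:aws:cloudtrail:us-east-1:111111111111:eventdatastore/EXAMPLE-UUID"
    ],
    Prompt="Which API events failed in the last week due to missing permissions?",
)

query = cloudtrail.start_query(QueryStatement=generated["QueryStatement"])
print("Started query:", query["QueryId"])
```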

Read more


AWS CloudTrail Lake announces enhanced event filtering

AWS enhances event filtering in AWS CloudTrail Lake, a managed data lake that helps you capture, immutably store, access, and analyze your activity logs, as well as AWS Config configuration items. Enhanced event filtering expands upon existing filtering capabilities, giving you even greater control over which CloudTrail events are ingested into your event data stores. This enhancement increases the efficiency and precision of your security, compliance, and operational investigations while helping reduce costs.

You can now filter both management and data events by the following new attributes:

  • eventSource: The service that the request was made to
  • eventType: The type of event that generated the event record (for example, AwsApiCall or AwsServiceEvent)
  • userIdentity.arn: IAM entity that made the request
  • sessionCredentialFromConsole: Whether the event originated from an AWS Management Console session or not

For management events, you can additionally filter by eventName which identifies the requested API action.

For each of these attributes, you can specify values to include or exclude. For example, you can now filter CloudTrail events based on the userIdentity.arn attribute to exclude events generated by specific IAM roles or users. You can exclude a dedicated IAM role used by a service that performs frequent API calls for monitoring purposes. This allows you to significantly reduce the volume of CloudTrail events ingested into CloudTrail Lake, lowering costs while maintaining visibility into relevant user and system activities.
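A minimal sketch of that example, using the existing CreateEventDataStore API with advanced event selectors (the role ARN is a placeholder):

```python
# Sketch: ingest management events but exclude a noisy monitoring role,
# using the new userIdentity.arn attribute.
import boto3

boto3.client("cloudtrail").create_event_data_store(
    Name="filtered-management-events",
    AdvancedEventSelectors=[{
        "Name": "Management events minus monitoring role",
        "FieldSelectors": [
            {"Field": "eventCategory", "Equals": ["Management"]},
            {"Field": "userIdentity.arn",
             "NotEquals": ["arn:aws:iam::111111111111:role/MonitoringRole"]},
        ],
    }],
)
```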

Enhanced event filtering is available in all AWS Regions where AWS CloudTrail Lake is supported, at no additional charge. To learn more, visit the AWS CloudTrail documentation.

Read more


aws-codebuild

AWS CodeBuild now supports Windows Docker builds in reserved capacity fleets

AWS CodeBuild now supports building Windows Docker images in reserved capacity fleets. AWS CodeBuild is a fully managed continuous integration service that compiles source code, runs tests, and produces software packages ready for deployment.

Additionally, you can bring your own Amazon Machine Images (AMIs) to reserved capacity for Linux and Windows platforms. This enables you to customize your build environment, including building and testing with different kernel modules, for greater flexibility.
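A rough sketch of creating such a fleet follows. CreateFleet is an existing CodeBuild API, but the environmentType value and the custom-AMI parameter shown are assumptions to verify against the reserved capacity documentation; the AMI ID is a placeholder.

```python
# Sketch: a reserved capacity fleet for Windows Docker builds from a
# custom AMI.
import boto3

boto3.client("codebuild").create_fleet(
    name="windows-docker-fleet",
    baseCapacity=2,
    environmentType="WINDOWS_SERVER_2022_CONTAINER",  # assumed value
    computeType="BUILD_GENERAL1_LARGE",
    imageId="ami-0123456789abcdef0",  # your custom AMI (assumed parameter name)
)
```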

The feature is now available in US East (N. Virginia), US East (Ohio), US West (Oregon), South America (Sao Paulo), Asia Pacific (Singapore), Asia Pacific (Tokyo), Asia Pacific (Sydney), Asia Pacific (Mumbai), Europe (Ireland), and Europe (Frankfurt) where reserved capacity fleets are supported.

You can follow the Windows Docker image sample to get started. To configure your own AMIs in reserved capacity fleets, please visit the reserved capacity documentation. To learn more about how to get started with CodeBuild, visit the AWS CodeBuild product page.

Read more


AWS CodeBuild now supports additional compute types for reserved capacity

AWS CodeBuild now supports 18 new compute options for your reserved capacity fleets. You can select up to 96 vCPUs and 192 GB of memory to build and test your software applications on Linux x86, Arm, and Windows platforms. AWS CodeBuild is a fully managed continuous integration service that compiles source code, runs tests, and produces software packages ready for deployment.

Customers using reserved capacity can now access the new compute types by configuring vCPU, memory size, and disk space attributes on the fleets. With the addition of these new types, you now have a wider range of compute options across different Linux and Windows platforms for your workloads.
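As a sketch of what attribute-based fleet configuration might look like: the compute type and computeConfiguration shape below are assumptions based on this announcement, so verify them against the CodeBuild documentation before use.

```python
# Sketch: size a reserved capacity fleet by attributes (vCPUs, memory,
# disk) rather than a named compute type.
import boto3

boto3.client("codebuild").create_fleet(
    name="large-linux-fleet",
    baseCapacity=1,
    environmentType="LINUX_CONTAINER",
    computeType="ATTRIBUTE_BASED_COMPUTE",  # assumed value
    computeConfiguration={                  # assumed key names
        "vCpu": 96,
        "memory": 192,  # GB
        "disk": 824,    # GB, example value
    },
)
```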

The new compute types are now available in US East (N. Virginia), US East (Ohio), US West (Oregon), South America (Sao Paulo), Asia Pacific (Singapore), Asia Pacific (Tokyo), Asia Pacific (Sydney), Asia Pacific (Mumbai), Europe (Ireland), and Europe (Frankfurt) where reserved capacity fleets are supported.

To learn more about compute options in reserved capacity, please visit our documentation. To learn more about how to get started with CodeBuild, visit the AWS CodeBuild product page.
 

Read more


aws-codepipeline

AWS CodePipeline now supports publishing ECR image and AWS InspectorScan as new actions

AWS CodePipeline introduces the ECRBuildAndPublish action and the AWS InspectorScan action in its action catalog. The ECRBuildAndPublish action enables you to easily build a Docker image and publish it to ECR as part of your pipeline execution. The InspectorScan action enables you to scan your source code repository or Docker image as part of your pipeline execution.

Previously, if you wanted to build and publish a Docker image, or run a vulnerability scan, you had to create a CodeBuild project, configure the project with the appropriate commands, and add a CodeBuild action to your pipeline to run the project. Now, you can simply add these actions to your pipeline and let the pipeline handle the rest for you.
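For orientation, here is a sketch of how the new build-and-publish action might appear in a stage definition passed to create_pipeline or update_pipeline. The provider name comes from this announcement; the configuration keys and artifact names are assumptions to verify in the CodePipeline action reference.

```python
# Sketch: a pipeline stage using the new ECRBuildAndPublish action.
build_and_publish_stage = {
    "name": "BuildAndPublishImage",
    "actions": [{
        "name": "BuildImage",
        "actionTypeId": {
            "category": "Build",
            "owner": "AWS",
            "provider": "ECRBuildAndPublish",
            "version": "1",
        },
        "configuration": {
            "ECRRepositoryName": "my-app",  # assumed configuration key
        },
        "inputArtifacts": [{"name": "SourceOutput"}],
    }],
}
```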

To learn more about using the ECRBuildAndPublish action in your pipeline, visit our documentation. To learn more about using the InspectorScan action in your pipeline, visit our documentation. For more information about AWS CodePipeline, visit our product page. These new actions are available in all regions where AWS CodePipeline is supported, except the AWS GovCloud (US) Regions and the China Regions.

Read more


AWS CodePipeline open source starter templates for simplified getting started experience

Today, AWS CodePipeline open-sourced its starter templates library, which allows you to view the CloudFormation templates that power the different pipeline scenarios available in CodePipeline.

The starter template library is a valuable resource if you are new to CodePipeline. With the starter templates, you can see the resources being provisioned, understand how different pipeline stages are configured, and use these templates as a starting point for building more advanced pipelines. This increased transparency allows you to take a more hands-on approach to your CI/CD workflows and align them with your specific business requirements.

AWS CodePipeline starter templates library is released as an open-source project under the Apache 2.0 license. You can access the source code in the GitHub repository here. For more information about AWS CodePipeline, visit our product page.

Read more


aws-command-line-interface

AWS Command Line Interface adds PKCE-based authorization for single sign-on

The AWS Command Line Interface (AWS CLI) v2 now supports OAuth 2.0 authorization code flows using the Proof Key for Code Exchange (PKCE) standard. This provides a simple and safe way to retrieve credentials for AWS CLI commands.

The AWS CLI is a unified tool that enables you to control multiple AWS services from the command line and to automate them through scripts. AWS CLI v2 offers integration with AWS IAM Identity Center, the recommended service for managing workforce access to AWS applications and multiple AWS accounts. The authorization code flow with PKCE is the recommended best practice for access to AWS resources from desktops and mobile devices with web browsers. It is now the default behavior when running the aws sso login or aws configure sso commands.

To learn more, see Configuring IAM Identity Center authentication with the AWS CLI in the AWS CLI User Guide. Share your questions, comments, and issues with us on GitHub. AWS IAM Identity Center is available at no additional cost in supported AWS Regions.
 

Read more


aws-compute-optimizer

AWS Compute Optimizer now supports rightsizing recommendations for Amazon Aurora

AWS Compute Optimizer now provides recommendations for Amazon Aurora DB instances. These recommendations help you identify idle database instances and choose the optimal DB instance class, so you can reduce costs for unused resources and increase the performance of under-provisioned workloads.

AWS Compute Optimizer automatically analyzes Amazon CloudWatch metrics such as CPU utilization, network throughput, and database connections to generate recommendations for your DB instances running Amazon Aurora MySQL-Compatible Edition and Aurora PostgreSQL-Compatible Edition engines. If you enable Amazon RDS Performance Insights on your DB instances, Compute Optimizer will analyze additional metrics such as DBLoad and out-of-memory counters to give you more insights to choose the optimal DB instance configuration. With this launch, AWS Compute Optimizer now supports recommendations for Amazon RDS for MySQL, Amazon RDS for PostgreSQL, and Amazon Aurora database engines.

This new feature is available in all AWS Regions where AWS Compute Optimizer is available except the AWS GovCloud (US) and the China Regions. To learn more about the new feature updates, please visit Compute Optimizer’s product page and user guide.

Read more


AWS Compute Optimizer now supports idle resource recommendation

Today, AWS announces that AWS Compute Optimizer now supports recommendations to help you identify idle AWS resources. With this new recommendation type, you can identify resources that are unused and may be candidates for stopping or deleting, resulting in cost savings.

With the new idle resource recommendation, you can identify idle EC2 instances, EC2 Auto Scaling groups, EBS volumes, ECS services running on Fargate, and RDS instances. You can view the total savings potential of stopping or deleting these idle resources. Compute Optimizer analyzes 14 consecutive days of utilization history to validate that resources are idle, providing trustworthy savings estimates. You can also view idle resource recommendations across all AWS accounts in your organization through the Cost Optimization Hub, with estimated savings de-duplicated against other recommendations on the same resources.
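A small sketch of retrieving these recommendations via the SDK: GetIdleRecommendations is the API introduced with this feature, but the response field names below are assumptions to verify against the Compute Optimizer API reference.

```python
# Sketch: list idle resource recommendations with estimated monthly savings.
import boto3

resp = boto3.client("compute-optimizer").get_idle_recommendations()
for rec in resp.get("idleRecommendations", []):
    savings = (
        rec.get("savingsOpportunity", {})
           .get("estimatedMonthlySavings", {})
           .get("value")
    )
    print(rec.get("resourceArn"), rec.get("resourceType"), savings)
```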

For more information about the AWS Regions where Compute Optimizer is available, see AWS Region table.

For more information about Compute Optimizer, visit our product page and documentation. You can start using AWS Compute Optimizer through the AWS Management Console, AWS CLI, and AWS SDK.

Read more


aws-config

Amazon CloudWatch now provides centralized visibility into telemetry configurations

Amazon CloudWatch now offers centralized visibility into critical AWS service telemetry configurations, such as Amazon VPC Flow Logs, Amazon EC2 Detailed Metrics, and AWS Lambda Traces. This enhanced visibility enables central DevOps teams, system administrators, and service teams to identify potential gaps in their infrastructure monitoring setup. The telemetry configuration auditing experience seamlessly integrates with AWS Config to discover AWS resources, and can be turned on for the entire organization using the new AWS Organizations integration with Amazon CloudWatch.

With visibility into telemetry configurations, you can identify monitoring gaps that might have been missed in your current setup. For example, this helps you identify gaps in your EC2 detailed metrics so that you can address them and easily detect short-lived performance spikes and build responsive auto-scaling policies. You can audit telemetry configuration coverage at both resource type and individual resource levels, refining the view by filtering across specific accounts, resource types, or resource tags to focus on critical resources.

The telemetry configurations auditing experience is available in US East (N. Virginia), US West (Oregon), US East (Ohio), Asia Pacific (Tokyo), Asia Pacific (Singapore), Asia Pacific (Sydney), Europe (Frankfurt), Europe (Ireland), and Europe (Stockholm) regions. There is no additional cost to turn on the new experience, including for AWS Config.

You can get started with auditing your telemetry configurations using the Amazon CloudWatch Console, by clicking on Telemetry config in the navigation panel, or programmatically using the API/CLI. To learn more, visit our documentation.

Read more


AWS Config now supports a service-linked recorder

AWS Config added support for a service-linked recorder, a new type of AWS Config recorder that is managed by an AWS service and can record configuration data on service-specific resources, such as the new Amazon CloudWatch telemetry configurations audit. By enabling the service-linked recorder in Amazon CloudWatch, you gain centralized visibility into critical AWS service telemetry configurations, such as Amazon VPC Flow Logs, Amazon EC2 Detailed Metrics, and AWS Lambda Traces.

With service-linked recorders, an AWS service can deploy and manage an AWS Config recorder on your behalf to discover resources and utilize the configuration data to provide differentiated features. For example, an Amazon CloudWatch managed service-linked recorder helps you identify monitoring gaps within specific critical resources in your organization, providing a centralized, single-pane view of telemetry configuration status. Service-linked recorders are immutable to ensure consistency, prevent configuration drift, and simplify the experience. Service-linked recorders operate independently of any existing AWS Config recorder, if one is enabled. This allows you to independently manage your AWS Config recorder for your specific use cases while authorized AWS services manage the service-linked recorder for feature-specific requirements.

Amazon CloudWatch managed service-linked recorder is now available in the US East (N. Virginia), US West (Oregon), US East (Ohio), Asia Pacific (Tokyo), Asia Pacific (Singapore), Asia Pacific (Sydney), Europe (Frankfurt), Europe (Ireland), and Europe (Stockholm) regions. The AWS Config service-linked recorder specific to the Amazon CloudWatch telemetry configuration feature is available to customers at no additional cost.

To learn more, please refer to our documentation.
 

Read more


aws-console-mobile-application

AWS Announces Amazon Q account resources chat in the AWS Console Mobile App

Today, Amazon Web Services (AWS) is announcing the general availability of Amazon Q Developer’s AWS account resources chat capability in the AWS Console Mobile Application. With this capability, you can use your device’s voice input and output capabilities along with natural language prompts to list resources in your AWS account, get specific resource details, and ask about related resources while on-the-go.

From the Amazon Q tab in the AWS Console Mobile App, you can ask Q to “list my running EC2 instances in us-east-1” or “list my S3 buckets” and Amazon Q returns a list of resource details, along with a summary. You can ask “what Amazon EC2 instances is Amazon CloudWatch alarm <name> monitoring” or ask “what related resources does my ec2 instance <id> have?” and Amazon Q will respond with specific resource details in a mobile friendly format.

The Console Mobile App lets users view and manage a select set of resources to stay informed and connected with their AWS resources while on-the-go. Visit the product page for more information about the Console Mobile Application.
 

Read more


aws-control-tower

Amazon Web Services announces declarative policies

Today, AWS announces the general availability of declarative policies, a new management policy type within AWS Organizations. These policies simplify the way customers enforce durable intent, such as baseline configuration for AWS services within their organization. For example, with a few clicks or commands, customers can use declarative policies to configure EC2 across their entire organization to allow instance launches only from AMIs vended by specific providers, or to block public access in their VPCs.

Declarative policies are designed to prevent actions that are non-compliant with the policy. The configuration defined in the declarative policy is maintained even when services add new APIs or features, or when customers add new principals or accounts to their organization. With declarative policies, governance teams have access to an account status report that provides insight into the current configuration for an AWS service across their organization. This helps them assess readiness to enforce configuration at scale. Administrators can provide additional transparency to end users by configuring custom error messages that redirect them to internal wikis or ticketing systems.

To get started, navigate to the AWS Organizations console to create and attach declarative policies. You can also use AWS Control Tower, the AWS CLI, or CloudFormation templates to configure these policies. Declarative policies today support EC2, EBS, and VPC configurations, with support for other services coming soon. To learn more, see the documentation and blog post.
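A rough sketch of the API flow for the AMI example: CreatePolicy and AttachPolicy are existing Organizations APIs, and DECLARATIVE_POLICY_EC2 is the policy type introduced with this launch, but the policy content schema below is an assumption to verify against the Organizations documentation. The root ID is a placeholder.

```python
# Sketch: create and attach a declarative policy blocking public AMI sharing.
import json
import boto3

org = boto3.client("organizations")
policy = org.create_policy(
    Name="block-public-amis",
    Description="Block public sharing of AMIs organization-wide",
    Type="DECLARATIVE_POLICY_EC2",
    Content=json.dumps({
        "ec2_attributes": {  # assumed content schema
            "image_block_public_access": {
                "state": {"@@assign": "block_new_sharing"}
            }
        }
    }),
)
org.attach_policy(
    PolicyId=policy["Policy"]["PolicySummary"]["Id"],
    TargetId="r-examplerootid",  # placeholder organization root ID
)
```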

Read more


AWS Control Tower launches managed controls using declarative policies

Today, we are excited to announce the general availability of managed, preventive controls implemented using declarative policies in AWS Control Tower. These policies are a set of new optional controls that help you consistently enforce the desired configuration for a service. For example, customers can deploy a declarative, policy-based preventive control that disallows public sharing of Amazon Machine Images (AMIs). Declarative policies help you ensure that the controls configured are always enforced regardless of the introduction of new APIs, or when new principals or accounts are added.

Today, AWS Control Tower is releasing declarative, policy-based preventive controls for Amazon Elastic Compute Cloud (Amazon EC2), Amazon Virtual Private Cloud (Amazon VPC), and Amazon Elastic Block Store (Amazon EBS). These controls help you achieve control objectives such as limiting network access, enforcing least privilege, and managing vulnerabilities. AWS Control Tower’s new declarative policy-based preventive controls complement AWS Control Tower’s existing control capabilities, enabling you to disallow actions that lead to policy violations.

The combination of preventive, proactive, and detective controls helps you monitor whether your multi-account AWS environment is secure and managed in accordance with best practices. For a full list of AWS regions where AWS Control Tower is available, see AWS Region Table.

Read more


AWS Control Tower adds prescriptive backup plans to landing zone capabilities

Today, AWS Control Tower added AWS Backup to the list of AWS services you can optionally configure with prescriptive guidance. This configuration option allows you to select from a range of recommended backup plans, seamlessly integrating data backup and recovery workflows into your Control Tower landing zone and organizational units. A landing zone is a well-architected, multi-account AWS environment based on security and compliance best practices. AWS Control Tower automates the setup of a new landing zone using best-practices blueprints for identity, federated access, logging, account structure, and with this launch adds data retention.

When you choose to enable AWS Backup on your landing zone, and then select applicable organizational units, Control Tower creates a backup plan with predefined rules, such as retention days, frequency, and the time window during which backups occur, that define how to back up AWS resources across all governed member accounts. Applying the backup plan at the Control Tower landing zone ensures it is consistent for all member accounts, in line with best-practice recommendations from AWS Backup.

For a full list of Regions where AWS Control Tower is available, see the AWS Region Table. To learn more, visit the AWS Control Tower homepage or see the AWS Control Tower User Guide.

Read more


AWS Control Tower improves Hooks management for proactive controls and extends proactive controls support in additional regions

Today, we are excited to release an improved AWS CloudFormation Hooks management capability for AWS Control Tower proactive controls. With this release, Hooks deployed for proactive controls will now be managed by AWS Control Tower. Additionally, we are releasing proactive controls in AWS Canada West (Calgary) and Asia Pacific (Malaysia) regions. These controls help you meet control objectives such as establish logging and monitoring, encrypt data at rest, or improve resiliency. To see a full list of the proactive controls, see the Controls Reference Guide.

AWS Control Tower’s proactive control capabilities leverage AWS CloudFormation Hooks to identify and block non-compliant resources before AWS CloudFormation provisions them. Previously, Hooks deployed for proactive controls were protected so that only AWS Control Tower could modify them, which prevented customers from authoring their own Hooks. With this release, proactive control deployed Hooks are directly managed by the AWS Control Tower service, allowing customers to author their own Hooks while still benefiting from AWS Control Tower proactive controls.

AWS Control Tower’s proactive controls are available in all AWS commercial Regions where AWS Control Tower is available. For a full list of AWS Regions where AWS Control Tower is available, see AWS Region Table. You can start deploying the AWS Control Tower controls from the console or using AWS Control Tower control APIs.
 

Read more


AWS Control Tower launches configurable managed controls implemented using resource control policies

Today we are excited to announce the launch of AWS managed controls implemented using resource control policies (RCPs) in AWS Control Tower. These new optional preventive controls help you centrally apply organization-wide access controls around AWS resources in your organization. Additionally, you can now configure the new RCP and existing service control policies (SCP) preventive controls to specify AWS IAM (principal and resource) exemptions where applicable. Exemptions can be configured when you don’t want a principal or a resource to be governed by the control. To see a full list of the new controls, see the controls reference guide.

With this addition, AWS Control Tower now supports over 30 configurable preventive controls, providing off-the-shelf AWS-managed controls to help you scale your business using new AWS workloads and services. At launch, you can enable AWS Control Tower RCPs for Amazon Simple Storage Service, AWS Security Token Service, AWS Key Management Service, Amazon Simple Queue Service, and AWS Secrets Manager. For example, an RCP can require that the organization's Amazon S3 resources be accessible only to IAM principals that belong to the organization, regardless of the permissions granted in individual S3 bucket policies.

AWS Control Tower’s new RCP based preventive controls are available in all AWS commercial Regions where AWS Control Tower is available. For a full list of AWS regions where AWS Control Tower is available, see AWS Region Table.
 

Read more


AWS Control Tower launches the ability to resolve drift for optional controls

AWS Control Tower customers can now use the ResetEnabledControl API to programmatically resolve control drift or re-deploy a control to its intended configuration. Control drift occurs when an AWS Control Tower managed control is modified outside of AWS Control Tower governance. Resolving drift helps you adhere to your governance and compliance requirements. You can use this API with all AWS Control Tower optional controls except service control policy (SCP)-based preventive controls. AWS Control Tower APIs enhance the end-to-end developer experience by enabling automation for integrated workflows and managing workloads at scale.

Below is the list of AWS Control Tower control APIs that are now supported in the regions where AWS Control Tower is available. Please visit the AWS Control Tower API reference for more information.

  • AWS Control Tower control APIs - EnableControl, DisableControl, GetControlOperation, GetEnabledControl, ListEnabledControls, UpdateEnabledControl, TagResource, UntagResource, ListTagsForResource, and ResetEnabledControl.

To learn more, visit the AWS Control Tower homepage. For more information about the AWS Regions where AWS Control Tower is available, see the AWS Region table.
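A minimal sketch of a detect-and-repair flow using two of the APIs listed above: ResetEnabledControl returns an operation identifier you can poll with GetControlOperation. The enabled control ARN is a placeholder.

```python
# Sketch: reset a drifted control and poll until the operation completes.
import time
import boto3

ct = boto3.client("controltower")
enabled_control_arn = (
    "arn:aws:controltower:us-east-1:111111111111:enabledcontrol/EXAMPLEID"
)

op = ct.reset_enabled_control(enabledControlIdentifier=enabled_control_arn)
op_id = op["operationIdentifier"]

while True:
    status = ct.get_control_operation(operationIdentifier=op_id)
    state = status["controlOperation"]["status"]
    if state in ("SUCCEEDED", "FAILED"):
        print("Reset finished with status:", state)
        break
    time.sleep(10)
```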
 

Read more


aws-cost-explorer

Amazon Q Developer now provides natural language cost analysis

Today, AWS announces the addition of cost analysis capabilities to Amazon Q Developer, allowing customers to retrieve and interpret their AWS cost data through natural language interactions. Amazon Q Developer is a generative AI-powered assistant that helps customers build, deploy, and operate applications on AWS. The cost analysis capability helps users of all skill levels to better understand and manage their AWS spending without previous knowledge of AWS Cost Explorer.

Customers can now ask Amazon Q Developer questions about their AWS costs such as "Which region had the largest cost increase last month?" or "What services cost me the most last quarter?". Q interprets these questions, analyzes the relevant cost data, and provides easy-to-understand responses. Each answer includes transparency on the Cost Explorer parameters used and a link to visualize the data in Cost Explorer.

This feature is now available in all AWS Regions where Amazon Q Developer is supported. Customers can access it via the Amazon Q icon in the AWS Management Console. To get started, see the AWS Cost Management user guide.
 

Read more


aws-data-exchange

Announcing enhanced purchase order support for AWS Marketplace

Today, AWS Marketplace is extending transaction purchase order number support to products with pay-as-you-go pricing, including Amazon Bedrock subscriptions, software as a service (SaaS) contracts with consumption pricing, and annual AMI contracts. Additionally, you can update purchase order numbers post-subscription, prior to invoice creation, to ensure your invoices reflect the proper purchase order. This launch helps you allocate costs and makes it easier to process and pay invoices.

The purchase order feature in AWS Marketplace allows the purchase order number that you provide at the time of the transaction in AWS Marketplace to appear on all invoices related to that purchase. Now, you can provide a purchase order at the time of purchase for most products available in AWS Marketplace, including products with pay-as-you-go pricing. You can add or update purchase orders post-subscription, prior to invoice generation, within the AWS Marketplace console. You can also provide more than one PO for products appearing on your monthly AWS Marketplace invoice and receive a unique invoice for each purchase order. Additionally, you can add a unique PO for each fixed charge and associated AWS Marketplace monthly usage charges at the time of purchase, or post-subscription in the AWS Marketplace console.

You can update purchase orders for existing subscriptions under manage subscriptions in the AWS Marketplace console. To enable transaction purchase orders for AWS Marketplace, sign in to the management account (for AWS Organizations) and enable the AWS Billing integration in the AWS Marketplace Console settings. To learn more, read the AWS Marketplace Buyer Guide.

Read more


aws-data-transfer-terminal

AWS announces AWS Data Transfer Terminal for high-speed data uploads

Today, AWS announces the launch of AWS Data Transfer Terminal, a secure, physical location where you can bring your storage devices, connect directly to the AWS network, and upload data to AWS including Amazon Simple Storage Service (Amazon S3), Amazon Elastic File System (Amazon EFS), and others using a high throughput connection. Currently, Data Transfer Terminals are located in Los Angeles and New York. You can reserve a time slot to visit your nearest Data Transfer Terminal facility to upload data.

AWS Data Transfer Terminals are ideal for customer scenarios that create or collect large amounts of data that need to be transferred to the AWS cloud quickly and securely on an as-needed basis. These use cases span various industries and applications, including video production data for processing in the media and entertainment industry, training data for Advanced Driver Assistance Systems (ADAS) in the automotive industry, migrating legacy data in the financial services industry, and uploading equipment sensor data in the industrial and agricultural sectors. By using Data Transfer Terminal, you can significantly reduce the time it takes to upload large amounts of data, enabling you to process ingested data within minutes, as opposed to days or weeks. Once data is uploaded to AWS, you can efficiently analyze large datasets with Amazon Athena, train and run machine learning models with ingested data using Amazon SageMaker, or build scalable applications using Amazon Elastic Compute Cloud (Amazon EC2).

To learn more, visit the Data Transfer Terminal product page and documentation. To get started, make a reservation at your nearby Data Transfer Terminal in the AWS Console.

Read more


aws-database-migration-service

AWS DMS Schema Conversion now uses generative AI

AWS Database Migration Service (AWS DMS) Schema Conversion with generative AI is now available. The feature is currently available for database schema conversion from commercial engines, such as Microsoft SQL Server, to Amazon Aurora PostgreSQL-Compatible Edition and Amazon Relational Database Service (Amazon RDS) for PostgreSQL.

Using generative AI recommendations, you can simplify and accelerate your database migration projects, particularly when converting complex code objects which typically require manual conversion, such as stored procedures, functions, or triggers. AWS DMS Schema Conversion with generative AI converts up to 90% of your schema.

AWS DMS Schema Conversion with generative AI is currently available in three AWS Regions: US East (N. Virginia), US West (Oregon), and Europe (Frankfurt).

You can use this feature in the AWS Management Console or AWS Command Line Interface (AWS CLI) by selecting a commercial database such as Microsoft SQL Server as your source database and Amazon Aurora PostgreSQL or Amazon RDS for PostgreSQL as your target when initiating a schema conversion project. When converting applicable objects, you will see an option to enable generative AI for conversion. To get started, visit the AWS DMS Schema Conversion User Guide and check out this blog post.

Read more


AWS DMS now supports Data Masking

AWS Database Migration Service (AWS DMS) now supports Data Masking, enabling customers to transform sensitive data at the column level during migration, helping them comply with data protection regulations such as GDPR. Using AWS DMS, you can now create copies of data that redact, at the column level, the information you need to protect.

AWS DMS Data Masking will automatically mask the portions of data you specify. Data Masking offers three transformation techniques: digit randomization, digit masking, and hashing. It's available for all endpoints supported by DMS Classic and DMS Serverless in version 3.5.4.
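As a rough sketch, masking is expressed through a table-mapping transformation rule like the one below. The overall rule structure follows standard DMS table mappings, but the rule-action value for digit masking is an assumption to verify in the DMS documentation; schema, table, and column names are placeholders.

```python
# Sketch: a transformation rule masking the digits of a card-number column.
import json

masking_rule = {
    "rules": [{
        "rule-type": "transformation",
        "rule-id": "1",
        "rule-name": "mask-card-number",
        "rule-target": "column",
        "object-locator": {
            "schema-name": "sales",
            "table-name": "orders",
            "column-name": "card_number",
        },
        "rule-action": "data-masking-digits-mask",  # assumed action name
    }]
}

# Pass json.dumps(masking_rule) as the TableMappings of a replication task.
table_mappings = json.dumps(masking_rule)
```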

To learn more about Data Masking with AWS DMS, please refer to the AWS DMS technical documentation.

Read more


AWS DMS now delivers improved performance for data validation

AWS Database Migration Service (AWS DMS) has enhanced data validation performance for database migrations, enabling customers to validate large datasets with significantly faster processing times.

This enhanced data validation is now available in version 3.5.4 of the replication engine for both full load and full load with CDC migration tasks. Currently, this enhancement supports migration paths from Oracle to PostgreSQL, SQL Server to PostgreSQL, Oracle to Oracle, and SQL Server to SQL Server, with additional migration paths planned for future releases.

To learn more about data validation performance improvements with AWS DMS, please refer to the AWS DMS Technical Documentation.

Read more


Announcing AWS DMS Serverless improved Oracle to S3 full load throughput

AWS Database Migration Service Serverless (AWS DMSS) now offers improved throughput for Oracle to Amazon S3 full load migrations. With this enhancement, you can now migrate data from Oracle databases to S3 up to two times faster than previously possible with AWS DMSS.

AWS DMSS Oracle to Amazon S3 Full Load performance enhancements will be applied automatically whenever AWS DMSS detects a full load migration between an Oracle database and Amazon S3. For detailed information on these improvements, refer to the AWS DMSS enhanced throughput documentation.

To learn more, see the AWS DMS Full Load for Oracle databases documentation. For AWS DMS regional availability, please refer to the AWS Region Table.

Read more


aws-deadline-cloud

AWS Deadline Cloud now supports GPU accelerated EC2 Instance Types

Today, AWS announces support for NVIDIA GPU accelerated instances in service-managed fleets (SMF) in AWS Deadline Cloud. AWS Deadline Cloud is a fully managed service that simplifies render management for teams creating computer-generated 2D/3D graphics and visual effects for films, TV shows, commercials, games, and industrial design.

Now you can use Deadline Cloud SMF to create auto-scaling fleets of GPU accelerated instances without having to set up, configure, or manage the worker infrastructure yourself. Deadline Cloud SMF can be set up in minutes to deploy NVIDIA GPU accelerated EC2 Instance Types (G4dn, G5, G6, Gr6, G6e) with NVIDIA GRID drivers and Windows Server 2022 or Linux (AL2023) operating systems. This expands the digital content creation software you can use within a fully managed render farm.

NVIDIA GPU accelerated instances are supported in service-managed fleets in all AWS Regions where Deadline Cloud is available.

For more information, please visit the Deadline Cloud product page, and see the Deadline Cloud pricing page for price details.

Read more


aws-directory-service

AWS Directory Service is available in the AWS Asia Pacific (Malaysia) Region

AWS Directory Service for Microsoft Active Directory, also known as AWS Managed Microsoft AD, and AD Connector are now available in the AWS Asia Pacific (Malaysia) Region.

Built on actual Microsoft Active Directory (AD), AWS Managed Microsoft AD enables you to migrate AD-aware applications while reducing the work of managing AD infrastructure in the AWS Cloud. You can use your Microsoft AD credentials to connect to AWS applications such as Amazon Relational Database Service (RDS) for SQL Server, Amazon RDS for PostgreSQL, and Amazon RDS for Oracle. You can keep your identities in your existing Microsoft AD or create and manage identities in your AWS managed directory.

AD Connector is a proxy that enables AWS applications to use your existing on-premises AD identities without requiring AD infrastructure in the AWS Cloud. You can also use AD Connector to join Amazon EC2 instances to your on-premises AD domain and manage these instances using your existing group policies.

Please see all AWS Regions where AWS Managed Microsoft AD and AD Connector are available. To learn more, see AWS Directory Service.
 

Read more


aws-elastic-beanstalk

AWS Elastic Beanstalk adds support for Ruby 3.3

AWS Elastic Beanstalk is a service that provides the ability to deploy and manage applications in AWS without worrying about the infrastructure that runs those applications. Ruby 3.3 on AL2023 adds support for a new parser, a new pure-Ruby just-in-time compiler, and several performance improvements. You can create Elastic Beanstalk environment(s) running Ruby 3.3 on AL2023 using any of the Elastic Beanstalk interfaces, such as the Elastic Beanstalk Console, the Elastic Beanstalk CLI, and the Elastic Beanstalk API.
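A minimal sketch via the API: solution stack names are versioned, so look up the current Ruby 3.3 stack rather than hard-coding it. Both calls are existing Elastic Beanstalk APIs; the application and environment names are placeholders and assume the application already exists.

```python
# Sketch: find the current Ruby 3.3 on AL2023 solution stack and launch
# an environment from it.
import boto3

eb = boto3.client("elasticbeanstalk")
stacks = eb.list_available_solution_stacks()["SolutionStacks"]
ruby33 = next(s for s in stacks if "Ruby 3.3" in s)

eb.create_environment(
    ApplicationName="my-app",        # existing Beanstalk application
    EnvironmentName="my-app-ruby33",
    SolutionStackName=ruby33,
)
```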

This platform is generally available in all commercial regions where Elastic Beanstalk is available, as well as the AWS GovCloud (US) Regions. For a complete list of regions and service offerings, see AWS Regions.

For more information about Ruby and Linux Platforms, see the Elastic Beanstalk developer guide. To learn more about Elastic Beanstalk, visit the Elastic Beanstalk product page.

Read more


AWS Elastic Beanstalk adds support for Node.js 22

AWS Elastic Beanstalk now supports building and deploying Node.js 22 applications on AL2023 Beanstalk environments.

AWS Elastic Beanstalk is a service that provides the ability to deploy and manage applications in AWS without worrying about the infrastructure that runs those applications. Node.js 22 on AL2023 provides updates to the V8 JavaScript engine, improved garbage collection and performance improvements. You can create Elastic Beanstalk environment(s) running Node.js 22 on AL2023 using any of the Elastic Beanstalk interfaces such as Elastic Beanstalk Console, Elastic Beanstalk CLI, and the Elastic Beanstalk API.

This platform is generally available in all commercial regions where Elastic Beanstalk is available, as well as the AWS GovCloud (US) Regions. For a complete list of regions and service offerings, see AWS Regions.

For more information about Node.js and Linux Platforms, see the Elastic Beanstalk developer guide. To learn more about Elastic Beanstalk, visit the Elastic Beanstalk product page.

Read more


AWS Elastic Beanstalk adds support for Windows Bundled Logs

AWS Elastic Beanstalk now provides Windows Bundled Logs, enhancing log collection capabilities for customers running applications on Windows platforms.

Customers can request full logs and Beanstalk will automatically collect and bundle the most important log files into a single downloadable zip file. This bundled log set can include logs for the HealthD service, IIS, Application Event, Elastic Beanstalk, and CloudFormation.

Elastic Beanstalk support for Windows Bundled Logs is available in all of the AWS Commercial Regions and AWS GovCloud (US) Regions that Elastic Beanstalk supports. For a complete list of regions and service offerings, see AWS Regions.

For more information about Elastic Beanstalk and Windows Bundled Logs, see the AWS Elastic Beanstalk Developer Guide.

Read more


aws-elastic-disaster-recovery

Amazon Application Recovery Controller zonal shift and zonal autoshift support Application Load Balancers

Amazon Application Recovery Controller (ARC) zonal shift and zonal autoshift have expanded their capabilities and now support Application Load Balancers (ALB) with cross-zone configuration enabled. ARC zonal shift helps you quickly recover an unhealthy application in an Availability Zone (AZ), and reduce the duration and severity of impact to the application due to events such as power outages and hardware or software failures. ARC zonal autoshift safely and automatically shifts your application’s traffic away from an AZ when AWS identifies a potential failure affecting that AZ.

All ALB customers with cross-zone enabled load balancers can now shift traffic away from an AZ in the event of a failure. Zonal shift works with ALB by blocking all traffic to targets in the impaired AZ and removing the zonal IP from DNS. You need to first enable your ALBs for zonal shift using the ALB console or API, and then trigger a zonal shift or enable autoshift via the ARC zonal shift console or API. Read this launch blog to see how zonal shift can be used with ALB.
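A minimal sketch of triggering a shift via the existing ARC zonal shift API (the load balancer ARN and AZ ID are placeholders):

```python
# Sketch: shift traffic away from an impaired AZ for a zonal-shift-enabled
# ALB. Shifts are temporary and expire automatically.
import boto3

boto3.client("arc-zonal-shift").start_zonal_shift(
    resourceIdentifier=(
        "arn:aws:elasticloadbalancing:us-east-1:111111111111:"
        "loadbalancer/app/my-alb/50dc6c495c0c9188"
    ),
    awayFrom="use1-az1",  # the impaired Availability Zone ID
    expiresIn="1h",
    comment="Shifting away from use1-az1 during AZ impairment",
)
```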

Zonal shift and zonal autoshift support for ALB with cross-zone configuration enabled is now available in all commercial AWS Regions and the AWS GovCloud (US) Regions.

There is no additional charge for using zonal shift or zonal autoshift. To get started, visit the product page or read the documentation.

Read more


aws-elemental-medialive

AWS announces Media Quality-Aware Resiliency for live streaming

Starting today, you can enable Media Quality-Aware Resiliency (MQAR), an integrated capability between Amazon CloudFront and AWS Media Services that provides dynamic, cross-region origin selection and failover based on a dynamically generated video quality score. Built for customers that need always-on ‘eyes-on-glass’ to deliver live events and 24/7 programming channels, MQAR automatically switches between regions in seconds to recover from video quality degradation in one of the regions. This is designed to help deliver a high quality of experience to viewers.

Previously, you could use a CloudFront origin group to failover between two AWS Elemental MediaPackage origins in different AWS Regions based on HTTP error codes. Now with MQAR, your live event streaming workflow has the resiliency to withstand video quality issues including black frames, frozen or dropped frames, or repeated frames. AWS Elemental MediaLive analyzes the video input delivered from the source and dynamically generates a quality score reflecting perceived changes in video quality. Subsequently, your CloudFront distribution continuously selects the MediaPackage origin that reports the highest quality score. You can create CloudWatch alerts to be notified of quality issues using the provided metrics for quality indicators.

To get started with MQAR, deploy a cross-region channel delivery using AWS Media Services and configure CloudFront to use MQAR in the origin group. CloudFormation support will be coming soon. There is no additional cost for enabling MQAR, standard pricing applies for CloudFront and AWS Media Services. To learn more about MQAR, refer to the launch blog and the CloudFront Developer Guide.

Read more


aws-elemental-mediapackage

AWS announces Media Quality-Aware Resiliency for live streaming

Starting today, you can enable Media Quality-Aware Resiliency (MQAR), an integrated capability between Amazon CloudFront and AWS Media Services that provides dynamic, cross-region origin selection and failover based on a dynamically generated video quality score. Built for customers that need always-on ‘eyes-on-glass’ to deliver live events and 24/7 programming channels, MQAR automatically switches between regions in seconds to recover from video quality degradation in one of the regions. This is designed to help deliver a high quality of experience to viewers.

Previously, you could use a CloudFront origin group to failover between two AWS Elemental MediaPackage origins in different AWS Regions based on HTTP error codes. Now with MQAR, your live event streaming workflow has the resiliency to withstand video quality issues including black frames, frozen or dropped frames, or repeated frames. AWS Elemental MediaLive analyzes the video input delivered from the source and dynamically generates a quality score reflecting perceived changes in video quality. Subsequently, your CloudFront distribution continuously selects the MediaPackage origin that reports the highest quality score. You can create CloudWatch alerts to be notified of quality issues using the provided metrics for quality indicators.

To get started with MQAR, deploy a cross-region channel delivery using AWS Media Services and configure CloudFront to use MQAR in the origin group. CloudFormation support will be coming soon. There is no additional cost for enabling MQAR, standard pricing applies for CloudFront and AWS Media Services. To learn more about MQAR, refer to the launch blog and the CloudFront Developer Guide.

Read more


aws-fault-injection-simulator

AWS Fault Injection Service now generates experiment reports

AWS Fault Injection Service (AWS FIS) now generates reports for experiments, reducing the time and effort to produce evidence of resilience testing. The report summarizes experiment actions and captures application response from a customer-provided Amazon CloudWatch Dashboard.

With AWS FIS, you can run fault injection experiments to create realistic failure conditions under which to practice your disaster recovery and failover tests. To provide evidence of this testing and your application’s recovery response, you can configure experiments to generate a report that you can download from the AWS FIS Console and that is automatically delivered to an Amazon S3 bucket of your choice. After the experiment completes, you can review the report to evaluate the impact of the experiment on your key application and resource metrics. Additionally, you can share the reports with stakeholders, including your compliance teams and auditors as evidence of required testing.
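As a sketch of the report configuration you would attach to an experiment template: the field names below (passed as experimentReportConfiguration to create_experiment_template or update_experiment_template) are assumptions to verify in the FIS API reference, and the bucket and dashboard ARN are placeholders.

```python
# Sketch: deliver experiment reports to S3 and capture a CloudWatch
# dashboard, including 10 minutes of context before and after the run.
report_configuration = {
    "outputs": {
        "s3Configuration": {
            "bucketName": "my-resilience-evidence",
            "prefix": "fis-reports/",
        }
    },
    "dataSources": {
        "cloudWatchDashboards": [{
            "dashboardIdentifier": (
                "arn:aws:cloudwatch::111111111111:dashboard/app-health"
            )
        }]
    },
    "preExperimentDuration": "PT10M",
    "postExperimentDuration": "PT10M",
}
```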

Experiment reports are generally available in all commercial AWS Regions where FIS is available. To get started, you can log into the AWS FIS Console, or you can use the FIS API, SDK, or AWS CLI. For detailed pricing information, please visit the FIS pricing page. To learn more, view the documentation.
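
As a sketch of the API route, the snippet below adds report generation to an existing experiment template with boto3. The experimentReportConfiguration shape shown here (an S3 output destination plus a CloudWatch dashboard data source) is an assumption based on the launch description; the template ID, bucket, and dashboard ARN are placeholders.

```python
import boto3

fis = boto3.client("fis")

# Assumed report-configuration shape: S3 destination for the report plus a
# customer-provided CloudWatch dashboard as the data source.
fis.update_experiment_template(
    id="EXT123abcExample",
    experimentReportConfiguration={
        "outputs": {
            "s3Configuration": {
                "bucketName": "my-resilience-evidence",
                "prefix": "fis-reports/",
            }
        },
        "dataSources": {
            "cloudWatchDashboards": [
                {"dashboardIdentifier": "arn:aws:cloudwatch::111122223333:dashboard/app-health"}
            ]
        },
    },
)
```
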

Read more


aws-firewall-manager

AWS Firewall Manager is now available in the AWS Asia Pacific (Malaysia) Region

AWS Firewall Manager is now available in the AWS Asia Pacific (Malaysia) region, enabling customers to create policies to manage their VPC Security Groups, VPC network access control lists (NACLs), and AWS WAF protections for applications running in this region. Support for other policy types will be available in the coming months. Firewall Manager is now available in a total of 32 AWS commercial regions, 2 GovCloud regions, and all Amazon CloudFront edge locations.

AWS Firewall Manager is a security management service that enables customers to centrally configure and manage firewall rules across their accounts and resources. Using AWS Firewall Manager, customers can manage AWS WAF rules, AWS Shield Advanced protections, AWS Network Firewall, Amazon Route 53 Resolver DNS Firewall, VPC security groups, and VPC network access control lists (NACLs) across their AWS Organizations. AWS Firewall Manager makes it easier for customers to ensure that all firewall rules are consistently enforced and compliant, even as new accounts and resources are created.

To get started, see the AWS Firewall Manager documentation for more details and the AWS Region Table for the list of regions where AWS Firewall Manager is currently available. To learn more about AWS Firewall Manager, its features, and its pricing, visit the AWS Firewall Manager website.
 

Read more


aws-glue

Amazon S3 Access Grants now integrate with AWS Glue

Amazon S3 Access Grants now integrate with AWS Glue for analytics, machine learning (ML), and application development workloads in AWS. S3 Access Grants map identities from your Identity Provider (IdP), such as Entra ID or Okta, or AWS Identity and Access Management (IAM) principals, to datasets stored in Amazon S3. This integration gives you the ability to manage S3 permissions for end users running jobs with Glue 5.0 or later, without the need to write and maintain bucket policies or individual IAM roles.

AWS Glue provides a data integration service that simplifies data exploration, preparation, and integration from multiple sources, including S3. Using S3 Access Grants, you can grant permissions to buckets or prefixes in S3 to users and groups in an existing corporate directory, or to IAM users and roles. When end users in the appropriate user groups access S3 using Glue ETL for Apache Spark, they will then automatically have the necessary permissions to read and write data. S3 Access Grants also automatically update S3 permissions as users are added and removed from user groups in the IdP.
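
For illustration, a minimal boto3 sketch of granting a directory group read/write access to an S3 prefix follows; the account ID, location ID, and group identifier are placeholders for your own S3 Access Grants setup.

```python
import boto3

s3control = boto3.client("s3control")

# Grant an IAM Identity Center directory group read/write access to a
# registered S3 location, scoped to one prefix.
s3control.create_access_grant(
    AccountId="111122223333",
    AccessGrantsLocationId="a1b2c3d4-example",  # registered S3 location
    AccessGrantsLocationConfiguration={"S3SubPrefix": "sales-data/*"},
    Grantee={
        "GranteeType": "DIRECTORY_GROUP",
        "GranteeIdentifier": "identity-center-group-id-example",
    },
    Permission="READWRITE",
)
```
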

Amazon S3 Access Grants support is available when using AWS Glue 5.0 and later, and is available in all commercial AWS Regions where AWS Glue 5.0 and AWS IAM Identity Center are available. For pricing details, visit Amazon S3 pricing and AWS Glue pricing. To learn more about S3 Access Grants, refer to the S3 User Guide.
 

Read more


AWS expands data connectivity for Amazon SageMaker Lakehouse and AWS Glue

Amazon SageMaker Lakehouse announces unified data connectivity capabilities to streamline the creation, management, and usage of connections to data sources across databases, data lakes, and enterprise applications. SageMaker Lakehouse unified data connectivity provides a connection configuration template, support for standard authentication methods like basic authentication and OAuth 2.0, connection testing, metadata retrieval, and data preview. Customers can create SageMaker Lakehouse connections through SageMaker Unified Studio (preview), the AWS Glue console, or custom-built applications using the AWS Glue APIs.

With SageMaker Lakehouse unified data connectivity, a data connection is configured once and can be reused by SageMaker Unified Studio, AWS Glue and Amazon Athena for use cases in data integration, data analytics and data science. You will gain confidence in the established connection by validating credentials with connection testing. With the ability to browse metadata, you can understand the structure and schema of the data source and identify relevant tables and fields. Lastly, the data preview capability supports mapping source fields to target schemas, identifying needed data transformation, and receiving immediate feedback on the source data queries.

SageMaker Lakehouse unified connectivity is available where Amazon SageMaker Lakehouse or AWS Glue is available. To get started, visit AWS Glue connection documentation or the Amazon SageMaker Lakehouse data connection documentation.

Read more


Introducing AWS Glue 5.0

Today, we are excited to announce the general availability of AWS Glue 5.0. With AWS Glue 5.0, you get improved performance, enhanced security, support for Amazon SageMaker Unified Studio and SageMaker Lakehouse, and more. AWS Glue 5.0 enables you to develop, run, and scale your data integration workloads and get insights faster.

AWS Glue is a serverless, scalable data integration service that makes it simple to discover, prepare, move, and integrate data from multiple sources. AWS Glue 5.0 upgrades the engines to Apache Spark 3.5.2, Python 3.11, and Java 17, with new performance and security improvements. Glue 5.0 updates open table format support to Apache Hudi 0.15.0, Apache Iceberg 1.6.1, and Delta Lake 3.2.0 so you can solve advanced use cases around performance, cost, governance, and privacy in your data lakes. AWS Glue 5.0 adds Spark native fine-grained access control with AWS Lake Formation so you can apply table, column, row, and cell level permissions on Amazon S3 data lakes. Finally, Glue 5.0 adds support for SageMaker Lakehouse to unify all your data across Amazon S3 data lakes and Amazon Redshift data warehouses.
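
Selecting the new runtime is a one-line change when defining a job. A minimal boto3 sketch follows; the job name, role ARN, and script location are placeholders.

```python
import boto3

glue = boto3.client("glue")

# Create a Spark ETL job on the Glue 5.0 runtime
# (Spark 3.5.2 / Python 3.11 / Java 17).
glue.create_job(
    Name="sales-etl",
    Role="arn:aws:iam::111122223333:role/GlueJobRole",
    Command={
        "Name": "glueetl",
        "ScriptLocation": "s3://my-bucket/scripts/sales_etl.py",
        "PythonVersion": "3",
    },
    GlueVersion="5.0",
    NumberOfWorkers=10,
    WorkerType="G.1X",
)
```
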

AWS Glue 5.0 is generally available today in US East (N. Virginia), US East (Ohio), US West (Oregon), Europe (Ireland), Europe (London), Europe (Stockholm), Europe (Frankfurt), Asia Pacific (Hong Kong), Asia Pacific (Seoul), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Canada (Central), and South America (São Paulo) regions.

To learn more, visit the AWS Glue product page and documentation.

Read more


Amazon SageMaker Lakehouse and Amazon Redshift support for zero-ETL integrations from eight applications

Amazon SageMaker Lakehouse and Amazon Redshift now support zero-ETL integrations from applications, automating the extraction and loading of data from eight applications, including Salesforce, SAP, ServiceNow, and Zendesk. As an open, unified, and secure lakehouse for your analytics and AI initiatives, Amazon SageMaker Lakehouse enhances these integrations to streamline your data management processes.

These zero-ETL integrations are fully managed by AWS and minimize the need to build ETL data pipelines. With this new zero-ETL integration, you can efficiently extract and load valuable data from your customer support, relationship management, and ERP applications into your data lake and data warehouse for analysis. Zero-ETL integration reduces users' operational burden and saves the weeks of engineering effort needed to design, build, and test data pipelines. By selecting a few settings in the no-code interface, you can quickly set up your zero-ETL integration to automatically ingest and continually maintain an up-to-date replica of your data in the data lake and data warehouse. Zero-ETL integrations help you focus on deriving insights from your application data, breaking down data silos in your organization and improving operational efficiency. You can now run enhanced analysis on your application data using Apache Spark and Amazon Redshift for analytics or machine learning, streamlining your data ingestion processes so you can focus on analysis and gaining insights.

Amazon SageMaker Lakehouse and Amazon Redshift support for zero-ETL integrations from eight applications is now generally available in the US East (N. Virginia), US East (Ohio), US West (Oregon), Asia Pacific (Hong Kong), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Europe (Frankfurt), Europe (Ireland), and Europe (Stockholm) Regions.

You can create and manage integrations using either the AWS Glue console, the AWS Command Line Interface (AWS CLI), or the AWS Glue APIs. To learn more, visit What is zero-ETL and What is AWS Glue.

Read more


Announcing Amazon S3 Metadata (Preview) – Easiest and fastest way to manage your metadata

Amazon S3 Metadata is the easiest and fastest way to help you instantly discover and understand your S3 data with automated, easily-queried metadata that updates in near real-time. This helps you to curate, identify, and use your S3 data for business analytics, real-time inference applications, and more. S3 Metadata supports object metadata, which includes system-defined details like size and the source of the object, and custom metadata, which allows you to use tags to annotate your objects with information like product SKU, transaction ID, or content rating.

S3 Metadata is designed to automatically capture metadata from objects as they are uploaded into a bucket, and to make that metadata queryable in a read-only table. As data in your bucket changes, S3 Metadata updates the table within minutes to reflect the latest changes. These metadata tables are stored in S3 Tables, the new S3 storage offering optimized for tabular data. S3 Tables integration with AWS Glue Data Catalog is in preview, allowing you to stream, query, and visualize data—including S3 Metadata tables—using AWS Analytics services such as Amazon Data Firehose, Athena, Redshift, EMR, and QuickSight. Additionally, S3 Metadata integrates with Amazon Bedrock, allowing for the annotation of AI-generated videos with metadata that specifies its AI origin, creation timestamp, and the specific model used for its generation.
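
As one illustration of the query path described above, the boto3 sketch below runs an Athena query against a metadata table. The database, table, and column names, and the results bucket, are placeholders that depend on your own S3 Tables and Glue Data Catalog setup.

```python
import boto3

athena = boto3.client("athena")

# Find the 100 most recently modified objects larger than 1 MiB.
# Table and column names here are illustrative placeholders.
athena.start_query_execution(
    QueryString="""
        SELECT key, size, last_modified_date
        FROM "s3_metadata"."my_bucket_metadata"
        WHERE size > 1048576
        ORDER BY last_modified_date DESC
        LIMIT 100
    """,
    ResultConfiguration={"OutputLocation": "s3://my-athena-results/"},
)
```
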

Amazon S3 Metadata is currently available in preview in the US East (N. Virginia), US East (Ohio), and US West (Oregon) Regions, and coming soon to additional Regions. For pricing details, visit the S3 pricing page. To learn more, visit the product page, documentation, and AWS News Blog.

Read more


AWS Glue Data Catalog now automates generating statistics for new tables

AWS Glue Data Catalog now automates generating statistics for new tables. These statistics are integrated with cost-based optimizer (CBO) from Amazon Redshift and Amazon Athena, resulting in improved query performance and potential cost savings.

Table statistics are used by a query engine, such as Amazon Redshift and Amazon Athena, to determine the most efficient way to execute a query. Previously, creating statistics for Apache Iceberg tables in AWS Glue Data Catalog required you to continuously monitor and update configurations for your tables. Now, AWS Glue Data Catalog lets you generate statistics automatically for new tables with a one-time catalog configuration. You can get started by selecting the default catalog in the Lake Formation console and enabling table statistics in the table optimization configuration tab. As new tables are created or existing tables are updated, statistics are generated using a sample of rows for all columns and are refreshed periodically. For Apache Iceberg tables, these statistics include the number of distinct values (NDVs). For other file formats like Parquet, additional statistics are collected, such as the number of nulls, maximum and minimum values, and average length. Amazon Redshift and Amazon Athena use the updated statistics to optimize queries, applying optimizations such as optimal join order or cost-based aggregation pushdown. The Glue Data Catalog console provides visibility into the updated statistics and statistics generation runs.

The support for automation of AWS Glue Data Catalog statistics is generally available in the following AWS Regions: US East (N. Virginia, Ohio), US West (N. California, Oregon), Europe (Ireland), and Asia Pacific (Tokyo). Read the blog post and visit the AWS Glue Catalog documentation to learn more.
 

Read more


Announcing generative AI troubleshooting for Apache Spark in AWS Glue (Preview)

AWS Glue announces generative AI troubleshooting for Apache Spark, a new capability that helps data engineers and scientists quickly identify and resolve issues in their Spark jobs. Spark troubleshooting uses machine learning and generative AI technologies to provide automated root cause analysis for Spark job issues, along with actionable recommendations to fix identified issues.

AWS Glue is a serverless, scalable data integration service that makes it easier to discover, prepare, and combine data for analytics, machine learning, and application development. With Spark troubleshooting, you can initiate automated analysis of failed jobs with a single click in the AWS Glue console. This feature provides root cause analysis and remediation steps for hard-to-diagnose Spark issues like memory errors, data skew problems, and resource not found exceptions. This helps you reduce downtime in critical data pipelines. Powered by Amazon Bedrock, Spark troubleshooting reduces debugging time from days to minutes.

The generative AI troubleshooting for Apache Spark preview is available for jobs running on AWS Glue 4.0, and in the following AWS Regions: US East (N. Virginia), US West (Oregon), Europe (Ireland), US East (Ohio), and more. To learn more, visit the AWS Glue website, read the Launch blog, or read the documentation.
 

Read more


Announcing generative AI upgrades for Apache Spark in AWS Glue (preview)

AWS Glue announces generative AI upgrades for Apache Spark, a new generative AI capability that enables data practitioners to quickly upgrade and modernize their existing Spark jobs. Powered by Amazon Bedrock, this feature automates the analysis and updating of Spark scripts and configurations, reducing the time and effort required for Spark upgrades from weeks to minutes.

AWS Glue is a serverless, scalable data integration service that makes it easier to discover, prepare, and combine data for analytics, machine learning, and application development. With Spark Upgrades, you can initiate automated upgrades with a single click in the AWS Glue console to modernize your Spark jobs from an older version to AWS Glue version 4.0. This feature analyzes your Python-based Spark jobs and generates upgrade plans detailing code changes and configuration modifications. It leverages generative AI to iteratively improve and validate the upgraded code by executing test runs as Glue jobs. Once validation is successful, you receive a detailed summary of all changes for review, enabling confident deployment of your upgraded Spark jobs. This automated approach reduces the complexity of Spark upgrades while maintaining the reliability of your data pipelines.

The generative AI upgrades for Apache Spark preview is available for AWS Glue in the following AWS Regions: US East (Ohio), US East (N. Virginia), US West (Oregon), Asia Pacific (Tokyo), and Asia Pacific (Sydney). To learn more, visit the AWS Glue website, read the Launch blog, or read the documentation.
 

Read more


AWS Glue Data Catalog now supports Apache Iceberg automatic table optimization through Amazon VPC

AWS Glue Data Catalog now supports automatic optimization of Apache Iceberg tables that can be only accessed from a specific Amazon Virtual Private Cloud (VPC) environment. You can enable automatic optimization by providing a VPC configuration to optimize storage and improve query performance while keeping your tables secure.

AWS Glue Data Catalog supports compaction, snapshot retention and unreferenced file management that help you reduce metadata overhead, control storage costs and improve query performance. Customers who have governance and security configurations that require an Amazon S3 bucket to reside within a specific VPC can now use it with Glue Catalog. This gives you broader capabilities for automatic management of your Apache Iceberg data, regardless of where it's stored on Amazon S3.

Automatic optimization for Iceberg tables through Amazon VPC is available in 13 AWS regions US East (N. Virginia, Ohio), US West (Oregon), Europe (Ireland, London, Frankfurt, Stockholm), Asia Pacific (Tokyo, Seoul, Mumbai, Singapore, Sydney), South America (São Paulo). Customers can enable this through the AWS Console, AWS CLI, or AWS SDKs.

To get started, you can now provide the Glue network connection as an additional configuration along with optimization settings such as default retention period and days to keep unreferenced files. The AWS Glue Data Catalog will use the VPC information in the Glue connection to access Amazon S3 buckets and optimize Apache Iceberg tables.
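
As a sketch of that configuration, the boto3 snippet below enables snapshot retention for an Iceberg table whose S3 bucket is reachable only through a VPC. The vpcConfiguration field shape is an assumption based on this launch, and the connection name, role, and table names are placeholders.

```python
import boto3

glue = boto3.client("glue")

# Enable snapshot retention for an Iceberg table, routing S3 access through
# a Glue network connection in your VPC (field names assumed from launch).
glue.create_table_optimizer(
    CatalogId="111122223333",
    DatabaseName="lake_db",
    TableName="events_iceberg",
    Type="retention",
    TableOptimizerConfiguration={
        "roleArn": "arn:aws:iam::111122223333:role/GlueOptimizerRole",
        "enabled": True,
        "vpcConfiguration": {"glueConnectionName": "my-vpc-connection"},
        "retentionConfiguration": {
            "icebergConfiguration": {
                "snapshotRetentionPeriodInDays": 7,
                "numberOfSnapshotsToRetain": 3,
                "cleanExpiredFiles": True,
            }
        },
    },
)
```
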
To learn more, read the blog, and visit the AWS Glue Data Catalog documentation.
 

Read more


AWS Glue expands connectivity to 19 native connectors for Enterprise applications

AWS Glue announces 19 new connectors for enterprise applications, expanding its connectivity portfolio. Now, customers can use AWS Glue native connectors to ingest data from Facebook Ads, Google Ads, Google Analytics 4, Google Sheets, HubSpot, Instagram Ads, Intercom, Jira Cloud, Marketo, Oracle NetSuite, SAP OData, Salesforce Marketing Cloud, Salesforce Marketing Cloud Account Engagement, ServiceNow, Slack, Snapchat Ads, Stripe, Zendesk, and Zoho CRM.

As enterprises increasingly rely on data-driven decisions, they are looking for services that make it easier to integrate data from various enterprise applications. With these 19 new connectors, customers can easily establish a connection to their enterprise applications using the AWS console or AWS Glue APIs, without the need to learn application-specific APIs. These connectors are scalable and performant with the AWS Glue Spark engine, and support standard authorization and authentication methods like OAuth 2.0. With these connectors, customers can test connections, validate their connection credentials, browse metadata, and preview data.

AWS Glue native connectors to Facebook Ads, Google Ads, Google Analytics 4, Google Sheets, HubSpot, Instagram Ads, Intercom, Jira Cloud, Marketo, Oracle NetSuite, SAP OData, Salesforce Marketing Cloud, Salesforce Marketing Cloud Account Engagement, ServiceNow, Slack, Snapchat Ads, Stripe, Zendesk, and Zoho CRM are available in all AWS commercial regions.

To get started, create new AWS Glue connections with these connectors and use them as a source in AWS Glue Studio. To learn more, visit the AWS Glue documentation for connectors.
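
For illustration, a boto3 sketch of creating one such connection follows. The connection-type string and property names are assumptions (the exact enum values for these connectors are documented per connector), and the instance URL and secret ARN are placeholders.

```python
import boto3

glue = boto3.client("glue")

# Create a connection to a SaaS source using OAuth 2.0 credentials stored
# in Secrets Manager; type and property names assumed for illustration.
glue.create_connection(
    ConnectionInput={
        "Name": "zendesk-source",
        "ConnectionType": "ZENDESK",  # assumed enum value for this connector
        "ConnectionProperties": {"INSTANCE_URL": "https://example.zendesk.com"},
        "AuthenticationConfiguration": {
            "AuthenticationType": "OAUTH2",
            "SecretArn": "arn:aws:secretsmanager:us-east-1:111122223333:secret:zendesk-oauth",
        },
    }
)
```
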

Read more


AWS Glue is now available in Asia Pacific (Malaysia)

We are happy to announce that AWS Glue, a serverless data integration service, is now available in the AWS Asia Pacific (Malaysia) Region.

AWS Glue is a serverless data integration service that makes it simple to discover, prepare, and combine data for analytics, machine learning, and application development. AWS Glue provides both visual and code-based interfaces to make data integration simpler so you can analyze your data and put it to use in minutes instead of months.

To learn more, visit the AWS Glue product page and our documentation. For AWS Glue region availability, please see the AWS Region table.
 

Read more


AWS Glue Data Catalog now supports scheduled generation of column level statistics

AWS Glue Data Catalog now supports the scheduled generation of column-level statistics for Apache Iceberg tables and file formats such as Parquet, JSON, CSV, XML, ORC, and ION. With this launch, you can simplify and automate the generation of statistics by creating a recurring schedule in the Glue Data Catalog. These statistics are integrated with the cost-based optimizer (CBO) from Amazon Redshift Spectrum and Amazon Athena, resulting in improved query performance and potential cost savings.

Previously, to set up a recurring statistics generation schedule, you had to call AWS services using a combination of AWS Lambda and Amazon EventBridge Scheduler. With this new feature, you can now provide the recurring schedule as an additional configuration to the Glue Data Catalog, along with a sampling percentage. For each scheduled run, the number of distinct values (NDVs) is collected for Apache Iceberg tables, and additional statistics such as the number of nulls, maximum, minimum, and average length are collected for other file formats. As the statistics are updated, Amazon Redshift and Amazon Athena use them to optimize queries, applying optimizations such as optimal join order or cost-based aggregation pushdown. You have visibility into the status and timing of each statistics generation run, as well as the updated statistics values.

To get started, you can schedule statistics generation using the AWS Glue Data Catalog Console or AWS Glue APIs. The support for scheduled generation of AWS Glue Catalog statistics is generally available in all regions where Amazon EventBridge Scheduler is available. Visit AWS Glue Catalog documentation to learn more.
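
As a sketch of the API route, the snippet below registers a nightly statistics schedule for one table. The operation and parameter names are assumptions based on the Glue SDK at launch; the database, table, and role are placeholders.

```python
import boto3

glue = boto3.client("glue")

# Assumed API: register a recurring column-statistics run for a table,
# sampling 25% of rows, nightly at 02:00 UTC.
glue.create_column_statistics_task_settings(
    DatabaseName="sales_db",
    TableName="orders",
    Role="arn:aws:iam::111122223333:role/GlueStatsRole",
    Schedule="cron(0 2 * * ? *)",
    SampleSize=25.0,
)
```
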

Read more


aws-govcloud-us

Amazon Bedrock Guardrails supports multimodal toxicity detection for image content (Preview)

Organizations are increasingly using applications with multimodal data to drive business value, improve decision-making, and enhance customer experiences. Amazon Bedrock Guardrails now supports multimodal toxicity detection for image content, enabling organizations to apply content filters to images. This new capability with Guardrails, now in public preview, removes the heavy lifting required for customers to build their own safeguards for image data, or to spend cycles on manual evaluation that can be error-prone and tedious.

Bedrock Guardrails helps customers build and scale their generative AI applications responsibly for a wide range of use cases across industry verticals including healthcare, manufacturing, financial services, media and advertising, transportation, marketing, education, and much more. With this new capability, Amazon Bedrock Guardrails offers a comprehensive solution, enabling the detection and filtration of undesirable and potentially harmful image content while retaining safe and relevant visuals. Customers can now use content filters for both text and image data in a single solution with configurable thresholds to detect and filter undesirable content across categories such as hate, insults, sexual, and violence, and build generative AI applications based on their responsible AI policies.
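
As a sketch of a text-plus-image content filter, the boto3 snippet below creates a guardrail with one filter applied to both modalities. The modality field names are assumptions based on this launch; filter categories and strengths follow the announcement.

```python
import boto3

bedrock = boto3.client("bedrock")

# Create a guardrail whose violence filter inspects both text and images;
# inputModalities/outputModalities field names assumed from launch.
bedrock.create_guardrail(
    name="multimodal-content-guardrail",
    blockedInputMessaging="This request was blocked by content policy.",
    blockedOutputsMessaging="This response was blocked by content policy.",
    contentPolicyConfig={
        "filtersConfig": [
            {
                "type": "VIOLENCE",
                "inputStrength": "HIGH",
                "outputStrength": "HIGH",
                "inputModalities": ["TEXT", "IMAGE"],
                "outputModalities": ["TEXT", "IMAGE"],
            }
        ]
    },
)
```
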

This new capability in preview is available with all foundation models (FMs) on Amazon Bedrock that support images, including fine-tuned FMs, in 11 AWS Regions globally: US East (N. Virginia), US East (Ohio), US West (Oregon), Europe (Ireland), Europe (Frankfurt), Europe (London), Asia Pacific (Seoul), Asia Pacific (Singapore), Asia Pacific (Tokyo), Asia Pacific (Mumbai), and AWS GovCloud (US-West).

To learn more, visit the Amazon Bedrock Guardrails product page, read the News blog, and see the documentation.

Read more


Amazon Bedrock Knowledge Bases now processes multimodal data

Amazon Bedrock Knowledge Bases now enables developers to build generative AI applications that can analyze and leverage insights from both textual and visual data, such as images, charts, diagrams, and tables. Bedrock Knowledge Bases offers an end-to-end managed Retrieval-Augmented Generation (RAG) workflow that enables customers to create highly accurate, low-latency, secure, and custom generative AI applications by incorporating contextual information from their own data sources. With this launch, Bedrock Knowledge Bases extracts content from both text and visual data, generates semantic embeddings using the selected embedding model, and stores them in the chosen vector store. This enables users to retrieve and generate answers to questions derived not only from text but also from visual data. Additionally, retrieved results now include source attribution for visual data, enhancing transparency and building trust in the generated outputs.

To get started, customers can choose between Amazon Bedrock Data Automation, a managed service that automatically extracts content from multimodal data (currently in preview), and FMs such as Claude 3.5 Sonnet or Claude 3 Haiku, with the flexibility to customize the default prompt.

Multimodal data processing with Bedrock Data Automation is available in the US West (Oregon) region in preview. FM-based parsing is supported in all regions where Bedrock Knowledge Bases is available. For details on pricing for using Bedrock Data Automation or FM as a parser, please refer to the pricing page.

To learn more, visit Amazon Bedrock Knowledge Bases product documentation.

Read more


Amazon Web Services announces declarative policies

Today, AWS announces the general availability of declarative policies, a new management policy type within AWS Organizations. These policies simplify the way customers enforce durable intent, such as baseline configuration for AWS services within their organization. For example, customers can configure EC2 to allow instance launches using AMIs vended by specific providers and block public access in their VPC with a few simple clicks or commands for their entire organization using declarative policies.

Declarative policies are designed to prevent actions that are non-compliant with the policy. The configuration defined in the declarative policy is maintained even when services add new APIs or features, or when customers add new principals or accounts to their organization. With declarative policies, governance teams have access to the account status report, which provides insight into the current configuration for an AWS service across their organization. This helps them assess readiness to enforce configuration at scale. Administrators can provide additional transparency to end users by configuring custom error messages to redirect them to internal wikis or ticketing systems through declarative policies.

To get started, navigate to the AWS Organizations console to create and attach declarative policies. You can also use AWS Control Tower, the AWS CLI, or CloudFormation templates to configure these policies. Declarative policies today support EC2, EBS, and VPC configurations, with support for other services coming soon. To learn more, see the documentation and blog post.
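
For illustration, a boto3 sketch of creating and attaching a declarative policy follows. The DECLARATIVE_POLICY_EC2 policy type reflects this launch, but the policy-document grammar below is an assumption; consult the Organizations documentation for the authoritative schema. The account ID and root ID are placeholders.

```python
import boto3
import json

orgs = boto3.client("organizations")

# Create a declarative policy that restricts which AMI providers are
# allowed across the organization (policy body schema assumed).
policy = orgs.create_policy(
    Name="restrict-ami-providers",
    Description="Only allow AMIs from Amazon and our security account",
    Type="DECLARATIVE_POLICY_EC2",
    Content=json.dumps({
        "ec2_attributes": {
            "allowed_images_settings": {
                "state": {"@@assign": "enabled"},
                "image_criteria": {
                    "criteria_1": {
                        "allowed_image_providers": {
                            "@@assign": ["amazon", "111122223333"]
                        }
                    }
                },
            }
        }
    }),
)

# Attach the policy at the organization root (placeholder root ID).
orgs.attach_policy(
    PolicyId=policy["Policy"]["PolicySummary"]["Id"],
    TargetId="r-examplerootid",
)
```
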

Read more


Amazon S3 adds new default data integrity protections

Amazon S3 updates the default behavior of object upload requests with new data integrity protections that build upon S3’s existing durability posture. The latest AWS SDKs now automatically calculate CRC-based checksums for uploads as data is transmitted over the network. S3 independently verifies these checksums and accepts objects after confirming that data integrity was maintained in transit over the public internet. Additionally, S3 now stores a CRC-based whole-object checksum in object metadata, even for multipart uploads, which helps you to verify the integrity of an object stored in S3 at any time.

S3 has always validated the integrity of object uploads from the S3 API to storage by calculating MD5 checksums, and has allowed customers to provide their own pre-calculated MD5 checksums for integrity validation. S3 also supports five additional checksum algorithms, CRC64NVME, CRC32, CRC32C, SHA-1, and SHA-256, for integrity validations on upload and download. Using checksums for data validation is a best practice for data durability, and this new default behavior adds additional data integrity protections with no changes to your applications and at no additional cost.

Default checksum protections are rolling out across all AWS Regions in the next few weeks. To get started, you can use the AWS Management Console or the latest AWS SDKs to upload objects. To learn more about checksums in S3, visit the AWS News Blog and the S3 User Guide.
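
For illustration, with a current SDK the new protections require no checksum-specific code on upload; the sketch below also retrieves the stored whole-object checksum to re-verify integrity later. Bucket and key names are placeholders.

```python
import boto3

s3 = boto3.client("s3")

# With a recent SDK, a CRC-based checksum is calculated client-side as the
# data is transmitted and verified by S3 on receipt; no extra code needed.
s3.put_object(Bucket="my-bucket", Key="reports/q4.csv", Body=b"...data...")

# Later, read back the stored whole-object checksum from object metadata.
attrs = s3.get_object_attributes(
    Bucket="my-bucket",
    Key="reports/q4.csv",
    ObjectAttributes=["Checksum"],
)
print(attrs["Checksum"])
```
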

Read more


Storage Browser for Amazon S3 is now generally available

Amazon S3 is announcing the general availability of Storage Browser for S3, an open source component that you can add to your web applications to provide your end users with a simple interface for data stored in S3. With Storage Browser for S3, you can provide authorized end users, such as customers, partners, and employees, with access to easily browse, download, and upload data in S3 directly from your own applications. Storage Browser for S3 is available in the AWS Amplify React and JavaScript client libraries.

With the general availability of Storage Browser for S3, your end users can now search for their data based on file name and can copy and delete data they have access to. Additionally, Storage Browser for S3 now automatically calculates checksums of the data your end users upload and blocks requests that do not pass these durability checks.

We welcome your contributions and feedback on our roadmap, which outlines the plan for adding new capabilities to Storage Browser for S3. Storage Browser for S3 is backed by AWS Support, which means customers with AWS Business and Enterprise Support plans get 24/7 access to cloud support engineers. To learn more and get started, visit the AWS News Blog and the UI documentation.
 

Read more


Amazon CloudWatch Container Insights launches enhanced observability for Amazon ECS

Amazon CloudWatch Container Insights introduces enhanced observability for Amazon Elastic Container Service (ECS) running on Amazon EC2 and AWS Fargate, with out-of-the-box detailed metrics from the cluster level down to the container level, to deliver faster problem isolation and troubleshooting.

Enhanced observability enables customers to visually drill up and down across various container layers and directly spot issues like memory leaks in individual containers, reducing mean time to resolution. With enhanced observability, customers can now view their clusters, services, tasks, or containers sorted by resource consumption, quickly identify anomalies, and mitigate risks proactively before end user experience is impacted. Using Container Insights’ new landing page, customers can now easily understand the overall health and performance of clusters across multiple accounts, identify the ones operating under high utilization, and pinpoint the root cause by browsing directly to the related detailed dashboard views, saving time and effort.

You can get started with enhanced observability at the cluster level or account level by selecting the “Enhanced” radio button on the Amazon ECS console, or through the AWS CLI, CloudFormation, and the CDK. You can also collect instance-level metrics from EC2 by launching the CloudWatch agent as a daemon service on your Container Insights-enabled clusters.
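
As a sketch of the CLI/SDK route, the boto3 snippet below flips an existing cluster to the enhanced setting; the "enhanced" value is an assumption based on this launch, and the cluster name is a placeholder.

```python
import boto3

ecs = boto3.client("ecs")

# Switch Container Insights from standard to enhanced observability for
# one cluster ("enhanced" setting value assumed from launch).
ecs.update_cluster_settings(
    cluster="my-cluster",
    settings=[{"name": "containerInsights", "value": "enhanced"}],
)
```
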

Container Insights is available in all public AWS Regions, including the AWS GovCloud (US) Regions, China (Beijing, operated by Sinnet), and China (Ningxia, operated by NWCD). Container Insights with enhanced observability for ECS comes with a flat metric pricing – see pricing page for details. For further information, visit the Container Insights documentation.

Read more


Amazon Bedrock Knowledge Bases now provides auto-generated query filters for improved retrieval

Amazon Bedrock Knowledge Bases offers fully-managed, end-to-end Retrieval-Augmented Generation (RAG) workflows to create highly accurate, low latency, secure, and custom GenAI applications by incorporating contextual information from your data sources. Today, we are launching automatically generated query filters, which improve retrieval accuracy by ensuring the documents retrieved are relevant to the query. This feature extends the existing capability of manual metadata filtering by allowing customers to narrow down search results without the need to manually construct complex filter expressions.

RAG applications process user queries by searching across a large set of documents. However, in many situations you may need to retrieve documents with specific attributes or content. With automatically generated query filters enabled, you can receive search results filtered on document metadata without the need to manually construct complex filter expressions. For example, for a query like "How to file a claim in Washington", the state "Washington" will automatically be applied as a filter so that only documents pertaining to that state are retrieved.

The capability is available in US East (N. Virginia), US West (Oregon), Asia Pacific (Tokyo), Asia Pacific (Seoul), Europe (Frankfurt), Europe (Zurich) and AWS GovCloud (US-West). To learn more, visit the documentation.

Read more


Amazon Bedrock Knowledge Bases now supports streaming responses

Amazon Bedrock Knowledge Bases offers fully-managed, end-to-end Retrieval-Augmented Generation (RAG) workflows to create highly accurate, low latency, secure, and custom GenAI applications by incorporating contextual information from your company's data sources. Today, we are announcing the support of RetrieveAndGenerateStream API in Bedrock Knowledge Bases. This new streaming API allows Bedrock Knowledge Base customers to receive the response as it is being generated by the Large Language Model (LLM), rather than waiting for the complete response.

A RAG workflow involves several steps, including querying the data store, gathering relevant context, and then sending the query to an LLM for response summarization. This final step of response generation could take a few seconds, depending on the latency of the underlying model used in response generation. To reduce this latency for latency-sensitive applications, we're now offering the RetrieveAndGenerateStream API, which provides the response as a stream as it is being generated by the model. This reduces the latency to first response, providing users with a more seamless and responsive experience when interacting with Bedrock Knowledge Bases.
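
For illustration, a minimal boto3 sketch of consuming the streaming API follows. The knowledge base ID and model ARN are placeholders, and the event shape in the loop is an assumption; check the API reference for the full event types.

```python
import boto3

client = boto3.client("bedrock-agent-runtime")

# Ask a question against a knowledge base and stream the answer back.
response = client.retrieve_and_generate_stream(
    input={"text": "How do I file a claim?"},
    retrieveAndGenerateConfiguration={
        "type": "KNOWLEDGE_BASE",
        "knowledgeBaseConfiguration": {
            "knowledgeBaseId": "KB123EXAMPLE",
            "modelArn": "arn:aws:bedrock:us-east-1::foundation-model/anthropic.claude-3-5-sonnet-20240620-v1:0",
        },
    },
)

# Print generated text chunks as they arrive instead of waiting for the
# complete response (event shape assumed).
for event in response["stream"]:
    if "output" in event:
        print(event["output"]["text"], end="", flush=True)
```
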

This new capability is currently supported in all existing Amazon Bedrock Knowledge Base regions. To learn more, visit the documentation.
 

Read more


Amazon EC2 introduces Allowed AMIs to enhance AMI governance

Amazon EC2 introduces Allowed AMIs, a new account-wide setting that enables you to limit the discovery and use of Amazon Machine Images (AMIs) within your AWS accounts. You can now simply specify the AMI owner accounts or AMI owner aliases permitted within your account, and only AMIs from these owners will be visible and available to you to launch EC2 instances.

Prior to today, you could use any AMI explicitly shared with your account or any public AMI, regardless of its origin or trustworthiness, putting you at risk of accidentally using an AMI that didn’t meet your organization's compliance requirements. Now with Allowed AMIs, your administrators can specify the accounts or owner aliases whose AMIs are permitted for discovery and use within your AWS environment. This streamlined approach provides guardrails to reduce the risk of inadvertently using non-compliant or unauthorized AMIs. Allowed AMIs also supports an audit-mode functionality to identify EC2 instances launched using AMIs not permitted by this setting, helping you identify non-compliant instances before the setting is applied. You can apply this setting across AWS Organizations and Organizational Units using Declarative Policies, allowing you to manage and enforce this setting at scale.

The Allowed AMIs setting applies only to public AMIs and AMIs explicitly shared with your AWS accounts. By default, this setting is disabled for all AWS accounts. You can enable it by using the AWS CLI, SDKs, or Console. To learn more, please visit our documentation.
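
As a sketch of the audit-first workflow described above, the boto3 snippet below sets the image criteria and enables audit mode. The operation names are assumptions based on the EC2 API names from this launch; the account ID is a placeholder.

```python
import boto3

ec2 = boto3.client("ec2")

# Allow only Amazon-owned AMIs and AMIs from one trusted account
# (API names assumed from launch).
ec2.replace_image_criteria_in_allowed_images_settings(
    ImageCriteria=[{"ImageProviders": ["amazon", "111122223333"]}]
)

# Start in audit mode to surface non-compliant instances before enforcing.
ec2.enable_allowed_images_settings(AllowedImagesSettingsState="audit-mode")
```
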

Read more


AWS Control Tower launches managed controls using declarative policies

Today, we are excited to announce the general availability of managed, preventive controls implemented using declarative policies in AWS Control Tower. These policies are a set of new optional controls that help you consistently enforce the desired configuration for a service. For example, customers can deploy a declarative, policy-based preventive control that disallows public sharing of Amazon Machine Images (AMIs). Declarative policies help you ensure that the controls configured are always enforced regardless of the introduction of new APIs, or when new principals or accounts are added.

Today, AWS Control Tower is releasing declarative, policy-based preventive controls for Amazon Elastic Compute Cloud (Amazon EC2) service, Amazon Virtual Private Cloud (Amazon VPC) and Amazon Elastic Block Store (Amazon EBS). These controls help you achieve control objectives such as limit network access, enforce least privilege, and manage vulnerabilities. AWS Control Tower’s new declarative policy-based preventive controls complement AWS Control Tower’s existing control capabilities, enabling you to disallow actions that lead to policy violations.

The combination of preventive, proactive, and detective controls helps you monitor whether your multi-account AWS environment is secure and managed in accordance with best practices. For a full list of AWS regions where AWS Control Tower is available, see AWS Region Table.

Read more


Amazon Bedrock Agents now supports custom orchestration

Amazon Bedrock Agents now supports custom orchestration, allowing developers to control how agents handle multistep tasks, make decisions, and execute complex workflows. This capability enables developers to define custom orchestration logic for their agents using AWS Lambda, giving them the flexibility to tailor an agent’s behavior to fit specific use cases.

With Custom Orchestration, developers can implement any customized orchestration strategy for their agents, including Plan and Solve, Tree of Thought, and Standard Operating Procedures (SOP). This ensures agents perform tasks in the desired order, manage states effectively, and integrate seamlessly with external tools. Whether handling complex business processes or automating intricate workflows, custom orchestration offers greater control, accuracy, and efficiency to meet business objectives.

Custom Orchestration is now available in all AWS Regions where Amazon Bedrock Agents are supported. To learn more, visit the documentation.
 

Read more


Amazon EBS announces Time-based Copy for EBS Snapshots

Today, Amazon Elastic Block Store (Amazon EBS), a high-performance block storage service, announces the general availability of Time-based Copy. This new feature helps you meet your business and compliance requirements by ensuring that your EBS Snapshots are copied within and across AWS Regions within a specified timeframe.

Customers use EBS Snapshots to back up their EBS volumes, and copy them across multiple AWS Regions and accounts, for disaster recovery, data migration and compliance purposes. Time-based Copy gives you predictability when copying your snapshots across Regions. With this feature, you can specify a desired completion duration, ranging from 15 minutes to 48 hours, for individual copy requests, ensuring that your EBS Snapshots meet their duration requirements or Recovery Point Objectives (RPOs). You can now also monitor your Copy operations via EventBridge and the new SnapshotCopyBytesTransferred CloudWatch metric, available by default at a 1-minute frequency at no additional charge.

Amazon EBS Time-based Copy is available in all AWS commercial Regions and the AWS GovCloud (US) Regions, through the AWS Console, AWS Command Line Interface (CLI), and AWS SDKs. For pricing information, please visit the EBS pricing page. To learn more, see the technical documentation for Time-based Copy for Snapshots.
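
For illustration, a boto3 sketch of a cross-region copy with a completion target follows. The CompletionDurationMinutes parameter name is an assumption based on this launch, and the snapshot ID is a placeholder; note that the copy is requested from the destination Region.

```python
import boto3

# CopySnapshot is called in the destination Region.
ec2 = boto3.client("ec2", region_name="us-west-2")

# Request a cross-region snapshot copy with a 4-hour completion target
# (parameter name assumed from launch).
ec2.copy_snapshot(
    SourceRegion="us-east-1",
    SourceSnapshotId="snap-0123456789abcdef0",
    CompletionDurationMinutes=240,
    Description="DR copy with 4-hour RPO target",
)
```
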
 

Read more


Amazon Redshift multi-data warehouse writes through data sharing is now generally available

AWS announces the general availability of Amazon Redshift multi-data warehouse writes through data sharing. You can now start writing to Amazon Redshift databases from multiple Amazon Redshift data warehouses in just a few clicks. The written data is available to all Amazon Redshift warehouses as soon as it is committed. This allows your teams to flexibly scale compute by adding warehouses of different types and sizes based on their write workloads’ price-performance needs, isolate compute to more easily meet your workload performance requirements, and easily and securely collaborate with other teams.

With Amazon Redshift multi-data warehouse writes through data sharing, you can more easily keep extract, transform, and load (ETL) jobs predictable by splitting workloads between multiple warehouses, helping you meet your workload performance requirements with less time and effort. You can track usage and control costs as each team or application can write using its own warehouse, regardless of where the data is stored. You can use different types of RA3 and Serverless warehouses across different sizes to meet each individual workload's price-performance needs. Your data is immediately available across AWS accounts and regions once committed, enabling better collaboration across your organization.

Amazon Redshift multi-warehouse writes through data sharing is available for RA3 provisioned clusters and Serverless workgroups in all AWS regions where Amazon Redshift data sharing is supported. To get started with Amazon Redshift multi-warehouse writes through data sharing, visit the documentation page.
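
For illustration, once a datashare with write access is in place, a consumer warehouse can write to the shared database like any other. The boto3 sketch below issues such a write through the Redshift Data API; the workgroup, database, and table names are placeholders.

```python
import boto3

rsd = boto3.client("redshift-data")

# Run an INSERT from a *consumer* Serverless workgroup into a database
# created from the datashare; names are placeholders.
rsd.execute_statement(
    WorkgroupName="etl-serverless-wg",
    Database="sales_from_share",
    Sql="INSERT INTO public.orders SELECT * FROM staging.new_orders;",
)
```
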

Read more


Amazon RDS for SQL Server Supports Minor Versions in November 2024

New minor versions of Microsoft SQL Server are now available on Amazon RDS for SQL Server, providing performance enhancements and security fixes. Amazon RDS for SQL Server now supports these latest minor versions of SQL Server 2016, 2017, 2019 and 2022 across the Express, Web, Standard, and Enterprise editions.

We encourage you to upgrade your Amazon RDS for SQL Server database instances at your convenience. You can upgrade with just a few clicks in the Amazon RDS Management Console or by using the AWS CLI. Learn more about upgrading your database instances from the Amazon RDS User Guide. The new minor versions include:
 

  • SQL Server 2016 GDR for SP3 - 13.0.6455.2
  • SQL Server 2017 CU31 GDR - 14.0.3485.1
  • SQL Server 2019 CU29 GDR - 15.0.4410.1
  • SQL Server 2022 CU16 - 16.0.4165.4
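
For illustration, a minimal boto3 sketch of applying one of the minor versions listed above to an existing SQL Server 2022 instance follows; the instance identifier is a placeholder.

```python
import boto3

rds = boto3.client("rds")

# Upgrade an existing instance to SQL Server 2022 CU16.
rds.modify_db_instance(
    DBInstanceIdentifier="my-sqlserver-db",
    EngineVersion="16.0.4165.4",
    ApplyImmediately=True,  # or defer to the next maintenance window
)
```
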


These minor versions are available in all AWS commercial regions where Amazon RDS for SQL Server databases are available, including the AWS GovCloud (US) Regions.

Amazon RDS for SQL Server makes it simple to set up, operate, and scale SQL Server deployments in the cloud. See Amazon RDS for SQL Server Pricing for pricing details and regional availability.

Read more


AWS Network Firewall expands the list of supported protocols and keywords in firewall rules

Today, we are excited to announce support for new protocols in AWS Network Firewall so you can protect your Amazon VPCs using application-specific inspection rules. With this launch, AWS Network Firewall will detect protocols like HTTP2, QUIC, and PostgreSQL so you can apply firewall inspection rules to these protocols. You can also use new rule keywords in TLS, SNMP, DHCP, and Kerberos rules to apply granular security controls to your stateful inspection rules.

AWS Network Firewall is a managed firewall service that makes it easy to deploy essential network protections for all your Amazon VPCs. Its flexible rules engine lets you define firewall rules that give you fine-grained control over network traffic. You can also enable AWS Managed Rules for intrusion detection and prevention signatures that protect against threats such as botnets, scanners, web attacks, phishing, and emerging events.

You can create AWS Network Firewall rules using Amazon VPC console, AWS CLI or the Network Firewall API. To see which regions AWS Network Firewall is available in, visit the AWS Region Table. For more information, please see the AWS Network Firewall product page and the service documentation.
 

Read more


Amazon S3 now supports enforcement of conditional write operations for S3 general purpose buckets

Amazon S3 now supports enforcement of conditional write operations for S3 general purpose buckets using bucket policies. With enforcement of conditional writes, you can now mandate that S3 check the existence of an object before creating it in your bucket. Similarly, you can also mandate that S3 check the state of the object’s content before updating it in your bucket. This helps you to simplify distributed applications by preventing unintentional data overwrites, especially in high-concurrency, multi-writer scenarios.

To enforce conditional write operations, you can now use s3:if-none-match or s3:if-match condition keys to write a bucket policy that mandates the use of HTTP if-none-match or HTTP if-match conditional headers in S3 PutObject and CompleteMultipartUpload API requests. With this bucket policy in place, any attempt to write an object to your bucket without the required conditional header will be rejected. You can use this to centrally enforce the use of conditional writes across all the applications that write to your bucket.
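
For illustration, the boto3 sketch below applies such a policy, denying PutObject requests that omit the if-none-match header so objects can only be created, never blindly overwritten. The s3:if-none-match condition key is named in this announcement; the Null-condition pattern and bucket name are placeholders to adapt from the S3 User Guide.

```python
import boto3
import json

s3 = boto3.client("s3")

# Deny uploads that do not carry the HTTP if-none-match conditional header.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "RequireConditionalCreate",
        "Effect": "Deny",
        "Principal": "*",
        "Action": "s3:PutObject",
        "Resource": "arn:aws:s3:::my-bucket/*",
        # Deny when the s3:if-none-match key is absent from the request.
        "Condition": {"Null": {"s3:if-none-match": "true"}},
    }],
}

s3.put_bucket_policy(Bucket="my-bucket", Policy=json.dumps(policy))
```
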

You can use bucket policies to enforce conditional writes at no additional charge in all AWS Regions. You can use the AWS SDK, API, or CLI to perform conditional writes. To learn more about conditional writes, visit the S3 User Guide.

Read more


Amazon S3 adds new functionality for conditional writes

Amazon S3 can now perform conditional writes that evaluate if an object is unmodified before updating it. This helps you coordinate simultaneous writes to the same object and prevents multiple concurrent writers from unintentionally overwriting the object without knowing the state of its content. You can use this capability by providing the ETag of an object using S3 PutObject or CompleteMultipartUpload API requests in both S3 general purpose and directory buckets.

Conditional writes simplify how distributed applications with multiple clients concurrently update data across shared datasets. Similar to using the HTTP if-none-match conditional header to check for the existence of an object before creating it, clients can now perform conditional-write checks on an object’s ETag, which reflects changes to the object, by specifying it via the HTTP if-match header in the API request. S3 then evaluates if the object's ETag matches the value provided in the API request before committing the write, and prevents your clients from overwriting the object until the condition is satisfied. This new conditional header can help improve the efficiency of your large-scale analytics, distributed machine learning, and other highly parallelized workloads by reliably offloading compare-and-swap operations to S3.
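
As a sketch of the read-modify-write pattern this enables, the boto3 snippet below assumes a recent SDK that exposes the IfMatch parameter on PutObject; bucket and key names are placeholders.

```python
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")

# Read the current object state and remember its ETag.
head = s3.head_object(Bucket="my-bucket", Key="state.json")
etag = head["ETag"]

try:
    # Write back only if the object is unmodified since we read it.
    s3.put_object(
        Bucket="my-bucket",
        Key="state.json",
        Body=b'{"version": 2}',
        IfMatch=etag,
    )
except ClientError as e:
    if e.response["Error"]["Code"] == "PreconditionFailed":
        # Another writer won the race; re-read and retry with fresh state.
        print("Object changed since read; retry with the latest version.")
    else:
        raise
```
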

This new conditional-write functionality is available at no additional charge in all AWS Regions. You can use the AWS SDK, API, or CLI to perform conditional writes. To learn more about conditional writes, visit the S3 User Guide.

Read more


Announcing Idle Disconnect Timeout for Amazon WorkSpaces

Amazon WorkSpaces now supports Idle Disconnect Timeout for Windows WorkSpaces Personal with the Amazon DCV protocol. WorkSpaces administrators can now configure how long a user can be inactive while connected to a personal WorkSpace before they are disconnected. This setting is already available for WorkSpaces Pools, but this launch adds end user notifications for idle users, warning that their session will be disconnected soon, for both Personal and Pools.

Idle Disconnect Timeout helps Amazon WorkSpaces administrators better optimize costs and resources for their fleet. This feature helps ensure that customers who pay for their resources hourly are only paying for the WorkSpaces that are actually in use. The notifications also provide improved overall user experience for both Personal and Pools end users, by warning them about the pending disconnection and giving them a chance to continue or save their work beforehand.

Idle Disconnect Timeout is available at no additional cost for Windows WorkSpaces running DCV, in all the AWS Regions where WorkSpaces is currently available. To get started with Amazon WorkSpaces, see Getting Started with Amazon WorkSpaces.

To enable this feature, you must be using Windows WorkSpaces Personal DCV host agent version 2.1.0.1554 or later. Your users must be on WorkSpaces Windows or macOS client versions 5.24 or later, WorkSpaces Linux client version 2024.7 or later, or on Web Access. Refer to the client version release notes for more details. To learn more, visit Manage your Windows WorkSpaces in the Amazon WorkSpaces Administrator Guide.

Read more


AWS Control Tower adds prescriptive backup plans to landing zone capabilities

Today, AWS Control Tower added AWS Backup to the list of AWS services you can optionally configure with prescriptive guidance. This configuration option allows you to select from a range of recommended backup plans, seamlessly integrating data backup and recovery workflows into your Control Tower landing zone and organizational units. A landing zone is a well-architected, multi-account AWS environment based on security and compliance best practices. AWS Control Tower automates the setup of a new landing zone using best-practices blueprints for identity, federated access, logging, account structure, and, with this launch, data retention.

When you choose to enable AWS Backup on your landing zone and then select applicable organizational units, Control Tower creates a backup plan with predefined rules, like retention days, frequency, and the time window during which backups occur, that define how to back up AWS resources across all governed member accounts. Applying the backup plan at the Control Tower landing zone ensures it is consistent for all member accounts, in line with best practice recommendations from AWS Backup.

For a full list of Regions where AWS Control Tower is available, see the AWS Region Table. To learn more, visit the AWS Control Tower homepage or see the AWS Control Tower User Guide.

Read more


Amazon Connect now allows agents to self-assign tasks

Amazon Connect now allows agents to create and assign a task to themselves by checking a box in the agent workspace or contact control panel (CCP). For example, an agent can schedule a follow-up action to provide an update to a customer by scheduling a task for a preferred time and checking the self-assignment option. Amazon Connect Tasks empowers you to prioritize, assign, and track all contact center agent tasks to completion, improving agent productivity and ensuring customer issues are quickly resolved.

This feature is supported in all AWS regions where Amazon Connect is offered. To learn more, see our documentation. To learn more about Amazon Connect, the easy-to-use cloud contact center, visit the Amazon Connect website.

Read more


AWS AppConfig supports automatic rollback safety from third-party alerts

AWS AppConfig has added support for third-party monitors to trigger automatic rollbacks when there are problems with updates to feature flags, experimental flags, or configuration data. Customers can now connect AWS AppConfig to third-party application performance monitoring (APM) solutions; previously, monitoring required Amazon CloudWatch. This monitoring provides more confidence and additional safety controls when making any change in production.

Unexpected downtime or degraded performance can occur from faulty changes to feature flags or configuration data. AWS AppConfig provides safety guardrails to reduce this risk. One key safety guardrail for AWS AppConfig is the ability to have AWS AppConfig immediately roll back a change when a monitor alerts during the rollout of a feature flag or configuration change. This automation can typically remediate problems faster than a human operator can. Customers can use AWS AppConfig Extensions to connect to any API-enabled APM, including proprietary solutions.

Third-party alarm rollback for AWS AppConfig is available in all AWS Regions, including the AWS GovCloud (US) Regions. To get started, use the AWS AppConfig Getting Started Guide, or learn about AWS AppConfig automatic rollback.
 

Read more


Request future dated Amazon EC2 Capacity Reservations

Today, we are announcing that you can request Amazon EC2 Capacity Reservations to start on a future date. Capacity Reservations provide assurance for your critical workloads by allowing you to reserve compute capacity in a specific Availability Zone. Creating Capacity Reservations that start on a future date enables you to secure capacity for your future needs and provides peace of mind for your critical scaling events.

You can create future-dated Capacity Reservations by specifying the capacity you need, the start date, and the minimum duration you commit to use the reservation. Once EC2 approves the request, your reservation will be scheduled to become active on the chosen start date; upon activation, you can immediately launch instances.
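
For illustration, a boto3 sketch of requesting a future-dated reservation follows. The StartDate and CommitmentDuration parameter names are assumptions based on this launch, and the instance type, count, and dates are placeholders.

```python
import boto3
from datetime import datetime, timezone

ec2 = boto3.client("ec2")

# Reserve 50 instances for a future scaling event, starting March 1
# (StartDate/CommitmentDuration parameter names assumed from launch).
ec2.create_capacity_reservation(
    InstanceType="m7i.4xlarge",
    InstancePlatform="Linux/UNIX",
    AvailabilityZone="us-east-1a",
    InstanceCount=50,
    StartDate=datetime(2025, 3, 1, tzinfo=timezone.utc),
    CommitmentDuration=14 * 24 * 3600,  # minimum committed use, in seconds
)
```
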

This new capability is available to all Capacity Reservations customers in all AWS commercial regions, AWS China regions, and the AWS GovCloud (US) Regions at no additional cost. To learn more about these features, please refer to the Capacity Reservations user guide.

Read more


AWS announces Apache Flink connector for Amazon Simple Queue Service

Today, AWS announced support for a new Apache Flink connector for Amazon Simple Queue Service. The new connector, contributed by AWS to the Apache Flink open source project, adds Amazon Simple Queue Service as a new destination for Apache Flink. You can use the new connector to send processed data from Amazon Managed Service for Apache Flink to Amazon Simple Queue Service with Apache Flink, a popular framework and engine for processing and analyzing streaming data.

Amazon Managed Service for Apache Flink makes it easier to transform and analyze streaming data in real time with Apache Flink. Apache Flink is an open source framework and engine for processing data streams. Amazon Managed Service for Apache Flink reduces the complexity of building and managing Apache Flink applications and integrates with Amazon Managed Streaming for Apache Kafka (Amazon MSK), Amazon Kinesis Data Streams, Amazon OpenSearch Service, Amazon DynamoDB streams, Amazon S3, custom integrations, and more using built-in connectors.

Amazon Simple Queue Service offers a secure, durable, and available hosted queue that lets you integrate and decouple distributed software systems and components. Amazon SQS offers common constructs such as dead-letter queues and cost allocation tags.

You can learn more about Amazon Managed Service for Apache Flink and Amazon Simple Queue Service in our documentation. To learn more about open source Apache Flink connectors visit the official website. For Amazon Managed Service for Apache Flink and Amazon Simple Queue Service region availability, refer to the AWS Region Table.

Read more


AWS Application Load Balancer introduces Certificate Authority advertisement to simplify client behavior while using Mutual TLS

Application Load Balancer (ALB) now supports advertising the Certificate Authority (CA) subject names stored in its associated trust store, simplifying the certificate selection experience. When you enable this feature, the ALB sends a list of CA subject names to clients attempting to connect to the load balancer. Clients can use this list to identify which of their certificates will be accepted by the ALB, which reduces connection errors during mutual authentication.

You can optionally configure the Advertise CA subject name feature using AWS APIs, AWS CLI, or the AWS Management Console. This feature is available for ALBs in all commercial AWS Regions, the AWS GovCloud (US) Regions and China Regions. To learn more, refer to the ALB documentation.
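
For illustration, a boto3 sketch of enabling the feature on an existing mutual-TLS listener follows. The AdvertiseTrustStoreCaNames field name is an assumption based on this launch; the listener and trust store ARNs are placeholders.

```python
import boto3

elbv2 = boto3.client("elbv2")

# Turn on CA subject-name advertisement for a listener already configured
# for mutual TLS in verify mode (field name assumed from launch).
elbv2.modify_listener(
    ListenerArn="arn:aws:elasticloadbalancing:us-east-1:111122223333:listener/app/my-alb/abc/def",
    MutualAuthentication={
        "Mode": "verify",
        "TrustStoreArn": "arn:aws:elasticloadbalancing:us-east-1:111122223333:truststore/my-store/123",
        "AdvertiseTrustStoreCaNames": "on",
    },
)
```
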

Read more


Amazon QuickSight launches Highcharts visual (preview)

Amazon QuickSight now offers Highcharts visuals, enabling authors to create custom visualizations using the Highcharts Core library. This new feature extends your visualization capabilities beyond QuickSight's standard chart offerings, allowing you to create bespoke charts such as sunburst charts, network graphs, 3D charts and many more.

Using declarative JSON syntax, authors can configure charts with greater flexibility and granular customization. You can easily reference QuickSight fields and themes in the JSON using QuickSight expressions. The integrated code editor includes contextual assistance features, providing autocomplete and real-time validation to ensure proper configuration. To maintain security, the Highcharts visual editor prevents the injection of CSS and JavaScript. Refer to the documentation for the supported list of JSON and QuickSight expressions.

Highcharts visual is now available in all supported Amazon QuickSight regions - US East (Ohio and N. Virginia), US West (Oregon), Africa (Cape Town), Asia Pacific (Jakarta, Mumbai, Seoul, Singapore, Sydney and Tokyo), Canada (Central), Europe (Frankfurt, Ireland, London, Milan, Paris, Stockholm, Zurich), South America (São Paulo) and AWS GovCloud (US-West). To learn more about the Highcharts visual and how to leverage its capabilities in your QuickSight dashboards, visit our documentation.

Read more


Amazon QuickSight now supports import visual capability (preview)

Amazon QuickSight introduces the ability to import visuals from an existing dashboard or analysis on which authors have ownership privileges into their current analysis. This feature streamlines dashboard and report creation by allowing you to transfer associated dependencies such as datasets, parameters, calculated fields, filter definitions, and visual properties, including conditional formatting rules.

Authors can boost productivity by importing visuals instead of recreating them, facilitating collaboration across teams. The feature intelligently resolves conflicts, eliminates duplicates, rescopes filter definitions, and adjusts visuals to match the destination sheet type and theme. Imported visuals are forked from the source, ensuring independent customization. To learn more, click here.

The Import Visuals feature is available in all supported Amazon QuickSight regions - US East (Ohio and N. Virginia), US West (Oregon), Africa (Cape Town), Asia Pacific (Jakarta, Mumbai, Seoul, Singapore, Sydney and Tokyo), Canada (Central), Europe (Frankfurt, Ireland, London, Milan, Paris, Stockholm, Zurich), South America (São Paulo) and AWS GovCloud (US-West).

Read more


Amazon QuickSight launches Image component

Amazon QuickSight now includes an Image component, which gives authors greater flexibility to incorporate static images into their QuickSight dashboards, analyses, reports, and stories.

With the Image component, authors can upload images directly from their local desktop to QuickSight for a variety of use cases, such as adding company logos and branding, including background images with free-form layout, and creating captivating story covers. It also supports tooltips and alt text, providing additional context and accessibility for readers. Furthermore, it offers navigation and URL actions, enabling authors to make their images interactive, such as triggering specific dashboard actions when the image is clicked. For more details, refer to the documentation.

Image component is now available in all supported Amazon QuickSight regions - US East (Ohio and N. Virginia), US West (Oregon), Africa (Cape Town), Asia Pacific (Jakarta, Mumbai, Seoul, Singapore, Sydney and Tokyo), Canada (Central), Europe (Frankfurt, Ireland, London, Milan, Paris, Stockholm, Zurich), South America (São Paulo) and AWS GovCloud (US-West).

Read more


Announcing AWS STS support for ECDSA-based signatures of OIDC tokens

Today, AWS Security Token Service (STS) is announcing support for digitally signing OpenID Connect (OIDC) JSON Web Tokens (JWTs) using Elliptic Curve Digital Signature Algorithm (ECDSA) keys. A digital signature guarantees the JWT’s authenticity and integrity and ECDSA is a popular, NIST-approved digital signature algorithm. When your identity provider (IdP) authenticates a user, it crafts a signed OIDC JWT representing that user’s identity. When your authenticated user calls the AssumeRoleWithWebIdentity API and passes their OIDC JWT, STS vends short-term credentials that enable access to your protected AWS resources.

You now have a choice between using RSA and ECDSA keys when your IdP digitally signs an OIDC JWT. To begin using ECDSA keys with your OIDC IdP, update your IdP’s JWKS document with the new key information. No change to your AWS Identity and Access Management (IAM) configuration is needed to use ECDSA-based signatures of your OIDC JWTs.
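
For illustration, a JWKS entry for a P-256 ECDSA signing key has the following shape; the key ID and coordinate values below are placeholders, and in practice your IdP generates this document for you.

    {
      "keys": [
        {
          "kty": "EC",
          "crv": "P-256",
          "alg": "ES256",
          "use": "sig",
          "kid": "ec-signing-key-1",
          "x": "<base64url-encoded-x-coordinate>",
          "y": "<base64url-encoded-y-coordinate>"
        }
      ]
    }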

Support for ECDSA-based signatures of OIDC JWTs is available in all AWS Regions, including the AWS GovCloud (US) Regions.

To learn more about using OIDC to authenticate your users and workloads, please visit OIDC Federation in the IAM Users Guide.

Read more


Amazon QuickSight now supports font customization for visuals

Amazon QuickSight now supports the ability to customize fonts across specific visuals. Authors can now fully customize fonts for Table and Pivot table visuals, while for the remaining visuals they can customize fonts for specific properties, including the title, subtitle, legend title, and legend values.

Authors can set the font size (in pixels), font family, color, and styling options like bold, italics, and underline across analyses, including dashboards, reports, and embedded scenarios. With this update, you can align your dashboard's fonts with your organization's branding guidelines, creating a cohesive and visually appealing experience. Additionally, the font customization options can help improve readability and meet accessibility standards, especially when viewing visuals on a large screen.

Font customization for above listed visuals is now available in all supported Amazon QuickSight regions - US East (Ohio and N. Virginia), US West (Oregon), Africa (Cape Town), Asia Pacific (Jakarta, Mumbai, Seoul, Singapore, Sydney and Tokyo), Canada (Central), Europe (Frankfurt, Ireland, London, Milan, Paris, Stockholm, Zurich), South America (São Paulo) and AWS GovCloud (US-West).
 

Read more


Amazon Bedrock Knowledge Bases now supports binary vector embeddings to build RAG applications

Amazon Bedrock Knowledge Bases now supports binary vector embeddings for building Retrieval Augmented Generation (RAG) applications. This feature is available with the Amazon Titan Text Embeddings V2 model and Cohere Embed models. Amazon Bedrock Knowledge Bases offers fully managed workflows to create highly accurate, low-latency, secure, and customizable RAG applications that incorporate contextual information from an organization's data sources.

Binary vector embeddings represent document embeddings as binary vectors, with each dimension encoded as a single binary digit (0 or 1). Binary embeddings in RAG applications offer significant benefits in storage efficiency, computational speed, and scalability. They are particularly useful for large-scale information retrieval, resource-constrained environments, and real-time applications.

This new capability is currently supported with Amazon OpenSearch Serverless as the vector store. It is supported in all Amazon Bedrock Knowledge Bases Regions where Amazon OpenSearch Serverless and Amazon Titan Text Embeddings V2 or Cohere Embed are available.

For more information, please refer to the documentation.

Read more


Amazon Application Recovery Controller zonal shift and zonal autoshift support Application Load Balancers

Amazon Application Recovery Controller (ARC) zonal shift and zonal autoshift have expanded their capabilities and now support Application Load Balancers (ALB) with cross-zone configuration enabled. ARC zonal shift helps you quickly recover an unhealthy application in an Availability Zone (AZ), and reduce the duration and severity of impact to the application due to events such as power outages and hardware or software failures. ARC zonal autoshift safely and automatically shifts your application’s traffic away from an AZ when AWS identifies a potential failure affecting that AZ.

All ALB customers with cross-zone enabled load balancers can now shift traffic away from an AZ in the event of a failure. Zonal shift works with ALB by blocking all traffic to targets in the impaired AZ and removing the zonal IP from DNS. You need to first enable your ALBs for zonal shift using the ALB console or API, and then trigger a zonal shift or enable zonal autoshift via the ARC console or API. Read this launch blog to see how zonal shift can be used with ALB.

Zonal shift and zonal autoshift support for ALB with cross-zone configuration enabled is now available in all commercial AWS Regions and the AWS GovCloud (US) Regions.

There is no additional charge for using zonal shift or zonal autoshift. To get started, visit the product page or read the documentation.

Read more


Amazon Managed Service for Apache Flink now offers a new Apache Flink connector for Amazon Kinesis Data Streams. This open-source connector, contributed by AWS, supports Apache Flink 2.0 and provides several enhancements. It enables in-order reads during stream scale-up or scale-down, supports Apache Flink's native watermarking, and improves observability through unified connector metrics. Additionally, the connector uses the AWS SDK for Java 2.x, which provides enhanced performance and security features and a native retry strategy.

Amazon Kinesis Data Streams is a serverless data streaming service that enables customers to capture, process, and store data streams at any scale. Amazon Managed Service for Apache Flink makes it easier to transform and analyze streaming data in real time with Apache Flink without having to manage servers or clusters. You can use the new connector to consume data from a Kinesis Data Stream source for real-time processing in your Apache Flink application and can also send data back to a Kinesis Data Streams destination. You can use the new connector to read data from a Kinesis data stream starting with Apache Flink version 1.19.

To learn more about Apache Flink Amazon Kinesis Data Streams connector, visit the official Apache Flink documentation. You can also check the GitHub repositories for Apache AWS connectors.
 

Read more


AWS Resilience Hub introduces a summary view

AWS Resilience Hub introduces a new summary view, providing an executive-level view of the resilience posture of the application portfolio defined in Resilience Hub. The new summary view allows you to visualize the state of your application portfolio, so you can efficiently manage and improve your applications’ ability to withstand and recover from disruptions.

Understanding the current state of application resilience can be a challenge, especially when it comes to identifying which applications need attention and communicating this information across your organization. The new summary view in Resilience Hub helps you to quickly identify applications that require remediation and streamline resilience management across your application portfolio. In addition to the new summary view, we are providing the ability to export the data powering the summary view, allowing you to create custom reports for stakeholder communication. The summary and export functions allow teams to quickly assess the current state of application resilience and take necessary actions to improve it.

The new summary view is available in all of the AWS Regions where AWS Resilience Hub is supported. For the most up-to-date availability information, see the AWS Regional Services List.

To learn more about AWS Resilience Hub, visit our product page. To get started with AWS Resilience Hub, sign into the AWS console.

Read more


AWS Lambda adds support for Node.js 22

AWS Lambda now supports creating serverless applications using Node.js 22. Developers can use Node.js 22 as both a managed runtime and a container base image, and AWS will automatically apply updates to the managed runtime and base image as they become available.

Node.js 22 is the latest long-term support (LTS) release of Node.js and is expected to be supported for security and bug fixes until April 2027. It provides access to the latest Node.js language features, such as the ‘fetch’ API. You can use Node.js 22 with Lambda@Edge in supported Regions, allowing you to customize low-latency content delivered through Amazon CloudFront. Powertools for AWS Lambda (TypeScript), a developer toolkit to implement serverless best practices and increase developer velocity, also supports Node.js 22.

The Node.js 22 runtime is available in all Regions where Lambda is available, including China and the AWS GovCloud (US) Regions.

You can use the full range of AWS deployment tools, including the Lambda console, AWS CLI, AWS Serverless Application Model (AWS SAM), AWS CDK, and AWS CloudFormation to deploy and manage serverless applications written in Node.js 22. For more information, including guidance on upgrading existing Lambda functions, see our blog post. For more information about AWS Lambda, visit our product page.
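
As a minimal sketch using boto3 (the AWS SDK for Python), creating a function on the new runtime only requires selecting the nodejs22.x identifier; the function name, role ARN, and deployment package below are hypothetical.

    import boto3

    lambda_client = boto3.client("lambda")

    with open("function.zip", "rb") as f:
        lambda_client.create_function(
            FunctionName="my-node22-function",  # hypothetical name
            Runtime="nodejs22.x",               # the new Node.js 22 managed runtime
            Role="arn:aws:iam::123456789012:role/my-lambda-role",
            Handler="index.handler",
            Code={"ZipFile": f.read()},
        )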

Read more


Amazon RDS Blue/Green Deployments support minor version upgrade for RDS for PostgreSQL

Amazon Relational Database Service (Amazon RDS) Blue/Green Deployments now supports safer, simpler, and faster minor version upgrades for your Amazon RDS for PostgreSQL databases using physical replication. The use of PostgreSQL physical replication for database change management, such as minor version upgrade, simplifies your RDS Blue/Green Deployments upgrade experience by overcoming PostgreSQL community logical replication limitations.

You can now use Amazon RDS Blue/Green Deployments to deploy multiple database changes to production, such as minor version upgrades, storage volume shrink, maintenance updates, and instance scaling, in a single switchover event using physical replication. RDS Blue/Green Deployments for PostgreSQL relies on logical replication for major version upgrades.

Blue/Green Deployments for PostgreSQL creates a fully managed staging environment using physical replication for minor version upgrades, which allows you to deploy and test production changes while keeping your current production database safer. With a few clicks, you can switch over the staging environment to be the new production system in as fast as a minute, with no data loss and no changes to your application for database endpoint management.

Amazon RDS Blue/Green Deployments is now available for Amazon RDS for PostgreSQL using physical replication for all minor versions of major versions 11 and higher in all applicable AWS Regions. In a few clicks, you can update your databases using Amazon RDS Blue/Green Deployments via the Amazon RDS Console. Learn more about Blue/Green Deployments on the Amazon RDS features page.
 

Read more


AWS Lake Formation now supports named LF-Tag expressions

Today, AWS announces the general availability of named LF-Tag expressions in AWS Lake Formation. With this launch, customers can create and manage named combinations of LF-Tags. With named LF-Tag expressions, customers can now create permission expressions that better represent complex business requirements.

Customers use LF-Tags to create complex data grants based on attributes and want to manage combinations of LF-Tags. Now, when customers want to grant the same combination of LF-Tags to multiple users, they can create a named LF-Tag expression and grant that expression to multiple users rather than providing the full expression for every grant. Additionally, when a customer’s LF-Tag ontology changes, for example due to changed business requirements, customers can update a single expression instead of every permission that used the changed LF-Tags.
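
A hedged sketch with boto3: the CreateLFTagExpression operation comes from this launch, but the exact request shape should be checked against the Lake Formation API reference; the expression name, tag keys, and values below are hypothetical.

    import boto3

    lakeformation = boto3.client("lakeformation")

    # Create a reusable, named combination of LF-Tags that can then be
    # granted to multiple principals instead of repeating the full expression.
    # Request shape follows the announcement; verify against the API reference.
    lakeformation.create_lf_tag_expression(
        Name="sales-non-pii-readers",
        Description="Sales domain tables cleared of PII",
        Expression=[
            {"TagKey": "domain", "TagValues": ["sales"]},
            {"TagKey": "classification", "TagValues": ["non-pii"]},
        ],
    )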

Named LF-Tag expressions are generally available in commercial AWS Regions where AWS Lake Formation is available and the AWS GovCloud (US) Regions.

To get started with this feature, visit the AWS Lake Formation documentation.
 

Read more


AWS Application Load Balancer introduces header modification for enhanced traffic control and security

Application Load Balancer (ALB) now supports HTTP request and response header modification, giving you greater control to manage your application’s traffic and security posture without having to alter your application code.

This feature introduces three key capabilities: renaming specific load balancer generated headers, inserting specific response headers, and disabling the server response header. With header rename, you can now rename all ALB-generated Transport Layer Security (TLS) headers that the load balancer adds to requests, which includes the six mTLS headers and two TLS headers (version and cipher). This capability enables seamless integration with existing applications that expect headers in a specific format, thereby minimizing changes to your backends while using TLS features on the ALB. With header insertion, you can insert custom headers related to Cross-Origin Resource Sharing (CORS) and critical security headers like HTTP Strict-Transport-Security (HSTS). Finally, the capability to disable the ALB-generated “Server” header in responses reduces exposure of server-specific information, adding an extra layer of protection to your application. These response header modification features give you the ability to centrally enforce your organization’s security posture at the load balancer instead of at individual applications, where enforcement can be prone to errors.

You can configure the Header Modification feature using AWS APIs, the AWS CLI, or the AWS Management Console. This feature is available for ALBs in all commercial AWS Regions, the AWS GovCloud (US) Regions, and the China Regions. To learn more, refer to the ALB documentation.
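
As a hedged boto3 sketch, the response-header controls are configured as listener attributes; the attribute keys shown are assumptions based on this launch and should be verified in the ALB documentation, and the listener ARN is a placeholder.

    import boto3

    elbv2 = boto3.client("elbv2")

    # Attribute keys are assumed per this launch; verify in the ALB docs.
    elbv2.modify_listener_attributes(
        ListenerArn="arn:aws:elasticloadbalancing:us-east-1:123456789012:listener/app/my-alb/abc123/def456",
        Attributes=[
            # Disable the ALB-generated "Server" response header.
            {"Key": "routing.http.response.server.enabled", "Value": "false"},
            # Insert an HSTS response header centrally at the load balancer.
            {
                "Key": "routing.http.response.strict_transport_security.header_value",
                "Value": "max-age=31536000; includeSubDomains",
            },
        ],
    )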
 

Read more


Amazon VPC IPAM now supports enabling IPAM for organizational units within AWS Organizations

Today, AWS announced the ability for Amazon VPC IP Address Manager (IPAM) to be enabled and used for specific organizational units (OUs) within AWS Organizations. This allows you to enable IPAM for specific types of workloads, such as production workloads, or for specific business subsidiaries, that are grouped as OUs in your organization.

VPC IPAM makes it easier for you to plan, track, and monitor IP addresses for your AWS workloads. Typically, you would enable IPAM for the entire organization, giving you a unified view of all the IP addresses. In some cases, you may want to enable IPAM only for parts of your organization. For example, you may want to enable IPAM for all types of workloads except sandbox workloads, which are isolated from your core network and contain only experimental workloads. Or, you may want to onboard selected business subsidiaries that need IPAM ahead of others in the organization. In such cases, you can use this new feature to enable IPAM for specific parts of your organization that are grouped as OUs.

Amazon VPC IPAM is available in all AWS Regions, including China (Beijing, operated by Sinnet), and China (Ningxia, operated by NWCD), and the AWS GovCloud (US) Regions.

To learn more about this feature, view the service documentation. For details on IPAM pricing, refer to the IPAM tab on the Amazon VPC Pricing page.

Read more


Amazon EC2 now provides lineage information for your AMIs

Amazon EC2 now provides source details for your Amazon Machine Images (AMIs). With this lineage information, you can easily trace any copied or derived AMI back to its original AMI source.

Prior to today, you had to maintain a list of AMIs, use tags, and create custom scripts to track the origins of an AMI. This approach was time-consuming, hard to scale, and resulted in operational overhead. Now with this capability, you can easily view details of the source AMI, making it easier for you to understand where a particular AMI originated. When copying AMIs across AWS Regions, the lineage information clearly links the copied AMIs to their original AMIs. This new capability provides a more streamlined and efficient way to manage and understand the lineage of AMIs within your AWS environment.

You can view these details by using the AWS CLI, SDKs, or Console. This capability is available at no additional cost in all AWS Regions, including AWS GovCloud (US) and AWS China Regions. To learn more, please visit our documentation here.
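
As a small boto3 sketch, the lineage fields appear on the image description; the AMI ID below is a placeholder, and the SourceImageId/SourceImageRegion field names follow this announcement.

    import boto3

    ec2 = boto3.client("ec2")

    image = ec2.describe_images(ImageIds=["ami-0abcd1234example5678"])["Images"][0]
    # For a copied or derived AMI, these fields (as announced) identify the
    # original source AMI and the Region it was copied from.
    print("source AMI:", image.get("SourceImageId"))
    print("source Region:", image.get("SourceImageRegion"))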

Read more


Amazon RDS for PostgreSQL supports pgvector 0.8.0

Amazon Relational Database Service (RDS) for PostgreSQL now supports pgvector 0.8.0, an open-source extension for PostgreSQL for storing and efficiently querying vector embeddings in your database, letting you use retrieval-augmented generation (RAG) when building your generative AI applications. The pgvector 0.8.0 release includes improvements to the PostgreSQL query planner’s index selection when filters are present, which can deliver better query performance and improve search result quality.

The pgvector 0.8.0 release includes a variety of improvements to how pgvector filters data using conditions in WHERE clauses and joins, which can improve query performance and usability. Additionally, iterative index scans help prevent ‘overfiltering’, ensuring generation of sufficient results to satisfy the conditions of a query. If an initial index scan doesn’t satisfy the query conditions, pgvector will continue to search the index until it hits a configurable threshold. This release also has performance improvements for searching and building HNSW indexes.
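
As a brief sketch of iterative scans from Python (psycopg2), assuming a hypothetical items table with an embedding column and an HNSW index; the hnsw.iterative_scan setting is part of pgvector 0.8.0.

    import psycopg2

    # Connection parameters and the items table are hypothetical.
    conn = psycopg2.connect(host="mydb.example.com", dbname="app",
                            user="app", password="secret")
    cur = conn.cursor()

    # If the first pass over the HNSW index does not yield enough rows that
    # match the filter, pgvector keeps scanning until a configurable threshold.
    cur.execute("SET hnsw.iterative_scan = relaxed_order")
    cur.execute(
        "SELECT id FROM items WHERE category = %s "
        "ORDER BY embedding <-> %s::vector LIMIT 10",
        ("books", "[0.1, 0.2, 0.3]"),
    )
    print(cur.fetchall())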

pgvector 0.8.0 is available on database instances in Amazon RDS running PostgreSQL 17.1 and higher, 16.5 and higher, 15.9 and higher, 14.14 and higher, and 13.17 and higher in all applicable AWS Regions.

Amazon RDS for PostgreSQL makes it simple to set up, operate, and scale PostgreSQL deployments in the cloud. See Amazon RDS for PostgreSQL Pricing for pricing details and regional availability. Create or update a fully managed Amazon RDS database in the Amazon RDS Management Console.

Read more


Amazon RDS Blue/Green Deployments Green storage fully performant prior to switchover

Amazon Relational Database Service (Amazon RDS) Blue/Green Deployments now support managed initialization of Green storage volumes that accelerates the loading of storage blocks from Amazon S3. This ensures that the volumes are fully performant prior to switchover of the Green databases. Blue/Green Deployments create a fully managed staging environment, or Green database, by restoring the Blue database snapshot. The Green database allows you to deploy and test production changes, keeping your current production database, or Blue database, safer.

Previously, you had to manually initialize the storage volumes of the Green databases. With this launch, RDS Blue/Green Deployments will proactively manage and accelerate the storage initialization for your Green database instances. You can view the progress of storage initialization using the RDS Console and command line interface (CLI). Managed storage initialization of the Green databases is supported for Blue/Green Deployments created for the RDS for PostgreSQL, RDS for MySQL, and RDS for MariaDB engines.

Amazon RDS Blue/Green Deployments are available for Amazon RDS for PostgreSQL major versions 12 and higher, RDS for MySQL major versions 5.7 and higher, and Amazon RDS for MariaDB major versions 10.4 and higher.

In a few clicks, update your databases using Amazon RDS Blue/Green Deployments via the Amazon RDS Console. Learn more about RDS Blue/Green Deployments and the supported engine versions here.
 

Read more


Amazon ElastiCache version 8.0 for Valkey brings faster scaling and improved memory efficiency

Today, Amazon ElastiCache introduces support for Valkey 8.0, the latest Valkey major version. This release brings faster scaling for ElastiCache Serverless for Valkey and improved memory efficiency on node-based ElastiCache, compared to previous versions of ElastiCache for Valkey and Redis OSS. Valkey is an open-source, high-performance key-value datastore stewarded by the Linux Foundation and is a drop-in replacement for Redis OSS. Backed by over 40 companies, Valkey has seen rapid adoption since its inception in March 2024.

Hundreds of thousands of customers use ElastiCache to scale their applications, improve performance, and optimize costs. ElastiCache Serverless version 8.0 for Valkey scales to 5 million requests per second (RPS) per cache in minutes, up to 5x faster than Valkey 7.2, with microsecond read latency. With node-based ElastiCache, you can benefit from improved memory efficiency, with 32 bytes less memory per key compared to ElastiCache version 7.2 for Valkey and ElastiCache for Redis OSS. AWS has made significant contributions to open source Valkey in the areas of performance, scalability, and memory optimizations, and we are bringing these benefits into ElastiCache version 8.0 for Valkey.

ElastiCache version 8.0 for Valkey is now available in all AWS regions. You can upgrade from ElastiCache version 7.2 for Valkey or any ElastiCache for Redis OSS version to ElastiCache version 8.0 for Valkey in a few clicks without downtime. You can get started using the AWS Management Console, Software Development Kit (SDK), or Command Line Interface (CLI). For more information, please visit the ElastiCache features page, blog and documentation.
 

Read more


Amazon RDS for PostgreSQL supports minor versions 17.2, 16.6, 15.10, 14.15, 13.18, and 12.22

Amazon Relational Database Service (RDS) for PostgreSQL now supports the latest minor versions 17.2, 16.6, 15.10, 14.15, 13.18, and 12.22. We recommend that you upgrade to the latest minor versions to fix known security vulnerabilities in prior versions of PostgreSQL, and to benefit from the bug fixes added by the PostgreSQL community.

You can leverage automatic minor version upgrades to automatically upgrade your databases to more recent minor versions during the scheduled maintenance window. Learn more about upgrading your database instances in the Amazon RDS User Guide.
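
For example, with boto3 you can opt an instance (hypothetical identifier below) in to automatic minor version upgrades:

    import boto3

    rds = boto3.client("rds")

    # Minor version upgrades will then be applied automatically during the
    # instance's scheduled maintenance window.
    rds.modify_db_instance(
        DBInstanceIdentifier="my-postgres-db",  # hypothetical
        AutoMinorVersionUpgrade=True,
        ApplyImmediately=True,
    )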

Additionally, starting with PostgreSQL major version 18, Amazon RDS for PostgreSQL will deprecate the plcoffee and plls PostgreSQL extensions. We recommend that you stop using CoffeeScript and LiveScript in your applications to ensure you have an upgrade path for the future.

Amazon RDS for PostgreSQL makes it simple to set up, operate, and scale PostgreSQL deployments in the cloud. See Amazon RDS for PostgreSQL Pricing for pricing details and regional availability. Create or update a fully managed Amazon RDS database in the Amazon RDS Management Console.
 

Read more


AWS Backup for Amazon S3 adds new restore parameter

AWS Backup introduces a new restore parameter for Amazon S3 backups, offering you the ability to choose how many versions of an object to restore.

By default, AWS Backup restores only the latest version of objects from the version stack at any point in time. The new parameter will now allow you to recover all versions of your data by restoring the entire version stack. You can also recover just the latest version(s) of an object without the overhead of restoring all older versions. With this feature, you now have more flexibility to control the data recovery process of Amazon S3 buckets/prefixes from your Amazon S3 backups, tailoring restore jobs to your requirements.

This feature is available in all Regions where AWS Backup for Amazon S3 is available. For more information on Regional availability and pricing, see the AWS Backup pricing page.

To learn more about AWS Backup for Amazon S3, visit the product page and technical documentation. To get started, visit the AWS Backup console.
 

Read more


AWS Elastic Beanstalk adds support for Ruby 3.3

AWS Elastic Beanstalk is a service that provides the ability to deploy and manage applications in AWS without worrying about the infrastructure that runs those applications. Ruby 3.3 on AL2023 adds support for a new parser, a new pure-Ruby just-in-time compiler, and several performance improvements. You can create Elastic Beanstalk environment(s) running Ruby 3.3 on AL2023 using any of the Elastic Beanstalk interfaces such as the Elastic Beanstalk Console, Elastic Beanstalk CLI, and the Elastic Beanstalk API.

This platform is generally available in commercial regions where Elastic Beanstalk is available including the AWS GovCloud (US) Regions. For a complete list of regions and service offerings, see AWS Regions.

For more information about Ruby and Linux Platforms, see the Elastic Beanstalk developer guide. To learn more about Elastic Beanstalk, visit the Elastic Beanstalk product page.

Read more


Amazon SQS increases in-flight limit for FIFO queues from 20K to 120K

Amazon SQS increases the in-flight limit for FIFO queues from 20K to 120K messages. When a message is sent to an SQS FIFO queue, it is added to the queue backlog. Once you invoke a receive request on the FIFO queue, the message is marked as in-flight and remains in-flight until a delete message request is invoked.

With this change to the in-flight limit, your receivers can now process a maximum of 120K messages from SQS FIFO queues concurrently, up from 20K previously. If you have sufficient publish throughput and were constrained by the 20K in-flight limit, you can now process up to 120K messages at a time by scaling your receivers.
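
To make the in-flight mechanics concrete, here is a minimal boto3 sketch; the queue URL is hypothetical, and the print statement is a stand-in for real processing.

    import boto3

    sqs = boto3.client("sqs")
    queue_url = "https://sqs.us-east-1.amazonaws.com/123456789012/orders.fifo"  # hypothetical

    # Received messages are in-flight (counting against the 120K FIFO limit)
    # until deleted, or until the visibility timeout returns them to the queue.
    resp = sqs.receive_message(QueueUrl=queue_url, MaxNumberOfMessages=10)
    for msg in resp.get("Messages", []):
        print("processing", msg["MessageId"])  # stand-in for real work
        sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=msg["ReceiptHandle"])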

The increased in-flight limit is available in all commercial and the AWS GovCloud (US) Regions where SQS FIFO queues are available.

To get started, see the Amazon SQS documentation.

Read more


Amazon RDS for MySQL now supports MySQL 8.4 LTS release

Amazon RDS for MySQL now supports MySQL major version 8.4, the latest long-term support (LTS) release from the MySQL community. RDS for MySQL 8.4 is integrated with the AWS Libcrypto (AWS-LC) FIPS module (Certificate #4816), and includes support for the multi-source replication plugin for analytics and the Group Replication plugin for continuous availability, as well as several performance and feature improvements added by the MySQL community. Learn more about the community enhancements in the MySQL 8.4 release notes.

You can leverage Amazon RDS Managed Blue/Green deployments to upgrade your databases from MySQL 8.0 to MySQL 8.4. Learn more about RDS for MySQL 8.4 features and upgrade options, including Managed Blue/Green deployments in the Amazon RDS User Guide.
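
As a hedged boto3 sketch of such an upgrade, using a hypothetical source database ARN and target version string:

    import boto3

    rds = boto3.client("rds")

    # Creates a Green copy of the database on MySQL 8.4 that stays in sync
    # with the Blue (production) database until you switch over.
    rds.create_blue_green_deployment(
        BlueGreenDeploymentName="mysql-84-upgrade",
        Source="arn:aws:rds:us-east-1:123456789012:db:my-mysql-db",
        TargetEngineVersion="8.4.3",  # hypothetical target minor version
    )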

Amazon RDS for MySQL 8.4 is now available in all AWS Commercial and the AWS GovCloud (US) Regions.

Amazon RDS for MySQL makes it straightforward to set up, operate, and scale MySQL deployments in the cloud. Learn more about pricing details and regional availability at Amazon RDS for MySQL. Create or update a fully managed Amazon RDS for MySQL 8.4 database in the Amazon RDS Management Console.
 

Read more


Enhanced account linking experience across AWS Marketplace and AWS Partner Central

Today, AWS announces an improved account linking experience for AWS Partners to create and connect their AWS Marketplace accounts with AWS Partner Central, as well as to onboard associated users. Account linking allows Partners to seamlessly navigate between Partner Central and the Marketplace Management Portal using Single Sign-On (SSO), connect Partner Central solutions to AWS Marketplace listings, link private offers to opportunities for tracking deals from pipeline to customer offers, and access AWS Marketplace insights within the centralized AWS Partner Analytics Dashboard. Linking accounts also unlocks access to valuable AWS Partner Network (APN) program benefits such as ISV Accelerate and accelerated sales cycles.

The new account linking experience introduces three major improvements to streamline the self-guided linking workflow. First, it simplifies the process to associate your AWS account with AWS Marketplace by registering your legal business name. Second, it automates the creation and bulk assignment of Identity and Access Management (IAM) roles to AWS Partner Central users, eliminating the need for manual creation in the AWS IAM console. Third, it introduces three new AWS managed policies to simplify permission management for AWS Partner Central and Marketplace access. The new policies offer fine-grained access options, ranging from full Partner Central access to personalized access to co-sell or marketplace offer management.

This new experience is available for all AWS Partners. To get started, navigate to the “Account Linking” feature on the AWS Partner Central homepage. To learn more, review the AWS Partner Central documentation.

Read more


Amazon API Gateway now supports Custom Domain Name for private REST APIs

Amazon API Gateway (APIGW) now gives you the ability to manage your private REST APIs using custom, user-friendly private DNS names like private.example.com, simplifying API discovery. This feature enhances your security posture by continuing to encrypt your private API traffic with Transport Layer Security (TLS), while providing full control over managing the lifecycle of the TLS certificate associated with your domain.

API providers can get started with this feature in four simple steps using the APIGW console and/or APIs. First, create a private custom domain. Second, configure an AWS Certificate Manager (ACM) provided or imported certificate for the domain. Third, map multiple private APIs using base path mappings. Fourth, control invocations to the domain using resource policies. API providers can optionally share the domain across accounts using AWS Resource Access Manager (RAM) to give consumers the ability to access APIs from different accounts. Once a domain is shared using RAM, a consumer can use VPC endpoint(s) to invoke multiple private custom domains across accounts.

Custom domain name for private REST APIs is now available on API Gateway in all AWS Regions, including the AWS GovCloud (US) Regions. Please visit the API Gateway documentation and AWS blog post to learn more.
 

Read more


AWS CloudTrail Lake launches enhanced analytics and cross-account data access

AWS announces two significant enhancements to CloudTrail Lake, a managed data lake that enables you to aggregate, immutably store, and analyze your activity logs at scale:

  • Comprehensive dashboard capabilities: A new "Highlights" dashboard provides an at-a-glance overview of your AWS activity logs including AI-powered insights (AI-powered insights is in preview). Additionally, we have added 14 new pre-built dashboards catering to various use cases such as security and operational monitoring. These dashboards provide a starting point to analyze trends, detect anomalies, and conduct efficient investigations across your AWS environments. For example, the security dashboard displays top access denied events, failed console login attempts, and more. You can also create custom dashboards with scheduled refreshes, tailoring your monitoring to specific needs.
  • Cross-account sharing of event data stores: This feature allows you to securely share your event data stores with select IAM identities using Resource-Based Policies (RBP). These identities can then query the shared event data store within the same AWS Region where the event data store was created, facilitating more comprehensive analysis across your organization while maintaining security.

These features are available in all AWS Regions where AWS CloudTrail Lake is supported, except AI-powered insights on the “Highlights” dashboard, which is in preview in the N. Virginia, Oregon, and Tokyo Regions. While these enhancements are available at no additional cost, standard CloudTrail Lake query charges apply when running queries to generate results or create visualizations for the CloudTrail Lake dashboards. To learn more, visit the AWS CloudTrail documentation or read our News Blog.
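
A hedged boto3 sketch of cross-account sharing: CloudTrail’s PutResourcePolicy is applied to the event data store ARN, with a resource-based policy granting query access to another account’s role; all ARNs, account IDs, and the action list below are illustrative and should be checked against the CloudTrail documentation.

    import boto3
    import json

    cloudtrail = boto3.client("cloudtrail")

    # ARNs, account IDs, and actions are illustrative placeholders.
    eds_arn = "arn:aws:cloudtrail:us-east-1:111122223333:eventdatastore/EXAMPLE-f852-4e8f-8bd1"
    policy = {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Principal": {"AWS": "arn:aws:iam::444455556666:role/AnalystRole"},
            "Action": ["cloudtrail:StartQuery", "cloudtrail:GetQueryResults"],
            "Resource": eds_arn,
        }],
    }
    cloudtrail.put_resource_policy(ResourceArn=eds_arn, ResourcePolicy=json.dumps(policy))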

Read more


Amazon CloudWatch Synthetics now automatically deletes Lambda resources associated with canaries

Amazon CloudWatch Synthetics, an outside-in monitoring capability that continually verifies your customers’ experience by running snippets of code on AWS Lambda called canaries, will now automatically delete the associated Lambda resources when you delete Synthetics canaries, minimizing the manual upkeep required to manage AWS resources in your account.

CloudWatch Synthetics creates Lambda functions to execute canaries that monitor the health and performance of your web applications or API endpoints. When you delete a canary, the Lambda function and its layers are no longer usable. With the release of this feature, these Lambda resources are automatically removed when a canary is deleted, reducing the housekeeping needed to maintain your Synthetics canaries. Canaries deleted via the AWS console automatically clean up related Lambda resources. New canaries created via the CLI, SDK, or CloudFormation are automatically opted in to this feature, whereas canaries created before this launch need to be explicitly opted in.
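
For a canary created before this launch, the explicit opt-in looks like the following boto3 sketch (canary name hypothetical):

    import boto3

    synthetics = boto3.client("synthetics")

    # DeleteLambda=True removes the canary's underlying Lambda function and
    # layers along with the canary itself.
    synthetics.delete_canary(Name="my-canary", DeleteLambda=True)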

This feature is available in all commercial Regions, the AWS GovCloud (US) Regions, and the China Regions at no additional cost.

To learn more about the delete behavior of canaries, see the documentation, or refer to the user guide and One Observability Workshop to get started with CloudWatch Synthetics.
 

Read more


AWS Elastic Beanstalk adds support for Node.js 22

AWS Elastic Beanstalk now supports building and deploying Node.js 22 applications on AL2023 Beanstalk environments.

AWS Elastic Beanstalk is a service that provides the ability to deploy and manage applications in AWS without worrying about the infrastructure that runs those applications. Node.js 22 on AL2023 provides updates to the V8 JavaScript engine, improved garbage collection and performance improvements. You can create Elastic Beanstalk environment(s) running Node.js 22 on AL2023 using any of the Elastic Beanstalk interfaces such as Elastic Beanstalk Console, Elastic Beanstalk CLI, and the Elastic Beanstalk API.

This platform is generally available in commercial regions where Elastic Beanstalk is available including the AWS GovCloud (US) Regions. For a complete list of regions and service offerings, see AWS Regions.

For more information about Node.js and Linux Platforms, see the Elastic Beanstalk developer guide. To learn more about Elastic Beanstalk, visit the Elastic Beanstalk product page.

Read more


AWS announces support for predictive scaling for Amazon ECS services

Today, AWS announces support for predictive scaling for Amazon Elastic Container Service (Amazon ECS). Predictive scaling leverages advanced machine learning algorithms to proactively scale your Amazon ECS services ahead of demand surges, reducing overprovisioning costs while improving application responsiveness and availability.

Amazon ECS offers a rich set of service auto scaling options, including target tracking and step scaling policies that automatically adjust task counts in response to observed load, as well as scheduled scaling to manually define rules to adjust capacity for routine demand patterns. Many applications observe recurring patterns of steep demand changes, such as early morning spikes when business resumes, wherein a reactive scaling policy can be slow to respond. Predictive scaling is a new capability that harnesses advanced machine learning algorithms, pre-trained on millions of data points, to proactively scale out ECS services ahead of anticipated demand surges. You can use predictive scaling alongside your existing auto scaling policies, such as target tracking or step scaling, so that your applications scale based on both real-time and historic patterns. You can also choose a “forecast only” mode to evaluate its accuracy and suitability before enabling it to “forecast and scale”. Predictive scaling enhances responsiveness and availability for applications with recurring demand patterns, while also reducing the operational effort of manually configuring scaling policies and the costs from overprovisioning.

You can use the AWS Management Console, SDK, CLI, CloudFormation, and CDK to configure predictive auto scaling for your ECS services. For a list of supported AWS Regions, see the documentation. To learn more, visit this blog post and documentation.
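
A heavily hedged boto3 sketch via Application Auto Scaling: the PredictiveScaling policy type and configuration keys below mirror this launch but should be verified against the Application Auto Scaling API reference; cluster, service, and policy names are hypothetical.

    import boto3

    aas = boto3.client("application-autoscaling")

    # Configuration keys are assumptions per this launch; verify in the docs.
    aas.put_scaling_policy(
        PolicyName="ecs-predictive-cpu",
        ServiceNamespace="ecs",
        ResourceId="service/my-cluster/my-service",
        ScalableDimension="ecs:service:DesiredCount",
        PolicyType="PredictiveScaling",
        PredictiveScalingPolicyConfiguration={
            "MetricSpecifications": [{
                "TargetValue": 70.0,
                "PredefinedMetricPairSpecification": {
                    "PredefinedMetricType": "ECSServiceCPUUtilization"
                },
            }],
            # Start in forecast-only mode; switch to "ForecastAndScale"
            # once the forecasts look accurate.
            "Mode": "ForecastOnly",
        },
    )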

Read more


Bottlerocket announces new AMIs that are preconfigured to use FIPS 140-3 validated cryptographic modules

Today, AWS has announced new AMIs for Bottlerocket that are preconfigured to use FIPS 140-3 validated cryptographic modules, including the Amazon Linux 2023 Kernel Crypto API and AWS-LC. Bottlerocket is a Linux-based operating system purpose-built for running containers, with a focus on security, minimal footprint, and safe updates.

With these FIPS-enabled Bottlerocket AMIs, the host software uses only FIPS-approved cryptographic algorithms for TLS connections. This includes connectivity to AWS services such as EC2 and Amazon Elastic Container Registry (ECR). Additionally, in regions where FIPS endpoints are available, the AMIs automatically use FIPS-compliant endpoints for these services by default, streamlining secure configurations for containerized workloads.

The FIPS-enabled Bottlerocket AMIs are now available in all commercial and AWS GovCloud (US) Regions. To see the Regions where FIPS endpoints are supported, visit the AWS FIPS 140-3 page.

To get started with Bottlerocket, see the Bottlerocket User Guide. You can also visit the Bottlerocket product page and explore the Bottlerocket GitHub repository for more information.

Read more


Amazon QuickSight supports fine grained permissions for capabilities with APIs for IAM Identity Center users

Amazon QuickSight now supports user-level custom permissions profile assignment for IAM Identity Center users. Custom permissions profiles enable administrators to restrict access to capabilities in the QuickSight application by adding the profile to a user. A custom permissions profile defines which capabilities are disabled for a user or role. For example, administrators can restrict specific users from exporting data to Excel and CSV and prevent users from sharing QuickSight assets.

Custom permissions profiles are managed with the following APIs: CreateCustomPermissions, ListCustomPermissions, DescribeCustomPermissions, UpdateCustomPermissions and DeleteCustomPermissions. Custom permissions assignment to users is managed with the following APIs: UpdateUserCustomPermission and DeleteUserCustomPermission. These APIs are supported with all identity types in QuickSight.
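
As a hedged boto3 sketch tying the two API families together; the account ID, profile name, and user name are hypothetical, and the capability names inside Capabilities are illustrative assumptions to be checked against the QuickSight API reference.

    import boto3

    quicksight = boto3.client("quicksight")
    account_id = "123456789012"  # hypothetical

    # Define a profile that disables selected capabilities.
    # Capability names are illustrative; verify in the API reference.
    quicksight.create_custom_permissions(
        AwsAccountId=account_id,
        CustomPermissionsName="restricted-analysts",
        Capabilities={
            "ExportToCsv": "DENY",
            "ExportToExcel": "DENY",
            "ShareDashboards": "DENY",
        },
    )

    # Assign the profile to an individual user.
    quicksight.update_user_custom_permission(
        AwsAccountId=account_id,
        Namespace="default",
        UserName="analyst-1",  # hypothetical
        CustomPermissionsName="restricted-analysts",
    )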

This feature is available in all AWS Regions where Amazon QuickSight is available. To learn more, see Customizing access to Amazon QuickSight capabilities.

Read more


Amazon Kinesis Data Streams On-Demand mode supports streams writing up to 10 GB/s

Amazon Kinesis Data Streams On-Demand mode now automatically scales to support streaming applications that write up to 10 GB/s per stream and consumers that read up to 20 GB/s per stream. This is a 5x increase from the previously supported limits of 2 GB/s per stream for writers and 4 GB/s for readers.

Amazon Kinesis Data Streams is a serverless data streaming service that allows customers to build decoupled applications that publish and consume real-time data streams. It includes integrations with 40+ AWS and third-party services, enabling customers to easily build real-time stream processing, analytics, and machine learning applications. Customers use Kinesis Data Streams On-Demand mode for workloads with unpredictable and variable traffic patterns, so they do not have to manage capacity, and they pay based on the amount of data streamed. Customers can now use On-Demand mode for high-throughput data streams.
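
For reference, creating an on-demand stream with boto3 requires no shard provisioning (stream name hypothetical):

    import boto3

    kinesis = boto3.client("kinesis")

    # On-demand streams scale automatically, now up to 10 GB/s of writes
    # and 20 GB/s of reads per stream.
    kinesis.create_stream(
        StreamName="clickstream-events",
        StreamModeDetails={"StreamMode": "ON_DEMAND"},
    )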

There is no action required on your part to use this feature in the US East (N. Virginia), US West (Oregon), and Europe (Ireland) AWS Regions. When you write data to your Kinesis on-demand stream, it will automatically scale to write up to 10 GB/s. For other AWS Regions, you can reach out to AWS Support to raise the peak write throughput capacity of your on-demand streams to 10 GB/s. To learn more, see the Kinesis Data Streams Quotas and Limits documentation.

Read more


Amazon RDS Blue/Green Deployments support storage volume shrink

Amazon Relational Database Service (Amazon RDS) Blue/Green Deployments now supports the ability to shrink the storage volumes for your RDS database instances, allowing you to better utilize your storage resources and manage costs. You can now increase and decrease your storage volume size based on anticipated application demands.

Previously, to shrink a storage volume, you had to manually create a new database instance with a smaller volume size, manually migrate the data from your current database to the newly created database instance, and switch database endpoints, often resulting in extended downtime. Blue/Green Deployments create a fully managed staging environment, or Green databases, with your specified storage size, and keep the Blue and Green databases in sync. With a few clicks, you can promote the Green databases to be the new production system in as fast as a minute, with no data loss and no changes to your application to switch database endpoints.

Amazon RDS Blue/Green Deployments support for storage volume shrink is available for Amazon RDS for PostgreSQL major versions 12 and higher, RDS for MySQL major versions 5.7 and higher, and Amazon RDS for MariaDB major versions 10.4 and higher.

In a few clicks, update your databases using Amazon RDS Blue/Green Deployments via the Amazon RDS Console. Learn more about RDS Blue/Green Deployments and the supported engine versions here.

Read more


Announcing Amazon EMR 7.4 Release

Today, we are excited to announce the general availability of Amazon EMR 7.4. Amazon EMR 7.4 supports Apache Spark 3.5.2, Apache Hadoop 3.4.0, Trino 446, Apache HBase 2.5.5, Apache Phoenix 5.2.0, Apache Flink 1.19.0, Presto 0.287 and Apache Zookeeper 3.9.2.

Amazon EMR 7.4 enables in-transit encryption for 7 additional endpoints used by distributed applications like Apache Livy, Apache Hue, JupyterEnterpriseGateway, Apache Ranger, and Apache Zookeeper. This update builds on the previous release, Amazon EMR 7.3, which enabled in-transit encryption for 22 endpoints. In-transit encryption enables you to run workloads that meet strict regulatory or compliance requirements by protecting the confidentiality and integrity of your data.

Amazon EMR 7.4 is now available in all Regions where Amazon EMR is available. To learn how to enable in-transit encryption for your Amazon EMR clusters, view the TLS documentation. See Regional Availability of Amazon EMR and our release notes for more detailed information.

Read more


Amazon ECS announces AZ rebalancing that speeds up mean time to recovery after an infrastructure event

Amazon Web Services (AWS) has announced the launch of Availability Zone (AZ) rebalancing for Amazon Elastic Container Service (ECS), a new feature that automatically redistributes containerized workloads across AZs. This capability helps reduce the mean time to recovery after infrastructure events, enabling applications to maintain high availability without requiring manual intervention.

Customers spread tasks across multiple AZs to enhance application resilience and minimize the impact of AZ-level failures, following AWS best practices. However, infrastructure events (such as an AZ outage) can leave the task distribution for an ECS service in an uneven state, potentially causing an availability risk to customer applications. With AZ rebalancing, ECS now automatically adjusts task placement to maintain an even balance, ensuring your applications remain highly available even in the face of failure.

Starting today, customers can enable AZ rebalancing for new and existing ECS services through the AWS CLI or the ECS Console. The feature is available in all Commercial and AWS GovCloud (US) Regions, and supports the AWS Fargate and Amazon EC2 launch types. To learn more about AZ rebalancing and how to get started, visit the Amazon ECS documentation page.
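
As a boto3 sketch of enabling the feature on an existing service (cluster and service names hypothetical); the availabilityZoneRebalancing parameter follows this launch.

    import boto3

    ecs = boto3.client("ecs")

    # Parameter name follows this launch announcement.
    ecs.update_service(
        cluster="my-cluster",
        service="my-service",
        availabilityZoneRebalancing="ENABLED",
    )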
 

Read more


Amazon WorkSpaces introduces support for Rocky Linux

Amazon Web Services today announced support for Rocky Linux from CIQ on Amazon WorkSpaces Personal, a fully managed virtual desktop offering. With this launch, organizations can provide their end users with an RPM Package Manager compatible environment, optimized for running compute-intensive applications, while helping to improve IT agility and reduce costs. Now WorkSpaces Personal customers have the flexibility to choose from a wider range of Linux distributions including Rocky Linux, Red Hat Enterprise Linux, and Ubuntu Desktop.

With Rocky Linux on WorkSpaces Personal, IT organizations can enable developers to work in an environment that is consistent with their production environment, and provide power users like engineers and data scientists with on-demand access to Rocky Linux environments as needed - quickly spinning up and tearing down instances and managing the entire fleet through the AWS Console, without the burden of capacity planning or license management. WorkSpaces Personal offers a wide range of high-performance, license-included, fully-managed virtual desktop bundles—enabling organizations to only pay for the resources they use.

Rocky Linux on WorkSpaces Personal is available in all AWS Regions where WorkSpaces Personal is available, except for AWS China Regions. Depending on the WorkSpaces Personal running mode, you will be charged hourly or monthly for your virtual desktops. For more details on pricing, refer to Amazon WorkSpaces Pricing.

To get started with Rocky Linux on WorkSpaces Personal, sign in to the AWS Management Console and open the Amazon WorkSpaces console.  For more information, see the Amazon WorkSpaces Administration Guide.
 

Read more


Load Balancer Capacity Unit Reservation for Application and Network Load Balancers

Application Load Balancer (ALB) and Network Load Balancer (NLB) now support Load Balancer Capacity Unit (LCU) Reservation, which allows you to proactively set a minimum capacity for your load balancer, complementing the existing ability to auto-scale based on your traffic pattern.

With this feature, you can prepare for anticipated traffic surges by reserving a guaranteed minimum capacity in advance, providing customers increased scale and availability during high-demand events. LCU Reservation is ideal for scenarios such as event ticket sales, new product launches, or release of popular content. When using this feature, you pay only for the reserved LCUs and any additional usage above the reservation. You can easily configure this feature through the ELB console or API.
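
A hedged boto3 sketch of reserving minimum capacity ahead of an event; the operation and field names follow this launch and should be confirmed against the current ELB API reference, and the ARN and unit count are hypothetical.

    import boto3

    elbv2 = boto3.client("elbv2")

    # Reserve a guaranteed floor of 500 LCUs; the load balancer still
    # auto-scales above the reservation as traffic grows.
    # Operation/field names follow the launch announcement; verify in the docs.
    elbv2.modify_capacity_reservation(
        LoadBalancerArn="arn:aws:elasticloadbalancing:us-east-1:123456789012:loadbalancer/app/my-alb/abc123",
        MinimumLoadBalancerCapacity={"CapacityUnits": 500},
    )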

The feature is available for ALB in all commercial AWS Regions, including the AWS GovCloud (US) Regions and NLB in US East (N. Virginia), US East (Ohio), US West (Oregon), Asia Pacific (Hong Kong), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Europe (Frankfurt), Europe (Ireland), and Europe (Stockholm). To learn more, please refer to the ALB Documentation and NLB Documentation.

Read more


AWS Elastic Beanstalk adds support for Windows Bundled Logs

AWS Elastic Beanstalk now provides Windows Bundled Logs to enhance log collection capabilities for customers running applications on Windows platforms.

Customers can request full logs, and Beanstalk will automatically collect and bundle the most important log files into a single downloadable zip file. This bundled log set can include logs for the HealthD service, IIS, Application Event, Elastic Beanstalk, and AWS CloudFormation.

Elastic Beanstalk support for Windows Bundled Logs is available in all of the AWS Commercial Regions and AWS GovCloud (US) Regions that Elastic Beanstalk supports. For a complete list of regions and service offerings, see AWS Regions.

For more information about Elastic Beanstalk and Windows Bundled Logs, see the AWS Elastic Beanstalk Developer Guide.

Read more


Announcing customized delete protection for Amazon EBS Snapshots and EBS-backed AMIs

Customers can now further customize Recycle Bin rules to exclude EBS Snapshots and EBS-backed Amazon Machine Images (AMIs) based on tags. Customers use Recycle Bin to protect their resources from accidental deletion by retaining them for a period that they specify before the resources are permanently deleted. The newly launched feature helps customers save costs by customizing their Recycle Bin rules to apply delete protection only to critical data, while excluding non-critical data that does not require delete protection.

Creating Region-level retention rules is a simple way to have peace of mind that all EBS Snapshots and EBS-backed AMIs in an AWS Region are protected from accidental deletion by Recycle Bin. However, in some cases, customers have security scanning workflows that create temporary EBS Snapshots that are not used for recovery. Customers may also have backup automation that does not require additional delete protection. The newly added ability to add resource exclusion tags to Recycle Bin rules can help you reduce storage costs by excluding resources that do not require deletion protection from moving to the Recycle Bin.

This feature is now available in all AWS commercial Regions and the AWS GovCloud (US) Regions. Customers can add exclusion tags to their Recycle Bin rules via the EC2 Console, API/CLI, or SDK.
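
As a hedged boto3 sketch, ExcludeResourceTags is the newly announced rule parameter; the tag key and value below are hypothetical. Snapshots tagged purpose=security-scan would bypass this rule and be deleted immediately.

    import boto3

    rbin = boto3.client("rbin")

    # ExcludeResourceTags follows this launch; tag key/value are placeholders.
    rbin.create_rule(
        ResourceType="EBS_SNAPSHOT",
        RetentionPeriod={"RetentionPeriodValue": 7, "RetentionPeriodUnit": "DAYS"},
        Description="Region-level snapshot protection, excluding scan snapshots",
        ExcludeResourceTags=[
            {"ResourceTagKey": "purpose", "ResourceTagValue": "security-scan"},
        ],
    )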

To learn more about using Recycle Bin with exclusion tags, please refer to the technical documentation.

Read more


Announcing AWS CloudFormation support for Recycle Bin rules

Today, AWS announces AWS CloudFormation support for Recycle Bin, a data recovery feature that enables restoration of accidentally deleted Amazon EBS Snapshots and EBS-backed AMIs. You can now use Recycle Bin rules as a resource in your AWS CloudFormation templates, stacks, and stack sets.

Using AWS CloudFormation, you can now create, edit, and delete Recycle Bin rules as part of your CloudFormation templates and incorporate Recycle Bin rules into your automated infrastructure deployments. For example, a region-level Recycle Bin rule protects all resources of the specified type in the AWS Region in which the rule is created. If you have a template that automates the provisioning of new accounts, you can now add a region-level Recycle Bin rule to it. This ensures that all EBS Snapshots and/or EBS-backed AMIs in those accounts are automatically protected from accidental deletions and stored in the Recycle Bin according to the region-level rule.

This feature is now available in all AWS Commercial Regions and the AWS GovCloud (US) Regions.

To get started using Recycle Bin in AWS CloudFormation, visit the AWS CloudFormation console. Please refer to the AWS CloudFormation user guide for information on using Recycle Bin rules as a resource in your templates, stacks, and stack sets. Learn more about Recycle Bin here.
 

Read more


AWS Advanced NodeJS Driver is Generally Available

The Amazon Web Services (AWS) Advanced NodeJS Driver is now generally available for use with Amazon RDS and Amazon Aurora PostgreSQL and MySQL-compatible database clusters. This database driver provides support for faster switchover and failover times, Federated Authentication, and authentication with AWS Secrets Manager or AWS Identity and Access Management (IAM).

The Amazon Web Services (AWS) Advanced NodeJS Driver is a standalone driver that works with the underlying NodeJS community drivers, the PostgreSQL Client or the MySQL2 Client. You can install the PostgreSQL and MySQL packages for Windows, Mac, or Linux by following the established installation guides in GitHub. The driver relies on monitoring the database cluster status and being aware of the cluster topology to determine the new writer. This approach reduces writer failover times to single-digit seconds compared to the open-source driver.

The AWS Advanced NodeJS Driver is released as an open-source project under the Apache 2.0 license. For more details, click here to view the Getting Started instructions and guidance on how to raise issues.

Read more


Amazon ECS now allows you to configure software version consistency

Amazon Elastic Container Service (Amazon ECS) now allows you to configure software version consistency for specific containers within your Amazon ECS services.

By default, Amazon ECS resolves container image tags to the image digest (SHA256 hash of the image manifest) when you create a new Amazon ECS service or deploy an update to the service. This enforces that all tasks in the service are identical and launched with the same image digest(s). However, for certain containers within the task (for example, telemetry sidecars provided by a third party), customers may prefer not to enforce consistency and instead use a mutable container image tag (for example, LATEST). Now, you can disable software version consistency for one or more containers in your ECS service by configuring the new versionConsistency attribute in the container definition. ECS applies changes to version consistency when you redeploy your ECS service with the new task definition revision.
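
As a boto3 sketch of a task definition that pins the application image but leaves a third-party sidecar on a mutable tag; the names and images are hypothetical, and the versionConsistency attribute follows this launch.

    import boto3

    ecs = boto3.client("ecs")

    ecs.register_task_definition(
        family="my-app",  # hypothetical
        requiresCompatibilities=["FARGATE"],
        networkMode="awsvpc",
        cpu="256",
        memory="512",
        containerDefinitions=[
            {
                "name": "app",
                "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/app:v1.2",
                # Default behavior: tag is resolved to a digest at deployment.
            },
            {
                "name": "telemetry-sidecar",
                "image": "example-vendor/agent:latest",
                "versionConsistency": "disabled",  # keep the mutable tag unresolved
            },
        ],
    )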

You can disable software version consistency for your Amazon ECS services running on AWS Fargate platform version 1.4.0 or higher and/or Amazon ECS Agent version 1.70.0 or higher in all commercial and the AWS GovCloud (US) Regions. To learn more, please visit our documentation.
 

Read more


Amazon OpenSearch Service now scales to 1000 data nodes on a single cluster

Amazon OpenSearch Service now enables you to scale a single cluster to 1000 data nodes (1000 hot nodes and/or 750 warm nodes) and manage 25 petabytes of data (10 petabytes in hot nodes and a further 15 petabytes in warm nodes). You no longer need to set up multiple clusters for workloads that require more than 200 data nodes or more than 3 petabytes of data.

Until now, workloads of more than 3 to 4 petabytes of data required multiple clusters in OpenSearch Service. This may have required you to refactor your applications or business logic to split your workload across multiple clusters. In addition, every cluster requires its own configuration, management, and monitoring, adding to the operational overhead. With this launch, you can scale a single cluster up to 1000 nodes, or 25 petabytes of data, removing the operational overhead that comes with managing multiple clusters.

To scale a cluster beyond 200 nodes, you have to request an increase through Service Quotas, after which you can modify your cluster configuration using the AWS Console, AWS CLI, or the AWS SDK. Depending on the size of the cluster, OpenSearch Service will recommend configuration prerequisites across data nodes, cluster manager nodes, and coordinator nodes. For more information, refer to the documentation.
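
Once the quota increase is approved, scaling a domain could look like the following boto3 sketch; the domain name, instance types, and counts are hypothetical:

    import boto3

    opensearch = boto3.client("opensearch")
    opensearch.update_domain_config(
        DomainName="logs-prod",
        ClusterConfig={
            "InstanceType": "r6g.2xlarge.search",
            "InstanceCount": 600,  # above the previous 200-node ceiling
            "DedicatedMasterEnabled": True,
            "DedicatedMasterType": "r6g.4xlarge.search",
            "DedicatedMasterCount": 3,
        },
    )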

The new limits are available to all OpenSearch Service clusters running OpenSearch 2.17 and above in all AWS Regions where Amazon OpenSearch Service is available. Please refer to the AWS Region Table for more information about Amazon OpenSearch Service availability.

Read more


AWS announces Block Public Access for Amazon Virtual Private Cloud

Today, AWS announced Virtual Private Cloud (VPC) Block Public Access (BPA), a new centralized declarative control that enables network and security administrators to authoritatively block Internet traffic for their VPCs. VPC BPA supersedes any other setting and ensures your VPC resources are protected from unfettered Internet access in compliance with your organization's security and governance policies.

Amazon VPC allows customers to launch AWS resources in a logically isolated virtual network. Customers often have thousands of AWS accounts and VPCs that are owned by multiple business units or application developer teams. Central administrators have the critical responsibility to ensure that resources in their VPCs are accessible to the public Internet only in a highly controlled fashion. VPC BPA offers a single declarative control that allows admins to easily block Internet access to VPCs via the Internet Gateway or the Egress-only Internet Gateway, and ensures that there is no unintended public exposure of their AWS resources regardless of routing and security configuration. Admins can apply BPA across all or select VPCs in their account, block bi-directional or ingress-only Internet connectivity, and exclude select subnets for resources that need Internet access. VPC BPA is integrated with AWS Network Access Analyzer and VPC Flow Logs to support impact analysis, provide advanced visibility, and help customers meet audit and compliance requirements.
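
A boto3 sketch of what enabling BPA with a subnet exclusion could look like; the API and parameter names reflect this launch but should be confirmed in the Amazon VPC documentation, and the subnet ID is hypothetical:

    import boto3

    ec2 = boto3.client("ec2")

    # Block bidirectional internet traffic for VPCs in this account and Region.
    ec2.modify_vpc_block_public_access_options(
        InternetGatewayBlockMode="block-bidirectional"
    )

    # Exclude one subnet whose resources legitimately need internet access.
    ec2.create_vpc_block_public_access_exclusion(
        SubnetId="subnet-0123456789abcdef0",
        InternetGatewayExclusionMode="allow-bidirectional",
    )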

VPC BPA is available in all AWS Regions where Amazon VPC is offered. There is no additional charge for using this feature. For additional information, visit the Amazon VPC documentation and blog post.
 

Read more


Amazon EKS enhances Kubernetes control plane monitoring

Amazon EKS enhances visibility into the Kubernetes control plane by offering new intuitive dashboards in the EKS console and providing a broader set of Kubernetes control plane metrics. This enables cluster administrators to quickly detect, troubleshoot, and remediate issues. All EKS clusters on Kubernetes version 1.28 and above will now automatically display a curated set of dashboards visualizing key control plane metrics within the EKS console, making it easy to observe the health and performance of the control plane. Additionally, a broader set of control plane metrics is made available in Amazon CloudWatch and in a Prometheus endpoint, providing customers with the flexibility to use their preferred monitoring solution, whether that is Amazon CloudWatch, Amazon Managed Service for Prometheus, or third-party monitoring tools.

Newly introduced pre-configured dashboards in the EKS console provide cluster administrators with visual representations of key control plane metrics, enabling rapid assessment of control plane health and performance. Additionally, the EKS console dashboards now integrate with Amazon CloudWatch Log Insights queries, surfacing critical insights from control plane logs directly within the console. Finally, customers now get access to Kubernetes control plane metrics from kube-scheduler and kube-controller-manager, in addition to the existing API server metrics.

The new set of dashboards and metrics are available at no additional charge in all AWS commercial regions and AWS GovCloud (US) Regions. To learn more, visit the launch blog post or EKS user guide.

Read more


Amazon Aurora MySQL 3.08 (compatible with MySQL 8.0.39) is generally available

Starting today, Amazon Aurora MySQL-Compatible Edition 3 (with MySQL 8.0 compatibility) supports MySQL 8.0.39. In addition to several security enhancements and bug fixes, MySQL 8.0.39 contains enhancements that improve database availability when handling large numbers of tables and reduce InnoDB issues related to redo logging and index handling.

Aurora MySQL 3.08 also includes multiple availability improvements to reduce database restarts, memory management telemetry improvements with new CloudWatch metrics, major version upgrade optimizations for Aurora MySQL 2 to 3 upgrades, and general improvements around memory management and observability. For more details, refer to the Aurora MySQL 3.08 and MySQL 8.0.39 release notes.

To upgrade to Aurora MySQL 3.08, you can initiate a minor version upgrade manually by modifying your DB cluster, or you can enable the “Auto minor version upgrade” option when creating or modifying a DB cluster. This release is available in all AWS regions where Aurora MySQL is available.
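
For example, a manual upgrade via boto3 could look like the following sketch; the cluster identifier is hypothetical and the exact engine version string should be verified with describe-db-engine-versions:

    import boto3

    rds = boto3.client("rds")
    rds.modify_db_cluster(
        DBClusterIdentifier="my-aurora-cluster",
        EngineVersion="8.0.mysql_aurora.3.08.0",  # assumed version string
        ApplyImmediately=True,
    )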

Amazon Aurora is designed for unparalleled high performance and availability at global scale with full MySQL and PostgreSQL compatibility. It provides built-in security, continuous backups, serverless compute, up to 15 read replicas, automated multi-Region replication, and integrations with other AWS services. To get started with Amazon Aurora, take a look at our getting started page.

Read more


Amazon DynamoDB announces general availability of attribute-based access control

Amazon DynamoDB is a serverless, NoSQL, fully managed database with single-digit millisecond performance at any scale. Today, we are announcing the general availability of attribute-based access control (ABAC) support for tables and indexes in all AWS Commercial Regions and the AWS GovCloud (US) Regions. ABAC is an authorization strategy that lets you define access permissions based on tags attached to users, roles, and AWS resources. Using ABAC with DynamoDB helps you simplify permission management with your tables and indexes as your applications and organizations scale.

ABAC uses tag-based conditions in your AWS Identity and Access Management (IAM) policies or other policies to allow or deny specific actions on your tables or indexes when IAM principals’ tags match the tags for the tables. Using tag-based conditions, you can also set more granular access permissions based on your organizational structures. ABAC automatically applies your tag-based permissions to new employees and changing resource structures, without rewriting policies as organizations grow.
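
For illustration, a hypothetical IAM policy statement, expressed here as a Python dict, that allows access only to tables whose team tag matches the calling principal's team tag; the tag key is an example:

    # Principals may read and write only DynamoDB tables tagged with
    # the same "team" value as the principal itself.
    abac_policy = {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Action": ["dynamodb:GetItem", "dynamodb:PutItem", "dynamodb:Query"],
                "Resource": "*",
                "Condition": {
                    "StringEquals": {
                        "aws:ResourceTag/team": "${aws:PrincipalTag/team}"
                    }
                },
            }
        ],
    }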

There is no additional cost to use ABAC. You can get started with ABAC using the AWS Management Console, AWS API, AWS CLI, AWS SDK, or AWS CloudFormation. Learn more at Using attribute-based access control with DynamoDB.

Read more


Self-service capacity management for AWS Outposts

AWS Outposts now supports self-service capacity management, making it easy for you to view and manage compute capacity on your Outposts. Outposts brings native AWS services, infrastructure, and operating models to virtually any data center, co-location space, or on-premises facility, providing the same services, tools, and partner solutions for EC2 on premises. Customers have evolving business requirements and often need to fine-tune their application needs as their business scales. Capacity management enables viewing and modifying the configuration of EC2 capacity installed on Outposts.

Customers define their configuration when ordering a new Outpost to support a variety of different instances. Customers use capacity management to view these instances on their Outposts, their configured sizes, and their placement within the Outposts. Customers can also use capacity management to view, plan, and modify their capacity configuration through this new self-service UI and API.

These capacity management features are available in all AWS Regions where Outposts is supported. Check out the Outposts rack FAQs page and the Outposts servers FAQs page for the full list of supported Regions.

To learn more about these capacity management capabilities for Outposts, read the Outposts user guide. To discuss Outposts capacity needs for your on-premises workloads with an Outposts specialist, submit this form.
 

Read more


AWS End User Messaging announces cost allocation tags for SMS

Today, AWS End User Messaging announces cost allocation tags for SMS resources, allowing you to track spend for each tag associated with a resource. AWS End User Messaging provides developers with a scalable and cost-effective messaging infrastructure without compromising the safety, security, or results of their communications.

You can now assign a tag to each resource and summarize the spend of that resource using cost allocation tags in the AWS Billing and Cost Management console.

To learn more, visit the AWS End User Messaging SMS User Guide.
 

Read more


Amazon Application Recovery Controller zonal shift and zonal autoshift extend support for EC2 Auto Scaling

Amazon Application Recovery Controller (ARC) zonal shift and zonal autoshift have expanded their capabilities and now support EC2 Auto Scaling. ARC zonal shift helps you quickly recover an unhealthy application in an Availability Zone (AZ), and reduce the duration and severity of impact to the application due to events such as power outages and hardware or software failures. ARC zonal autoshift safely and automatically shifts your application’s traffic away from an AZ when AWS identifies a potential failure affecting that AZ.

EC2 Auto Scaling customers can now shift traffic away from an AZ in the event of a failure. Zonal shift works with EC2 Auto Scaling by stopping dynamic scale-in, so that capacity is not unnecessarily removed, and by launching new EC2 instances only in the healthy AZs. In addition, you can enable or disable health checks in the impaired AZ; when disabled, unhealthy instance replacement is paused in the AZ that has an active zonal shift. Enable your EC2 Auto Scaling Groups for zonal shift using the EC2 Auto Scaling console or API, and then trigger a zonal shift or enable autoshift via the ARC zonal shift console or API. To learn more, review the ARC documentation and read this launch blog.
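
For example, a boto3 sketch of starting a zonal shift for an enabled ASG; the ASG ARN and zone ID are hypothetical:

    import boto3

    arc = boto3.client("arc-zonal-shift")
    arc.start_zonal_shift(
        resourceIdentifier=(
            "arn:aws:autoscaling:us-east-1:123456789012:autoScalingGroup:"
            "11111111-2222-3333-4444-555555555555:autoScalingGroupName/web-asg"
        ),
        awayFrom="use1-az1",  # zone ID of the impaired AZ
        expiresIn="2h",       # the shift expires automatically
        comment="Shifting traffic away from impaired AZ",
    )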

There is no additional charge for using zonal shift or zonal autoshift. See the AWS Regional Services List for the most up-to-date availability information.
 

Read more


EC2 Auto Scaling now supports Amazon Application Recovery Controller zonal shift and zonal autoshift

EC2 Auto Scaling now supports Amazon Application Recovery Controller (ARC) zonal shift and zonal autoshift to help you quickly recover an impaired application from failures in an Availability Zone (AZ). Starting today, you can shift the launches of EC2 instances in an Auto Scaling Group (ASG) away from an impaired AZ to quickly recover your unhealthy application in another AZ, reducing the duration and severity of impact due to events such as power outages and hardware or software failures. This new integration also brings support for ARC zonal autoshift, which automatically starts a zonal shift for enabled ASGs when AWS identifies a potential failure affecting an AZ.

You can initiate a zonal shift for an ASG from the Amazon EC2 Auto Scaling or Application Recovery Controller console. You can also use the AWS SDK to start a zonal shift and programmatically shift the instances in your ASG away from an AZ, and shift it back once the affected AZ is healthy.

There is no additional charge for using zonal shift. Zonal shift is now available in all AWS Regions. To get started, read the launch blog, or refer to the documentation.
 

Read more


AWS End User Messaging introduces phone number block/allow rules

Today, AWS End User Messaging expands SMS protect capabilities with phone number rules. With phone number rules, you can explicitly block or allow messages to individual phone numbers, overriding your country rule settings.

You can use the new rules to fine-tune your messaging strategy. For instance, you can use “block” rules to stop sending messages to specific numbers where you see abuse, helping you avoid unnecessary SMS costs. Phone number rules can be configured in the AWS End User Messaging console or via APIs, enabling seamless integration with customer data platforms, contact centers, or other systems and databases.

To learn more and start using phone number block/allow rules, visit the AWS End User Messaging SMS User Guide.

Read more


AWS End User Messaging launches message feedback tracking

Today, AWS End User Messaging announces the ability to track feedback for messages sent through the SMS and MMS channels. AWS End User Messaging provides developers with a scalable and cost-effective messaging infrastructure without compromising the safety, security, or results of their communications.

For each SMS and MMS you send, you can now track message feedback rates like one-time passcode conversions, promotional offer link clicks, or online shopping cart additions. Message feedback rates allow you to track leading indicators for message performance that are specific to your use case.

To learn more, visit the AWS End User Messaging SMS User Guide.
 

Read more


AWS Command Line Interface adds PKCE-based authorization for single sign-on

The AWS Command Line Interface (AWS CLI) v2 now supports OAuth 2.0 authorization code flows using the Proof Key for Code Exchange (PKCE) standard. This provides a simple and safe way to retrieve credentials for AWS CLI commands.

The AWS CLI is a unified tool that enables you to control multiple AWS services from the command line and to automate them through scripts. AWS CLI v2 offers integration with AWS IAM Identity Center, the recommended service for managing workforce access to AWS applications and multiple AWS accounts. The authorization code flow with PKCE is the recommended best practice for access to AWS resources from desktops and mobile devices with web browsers. It is now the default behavior when running the aws sso login or aws configure sso commands.

To learn more, see Configuring IAM Identity Center authentication with the AWS CLI in the AWS CLI User Guide. Share your questions, comments, and issues with us on GitHub. AWS IAM Identity Center is available at no additional cost in all AWS Regions where it is supported.
 

Read more


AWS End User Messaging announces integration with Amazon EventBridge

Today, AWS End User Messaging announces an integration with Amazon EventBridge. AWS End User Messaging provides developers with a scalable and cost-effective messaging infrastructure without compromising the safety, security, or results of their communications.

Now your SMS, MMS, and voice delivery events, which contain information like the status of the message, price, and carrier information, will be available in EventBridge. You can then send your SMS events to other AWS services and the many SaaS applications that EventBridge integrates with. EventBridge also allows you to create rules that filter and route your SMS events to event destinations you specify.

To learn more, visit the AWS End User Messaging SMS User Guide.
 

Read more


AWS IoT Core adds capabilities to enrich MQTT messages and simplify permission management

AWS IoT Core, a managed cloud service that lets you securely connect Internet of Things (IoT) devices to the cloud and manage them at scale, announces two new capabilities: the ability to enrich MQTT messages with additional data, and thing-to-connection association to simplify permission management. The message enrichment capability enables developers to augment MQTT messages from devices with additional information from the thing registry, without modifying their devices. Thing-to-connection association enables mapping an MQTT client to a registry thing for client IDs that don’t match the thing name. This enables developers to leverage registry information in IoT policies, easily associate device actions to lifecycle events, and utilize existing capabilities like custom cost allocation and resource-specific logging, previously available only when client IDs and thing names matched.

To enrich all messages from devices, developers can define a subset of registry attributes as propagating attributes. They can customize their message routing and processing workflows using this appended data. For example, in automotive applications, developers can selectively route messages to the desired backend depending on the appended metadata, such as vehicle make and type stored in the thing registry. Additionally, with thing-to-connection association, developers can leverage existing features like using registry metadata in IoT policies, associating AWS IoT Core lifecycle events to a thing, performing custom cost allocation through billing groups, and enabling resource-specific logging, even if the MQTT client ID and thing name differ.

These new features are available in all AWS Regions where AWS IoT Core is available. For more information, refer to the developer guide and API documentation.

Read more


Introducing Amazon Route 53 Resolver DNS Firewall Advanced

Today, AWS announced Amazon Route 53 Resolver DNS Firewall Advanced, a new set of capabilities on Route 53 Resolver DNS Firewall that allow you to monitor and block suspicious DNS traffic associated with advanced DNS threats, such as DNS tunneling and Domain Generation Algorithms (DGAs), that are designed to avoid detection by threat intelligence feeds or are difficult for threat intelligence feeds alone to track and block in time.

Today, Route 53 Resolver DNS Firewall helps you block DNS queries made for domains identified as low-reputation or suspected to be malicious, and to allow queries for trusted domains. With DNS Firewall Advanced, you can now enforce additional protections that monitor and block your DNS traffic in real-time based on anomalies identified in the domain names being queried from your VPCs. To get started, you can configure one or multiple DNS Firewall Advanced rule(s), specifying the type of threat (DGA, DNS tunneling) to be inspected. You can add the rule(s) to a DNS Firewall rule group, and enforce it on your VPCs by associating the rule group to each desired VPC directly or by using AWS Firewall Manager, AWS Resource Access Manager (RAM), AWS CloudFormation, or Route 53 Profiles.

Route 53 Resolver DNS Firewall Advanced is available in all AWS Regions, including the AWS GovCloud (US) Regions. To learn more about the new capabilities and the pricing, visit the Route 53 Resolver DNS Firewall webpage and the Route 53 pricing page. To get started, visit the Route 53 documentation.

Read more


Centrally manage root access in AWS Identity and Access Management (IAM)

Today, AWS Identity and Access Management (IAM) is launching a new capability allowing customers to centrally manage their root credentials, simplify auditing of credentials, and perform tightly scoped privileged tasks across their AWS member accounts managed using AWS Organizations.

Now, administrators can remove unnecessary root credentials for member accounts in AWS Organizations and then, if needed, perform tightly scoped privileged actions using temporary credentials. By removing unnecessary credentials, administrators have fewer highly privileged root credentials that they must secure with multi-factor authentication (MFA), making it easier to effectively meet MFA compliance requirements. This helps administrators control highly privileged access in their accounts, reduces operational effort, and makes it easier for them to secure their AWS environment.
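
For example, after removing a member account's root credentials, an administrator in the management account can perform a scoped privileged task with temporary root credentials. A boto3 sketch using the STS AssumeRoot API; the account ID and task policy ARN are hypothetical, and parameter shapes should be confirmed against the STS API reference:

    import boto3

    sts = boto3.client("sts")
    # Request short-lived root credentials scoped to a single privileged task.
    resp = sts.assume_root(
        TargetPrincipal="111122223333",  # member account ID
        TaskPolicyArn={
            "arn": "arn:aws:iam::aws:policy/root-task/IAMDeleteRootUserCredentials"
        },
        DurationSeconds=900,
    )
    creds = resp["Credentials"]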

The capability to manage root access in AWS member accounts is available in all AWS Regions, including the AWS GovCloud (US) Regions and China Regions. To get started managing your root access in IAM, visit the list of resources below:

Read more


Amazon EventBridge event delivery latency metric now in the AWS GovCloud (US) Regions

The Amazon EventBridge Event Bus end-to-end event delivery latency metric in Amazon CloudWatch, which tracks the duration between event ingestion and successful delivery to the targets on your Event Bus, is now available in the AWS GovCloud (US-East) and AWS GovCloud (US-West) Regions. This new IngestionToInvocationSuccessLatency metric allows you to detect and respond to event processing delays caused by under-performing, under-scaled, or unresponsive targets.

Amazon EventBridge Event Bus is a serverless event router that enables you to create highly scalable event-driven applications by routing events between your own applications, third-party SaaS applications, and other AWS services. You can set up rules to determine where to send your events, allowing for applications to react to changes in your systems as they occur. With the new IngestionToInvocationSuccessLatency metric you can now better monitor and understand event delivery latency to your targets, increasing the observability of your event-driven architecture.
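
A boto3 sketch of retrieving the metric; AWS/Events is the standard EventBridge namespace, but the EventBusName dimension used here is an assumption to verify in the documentation:

    import boto3
    from datetime import datetime, timedelta

    cloudwatch = boto3.client("cloudwatch", region_name="us-gov-west-1")
    resp = cloudwatch.get_metric_statistics(
        Namespace="AWS/Events",
        MetricName="IngestionToInvocationSuccessLatency",
        Dimensions=[{"Name": "EventBusName", "Value": "default"}],  # assumed dimension
        StartTime=datetime.utcnow() - timedelta(hours=1),
        EndTime=datetime.utcnow(),
        Period=300,
        Statistics=["Average", "Maximum"],
    )
    print(resp["Datapoints"])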

To learn more about the new IngestionToInvocationSuccessLatency metric for Amazon EventBridge Event Buses, please read our blog post and documentation.
 

Read more


Amazon EC2 G6 instances now available in the AWS GovCloud (US-West) Region

Starting today, the Amazon Elastic Compute Cloud (Amazon EC2) G6 instances powered by NVIDIA L4 GPUs are now available in the AWS GovCloud (US-West) Region. G6 instances can be used for a wide range of graphics-intensive and machine learning use cases.

Customers can use G6 instances for deploying ML models for natural language processing, language translation, video and image analysis, speech recognition, and personalization as well as graphics workloads, such as creating and rendering real-time, cinematic-quality graphics and game streaming. G6 instances feature up to 8 NVIDIA L4 Tensor Core GPUs with 24 GB of memory per GPU and third generation AMD EPYC processors. They also support up to 192 vCPUs, up to 100 Gbps of network bandwidth, and up to 7.52 TB of local NVMe SSD storage.

Customers can purchase G6 instances as On-Demand Instances, Reserved Instances, Spot Instances, or as part of Savings Plans. To get started, use the AWS Management Console, AWS Command Line Interface (CLI), or AWS SDKs. To learn more, visit the G6 instance page.

Read more


Amazon S3 Access Grants now integrate with Amazon Redshift

Amazon S3 Access Grants now integrate with Amazon Redshift. S3 Access Grants map identities from your Identity Provider (IdP), such as Entra ID and Okta, to datasets stored in Amazon S3, helping you to easily manage data permissions at scale. This integration gives you the ability to manage S3 permissions for AWS IAM Identity Center users and groups when using Redshift, without the need to write and maintain bucket policies or individual IAM roles.

Using S3 Access Grants, you can grant permissions to buckets or prefixes in S3 to users and groups in your IdP by connecting S3 with IAM Identity Center. Then, when you use Identity Center authentication for Redshift, end users in the appropriate user groups will automatically have permission to read and write data in S3 using COPY, UNLOAD, and CREATE LIBRARY SQL commands. S3 Access Grants then automatically update S3 permissions as users are added and removed from user groups in the IdP.

Amazon S3 Access Grants with Amazon Redshift are available for users federated via IdP in all AWS Regions where AWS IAM Identity Center is available. For pricing details, visit Amazon S3 pricing and Amazon Redshift pricing. To learn more about S3 Access Grants, refer to the documentation.

Read more


Amazon S3 now supports up to 1 million buckets per AWS account

Amazon S3 has increased the default bucket quota from 100 to 10,000 per AWS account. Additionally, any customer can request a quota increase up to 1 million buckets. As a result, customers can create new buckets for individual datasets that they store in S3 and more easily take advantage of capabilities such as default encryption, security policies, and S3 Replication, removing barriers to scaling and optimizing their S3 storage architecture.

Amazon S3’s new default bucket quota of 10,000 buckets is now applied to all AWS accounts and requires no action by customers. To increase your bucket quota from 10,000 to up to 1 million buckets, simply request a quota increase via Service Quotas. You can create your first 2,000 buckets at no cost. Above 2,000 buckets, you are charged a small monthly fee.
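
For example, a boto3 sketch of requesting an increase through Service Quotas; the quota is looked up by name rather than hard-coding a quota code, and the matching logic is illustrative:

    import boto3

    quotas = boto3.client("service-quotas")
    # Find the general purpose bucket quota for S3, then request an increase.
    s3_quotas = quotas.list_service_quotas(ServiceCode="s3")["Quotas"]
    bucket_quota = next(q for q in s3_quotas if "bucket" in q["QuotaName"].lower())
    quotas.request_service_quota_increase(
        ServiceCode="s3",
        QuotaCode=bucket_quota["QuotaCode"],
        DesiredValue=100000,
    )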

The increased default general purpose bucket limit per account now applies to all AWS Regions. To learn more about general purpose bucket quotas, visit the S3 User Guide.
 

Read more


Amazon Keyspaces (for Apache Cassandra) reduces prices by up to 75%

Amazon Keyspaces (for Apache Cassandra) is a scalable, serverless, highly available, and fully managed Apache Cassandra-compatible database service. Effective today, Amazon Keyspaces (for Apache Cassandra) is reducing prices by up to 75% across several pricing dimensions.

Amazon Keyspaces supports both on-demand and provisioned capacity modes for writing and reading data within a Region or across multiple Regions. Keyspaces’ on-demand mode provides a fully serverless experience with pay-as-you-go pricing and automatic scaling, eliminating the need for capacity planning. Many customers choose on-demand mode for its simplicity, enabling them to build modern, serverless applications that can start small and seamlessly scale to millions of requests per second.

Amazon Keyspaces has lowered prices for on-demand mode by up to 56% for single-Region and up to 65% for multi-Region usage, and for provisioned mode by up to 13% for single-Region and up to 20% for multi-Region usage. Additionally, to make data deletion more cost-effective, Keyspaces has lowered time-to-live (TTL) delete prices by 75%. Previously, on-demand was the cost-effective choice for spiky workloads, but with this pricing change, it now offers a lower cost for most provisioned capacity workloads as well. This change transforms on-demand mode into the recommended and default choice for the majority of Keyspaces workloads.

Together, these price reductions make Amazon Keyspaces even more cost-effective and simplify building, scaling, and managing Cassandra workloads. This pricing change is available in all AWS Regions where AWS offers Amazon Keyspaces. To learn more about the new price reductions, visit the Amazon Keyspaces Pricing.

Read more


Amazon DynamoDB reduces prices for on-demand throughput and global tables

Amazon DynamoDB is a serverless, NoSQL, fully managed database with single-digit millisecond performance at any scale. Starting today, we have made Amazon DynamoDB even more cost-effective by reducing prices for on-demand throughput by 50% and global tables by up to 67%.

DynamoDB on-demand mode offers a truly serverless experience with pay-per-request pricing and automatic scaling without the need for capacity planning. Many customers prefer the simplicity of on-demand mode to build modern, serverless applications that can start small and scale to millions of requests per second. While on-demand was previously cost-effective for spiky workloads, with this pricing change, most provisioned capacity workloads on DynamoDB will achieve a lower price with on-demand mode. This pricing change is transformative as it makes on-demand the default and recommended mode for most DynamoDB workloads.

Global tables provide a fully managed, multi-active, multi-Region data replication solution that delivers increased resiliency, improved business continuity, and 99.999% availability for globally distributed applications at any scale. DynamoDB has reduced pricing for multi-Region replicated writes to match the pricing of single-Region writes, simplifying cost modeling for multi-Region applications. For on-demand tables, this price change lowers replicated write pricing by 67%, and for tables using provisioned capacity, replicated write pricing has been reduced by 33%.

These pricing changes took effect in all AWS Regions on November 1, 2024, and will be automatically reflected in your AWS bill. To learn more about the new price reductions, see the AWS Database Blog, or visit the Amazon DynamoDB Pricing page.
 

Read more


Amazon OpenSearch Service now supports OpenSearch version 2.17

You can now run OpenSearch version 2.17 in Amazon OpenSearch Service. With OpenSearch 2.17, we have made several improvements in the areas of vector search, query performance, and the machine learning (ML) toolkit to help accelerate application development and enable generative AI workloads.

This launch introduces disk-optimized vector search, a new option for the vector engine that's designed to run efficiently with less memory to deliver accurate, economical vector search at scale. In addition, OpenSearch’s FAISS engine now supports byte vectors, lowering cost and latency by compressing k-NN indexes with minimal recall degradation. You can now also encode numeric terms as a roaring bitmap, enabling aggregations, filtering, and more with lower retrieval latency and reduced memory usage.
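
As an illustration, a sketch of creating a disk-optimized k-NN index on a 2.17 domain; the endpoint, credentials, and field parameters are hypothetical, and the on_disk mode setting should be confirmed in the OpenSearch 2.17 vector search documentation:

    import requests

    index_body = {
        "settings": {"index.knn": True},
        "mappings": {
            "properties": {
                "embedding": {
                    "type": "knn_vector",
                    "dimension": 768,
                    "mode": "on_disk",  # disk-optimized vector search
                }
            }
        },
    }
    requests.put(
        "https://my-domain.us-east-1.es.amazonaws.com/docs-index",
        json=index_body,
        auth=("master-user", "master-password"),
        timeout=30,
    )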

This launch also includes key features to help you build ML-powered applications. First, with ML inference search processors, you can now run model predictions while executing search queries. In addition, you can perform high-volume ML tasks, such as generating embeddings for large datasets and ingesting them into k-NN indexes, using asynchronous batch ingestion. Finally, this launch adds threat intelligence capabilities to the Security Analytics solution, enabling you to use customized Structured Threat Information Expression (STIX)-compliant threat intelligence feeds to provide insights that support decision-making and remediation.

For information on upgrading to OpenSearch 2.17, please see the documentation. OpenSearch 2.17 is now available in all AWS Regions where Amazon OpenSearch Service is available.

Read more


AWS launches user-based subscription of Microsoft Remote Desktop Services

Today, AWS announces the general availability of Microsoft Remote Desktop Services with AWS provided licenses. Customers can now purchase user-based subscription of Microsoft Remote Desktop Services licenses directly from AWS. This new offering provides licensing flexibility and business continuity for customers running graphical user interface (GUI) based applications on Amazon Elastic Compute Cloud (Amazon EC2) Windows instances.

Thousands of customers use Windows Server on Amazon EC2 to host custom applications or independent software vendor (ISV) products that require remote connectivity via Microsoft Remote Desktop Services. Previously, customers had to procure the licenses through various Microsoft licensing agreements. With the AWS provided subscription, customers can now access Microsoft Remote Desktop Services licenses from AWS on a per-user, per-month basis, eliminating the need for separate licensing agreements and reducing operational overhead. Unlike AWS provided Microsoft Office and Visual Studio, customers can continue using their existing Active Directory(s) for managing user access to GUI-based applications on Amazon EC2. Moreover, customers can have more than two concurrent user sessions with Windows Server instances. Lastly, AWS License Manager enables centralized tracking for license usage, simplifying governance and cost management. Customers can start using AWS provided Microsoft Remote Desktop Services licenses without rebuilding their existing Amazon EC2 instances, providing a seamless migration path for existing workloads.

AWS provided user-based subscription of Microsoft Remote Desktop Services licenses is available in all AWS Regions that License Manager currently supports. For further questions, visit the user guide. To learn more and get started, visit here.
 

Read more


AWS Transit Gateway and AWS Cloud WAN enhance visibility metrics and Path MTU support

AWS Transit Gateway (TGW) and AWS Cloud WAN now support per availability zone (AZ) metrics delivered to CloudWatch. Furthermore, both services now support Path Maximum Transmission Unit Discovery (PMTUD) for effective mitigation against MTU mismatch issues in their global networks.

TGW and Cloud WAN allow customers to monitor their global networks through performance and traffic metrics such as bytes in/out, packets in/out, and packets dropped. Until now, these metrics were available at the attachment level and at aggregate TGW and Core Network Edge (CNE) levels. With this launch, customers have more granular visibility into AZ-level metrics for VPC attachments. AZ-level metrics enable customers to rapidly troubleshoot AZ impairments and provide deeper visibility into AZ-level traffic patterns across TGW and Cloud WAN.

TGW and Cloud WAN now also support the standard PMTUD mechanism for traffic ingressing on VPC attachments. Until now, jumbo-sized packets exceeding the TGW/CNE MTU (8500 bytes) were silently dropped on VPC attachments. With this launch, an Internet Control Message Protocol (ICMP) Fragmentation Needed response message is sent back to sender hosts, allowing them to remediate packet MTU size and thus minimize packet loss due to MTU mismatches in their network. PMTUD support is available for both IPv4 and IPv6 packets.

The per-AZ CloudWatch metrics and PMTUD support are available within each service in all AWS Regions where TGW or Cloud WAN are available. For more information, see the AWS Transit Gateway and AWS Cloud WAN documentation pages.

Read more


AWS Lambda adds support for Python 3.13

AWS Lambda now supports creating serverless applications using Python 3.13. Developers can use Python 3.13 as both a managed runtime and a container base image, and AWS will automatically apply updates to the managed runtime and base image as they become available.

Python 3.13 is the latest long-term support (LTS) release of Python and is expected to be supported for security and bug fixes until October 2029. This release provides Lambda customers access to the latest Python 3.13 language features. You can use Python 3.13 with Lambda@Edge (in supported Regions), allowing you to customize low-latency content delivered through Amazon CloudFront. Powertools for AWS Lambda (Python), a developer toolkit to implement serverless best practices and increase developer velocity, also supports Python 3.13.
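
For example, creating a function on the new runtime with boto3; the role ARN and deployment package are hypothetical:

    import boto3

    lambda_client = boto3.client("lambda")
    with open("function.zip", "rb") as f:
        lambda_client.create_function(
            FunctionName="hello-py313",
            Runtime="python3.13",  # new managed runtime
            Role="arn:aws:iam::123456789012:role/lambda-exec",
            Handler="app.handler",
            Code={"ZipFile": f.read()},
        )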

The Python 3.13 runtime is available in all Regions where Lambda is available, including China and the AWS GovCloud (US) Regions.

You can use the full range of AWS deployment tools, including the Lambda console, AWS CLI, AWS Serverless Application Model (AWS SAM), AWS CDK, and AWS CloudFormation to deploy and manage serverless applications written in Python 3.13. For more information, including guidance on upgrading existing Lambda functions, read our blog post. For more information about AWS Lambda, visit the product page.

Read more


Amazon OpenSearch Service now supports 4th generation Intel (C7i, M7i, R7i) instances

Amazon OpenSearch Service now supports compute-optimized (C7i), general-purpose (M7i), and memory-optimized (R7i) instances based on 4th Generation Intel Xeon Scalable processors. These instances deliver up to 15% better price performance over 3rd generation Intel C6i, M6i, and R6i instances respectively. You can update your domain to the new instances seamlessly through the OpenSearch Service console or APIs.

These instances support new Intel Advanced Matrix Extensions (AMX) that accelerate matrix multiplication operations for applications such as CPU-based ML. The 4th generation Intel instances support the latest DDR5 memory, offering higher bandwidth compared to 3rd generation Intel processors. To learn more about 4th generation Intel improvements, please see the following C7i blog, M7i blog, and R7i blog.

One or more of the 4th generation Intel instance types are now available on Amazon OpenSearch Service across 22 Regions globally: US East (N. Virginia), US East (Ohio), US West (N. California), US West (Oregon), Canada (Central), Asia Pacific (Jakarta), Asia Pacific (Malaysia), Asia Pacific (Mumbai), Asia Pacific (Seoul), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Europe (Frankfurt), Europe (Ireland), Europe (London), Europe (Milan), Europe (Paris), Europe (Spain), Europe (Stockholm), South America (Sao Paulo), AWS GovCloud (US-East), and AWS GovCloud (US-West).

To learn more about region specific instance type availability and their pricing, visit our pricing page. To learn more about Amazon OpenSearch Service, please visit the product page.

Read more


AWS Control Tower launches the ability to resolve drift for optional controls

AWS Control Tower customers can now use the ResetEnabledControl API to programmatically resolve control drift or redeploy a control to its intended configuration. Control drift occurs when an AWS Control Tower managed control is modified outside of AWS Control Tower governance. Resolving drift helps you adhere to your governance and compliance requirements. You can use this API with all AWS Control Tower optional controls except service control policy (SCP)-based preventive controls. AWS Control Tower APIs enhance the end-to-end developer experience by enabling automation for integrated workflows and managing workloads at scale.
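
For example, a boto3 sketch that resets a drifted control and checks the resulting operation; the enabled control ARN is hypothetical:

    import boto3

    ct = boto3.client("controltower")
    op = ct.reset_enabled_control(
        enabledControlIdentifier=(
            "arn:aws:controltower:us-east-1:123456789012:enabledcontrol/EXAMPLE1234"
        )
    )
    # Poll the asynchronous operation for its status.
    status = ct.get_control_operation(operationIdentifier=op["operationIdentifier"])
    print(status)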

Below is the list of AWS Control Tower control APIs that are now supported in the regions where AWS Control Tower is available. Please visit the AWS Control Tower API reference for more information.

  • AWS Control Tower Control APIs - EnableControl, DisableControl, GetControlOperation, GetEnabledControl, ListEnabledControls, UpdateEnabledControl, TagResource, UntagResource, ListTagsForResource, and ResetEnabledControl.

To learn more, visit the AWS Control Tower homepage. For more information about the AWS Regions where AWS Control Tower is available, see the AWS Region table.
 

Read more


Amazon DynamoDB announces user experience enhancements to organize your tables in the AWS GovCloud (US) Regions

Amazon DynamoDB now enables customers to easily find frequently used tables in the DynamoDB console in the AWS GovCloud (US) Regions. Customers can favorite their tables in the console’s tables page for quicker table access.

Customers can click the favorites icon to view their favorited tables in the console’s tables page. With this update, customers have a faster and more efficient way to find and work with tables that they often monitor, manage, and explore.

Customers can start using favorite tables at no additional cost. Get started with creating a DynamoDB table from the AWS Management Console.

Read more


Amazon EBS now supports detailed performance statistics on EBS volume health

Today, Amazon announced the availability of detailed performance statistics for Amazon Elastic Block Store (EBS) volumes. This new capability provides you with real-time visibility into the performance of your EBS volumes, making it easier to monitor the health of your storage resources and take actions sooner.

With detailed performance statistics, you can access 11 metrics at up to per-second granularity to monitor input/output (I/O) statistics of your EBS volumes, including I/O operation counts and I/O latency histograms. The granular visibility provided by these metrics helps you quickly identify and proactively troubleshoot application performance bottlenecks that may be caused by factors such as reaching an EBS volume's provisioned IOPS or throughput limits, enabling you to enhance application performance and resiliency.

Detailed performance statistics for EBS volumes are available by default for all EBS volumes attached to a Nitro-based EC2 instance in all AWS Commercial, China, and the AWS GovCloud (US) Regions, at no additional charge.

To get started with EBS detailed performance statistics, please visit the documentation here to learn more about the available metrics and how to access them using NVMe-CLI tools.

Read more


Amazon EventBridge announces up to 94% improvement in end-to-end latency for Event Buses

Amazon EventBridge announces an up to 94% improvement in end-to-end latency for Event Buses since January 2023, enabling you to handle highly latency-sensitive applications, including fraud detection and prevention, industrial automation, and gaming applications. End-to-end latency is measured as the time from event ingestion to the first event invocation attempt. This lower latency enables you to build highly responsive and efficient event-driven architectures for your time-sensitive applications. You can now detect and respond to critical events more quickly, enabling rapid innovation, faster decision-making, and improved operational efficiency.

For latency-sensitive, mission-critical applications, even small delays can have a big impact. To address this, Amazon EventBridge Event Bus has significantly reduced its P99 latency from 2,235.23 ms measured in January 2023 to just 129.33 ms measured in August 2024. This significant improvement allows EventBridge to deliver events in real time to your mission-critical applications.

Amazon EventBridge Event Bus’ lower latency is applied by default across all AWS Regions where Amazon EventBridge is available, including the AWS GovCloud (US) Regions, at no additional cost to you. Customers can monitor these improvements through the IngestionToInvocationStartLatency or the end-to-end IngestionToInvocationSuccessLatency metrics available in the EventBridge console dashboard or via Amazon CloudWatch. This benefits customers globally and ensures consistent low-latency event processing, regardless of geographic location.

For more information on Amazon EventBridge Event Bus, please visit our documentation. To get started with Amazon EventBridge, visit the AWS Console and follow these instructions from the user guide.

Read more


Amazon Kinesis Data Streams launches CloudFormation support for resource policies

Amazon Kinesis Data Streams now provides AWS CloudFormation support for managing resource policies for data streams and consumers. You can use CloudFormation templates to programmatically deploy resource policies in a secure, efficient, and repeatable way, reducing the risk of human error from manual configuration.

Kinesis Data Streams allows users to capture, process, and store data streams in real time at any scale. CloudFormation uses stacks to manage AWS resources, allowing you to track changes, apply updates automatically, and easily roll back changes when needed.
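
For example, a sketch of a template that grants a second account read access to a stream, deployed with boto3; all ARNs and account IDs are hypothetical:

    import boto3

    TEMPLATE = """
    AWSTemplateFormatVersion: '2010-09-09'
    Resources:
      StreamSharingPolicy:
        Type: AWS::Kinesis::ResourcePolicy
        Properties:
          ResourceArn: arn:aws:kinesis:us-east-1:123456789012:stream/orders
          ResourcePolicy:
            Version: '2012-10-17'
            Statement:
              - Effect: Allow
                Principal:
                  AWS: arn:aws:iam::210987654321:root
                Action:
                  - kinesis:DescribeStreamSummary
                  - kinesis:GetRecords
                  - kinesis:GetShardIterator
                Resource: arn:aws:kinesis:us-east-1:123456789012:stream/orders
    """

    boto3.client("cloudformation").create_stack(
        StackName="kinesis-resource-policy", TemplateBody=TEMPLATE
    )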

CloudFormation support for resource policies is available in all AWS regions where Amazon Kinesis Data Streams is offered, including the AWS GovCloud (US) Regions and China Regions. To learn more about Amazon Kinesis Data Streams resource policies, visit the developer guide.

Read more


Get x-ray vision into AWS CloudFormation deployments with a timeline view

AWS CloudFormation now offers a capability called deployment timeline view that allows customers to monitor and visualize the sequence of actions CloudFormation takes in a stack operation. This capability provides visibility into the ordering and duration of resource provisioning actions for a stack operation. This empowers developers to optimize their CloudFormation templates and speed up troubleshooting of deployment issues.

When you create, update, or delete a stack, CloudFormation initiates resource-level provisioning actions based on a resource dependency graph. For example, if you submit a CloudFormation template with an EC2 instance, Security Group, and VPC, CloudFormation creates the VPC, Security Group, and EC2 instance in that order. Previously, you could only see the chronological list of stack operation events, which provided limited visibility into dependencies between resources and the ordering of provisioning actions. Now, you can see a graphical visualization that shows the order in which CloudFormation provisions resources within a stack, color-codes the status of each resource, and shows the duration of each provisioning action. If a resource provisioning action encounters an error, the view highlights the likely root cause. This allows you to determine the optimal grouping of resources into templates to minimize deployment times and improve maintainability.

The new capability is available in all AWS Regions where CloudFormation is supported. Refer to the AWS Region table for service availability details.

Get started by initiating a stack operation and accessing the deployment timeline view from the stack events tab in the CloudFormation Console. To learn more about the deployment timeline view, visit the AWS CloudFormation User Guide.
 

Read more


AWS IAM Identity Center now supports search by permission set name

Today, AWS IAM Identity Center announced support for permission set search, enabling you to filter existing permission sets based on their names. This simplifies managing access to AWS accounts via IAM Identity Center, allowing you to use any substring in the permission set name to quickly look up a permission set.

IAM Identity Center is where you create, or connect, your workforce users once and centrally manage their access to multiple AWS accounts and applications. Now, you can filter and find a permission set using any part of the name that you gave to the permission set, in addition to using the Amazon Resource Name (ARN).

IAM Identity Center enables you to connect your existing source of workforce identities to AWS once and manage access to multiple AWS accounts from a central place, as well as access the personalized experiences offered by AWS applications, such as Amazon Q; and define and audit user-aware access to data in AWS services, such as Amazon Redshift. IAM Identity Center is available at no additional cost in all AWS Regions where it is supported. To learn more, see the AWS IAM Identity Center User Guide.

Read more


AWS CloudTrail Lake announces enhanced event filtering

AWS enhances event filtering in AWS CloudTrail Lake, a managed data lake that helps you capture, immutably store, access, and analyze your activity logs, as well as AWS Config configuration items. Enhanced event filtering expands upon existing filtering capabilities, giving you even greater control over which CloudTrail events are ingested into your event data stores. This enhancement increases the efficiency and precision of your security, compliance, and operational investigations while helping reduce costs.

You can now filter both management and data events by the following new attributes:

  • eventSource: The service that the request was made to
  • eventType: Type of event that generated the event record (e.g., AwsApiCall, AwsServiceEvent, etc.)
  • userIdentity.arn: IAM entity that made the request
  • sessionCredentialFromConsole: Whether the event originated from an AWS Management Console session or not

For management events, you can additionally filter by eventName which identifies the requested API action.

For each of these attributes, you can specify values to include or exclude. For example, you can now filter CloudTrail events based on the userIdentity.arn attribute to exclude events generated by specific IAM roles or users. You can exclude a dedicated IAM role used by a service that performs frequent API calls for monitoring purposes. This allows you to significantly reduce the volume of CloudTrail events ingested into CloudTrail Lake, lowering costs while maintaining visibility into relevant user and system activities.
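
A boto3 sketch of such a filter applied when creating an event data store; the excluded role ARN is hypothetical:

    import boto3

    cloudtrail = boto3.client("cloudtrail")
    cloudtrail.create_event_data_store(
        Name="filtered-management-events",
        AdvancedEventSelectors=[
            {
                "Name": "Management events minus a noisy monitoring role",
                "FieldSelectors": [
                    {"Field": "eventCategory", "Equals": ["Management"]},
                    {
                        # Exclude events from sessions assumed by this role.
                        "Field": "userIdentity.arn",
                        "NotStartsWith": [
                            "arn:aws:sts::123456789012:assumed-role/monitoring-role/"
                        ],
                    },
                ],
            }
        ],
    )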

Enhanced event filtering is available in all AWS Regions where AWS CloudTrail Lake is supported, at no additional charge. To learn more, visit the AWS CloudTrail documentation.

Read more


Amazon Bedrock now available in the AWS GovCloud (US-East) Region

Beginning today, customers can use Amazon Bedrock in the AWS GovCloud (US-East) Region to easily build and scale generative AI applications using a variety of foundation models (FMs) as well as powerful supporting tools. Visit the Amazon Bedrock documentation pages for information about model availability and cross-region inferencing.

Amazon Bedrock is a fully managed service that offers a choice of high-performing large language models (LLMs) and other FMs from leading AI companies like AI21 Labs, Anthropic, Cohere, Meta, Mistral AI, Stability AI, as well as Amazon via a single API. Amazon Bedrock also provides a broad set of capabilities customers need to build generative AI applications with security, privacy, and responsible AI built in. These capabilities help you build tailored applications for multiple use cases across different industries, helping organizations unlock sustained growth from generative AI while ensuring customer trust and data governance.

To get started, visit the Amazon Bedrock page and see the Amazon Bedrock documentation for more details.

Read more


AWS Identity and Access Management now supports AWS PrivateLink in the AWS GovCloud (US) Regions

Starting today, AWS Identity and Access Management (IAM) now supports AWS PrivateLink in the AWS GovCloud (US) Regions. With IAM, you can specify who or what can access services and resources in AWS by creating and managing resources such as IAM roles and policies. You can now establish a private connection between your virtual private cloud (VPC) and IAM to manage IAM resources, helping you meet your compliance and regulatory requirements to limit public internet connectivity.

By using PrivateLink with both IAM and the AWS Security Token Service (STS), which already supports PrivateLink, you can now manage your IAM resources such as IAM roles and request temporary credentials to access your AWS resources end to end without going through the public Internet. Interface VPC endpoints for IAM in the AWS GovCloud (US) Regions can only be created in the AWS GovCloud (US-West) Region, where the IAM control plane is located. If your VPC is in a different Region, use AWS Transit Gateway to allow access to the IAM interface VPC endpoint from another Region.

For more information about AWS PrivateLink and IAM, please see the IAM User Guide.

Read more


Amazon SNS delivers to Amazon Data Firehose endpoints in the AWS GovCloud (US) Regions

Amazon Simple Notification Service (Amazon SNS) now delivers to Amazon Data Firehose endpoints in the AWS GovCloud (US-East) and AWS GovCloud (US-West) Regions.

You can now use Amazon SNS to deliver notifications to Amazon Data Firehose (Firehose) endpoints for archiving and analysis. Through Firehose delivery streams, you can deliver events to AWS destinations such as Amazon Simple Storage Service (Amazon S3), Amazon Redshift, and Amazon OpenSearch Service, or to third-party destinations such as Datadog, New Relic, MongoDB, and Splunk. For more information, see Fanout to Firehose delivery streams.
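
For example, a boto3 sketch of subscribing a Firehose delivery stream to a topic in AWS GovCloud (US-West); all ARNs are hypothetical:

    import boto3

    sns = boto3.client("sns", region_name="us-gov-west-1")
    sns.subscribe(
        TopicArn="arn:aws-us-gov:sns:us-gov-west-1:123456789012:alerts",
        Protocol="firehose",
        Endpoint=(
            "arn:aws-us-gov:firehose:us-gov-west-1:123456789012:"
            "deliverystream/alerts-archive"
        ),
        Attributes={
            # Role that allows SNS to write to the delivery stream.
            "SubscriptionRoleArn": "arn:aws-us-gov:iam::123456789012:role/sns-firehose"
        },
    )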

To get started, see the following resources:

Read more


Amazon QuickSight now supports Client Credentials OAuth for Starburst through API/CLI

Today, Amazon QuickSight is announcing the general availability of Client Credentials flow-based OAuth through the API/CLI to connect to Starburst data sources. This launch enables you to create Starburst connections as part of your Infrastructure as Code (IaC) efforts, with full support for AWS CloudFormation.

This type of OAuth flow is used to obtain an access token for machine-to-machine communication, and is suitable for scenarios where a client (e.g., a server-side application or a script) needs to access resources hosted on a server without the involvement of a user. The launch includes support for Token (Client Secrets-based OAuth) and X509 (Client Private Key JWT)-based OAuth. It also includes support for Role Based Access Control (RBAC), which is used to display the schema/table information tied to a role during dataset creation by QuickSight authors.

This feature is now available in all supported Amazon QuickSight Regions; see the list here. For more details, click here.

Read more


EC2 Auto Scaling introduces provisioning control on strict availability zone balance

Amazon EC2 Auto Scaling introduces a new capability for customers to strictly balance the workloads in an Auto Scaling Group (ASG) across Availability Zones, enabling greater control over provisioning and management of their EC2 instances.

Previously, customers who wanted to strictly balance an ASG's EC2 instances across Availability Zones had to override the default behaviors of EC2 Auto Scaling and invest in custom code to modify the ASG's existing behaviors with lifecycle hooks, or maintain multiple ASGs. With this feature, customers can now easily achieve strict Availability Zone balance and ensure higher levels of resiliency for their applications.
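
A boto3 sketch of what opting an existing ASG into strict balancing could look like; the AvailabilityZoneDistribution parameter and its balanced-only strategy reflect this launch but should be confirmed in the current API reference, and the group name is hypothetical:

    import boto3

    autoscaling = boto3.client("autoscaling")
    autoscaling.update_auto_scaling_group(
        AutoScalingGroupName="web-asg",
        AvailabilityZoneDistribution={
            "CapacityDistributionStrategy": "balanced-only"
        },
    )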

This capability is now available through the AWS Command Line Interface (CLI), AWS SDKs, or the AWS Console in all AWS Regions. To get started, please refer to the documentation.

Read more


AWS introduces service versioning and deployment history for Amazon ECS services

Amazon Elastic Container Service (Amazon ECS) now allows you to view the service revision and deployment history for your long-running applications deployed as Amazon ECS services. This capability makes it easier for you to track and view changes to applications deployed using Amazon ECS, monitor on-going deployments, and debug deployment failures.

Typically, customers deploy long-running applications as Amazon ECS services and deploy software updates using a rolling update mechanism where tasks running the old software version are gradually replaced by tasks running the new version. With today’s release, you can now view the deployment history for your Amazon ECS services on the AWS Management Console as well as using the new listServiceDeployments API. You can look at the details of a specific deployment, including whether it succeeded, when it started and completed, and service revision information before and after the deployment using the Console and describeServiceDeployment API. Furthermore, you can look at the immutable configuration for a specific service version, including the task definition, container image digests, load balancer, service connect configuration, etc. using the Console and describeServiceRevision API.
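
For example, a boto3 sketch that lists recent deployments for a service; the cluster and service names are hypothetical, and SDK method and response key names may differ slightly from the camelCase API names above:

    import boto3

    ecs = boto3.client("ecs")
    resp = ecs.list_service_deployments(cluster="prod", service="web-service")
    for deployment in resp["serviceDeployments"]:
        print(deployment["serviceDeploymentArn"], deployment.get("status"))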

You can view the service version and deployment history for your services deployed on or after October 25, 2024 using the AWS Management Console, API, SDK, and CLI in all AWS Regions. To learn more, visit this blog post and documentation.

Read more


AWS Mainframe Modernization achieves FedRAMP Moderate and SOC compliance

AWS Mainframe Modernization has achieved Federal Risk and Authorization Management Program (FedRAMP) Moderate authorization and is now covered in System and Organization Controls (SOC) reports.

AWS Mainframe Modernization has achieved Federal Risk and Authorization Management Program (FedRAMP) Moderate authorization, listed on the FedRAMP Marketplace and approved by the FedRAMP Joint Authorization Board (JAB) for the AWS US East/West Region, which includes the US East (Ohio), US East (N. Virginia), US West (N. California), and US West (Oregon) Regions. FedRAMP is a US government-wide program that delivers a standard approach to security assessment, authorization, and continuous monitoring for cloud products and services.

AWS Mainframe Modernization is now System and Organization Controls (SOC) compliant. AWS System and Organization Controls (SOC) Reports are independent third-party examination reports that demonstrate how AWS achieves key compliance controls and objectives. The purpose of these reports is to help you and your auditors understand the AWS controls established to support operations and compliance. AWS Mainframe Modernization is SOC compliant in all AWS regions where it is generally available, including the AWS GovCloud (US) Regions.

The AWS Mainframe Modernization service allows customers and partners to modernize and migrate on-premises mainframe applications and test, run, and operate them on cloud-native managed runtimes on AWS. It enables modernization patterns like refactor and replatform, as well as augmentation patterns supported by data replication and file transfer. To learn more, please visit the AWS Mainframe Modernization service product and documentation pages.
 

Read more


Amazon SNS supports message archiving and replay for FIFO topics in the AWS GovCloud (US) Regions

Amazon SNS now supports in-place message archiving and replay for SNS FIFO topics in the AWS GovCloud (US-East) and AWS GovCloud (US-West) Regions.

Topic owners can now set an archive policy, which defines a retention period for the messages published to their topic. Subscribers can then set a replay policy on an individual subscription, which triggers a replay of selected messages from the archive, from a starting point to an ending point. Subscribers can also set a filter policy on their subscription to further narrow the messages in scope for a replay.
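A minimal Boto3 sketch of both sides of this workflow follows; the JSON shapes of the ArchivePolicy and ReplayPolicy attributes are assumed from this launch, and the ARNs are placeholders:

```python
import boto3
import json

sns = boto3.client("sns")
topic_arn = "arn:aws-us-gov:sns:us-gov-west-1:111122223333:orders.fifo"  # placeholder

# Topic owner: retain published messages, here for 30 days (shape assumed).
sns.set_topic_attributes(
    TopicArn=topic_arn,
    AttributeName="ArchivePolicy",
    AttributeValue=json.dumps({"MessageRetentionPeriod": "30"}),
)

# Subscriber: replay archived messages from a starting timestamp (shape assumed).
sns.set_subscription_attributes(
    SubscriptionArn=topic_arn + ":subscription-id",  # placeholder
    AttributeName="ReplayPolicy",
    AttributeValue=json.dumps({
        "PointType": "Timestamp",
        "StartingPoint": "2024-11-01T00:00:00Z",
    }),
)
```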

To get started, see the following resources:

Read more


Amazon OpenSearch Service announces Extended Support for engine versions

Today, we are announcing the end of Standard Support and the Extended Support timelines for legacy Elasticsearch versions and OpenSearch versions. Standard Support ends on Nov 7, 2025, for legacy Elasticsearch versions up to 6.7, Elasticsearch versions 7.1 through 7.8, OpenSearch versions 1.0 through 1.2, and OpenSearch versions 2.3 through 2.9. With Extended Support, for an incremental flat fee over regular instance pricing, you continue to get critical security updates beyond the end of Standard Support. For more information, see the blog.

All Elasticsearch versions will receive at least 12 months of Extended Support, with Elasticsearch v5.6 receiving 36 months of Extended Support. OpenSearch versions running on OpenSearch Service will get at least 12 months of Standard Support after the end-of-support date for the corresponding upstream open-source OpenSearch version, or at least 12 months of Standard Support after the release of the next minor version on OpenSearch Service, whichever is longer. For support timelines by version, please see the documentation. While running a version in Extended Support, you will be charged an additional flat fee per Normalized Instance Hour (NIH) (e.g., $0.0065/NIH for US East (N. Virginia)). NIH is computed as a factor of instance size (e.g., medium, large) and the number of instance hours. For more information on Extended Support charges, please see the pricing page.
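As a back-of-envelope illustration of the NIH computation, the sketch below assumes the standard AWS instance-size normalization factors (for example, 4 for a large instance); confirm the actual factors on the pricing page:

```python
# Rough monthly Extended Support charge for a small domain.
fee_per_nih = 0.0065          # US East (N. Virginia), from this announcement
normalization_factor = 4      # ASSUMED factor for a *.large data node
instances = 3                 # number of data nodes
hours_in_month = 730

monthly_fee = fee_per_nih * normalization_factor * instances * hours_in_month
print(f"${monthly_fee:,.2f}/month")  # ~ $56.94 under these assumptions
```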

End-of-support and Extended Support dates apply to all OpenSearch Service clusters running OpenSearch or Elasticsearch versions, in all AWS Regions where Amazon OpenSearch Service is available. Please refer to the AWS Region Table for more information about Amazon OpenSearch Service availability.

Read more


Amazon Verified Permissions launches new API to get multiple policies

Amazon Verified Permissions has launched a new API called batchGetPolicies. Customers can now make a single API call that returns multiple policies, for example, to populate a list of policies that apply to a specific principal or resource. Amazon Verified Permissions is a permissions management and fine-grained authorization service for the applications that you build. It uses the Cedar policy language to enable developers and admins to define policy-based access controls based on roles and attributes. For example, a patient management application might call Amazon Verified Permissions (AVP) to determine if Alice is permitted access to Bob's patient records.

The new API accepts up to 100 policy IDs and returns the corresponding set of policies from across one or more policy stores. This simplifies integration and reduces latency by cutting the number of calls an application needs to make to Verified Permissions. For example, when building a permissions management UX that lists Cedar policies, the application now needs to make only one call to get 50 policies, rather than making 50 calls.
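A minimal Boto3 sketch of such a batched fetch follows; the snake_case operation name and the response field names are assumed from this launch, and the store and policy IDs are placeholders:

```python
import boto3

avp = boto3.client("verifiedpermissions")

# Fetch multiple policies in one call (up to 100 IDs per request).
resp = avp.batch_get_policy(
    requests=[
        {"policyStoreId": "PSEXAMPLEabcdefg111111", "policyId": "SPEXAMPLEabcdefg111111"},
        {"policyStoreId": "PSEXAMPLEabcdefg111111", "policyId": "SPEXAMPLEabcdefg222222"},
    ]
)

# Response shape assumed: one entry per successfully fetched policy.
for result in resp["results"]:
    print(result["policyId"], result["definition"]["static"]["statement"])
```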

This feature is available in all Regions where Verified Permissions is available. Pricing is based on the number of policies requested. For more information on pricing, visit the Amazon Verified Permissions pricing page. For more information on the service, visit the Amazon Verified Permissions product page.
 

Read more


Amazon RDS for SQL Server supports minor versions in October 2024

New minor versions of Microsoft SQL Server are now available on Amazon RDS for SQL Server, providing performance enhancements and security fixes. Amazon RDS for SQL Server now supports these latest minor versions of SQL Server 2016, 2017, 2019, and 2022 across the Express, Web, Standard, and Enterprise editions.

We encourage you to upgrade your Amazon RDS for SQL Server database instances at your convenience. You can upgrade with just a few clicks in the Amazon RDS Management Console or by using the AWS CLI. Learn more about upgrading your database instances from the Amazon RDS User Guide. The new minor versions include:

  • SQL Server 2016 SP3 GRD - 13.0.6450.1
  • SQL Server 2017 CU31 - 14.0.3480.1
  • SQL Server 2019 CU28 - 15.0.4395.2
  • SQL Server 2022 CU15 - 16.0.4150.1
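A minimal Boto3 sketch of such an upgrade follows. The exact RDS engine-version string for a minor version should be confirmed first; the "16.00.4150.1.v1" form below is assumed, and the instance identifier is a placeholder:

```python
import boto3

rds = boto3.client("rds")

# List available SQL Server Standard Edition engine versions to find the
# exact target string for the new minor version.
for v in rds.describe_db_engine_versions(Engine="sqlserver-se")["DBEngineVersions"]:
    print(v["EngineVersion"])

rds.modify_db_instance(
    DBInstanceIdentifier="my-sqlserver-instance",  # placeholder
    EngineVersion="16.00.4150.1.v1",               # SQL Server 2022 CU15 (assumed string)
    ApplyImmediately=False,  # defer the upgrade to the next maintenance window
)
```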


These minor versions are available in all AWS commercial regions where Amazon RDS for SQL Server databases are available, including the AWS GovCloud (US) Regions.

Amazon RDS for SQL Server makes it simple to set up, operate, and scale SQL Server deployments in the cloud. See Amazon RDS for SQL Server Pricing for pricing details and regional availability.

Read more


AWS announces availability of Microsoft Windows Server 2025 images on Amazon EC2

Amazon EC2 now supports Microsoft Windows Server 2025 with License Included (LI) Amazon Machine Images (AMIs), providing customers with an easy and flexible way to launch the latest version of Windows Server. By running Windows Server 2025 on Amazon EC2, customers can take advantage of the security, performance, and reliability of AWS with the latest Windows Server features.

Amazon EC2 is the proven, reliable, and secure cloud for your Windows Server workloads. Amazon creates and manages Microsoft Windows Server 2025 AMIs, providing a reliable and quick way to launch Windows Server 2025 on EC2 instances. These images support Nitro-based instances with Unified Extensible Firmware Interface (UEFI) to provide enhanced security. They also come with features such as Amazon EBS gp3 as the default root volume and the AWS NVMe driver pre-installed, which give you faster throughput and maximize price performance. In addition, you can seamlessly use these images with pre-qualified services such as AWS Systems Manager, Amazon EC2 Image Builder, and AWS License Manager.

Windows Server 2025 AMIs are available in all commercial AWS Regions and the AWS GovCloud (US) Regions. You can find and launch instances directly from the Amazon EC2 console or through API or CLI commands. All instances running Windows Server 2025 AMIs are billed under the EC2 pricing for Windows operating system (OS).
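As a minimal Boto3 sketch, the latest AMI can be resolved through the public SSM parameter store before launching; the 2025 parameter name below follows the pattern used for earlier Windows Server releases and is assumed:

```python
import boto3

ssm = boto3.client("ssm")
ec2 = boto3.client("ec2")

# Resolve the latest Windows Server 2025 AMI (parameter name assumed from
# the naming pattern of prior releases).
param = "/aws/service/ami-windows-latest/Windows_Server-2025-English-Full-Base"
ami_id = ssm.get_parameter(Name=param)["Parameter"]["Value"]

# Launch on a Nitro-based instance type.
ec2.run_instances(
    ImageId=ami_id,
    InstanceType="m6i.large",
    MinCount=1,
    MaxCount=1,
)
```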

To learn more about the new AMIs, see AWS Windows AMI reference. To learn more about running Windows Server 2025 on Amazon EC2, visit the Windows Workloads on AWS page.

Read more


AWS Security Hub launches 7 new security controls

AWS Security Hub has released 7 new security controls, increasing the total number of controls offered to 437. Security Hub released new controls that check Amazon Simple Notification Service (Amazon SNS) topics and AWS Key Management Service (AWS KMS) keys for public access. Security Hub also now supports additional encryption checks for key AWS services such as AWS AppSync and Amazon Elastic File System (Amazon EFS). For the full list of recently released controls and the AWS Regions in which they are available, visit the Security Hub user guide.

To use the new controls, turn on the standard they belong to. Security Hub will then start evaluating your security posture and monitoring your resources for the relevant security controls. You can use central configuration to do so across all your organization accounts and linked Regions with a single action. If you are already using the relevant standards and have Security Hub configured to automatically enable new controls, these new controls will run without taking any additional action.
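As a small Boto3 sketch of the auto-enablement setting mentioned above, run from the account (or delegated administrator) where Security Hub is enabled:

```python
import boto3

securityhub = boto3.client("securityhub")

# Opt in to automatic enablement of newly released controls for the
# standards that are already turned on.
securityhub.update_security_hub_configuration(AutoEnableControls=True)

# Confirm which standards are enabled; new controls attach to these.
for sub in securityhub.get_enabled_standards()["StandardsSubscriptions"]:
    print(sub["StandardsArn"], sub["StandardsStatus"])
```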

To get started, consult the following list of resources:

Read more


Amazon SES adds inline template support to send email APIs

Amazon Simple Email Service (SES) now allows customers to provide email templates directly within the SendBulkEmail or SendEmail API request. SES will use the provided inline template content to render and assemble the email content for delivery, reducing the need to manage template resources in your SES account.

Previously, Amazon Simple Email Service (SES) customers had to pre-create and store email templates in their SES account to use them for sending emails. This added complexity and friction to the email sending process, as customers had to manage the lifecycle of these templates. The new inline template support simplifies the integration process by allowing you to include the template content directly in your send API request, without having to create and maintain separate template resources.
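A minimal Boto3 sketch using the SES v2 SendEmail API follows; the inline TemplateContent shape inside the Template structure is assumed from this launch, and the addresses are placeholders:

```python
import boto3
import json

sesv2 = boto3.client("sesv2")

# Inline template: the content travels with the request instead of being
# stored as a template resource in the account (shape assumed).
sesv2.send_email(
    FromEmailAddress="sender@example.com",
    Destination={"ToAddresses": ["recipient@example.com"]},
    Content={
        "Template": {
            "TemplateContent": {
                "Subject": "Hello {{name}}",
                "Text": "Hi {{name}}, your order {{order}} has shipped.",
            },
            "TemplateData": json.dumps({"name": "Jane", "order": "1234"}),
        }
    },
)
```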

Support for inline templates is available in all AWS Regions where Amazon SES is offered.

To learn more, see the documentation for using templates to send personalized email with the Amazon SES API.

Read more


Amazon Connect launches support for callbacks when using Chats and Tasks

Amazon Connect now enables you to request callbacks from Chats and Tasks in addition to voice calls. For example, if a customer reaches out after hours when no agent is available, they can request a callback by sending a chat message or completing a webform request (via Tasks). Callbacks allow end customers to get a call from an available agent during normal business hours, without requiring them to stay on the line.

This feature is supported in all AWS regions where Amazon Connect is offered. To learn more, see our documentation. To learn more about Amazon Connect, the easy-to-use cloud contact center, visit the Amazon Connect website.

Read more


Amazon WorkSpaces WSP enables desktop traffic over TCP/UDP port 443

Amazon DCV-enabled Amazon WorkSpaces desktop traffic now supports both TCP and UDP over port 443. This feature is used automatically and requires no configuration changes; customers using port 4195 can continue to do so. The WorkSpaces client application prioritizes UDP (QUIC) for optimal performance, but will fall back to TCP if UDP is blocked. The WorkSpaces web client will connect over either TCP port 4195 or 443; if port 4195 is blocked, the client will exclusively use port 443.

The organization managing WorkSpaces may not be the same as the organization managing the client networks from which users connect to WorkSpaces. Because each network is managed independently, changing outbound access rules can involve administrative challenges, delays, or roadblocks. By carrying WorkSpaces DCV desktop traffic over TCP/UDP port 443, with fallback to TCP if UDP is not available, customers no longer need to open the dedicated TCP/UDP port 4195.

DCV-enabled WorkSpaces desktop traffic over TCP/UDP port 443 is supported in all AWS Regions where Amazon WorkSpaces is available. There is no additional charge for this feature. Please see the Amazon WorkSpaces Administration Guide for more information.

Read more


AWS announces CSV result format support for Amazon Redshift Data API

Amazon Redshift Data API enables you to access data efficiently from Amazon Redshift data warehouses by eliminating the need to manage database drivers, connections, network configurations, data buffering, and more. The Data API now supports the comma-separated values (CSV) result format, which provides flexibility in how you access and process data, allowing you to choose between JSON and CSV formats based on your application needs.

With CSV result format, you can now specify whether you want your query results formatted as JSON or CSV through the --result-format parameter when calling ExecuteStatement and BatchExecuteStatement APIs. To retrieve CSV results, use the new GetStatementResultV2 API which supports CSV results, while GetStatementResult API continues to support only JSON. If not specified, the default format remains JSON.
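A minimal Boto3 sketch of the flow described above follows; the warehouse and database names are placeholders, and polling for statement completion is omitted for brevity:

```python
import boto3

rsd = boto3.client("redshift-data")

# Ask for CSV-formatted results via the parameter named in this announcement.
stmt = rsd.execute_statement(
    WorkgroupName="my-serverless-wg",  # or ClusterIdentifier=... for provisioned
    Database="dev",
    Sql="SELECT * FROM sales LIMIT 10",
    ResultFormat="CSV",
)

# In real code, poll describe_statement(Id=...) until the status is FINISHED.
# CSV results are fetched with the new V2 API; the original
# GetStatementResult continues to return JSON only.
result = rsd.get_statement_result_v2(Id=stmt["Id"])
for record in result["Records"]:
    print(record)
```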

CSV support with the Data API is now generally available for both Amazon Redshift Provisioned and Amazon Redshift Serverless data warehouses in all AWS commercial and AWS GovCloud (US) Regions that support the Data API. To get started and learn more, visit the Amazon Redshift Database Developer Guide.

Read more


aws-health

Announcing Cross Account Data Store Read Access for AWS HealthOmics

We are excited to announce that AWS HealthOmics sequence stores now support cross-account read access to simplify data sharing and tool integration. AWS HealthOmics is a fully managed service that empowers healthcare and life science organizations to store, query, and analyze omics data to generate insights that improve health and drive scientific discoveries. With this release, customers can enable secure data sharing with partners while maintaining auditability and compliance frameworks.

Cross-account read access through the S3 API enables customers to write resource policies that manage sharing and restrict data reading based on their needs. Through the use of tag propagation and tag-based access control, users can create policies that share read access beyond their account while having a scalable mechanism to granularly restrict files based on their compliance structures. In addition, S3 access logs can be used to audit and validate access, ensuring that the data customers manage remains properly controlled.

Cross-account S3 API access is now supported in all Regions where AWS HealthOmics is available: US East (N. Virginia), US West (Oregon), Europe (Frankfurt, Ireland, London), Asia Pacific (Singapore), and Israel (Tel Aviv).

To get started, see the AWS HealthOmics documentation.
 

Read more


aws-healthimaging

Announcing enhanced support for medical imaging data with lossy compression in AWS HealthImaging

Today, HealthImaging launched enhancements that better handle lossy compressed medical imaging data. Some medical images, such as whole slide microscopy, ultrasound, and cardiology, utilize lossy image compression. With this feature launch, HealthImaging better supports lossy encoded data, and helps lower storage costs.

The HealthImaging import process encodes most image frames (pixel data) in the High-Throughput JPEG 2000 (HTJ2K) lossless format. With this launch, JPEG Baseline Lossy 8-bit, JPEG 2000 lossy, and High-Throughput JPEG 2000 lossy image compression will be persisted without transcoding. This means HealthImaging will store your lossy encoded data more efficiently, thereby reducing your storage costs.

With this launch, HealthImaging has also enhanced support for DICOM binary segmentation objects. Now image frames with Segmentation Type BINARY will be returned in the Explicit Little Endian (ELE) transfer syntax, as most applications expect.

AWS HealthImaging is a HIPAA-eligible service that empowers healthcare providers and their software partners to store, analyze, and share medical images at petabyte scale. With AWS HealthImaging, you can run your medical imaging applications at scale from a single, authoritative copy of each medical image in the cloud, while reducing total cost of ownership. To learn more about how HealthImaging import jobs work, see the AWS HealthImaging Developer Guide.

AWS HealthImaging is generally available in the following AWS Regions: US East (N. Virginia), US West (Oregon), Asia Pacific (Sydney), and Europe (Ireland).
 

Read more


aws-iam

Announcing AWS STS support for ECDSA-based signatures of OIDC tokens

Today, AWS Security Token Service (STS) is announcing support for digitally signing OpenID Connect (OIDC) JSON Web Tokens (JWTs) using Elliptic Curve Digital Signature Algorithm (ECDSA) keys. A digital signature guarantees the JWT’s authenticity and integrity and ECDSA is a popular, NIST-approved digital signature algorithm. When your identity provider (IdP) authenticates a user, it crafts a signed OIDC JWT representing that user’s identity. When your authenticated user calls the AssumeRoleWithWebIdentity API and passes their OIDC JWT, STS vends short-term credentials that enable access to your protected AWS resources.

You now have a choice between using RSA and ECDSA keys when your IdP digitally signs an OIDC JWT. To begin using ECDSA keys with your OIDC IdP, update your IdP’s JWKS document with the new key information. No change to your AWS Identity and Access Management (IAM) configuration is needed to use ECDSA-based signatures of your OIDC JWTs.
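For illustration only, the sketch below shows an ES256-signed JWT being minted with the PyJWT library (normally your IdP does this) and exchanged for AWS credentials; the claims, key file, and ARNs are placeholders:

```python
import boto3
import jwt  # PyJWT, with the 'cryptography' package installed

# Illustrative only: in practice the IdP crafts and signs the token.
token = jwt.encode(
    {"sub": "user-123", "aud": "my-client-id", "iss": "https://idp.example.com"},
    open("ec-private-key.pem").read(),  # P-256 private key (placeholder file)
    algorithm="ES256",                  # ECDSA over P-256 with SHA-256
    headers={"kid": "my-signing-key"},  # must match a key in the IdP's JWKS
)

# Exchange the signed JWT for short-term AWS credentials.
sts = boto3.client("sts")
creds = sts.assume_role_with_web_identity(
    RoleArn="arn:aws:iam::111122223333:role/web-identity-role",  # placeholder
    RoleSessionName="demo",
    WebIdentityToken=token,
)["Credentials"]
print(creds["Expiration"])
```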

Support for ECDSA-based signatures of OIDC JWTs is available in all AWS Regions, including the AWS GovCloud (US) Regions.

To learn more about using OIDC to authenticate your users and workloads, please visit OIDC federation in the IAM User Guide.

Read more


Amazon EKS simplifies providing IAM permissions to EKS add-ons

Amazon Elastic Kubernetes Service (EKS) now offers a direct integration between EKS add-ons and EKS Pod Identity, streamlining the lifecycle management process for critical cluster operational software that needs to interact with AWS services outside the cluster.

EKS add-ons that integrate with underlying AWS resources need IAM permissions to interact with AWS services. EKS Pod Identities simplify how Kubernetes applications obtain AWS IAM permissions. With today's launch, you can directly manage EKS Pod Identities using EKS add-ons operations through the EKS console, CLI, API, eksctl, and IaC tools like AWS CloudFormation, simplifying the use of Pod Identities for EKS add-ons. This integration expands the selection of Pod Identity-compatible EKS add-ons from AWS and AWS Marketplace available for installation through the EKS console during cluster creation.
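A minimal Boto3 sketch of the direct integration follows; the podIdentityAssociations parameter shape is assumed from this launch, and the cluster, add-on, and role names are placeholders:

```python
import boto3

eks = boto3.client("eks")

# Install an add-on and bind its Kubernetes service account to an IAM role
# through Pod Identity in the same call (parameter shape assumed).
eks.create_addon(
    clusterName="my-cluster",
    addonName="aws-ebs-csi-driver",
    podIdentityAssociations=[
        {
            "serviceAccount": "ebs-csi-controller-sa",
            "roleArn": "arn:aws:iam::111122223333:role/ebs-csi-pod-identity",
        }
    ],
)
```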

EKS add-ons integration with Pod Identities is generally available in all commercial AWS regions. To get started, see the EKS user guide.

Read more


Customize scope of IAM Access Analyzer unused access analysis

Customers use AWS Identity and Access Management (IAM) Access Analyzer unused access findings to identify overly permissive access granted to IAM roles and users in their accounts or AWS organization. Now, customers can optionally customize the analysis to meet their needs. They can select accounts, roles, and users to exclude from analysis and focus on specific areas to identify and remediate unused access, using identifiers such as account ID, or scaling the configuration with role tags. By scoping the analyzer to monitor a subset of accounts and roles, customers can streamline findings review and optimize the costs of unused access analysis. The configuration can be updated at any time to change the scope of analysis. With this new offering, IAM Access Analyzer provides enhanced controls that help customers tailor the analysis more closely to their organization's security needs.
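A minimal Boto3 sketch of a scoped analyzer follows; the analysisRule/exclusions configuration shape is assumed from this launch, and the account ID and tag values are placeholders:

```python
import boto3

aa = boto3.client("accessanalyzer")

# Create an organization-wide unused access analyzer that skips an audit
# account and any roles tagged as break-glass (configuration shape assumed).
aa.create_analyzer(
    analyzerName="org-unused-access",
    type="ORGANIZATION_UNUSED_ACCESS",
    configuration={
        "unusedAccess": {
            "unusedAccessAge": 90,  # days before access counts as unused
            "analysisRule": {
                "exclusions": [
                    {"accountIds": ["111122223333"]},
                    {"resourceTags": [{"purpose": "break-glass"}]},
                ]
            },
        }
    },
)
```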

This new feature is available in all AWS Commercial Regions. To learn more about IAM Access Analyzer unused access analysis, see the documentation.

Read more


AWS Identity and Access Management (IAM) now supports AWS PrivateLink in the AWS GovCloud (US) Regions

Starting today, AWS Identity and Access Management (IAM) supports AWS PrivateLink in the AWS GovCloud (US) Regions. With IAM, you can specify who or what can access services and resources in AWS by creating and managing resources such as IAM roles and policies. You can now establish a private connection between your virtual private cloud (VPC) and IAM to manage IAM resources, helping you meet your compliance and regulatory requirements to limit public internet connectivity.

By using PrivateLink with both IAM and the AWS Security Token Service (STS), which already supports PrivateLink, you can now manage your IAM resources such as IAM roles and request temporary credentials to access your AWS resources end to end without going through the public Internet. Interface VPC endpoints for IAM in the AWS GovCloud (US) Regions can only be created in the AWS GovCloud (US-West) Region, where the IAM control plane is located. If your VPC is in a different Region, use AWS Transit Gateway to allow access to the IAM interface VPC endpoint from another Region.
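A minimal Boto3 sketch of creating the interface endpoint follows; the PrivateLink service name below follows the usual naming pattern and is an assumption (confirm it with describe_vpc_endpoint_services), and the VPC and subnet IDs are placeholders:

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-gov-west-1")

# The endpoint must live in us-gov-west-1, where the IAM control plane runs.
ec2.create_vpc_endpoint(
    VpcEndpointType="Interface",
    VpcId="vpc-0123456789abcdef0",                  # placeholder
    ServiceName="com.amazonaws.us-gov-west-1.iam",  # assumed service name
    SubnetIds=["subnet-0123456789abcdef0"],         # placeholder
    PrivateDnsEnabled=True,
)
```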

For more information about AWS PrivateLink and IAM, please see the IAM User Guide.

Read more


aws-iam-identity-center

AWS Command Line Interface adds PKCE-based authorization for single sign-on

The AWS Command Line Interface (AWS CLI) v2 now supports OAuth 2.0 authorization code flows using the Proof Key for Code Exchange (PKCE) standard. This provides a simple and safe way to retrieve credentials for AWS CLI commands.

The AWS CLI is a unified tool that enables you to control multiple AWS services from the command line and to automate them through scripts. AWS CLI v2 offers integration with AWS IAM Identity Center, the recommended service for managing workforce access to AWS applications and multiple AWS accounts. The authorization code flow with PKCE is the recommended best practice for access to AWS resources from desktops and mobile devices with web browsers. It is now the default behavior when running the aws sso login or aws configure sso commands.

To learn more, see Configuring IAM Identity Center authentication with the AWS CLI in the AWS CLI User Guide. Share your questions, comments, and issues with us on GitHub. AWS IAM Identity Center is available at no additional cost in all AWS Regions where it is supported.
 

Read more


Centrally manage root access in AWS Identity and Access Management (IAM)

Today, AWS Identity and Access Management (IAM) is launching a new capability allowing customers to centrally manage their root credentials, simplify auditing of credentials, and perform tightly scoped privileged tasks across their AWS member accounts managed using AWS Organizations.

Now, administrators can remove unnecessary root credentials for member accounts in AWS Organizations and then, if needed, perform tightly scoped privileged actions using temporary credentials. By removing unnecessary credentials, administrators have fewer highly privileged root credentials that they must secure with multi-factor authentication (MFA), making it easier to effectively meet MFA compliance requirements. This helps administrators control highly privileged access in their accounts, reduces operational effort, and makes it easier for them to secure their AWS environment.
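As a heavily hedged Boto3 sketch of the two halves of this capability, the operation names, the TaskPolicyArn shape, and the task policy name below are all assumptions; the member account ID is a placeholder:

```python
import boto3

# Run from the Organizations management account (or delegated administrator).
iam = boto3.client("iam")
sts = boto3.client("sts")

# Turn on centralized root credentials management for member accounts
# (operation name assumed).
iam.enable_organizations_root_credentials_management()

# Later, perform a tightly scoped privileged task in a member account using
# short-lived root credentials (shape and policy name assumed).
session = sts.assume_root(
    TargetPrincipal="222233334444",  # member account ID (placeholder)
    TaskPolicyArn={"arn": "arn:aws:iam::aws:policy/root-task/S3UnlockBucketPolicy"},
)
print(session["Credentials"]["Expiration"])
```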

The capability to manage root access in AWS member accounts is available in all AWS Regions, including the AWS GovCloud (US) Regions and China Regions. To get started managing your root access in IAM, visit the list of resources below:

Read more


Introducing resource control policies (RCPs) to centrally restrict access to AWS resources

AWS is excited to announce resource control policies (RCPs) in AWS Organizations to help you centrally establish a data perimeter across your AWS environment. With RCPs, you can centrally restrict external access to your AWS resources at scale. At launch, RCPs apply to resources of the following AWS services: Amazon Simple Storage Service (Amazon S3), AWS Security Token Service, AWS Key Management Service, Amazon Simple Queue Service, and AWS Secrets Manager.

RCPs are a type of organization policy that can be used to centrally create and enforce preventative controls on AWS resources in your organization. Using RCPs, you can centrally set the maximum available permissions to your AWS resources as you scale your workloads on AWS. For example, an RCP can help enforce the requirement that “no principal outside my organization can access Amazon S3 buckets in my organization,” regardless of the permissions granted through individual bucket policies. RCPs complement service control policies (SCPs), an existing type of organization policy. While SCPs offer central control over the maximum permissions for IAM roles and users in your organization, RCPs offer central control over the maximum permissions on AWS resources in your organization.
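A minimal Boto3 sketch of the S3 example above follows; the policy-type enum value is assumed from this launch, and the organization ID is a placeholder:

```python
import boto3
import json

org = boto3.client("organizations")

# Deny S3 access from principals outside the organization, while leaving
# AWS service principals unaffected (org ID is a placeholder).
rcp = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "EnforceOrgIdentities",
        "Effect": "Deny",
        "Principal": "*",
        "Action": "s3:*",
        "Resource": "*",
        "Condition": {
            "StringNotEqualsIfExists": {"aws:PrincipalOrgID": "o-exampleorgid"},
            "BoolIfExists": {"aws:PrincipalIsAWSService": "false"},
        },
    }],
}

org.create_policy(
    Name="s3-org-perimeter",
    Description="Restrict S3 access to principals in my organization",
    Type="RESOURCE_CONTROL_POLICY",  # enum value assumed
    Content=json.dumps(rcp),
)
```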

Customers that use AWS IAM Access Analyzer to identify external access can review the impact of RCPs on their resource permissions. For an updated list of AWS services that support RCPs, refer to the list of services supporting RCPs. RCPs are available in all AWS commercial Regions. To learn more, visit the RCPs documentation.
 

Read more


AWS IAM Identity Center now supports search by permission set name

Today, AWS IAM Identity Center announced support for permission set search, enabling you to filter existing permission sets based on their names. This simplifies managing access to AWS accounts via IAM Identity Center, allowing you to use any substring of a permission set's name to quickly look it up.

IAM Identity Center is where you create, or connect, your workforce users once and centrally manage their access to multiple AWS accounts and applications. Now, you can filter and find a permission set using any part of the name that you gave to the permission set, in addition to using the Amazon Resource Name (ARN).

IAM Identity Center enables you to connect your existing source of workforce identities to AWS once and manage access to multiple AWS accounts from a central place, access the personalized experiences offered by AWS applications such as Amazon Q, and define and audit user-aware access to data in AWS services such as Amazon Redshift. IAM Identity Center is available at no additional cost in all AWS Regions where it is supported. To learn more, see the AWS IAM Identity Center User Guide.

Read more


aws-iot-core

AWS IoT Core adds capabilities to enrich MQTT messages and simplify permission management

AWS IoT Core, a managed cloud service that lets you securely connect Internet of Things (IoT) devices to the cloud and manage them at scale, announces two new capabilities: the ability to enrich MQTT messages with additional data, and thing-to-connection association to simplify permission management. The message enrichment capability enables developers to augment MQTT messages from devices with additional information from the thing registry, without modifying their devices. Thing-to-connection association enables mapping an MQTT client to a registry thing for client IDs that don't match the thing name. This enables developers to leverage registry information in IoT policies, easily associate device actions with lifecycle events, and utilize existing capabilities like custom cost allocation and resource-specific logging, previously available only when client IDs and thing names matched.

To enrich all messages from devices, developers can define a subset of registry attributes as propagating attributes. They can customize their message routing and processing workflows using this appended data. For example, in automotive applications, developers can selectively route messages to the desired backend depending on the appended metadata, such as the vehicle make and type stored in the thing registry. Additionally, with thing-to-connection association, developers can leverage existing features like using registry metadata in IoT policies, associating AWS IoT Core lifecycle events with a thing, doing custom cost allocation through billing groups, and enabling resource-specific logging, even if the MQTT client ID and thing name differ.

These new features are available in all AWS regions where AWS IoT Core is present. For more information refer to the developer guide and API documentation.

Read more


aws-iot-device-management

Announcing Commands feature for AWS IoT Device Management

Today, AWS IoT Device Management announced the general availability of the Commands feature, a managed capability that allows developers to build innovative applications where users can perform remote command and control actions on targeted devices and track the status of those executions. With this feature, you can send instructions, trigger device actions, or modify device configuration settings on-demand, simplifying the development of consumer facing applications.

Using the Commands feature, you can set fine-grained access controls and timeout settings, and receive real-time updates and notifications for each command execution, without having to manually create and manage MQTT topics, payload formats, Rules, Lambda functions, and status tracking. In addition, the feature supports custom payload formats, allowing you to define and store command entities as AWS resources for recurring use.

The AWS IoT Device Management commands feature is available in all AWS Regions where AWS IoT Device Management is offered. To learn more, see technical documentation. To get started, log in to the AWS IoT Management Console or use the CLI.
 

Read more


aws-iot-sitewise

AWS IoT SiteWise announces new generative AI-powered industrial assistant

AWS IoT SiteWise is a managed service that simplifies the collection, organization, and monitoring of industrial equipment data at scale. Today, we are excited to announce the general availability of AWS IoT SiteWise Assistant, a generative AI-powered assistant in AWS IoT SiteWise that allows industrial users to gain insights, solve problems, and take actions from their operational data and other data sources intuitively using natural language queries.

With the AWS IoT SiteWise Assistant, you can easily interact with your operational data by clicking on alarms in the SiteWise Monitor dashboard to get summaries or by asking questions like "What assets have active alarms?" or "How do I fix the wind turbine's low RPM issue?". The assistant understands the context of your industrial data in AWS IoT SiteWise from sources like sensors, machines, and related processes, and then contextualizes the data with your centralized knowledge base using Amazon Kendra to provide useful insights, empowering faster decision making to reduce downtime, optimize processes, and improve productivity.

AWS IoT SiteWise Assistant introduces new APIs that allow industrial solutions to access these insights on-demand. Developers can integrate capabilities of the Assistant into their industrial applications using updated IoT AppKit widgets like Chatbots, Line Charts, and KPI Gauges. Additionally, a Preview of the new Assistant-aware AWS IoT SiteWise Monitor portal offers a no-code experience for visualizing key data-driven insights.

AWS IoT SiteWise Assistant is now available in US East (N. Virginia), US West (Oregon), Asia Pacific (Sydney), and Europe (Ireland). Check out the user guide, API reference, and launch blog to learn more.

Read more


aws-lake-formation

Amazon SageMaker Lakehouse integrated access controls now available in Amazon Athena federated queries

Amazon SageMaker now supports connectivity, discovery, querying, and enforcing fine-grained data access controls on federated sources when querying data with Amazon Athena. Athena is a query service that makes it simple to analyze your data lake and federated data sources such as Amazon Redshift, Amazon DynamoDB, or Snowflake using SQL without extract, transform, and load (ETL) scripts. Now, data workers can connect to and unify these data sources within SageMaker Lakehouse. Federated source metadata is unified in SageMaker Lakehouse, where you apply fine-grained policies in one place, helping to streamline analytics workflows and secure your data.

Log in to Amazon SageMaker Unified Studio, connect to a federated data source in SageMaker Lakehouse, and govern data with column- and tag-based permissions that are enforced when querying federated data sources with Athena. In addition to SageMaker Unified Studio, you can connect to these data sources through the Athena console and API. To help you automate and streamline connector setup, the new user experiences allow you to create and manage connections to data sources with ease.

Now, organizations can extract insights from a unified set of data sources while strengthening security posture, wherever your data is stored. The unification and fine-grained access controls on federated sources are available in all AWS Regions where SageMaker Lakehouse is available. To learn more, visit SageMaker Lakehouse documentation.

Read more


AWS Lake Formation now supports named LF-Tag expressions

Today, AWS announces the general availability of named LF-Tag expressions in AWS Lake Formation. With this launch, customers can create and manage named combinations of LF-Tags. With named LF-Tag expressions, customers can now create permission expressions that better represent complex business requirements.

Customers use LF-Tags to create complex data grants based on attributes and want to manage combinations of LF-Tags. Now, when customers want to grant the same combination of LF-Tags to multiple users, they can create a named LF-Tag expression and grant that expression to multiple users, rather than providing the full expression for every grant. Additionally, when a customer's LF-Tag ontology changes, for example due to changed business requirements, they can update a single expression instead of every permission that used the changed LF-Tags.
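A minimal Boto3 sketch of defining such an expression follows; the operation name and request shape are assumed from this launch, and the tag keys and values are placeholders:

```python
import boto3

lf = boto3.client("lakeformation")

# Name a reusable combination of LF-Tags that can then be granted to
# multiple principals (operation name and shape assumed).
lf.create_lf_tag_expression(
    Name="analytics-non-pii-readers",
    Description="Analytics domain, non-PII data only",
    Expression=[
        {"TagKey": "domain", "TagValues": ["analytics"]},
        {"TagKey": "classification", "TagValues": ["non-pii"]},
    ],
)
```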

Named LF-Tag expressions are generally available in commercial AWS Regions where AWS Lake Formation is available and the AWS GovCloud (US) Regions.

To get started with this feature, visit the AWS Lake Formation documentation.
 

Read more


AWS Glue Data Catalog now supports Apache Iceberg automatic table optimization through Amazon VPC

AWS Glue Data Catalog now supports automatic optimization of Apache Iceberg tables that can only be accessed from a specific Amazon Virtual Private Cloud (VPC) environment. You can enable automatic optimization by providing a VPC configuration to optimize storage and improve query performance while keeping your tables secure.

AWS Glue Data Catalog supports compaction, snapshot retention, and unreferenced file management, which help you reduce metadata overhead, control storage costs, and improve query performance. Customers whose governance and security configurations require an Amazon S3 bucket to reside within a specific VPC can now use these optimizers with the Glue Data Catalog. This gives you broader capabilities for automatic management of your Apache Iceberg data, regardless of where it's stored on Amazon S3.

Automatic optimization for Iceberg tables through Amazon VPC is available in 13 AWS Regions: US East (N. Virginia, Ohio), US West (Oregon), Europe (Ireland, London, Frankfurt, Stockholm), Asia Pacific (Tokyo, Seoul, Mumbai, Singapore, Sydney), and South America (São Paulo). Customers can enable this through the AWS Console, AWS CLI, or AWS SDKs.

To get started, you can now provide the Glue network connection as an additional configuration along with optimization settings such as default retention period and days to keep unreferenced files. The AWS Glue Data Catalog will use the VPC information in the Glue connection to access Amazon S3 buckets and optimize Apache Iceberg tables.
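A minimal Boto3 sketch of enabling compaction through a Glue network connection follows; the vpcConfiguration shape is assumed from this announcement, and the catalog, database, table, role, and connection names are placeholders:

```python
import boto3

glue = boto3.client("glue")

# Enable compaction for an Iceberg table whose S3 bucket is reachable only
# from a VPC, by pointing the optimizer at a Glue network connection
# (vpcConfiguration shape assumed).
glue.create_table_optimizer(
    CatalogId="111122223333",
    DatabaseName="lakehouse",
    TableName="events",
    Type="compaction",
    TableOptimizerConfiguration={
        "roleArn": "arn:aws:iam::111122223333:role/GlueOptimizerRole",
        "enabled": True,
        "vpcConfiguration": {"glueConnectionName": "my-vpc-connection"},
    },
)
```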
To learn more, read the blog, and visit the AWS Glue Data Catalog documentation.
 

Read more


AWS Lake Formation is now available in the Asia Pacific (Malaysia) Region

AWS Lake Formation is a service that allows you to set up a secure data lake in days. A data lake is a centralized, curated, and secured repository that stores your data, both in its original form and prepared for analysis. A data lake enables you to break down data silos and combine different types of analytics to gain insights and guide better business decisions.

Creating a data lake with Lake Formation allows you to define where your data resides and what data access and security policies you want to apply. Your users can then access the centralized AWS Glue Data Catalog which describes available data sets and their appropriate usage. Your users can then leverage these data sets with their choice of analytics and machine learning services, like Amazon EMR for Apache Spark, Amazon Redshift Spectrum, AWS Glue, Amazon QuickSight, and Amazon Athena.

For a list of regions where AWS Lake Formation is available, see the AWS Region Table.
 

Read more


AWS Glue Data Catalog now supports scheduled generation of column level statistics

AWS Glue Data Catalog now supports the scheduled generation of column-level statistics for Apache Iceberg tables and file formats such as Parquet, JSON, CSV, XML, ORC, and ION. With this launch, you can simplify and automate the generation of statistics by creating a recurring schedule in the Glue Data Catalog. These statistics are integrated with the cost-based optimizer (CBO) from Amazon Redshift Spectrum and Amazon Athena, resulting in improved query performance and potential cost savings.

Previously, to set up a recurring statistics-generation schedule, you had to call AWS services using a combination of AWS Lambda and Amazon EventBridge Scheduler. With this new feature, you can now provide the recurring schedule as an additional configuration to the Glue Data Catalog, along with a sampling percentage. For each scheduled run, the number of distinct values (NDVs) is collected for Apache Iceberg tables, and additional statistics such as the number of nulls, maximum, minimum, and average length are collected for other file formats. As the statistics are updated, Amazon Redshift and Amazon Athena use them to optimize queries, through optimizations such as optimal join ordering or cost-based aggregation pushdown. You have visibility into the status and timing of each statistics generation run, as well as the updated statistics values.
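As a heavily hedged Boto3 sketch of the scheduled configuration, the operation name and parameters below are assumed from this launch, and the database, table, role, and cron expression are placeholders:

```python
import boto3

glue = boto3.client("glue")

# Generate column statistics nightly at 02:00 UTC with a 50% sample
# (operation name and parameters assumed).
glue.create_column_statistics_task_settings(
    DatabaseName="lakehouse",
    TableName="events",
    Role="arn:aws:iam::111122223333:role/GlueStatsRole",
    Schedule="cron(0 2 * * ? *)",
    SampleSize=50.0,  # sampling percentage
)
```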

To get started, you can schedule statistics generation using the AWS Glue Data Catalog Console or AWS Glue APIs. The support for scheduled generation of AWS Glue Catalog statistics is generally available in all regions where Amazon EventBridge Scheduler is available. Visit AWS Glue Catalog documentation to learn more.

Read more


aws-lambda

AWS Lambda announces Provisioned Mode for Kafka event source mappings (ESMs)

AWS Lambda announces Provisioned Mode for event source mappings (ESMs) that subscribe to Apache Kafka event sources, a feature that allows you to optimize the throughput of your Kafka ESM by provisioning event polling resources that remain ready to handle sudden spikes in traffic. Provisioned Mode helps you build highly responsive and scalable event-driven Kafka applications with stringent performance requirements.

Customers building streaming data applications often use Kafka as an event source for Lambda functions, and use Lambda's fully managed MSK ESM or self-managed Kafka ESM, which automatically scale polling resources in response to events. However, for event-driven Kafka applications that need to handle unpredictable bursts of traffic, lack of control over the ESM's throughput can lead to delays in your users' experience. Provisioned Mode for Kafka ESM allows you to fine-tune the ESM's throughput by provisioning and auto-scaling between a minimum and maximum number of polling resources called event pollers, and is ideal for real-time applications with stringent performance requirements.

This feature is generally available in all AWS Commercial Regions where AWS Lambda is available, except Israel (Tel Aviv), Asia Pacific (Malaysia), and Canada West (Calgary).

You can activate Provisioned Mode for the MSK ESM or self-managed Kafka ESM by configuring a minimum and maximum number of event pollers in the ESM API, AWS Console, AWS CLI, AWS SDK, AWS CloudFormation, and AWS SAM. You pay for the usage of event pollers in a billing unit called the Event Poller Unit (EPU). To learn more, read the Lambda ESM documentation and AWS Lambda pricing.
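A minimal Boto3 sketch of activating Provisioned Mode on an existing ESM follows; the ProvisionedPollerConfig parameter shape is assumed from this launch, and the ESM UUID is a placeholder:

```python
import boto3

lam = boto3.client("lambda")

# Pin a floor of ready event pollers and cap the auto-scaling ceiling for a
# Kafka ESM (parameter shape assumed).
lam.update_event_source_mapping(
    UUID="a1b2c3d4-5678-90ab-cdef-11111EXAMPLE",  # the ESM's identifier (placeholder)
    ProvisionedPollerConfig={
        "MinimumPollers": 5,    # always-ready capacity
        "MaximumPollers": 100,  # burst ceiling
    },
)
```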

Read more


AWS Lambda adds support for Node.js 22

AWS Lambda now supports creating serverless applications using Node.js 22. Developers can use Node.js 22 as both a managed runtime and a container base image, and AWS will automatically apply updates to the managed runtime and base image as they become available.

Node.js 22 is the latest long-term support (LTS) release of Node.js and is expected to be supported for security and bug fixes until April 2027. It provides access to the latest Node.js language features, such as the ‘fetch’ API. You can use Node.js 22 with Lambda@Edge in supported Regions, allowing you to customize low-latency content delivered through Amazon CloudFront. Powertools for AWS Lambda (TypeScript), a developer toolkit to implement serverless best practices and increase developer velocity, also supports Node.js 22.

The Node.js 22 runtime is available in all Regions where Lambda is available, including China and the AWS GovCloud (US) Regions.

You can use the full range of AWS deployment tools, including the Lambda console, AWS CLI, AWS Serverless Application Model (AWS SAM), AWS CDK, and AWS CloudFormation to deploy and manage serverless applications written in Node.js 22. For more information, including guidance on upgrading existing Lambda functions, see our blog post. For more information about AWS Lambda, visit our product page.

Read more


Announcing new Amazon CloudWatch Metrics for AWS Lambda Event Source Mappings (ESMs)

AWS Lambda announces new Amazon CloudWatch metrics for Lambda Event Source Mappings (ESMs), which provide customers visibility into the processing state of events read by ESMs that subscribe to Amazon SQS, Amazon Kinesis, and Amazon DynamoDB event sources. This enables customers to easily monitor issues or delays in event processing and take corrective actions.

Customers use ESMs to read events from event sources and invoke Lambda functions. Lack of visibility into the processing state of events ingested by ESMs delays diagnosis of event processing issues. Customers can now use the following CloudWatch metrics to monitor the processing state of events ingested by ESMs:

  • PolledEventCount: events read by an ESM.
  • InvokedEventCount: events that invoked a Lambda function.
  • FilteredOutEventCount: events filtered out by an ESM.
  • FailedInvokeEventCount: events that attempted to invoke a Lambda function but encountered a failure.
  • DeletedEventCount: events deleted from the SQS queue by Lambda upon successful processing.
  • DroppedEventCount: events dropped due to event expiry or exhaustion of retry attempts.
  • OnFailureDestinationDeliveredEventCount: events successfully sent to an on-failure destination.

This feature is generally available in all AWS Commercial Regions where AWS Lambda is available.

You can enable ESM metrics using Lambda event source mapping API, AWS Console, AWS CLI, AWS SDK, AWS CloudFormation, and AWS SAM. To learn more about these metrics, visit Lambda developer guide. These new metrics are charged at standard CloudWatch pricing for metrics.
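A minimal Boto3 sketch of opting an existing ESM in to these metrics follows; the MetricsConfig parameter shape and the "EventCount" group name are assumed from this launch, and the UUID is a placeholder:

```python
import boto3

lam = boto3.client("lambda")

# Opt an existing ESM in to the new per-event CloudWatch metrics
# (parameter shape assumed).
lam.update_event_source_mapping(
    UUID="a1b2c3d4-5678-90ab-cdef-11111EXAMPLE",  # placeholder
    MetricsConfig={"Metrics": ["EventCount"]},
)
```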

Read more


AWS Lambda supports application performance monitoring (APM) via CloudWatch Application Signals

AWS Lambda now supports Amazon CloudWatch Application Signals, an application performance monitoring (APM) solution, enabling developers and operators to easily monitor the health and performance of their serverless applications built using Lambda.

Customers want an easy way to quickly identify and troubleshoot performance issues to minimize the mean time to recovery (MTTR) and operational costs of running serverless applications. Now, Application Signals provides pre-built, standardized dashboards for critical application metrics (such as throughput, availability, latency, faults, and errors), correlated traces, and interactions between the Lambda function and its dependencies (such as other AWS services), without requiring any manual instrumentation or code changes from developers. This gives operators a single-pane-of-glass view of the health of the application and enables them to drill down to establish the root cause of performance anomalies. You can also create Service Level Objectives (SLOs) in Application Signals to closely track the performance KPIs of critical operations in your application, enabling you to easily identify and triage operations that do not meet your business KPIs. Application Signals auto-instruments your Lambda function using enhanced AWS Distro for OpenTelemetry (ADOT) libraries, delivering better performance (cold start latency and memory consumption) than before.

To get started, visit the Configuration tab in Lambda console and enable Application Signals for your function with just one click in the “Monitoring and operational tools” section. To learn more, visit the launch blog post, Lambda developer guide, and Application Signals developer guide.

Application Signals for Lambda is available in all commercial AWS Regions where Lambda and CloudWatch Application Signals are available.
 

Read more


Amazon CloudWatch Synthetics now automatically deletes Lambda resources associated with canaries

Amazon CloudWatch Synthetics, an outside-in monitoring capability that continually verifies your customers' experience by running snippets of code called canaries on AWS Lambda, will now automatically delete the associated Lambda resources when you delete Synthetics canaries, minimizing the manual upkeep required to manage AWS resources in your account.

CloudWatch Synthetics creates Lambda functions to execute canaries that monitor the health and performance of your web applications or API endpoints. When you delete a canary, its Lambda function and layers are no longer usable. With the release of this feature, these Lambda resources are automatically removed when a canary is deleted, reducing the housekeeping involved in maintaining your Synthetics canaries. Canaries deleted via the AWS console automatically clean up related Lambda resources. New canaries created via the CLI/SDK or CloudFormation are automatically opted in to this feature, whereas canaries created before this launch need to be explicitly opted in.
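For canaries created before this launch, the explicit opt-in can be expressed at deletion time; a minimal Boto3 sketch follows, where the DeleteLambda flag predates this feature and the canary name is a placeholder:

```python
import boto3

synthetics = boto3.client("synthetics")

# Delete a pre-existing canary and remove its backing Lambda resources
# in the same operation.
synthetics.delete_canary(Name="my-canary", DeleteLambda=True)
```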

This feature is available in all commercial regions, the AWS GovCloud (US) Regions, and China regions at no additional cost to the customers.

To learn more about the delete behavior of canaries, see the documentation, or refer to the user guide and One Observability Workshop to get started with CloudWatch Synthetics.
 

Read more


AWS Lambda supports Amazon S3 as a failed-event destination for asynchronous and stream event sources

AWS Lambda now supports Amazon S3 as a failed-event destination for asynchronous invocations, and for Amazon Kinesis and Amazon DynamoDB event source mappings (ESMs). This enables customers to route the failed batch of records and function execution results to S3 using a simple configuration, without the overhead of writing and managing additional code.

Customers building event-driven applications with asynchronous event sources or stream event sources for Lambda can configure services like Amazon Simple Queue Service (SQS) and Amazon Simple Notification Service (SNS) as failed-event destinations to store the results of failed invocations. However, in scenarios where existing failed-event destinations do not support the payload size requirements for the failed events, customers need to write custom logic to retrieve and redrive event payload data. With today’s announcement, customers can configure S3 as a failed-event destination for Lambda functions invoked via asynchronous invocations, Kinesis ESMs, and DynamoDB ESMs. This enables customers to deliver complete event payload data to the failed-event destination, and helps reduce the overhead of managing custom logic to reliably retrieve and redrive failed event data.
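A minimal Boto3 sketch of routing failed asynchronous invocations to S3 follows; the function and bucket names are placeholders, and per this announcement the same DestinationConfig shape applies to Kinesis and DynamoDB ESMs:

```python
import boto3

lam = boto3.client("lambda")

# Send the full payload of failed asynchronous invocations to an S3 bucket.
lam.put_function_event_invoke_config(
    FunctionName="my-function",   # placeholder
    MaximumRetryAttempts=2,
    DestinationConfig={
        "OnFailure": {"Destination": "arn:aws:s3:::my-failed-events-bucket"}
    },
)
```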

This feature is generally available in all AWS Commercial Regions where AWS Lambda and the configured event source or event destination are available.

To enable S3 as a failed-event destination, refer to our documentation for configuring destinations with asynchronous invocations, Kinesis ESMs, and DynamoDB ESMs. This feature incurs no additional charge to use. You pay for charges associated with Amazon S3 usage.

Read more


AWS CloudFormation Hooks now support custom AWS Lambda functions

AWS CloudFormation Hooks introduces a pre-built hook that lets you simply point to an AWS Lambda function in your account. With CloudFormation Hooks, you can provide custom logic that proactively evaluates your resource configurations before provisioning. Today's launch lets you provide your custom logic as a Lambda function, offering a simpler way to author a hook while gaining the extended flexibility of hosting Lambda functions in your account.

Prior to this launch, customers used the CloudFormation CLI (cfn-cli) to author and publish hooks to the CloudFormation registry. Now, customers can simply activate the Lambda hook and pass the Amazon Resource Name (ARN) of a Lambda function for the hook to invoke. This allows you to directly edit your Lambda function to make updates without re-configuring your hook. Additionally, you no longer have to register your custom logic in the CloudFormation registry.

The Lambda hook is available at no additional charge in all AWS Commercial Regions. Customers will incur a charge for Lambda usage; refer to the Lambda pricing guide for more information. To get started, you can use the new Hooks workflow in the CloudFormation console, the AWS CLI, or CloudFormation templates.

To learn more about the Lambda hook, check out the detailed feature walkthrough on the AWS DevOps Blog or refer to the Lambda Hook User Guide. To get started with creating your Lambda function, visit AWS Lambda User Guide.
 

Read more


AWS Lambda now supports SnapStart for Python and .NET functions

Starting today, you can use Lambda SnapStart with your functions that use the Python and .NET managed runtimes, to deliver as low as sub-second startup performance. Lambda SnapStart is an opt-in capability that makes it easier for you to build highly responsive and scalable applications without provisioning resources or implementing complex performance optimizations.

For latency sensitive applications that support unpredictable bursts of traffic, high startup latencies—known as cold starts—can cause delays in your users’ experience. Lambda SnapStart can improve startup times by initializing the function’s code ahead of time, taking a snapshot of the initialized execution environment, and caching it. When the function is invoked and subsequently scales up, Lambda SnapStart resumes new execution environments from the cached snapshot instead of initializing them from scratch, significantly improving startup latency. Lambda SnapStart is ideal for applications such as synchronous APIs, interactive microservices, data processing, and ML inference.

Lambda SnapStart for Python and .NET is generally available in the following AWS Regions: US East (Ohio, N. Virginia), US West (Oregon), Asia Pacific (Singapore, Tokyo, Sydney), and Europe (Frankfurt, Ireland, Stockholm).

You can activate SnapStart for new or existing Lambda functions running on Python 3.12 (and newer) and .NET 8 (and newer) using the AWS Lambda API, AWS Management Console, AWS Command Line Interface (AWS CLI), AWS CloudFormation, AWS Serverless Application Model (AWS SAM), AWS SDK, and AWS Cloud Development Kit (AWS CDK). For more information, see the Lambda documentation or the launch blog post. To learn more about pricing for SnapStart on Python and .NET, visit AWS Lambda Pricing.
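A minimal Boto3 sketch of opting a function in follows; the function name is a placeholder, and waiting for the configuration update to complete before publishing is omitted for brevity:

```python
import boto3

lam = boto3.client("lambda")

# Opt a Python 3.12+ function in to SnapStart; snapshots are created when
# a version is published and used when that version is invoked.
lam.update_function_configuration(
    FunctionName="my-python-api",  # placeholder
    SnapStart={"ApplyOn": "PublishedVersions"},
)

# In real code, wait until the configuration update finishes first.
lam.publish_version(FunctionName="my-python-api", Description="snapstart-enabled")
```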

Read more


AWS Lambda adds support for Python 3.13

AWS Lambda now supports creating serverless applications using Python 3.13. Developers can use Python 3.13 as both a managed runtime and a container base image, and AWS will automatically apply updates to the managed runtime and base image as they become available.

Python 3.13 is the latest release of Python and is expected to receive security fixes until October 2029. This release provides Lambda customers access to the latest Python 3.13 language features. You can use Python 3.13 with Lambda@Edge (in supported Regions), allowing you to customize low-latency content delivered through Amazon CloudFront. Powertools for AWS Lambda (Python), a developer toolkit to implement serverless best practices and increase developer velocity, also supports Python 3.13.

The Python 3.13 runtime is available in all Regions where Lambda is available, including China and the AWS GovCloud (US) Regions.

You can use the full range of AWS deployment tools, including the Lambda console, AWS CLI, AWS Serverless Application Model (AWS SAM), AWS CDK, and AWS CloudFormation to deploy and manage serverless applications written in Python 3.13. For more information, including guidance on upgrading existing Lambda functions, read our blog post. For more information about AWS Lambda, visit the product page.
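As a minimal Boto3 sketch, an existing function can be moved to the new runtime with a single configuration update; the function name is a placeholder, and new functions pass the same runtime identifier to create_function:

```python
import boto3

lam = boto3.client("lambda")

# Point an existing function at the new managed runtime.
lam.update_function_configuration(
    FunctionName="my-function",  # placeholder
    Runtime="python3.13",
)
```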

Read more


AWS Lambda supports Customer Managed Key (CMK) encryption for Zip function code artifacts

AWS Lambda now supports encryption of Lambda function .zip code artifacts using customer managed keys instead of default AWS owned keys. Using keys that they create, own, and manage can satisfy customers' organizational security and governance requirements.

AWS Lambda is widely adopted for its simple programming model, built-in event triggers, automatic scaling, and fault tolerance. Previously, Lambda supported customer-managed AWS Key Management Service (AWS KMS) key-based encryption for the configuration data stored inside Lambda, such as function environment variables and SnapStart-enabled function snapshots. With today’s launch, customers can provide their own key to encrypt function code in Zip artifacts, making it easy to audit or control access to the code deployed in the Lambda function.

Customers can encrypt new or existing function .zip code artifacts by supplying a KMS key when creating or updating a function using the AWS Lambda API, AWS Management Console, AWS Command Line Interface (AWS CLI), AWS SDK, AWS CloudFormation, or AWS Serverless Application Model (AWS SAM). When the KMS key is disabled, the Lambda service and any users calling the GetFunction API to fetch the deployment package lose access to the .zip artifacts deployed with the Lambda function, providing customers a convenient revocation control. If no key is provided, Lambda still secures the .zip code artifacts with AWS-managed encryption.
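A hedged Boto3 sketch of supplying the key on a code update follows; the SourceKMSKeyArn parameter name is assumed from this launch, and the function, file, and key ARN are placeholders:

```python
import boto3

lam = boto3.client("lambda")

# Encrypt the function's .zip artifact with a customer managed KMS key
# (parameter name assumed).
with open("function.zip", "rb") as f:
    lam.update_function_code(
        FunctionName="my-function",  # placeholder
        ZipFile=f.read(),
        SourceKMSKeyArn="arn:aws:kms:us-east-1:111122223333:key/1234abcd-12ab-34cd-56ef-1234567890ab",
    )
```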

This feature is available in all AWS Regions where Lambda is available, except the China Regions. To learn more, visit documentation.

Read more


AWS Lambda announces JSON logging support for .NET managed runtime

AWS Lambda now enables you to natively capture application logs in JSON structured format for Lambda functions that use .NET Lambda managed runtime. JSON format allows logs to be structured as a series of key-value pairs, enabling you to quickly search, filter, and analyze large volumes of logs to easily troubleshoot failures and understand the performance of your Lambda functions.

We previously announced support for natively capturing application logs (logs generated by your Lambda function code) and system logs (logs generated by the Lambda service while executing your function code) in JSON structured format for Python, Node.js, and Java managed runtimes. However, for .NET managed runtime, you could only natively capture system logs in JSON structured format. To capture application logs in JSON structured format, you had to manually configure logging libraries. This launch enables you to capture application logs in JSON structured format for functions that use .NET managed runtime without having to use your own logging libraries.

To get started, you can set log format to JSON for Lambda functions that use any .NET managed runtime using Lambda API, Lambda console, AWS CLI, AWS Serverless Application Model (SAM), and AWS CloudFormation. To learn more, visit the launch blog post. You can learn about Lambda logging in the Lambda logging controls blog post or Lambda Developer Guide.
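A minimal Boto3 sketch of switching a .NET function to JSON logs follows; the function name is a placeholder and the log levels shown are optional:

```python
import boto3

lam = boto3.client("lambda")

# Emit application and system logs as structured JSON, with optional
# level-based filtering.
lam.update_function_configuration(
    FunctionName="my-dotnet-function",  # placeholder
    LoggingConfig={
        "LogFormat": "JSON",
        "ApplicationLogLevel": "INFO",
        "SystemLogLevel": "WARN",
    },
)
```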

JSON structured logging support for .NET is now available in all AWS Regions where Lambda is available, except for China and GovCloud Regions, at no additional cost. For more information, see the AWS Region table.

Read more


aws-license-manager

AWS launches user-based subscription of Microsoft Remote Desktop Services

Today, AWS announces the general availability of Microsoft Remote Desktop Services with AWS-provided licenses. Customers can now purchase user-based subscriptions of Microsoft Remote Desktop Services licenses directly from AWS. This new offering provides licensing flexibility and business continuity for customers running graphical user interface (GUI) based applications on Amazon Elastic Compute Cloud (Amazon EC2) Windows instances.

Thousands of customers use Windows Server on Amazon EC2 to host custom applications or independent software vendor (ISV) products that require remote connectivity via Microsoft Remote Desktop Services. Previously, customers had to procure the licenses through various Microsoft licensing agreements. With the AWS provided subscription, customers can now access Microsoft Remote Desktop Services licenses from AWS on a per-user, per-month basis, eliminating the need for separate licensing agreements and reducing operational overhead. Unlike the AWS provided Microsoft Office and Visual Studio subscriptions, customers can continue using their existing Active Directory domains to manage user access to GUI-based applications on Amazon EC2. Moreover, customers can run more than two concurrent user sessions on Windows Server instances. Lastly, AWS License Manager enables centralized tracking of license usage, simplifying governance and cost management. Customers can start using AWS provided Microsoft Remote Desktop Services licenses without rebuilding their existing Amazon EC2 instances, providing a seamless migration path for existing workloads.

AWS provided user-based subscriptions of Microsoft Remote Desktop Services licenses are available in all AWS Regions that AWS License Manager currently supports. For further questions, visit the user guide. To learn more and get started, visit here.
 

Read more


aws-mainframe-modernization

AWS Mainframe Modernization achieves FedRAMP Moderate and SOC compliance

AWS Mainframe Modernization has achieved Federal Risk and Authorization Management Program (FedRAMP) Moderate authorization and is now covered by System and Organization Controls (SOC) reports.

AWS Mainframe Modernization has achieved FedRAMP Moderate authorization, approved by the FedRAMP Joint Authorization Board (JAB) and listed on the FedRAMP Marketplace, for the AWS US East/West Region, which includes the US East (Ohio), US East (N. Virginia), US West (N. California), and US West (Oregon) Regions. FedRAMP is a US government-wide program that delivers a standard approach to security assessment, authorization, and continuous monitoring for cloud products and services.

AWS Mainframe Modernization is now System and Organization Controls (SOC) compliant. AWS SOC Reports are independent third-party examination reports that demonstrate how AWS achieves key compliance controls and objectives, and they help you and your auditors understand the AWS controls established to support operations and compliance. AWS Mainframe Modernization is SOC compliant in all AWS Regions where it is generally available, including the AWS GovCloud (US) Regions.

The AWS Mainframe Modernization service allows customers and partners to modernize and migrate on-premises mainframe applications, and to test, run, and operate them on cloud-native managed runtimes on AWS. It enables modernization patterns like refactor and replatform, as well as augmentation patterns supported by data replication and file transfer. To learn more, please visit the AWS Mainframe Modernization service product and documentation pages.
 

Read more


aws-managed-services

Amazon CloudWatch launches Observability Solutions for AWS Services and Workloads on AWS

Observability solutions help you get up and running faster with infrastructure and application monitoring at AWS. They are intended for developers who need opinionated guidance about the best options for observing AWS services, custom applications, and third-party workloads. Observability solutions include working examples of instrumentation, telemetry collection, custom dashboards, and metric alarms.

Using observability solutions, you can select from a catalog of available solutions that deliver focused observability guidance for AWS services and common workloads such as Java Virtual Machine (JVM), Apache Kafka, Apache Tomcat, or NGINX. Solutions cover monitoring tasks including installing and configuring the Amazon CloudWatch agent, deploying pre-defined custom dashboards, and setting metric alarms. Observability solutions also include guidance about observability features such as Detailed Monitoring metrics for infrastructure, Container Insights for container monitoring, and Application Signals for monitoring applications. Solutions are available for Amazon CloudWatch and Amazon Managed Service for Prometheus. Observability solutions can be deployed as-is or customized to suit specific use cases, with options for enabling features or configuring deployments based on workload needs.

Observability solutions are available in all commercial AWS Regions.

To get started with observability solutions, navigate to the observability solutions page in the CloudWatch console.

Read more


aws-management-console

Announcing features to favorite applications and quickly access your recently used applications

Today, we’re excited to launch application favoriting and quick access features in the AWS Management Console. Now you can pin your most-used applications as favorites and quickly return to recently visited applications.

You can designate favorite applications with a single click and sort your most important applications, bringing favorites to the top of your list. Recently visited applications can now be accessed in the Recently Visited widget on Console Home, streamlining your workflow and reducing the time spent searching for frequently used resources. You can also access favorites, recently visited applications, and a list of all applications in the Services menu in the navigation bar from anywhere in the AWS Console.

These new features are available in all public AWS Regions.

To start using recently visited and favorited applications, visit the Applications widget on Console Home by signing into the AWS Management Console and use the star icon to designate favorite applications.

Read more


Introducing an AWS Management Console Visual Update (Preview)

Now available in Preview, the visual update in the AWS Management Console helps customers scan content, focus on the key information, and find what they are looking for more effectively, while preserving the familiar and consistent experience. The new, modern layout also provides easy access to contextual tools.

Customers now benefit from optimized information density that maximizes available content on screen, allowing them to see more at a glance. Thanks to reduced visual complexity, crisper styles, and improved use of color, the experience is more intuitive, readable, and efficient. We modernized the interface with rounder shapes and a new family of illustrations, complemented by added motion to bring moments of delight. While introducing these visual enhancements, we continue to offer a predictable experience that adheres to the highest accessibility standards.

The visual update is available in selected consoles across all AWS Regions, with the latest version of Cloudscape Design System. We will be extending the update across all services. Visit the AWS Management Console to experience the visual update.

Read more


Customers can now make payments using SEPA accounts in Five EU countries

Customers in the UK, Spain, the Netherlands, and Belgium can now create an AWS account using their bank account. Upon signup, customers with a billing address in these countries can securely connect a bank account that supports the Single Euro Payments Area (SEPA) standard.

SEPA direct debit is a popular payment method in Europe, widely used to pay utility bills. Until today, this feature was available only for customers in Germany; customers in other countries needed to provide credit or debit card details to complete sign-up. With this launch, customers in four additional countries can sign up and pay using their SEPA bank accounts.

If you’re a customer signing up for AWS from any of these five countries, you can choose "Bank Account" on the AWS sign-up page, followed by "Link your bank account". Select your bank from the list of available banks and sign in using your online banking credentials. Signing in to your bank allows you to securely add your bank account to your AWS account and verifies that you are the owner of the bank account. By default, this bank account will be used to pay your future AWS invoices. Sign-up with a bank account launched first in Germany, where it remains available.

To learn more, see Verify and link your bank account to your AWS Europe payment methods.

Read more


aws-marketplace

Buy with AWS accelerates solution discovery and procurement on AWS Partner websites

Today, AWS Marketplace announces Buy with AWS, a new feature that helps accelerate discovery and procurement on AWS Partners’ websites for products available in AWS Marketplace. Partners that sell or resell products in AWS Marketplace can now offer new experiences on their websites that are powered by AWS Marketplace. Customers can more quickly identify solutions from Partners that are available in AWS Marketplace and use their AWS accounts to access a streamlined purchasing experience.

Customers browsing on Partner websites can explore products that are “Available in AWS Marketplace” and request demos, access free trials, and request custom pricing. Customers can conveniently and securely make purchases by clicking the Buy with AWS button and completing transactions by logging in to their AWS accounts. All purchases made through Buy with AWS are transacted and managed within AWS Marketplace, allowing customers to take advantage of benefits such as consolidated AWS billing, centralized subscriptions management, and access to cost optimization tools.

For AWS Partners, Buy with AWS provides a new way to engage website visitors and accelerate the path-to-purchase for customers. By adding Buy with AWS buttons to Partner websites, Partners can give website visitors the ability to subscribe to free trials, make purchases, and access custom pricing using their AWS accounts. Partners can complete an optional integration and build new experiences on websites that allow customers to search curated product listings and filter products from the AWS Marketplace catalog.

Learn more about making purchases using Buy with AWS. Learn how AWS Partners can start selling using Buy with AWS.

Read more


Start collaborating on multi-partner opportunities with Partner Connections (Preview)

Today, AWS Partner Central announces the preview of Partner Connections, a new feature allowing AWS Partners to discover and connect with other Partners for collaboration on shared customer opportunities. With Partner Connections, Partners can co-sell joint solutions, accelerate deal progression, and expand their reach by teaming with other AWS Partners.

At the core of Partner Connections are two key capabilities: connections discovery and multi-partner opportunities. The connections discovery feature uses AI-powered recommendations to streamline Partner matchmaking, making it easier for Partners to find suitable collaborators and add them to their network. With multi-partner opportunities, Partners can create and manage joint customer opportunities in APN Customer Engagements (ACE). This integrated approach allows Partners to work seamlessly with AWS and other Partners on shared opportunities, reducing the operational overhead of managing multi-partner deals.

Partners can also create, update, and share multi-partner opportunities using the Partner Central API for Selling. This allows Partners to collaborate with other Partners and AWS on joint sales opportunities from their own customer relationship management (CRM) system.

Partner Connections (Preview) is available to all eligible AWS Partners who have signed the ACE Terms and Conditions and have linked their AWS account to their Partner Central account. To get started, log in to AWS Partner Central and review the ACE user guide for more information. To see how Partner Connections works, read the blog.

Read more


Deploy GROW with SAP on AWS from AWS Marketplace

GROW with SAP on AWS is now available for subscription from AWS Marketplace. As a complete offering of solutions, best practices, adoption acceleration services, community, and learning, GROW with SAP helps organizations of any size adopt cloud enterprise resource planning (ERP) with speed, predictability, and continuous innovation. GROW with SAP on AWS can be implemented in months, instead of the years required for traditional on-premises ERP implementations.

By implementing GROW with SAP on AWS, you can simplify everyday work, grow your business, and secure your success. At the core of GROW with SAP is SAP S/4HANA Cloud, a full-featured SaaS ERP suite built on SAP’s 50+ years of industry best practices. GROW with SAP allows your organization to gain end-to-end process visibility and control with integrated systems across HR, procurement, sales, finance, supply chain, and manufacturing. It also includes SAP Business AI-powered processes leveraging AWS to provide data-driven insights and recommendations. Customers can also innovate with generative AI using their SAP data through Amazon Bedrock models in the SAP generative AI hub. GROW with SAP on AWS takes advantage of AWS Graviton processors, which use up to 60% less energy than comparable cloud instances for the same performance.

GROW with SAP on AWS is initially available in the US East Region.

To subscribe to GROW with SAP on AWS, visit the AWS Marketplace listing. Or, to learn more, visit the GROW with SAP on AWS detail page.

Read more


New streamlined deployment experience for Databricks on AWS

Today, AWS introduces an enhanced version of SaaS Quick Launch for Databricks Data Intelligence Platform in AWS Marketplace, delivering a streamlined Databricks workspace deployment experience on AWS. Databricks is a unified data analytics platform that enables organizations to accelerate data-driven innovation. SaaS Quick Launch for Databricks automates installation and configuration steps, simplifying the process of launching Databricks workspaces on AWS, where data professionals manage notebooks, clusters, and data engineering jobs.

Previously, deploying Databricks on AWS required manual configuration and knowledge of AWS infrastructure provisioning tools. Now all users, including data engineers, data scientists, and business analysts, can quickly and easily deploy Databricks on AWS through AWS Marketplace in three guided steps. When subscribing to Databricks in AWS Marketplace, customers can use the new streamlined deployment experience to rapidly configure, deploy, and access their Databricks workspaces and accelerate their data analytics, machine learning, and data science initiatives on AWS. Through this simplified process, the necessary AWS resources are automatically provisioned and integrated with Databricks following AWS best practices for security and high availability.

This streamlined deployment experience in AWS Marketplace is currently available for all AWS Regions supported by Databricks.

To get started with the new streamlined deployment experience for Databricks, visit Databricks Data Intelligence Platform in AWS Marketplace.

Read more


AWS Marketplace now offers EC2 Image Builder components from independent software vendors

AWS Marketplace now offers EC2 Image Builder components from independent software vendors (ISVs), helping you streamline your Amazon Machine Image (AMI) build processes. You can find and subscribe to Image Builder components from ISVs in AWS Marketplace or in the Image Builder console, and incorporate the components into your golden images through Image Builder. AWS Marketplace offers a catalog of Image Builder components from ISVs to help address the monitoring, security, governance, and compliance needs of your organization.

Previously, consolidating software from ISVs into golden images required you to go through a time-consuming procurement process and write custom code, resulting in unnecessary overhead. With the addition of Image Builder components in AWS Marketplace, you can now find, subscribe to, and incorporate software components from ISVs into your golden images on AWS. You can also configure your Image Builder pipelines to automatically update golden images as the latest versions of components are released in AWS Marketplace, helping to keep your systems current and eliminating the need for custom code. You can continue sharing golden images within your organization by distributing the entitlements for subscribed components across AWS accounts. Your organization can then use the same golden images, maintaining your security and governance standards.
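
As a rough sketch of how a subscribed ISV component slots into an image recipe with boto3 (the recipe name, parent image, and component ARN below are illustrative placeholders, not real Marketplace ARNs):

    import boto3

    imagebuilder = boto3.client("imagebuilder")

    imagebuilder.create_image_recipe(
        name="golden-image-with-isv-agent",  # hypothetical
        semanticVersion="1.0.0",
        parentImage=(
            "arn:aws:imagebuilder:us-east-1:aws:"
            "image/amazon-linux-2023-x86/x.x.x"  # illustrative base image
        ),
        components=[
            # ARN of a subscribed ISV component from AWS Marketplace
            # (illustrative)
            {"componentArn": (
                "arn:aws:imagebuilder:us-east-1:aws-marketplace:"
                "component/example-isv-agent/1.0.0"
            )},
        ],
    )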

To learn more, access documentation for AWS Marketplace or EC2 Image Builder. Visit AWS Marketplace to view all supported EC2 Image Builder components, including software from popular providers such as Datadog, Dynatrace, Insight Technology, Inc., Fortinet, OpenVPN Inc, SIOS Technology Corp., Cisco, KeyFactor, Datamasque, Grafana, Kong, Wiz and more.

Read more


Colombian Sellers and Channel Partners now available in AWS Marketplace

AWS Marketplace now enables customers to discover and subscribe to software from Colombian Independent Software Vendors (ISVs) and Channel Partners. This expansion increases the breadth of software and data offerings, adding to the 20,000+ software listings and data products from 5,000+ sellers.

Starting today, AWS Marketplace customers around the world can directly procure software and data products from ISVs in Colombia, making it easier than ever to reach data-driven decisions and build operations in the cloud. In addition, AWS Marketplace customers can now purchase software through regional and local Channel Partners in Colombia, who offer knowledge of their business, localized support, and trusted expertise, through Channel Partner Private Offers (CPPO).

Software from Colombian ISVs such as Software Colombia, CARI AI, and Nuevosmedios is now available in AWS Marketplace. In addition, Channel Partners such as Ikusi, Axity Colombia, Netdata, and AndeanTrade are now able to sell software in AWS Marketplace through CPPO. ISVs and Channel Partners from Colombia join the ever-growing offerings in AWS Marketplace, and more products are added regularly.

AWS Marketplace is a curated digital catalog of third-party software that makes it easy for customers to find, buy, and deploy solutions that run on Amazon Web Services (AWS).

For more information on listing in AWS Marketplace, please visit the AWS Marketplace Seller Guide.
For more information on purchasing solutions through AWS Marketplace, please visit the AWS Marketplace Buyer Guide.

Read more


Announcing AWS Partner Assistant, a generative AI-powered virtual assistant for AWS Partners

AWS Partner Assistant, a generative AI–powered virtual assistant built on Amazon Q Business, is now available for Partners in AWS Partner Central and the AWS Marketplace Management Portal. Partner Assistant makes it easier for you to get quick answers to common questions—helping you boost productivity and accelerate your AWS Partner journey to unlock benefits faster.

Partner Assistant enables you to reduce the need for manual searches by generating real-time guidance and concise summaries from guides and documentation that are available specifically for AWS Partners. For example, you can ask Partner Assistant how to list a software as a service (SaaS) product in AWS Marketplace, for details about available funding programs for Partners, or how to obtain the Generative AI Competency. The assistant’s responses include links to resources available in Partner Central and AWS Docs for further details.

AWS Partner Assistant is available to all Partners who have linked their Partner Central and AWS accounts.

Get started using AWS Partner Assistant by logging in to AWS Partner Central or the AWS Marketplace Management Portal and accessing the chat from the bottom right of your screen. Learn more about becoming an AWS Partner.
 

Read more


Self-Service Know Your Customer (KYC) for AWS Marketplace Sellers

AWS Marketplace now offers a self-service Know Your Customer (KYC) feature for all sellers wishing to transact via the AWS Europe, Middle East, and Africa (EMEA) Marketplace Operator. The KYC verification process is required for sellers to receive disbursements via the AWS EMEA Marketplace Operator. This new self-service feature helps sellers complete the KYC process quickly and easily, unblocking their business growth in the EMEA region.

Completing KYC and onboarding to the EMEA Marketplace Operator allows sellers to provide a more localized experience for their customers. Customers will see consistent Value Added Tax (VAT) charges across all their AWS purchases. They can also pay for AWS Marketplace invoices using their local bank accounts through the Single Euro Payments Area (SEPA). Additionally, customers will get invoices for all their AWS services and Marketplace purchases from a single entity, AWS EMEA. This makes billing and procurement much simpler for customers in Europe, the Middle East, and Africa.

The new self-service KYC experience empowers sellers to complete verification independently, reducing the time to onboard and eliminating the need to coordinate with the AWS Marketplace support team.

We invite all AWS Marketplace sellers to take advantage of this new feature to expand their reach in the EMEA region and provide an improved purchasing experience for their customers. To get started, please visit the AWS Marketplace Seller Guide.

Read more


AWS Marketplace introduces AI-powered product summaries and comparisons

AWS Marketplace now provides AI-powered product summaries and comparisons for popular software as a service (SaaS) products, helping you make faster and more informed software purchasing decisions. Use this feature to compare similar SaaS products across key evaluation criteria such as customer reviews, product popularity, features, and security credentials. Additionally, you can gain AI-summarized insights into key decision factors like ease of use, customer support, and cost effectiveness.

Sifting through thousands of options on the web to find software products that best fit your business needs can be challenging and time-consuming. The new product comparisons feature in AWS Marketplace simplifies this process for you. It uses machine learning to recommend similar SaaS products for consideration, then uses generative AI to summarize product information and customer reviews, highlight unique aspects of products, and help you understand key differences to identify the best product for your use cases. You can also customize the comparison sets and download comparison tables to share with colleagues.

The product comparisons feature is available for popular SaaS products in all commercial AWS Regions where AWS Marketplace is available.

Check out AI-generated product summaries in AWS Marketplace. Find the new experience on popular SaaS product pages such as Databricks Data Intelligence Platform and Trend Cloud One. To learn more about how the experience works, visit the AWS Marketplace Buyer Guide.

Read more


Announcing enhanced purchase order support for AWS Marketplace

Today, AWS Marketplace is extending transaction purchase order number support to products with pay-as-you-go pricing, including Amazon Bedrock subscriptions, software as a service (SaaS) contracts with consumption pricing, and AMI annuals. Additionally, you can update purchase order numbers post-subscription prior to invoice creation to ensure your invoices reflect the proper purchase order. This launch helps you allocate costs and makes it easier to process and pay invoices.

The purchase order feature in AWS Marketplace ensures that the purchase order number you provide at the time of a transaction appears on all invoices related to that purchase. Now, you can provide a purchase order at the time of purchase for most products available in AWS Marketplace, including products with pay-as-you-go pricing. You can add or update purchase orders post-subscription, prior to invoice generation, within the AWS Marketplace console. You can also provide more than one PO for products appearing on your monthly AWS Marketplace invoice and receive a unique invoice for each purchase order. Additionally, you can add a unique PO for each fixed charge and associated AWS Marketplace monthly usage charges, either at the time of purchase or post-subscription in the AWS Marketplace console.

You can update purchase orders for existing subscriptions under manage subscriptions in the AWS Marketplace console. To enable transaction purchase orders for AWS Marketplace, sign in to the management account (for AWS Organizations) and enable the AWS Billing integration in the AWS Marketplace Console settings. To learn more, read the AWS Marketplace Buyer Guide.

Read more


AWS Marketplace announces improved offer and agreement management capabilities for sellers

AWS Marketplace now offers improved capabilities to help sellers manage agreements and create new offers more efficiently. Sellers can access an improved agreements navigation experience, export details to PDF, and clone past private offers in the AWS Marketplace Management Portal.

The new agreements experience makes it easier to find agreements for a specific offer or customer and take action based on the agreement’s status: active, expiring, expired, replaced, or cancelled. This holistic view enables you to retrieve agreements faster, helping you prepare for customer engagements and identify renewal or expansion opportunities. To simplify sharing and offline collaboration, you can now export agreement details to PDF. Additionally, the new offer cloning capability enables you to replicate common offer configurations from past direct private offers, so you can quickly make adjustments for renewals and revisions to ongoing offers.

These features are available for all AWS Partners selling SaaS, Amazon Machine Images (AMI), containers, and professional services products in AWS Marketplace. To learn more, visit the AWS Marketplace Seller Guide, or access the AWS Marketplace Management Portal to try the new capabilities.

Read more


Enhanced account linking experience across AWS Marketplace and AWS Partner Central

Today, AWS announces an improved account linking experience for AWS Partners to create and connect their AWS Marketplace accounts with AWS Partner Central and onboard associated users. Account linking allows Partners to seamlessly navigate between Partner Central and the Marketplace Management Portal using Single Sign-On (SSO), connect Partner Central solutions to AWS Marketplace listings, link private offers to opportunities for tracking deals from pipeline to customer offers, and access AWS Marketplace insights within the centralized AWS Partner Analytics Dashboard. Linking accounts also unlocks access to valuable AWS Partner Network (APN) program benefits such as ISV Accelerate and accelerated sales cycles.

The new account linking experience introduces three major improvements to streamline the self-guided linking workflow. First, it simplifies the process to associate your AWS account with AWS Marketplace by registering your legal business name. Second, it automates the creation and bulk assignment of Identity and Access Management (IAM) roles to AWS Partner Central users, eliminating the need for manual creation in the AWS IAM console. Third, it introduces three new AWS managed policies to simplify permission management for AWS Partner Central and Marketplace access. The new policies offer fine-grained access options, ranging from full Partner Central access to personalized access to co-sell or marketplace offer management.

This new experience is available for all AWS Partners. To get started, navigate to the “Account Linking” feature on the AWS Partner Central homepage. To learn more, review the AWS Partner Central documentation.

Read more


Gain new insights into your sales pipeline

Today, Amazon Web Services, Inc. (AWS) announces new pipeline performance data visualizations in the Analytics and Insights Dashboard. Partners can now inspect win rates of closed opportunities, assess top-performing segments, and identify required actions on open opportunities.

Drill-downs by customer region, segment, and industry are available for key metrics including open opportunity count, opportunities that require updates, and win rates. Additionally, AWS Specialization partners in the APN Customer Engagements (ACE) program get more insights with co-sell recommendation scores. The co-sell recommendation score assesses how well their solutions are positioned to meet customer needs. By combining top-performing benchmarks and co-sell recommendation scores, partners can see where they are best positioned for co-selling and delivering for AWS customer use cases.

To get started, log into your AWS Partner Central account and navigate to the Opportunities tab within the Analytics and Insights Dashboard. Here, you'll find new visuals for pipeline performance and co-sell recommendation scores.

To learn about all the new features the dashboard has to offer, log into AWS Partner Central and explore the Analytics and Insights User Guide!
 

Read more


AWS Partner CRM Connector Adds Partner Central API Support

Starting today, the AWS Partner CRM Connector further simplifies co-sell actions between Salesforce and AWS Partner Central through APN Customer Engagement (ACE) integration. Partners can now share and receive AWS opportunities faster through the Partner Central API, use multi-object mapping to simplify related field mapping and reduce redundant data between Salesforce and ACE Pipeline Manager, and receive submission updates via EventBridge, making it easier than ever to supercharge co-selling and sales motions.

These new capabilities enable partners to manage AWS co-sell opportunities with increased speed and flexibility. The Partner Central API accelerates information sharing, while EventBridge pushes real-time update notifications for key actions as they occur. Multi-object mapping adds another layer of efficiency, giving partners control over data flow by simplifying account look-ups and reducing repetitive entries across Salesforce fields and business workflows.

This modular connector provides greater governance, visibility, and effectiveness in management of ACE opportunities and leads, and AWS Marketplace private offers and resale authorizations. It enables automation through sales process alignment, and accelerates adoption through the extension of capabilities to field sales teams.

The AWS Partner CRM Connector for Salesforce is available as an application to install at no cost from the Salesforce AppExchange.

Visit the AWS Partner Central documentation to learn more, and explore the CRM Connector in the AWS Partner CRM Integration documentation.

Read more


AWS Partner Central now supports dedicated Slack channels for collaboration on co-selling opportunities

AWS Partners can now request dedicated Slack channels through AWS Partner Central to collaborate with AWS sales teams on ACE co-selling opportunities. This feature simplifies communication, keeps all members updated on deal progression, and enables better collaboration and more efficient deal closure for strategic customer engagements.

Partners can request a Slack channel for an eligible open opportunity in the Collaboration Channels tab within the ACE Pipeline Manager in AWS Partner Central. The AWS sales team will receive notifications for collaboration requests through the AWS Secure Connect Slack application, allowing them to create dedicated Slack channels. Individuals from AWS and Partner opportunity teams, including account managers, solution architects, and success managers, can then engage directly through the channels for associated opportunities. These Slack channels include enhanced security controls to ensure only the designated opportunity team participates, helping to safeguard confidentiality. Each channel is also integrated with AWS Partner Central, delivering real-time updates on deal progress—such as stage changes and next steps—all within Slack. This new feature builds on the Slack Connect capability made available earlier this year.

This feature is available globally to ACE-eligible AWS Partners working on high-value deals, excluding deals related to national security or customers in the Greater China Region.

Log in to AWS Partner Central today to request Slack channels directly through the ACE Pipeline Manager in Partner Central and start collaborating, or ask your AWS sales contacts to create a channel.
 

Read more


aws-marketplace-and-partners

Respond and recover more quickly with AWS Security Incident Response Partners

Today, AWS Security Incident Response launches a new AWS Specialization with approved partners from the AWS Partner Network (APN). AWS customers today rely on various third-party tools and services to support their internal security incident response capabilities. To better help both customers and partners, AWS introduced AWS Security Incident Response, a new service that helps customers prepare for, respond to, and recover from security events. Alongside approved AWS Partners, AWS Security Incident Response monitors, investigates, and escalates triaged security findings from Amazon GuardDuty and other threat detection tools through AWS Security Hub. Security Incident Response identifies and escalates only high-priority incidents. Partners and customers can also leverage collaboration and communication features to streamline coordinated incident response for faster reaction and recovery. For example, service members can create a predefined "Incident Response Team" that is automatically alerted whenever a security case is escalated. Alerted members, who include customers and partners, can then communicate and collaborate in a centralized format, with native feature integrations such as in-console messaging, video conferencing, and quick and secure data transfer.

Customers can access the service alongside AWS Partners that have been vetted and approved to use Security Incident Response. Learn more and explore AWS Security Incident Response Partners with specialized expertise to help you respond when it matters most.

Read more


Start collaborating on multi-partner opportunities with Partner Connections (Preview)

Today, AWS Partner Central announces the preview of Partner Connections, a new feature allowing AWS Partners to discover and connect with other Partners for collaboration on shared customer opportunities. With Partner Connections, Partners can co-sell joint solutions, accelerate deal progression, and expand their reach by teaming with other AWS Partners.

At the core of Partner Connections are two key capabilities: connections discovery and multi-partner opportunities. The connections discovery feature uses AI-powered recommendations to streamline Partner matchmaking, making it easier for Partners to find suitable collaborators and add them to their network. With multi-partner opportunities, Partners can create and manage joint customer opportunities in APN Customer Engagements (ACE). This integrated approach allows Partners to work seamlessly with AWS and other Partners on shared opportunities, reducing the operational overhead of managing multi-partner deals.

Partners can also create, update, and share multi-partner opportunities using the Partner Central API for Selling. This allows Partners to collaborate with other Partners and AWS on joint sales opportunities from their own customer relationship management (CRM) system.

Partner Connections (Preview) is available to all eligible AWS Partners who have signed the ACE Terms and Conditions and have linked their AWS account to their Partner Central account. To get started, log in to AWS Partner Central and review the ACE user guide for more information. To see how Partner Connections works, read the blog.

Read more


Introducing the AWS Digital Sovereignty Competency

Digital sovereignty has been a priority for AWS since its inception. AWS remains committed to offering customers the most advanced sovereignty controls and features in the cloud. With the increasing importance of digital sovereignty for public sector organizations and regulated industries, AWS is excited to announce the launch of the AWS Digital Sovereignty Competency.

The AWS Digital Sovereignty Competency curates and validates a community of AWS Partners with advanced sovereignty capabilities and solutions, including deep experience in helping customers address sovereignty and compliance requirements. These partners can assist customers with residency control, access control, resilience, survivability, and self-sufficiency.

Through this competency, customers can search for and engage with trusted local and global AWS Partners that have technically validated experience in addressing customers’ sovereignty requirements. Many partners have built sovereign solutions that leverage AWS innovations and built-in controls and security features.

In addition to these offerings, AWS Digital Sovereignty Partners provide skills and knowledge of local compliance requirements and regulations, making it easier for customers to meet their digital sovereignty requirements while benefiting from the performance, agility, security, and scale of the AWS Cloud.

Read more


New streamlined deployment experience for Databricks on AWS

Today, AWS introduces an enhanced version of SaaS Quick Launch for Databricks Data Intelligence Platform in AWS Marketplace, delivering a streamlined Databricks workspace deployment experience on AWS. Databricks is a unified data analytics platform that enables organizations to accelerate data-driven innovation. SaaS Quick Launch for Databricks automates installation and configuration steps, simplifying the process of launching Databricks workspaces on AWS, where data professionals manage notebooks, clusters, and data engineering jobs.

Previously, deploying Databricks on AWS required manual configuration and knowledge of AWS infrastructure provisioning tools. Now all users, including data engineers, data scientists, and business analysts, can quickly and easily deploy Databricks on AWS through AWS Marketplace in three guided steps. When subscribing to Databricks in AWS Marketplace, customers can use the new streamlined deployment experience to rapidly configure, deploy, and access their Databricks workspaces and accelerate their data analytics, machine learning, and data science initiatives on AWS. Through this simplified process, the necessary AWS resources are automatically provisioned and integrated with Databricks following AWS best practices for security and high availability.

This streamlined deployment experience in AWS Marketplace is currently available for all AWS Regions supported by Databricks.

To get started with the new streamlined deployment experience for Databricks, visit Databricks Data Intelligence Platform in AWS Marketplace.

Read more


Introducing the AWS Consumer Goods Competency

In the ever-evolving consumer goods industry, innovation and agility are paramount. AWS has launched the AWS Consumer Goods Competency to support digital transformation. This initiative connects businesses with top validated AWS Partners offering specialized industry solutions.

These partners provide expertise across six critical areas: product development, manufacturing, supply chain, marketing, unified commerce, and digital transformation. To earn the designation, partners must complete a rigorous technical validation process based on the AWS Well-Architected Framework, ensuring reliable, secure, and efficient cloud operations.

By collaborating with these validated partners, consumer goods companies can drive innovation, enhance customer experiences, and gain competitive market advantages. The AWS Competency Partner program is a comprehensive framework that identifies partners with exceptional technical expertise and proven customer success. This formal AWS Specialization recognizes partners' capabilities in advancing industry technology.

With this new AWS Competency, AWS reinforces its commitment to supporting digital transformation in the consumer goods sector. Businesses can now accelerate their innovation, streamline operations, and deliver exceptional customer experiences in the highly competitive market.

Read more


AWS Marketplace now offers EC2 Image Builder components from independent software vendors

AWS Marketplace now offers EC2 Image Builder components from independent software vendors (ISVs), helping you streamline your Amazon Machine Image (AMI) build processes. You can find and subscribe to Image Builder components from ISVs in AWS Marketplace or in the Image Builder console, and incorporate the components into your golden images through Image Builder. AWS Marketplace offers a catalog of Image Builder components from ISVs to help address the monitoring, security, governance, and compliance needs of your organization.

Previously, consolidating software from ISVs into golden images required you to go through a time-consuming procurement process and write custom code, resulting in unnecessary overhead. With the addition of Image Builder components in AWS Marketplace, you can now find, subscribe to, and incorporate software components from ISVs into your golden images on AWS. You can also configure your Image Builder pipelines to automatically update golden images as the latest versions of components are released in AWS Marketplace, helping to keep your systems current and eliminating the need for custom code. You can continue sharing golden images within your organization by distributing the entitlements for subscribed components across AWS accounts. Your organization can then use the same golden images, maintaining your security and governance standards.

To learn more, access documentation for AWS Marketplace or EC2 Image Builder. Visit AWS Marketplace to view all supported EC2 Image Builder components, including software from popular providers such as Datadog, Dynatrace, Insight Technology, Inc., Fortinet, OpenVPN Inc, SIOS Technology Corp., Cisco, KeyFactor, Datamasque, Grafana, Kong, Wiz and more.

Read more


Colombian Sellers and Channel Partners now available in AWS Marketplace

AWS Marketplace now enables customers to discover and subscribe to software from Colombian Independent Software Vendors (ISVs) and Channel Partners. This expansion increases the breadth of software and data offerings, adding to the 20,000+ software listings and data products from 5,000+ sellers.

Starting today, AWS Marketplace customers around the world can directly procure software and data products from ISVs in Colombia, making it easier than ever to reach data-driven decisions and build operations in the cloud. In addition, AWS Marketplace customers can now purchase software through regional and local Channel Partners in Colombia, who offer knowledge of their business, localized support, and trusted expertise, through Channel Partner Private Offers (CPPO).

Software from Colombian ISVs such as Software Colombia, CARI AI, and Nuevosmedios is now available in AWS Marketplace. In addition, Channel Partners such as Ikusi, Axity Colombia, Netdata, and AndeanTrade are now able to sell software in AWS Marketplace through CPPO. ISVs and Channel Partners from Colombia join the ever-growing offerings in AWS Marketplace, and more products are added regularly.

AWS Marketplace is a curated digital catalog of third-party software that makes it easy for customers to find, buy, and deploy solutions that run on Amazon Web Services (AWS).

For more information on listing in AWS Marketplace, please visit the AWS Marketplace Seller Guide.
For more information on purchasing solutions through AWS Marketplace, please visit the AWS Marketplace Buyer Guide.

Read more


Self-Service Know Your Customer (KYC) for AWS Marketplace Sellers

AWS Marketplace now offers a self-service Know Your Customer (KYC) feature for all sellers wishing to transact via the AWS Europe, Middle East, and Africa (EMEA) Marketplace Operator. The KYC verification process is required for sellers to receive disbursements via the AWS EMEA Marketplace Operator. This new self-service feature helps sellers complete the KYC process quickly and easily, unblocking their business growth in the EMEA region.

Completing KYC and onboarding to the EMEA Marketplace Operator allows sellers to provide a more localized experience for their customers. Customers will see consistent Value Added Tax (VAT) charges across all their AWS purchases. They can also pay for AWS Marketplace invoices using their local bank accounts through the Single Euro Payments Area (SEPA). Additionally, customers will get invoices for all their AWS services and Marketplace purchases from a single entity, AWS EMEA. This makes billing and procurement much simpler for customers in Europe, the Middle East, and Africa.

The new self-service KYC experience empowers sellers to complete verification independently, reducing the time to onboard and eliminating the need to coordinate with the AWS Marketplace support team.

We invite all AWS Marketplace sellers to take advantage of this new feature to expand their reach in the EMEA region and provide an improved purchasing experience for their customers. To get started, please visit the AWS Marketplace Seller Guide.

Read more


AWS Marketplace introduces AI-powered product summaries and comparisons

AWS Marketplace now provides AI-powered product summaries and comparisons for popular software as a service (SaaS) products, helping you make faster and more informed software purchasing decisions. Use this feature to compare similar SaaS products across key evaluation criteria such as customer reviews, product popularity, features, and security credentials. Additionally, you can gain AI-summarized insights into key decision factors like ease of use, customer support, and cost effectiveness.

Sifting through thousands of options on the web to find software products that best fit your business needs can be challenging and time-consuming. The new product comparisons feature in AWS Marketplace simplifies this process for you. It uses machine learning to recommend similar SaaS products for consideration, then uses generative AI to summarize product information and customer reviews, highlight unique aspects of products, and help you understand key differences to identify the best product for your use cases. You can also customize the comparison sets and download comparison tables to share with colleagues.

The product comparisons feature is available for popular SaaS products in all commercial AWS Regions where AWS Marketplace is available.

Check out AI-generated product summaries in AWS Marketplace. Find the new experience on popular SaaS product pages such as Databricks Data Intelligence Platform and Trend Cloud One. To learn more about how the experience works, visit the AWS Marketplace Buyer Guide.

Read more


AWS Marketplace announces improved offer and agreement management capabilities for sellers

AWS Marketplace now offers improved capabilities to help sellers manage agreements and create new offers more efficiently. Sellers can access an improved agreements navigation experience, export details to PDF, and clone past private offers in the AWS Marketplace Management Portal.

The new agreements experience makes it easier to find agreements for a specific offer or customer and take action based on the agreement’s status: active, expiring, expired, replaced, or cancelled. This holistic view enables you to retrieve agreements faster, helping you prepare for customer engagements and identify renewal or expansion opportunities. To simplify sharing and offline collaboration, you can now export agreement details to PDF. Additionally, the new offer cloning capability enables you to replicate common offer configurations from past direct private offers, so you can quickly make adjustments for renewals and revisions to ongoing offers.

These features are available for all AWS Partners selling SaaS, Amazon Machine Images (AMI), containers, and professional services products in AWS Marketplace. To learn more, visit the AWS Marketplace Seller Guide, or access the AWS Marketplace Management Portal to try the new capabilities.

Read more


Enhanced account linking experience across AWS Marketplace and AWS Partner Central

Today, AWS announces an improved account linking experience for AWS Partners to create and connect their AWS Marketplace accounts with AWS Partner Central and onboard associated users. Account linking allows Partners to seamlessly navigate between Partner Central and the Marketplace Management Portal using Single Sign-On (SSO), connect Partner Central solutions to AWS Marketplace listings, link private offers to opportunities for tracking deals from pipeline to customer offers, and access AWS Marketplace insights within the centralized AWS Partner Analytics Dashboard. Linking accounts also unlocks access to valuable AWS Partner Network (APN) program benefits such as ISV Accelerate and accelerated sales cycles.

The new account linking experience introduces three major improvements to streamline the self-guided linking workflow. First, it simplifies the process to associate your AWS account with AWS Marketplace by registering your legal business name. Second, it automates the creation and bulk assignment of Identity and Access Management (IAM) roles to AWS Partner Central users, eliminating the need for manual creation in the AWS IAM console. Third, it introduces three new AWS managed policies to simplify permission management for AWS Partner Central and Marketplace access. The new policies offer fine-grained access options, ranging from full Partner Central access to personalized access to co-sell or marketplace offer management.

This new experience is available for all AWS Partners. To get started, navigate to the “Account Linking” feature on the AWS Partner Central homepage. To learn more, review the AWS Partner Central documentation.

Read more


AWS Partner Central now provides API for Selling with AWS

Today, AWS introduces the AWS Partner Central API for Selling, enabling AWS Partners to integrate their Customer Relationship Management (CRM) systems with AWS Partner Central. This API allows partners to streamline and scale their co-selling process by automating the creation and management of APN Customer Engagements (ACE) opportunities within their own CRM. It provides improved efficiency, scale, and error handling compared to the existing Amazon S3-based CRM integration, and is available to all AWS Partners.

AWS Partner Central API for Selling enables partners to create, update, view, and assign opportunities, as well as accept invitations to engage on AWS referrals. Additionally, partners can retrieve a list of their solutions on AWS Partner Central, and associate specific solutions, AWS products, or AWS Marketplace offers with opportunities as needed. Real-time notifications via Amazon EventBridge keep partners up to date on any changes to an opportunity. The API also integrates with AWS services, enabling partners to monitor co-selling via Amazon CloudWatch and audit with AWS CloudTrail. Partners can use this API in combination with the AWS Marketplace Catalog API to manage the entire opportunity-to-offer process directly within their CRM.
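
As a minimal sketch, assuming the boto3 partnercentral-selling client (the operation and Catalog parameter follow the published API; the response field names shown are our assumption, so check the API reference for exact shapes):

    import boto3

    # The Partner Central API for Selling is served from US East (N. Virginia)
    pc = boto3.client("partnercentral-selling", region_name="us-east-1")

    # List solutions registered in Partner Central; Catalog is "AWS" for
    # production or "Sandbox" for testing
    solutions = pc.list_solutions(Catalog="AWS")
    for summary in solutions.get("SolutionSummaries", []):
        print(summary.get("Id"), summary.get("Name"))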

AWS Partner Central API for Selling is now available in the US East (N. Virginia) region and is accessible through AWS SDKs in .NET, Python, Java, Go, and other programming languages. Partners can also use this API via the AWS Partner CRM Connector or our multiple integration partners.

Learn more on Automations for Partners. To get started, visit AWS Partner Central API documentation.

Read more


Announcing financing program for AWS Marketplace purchases for select US customers

Today, AWS announces the availability of a new financing program supported by PNC Vendor Finance, enabling select customers in the United States (US) to finance AWS Marketplace software purchases directly from the AWS Billing and Cost Management console. For the first time, select US customers can apply for, utilize, and manage financing within the console for AWS Marketplace software purchases.

AWS Marketplace helps customers find, try, buy, and launch third-party software, while consolidating billing and management with AWS. With thousands of software products available in AWS Marketplace, this financing program enables you to buy the software you need to drive innovation. With financing amounts ranging from $10,000 to $100,000,000, subject to credit approval, you have more options to pay for your AWS Marketplace purchases. If approved, you can use financing for AWS Marketplace software purchases that have contracts of at least 12 months. Financing can be applied to multiple purchases from multiple AWS Marketplace sellers. This program gives you the flexibility to better manage your cash flow by spreading payments over time, while paying financing costs only on what you use.

This new financing program supported by PNC Vendor Finance is available in the AWS Billing and Cost Management console for select AWS Marketplace customers in the US, excluding NV, NC, ND, TN, & VT.

To learn more about financing options for AWS Marketplace purchases and details about the financing program supported by PNC Vendor Finance, visit the AWS Marketplace financing page.
 

Read more


AWS Partner Central now supports dedicated Slack channels for collaboration on co-selling opportunities

AWS Partners can now request dedicated Slack channels through AWS Partner Central to collaborate with AWS sales teams on ACE co-selling opportunities. This feature simplifies communication, keeps all members updated on deal progression, and enables better collaboration and more efficient deal closure for strategic customer engagements.

Partners can request a Slack channel for an eligible open opportunity in the Collaboration Channels tab within the ACE Pipeline Manager in AWS Partner Central. The AWS sales team will receive notifications for collaboration requests through the AWS Secure Connect Slack application, allowing them to create dedicated Slack channels. Individuals from AWS and Partner opportunity teams, including account managers, solution architects, and success managers, can then engage directly through the channels for associated opportunities. These Slack channels include enhanced security controls to ensure only the designated opportunity team participates, helping to safeguard confidentiality. Each channel is also integrated with AWS Partner Central, delivering real-time updates on deal progress—such as stage changes and next steps—all within Slack. This new feature builds on the Slack Connect capability made available earlier this year.

This feature is available globally to ACE-eligible AWS Partners working on high-value deals, excluding deals related to national security or customers in the Greater China Region.

Log in to AWS Partner Central today to request Slack channels directly through the ACE Pipeline Manager in Partner Central and start collaborating, or ask your AWS sales contacts to create a channel.
 

Read more


aws-network-firewall

AWS Network Firewall expands the list of supported protocols and keywords in firewall rules

Today, we are excited to announce support for new protocols in AWS Network Firewall so you can protect your Amazon VPCs using application-specific inspection rules. With this launch, AWS Network Firewall detects protocols like HTTP2, QUIC, and PostgreSQL so you can apply firewall inspection rules to these protocols. You can also use new rule keywords in TLS, SNMP, DHCP, and Kerberos rules to apply granular security controls to your stateful inspection rules.
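
For example, a hedged boto3 sketch of a stateful rule group whose Suricata-compatible rules use the newly supported protocol keywords (the rule group name and rule content are illustrative, not recommended production rules):

    import boto3

    nfw = boto3.client("network-firewall")

    # Suricata-compatible rules matching the newly supported protocols
    rules = (
        'alert quic any any -> any any '
        '(msg:"QUIC traffic observed"; sid:1000001; rev:1;)\n'
        'drop http2 $HOME_NET any -> $EXTERNAL_NET any '
        '(msg:"Block outbound HTTP2"; sid:1000002; rev:1;)\n'
    )

    nfw.create_rule_group(
        RuleGroupName="protocol-aware-rules",  # hypothetical
        Type="STATEFUL",
        Capacity=100,
        RuleGroup={"RulesSource": {"RulesString": rules}},
    )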

AWS Network Firewall is a managed firewall service that makes it easy to deploy essential network protections for all your Amazon VPCs. Its flexible rules engine lets you define firewall rules that give you fine-grained control over network traffic. You can also enable AWS Managed Rules for intrusion detection and prevention signatures that protect against threats such as botnets, scanners, web attacks, phishing, and emerging events.

You can create AWS Network Firewall rules using the Amazon VPC console, the AWS CLI, or the Network Firewall API. To see which Regions AWS Network Firewall is available in, visit the AWS Region Table. For more information, please see the AWS Network Firewall product page and the service documentation.
 

Read more


aws-organizations

Amazon CloudWatch now provides centralized visibility into telemetry configurations

Amazon CloudWatch now offers centralized visibility into critical AWS service telemetry configurations, such as Amazon VPC Flow Logs, Amazon EC2 Detailed Metrics, and AWS Lambda Traces. This enhanced visibility enables central DevOps teams, system administrators, and service teams to identify potential gaps in their infrastructure monitoring setup. The telemetry configuration auditing experience seamlessly integrates with AWS Config to discover AWS resources, and can be turned on for the entire organization using the new AWS Organizations integration with Amazon CloudWatch.

With visibility into telemetry configurations, you can identify monitoring gaps that might have been missed in your current setup. For example, this helps you identify gaps in your EC2 detailed metrics so that you can address them and easily detect short-lived performance spikes and build responsive auto-scaling policies. You can audit telemetry configuration coverage at both resource type and individual resource levels, refining the view by filtering across specific accounts, resource types, or resource tags to focus on critical resources.

The telemetry configurations auditing experience is available in US East (N. Virginia), US West (Oregon), US East (Ohio), Asia Pacific (Tokyo), Asia Pacific (Singapore), Asia Pacific (Sydney), Europe (Frankfurt), Europe (Ireland), and Europe (Stockholm) regions. There is no additional cost to turn on the new experience, including for AWS Config.

You can get started with auditing your telemetry configurations using the Amazon CloudWatch Console, by clicking on Telemetry config in the navigation panel, or programmatically using the API/CLI. To learn more, visit our documentation.
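
For example, a minimal sketch of auditing telemetry coverage programmatically might look like the following; the 'observabilityadmin' client name, the list_resource_telemetry operation, and the response field names are assumptions to verify against the API reference.

import boto3

admin = boto3.client("observabilityadmin")

# List EC2 instances along with the state of their telemetry configuration
# (e.g., whether detailed metrics or flow logs are enabled).
resp = admin.list_resource_telemetry(ResourceTypes=["AWS::EC2::Instance"])
for cfg in resp.get("TelemetryConfigurations", []):
    print(cfg.get("ResourceIdentifier"), cfg.get("TelemetryConfigurationState"))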

Read more


Amazon Web Services announces declarative policies

Today, AWS announces the general availability of declarative policies, a new management policy type within AWS Organizations. These policies simplify the way customers enforce durable intent, such as baseline configuration for AWS services within their organization. For example, using declarative policies, customers can configure EC2 to allow instance launches only from AMIs vended by specific providers, or block public access in their VPCs, with a few clicks or commands applied across their entire organization.
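
As a sketch of what such an EC2 baseline might look like programmatically, the snippet below creates and attaches a declarative policy with boto3. The policy JSON shape, the DECLARATIVE_POLICY_EC2 type string, and the root ID are assumptions based on the launch documentation; verify them before use.

import json
import boto3

orgs = boto3.client("organizations")

# Block new public sharing of EBS snapshots across the organization.
policy_content = {
    "ec2_attributes": {
        "snapshot_block_public_access": {
            "state": {"@@assign": "block_new_sharing"}
        }
    }
}

policy = orgs.create_policy(
    Name="block-public-snapshots",
    Description="Block new public sharing of EBS snapshots org-wide",
    Type="DECLARATIVE_POLICY_EC2",  # assumed type string
    Content=json.dumps(policy_content),
)
orgs.attach_policy(
    PolicyId=policy["Policy"]["PolicySummary"]["Id"],
    TargetId="r-examplerootid",  # placeholder organization root ID
)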

Declarative policies are designed to prevent actions that are non-compliant with the policy. The configuration defined in the declarative policy is maintained even when services add new APIs or features, or when customers add new principals or accounts to their organization. With declarative policies, governance teams have access to the account status report, which provides insight into the current configuration for an AWS service across their organization. This helps them assess readiness to enforce configuration at scale. Administrators can provide additional transparency to end users by configuring custom error messages to redirect them to internal wikis or ticketing systems through declarative policies.

To get started, navigate to the AWS Organizations console to create and attach declarative policies. You can also use AWS Control Tower, AWS CLI or CloudFormation templates to configure these policies. Declarative policies today support EC2, EBS and VPC configurations with support for other services coming soon. To learn more see documentation and blog post.

Read more


AWS announces AWS Security Incident Response for general availability

Today, AWS announces the general availability of AWS Security Incident Response, a new service that helps you prepare for, respond to, and recover from security events. This service offers automated monitoring and investigation of security findings to free up your resources from routine tasks, communication and collaboration features to streamline response coordination, and direct 24/7 access to the AWS Customer Incident Response Team (CIRT).

Security Incident Response integrates with existing detection services, such as Amazon GuardDuty, and third-party tools through AWS Security Hub to rapidly review security alerts, escalate high-priority findings, and, with your permission, implement containment actions. It reduces the number of alerts your team needs to analyze, saving time and allowing your security personnel to focus on strategic initiatives. The service centralizes all incident-related communications, documentation, and actions, making coordinated incident response across internal and external stakeholders possible and reducing the time to coordinate from hours to minutes. You can preconfigure incident response team members, set up automatic notifications, manage case permissions, and use communication tools like video conferencing and in-console messaging during security events. By accessing the service through a single, centralized dashboard in the AWS Management Console, you can monitor active cases, review resolved security incident cases, and track key metrics, such as the number of triaged events and mean time to resolution, in real time. If you require specialized expertise, you can connect 24/7 to the AWS CIRT in only one step.

For more information about AWS Regions where Security Incident Response is available, refer to the following service documentation.

To get started, visit the Security Incident Response console, and explore the overview page to learn more. For configuration details, refer to the Security Incident Response User Guide.

Read more


AWS Backup now supports resource type and multiple tag selections in backup policies

Today, AWS Backup announces additional options to assign resources to a backup policy in AWS Organizations. Customers can now select specific resources by resource type and exclude them based on resource type or tag. They can also use a combination of multiple tags within the same resource selection.

With additional options to select resources, customers can implement flexible backup strategies across their organizations by combining multiple resource types and/or tags. They can also exclude resources they do not want to back up using resource type or tag, optimizing cost on non-critical resources.

To get started, use your AWS Organizations management account to create or edit an AWS Backup policy. Then, create or modify a resource selection using the AWS Organizations API, CLI, or JSON editor in either the AWS Organizations or AWS Backup console.

AWS Backup support for enhanced resource selection in backup policies is available in all commercial regions where AWS Backup’s cross account management is available. For more information, visit our documentation and launch blog.

Read more


Introducing resource control policies (RCPs) to centrally restrict access to AWS resources

AWS is excited to announce resource control policies (RCPs) in AWS Organizations to help you centrally establish a data perimeter across your AWS environment. With RCPs, you can centrally restrict external access to your AWS resources at scale. At launch, RCPs apply to resources of the following AWS services: Amazon Simple Storage Service (Amazon S3), AWS Security Token Service, AWS Key Management Service, Amazon Simple Queue Service, and AWS Secrets Manager.

RCPs are a type of organization policy that can be used to centrally create and enforce preventative controls on AWS resources in your organization. Using RCPs, you can centrally set the maximum available permissions to your AWS resources as you scale your workloads on AWS. For example, an RCP can help enforce the requirement that “no principal outside my organization can access Amazon S3 buckets in my organization,” regardless of the permissions granted through individual bucket policies. RCPs complement service control policies (SCPs), an existing type of organization policy. While SCPs offer central control over the maximum permissions for IAM roles and users in your organization, RCPs offer central control over the maximum permissions on AWS resources in your organization.
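
As a hedged sketch, the S3 example above could be expressed as an RCP like the following, created and attached with boto3. The organization ID and root ID are placeholders, and the RESOURCE_CONTROL_POLICY type string is an assumption to verify against the documentation.

import json
import boto3

orgs = boto3.client("organizations")

rcp = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "EnforceOrgIdentities",
        "Effect": "Deny",
        "Principal": "*",
        "Action": "s3:*",
        "Resource": "*",
        "Condition": {
            # Deny unless the caller belongs to this organization...
            "StringNotEqualsIfExists": {"aws:PrincipalOrgID": "o-exampleorgid"},
            # ...while still allowing AWS service principals to operate.
            "BoolIfExists": {"aws:PrincipalIsAWSService": "false"},
        },
    }],
}

policy = orgs.create_policy(
    Name="s3-data-perimeter",
    Description="No principal outside my organization can access my S3 buckets",
    Type="RESOURCE_CONTROL_POLICY",  # assumed type string
    Content=json.dumps(rcp),
)
orgs.attach_policy(
    PolicyId=policy["Policy"]["PolicySummary"]["Id"],
    TargetId="r-examplerootid",  # placeholder organization root ID
)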

Customers that use AWS IAM Access Analyzer to identify external access can review the impact of RCPs on their resource permissions. For an updated list of AWS services that support RCPs, refer to the list of services supporting RCPs. RCPs are available in all AWS commercial Regions. To learn more, visit the RCPs documentation.
 

Read more


aws-outposts

AWS simplifies the use of third-party block storage arrays with AWS Outposts

Starting today, customers can attach block data volumes backed by NetApp® on-premises enterprise storage arrays and Pure Storage® FlashArray™ to Amazon Elastic Compute Cloud (Amazon EC2) instances on AWS Outposts directly from the AWS Management Console. This makes it easier for customers to leverage third-party storage with Outposts. Outposts is a fully managed service that extends AWS infrastructure, AWS services, APIs, and tools to virtually any on-premises or edge location for a truly consistent hybrid experience.

With this enhancement, Outposts customers can combine the cloud capabilities offered by Outposts with the advanced data management features, high-density storage, and high performance offered by NetApp on-premises enterprise storage arrays and Pure Storage FlashArray. Today, customers can use Amazon Elastic Block Store (Amazon EBS) and Local Instance Store volumes to store and process data locally and comply with data residency requirements; now they can also do so using external volumes backed by compatible third-party storage. This lets customers maximize the value of their existing storage investments while benefiting from the cloud operational model enabled by Outposts.

This enhancement is available on Outposts racks and Outposts 2U servers at no additional charge in all AWS Regions where Outposts is available, except the AWS GovCloud Regions. See the FAQs for Outposts servers and Outposts racks for the latest availability information.

You can use the AWS Management Console or CLI to attach the third-party block data volumes to Amazon EC2 instances on Outposts. To learn more, check out this blog post.

Read more


Announcing static stability for Amazon EC2 instances backed by EC2 instance store on AWS Outposts

AWS Outposts now offers static stability for Amazon EC2 instances backed by EC2 instance store. Workloads running on such instances can now recover automatically from power failures or reboots, even when the connection to the parent AWS Region is temporarily unavailable. This means Outposts servers and Outposts racks can recover faster from power outages, minimizing downtime and data loss.

Outposts provides a consistent hybrid experience by bringing AWS services to customer premises and edge locations on fully managed AWS infrastructure. While Outposts typically runs connected to an AWS Region for resource management, access control, and software updates, the new static stability feature enables workloads running on EC2 instances backed by EC2 instance store to recover from power failures even when connectivity to the AWS Region is unavailable. Note that this capability is currently not available for EC2 instances backed by Amazon EBS volumes.

This capability is available in all AWS Regions where Outposts is supported. Check out the Outposts servers FAQs page and the Outposts rack FAQs page for the full list of supported Regions.

To get started, no customer action is required. Static stability is now enabled for all EC2 instances backed by EC2 instance store.

Read more


Self-service capacity management for AWS Outposts

AWS Outposts now supports self-service capacity management, making it easy for you to view and manage compute capacity on your Outposts. Outposts brings native AWS services, infrastructure, and operating models to virtually any data center, co-location space, or on-premises facility by providing the same services, tools, and partner solutions with EC2 on premises. Customers have evolving business requirements and often need to fine-tune their application needs as their business scales. Capacity management enables viewing and modifying the configuration of EC2 capacity installed on Outposts.

Customers define their capacity configuration when ordering a new Outpost to support a variety of instance types. With capacity management, customers can view the instances configured on their Outposts, their configured sizes, and their placement within the Outposts. Customers can also view, plan, and modify their capacity configuration through the new self-service UI and API.

These capacity management features are available in all AWS Regions where Outposts is supported. Check out the Outposts rack FAQs page and the Outposts servers FAQs page for the full list of supported Regions.

To learn more about these capacity management capabilities for Outposts, read the Outposts user guide. To discuss Outposts capacity needs for your on-premises workloads with an Outposts specialist, submit this form.
 

Read more


aws-private-certificate-authority

AWS Controllers for Kubernetes for AWS Private CA now generally available

AWS Controllers for Kubernetes (ACK) service controller for AWS Private Certificate Authority (AWS Private CA) has graduated to generally available status.

By using ACK service controller for AWS Private CA, customers can now provision and manage AWS Private CA certificate authorities (CAs) and private certificates directly from Kubernetes. You can use private certificates to secure containers with encryption and identify workloads. AWS Private CA enables creation of private CA hierarchies, including root and subordinate CAs, without the investment and maintenance costs of operating an on-premises CA. With AWS Private CA, you can issue certificates automatically and at scale from a highly available, managed cloud CA that is backed by hardware security modules.

To get started using ACK service controller for AWS Private CA visit the documentation. You can learn more about ACK and other service controllers here.

Read more


VPC Lattice now includes TCP support with VPC Resources

With the launch of VPC Resources for Amazon VPC Lattice, you can now access all of your application dependencies through a VPC Lattice service network. You can connect to your application dependencies hosted in different VPCs, accounts, and on-premises environments using additional protocols, including TLS, HTTP, HTTPS, and now TCP. This new feature expands upon the existing HTTP-based services support, enabling you to share a wider range of resources across your organization.

With VPC Resource support, you can add your TCP resources, such as Amazon RDS databases, custom DNS, or IP endpoints, to a VPC Lattice service network. Now, you can share and connect to all your application dependencies, such as HTTP APIs and TCP databases, across thousands of VPCs, simplifying network management and providing centralized visibility with built-in access controls.

VPC Resources are generally available with VPC Lattice in Africa (Cape Town), Asia Pacific (Mumbai), Asia Pacific (Singapore), Asia Pacific (Sydney), Europe (Frankfurt), Europe (Ireland), Europe (Paris), Europe (Stockholm), Middle East (Bahrain), South America (Sao Paulo), US East (N. Virginia), US East (Ohio), US West (Oregon).

To get started, read the VPC Resources launch blog, architecture blog, and VPC Lattice User Guide. To learn more about VPC Lattice, visit Amazon VPC Lattice Getting Started.
 

Read more


AWS PrivateLink customers can now use VPC endpoints (powered by AWS PrivateLink) to privately and securely access VPC resources. These resources, such as databases or clusters, can be in your VPC or on-premises network, need not be load-balanced, and can be shared with other teams in your organization or with external independent software vendor (ISV) partners.

AWS PrivateLink is a highly available and scalable technology that enables your VPCs to have private, unidirectional connections to VPC endpoint services, including supported AWS services and AWS Marketplace services, and now to VPC resources. Prior to this launch, customers could only access or share services that use Network Load Balancer or Gateway Load Balancer. Now, customers can share any VPC resource using AWS Resource Access Manager (AWS RAM). This resource can be an AWS-native resource such as an RDS database, a domain name, or an IP address in another VPC or on-premises environment. Once shared, the intended users can access these resources privately using VPC endpoints. They can use a resource VPC endpoint to access one resource, or pool multiple resources in an Amazon VPC Lattice service network and access the service network using a service network VPC endpoint. There are standard charges for sharing and accessing VPC resources; please see the pricing pages for AWS PrivateLink and VPC Lattice.
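
On the consumer side, a minimal sketch of creating a resource-type VPC endpoint for a shared resource configuration might look like the following; the 'Resource' endpoint type, the ResourceConfigurationArn parameter, and all IDs and ARNs are assumptions based on this launch, to verify against the EC2 API reference.

import boto3

ec2 = boto3.client("ec2")

endpoint = ec2.create_vpc_endpoint(
    VpcEndpointType="Resource",  # assumed endpoint type for this launch
    VpcId="vpc-0123456789abcdef0",  # placeholder
    SubnetIds=["subnet-0123456789abcdef0"],  # placeholder
    ResourceConfigurationArn=(
        "arn:aws:vpc-lattice:us-east-1:111122223333:"
        "resourceconfiguration/rcfg-exampleid"  # placeholder ARN
    ),
)
print(endpoint["VpcEndpoint"]["VpcEndpointId"])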

This capability is available in the following AWS Regions: US East (N. Virginia), US East (Ohio), US West (N. California), US West (Oregon), Africa (Cape Town), Asia Pacific (Hong Kong), Asia Pacific (Mumbai), Asia Pacific (Osaka), Asia Pacific (Seoul), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Canada (Central), Europe (Frankfurt), Europe (Ireland), Europe (London), Europe (Milan), Europe (Paris), Europe (Stockholm), Middle East (Bahrain), and South America (Sao Paulo).

To learn more about this capability and get started, please read our launch blog or refer to the AWS PrivateLink documentation.

Read more


AWS PrivateLink now supports native cross-region connectivity. Until now, Interface VPC endpoints only supported connectivity to VPC endpoint services in the same region. This launch enables customers to connect to VPC endpoint services hosted in other AWS Regions in the same AWS partition over Interface endpoints.

As a service provider, you can enable access to your VPC endpoint service for customers in all existing and upcoming AWS Regions without the need to set up additional infrastructure in each region. As a service consumer, you can privately connect to VPC endpoint services in other AWS Regions without the need to set up cross-region peering or expose your data over the public internet. Cross-region enabled VPC endpoint services can be accessed through Interface endpoints at a private IP address in your VPC, enabling simpler and more secure inter-region connectivity.

To learn about pricing for this feature, please see the AWS PrivateLink pricing page. The capability is available in US East (N. Virginia), US West (Oregon), Europe (Ireland), Asia Pacific (Singapore), South America (São Paulo), Asia Pacific (Tokyo) and Asia Pacific (Sydney) Regions. To learn more, visit AWS PrivateLink in the Amazon VPC Developer Guide.

Read more


aws-resilience-hub

AWS Resilience Hub introduces a summary view

AWS Resilience Hub introduces a new summary view, providing an executive-level view of the resilience posture of the application portfolio defined in Resilience Hub. The new summary view allows you to visualize the state of your application portfolio, so you can efficiently manage and improve your applications' ability to withstand and recover from disruptions.

Understanding the current state of application resilience can be a challenge, especially when it comes to identifying which applications need attention and communicating this information across your organization. The new summary view in Resilience Hub helps you quickly identify applications that require remediation and streamline resilience management across your application portfolio. In addition to the new summary view, we are providing the ability to export the data powering the summary view so you can create custom reports for stakeholder communication. The summary and export functions allow teams to quickly assess the current state of application resilience and take necessary actions to improve it.

The new summary view is available in all of the AWS Regions where AWS Resilience Hub is supported. For the most up-to-date availability information, see the AWS Regional Services List.

To learn more about AWS Resilience Hub, visit our product page. To get started with AWS Resilience Hub, sign into the AWS console.

Read more


aws-resource-explorer

Find security, compliance, and operating metrics in AWS Resource Explorer

Today, AWS announced the general availability of a new console experience in AWS Resource Explorer that centralizes resource insights and properties from AWS services. With this release, you now have a single console experience to use simple keyword-based search for your AWS resources, view relevant resource properties, and confidently take action to organize your resources.

You can now inspect resource properties, resource-level cost with AWS Cost Explorer, AWS Security Hub findings, AWS Config compliance and configuration history, event timelines with AWS CloudTrail, and a relationship graph showing connected resources. You can also take actions on resources directly from the Resource Explorer console, such as managing tags, adding resources to applications, and getting additional information about a resource with Amazon Q. For example, you can now use Resource Explorer to search for untagged AWS Lambda functions, inspect the properties and tags of a specific function, examine a relationship graph to see what other resources it is connected to, and tag the function accordingly, all from a single console.
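
The untagged-function example above can also be scripted. Below is a minimal sketch against the Resource Explorer search API; the query syntax shown (tag:none for untagged resources) is an assumption to check against the search query reference.

import boto3

rex = boto3.client("resource-explorer-2")

# Find Lambda functions with no tags in the indexed Regions.
resp = rex.search(QueryString="resourcetype:lambda:function tag:none")
for resource in resp.get("Resources", []):
    print(resource["Arn"])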

Resource Explorer is available at no additional charge, though features such as compliance information and configuration history require use of AWS Config, which is charged separately. These features are available in all AWS Regions where Resource Explorer is generally available. For more information on Resource Explorer, please visit our documentation. To learn more about how to configure Resource Explorer for your organization, view our multi-account search getting started guide.

Read more


aws-security-hub

AWS announces AWS Security Incident Response for general availability

Today, AWS announces the general availability of AWS Security Incident Response, a new service that helps you prepare for, respond to, and recover from security events. This service offers automated monitoring and investigation of security findings to free up your resources from routine tasks, communication and collaboration features to streamline response coordination, and direct 24/7 access to the AWS Customer Incident Response Team (CIRT).

Security Incident Response integrates with existing detection services, such as Amazon GuardDuty, and third-party tools through AWS Security Hub to rapidly review security alerts, escalate high-priority findings, and, with your permission, implement containment actions. It reduces the number of alerts your team needs to analyze, saving time and allowing your security personnel to focus on strategic initiatives. The service centralizes all incident-related communications, documentation, and actions, making coordinated incident response across internal and external stakeholders possible and reducing the time to coordinate from hours to minutes. You can preconfigure incident response team members, set up automatic notifications, manage case permissions, and use communication tools like video conferencing and in-console messaging during security events. By accessing the service through a single, centralized dashboard in the AWS Management Console, you can monitor active cases, review resolved security incident cases, and track key metrics, such as the number of triaged events and mean time to resolution, in real time. If you require specialized expertise, you can connect 24/7 to the AWS CIRT in only one step.

For more information about AWS Regions where Security Incident Response is available, refer to the following service documentation.

To get started, visit the Security Incident Response console, and explore the overview page to learn more. For configuration details, refer to the Security Incident Response User Guide.

Read more


AWS Security Hub launches 7 new security controls

AWS Security Hub has released 7 new security controls, increasing the total number of controls offered to 437. Security Hub released new controls for Amazon Simple Notification Service (Amazon SNS) topics and AWS Key Management Service (AWS KMS) keys that check for public access. Security Hub now also supports additional encryption checks for key AWS services such as AWS AppSync and Amazon Elastic File System (Amazon EFS). For the full list of recently released controls and the AWS Regions in which they are available, visit the Security Hub user guide.

To use the new controls, turn on the standard they belong to. Security Hub will then start evaluating your security posture and monitoring your resources for the relevant security controls. You can use central configuration to do so across all your organization accounts and linked Regions with a single action. If you are already using the relevant standards and have Security Hub configured to automatically enable new controls, these new controls will run without taking any additional action.
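
As a sketch of doing this with central configuration, the snippet below creates a configuration policy that enables a standard and associates it with the organization root. The operation names, policy shape, standard ARN, and root ID are assumptions to verify against the Security Hub API reference.

import boto3

sh = boto3.client("securityhub")

policy = sh.create_configuration_policy(
    Name="enable-fsbp-standard",
    Description="Enable Security Hub and the FSBP standard in all accounts",
    ConfigurationPolicy={
        "SecurityHub": {
            "ServiceEnabled": True,
            "EnabledStandardIdentifiers": [
                # Assumed standard ARN format; confirm in the docs.
                "arn:aws:securityhub:us-east-1::standards/aws-foundational-security-best-practices/v/1.0.0"
            ],
            "SecurityControlsConfiguration": {"DisabledSecurityControlIds": []},
        }
    },
)
sh.start_configuration_policy_association(
    ConfigurationPolicyIdentifier=policy["Id"],
    Target={"RootId": "r-examplerootid"},  # placeholder root ID
)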

To get started, consult the following list of resources:

Read more


aws-security-incident-response

AWS announces AWS Security Incident Response for general availability

Today, AWS announces the general availability of AWS Security Incident Response, a new service that helps you prepare for, respond to, and recover from security events. This service offers automated monitoring and investigation of security findings to free up your resources from routine tasks, communication and collaboration features to streamline response coordination, and direct 24/7 access to the AWS Customer Incident Response Team (CIRT).

Security Incident Response integrates with existing detection services, such as Amazon GuardDuty, and third-party tools through AWS Security Hub to rapidly review security alerts, escalate high-priority findings, and, with your permission, implement containment actions. It reduces the number of alerts your team needs to analyze, saving time and allowing your security personnel to focus on strategic initiatives. The service centralizes all incident-related communications, documentation, and actions, making coordinated incident response across internal and external stakeholders possible and reducing the time to coordinate from hours to minutes. You can preconfigure incident response team members, set up automatic notifications, manage case permissions, and use communication tools like video conferencing and in-console messaging during security events. By accessing the service through a single, centralized dashboard in the AWS Management Console, you can monitor active cases, review resolved security incident cases, and track key metrics, such as the number of triaged events and mean time to resolution, in real time. If you require specialized expertise, you can connect 24/7 to the AWS CIRT in only one step.

For more information about AWS Regions where Security Incident Response is available, refer to the following service documentation.

To get started, visit the Security Incident Response console, and explore the overview page to learn more. For configuration details, refer to the Security Incident Response User Guide.

Read more


aws-shield

AWS Shield Advanced is now available in Asia Pacific (Malaysia) Region

Starting today, you can use AWS Shield Advanced in the AWS Asia Pacific (Malaysia) Region. AWS Shield Advanced is a managed application security service that safeguards applications running on AWS from distributed denial of service (DDoS) attacks. Shield Advanced provides always-on detection and automatic inline mitigations that minimize application downtime and latency from DDoS attacks. It also provides protections against more sophisticated and larger attacks for your applications running on Amazon Elastic Compute Cloud (EC2), Amazon Elastic Load Balancing (ELB), Amazon CloudFront, AWS Global Accelerator, and Amazon Route 53. To learn more, visit the AWS Shield Advanced product page.

For a full list of AWS regions where AWS Shield Advanced is available, visit the AWS Regional Services page. AWS Shield Advanced pricing may vary between regions. For more information about pricing, visit the AWS Shield Pricing page.
 

Read more


aws-step-functions

AWS Step Functions simplifies developer experience with Variables and JSONata transformations

AWS Step Functions announces support for two new capabilities: Variables and JSONata data transformations. Variables allow developers to assign data in one state and reference it in a subsequent state, simplifying state payload management and reducing the need to pass data through multiple intermediate states. With support for JSONata, an open source query and transformation language, customers can now perform advanced data manipulation and transformations such as date and time formatting and mathematical operations. Additionally, when using JSONata, we have simplified input and output processing by reducing the number of JSON transformation fields required to call services and pass data to the next state.
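
To make the two features concrete, here is a hedged sketch of a state machine definition that assigns a variable in one state and references it from a JSONata expression in another, deployed with boto3. The field names follow the launch announcement as I understand it; the role ARN is a placeholder.

import json
import boto3

definition = {
    "QueryLanguage": "JSONata",
    "StartAt": "AssignVars",
    "States": {
        "AssignVars": {
            "Type": "Pass",
            # Store a value once; later states can read it without threading
            # it through every intermediate payload.
            "Assign": {"orderId": "{% $states.input.order.id %}"},
            "Next": "FormatOutput",
        },
        "FormatOutput": {
            "Type": "Pass",
            # JSONata expression referencing the variable assigned above.
            "Output": {"message": "{% 'Processed order ' & $orderId %}"},
            "End": True,
        },
    },
}

sfn = boto3.client("stepfunctions")
sfn.create_state_machine(
    name="jsonata-variables-example",
    definition=json.dumps(definition),
    roleArn="arn:aws:iam::111122223333:role/StepFunctionsExampleRole",  # placeholder
)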

AWS Step Functions is a visual workflow service capable of orchestrating over 14,000 API actions from over 220 AWS services to build distributed applications and data processing workloads. With support for Variables and JSONata, developers can build distributed serverless applications faster and more efficiently with enhanced payload management capabilities. These features also reduce the need for custom code, lowering costs and reducing the number of state transitions needed to construct and pass data between states.

Variables and JSONata are available at no additional cost in: US East (N. Virginia, Ohio), US West (Oregon), Canada (Central), Europe (Ireland and Frankfurt), and Asia Pacific (Tokyo, Seoul, Singapore, and Sydney) with the remaining regions to follow in the coming days. We have also partnered with LocalStack and Datadog to ensure that their local emulation and observability experiences are updated to support Variables and JSONata. To learn more, please visit:

Read more


Announcing Infrastructure as Code template generation for AWS Step Functions

AWS Step Functions now supports exporting workflows as AWS CloudFormation or AWS Serverless Application Model (SAM) templates directly in the AWS console. This allows for centralized and repeatable provisioning and management of your workflow configurations. AWS Step Functions is a visual workflow service capable of orchestrating virtually any AWS service to automate business processes and data processing workloads.

Now, you can export and customize templates from existing workflows to easily provision them in other accounts or jump-start the creation of new workflows. When you combine the Step Functions templates you generate with those from other services, you can provision your entire application using AWS CloudFormation stacks. Additionally, you can export your workflows to the AWS Infrastructure Composer console to take advantage of the visual builder capabilities to create a new serverless application project. Using Infrastructure Composer, you can connect the workflow with other AWS resources and generate the resource configurations in an AWS SAM template.

For more information about the AWS Regions where AWS Step Functions is available, see the AWS Region table. You can get started in the AWS console. To learn more, see the AWS Step Functions Developer Guide.

Read more


aws-support

AWS support case management is now available in AWS Chatbot for Microsoft Teams and Slack

AWS Chatbot announces general availability of AWS Support case management in Microsoft Teams and Slack. AWS customers can now use AWS Chatbot to monitor AWS support cases updates and respond to them from chat channels.

When troubleshooting issues, customers need to stay up to date on the latest support case activity in the place where they are collaborating. Previously, customers had to install a separate app or navigate to the Console to manage support cases. Now, customers can monitor and manage support cases from Microsoft Teams and Slack with AWS Chatbot.

To manage support cases from chat channels with AWS Chatbot, customers subscribe a chat channel to support case events published in EventBridge. As new case correspondences are added, AWS Chatbot sends the support case update notifications to the configured chat channels. Channel members can then use action buttons on the notifications to view the latest case updates and respond to them without leaving the chat channel.

To interact with support cases in chat channels, you must have a Business, Enterprise On-Ramp, or Enterprise Support plan. Case management in chat applications is available at no additional cost in AWS Regions where AWS Chatbot is offered. Get started with AWS Chatbot by visiting the AWS Chatbot console and by downloading the AWS Chatbot app from the Microsoft Teams marketplace or Slack App Directory. Visit the AWS Chatbot product page and the Managing AWS Support cases from chat channels in AWS Chatbot documentation to learn more.
 

Read more


AWS Incident Detection and Response now available in 16 additional AWS regions

Starting today, AWS Incident Detection and Response is available in 16 additional AWS regions. This service provides AWS Enterprise Support customers with proactive engagement and incident management, aimed at minimizing the risk of failures and accelerating the recovery of your critical workloads. AWS experts will assess your workloads for resilience and observability, and create customized runbooks for incident management. AWS Incident Management Engineers (IMEs) are on call 24/7 to detect incidents and engage you within 5 minutes of an alarm to offer guidance for mitigation and recovery.

With this release, AWS Incident Detection and Response is now available in the following AWS regions: Africa (Cape Town), Asia Pacific (Seoul), Asia Pacific (Osaka), Middle East (Bahrain), Asia Pacific (Hong Kong), Middle East (UAE), Asia Pacific (Jakarta), Asia Pacific (Hyderabad), Asia Pacific (Melbourne), Europe (Zurich), Europe (Spain), Canada West (Calgary), Israel (Tel Aviv), Europe (Milan), North America (Calgary), and Asia Pacific (Malaysia).

Visit the eligible AWS regions to see the full list of all supported regions. Visit the AWS Incident Detection and Response product page to get started.
 

Read more


aws-systems-manager

AWS Systems Manager now supports Windows Server 2025, Ubuntu Server 24.04, and Ubuntu Server 24.10

AWS Systems Manager now supports instances running Windows Server 2025, Ubuntu Server 24.04, and Ubuntu Server 24.10. Systems Manager customers running these operating system versions now have access to all AWS Systems Manager Node Management capabilities, including Fleet Manager, Compliance, Inventory, Hybrid Activations, Session Manager, Run Command, State Manager, Patch Manager, and Distributor. For a full list of supported operating systems and machine types for AWS Systems Manager, see the user guide. Patch Manager enables you to automatically patch instances with both security-related and other types of updates across your infrastructure for a variety of common operating systems, including Windows Server, Amazon Linux, and Red Hat Enterprise Linux (RHEL). For a full list of supported operating systems for AWS Systems Manager Patch Manager, see the Patch Manager prerequisites user guide page.

This feature is available in all AWS Regions where AWS Systems Manager is available. For more information, visit the Systems Manager product page and Systems Manager documentation.
 

Read more


The new AWS Systems Manager experience: Simplifying node management

The new AWS Systems Manager experience helps you scale operational efficiency by simplifying node management, making it easier to manage nodes running anywhere, whether they are EC2 instances, hybrid servers, or servers running in a multicloud environment. The new AWS Systems Manager experience gives you a comprehensive, centralized view to easily manage all of your nodes at scale.

With this launch, you can now see all managed and unmanaged nodes across your organizations’ AWS accounts and Regions from a single place. You can also identify, diagnose, and remediate unmanaged nodes. Once remediated, meaning they are managed by Systems Manager, you can leverage the full suite of Systems Manager tools to patch nodes with security updates, securely connect to nodes without managing SSH keys or bastion hosts, automate operational commands at scale, and gain comprehensive visibility across your entire fleet. Systems Manager is also now integrated with Amazon Q Developer which extends your ability to see and control your nodes from anywhere in the AWS console. For example, you can ask Amazon Q to “show me managed instances running Amazon Linux 1” to quickly get the information you need for operational investigations. It's the same powerful Systems Manager many customers rely on, improved and simplified to help you save time and effort.

The new Systems Manager experience is available in AWS Regions found here.

Get started now at no additional cost and easily enable the new experience in Systems Manager. For more information, visit the Systems Manager product page and user guide.
 

Read more


aws-tools-and-sdks

Announcing business planning feature in AWS Partner Central

AWS Partner Central is launching a business planning feature to help AWS Partners create successful partnerships and accelerate co-sell with AWS.

Currently, Partners have multiple touchpoints, conversations, and emails with AWS Partner management and sales teams as part of business planning exercises. AWS is making this collaboration easier and more efficient by centralizing the business planning process and standardizing templates in Partner Central. This will provide a central mechanism to help track progress toward business goals with AWS.

Partners can create joint business plans with AWS that are tailor-made for their unique business needs. Partners can review and edit inputs, set goals, and track progress in a single experience. Comprehensive reporting provides year-to-date actual performance, current-year attainment, and year-over-year changes for selected business metrics, reducing manual effort for collecting data from various sources.

The business planning feature is available to AWS Partners who are actively engaged with AWS Partner management teams to create joint business plans. To get started, reach out to your AWS Partner contact to initiate a business plan. Once a draft plan is shared, log in to AWS Partner Central, navigate to “My company,” and click on “Business plan” to start collaborating.

Read more


aws-transfer-family

Announcing AWS Transfer Family web apps

AWS Transfer Family web apps are a new resource that you can use to create a simple interface for accessing your data in Amazon S3 through a web browser. With Transfer Family web apps, you can provide your workforce with a fully managed, branded, and secure portal for your end users to browse, upload, and download data in S3.

Transfer Family offers fully managed file transfers over SFTP, FTPS, FTP, and AS2, enabling seamless workload migrations with no need to change your third-party clients or their configurations. Now, you can also enable browser-based transfers for non-technical users in your organization through a user-friendly interface. Transfer Family web apps are integrated with AWS IAM Identity Center and S3 Access Grants, enabling fine-grained access controls that map corporate identities in your existing directories directly to S3 datasets. With a few clicks in the Transfer Family console, you can generate a shareable URL for your web app. Then, your authenticated users can start accessing data you authorize them to access through their web browsers.

Transfer Family web apps are available in select AWS Regions. You can get started with Transfer Family web apps in the Transfer Family console. For pricing, visit the Transfer Family pricing page. To learn more, read the AWS News Blog or visit the Transfer Family User Guide.
 

Read more


AWS Transfer Family is now available in the AWS Asia Pacific (Malaysia) Region

Customers in the AWS Asia Pacific (Malaysia) Region can now use AWS Transfer Family.

AWS Transfer Family provides fully managed file transfers for Amazon Simple Storage Service (Amazon S3) and Amazon Elastic File System (Amazon EFS) over SSH File Transfer Protocol (SFTP), File Transfer Protocol (FTP), FTP over SSL (FTPS), and Applicability Statement 2 (AS2). In addition to file transfers, Transfer Family enables common file processing and event-driven automation for managed file transfer (MFT) workflows, helping customers modernize and migrate their business-to-business file transfers to AWS.

To learn more about AWS Transfer Family, visit our product page and user guide. See the AWS Region Table for complete regional availability information.

Read more


aws-transit-gateway

AWS Transit Gateway and AWS Cloud WAN enhance visibility metrics and Path MTU support

AWS Transit Gateway (TGW) and AWS Cloud WAN now support per availability zone (AZ) metrics delivered to CloudWatch. Furthermore, both services now support Path Maximum Transmission Unit Discovery (PMTUD) for effective mitigation against MTU mismatch issues in their global networks.

TGW and Cloud WAN allow customers to monitor their global network through performance and traffic metrics such as bytes in/out, packets in/out, and packets dropped. Until now, these metrics were available at the attachment level and at aggregate TGW and Core Network Edge (CNE) levels. With this launch, customers have more granular visibility into AZ-level metrics for VPC attachments. AZ-level metrics enable customers to rapidly troubleshoot any AZ impairments and provide deeper visibility into AZ-level traffic patterns across TGW and Cloud WAN.
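
As a sketch, the new per-AZ metrics can be read from CloudWatch like any other metric. The AWS/TransitGateway namespace and TransitGateway dimension already exist; the availability-zone dimension name below is an assumption to confirm in the documentation.

from datetime import datetime, timedelta, timezone
import boto3

cw = boto3.client("cloudwatch")
end = datetime.now(timezone.utc)

resp = cw.get_metric_statistics(
    Namespace="AWS/TransitGateway",
    MetricName="BytesIn",
    Dimensions=[
        {"Name": "TransitGateway", "Value": "tgw-0123456789abcdef0"},  # placeholder
        {"Name": "AvailabilityZone", "Value": "us-east-1a"},  # assumed dimension name
    ],
    StartTime=end - timedelta(hours=1),
    EndTime=end,
    Period=300,
    Statistics=["Sum"],
)
for point in sorted(resp["Datapoints"], key=lambda p: p["Timestamp"]):
    print(point["Timestamp"], point["Sum"])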

TGW and Cloud WAN now also support the standard PMTUD mechanism for traffic ingressing on VPC attachments. Until now, jumbo-sized packets exceeding the TGW/CNE MTU (8500 bytes) would get silently dropped on VPC attachments. With this launch, an Internet Control Message Protocol (ICMP) Fragmentation Needed response message is sent back to sender hosts, allowing them to remediate packet MTU size and thus minimize packet loss due to MTU mismatches in their network. PMTUD support is available for both IPv4 and IPv6 packets.

The per-AZ CloudWatch metrics and PMTUD support are available within each service in all AWS Regions where TGW or Cloud WAN are available. For more information, see the AWS Transit Gateway and AWS Cloud WAN documentation pages.

Read more


aws-user-notifications

Announcing the new AWS User Notifications SDK

Today, we announced the general availability of the AWS User Notifications SDK, which enables you to programmatically configure and receive notifications (e.g., AWS Health events, EC2 instance state changes, or CloudWatch alarms). The User Notifications SDK makes it easy to automate the creation of notification configurations in your accounts; e.g., a Cloud Center of Excellence (CCoE) can set up AWS Health notifications for each provisioned account.

With the User Notifications SDK, you specify which events you want to be notified about, and in which channels (email, AWS Chatbot for Microsoft Teams and Slack notifications, and AWS Console Mobile App push notifications), with no need to access the Management Console. Users with User Notifications permissions can enable notifications for use cases like AWS Health events, Amazon CloudWatch alarms, or Amazon EC2 instance state changes. For example, notify your team's Slack channel whenever an EC2 instance in US East (N. Virginia) or Europe (Frankfurt) with the tag 'production' changes state to 'stopped'.
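
A hedged sketch of the EC2 example above via the new SDK follows; the 'notifications' client name and the operation and parameter names are my reading of the API reference and should be treated as assumptions.

import json
import boto3

notifications = boto3.client("notifications")

config = notifications.create_notification_configuration(
    name="ec2-stopped-production",
    description="Notify when tagged production instances stop",
)

notifications.create_event_rule(
    notificationConfigurationArn=config["arn"],
    source="aws.ec2",
    eventType="EC2 Instance State-change Notification",
    regions=["us-east-1", "eu-central-1"],
    # Match only stopped instances; tag-based filtering is illustrative.
    eventPattern=json.dumps({"detail": {"state": ["stopped"]}}),
)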

The User Notifications SDK is offered at no additional cost.

For more information, visit the AWS User Notifications product page and documentation. To get started, go to AWS User Notifications API reference and AWS User Notifications Contacts API reference. CloudFormation support will be coming soon.

Read more


AWS End User Messaging launches message feedback tracking

Today, AWS End User Messaging launches the ability to track feedback for messages sent through the SMS and MMS channels. AWS End User Messaging provides developers with a scalable and cost-effective messaging infrastructure without compromising the safety, security, or results of their communications.

For each SMS and MMS you send, you can now track message feedback rates like one-time passcode conversions, promotional offer link clicks, or online shopping cart additions. Message feedback rates allow you to track leading indicators of message performance that are specific to your use case.

To learn more, visit the AWS End User Messaging SMS User Guide.
 

Read more


AWS End User Messaging announces integration with Amazon EventBridge

Today, AWS End User Messaging announces an integration with Amazon EventBridge. AWS End User Messaging provides developers with a scalable and cost-effective messaging infrastructure without compromising the safety, security, or results of their communications.

Now, your SMS, MMS, and voice delivery events, which contain information such as message status, price, and carrier, are available in EventBridge. You can then send your SMS events to other AWS services and the many SaaS applications that EventBridge integrates with. EventBridge also allows you to create rules that filter and route your SMS events to the event destinations you specify.
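
For example, a minimal sketch of routing delivery events with an EventBridge rule is below. The event source string and detail fields for AWS End User Messaging are placeholders to verify in the user guide; put_rule and put_targets themselves are standard EventBridge calls.

import json
import boto3

events = boto3.client("events")

events.put_rule(
    Name="sms-delivery-failures",
    EventPattern=json.dumps({
        "source": ["aws.sms-voice"],  # placeholder source; confirm in the docs
        "detail": {"eventType": ["TEXT_DELIVERY_FAILURE"]},  # illustrative filter
    }),
)
events.put_targets(
    Rule="sms-delivery-failures",
    Targets=[{
        "Id": "failure-queue",
        "Arn": "arn:aws:sqs:us-east-1:111122223333:sms-failures",  # placeholder
    }],
)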

To learn more, visit the AWS End User Messaging SMS User Guide.
 

Read more


aws-verified-access

AWS Verified Access now supports secure access to resources over non-HTTP(S) protocols (Preview)

Today, AWS announces the preview of a new AWS Verified Access feature that supports secure access to resources that connect over protocols such as TCP, SSH, and RDP. With this launch, Verified Access enables you to provide secure, VPN-less access to your corporate applications and resources using AWS zero trust principles. This feature eliminates the need to manage separate access and connectivity solutions for your non-HTTP(S) resources on AWS and simplifies security operations.

Verified Access evaluates each access request in real time based on the user's identity and device posture, using fine-grained policies. With this feature, you can extend your existing Verified Access policies to enable secure access to non-HTTP(S) resources such as Git repositories, databases, and groups of EC2 instances. For example, you can create centrally managed policies that grant SSH access across your EC2 fleet to only authenticated members of the system administration team, while ensuring that connections are permitted only from compliant devices. This simplifies your security operations by allowing you to create, group, and manage access policies for applications and resources with similar security requirements from a single interface.

This feature of AWS Verified Access is available in preview in 18 AWS regions: US East (Ohio), US East (N. Virginia), US West (N. California), US West (Oregon), Canada (Central), Asia Pacific (Sydney), Asia Pacific (Jakarta), Asia Pacific (Tokyo), Asia Pacific (Mumbai), Asia Pacific (Singapore), Europe (Ireland), Europe (London), Europe (Frankfurt), Europe (Milan), Europe (Stockholm), South America (São Paulo), and Israel (Tel Aviv).

To learn more, visit the product page, launch blog and documentation.

Read more


aws-well-architected-tool

AWS Well-Architected adds enhanced implementation guidance

Today, we are announcing updates to the AWS Well-Architected Framework, featuring comprehensive guidance to help customers build and operate secure, high-performing, resilient, and efficient workloads on AWS. This update includes 14 newly refreshed best practices, including updates to the Reliability Pillar that represent its first major improvements since 2022.

The refreshed Framework offers prescriptive guidance, expanded best practices, and updated resources to help customers tailor AWS recommendations to their specific needs, accelerating cloud adoption and applying best practices more effectively.

These updates strengthen workload security, reliability, and efficiency, empowering organizations to scale confidently and build resilient, sustainable architectures. The Reliability Pillar, in particular, provides deeper insights for creating dependable cloud solutions.

What partners are saying about the updated guidance: "While the updated content that the AWS Well-Architected Team is generating is massively helpful for both WA Partners and those AWS Consulting Partners who want to become WA Partners, what's most powerful is the focus on partners automating their WA practices," says Lorenzo Modesto, CEO of Well-Architected Partner 6Pillar.

The updated AWS Well-Architected Framework is available now for all AWS customers. Updates in this release will be incorporated into the AWS Well-Architected Tool in future releases, which you can use to review your workloads, address important design considerations, and help you follow the AWS Well-Architected Framework guidance. To learn more about the AWS Well-Architected Framework, visit the AWS Well-Architected Framework documentation.
 

Read more


aws-wickr

AWS Wickr is now available in the AWS Asia Pacific (Malaysia) Region

AWS Wickr now allows you to establish a network in the Asia Pacific (Malaysia) Region to help you meet data residency requirements and other obligations.

AWS Wickr is a security-first messaging and collaboration service with features designed to help keep your internal and external communications secure, private, and compliant. AWS Wickr protects one-to-one and group messaging, voice and video calling, file sharing, screen sharing, and location sharing with end-to-end encryption. Customers have full administrative control over data, which includes addressing information governance policies, configuring ephemeral messaging options, and deleting credentials for lost or stolen devices. You can log both internal and external conversations in an AWS Wickr network to a private data store that you manage, for data retention and auditing purposes.

AWS Wickr is available in the AWS US East (N. Virginia), AWS GovCloud (US-West), AWS Canada (Central), AWS Europe (London, Frankfurt, Stockholm, and Zurich), and AWS Asia Pacific (Singapore, Sydney, Tokyo and now Malaysia) Regions.

To learn more and get started, see the following resources:

Read more


aws-x-ray

Application Signals provides OTEL support via X-Ray OTLP endpoint for traces

CloudWatch Application Signals, an application performance monitoring (APM) solution, enables developers and operators to easily monitor the health and performance of their applications hosted across different compute platforms such as Amazon EKS and Amazon ECS. Customers can now use OpenTelemetry Protocol (OTLP), an open-source protocol, to send traces to the X-Ray OTLP endpoint and unlock application performance monitoring capabilities with Application Signals.

OpenTelemetry Protocol (OTLP) is a standardized protocol for transmitting telemetry data from your applications to monitoring solutions like CloudWatch. Developers who use OpenTelemetry to instrument their applications can now send traces to the X-Ray OTLP endpoint, unlocking, via Application Signals, pre-built, standardized dashboards for critical application metrics (throughput, latency, errors), correlated trace spans, and interactions between applications and their dependencies (such as other AWS services). This provides operators with a complete picture of the application's health, allowing them to pinpoint the source of performance issues. By creating Service Level Objectives (SLOs) within Application Signals, customers can track performance indicators for crucial application functions, making it simple to spot and address any operations falling short of their business goals. Finally, customers can also analyze application issues in business context, such as troubleshooting customer support tickets or finding the top customers impacted by application disruptions, by searching and analyzing transaction (trace) spans.
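
As a sketch of the OpenTelemetry side, the snippet below wires a Python application to export spans over OTLP. It targets a local OpenTelemetry/ADOT collector (a common pattern), which is assumed to be configured to forward traces to the X-Ray OTLP endpoint with SigV4 signing; the service and span names are illustrative.

from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor
from opentelemetry.exporter.otlp.proto.http.trace_exporter import OTLPSpanExporter

# Export spans to a local collector, which forwards them to X-Ray OTLP.
provider = TracerProvider()
provider.add_span_processor(
    BatchSpanProcessor(OTLPSpanExporter(endpoint="http://localhost:4318/v1/traces"))
)
trace.set_tracer_provider(provider)

tracer = trace.get_tracer("checkout-service")  # illustrative name
with tracer.start_as_current_span("process-order"):
    pass  # application work happens here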

OTLP endpoint for traces is available in all regions where Application Signals is generally available. For pricing, see Amazon CloudWatch pricing. See documentation to learn more.

Read more


bottlerocket

Bottlerocket announces new AMIs that are preconfigured to use FIPS 140-3 validated cryptographic modules

Today, AWS has announced new AMIs for Bottlerocket that are preconfigured to use FIPS 140-3 validated cryptographic modules, including the Amazon Linux 2023 Kernel Crypto API and AWS-LC. Bottlerocket is a Linux-based operating system purpose-built for running containers, with a focus on security, minimal footprint, and safe updates.

With these FIPS-enabled Bottlerocket AMIs, the host software uses only FIPS-approved cryptographic algorithms for TLS connections. This includes connectivity to AWS services such as EC2 and Amazon Elastic Container Registry (ECR). Additionally, in regions where FIPS endpoints are available, the AMIs automatically use FIPS-compliant endpoints for these services by default, streamlining secure configurations for containerized workloads.
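
For example, the FIPS-enabled AMIs can be discovered the same way as other Bottlerocket AMIs, via SSM public parameters. The base parameter path is documented for Bottlerocket; the '-fips' variant suffix below is an assumption to confirm in the Bottlerocket documentation.

import boto3

ssm = boto3.client("ssm", region_name="us-west-2")

# Assumed FIPS variant name for the Kubernetes 1.31 image; verify the exact
# variant string in the Bottlerocket documentation.
param = ssm.get_parameter(
    Name="/aws/service/bottlerocket/aws-k8s-1.31-fips/x86_64/latest/image_id"
)
print(param["Parameter"]["Value"])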

The FIPS-enabled Bottlerocket AMIs are now available in all commercial and AWS GovCloud (US) Regions. To see the Regions where FIPS endpoints are supported, visit the AWS FIPS 140-3 page.

To get started with Bottlerocket, see the Bottlerocket User Guide. You can also visit the Bottlerocket product page and explore the Bottlerocket GitHub repository for more information.

Read more


business-productivity

Amazon Q Business now provides insights from your databases and data warehouses (preview)

Today, AWS announces the public preview of the integration between Amazon Q Business and Amazon QuickSight, delivering a transformative capability that unifies answers from structured data sources (databases, warehouses) and unstructured data (documents, wikis, emails) in a single application.

Amazon Q Business is a generative AI–powered assistant that can answer questions, provide summaries, generate content, and securely complete tasks based on data and information in your enterprise systems. Amazon QuickSight is a business intelligence (BI) tool that helps you visualize and understand your structured data through interactive dashboards, reports, and analytics. While organizations want to leverage generative AI for business insights, they experience fragmented access to unstructured and structured data.

With the QuickSight integration, customers can now link their structured sources to Amazon Q Business through QuickSight’s extensive set of data source connectors. Amazon Q Business responds in real time, combining the QuickSight answer from your structured sources with any other relevant information found in documents. For example, users could ask about revenue comparisons, and Amazon Q Business will return an answer from PDF financial reports along with real-time charts and metrics from QuickSight. This integration unifies insights across knowledge sources, helping organizations make more informed decisions while reducing the time and complexity traditionally required to gather insights.

This integration is available to all Amazon Q Business Pro, Amazon QuickSight Reader Pro, and Author Pro users in the US East (N. Virginia) and US West (Oregon) AWS Regions. To learn more, visit the Amazon Q Business documentation site.

Read more


Amazon Connect Contact Lens now automatically categorizes your contacts using generative AI

Amazon Connect Contact Lens now provides you with the ability to automatically categorize your contacts using generative AI, making it easy to identify top drivers, customer experience, and agent behavior for your contacts. You can provide criteria to categorize contacts in natural language (e.g., did the customer try to make a payment on their balance?). Contact Lens then automatically labels contacts that meet the match criteria, and provides relevant points from the conversation. In addition, you can receive alerts and generate tasks on categorized contacts, and search for contacts using the automated labels. This feature helps supervisors easily categorize contacts for scenarios such as identifying customer interest in specific products, assessing customer satisfaction, monitoring whether agents exhibited professional behavior on calls, and more.

This feature is supported in the English language and is available in two AWS Regions: US East (N. Virginia) and US West (Oregon). To learn more, please visit our documentation and our webpage. This feature is included within the Contact Lens conversational analytics price at no additional cost. For information about Contact Lens pricing, please visit our pricing page.

Read more


Amazon Connect launches AI guardrails for Amazon Q in Connect

Amazon Q in Connect, a generative AI-powered assistant for customer service, now enables customers to natively configure AI guardrails to implement safeguards based on their use cases and responsible AI policies. Contact center administrators can configure company-specific guardrails for Amazon Q in Connect to filter harmful and inappropriate responses, redact sensitive personal information, and limit incorrect information caused by potential large language model (LLM) hallucinations.

For end-customer self-service scenarios, guardrails can be used to ensure Amazon Q in Connect responses are constrained to only company-related topics and maintain professional communication standards. Additionally, when agents leverage Amazon Q in Connect to help solve customer issues, these guardrails can prevent accidental exposure of personally identifiable information (PII) to agents. Contact center administrators will have the flexibility to configure these guardrails and selectively apply them to different contact types.

For region availability, please see the availability of Amazon Connect features by Region. To learn more about Amazon Q in Connect, please visit the website or see the help documentation.

Read more


Amazon Connect launches new intraday forecast dashboards

Amazon Connect now allows you to compare intraday forecasts against previously published forecasts, review projected daily performance, and receive predictions for effective staffing, all available within the Amazon Connect Contact Lens dashboards. With intraday forecasts, you receive updates every 15 minutes with predictions for rest-of-day contact volumes, average queue answer time, average handle time, and, now, effective staffing. These forecasts allow you to take proactive actions to improve customer wait time and service level. For example, contact center managers can now track agent utilization at the queue level, enabling them to identify potential imbalances or staffing shortages and take action before wait times are impacted.

This feature is available in all AWS Regions where Amazon Connect forecasting, capacity planning, and agent scheduling are available. To learn more, see the Amazon Connect Administrator Guide.

Read more


Amazon Connect launches AI assistant for customer segments and trigger-based campaigns

Amazon Connect now offers new capabilities to proactively engage your customers in a personalized manner. These features help non-technical business users create customer segments using prompts and drive trigger-based campaigns to deliver timely, relevant communications to the right audiences.

Use the new segment AI assistant in Amazon Connect Customer Profiles to build audiences using natural language queries and receive recommendations based on trends in the customer data. Identify segments such as customers with an increase in support cases over the last quarter, or who have reduced purchases in the last month, using easy-to-use prompts. Use new trigger-based campaigns in Amazon Connect outbound campaigns, driven by real-time customer events, to launch outbound communications in just a few clicks. Engage customers with timely, relevant communications via their preferred channels, responding instantly to behaviors such as abandoned shopping carts or frequent visits to specific help pages.

With Amazon Connect Customer Profiles and Amazon Connect outbound campaigns, you pay as you go only for the customer profiles used daily, outbound campaign processing, and associated channel usage. Both features of Amazon Connect are available in the US East (N. Virginia), US West (Oregon), Asia Pacific (Sydney), Canada (Central), Europe (Frankfurt), and Europe (London) AWS Regions. In addition, the segment AI assistant is available in the Asia Pacific (Seoul), Asia Pacific (Tokyo), and Asia Pacific (Singapore) AWS Regions, with trigger-based campaigns also available in the Africa (Cape Town) AWS Region. To learn more, visit our webpages for Customer Profiles and for outbound campaigns.

Read more


Amazon Connect Contact Lens now supports external voice

Amazon Connect now integrates with other voice systems for real-time and post-call analytics, so you can use Amazon Connect Contact Lens with your existing voice system to help improve customer experience and agent performance.

Amazon Connect Contact Lens provides call recordings, conversational analytics (including contact transcript, sensitive data redaction, content categorization, theme detection, sentiment analysis, real-time alerts, and post-contact summary), and agent performance evaluations (including evaluation forms, automated evaluation, and supervisor review) with a rich user experience to display, search, and filter customer interactions, plus programmatic access to data streams and the data lake. If you are an existing Amazon Connect customer, you can expand use of Contact Lens to other voice systems for consistent analytics in a single data warehouse. If you want to migrate your contact center to Amazon Connect, you can start with Contact Lens analytics and performance insights before migrating your agents.

Contact Lens supports external voice in the US East (N. Virginia) and US West (Oregon) AWS Regions.

To learn more about Amazon Connect and external voice analytics, review the following resources:

Read more


Amazon Connect now supports external voice transfers

Amazon Connect now integrates with other voice systems to directly transfer voice calls and metadata without using the public telephone network. You can use Amazon Connect telephony and Interactive Voice Response (IVR) with your existing voice systems to help improve customer experience and reduce costs.

Amazon Connect IVR provides conversational voice bots in 30+ languages with natural language processing, automated speech recognition, and text-to-speech to help personalize customer service, provide self-service for complex tasks, and collect information to reduce agent handling time. Now, you can use Amazon Connect to modernize the IVR experience of your existing contact center and your enterprise and branch voice systems. Additionally, enterprises migrating their contact center to Amazon Connect can start with Connect telephony and IVR for immediate modernization ahead of agent migration.

External voice transfer is available in the US East (N. Virginia) and US West (Oregon) AWS Regions.

To learn more about Amazon Connect and call transfers, review the following resources:

Read more


Amazon Connect Contact Lens now automates agent performance evaluations using generative AI

Amazon Connect Contact Lens now provides you with the ability to use generative AI to automatically fill and submit agent performance evaluations. Managers can now specify their evaluation criteria in natural language, use generative AI to automate evaluations of any or all of their agents’ customer interactions, and get aggregated performance insights across cohorts of agents over time. You are also provided with context and justification for the automated evaluations, along with references to specific points in the conversation for agent coaching. This launch provides managers with automated evaluations of additional agent behaviors (e.g., was the agent able to resolve the customer’s issue?), enabling managers to comprehensively monitor and improve regulatory compliance, agent adherence to quality standards, and sensitive data collection practices, while reducing the time spent on evaluating agent performance.

This feature is supported in the English language and is available in 8 AWS Regions: US East (N. Virginia), US West (Oregon), Europe (Frankfurt), Europe (London), Canada (Central), Asia Pacific (Sydney), Asia Pacific (Tokyo), and Asia Pacific (Singapore). To learn more, please visit our documentation and our webpage. This feature is included within Contact Lens performance evaluations at no additional cost. For information about Contact Lens pricing, please visit our pricing page.

Read more


Amazon Connect now supports WhatsApp Business messaging

Amazon Connect now supports WhatsApp Business messaging, enabling you to deliver personalized experiences to your customers who use WhatsApp, one of the world's most popular messaging platforms, increasing customer satisfaction and reducing costs. Rich messaging features such as inline images and videos, list messages, and quick replies allow your customers to browse product recommendations, check order status, or schedule appointments.

Amazon Connect for WhatsApp Business messaging makes it easy for your customers to initiate a conversation by simply tapping on WhatsApp-enabled phone numbers or chat buttons published on your website or mobile app, or by scanning a QR code. As a result, you are able to reduce call volumes and lower operational costs by deflecting calls to chats. WhatsApp Business messaging uses the same generative AI-powered chatbots, routing, configuration, analytics, and agent experience as voice, chat, SMS, Apple Messages for Business, tasks, web calling, and email in Amazon Connect, making it easy for you to deliver seamless omnichannel customer experiences.

Amazon Connect for WhatsApp Business messaging is available in US East (N. Virginia), US West (Oregon), Europe (Frankfurt), Europe (London), and Asia Pacific (Singapore) regions.

To learn more and get started, please refer to the help documentation, pricing page, or visit the Amazon Connect website.

Read more


Amazon Connect launches generative AI-powered self-service with Amazon Q in Connect

Amazon Q in Connect, a generative AI-powered assistant for customer service, now supports end-customer self-service interactions across Interactive Voice Response (IVR) and digital channels. With this launch, businesses can augment their existing self-service experiences with generative AI capabilities to create more personalized and dynamic experiences that improve customer satisfaction and first contact resolution.

Amazon Q in Connect can converse directly with end-customers and reason over undefined intents in ambiguous scenarios to provide accurate responses. For example, Amazon Q in Connect can help end-customers by completing actions such as booking trips, applying for loans, or scheduling doctor appointments. Amazon Q in Connect also supports Q&A, helping end-customers get the information they need and asking follow-up questions to determine the right answer. If a customer requires additional support, Amazon Q in Connect provides a seamless transition to customer service agents, preserving the full conversation context to ensure a cohesive customer experience.

For region availability, please see the availability of Amazon Connect features by Region. To learn more about Amazon Q in Connect, please visit the website or see the help documentation.

Read more


AWS announces Salesforce Contact Center with Amazon Connect (Preview)

Today, AWS announces the Preview of Salesforce Contact Center with Amazon Connect, a groundbreaking offering that integrates native digital and voice capabilities into Salesforce Service Cloud, delivering a unified and streamlined experience for agents. Salesforce users can now unify and route voice, chat, email, and case management across Amazon Connect and Service Cloud capabilities, streamlining operational efficiency and enhancing customer service interactions.

With Salesforce Contact Center with Amazon Connect, companies can now seamlessly integrate their Salesforce CRM data and agent experience with Amazon Connect’s leading voice, digital channel, and routing capabilities. Salesforce users can innovate with personalized and responsive service across every touchpoint. Customers receive personalized AI-powered self-service experiences, built on Amazon Lex, across Amazon Connect voice and chat, quickly solving issues. For more complex inquiries, the seamless transition from self-service to agent assistance connects customers to the right agent, who has a unified view of the customer’s data, issue, and interaction history in Salesforce Service Cloud. Integrated data and APIs empower agents with Contact Lens real-time voice transcripts and supervisors with call monitoring in Salesforce Service Cloud. Salesforce admins can quickly deploy and configure an integrated contact center solution in minutes with Amazon Connect voice, chat, and routing of Salesforce cases.

If you’re interested in joining the preview of Salesforce Contact Center with Amazon Connect, sign up here. To learn more, visit the website.

Read more


Introducing Amazon Q Apps with private sharing

Amazon Q Apps, a capability within Amazon Q Business to create lightweight, generative AI-powered apps, now supports private sharing. This new feature enables app creators to restrict app access to select Amazon Q Business users, providing more granular control over app visibility and usage within organizations.

Previously, Amazon Q Apps could only be kept private for individual use or published to all users of the Amazon Q Business environment through the Amazon Q Apps library. Now app creators can share their apps with specific individuals, allowing for more targeted collaboration and controlled access. App users with access to shared apps can find these apps in the Amazon Q Apps library and run them. Apps shown in the library respect the access settings defined by the app creator, so they are visible only to the selected users. Private sharing enables new functional use cases. For instance, a messaging-compliant document generation app may be shared company-wide for anyone in the organization to use, while a customer outreach app could be restricted to individuals on the sales team only. Private sharing also opens up possibilities for app creators to gather early feedback from a small group of users before wider distribution of their app.

Amazon Q Apps with private sharing is now available in the same regions where Amazon Q Business is available.

To learn more about private sharing in Amazon Q Apps, visit the Q Apps documentation.

Read more


Amazon Q Apps introduces data collection (Preview)

Amazon Q Apps, the generative AI-powered app creation capability of Amazon Q Business, now offers a new data collection feature in public preview. This enhancement enables users to collate data across multiple users within their organization, further enhancing the collaborative quality of Amazon Q Apps for various business needs.

With the new ability to collect data through form cards, app creators can design apps to gather information for a diverse set of business use cases, such as conducting team surveys, compiling questions for company-wide meetings, tracking new hire onboarding progress, or running a project retrospective. These apps can further leverage generative AI to analyze the collected data, identify common themes, summarize ideas, and provide actionable insights. A shared data collection app can be instantiated into different data collections by app users, each with its own unique, shareable link. App users can participate in an ongoing data collection to submit responses, or start their own data collection without the need to duplicate the app.

Amazon Q Apps with data collection is available in the regions where Amazon Q Business is available.

To learn more about data collection in Amazon Q Apps and how it can benefit your organization, visit the Q Apps documentation.

Read more


Amazon Q Business now available as browser extension

Today, Amazon Web Services announces the general availability of Amazon Q Business browser extensions for Google Chrome, Mozilla Firefox, and Microsoft Edge. Users can now supercharge their browsers’ intelligence and receive context-aware, generative AI assistance, making it easy to get on-the-go help for their daily tasks.

The Amazon Q Business browser extension makes it easy for users to summarize web pages, ask questions about web content or uploaded files, and leverage large language model knowledge directly within their browser. With the browser extension, users can maximize reading productivity, streamline their research and analysis of complex information, and get instant help when creating content.

The Amazon Q Business browser extension is now available in the following AWS Regions: US East (N. Virginia), US East (Ohio), US West (N. California), and US West (Oregon).

Learn how to boost your productivity with AI-powered assistance within your browser by visiting the Amazon Q Business product page and the Amazon Q Business documentation site.

Read more


Application Signals provides OTEL support via X-Ray OTLP endpoint for traces

CloudWatch Application Signals, an application performance monitoring (APM) solution, enables developers and operators to easily monitor the health and performance of their applications hosted across different compute platforms such as EKS, ECS and more. Customers can now use OpenTelemetry Protocol (OTLP), an open-source protocol, to send traces to the X-Ray OTLP endpoint, and unlock application performance monitoring capabilities with Application Signals.

OpenTelemetry Protocol (OTLP) is a standardized protocol for transmitting telemetry data from your applications to monitoring solutions like CloudWatch. Developers who use OpenTelemetry to instrument their applications can now send traces to the X-Ray OTLP endpoint, unlocking, via Application Signals, pre-built, standardized dashboards for critical application metrics (throughput, latency, errors), correlated trace spans, and interactions between applications and their dependencies (such as other AWS services). This provides operators with a complete picture of the application's health, allowing them to pinpoint the source of performance issues. By creating Service Level Objectives (SLOs) within Application Signals, customers can track performance indicators of crucial application functions. This makes it simple to spot and address any operations falling short of their business goals. Finally, customers can also analyze application issues in business context, such as troubleshooting customer support tickets or finding the top customers impacted by application disruptions, by searching and analyzing transaction (or trace) spans.
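
As an illustration, the following minimal Python sketch instruments an application with the OpenTelemetry SDK and exports spans over OTLP. Because requests to the X-Ray OTLP endpoint must be SigV4-signed, the sketch assumes a local OpenTelemetry (ADOT) collector that forwards traces to the endpoint (for example, https://xray.us-east-1.amazonaws.com/v1/traces); the service name is a placeholder.

```python
# Minimal sketch: emit OTLP traces to a local collector that forwards them
# (with SigV4 signing) to the X-Ray OTLP traces endpoint.
from opentelemetry import trace
from opentelemetry.sdk.resources import Resource
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor
from opentelemetry.exporter.otlp.proto.http.trace_exporter import OTLPSpanExporter

provider = TracerProvider(
    resource=Resource.create({"service.name": "checkout-service"})  # placeholder name
)
provider.add_span_processor(
    BatchSpanProcessor(OTLPSpanExporter(endpoint="http://localhost:4318/v1/traces"))
)
trace.set_tracer_provider(provider)

tracer = trace.get_tracer(__name__)
with tracer.start_as_current_span("process-order"):
    ...  # application work happens inside the span
```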

OTLP endpoint for traces is available in all regions where Application Signals is generally available. For pricing, see Amazon CloudWatch pricing. See documentation to learn more.

Read more


SES Mail Manager adds delivery of email to Amazon Q Business applications

SES announces that Mail Manager now offers a “Deliver to Q Business” rule action, which allows customers to specify an Amazon Q Business application resource and submit email messages to it for indexing and queries. This simplifies setup and allows granular control over which messages are selected by the rule conditions, and it enables multiple parallel configurations if customers want to index different messages into entirely separate Q Business applications.

Amazon Q Business is a generative AI–powered assistant that can answer questions, provide summaries, generate content, and securely complete tasks based on data and information in your enterprise systems. Customers submitting email content will be able to identify patterns of discussion, activities around specific themes, and other content which is not an explicit cybersecurity attack but may still be of interest to managers, risk officers, or compliance teams. Mail Manager and Q Business offer an additional dimension for email risk management, with full flexibility around which messages are retained, in which locations, and for what duration.

The Mail Manager rule action to deliver to Amazon Q Business is available in all AWS commercial Regions where both Q Business and Mail Manager are already available. To learn more about Mail Manager, click here.

Read more


Amazon Connect Contact Lens generative AI-powered post contact summarization is now available in 5 new regions

Amazon Connect Contact Lens generative AI-powered post contact summarization is now available in the Europe (London), Canada (Central), Asia Pacific (Sydney), Asia Pacific (Tokyo), and Asia Pacific (Singapore) AWS Regions. The feature summarizes long customer conversations into succinct, coherent, and context-rich contact summaries (e.g., “The customer didn’t receive a reimbursement for a last minute flight cancellation and the agent didn’t offer a partial reimbursement as per the SOP”). Agents can access post-contact summaries within seconds after a call completes, helping them quickly finish their after-contact work. This also helps supervisors improve the customer experience by getting faster insights when reviewing contacts, saving time on quality and compliance reviews, and more quickly identifying opportunities to improve agent performance.

With this launch, Contact Lens generative AI-powered post contact summarization is available in 7 AWS Regions: the 5 new Regions plus the existing US East (N. Virginia) and US West (Oregon) Regions. To learn more, please visit our documentation and our webpage. This feature is included with Contact Lens conversational analytics at no additional charge. For information about Contact Lens pricing, please visit our pricing page.

Read more


Amazon Connect now supports nine additional languages for forecasting, capacity planning, and scheduling

Amazon Connect now supports nine additional languages for forecasting, capacity planning, and scheduling: Canadian French, Chinese (Simplified and Traditional), French, German, Italian, Japanese, Korean, Portuguese (Brazilian), and Spanish.

These new languages are available in all AWS Regions where Amazon Connect forecasting, capacity planning, and scheduling are available. To learn more about Amazon Connect agent scheduling, click here.

Read more


AWS re:Post Private is now integrated with Amazon Bedrock to offer contextual knowledge to organizations

Today, AWS re:Post Private announces its integration with Amazon Bedrock, ushering in a new era of contextualized knowledge management for customer organizations. This feature transforms traditional organizational knowledge practices into a dynamic system of collaborative intelligence, where human expertise and AI capabilities complement each other to build collective wisdom.

At the heart of this integration is re:Post Agent for re:Post Private, an AI-powered assistant that delivers highly contextual technical answers to customer questions, drawing from a rich repository of curated knowledge resources. re:Post Agent for re:Post Private uniquely combines customer-specific private knowledge with AWS's vast public knowledge base, ensuring responses are not only timely but also tailored to each organization's specific context and needs.

By adopting re:Post Private with this new integration, organizations can now harness the full potential of collaborative intelligence. This powerful alliance between human insight and AI efficiency opens up new avenues for problem-solving, innovation, and knowledge sharing within enterprises. Unlock the transformative possibilities of collaborative intelligence and elevate your organization's knowledge management capabilities with re:Post Private.

Read more


AWS Wickr is now available in the AWS Asia Pacific (Malaysia) Region

AWS Wickr now allows you to establish a network in the Asia Pacific (Malaysia) Region to help you meet data residency requirements and other obligations.

AWS Wickr is a security-first messaging and collaboration service with features designed to help keep your internal and external communications secure, private, and compliant. AWS Wickr protects one-to-one and group messaging, voice and video calling, file sharing, screen sharing, and location sharing with end-to-end encryption. Customers have full administrative control over data, which includes addressing information governance policies, configuring ephemeral messaging options, and deleting credentials for lost or stolen devices. You can log both internal and external conversations in an AWS Wickr network to a private data store that you manage, for data retention and auditing purposes.

AWS Wickr is available in the AWS US East (N. Virginia), AWS GovCloud (US-West), AWS Canada (Central), AWS Europe (London, Frankfurt, Stockholm, and Zurich), and AWS Asia Pacific (Singapore, Sydney, Tokyo and now Malaysia) Regions.

To learn more and get started, see the following resources:

Read more


CloudWatch RUM now supports percentile aggregations and simplified troubleshooting with web vitals metrics

CloudWatch RUM captures real-time data on web application performance and user interactions, helping you quickly detect and resolve issues impacting the user experience. It now supports percentile aggregation of web vitals metrics and simplified, event-based troubleshooting directly from a web vitals anomaly.

Google uses the 75th percentile (p75) of a web page’s Core Web Vitals (Largest Contentful Paint, First Input Delay, and Cumulative Layout Shift) to influence page ranking. With CloudWatch RUM, you can now monitor the p75 values of web page vitals and ensure that the majority of your visitors experience optimal performance, minimizing the impact of outliers. You can also click on any point in the Web Vitals graph to view correlated page events, allowing you to quickly dive into event details such as browser, device, and geolocation to identify specific conditions causing performance issues. Additionally, you can track affected users and sessions for in-depth analysis and quickly troubleshoot issues without the added steps of applying filters to retrieve correlated events in CloudWatch RUM.
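
Because RUM publishes web vitals as CloudWatch metrics, the p75 values can also be retrieved programmatically. The following boto3 sketch assumes the AWS/RUM namespace, the WebVitalsLargestContentfulPaint metric name, and an application_name dimension; verify these against the CloudWatch RUM metrics documentation, and note the app monitor name is a placeholder.

```python
# Sketch: pull hourly p75 Largest Contentful Paint for a RUM app monitor.
import datetime
import boto3

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

stats = cloudwatch.get_metric_statistics(
    Namespace="AWS/RUM",                                 # assumed RUM namespace
    MetricName="WebVitalsLargestContentfulPaint",        # assumed metric name
    Dimensions=[{"Name": "application_name", "Value": "my-web-app"}],  # placeholder
    StartTime=datetime.datetime.utcnow() - datetime.timedelta(days=1),
    EndTime=datetime.datetime.utcnow(),
    Period=3600,
    ExtendedStatistics=["p75"],                          # percentile aggregation
)
for point in sorted(stats["Datapoints"], key=lambda d: d["Timestamp"]):
    print(point["Timestamp"], point["ExtendedStatistics"]["p75"])
```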

These enhancements are available in all regions where CloudWatch RUM is available at no additional cost to users.

See the documentation to learn more about the feature, or see the user guide or the AWS One Observability Workshop to get started with real user monitoring using CloudWatch RUM.

Read more


AWS End User Messaging introduces phone number block/allow rules

Today, AWS End User Messaging expands its SMS protect capabilities with phone number rules. With phone number rules, you can explicitly block or allow messages to individual phone numbers, overriding your country rule settings.

You can use the new rules to fine-tune your messaging strategy. For instance, you can use “block” rules to stop sending messages to specific numbers where you see abuse, helping you avoid unnecessary SMS costs. The phone number rules can be configured in the AWS End User Messaging console or accessed via APIs, enabling seamless integration with customer data platforms, contact centers, or other systems and databases that you integrate with.
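
For API-based automation, a block override for a single number might look like the boto3 sketch below. The operation and parameter names are assumptions based on this announcement and the pinpoint-sms-voice-v2 API, so confirm them in the AWS End User Messaging SMS API reference before use.

```python
# Hedged sketch: block a single destination number in a protect configuration.
# The protect configuration ID and phone number are placeholders, and the
# exact parameter names are assumptions to verify in the API reference.
import boto3

sms = boto3.client("pinpoint-sms-voice-v2", region_name="us-east-1")

sms.put_protect_configuration_rule_set_number_override(
    ProtectConfigurationId="protect-1234567890abcdef",  # placeholder ID
    DestinationPhoneNumber="+14255550123",              # number to block
    Action="BLOCK",                                     # or "ALLOW"
)
```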

To learn more and start using phone number block/allow rules, visit the AWS End User Messaging SMS User Guide.

Read more


Amazon Connect offers new personalized and proactive engagement capabilities

Amazon Connect now offers a set of new capabilities to help you proactively address customer needs before they become potential issues, enabling better customer outcomes. You can initiate proactive outbound communications for real-time service updates, promotional offers, product usage tips, and appointment reminders at the right moments in your customer’s experience, on the right channel. Use Amazon Connect Customer Profiles to define target segments that are dynamically updated based on real-time customer behaviors, including orders from point-of-sale systems, location data from mobile apps, appointments from scheduling systems, or interactions from websites. Use Amazon Connect outbound campaigns to configure outbound communications in just a few clicks and engage customers with timely, personalized communications via their preferred channels, including voice calls, SMS, or email. Visualize campaign performance using dashboards from Amazon Connect Analytics, ensuring clarity and effectiveness in your proactive customer engagement strategies.

With Amazon Connect Customer Profiles and Amazon Connect outbound campaigns, you pay as you go only for the customer profiles used daily, outbound campaign processing, and associated channel usage. Both features of Amazon Connect are available in the US East (N. Virginia), US West (Oregon), Africa (Cape Town), Asia Pacific (Sydney), Canada (Central), Europe (Frankfurt), and Europe (London) AWS Regions. To learn more, visit our webpages for Customer Profiles and for outbound campaigns.

Read more


Amazon SES adds inline template support to send email APIs

Amazon Simple Email Service (SES) now allows customers to provide email templates directly within the SendBulkEmail or SendEmail API request. SES will use the provided inline template content to render and assemble the email content for delivery, reducing the need to manage template resources in your SES account.

Previously, Amazon Simple Email Service (SES) customers had to pre-create and store email templates in their SES account to use them for sending emails. This added complexity and friction to the email sending process, as customers had to manage the lifecycle of these templates. The new inline template support simplifies the integration process by allowing you to include the template content directly in your send API request, without having to create and maintain separate template resources.
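
As a sketch of the new flow, the SES v2 SendEmail call below passes the template content inline instead of referencing a stored template; addresses and template data are placeholders, and the exact field shape should be confirmed in the SES API reference.

```python
# Sketch: send a templated email with inline template content, so no
# pre-created template resource is needed in the SES account.
import boto3

ses = boto3.client("sesv2", region_name="us-east-1")

ses.send_email(
    FromEmailAddress="sender@example.com",              # placeholder sender
    Destination={"ToAddresses": ["recipient@example.com"]},  # placeholder recipient
    Content={
        "Template": {
            # Inline template content travels in the request itself.
            "TemplateContent": {
                "Subject": "Hello {{name}}",
                "Text": "Hi {{name}}, thanks for signing up!",
            },
            "TemplateData": '{"name": "Alice"}',        # values for the placeholders
        }
    },
)
```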

Support for inline templates is available in all AWS Regions where Amazon SES is offered.

To learn more, see the documentation for using templates to send personalized email with the Amazon SES API.

Read more


cloud-financial-management

AWS announces Invoice Configuration

Today, AWS announces the general availability of Invoice Configuration, which enables you to customize your invoicing experience and receive separate AWS invoices based on your organizational structure. You can group AWS accounts according to your internal business entities, such as legal entities, subsidiaries, or cost centers, and receive a separate AWS invoice for each entity within the same AWS Organization. A separate invoice per business entity lets you track invoices independently, enabling faster processing by removing the manual work of splitting a single AWS invoice across entities.

With Invoice Configuration, you can create Invoice Units, which are groups of member accounts that best represent your business entities, and then designate a member or management account as the receiver of the invoice for that entity. You can optionally associate a purchase order with each Invoice Unit and visualize charges by Invoice Unit using Cost Categories in Cost Explorer and the Cost and Usage Report.

You can use Invoice Configuration through the AWS Billing and Cost Management console, or access it through the AWS SDKs or AWS CLI to programmatically create and manage Invoice Units.
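
A hypothetical boto3 sketch of creating an Invoice Unit is shown below; the client name, operation, and parameters are assumptions drawn from this announcement rather than verified API documentation, so check the API Reference for the exact shape.

```python
# Hedged sketch: create an Invoice Unit grouping two member accounts under a
# business entity, with a designated invoice-receiving account. Account IDs
# and the entity name are placeholders; parameter names are assumptions.
import boto3

invoicing = boto3.client("invoicing", region_name="us-east-1")

response = invoicing.create_invoice_unit(
    Name="EU-Subsidiary",                     # placeholder business entity name
    InvoiceReceiver="111122223333",           # account that receives the invoice
    Rule={"LinkedAccounts": ["444455556666", "777788889999"]},  # grouped accounts
)
print(response)
```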

Invoice Configuration is available in all public AWS Regions, excluding the AWS GovCloud (US) Regions and the China (Beijing) and China (Ningxia) Regions. To learn more, visit the product page, blog post, or review the User Guide and API Reference.

Read more


Self-Service Know Your Customer (KYC) for AWS Marketplace Sellers

AWS Marketplace now offers a self-service Know Your Customer (KYC) feature for all sellers wishing to transact via the AWS Europe, Middle East, and Africa (EMEA) Marketplace Operator. The KYC verification process is required for sellers to receive disbursements via the AWS EMEA Marketplace Operator. This new self-service feature helps sellers complete the KYC process quickly and easily, and unblocks their business growth in the EMEA region.

Completing KYC and onboarding to the EMEA Marketplace Operator allows sellers to provide a more localized experience for their customers. Customers will see consistent Value Added Tax (VAT) charges across all their AWS purchases. They can also pay using their local bank accounts through the Single Euro Payments Area (SEPA) for AWS Marketplace invoices. Additionally, customers will get invoices for all their AWS services and Marketplace purchases from a single entity, AWS EMEA. This makes billing and procurement much simpler for customers in Europe, the Middle East, and Africa.

The new self-service KYC experience empowers sellers to complete verification independently, reducing onboarding time and eliminating the need to coordinate with the AWS Marketplace support team.

We invite all AWS Marketplace sellers to take advantage of this new feature to expand their reach in the EMEA region and provide an improved purchasing experience for their customers. To get started, please visit the AWS Marketplace Seller Guide.

Read more


AWS Billing and Cost Management Data Exports for FOCUS 1.0 is now generally available

Today, AWS announces the general availability (GA) of Data Exports for FOCUS 1.0, which has been in public preview since June 2024. FOCUS 1.0 is an open-source cloud cost and usage specification that provides standardization to simplify cloud financial management across multiple sources. Data Exports for FOCUS 1.0 enables customers to export their AWS cost and usage data with the FOCUS 1.0 schema to Amazon S3. The GA release of FOCUS 1.0 is a new table in Data Exports in which key specification conformance gaps present in the preview table have been closed.

With Data Exports for FOCUS 1.0 (GA), customers receive their costs in four standardized columns: ListCost, ContractedCost, BilledCost, and EffectiveCost. It provides a consistent treatment of discounts and amortization of Savings Plans and Reserved Instances. The standardized schema of FOCUS ensures data can be reliably referenced across sources.

Data Exports for FOCUS 1.0 (GA) is available in the US East (N. Virginia) Region, but includes cost and usage data covering all AWS Regions, except AWS GovCloud (US) Regions and AWS China (Beijing and Ningxia) Regions.

Learn more about Data Exports for FOCUS 1.0 in the User Guide, product details page, and at the FOCUS project webpage. Get started by visiting the Data Exports page in the AWS Billing and Cost Management console and creating an export of the new GA table named “FOCUS 1.0 with AWS columns”. After creating a FOCUS 1.0 GA export, you will no longer need your preview export. You can view the specification conformance of the GA release here.
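
Programmatically, creating the export with the Data Exports API might look like the boto3 sketch below; the FOCUS_1_0 table name and output settings are assumptions based on the announcement, and the S3 bucket and prefix are placeholders.

```python
# Hedged sketch: create a FOCUS 1.0 export delivered to S3 as Parquet.
# Table name, output settings, and cadence are assumptions to verify in the
# Data Exports API reference; bucket and prefix are placeholders.
import boto3

exports = boto3.client("bcm-data-exports", region_name="us-east-1")

exports.create_export(
    Export={
        "Name": "focus-1-0-ga",
        "DataQuery": {"QueryStatement": "SELECT * FROM FOCUS_1_0"},  # assumed table
        "DestinationConfigurations": {
            "S3Destination": {
                "S3Bucket": "my-billing-exports",   # placeholder bucket
                "S3Prefix": "focus",
                "S3Region": "us-east-1",
                "S3OutputConfigurations": {
                    "Compression": "PARQUET",
                    "Format": "PARQUET",
                    "OutputType": "CUSTOM",
                    "Overwrite": "OVERWRITE_REPORT",
                },
            }
        },
        "RefreshCadence": {"Frequency": "SYNCHRONOUS"},  # refresh with billing updates
    }
)
```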

Read more


Enhanced Pricing Calculator now supports discounts and purchase commitments (in preview)

Today, AWS announces the public preview of the enhanced AWS Pricing Calculator, which provides accurate cost estimates for new workloads or modifications to your existing AWS usage by incorporating eligible discounts. It also helps you estimate the cost impact of your commitment purchases and their effect on your organization's consolidated bill. With today’s launch, AWS Pricing Calculator allows you to apply eligible discounts to your cost estimates, enabling you to make informed financial planning decisions.

The enhanced Pricing Calculator, available within the AWS Billing and Cost Management Console, provides two types of cost estimates: cost estimation for a workload, and estimation of a full AWS bill. Using the enhanced Pricing Calculator, you can import your historical usage or create net-new usage when building a cost estimate. You can also get started by importing existing Pricing Calculator estimates and sharing an estimate with other AWS console users. With these estimates, you can confidently assess the cost impact and understand your return on investment when migrating workloads, planning new workloads, or growing existing ones, and you can plan for commitment purchases on the AWS Cloud. You can also create or access cost estimates using a new public cost estimation API.

The enhanced Pricing Calculator is available in all AWS commercial Regions, excluding China. To get started with the new Pricing Calculator, visit the AWS Billing and Cost Management Console. To learn more, visit the AWS Pricing Calculator user guide and blog.

Read more


Announcing enhanced purchase order support for AWS Marketplace

Today, AWS Marketplace is extending transaction purchase order number support to products with pay-as-you-go pricing, including Amazon Bedrock subscriptions, software as a service (SaaS) contracts with consumption pricing, and AMI annual contracts. Additionally, you can update purchase order numbers post-subscription, prior to invoice creation, to ensure your invoices reflect the proper purchase order. This launch helps you allocate costs and makes it easier to process and pay invoices.

The purchase order feature in AWS Marketplace allows the purchase order number that you provide at the time of the transaction in AWS Marketplace to appear on all invoices related to that purchase. Now, you can provide a purchase order at the time of purchase for most products available in AWS Marketplace, including products with pay-as-you-go pricing. You can add or update purchase orders post-subscription, prior to invoice generation, within the AWS Marketplace console. You can also provide more than one PO for products appearing on your monthly AWS Marketplace invoice and receive a unique invoice for each purchase order. Additionally, you can add a unique PO for each fixed charge and associated AWS Marketplace monthly usage charges at the time of purchase, or post-subscription in the AWS Marketplace console.

You can update purchase orders for existing subscriptions under manage subscriptions in the AWS Marketplace console. To enable transaction purchase orders for AWS Marketplace, sign in to the management account (for AWS Organizations) and enable the AWS Billing integration in the AWS Marketplace Console settings. To learn more, read the AWS Marketplace Buyer Guide.

Read more


AWS Compute Optimizer now supports rightsizing recommendations for Amazon Aurora

AWS Compute Optimizer now provides recommendations for Amazon Aurora DB instances. These recommendations help you identify idle database instances and choose the optimal DB instance class, so you can reduce costs for unused resources and increase the performance of under-provisioned workloads.

AWS Compute Optimizer automatically analyzes Amazon CloudWatch metrics such as CPU utilization, network throughput, and database connections to generate recommendations for your DB instances running Amazon Aurora MySQL-Compatible Edition and Aurora PostgreSQL-Compatible Edition engines. If you enable Amazon RDS Performance Insights on your DB instances, Compute Optimizer will analyze additional metrics such as DBLoad and out-of-memory counters to give you more insights to choose the optimal DB instance configuration. With this launch, AWS Compute Optimizer now supports recommendations for Amazon RDS for MySQL, Amazon RDS for PostgreSQL, and Amazon Aurora database engines.
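
A hedged boto3 sketch of retrieving the new recommendations follows; the operation name reflects the Compute Optimizer RDS database recommendations API as announced, and the response field name is an assumption to verify in the API reference.

```python
# Hedged sketch: list Aurora/RDS rightsizing recommendations.
import boto3

co = boto3.client("compute-optimizer", region_name="us-east-1")

resp = co.get_rds_database_recommendations()
for rec in resp.get("rdsDBRecommendations", []):  # assumed response field name
    print(rec)
```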

This new feature is available in all AWS Regions where AWS Compute Optimizer is available except the AWS GovCloud (US) and the China Regions. To learn more about the new feature updates, please visit Compute Optimizer’s product page and user guide.

Read more


AWS Compute Optimizer now supports idle resource recommendation

Today, AWS announces that AWS Compute Optimizer now supports recommendations to help you identify idle AWS resources. With this new recommendation type, you can identify resources that are unused and may be candidates for stopping or deleting, resulting in cost savings.

With the new idle resource recommendation, you can identify idle EC2 instances, EC2 Auto Scaling groups, EBS volumes, ECS services running on Fargate, and RDS instances, and view the total savings potential of stopping or deleting these idle resources. Compute Optimizer analyzes 14 consecutive days of utilization history to validate that resources are idle, providing trustworthy savings opportunities. You can also view idle resource recommendations across all AWS accounts in your organization through the Cost Optimization Hub, with estimated savings de-duplicated against other recommendations on the same resources.
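
For example, a boto3 sketch of listing idle recommendations might look like the following; the operation name follows this announcement, and the response shape is an assumption to confirm in the Compute Optimizer API reference.

```python
# Hedged sketch: list idle resource recommendations across resource types.
import boto3

co = boto3.client("compute-optimizer", region_name="us-east-1")

resp = co.get_idle_recommendations()
for rec in resp.get("idleRecommendations", []):  # assumed response field name
    print(rec)
```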

For more information about the AWS Regions where Compute Optimizer is available, see AWS Region table.

For more information about Compute Optimizer, visit our product page and documentation. You can start using AWS Compute Optimizer through the AWS Management Console, AWS CLI, and AWS SDK.

Read more


Announcing financing program for AWS Marketplace purchases for select US customers

Today, AWS announces the availability of a new financing program supported by PNC Vendor Finance, enabling select customers in the United States (US) to finance AWS Marketplace software purchases directly from the AWS Billing and Cost Management console. For the first time, select US customers can apply for, utilize, and manage financing within the console for AWS Marketplace software purchases.

AWS Marketplace helps customers find, try, buy, and launch third-party software, while consolidating billing and management with AWS. With thousands of software products available in AWS Marketplace, this financing program enables you to buy the software you need to drive innovation. With financing amounts ranging from $10,000 to $100,000,000, subject to credit approval, you have more options to pay for your AWS Marketplace purchases. If approved, you can utilize financing for AWS Marketplace software purchases that have at least 12-month contracts. Financing can be applied to multiple purchases from multiple AWS Marketplace sellers. This financing program gives you the flexibility to better manage your cash flow by spreading payments over time, while paying financing costs only on what you use.

This new financing program supported by PNC Vendor Finance is available in the AWS Billing and Cost Management console for select AWS Marketplace customers in the US, excluding NV, NC, ND, TN, & VT.

To learn more about financing options for AWS Marketplace purchases and details about the financing program supported by PNC Vendor Finance, visit the AWS Marketplace financing page.

Read more


compute

Amazon EC2 Hpc6id instances are now available in Europe (Paris) region

Starting today, Amazon EC2 Hpc6id instances are available in the Europe (Paris) Region. These instances are optimized to efficiently run memory bandwidth-bound, data-intensive high performance computing (HPC) workloads, such as finite element analysis and seismic reservoir simulations. With EC2 Hpc6id instances, you can lower the cost of your HPC workloads while taking advantage of the elasticity and scalability of AWS.

EC2 Hpc6id instances are powered by 64 cores of 3rd Generation Intel Xeon Scalable processors with an all-core turbo frequency of 3.5 GHz, 1,024 GB of memory, and up to 15.2 TB of local NVMe solid state drive (SSD) storage. EC2 Hpc6id instances, built on the AWS Nitro System, offer 200 Gbps Elastic Fabric Adapter (EFA) networking for high-throughput inter-node communications that enable your HPC workloads to run at scale. The AWS Nitro System is a rich collection of building blocks that offloads many of the traditional virtualization functions to dedicated hardware and software. It delivers high performance, high availability, and high security while reducing virtualization overhead.

To learn more about EC2 Hpc6id instances, see the product detail page.

Read more


Amazon EC2 Hpc7a instances are now available in Europe (Paris) region

Starting today, Amazon EC2 Hpc7a instances are available in the Europe (Paris) Region. EC2 Hpc7a instances are powered by 4th generation AMD EPYC processors with up to 192 cores, and 300 Gbps of Elastic Fabric Adapter (EFA) network bandwidth for fast and low-latency internode communications. Hpc7a instances feature Double Data Rate 5 (DDR5) memory, which enables high-speed access to data in memory.

Hpc7a instances are ideal for compute-intensive, tightly coupled, latency-sensitive high performance computing (HPC) workloads, such as computational fluid dynamics (CFD), weather forecasting, and multiphysics simulations, helping you scale more efficiently on fewer nodes. To optimize instance networking for tightly coupled workloads, you can access these instances in a single Availability Zone within a Region.

To learn more, see Amazon Hpc7a instances.

Read more


Amazon EC2 Trn2 instances are generally available

Today, AWS announces the general availability of Amazon Elastic Compute Cloud (Amazon EC2) Trn2 instances and preview of Trn2 UltraServers, powered by AWS Trainium2 chips. Available via EC2 Capacity Blocks, Trn2 instances and UltraServers are the most powerful EC2 compute solutions for deep learning and generative AI training and inference.

You can use Trn2 instances to train and deploy the most demanding foundation models, including large language models (LLMs), multi-modal models, diffusion transformers, and more, to build a broad set of AI applications. To reduce training times and deliver breakthrough response times (per-token latency) for the most capable, state-of-the-art models, you might need more compute and memory than a single instance can deliver. Trn2 UltraServers are a completely new EC2 offering that uses NeuronLink, a high-bandwidth, low-latency fabric, to connect 64 Trainium2 chips across 4 Trn2 instances into one node, unlocking unparalleled performance. For inference, UltraServers help deliver industry-leading response times to create the best real-time experiences. For training, UltraServers boost model training speed and efficiency with faster collective communication for model parallelism as compared to standalone instances.

Trn2 instances feature 16 Trainium2 chips to deliver up to 20.8 petaflops of FP8 compute, 1.5 TB of high bandwidth memory with 46 TB/s of memory bandwidth, and 3.2 Tbps of EFA networking. Trn2 UltraServers feature 64 Trainium2 chips to deliver up to 83.2 petaflops of FP8 compute, 6 TB of total high bandwidth memory with 185 TB/s of total memory bandwidth, and 12.8 Tbps of EFA networking. Both are deployed in EC2 UltraClusters to provide non-blocking, petabit-scale scale-out capabilities for distributed training. Trn2 instances are generally available in the trn2.48xlarge size in the US East (Ohio) AWS Region through EC2 Capacity Blocks for ML.
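
Since Trn2 capacity is obtained through EC2 Capacity Blocks for ML, finding an available block could look like the boto3 sketch below; parameter values are illustrative, and the exact fields should be confirmed in the EC2 API reference.

```python
# Sketch: search for a 24-hour Capacity Block offering for one trn2.48xlarge
# in US East (Ohio). Values are illustrative placeholders.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-2")  # US East (Ohio)

offerings = ec2.describe_capacity_block_offerings(
    InstanceType="trn2.48xlarge",
    InstanceCount=1,
    CapacityDurationHours=24,
)
for offer in offerings["CapacityBlockOfferings"]:
    print(offer["CapacityBlockOfferingId"], offer["StartDate"], offer["UpfrontFee"])
```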

To learn more about Trn2 instances and request access to Trn2 UltraServers, please visit the Trn2 instances page.

Read more


Amazon EC2 P5en instances, optimized for generative AI and HPC, are generally available

Today, AWS announces the general availability of Amazon Elastic Compute Cloud (Amazon EC2) P5en instances, powered by the latest NVIDIA H200 Tensor Core GPUs. These instances deliver the highest performance in Amazon EC2 for deep learning and high performance computing (HPC) applications.

You can use Amazon EC2 P5en instances for training and deploying increasingly complex large language models (LLMs) and diffusion models powering the most demanding generative AI applications. You can also use P5en instances to deploy demanding HPC applications at scale in pharmaceutical discovery, seismic analysis, weather forecasting, and financial modeling.

P5en instances feature up to 8 H200 GPUs, which have 1.7x the GPU memory size and 1.5x the GPU memory bandwidth of the H100 GPUs featured in P5 instances. P5en instances pair the H200 GPUs with high performance custom 4th Generation Intel Xeon Scalable processors, enabling Gen5 PCIe between CPU and GPU, which provides up to 4x the CPU-to-GPU bandwidth and boosts AI training and inference performance. P5en instances, with up to 3,200 Gbps of third-generation EFA networking using Nitro v5, show up to 35% improvement in latency compared to P5 instances, which use the previous generation of EFA and Nitro. This helps improve collective communications performance for distributed training workloads such as deep learning, generative AI, real-time data processing, and high-performance computing (HPC) applications. To address customer needs for large scale at low latency, P5en instances are deployed in Amazon EC2 UltraClusters and provide market-leading scale-out capabilities for distributed training and tightly coupled HPC workloads.

P5en instances are now available in the US East (Ohio), US West (Oregon), and Asia Pacific (Tokyo) AWS Regions and US East (Atlanta) Local Zone us-east-1-atl-2a in the p5en.48xlarge size.

To learn more about P5en instances, see Amazon EC2 P5en Instances.

Read more


Announcing Amazon EC2 I8g instances

AWS is announcing the general availability of Amazon Elastic Compute Cloud (Amazon EC2) storage optimized I8g instances. I8g instances offer the best performance in Amazon EC2 for storage-intensive workloads. I8g instances are powered by AWS Graviton4 processors that deliver up to 60% better compute performance compared to previous generation I4g instances. I8g instances use the latest third-generation AWS Nitro SSDs, local NVMe storage that delivers up to 65% better real-time storage performance per TB, up to 50% lower storage I/O latency, and up to 60% lower storage I/O latency variability. These instances are built on the AWS Nitro System, which offloads CPU virtualization, storage, and networking functions to dedicated hardware and software, enhancing the performance and security of your workloads.

I8g instances offer instance sizes up to 24xlarge, 768 GiB of memory, and 22.5 TB of instance storage. They are ideal for real-time applications like relational databases, non-relational databases, streaming databases, search, and data analytics.

I8g instances are available in the following AWS Regions: US East (N. Virginia) and US West (Oregon).

To learn more, see Amazon EC2 I8g instances. To explore how to migrate your workloads to Graviton-based instances, see AWS Graviton Fast Start program and Porting Advisor for Graviton. To get started, see the AWS Management Console, AWS Command Line Interface (AWS CLI), and AWS SDKs.

Read more


Amazon Web Services announces declarative policies

Today, AWS announces the general availability of declarative policies, a new management policy type within AWS Organizations. These policies simplify the way customers enforce durable intent, such as baseline configuration for AWS services within their organization. For example, customers can configure EC2 to allow instance launches using AMIs vended by specific providers and block public access in their VPC with a few simple clicks or commands for their entire organization using declarative policies.

Declarative policies are designed to prevent actions that are non-compliant with the policy. The configuration defined in the declarative policy is maintained even when services add new APIs or features, or when customers add new principals or accounts to their organization. With declarative policies, governance teams have access to the account status report, which provides insight into the current configuration for an AWS service across their organization. This helps them assess readiness to enforce configuration at scale. Administrators can provide additional transparency to end users by configuring custom error messages that redirect them to internal wikis or ticketing systems through declarative policies.

To get started, navigate to the AWS Organizations console to create and attach declarative policies. You can also use AWS Control Tower, the AWS CLI, or CloudFormation templates to configure these policies. Declarative policies today support EC2, EBS, and VPC configurations, with support for other services coming soon. To learn more, see the documentation and blog post.
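
As an illustration, the boto3 sketch below creates and attaches an EC2 declarative policy with AWS Organizations; the policy document syntax (blocking new public AMI sharing) is an assumption modeled on the documentation's examples, and the root ID is a placeholder.

```python
# Sketch: create an EC2 declarative policy and attach it at the org root.
# The policy content syntax is an assumption; check the declarative policy
# documentation for the exact attribute names.
import json
import boto3

org = boto3.client("organizations")

policy_content = {
    "ec2_attributes": {
        "image_block_public_access": {
            "state": {"@@assign": "block_new_sharing"}  # assumed attribute syntax
        }
    }
}

policy = org.create_policy(
    Name="block-public-ami-sharing",
    Description="Baseline EC2 configuration for the whole organization",
    Type="DECLARATIVE_POLICY_EC2",
    Content=json.dumps(policy_content),
)
org.attach_policy(
    PolicyId=policy["Policy"]["PolicySummary"]["Id"],
    TargetId="r-examplerootid",  # placeholder organization root ID
)
```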

Read more


Announcing Amazon Elastic VMware Service (Preview)

Today, AWS announces the preview of Amazon Elastic VMware Service (Amazon EVS). Amazon EVS is a new, native AWS service to run VMware Cloud Foundation (VCF) within your Amazon Virtual Private Cloud (Amazon VPC). 

Amazon EVS automates and simplifies deployments and provides a ready-to-use VMware Cloud Foundation (VCF) environment on AWS. This allows you to quickly migrate VMware-based virtual machines to AWS using the same VCF software and tools you already use in your on-premises environment. 

With Amazon EVS, you can now take advantage of the scale, resilience, and performance of AWS together with familiar VCF software and tools. You have the choice to self-manage or leverage AWS Partners to manage and operate your EVS deployments. With this, you keep complete control over your VMware architecture and can optimize your deployments to meet the unique demands of your applications. Amazon EVS provides the fastest path to migrate and operate VMware workloads on AWS.

Amazon EVS is currently available in preview for pre-selected customers and partners. To learn more about Amazon EVS and how it can help accelerate your VMware workload migration to AWS, visit the Amazon EVS product page or contact us.

Read more


Amazon CloudWatch Container Insights launches enhanced observability for Amazon ECS

Amazon CloudWatch Container Insights introduces enhanced observability for Amazon Elastic Container Service (ECS) running on Amazon EC2 and AWS Fargate, with out-of-the-box detailed metrics from the cluster level down to the container level, delivering faster problem isolation and troubleshooting.

Enhanced observability enables customers to visually drill up and down across container layers and directly spot issues like memory leaks in individual containers, reducing mean time to resolution. With enhanced observability, customers can view their clusters, services, tasks, or containers sorted by resource consumption, quickly identify anomalies, and mitigate risks proactively before the end-user experience is impacted. Using Container Insights’ new landing page, customers can easily understand the overall health and performance of clusters across multiple accounts, identify the ones operating under high utilization, and pinpoint the root cause by browsing directly to the related detailed dashboard views, saving time and effort.

You can get started with enhanced observability at the cluster level or account level by selecting the “Enhanced” radio button in the Amazon ECS console, or through the AWS CLI, CloudFormation, and the CDK. You can also collect instance-level metrics from EC2 by launching the CloudWatch agent as a daemon service on your Container Insights-enabled clusters.
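
Programmatically, enabling enhanced observability on an existing cluster is a one-call change, assuming “enhanced” is the new containerInsights setting value introduced with this launch:

```python
# Sketch: enable enhanced Container Insights on an existing ECS cluster.
# The cluster name is a placeholder.
import boto3

ecs = boto3.client("ecs", region_name="us-east-1")

ecs.update_cluster_settings(
    cluster="my-cluster",  # placeholder cluster name
    settings=[{"name": "containerInsights", "value": "enhanced"}],
)
```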

Container Insights is available in all public AWS Regions, including the AWS GovCloud (US) Regions, China (Beijing, operated by Sinnet), and China (Ningxia, operated by NWCD). Container Insights with enhanced observability for ECS comes with flat metric pricing; see the pricing page for details. For further information, visit the Container Insights documentation.

Read more


Introducing Amazon EC2 next generation high density Storage Optimized I7ie instances

Amazon Web Services is announcing general availability of next generation high density Storage Optimized I7ie instances. Designed for large, storage-I/O-intensive workloads, I7ie instances are powered by 5th generation Intel Xeon Scalable processors with an all-core turbo frequency of 3.2 GHz, offering up to 40% better compute performance and 20% better price performance over existing I3en instances. I7ie instances have the highest local NVMe storage density in the cloud for storage optimized instances and offer up to twice as many vCPUs and as much memory as prior generation instances. Powered by 3rd generation AWS Nitro SSDs, I7ie instances deliver up to 65% better real-time storage performance, up to 50% lower storage I/O latency, and 65% lower storage I/O latency variability compared to I3en instances.

I7ie instances are high density storage optimized instances, ideal for workloads that require fast local storage with high random read/write performance and consistently low latency when accessing large data sets. I7ie instances also deliver 40% better compute performance, letting you run more complex queries without increasing the storage density per vCPU. Additionally, the 16 KB torn write prevention feature enables customers to eliminate performance bottlenecks.

I7ie instances deliver up to 100 Gbps of network bandwidth and 60 Gbps of bandwidth for Amazon Elastic Block Store (EBS).

I7ie instances are available in the US East (N. Virginia) AWS Region today. Customers can use these instances with On-Demand and Savings Plan purchase options. To learn more, visit the I7ie instances page.

Read more


AWS Marketplace now offers EC2 Image Builder components from independent software vendors

AWS Marketplace now offers EC2 Image Builder components from independent software vendors (ISVs), helping you streamline your Amazon Machine Image (AMI) build processes. You can find and subscribe to Image Builder components from ISVs in AWS Marketplace or in the Image Builder console, and incorporate the components into your golden images through Image Builder. AWS Marketplace offers a catalog of Image Builder components from ISVs to help address the monitoring, security, governance, and compliance needs of your organization.

Previously, consolidating software from ISVs into golden images required you to go through a time-consuming procurement process and write custom code, resulting in unnecessary overhead. With the addition of Image Builder components in AWS Marketplace, you can now find, subscribe to, and incorporate software components from ISVs into your golden images on AWS. You can also configure your Image Builder pipelines to automatically update golden images as the latest version of components get released in AWS Marketplace, helping to keep your systems current and eliminating the need for custom code. You can continue sharing golden images within your organization by distributing the entitlements for subscribed components across AWS accounts. Your organization can then use the same golden images, maintaining your security and governance standards.

To learn more, access documentation for AWS Marketplace or EC2 Image Builder. Visit AWS Marketplace to view all supported EC2 Image Builder components, including software from popular providers such as Datadog, Dynatrace, Insight Technology, Inc., Fortinet, OpenVPN Inc, SIOS Technology Corp., Cisco, KeyFactor, Datamasque, Grafana, Kong, Wiz and more.

Read more


Announcing Amazon EKS Hybrid Nodes

Today, AWS announces the general availability of Amazon Elastic Kubernetes Service (Amazon EKS) Hybrid Nodes. With Amazon EKS Hybrid Nodes, you can use your on-premises and edge infrastructure as nodes in Amazon EKS clusters. Amazon EKS Hybrid Nodes unifies Kubernetes management across environments and offloads Kubernetes control plane management to AWS for your on-premises and edge applications.

You can now manage Kubernetes applications running on-premises and in edge environments to meet low-latency, local data processing, regulatory, or policy requirements using the same Amazon EKS clusters, features, and tools as applications running in AWS Cloud. Amazon EKS Hybrid Nodes works with any on-premises hardware or virtual machines, bringing the efficiency, scalability, and availability of Amazon EKS to wherever your applications need to run. You can use a wide range of Amazon EKS features with Amazon EKS Hybrid Nodes including Amazon EKS add-ons, EKS Pod Identity, cluster access management, cluster insights, and extended Kubernetes version support. Amazon EKS Hybrid Nodes is natively integrated with various AWS services including AWS Systems Manager, AWS IAM Roles Anywhere, Amazon Managed Service for Prometheus, Amazon CloudWatch, and Amazon GuardDuty for centralized monitoring, logging, and identity management.

Amazon EKS Hybrid Nodes is available in all AWS Regions, except the AWS GovCloud (US) Regions and the China Regions. Amazon EKS Hybrid Nodes is currently available for new Amazon EKS clusters. With Amazon EKS Hybrid Nodes, there are no upfront commitments or minimum fees, and you are charged per hour for the vCPU resources of your hybrid nodes when they are attached to your Amazon EKS clusters.

To get started and learn more about Amazon EKS Hybrid Nodes, see the Amazon EKS Hybrid Nodes User Guide, product webpage, pricing webpage, and AWS News Launch blog.

Read more


AWS simplifies the use of third-party block storage arrays with AWS Outposts

Starting today, customers can attach block data volumes backed by NetApp® on-premises enterprise storage arrays and Pure Storage® FlashArray™ to Amazon Elastic Compute Cloud (Amazon EC2) instances on AWS Outposts directly from the AWS Management Console. This makes it easier for customers to leverage third-party storage with Outposts. Outposts is a fully managed service that extends AWS infrastructure, AWS services, APIs, and tools to virtually any on-premises or edge location for a truly consistent hybrid experience.

With this enhancement, Outposts customers can combine the cloud capabilities offered by Outposts with the advanced data management features, high storage density, and high performance offered by NetApp on-premises enterprise storage arrays and Pure Storage FlashArray. Today, customers can use Amazon Elastic Block Store (Amazon EBS) and local instance store volumes to store and process data locally and comply with data residency requirements; they can now do so while also using external volumes backed by compatible third-party storage. This lets customers maximize value from their existing storage investments while benefiting from the cloud operational model enabled by Outposts.

This enhancement is available on Outposts racks and Outposts 2U servers at no additional charge in all AWS Regions where Outposts is available, except the AWS GovCloud Regions. See the FAQs for Outposts servers and Outposts racks for the latest availability information.

You can use the AWS Management Console or CLI to attach the third-party block data volumes to Amazon EC2 instances on Outposts. To learn more, check out this blog post.

Read more


Amazon EC2 introduces Allowed AMIs to enhance AMI governance

Amazon EC2 introduces Allowed AMIs, a new account-wide setting that enables you to limit the discovery and use of Amazon Machine Images (AMIs) within your AWS accounts. You can now simply specify the AMI owner accounts or AMI owner aliases permitted within your account, and only AMIs from these owners will be visible and available for launching EC2 instances.

Prior to today, you could use any AMI explicitly shared with your account or any public AMI, regardless of its origin or trustworthiness, putting you at risk of accidentally using an AMI that didn’t meet your organization's compliance requirements. Now with Allowed AMIs, your administrators can specify the accounts or owner aliases whose AMIs are permitted for discovery and use within your AWS environment. This streamlined approach provides guardrails to reduce the risk of inadvertently using non-compliant or unauthorized AMIs. Allowed AMIs also supports audit-mode functionality to identify EC2 instances launched using AMIs not permitted by this setting, helping you identify non-compliant instances before the setting is enforced. You can apply this setting across AWS Organizations and Organizational Units using Declarative Policies, allowing you to manage and enforce it at scale.

The Allowed AMIs setting applies only to public AMIs and AMIs explicitly shared with your AWS accounts. By default, the setting is disabled for all AWS accounts. You can enable it by using the AWS CLI, SDKs, or Console. To learn more, please visit our documentation.
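
For illustration, here is a minimal boto3 sketch of auditing and then enforcing the setting, assuming the Allowed AMIs API names (EnableAllowedImagesSettings and ReplaceImageCriteriaInAllowedImagesSettings); the provider criteria are examples only.

    import boto3

    ec2 = boto3.client("ec2")

    # Start in audit mode to flag instances using AMIs that would be blocked.
    ec2.enable_allowed_images_settings(AllowedImagesSettingsState="audit-mode")

    # Example criteria: permit only Amazon-owned AMIs. Account IDs or aliases
    # such as "aws-marketplace" could be listed here instead.
    ec2.replace_image_criteria_in_allowed_images_settings(
        ImageCriteria=[{"ImageProviders": ["amazon"]}]
    )

    # Enforce the setting once the audit results look clean.
    ec2.enable_allowed_images_settings(AllowedImagesSettingsState="enabled")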

Read more


Announcing Amazon EKS Auto Mode

Today at re:Invent, AWS announced Amazon Elastic Kubernetes Service (Amazon EKS) Auto Mode, a new feature that fully automates compute, storage, and networking management for Kubernetes clusters. Amazon EKS Auto Mode simplifies running Kubernetes by offloading cluster operations to AWS, improves the performance and security of your applications, and helps optimize compute costs. 

You can use EKS Auto Mode to get Kubernetes conformant managed compute, networking, and storage for any new or existing EKS cluster. This makes it easier for you to leverage the security, scalability, availability, and efficiency of AWS for your Kubernetes applications. EKS Auto Mode removes the need for deep expertise, ongoing infrastructure management, or capacity planning by automatically selecting the best EC2 instances to run your application. It helps optimize compute costs while maintaining application availability by dynamically scaling EC2 instances based on demand. EKS Auto Mode provisions, operates, secures, and upgrades EC2 instances within your account using AWS-controlled access and lifecycle management. It handles OS patches and updates and limits security risks with ephemeral compute, which strengthens your security posture by default.

EKS Auto Mode is available today in all AWS Regions, except AWS GovCloud (US) and China Regions. You can enable EKS Auto Mode in any EKS cluster running Kubernetes 1.29 and above with no upfront fees or commitments—you pay for the management of the compute resources provisioned, in addition to your regular EC2 costs. 

To get started with EKS Auto Mode, use the EKS API, AWS Console, or your favorite infrastructure as code tooling to enable it in a new or existing EKS cluster. To learn more about EKS Auto Mode and how it can streamline your Kubernetes operations, visit the EKS Auto Mode feature page and see the AWS News launch blog.
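
As a sketch of enabling Auto Mode on a new cluster via boto3 (the computeConfig, storageConfig, and kubernetesNetworkConfig fields follow the EKS CreateCluster API for Auto Mode; the cluster name, ARNs, and subnet IDs are placeholders):

    import boto3

    eks = boto3.client("eks")

    eks.create_cluster(
        name="auto-mode-cluster",  # placeholder name
        roleArn="arn:aws:iam::111122223333:role/eks-cluster-role",  # placeholder
        resourcesVpcConfig={"subnetIds": ["subnet-aaa111", "subnet-bbb222"]},  # placeholders
        # Auto Mode: AWS-managed compute, block storage, and load balancing.
        computeConfig={
            "enabled": True,
            "nodePools": ["general-purpose", "system"],
            "nodeRoleArn": "arn:aws:iam::111122223333:role/eks-node-role",  # placeholder
        },
        storageConfig={"blockStorage": {"enabled": True}},
        kubernetesNetworkConfig={"elasticLoadBalancing": {"enabled": True}},
    )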

Read more


Amazon Neptune Analytics now supports AWS PrivateLink

Today, we’re introducing a new feature for Neptune Analytics that allows customers to easily provision Amazon VPC interface endpoints (interface endpoints) in their virtual private cloud (Amazon VPC). These endpoints provide direct access from on-premises applications over VPN or AWS Direct Connect, and across AWS Regions via VPC peering. With this feature, network engineers can create and manage VPC resources centrally. By leveraging AWS PrivateLink and interface endpoints, development teams can now establish private, secure network connectivity from their applications to Neptune Analytics with simplified configuration.

Previously, development teams had to manually configure complex network settings, leading to operational overhead and potential misconfigurations that could affect security and connectivity. With AWS PrivateLink support for Neptune Analytics, customers can now streamline private connectivity between VPCs, Neptune Analytics, and on-premises data centers using interface endpoints and private IP addresses. Central teams can create and manage the PrivateLink endpoints, while development teams use those endpoints for their graphs without needing to manage them directly. This allows developers to concentrate on their graph workloads, reducing time-to-value and simplifying overall management.

Please see AWS PrivateLink pricing for cost details. You can get started with the feature by using the AWS API, AWS CLI, or AWS SDKs.
 

Read more


Introducing Advanced Scaling in Amazon EMR Managed Scaling

We are excited to announce Advanced Scaling, a new capability in Amazon EMR Managed Scaling that gives customers increased flexibility to control the performance and resource utilization of their Amazon EMR on EC2 clusters. With Advanced Scaling, customers can configure the desired resource utilization or performance level for their cluster, and Amazon EMR Managed Scaling uses that intent to intelligently scale the cluster and optimize its compute resources.

Customers appreciate the simplicity of Amazon EMR Managed Scaling. However, there are cases where the default Amazon EMR Managed Scaling algorithm can lead to cluster under-utilization for specific workloads. For instance, for clusters running many tasks of relatively short duration (task runtimes of 10 seconds or less), Amazon EMR Managed Scaling by default scales the cluster up aggressively and scales it down conservatively to avoid negatively impacting job run times. While this is the right approach for SLA-sensitive workloads, it might not be optimal for cost-sensitive workloads. With Advanced Scaling, customers can now configure the Amazon EMR Managed Scaling behavior that suits their workload type, and we will apply tailored optimizations to intelligently add or remove nodes from the cluster.

To get started with Advanced Scaling, you can set the ScalingStrategy and UtilizationPerformanceIndex parameters either when creating a new Managed Scaling policy, or updating an existing Managed Scaling policy. Advanced Scaling is available with Amazon EMR release 7.0 and later and is available in all regions where Amazon EMR Managed Scaling is available. For more details, please refer to our Advanced Scaling documentation.
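
For example, here is a hedged boto3 sketch of attaching a Managed Scaling policy that opts in to Advanced Scaling; the ScalingStrategy and UtilizationPerformanceIndex parameter names come from this announcement, while the cluster ID, capacity limits, and index value are illustrative.

    import boto3

    emr = boto3.client("emr")

    emr.put_managed_scaling_policy(
        ClusterId="j-XXXXXXXXXXXXX",  # placeholder cluster ID
        ManagedScalingPolicy={
            "ComputeLimits": {
                "UnitType": "Instances",
                "MinimumCapacityUnits": 2,
                "MaximumCapacityUnits": 50,
            },
            # Advanced Scaling: express the desired utilization/performance
            # trade-off (illustrative value).
            "ScalingStrategy": "ADVANCED",
            "UtilizationPerformanceIndex": 50,
        },
    )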

Read more


Amazon EC2 R7g instances are now available in AWS Middle East (Bahrain) region

Starting today, Amazon Elastic Compute Cloud (Amazon EC2) R7g instances are available in the Middle East (Bahrain) region. These instances are powered by AWS Graviton3 processors that provide up to 25% better compute performance compared to AWS Graviton2 processors, and are built on the AWS Nitro System, a collection of AWS-designed innovations that deliver efficient, flexible, and secure cloud services with isolated multi-tenancy, private networking, and fast local storage.

Amazon EC2 Graviton3 instances also use up to 60% less energy than comparable EC2 instances for the same performance, reducing your cloud carbon footprint. For increased scalability, these instances are available in 9 different instance sizes, including bare metal, and offer up to 30 Gbps of networking bandwidth and up to 20 Gbps of bandwidth to Amazon Elastic Block Store (EBS).

To learn more, see Amazon EC2 R7g. To explore how to migrate your workloads to Graviton-based instances, see AWS Graviton Fast Start program and Porting Advisor for Graviton. To get started, see the AWS Management Console.

Read more


Amazon EC2 C7g instances are now available in additional regions

Starting today, Amazon Elastic Compute Cloud (Amazon EC2) C7g instances are available in the Europe (Paris) and Asia Pacific (Osaka) Regions. These instances are powered by AWS Graviton3 processors that provide up to 25% better compute performance compared to AWS Graviton2 processors, and are built on the AWS Nitro System, a collection of AWS-designed innovations that deliver efficient, flexible, and secure cloud services with isolated multi-tenancy, private networking, and fast local storage.

Amazon EC2 Graviton3 instances also use up to 60% less energy than comparable EC2 instances for the same performance, reducing your cloud carbon footprint. For increased scalability, these instances are available in 9 different instance sizes, including bare metal, and offer up to 30 Gbps of networking bandwidth and up to 20 Gbps of bandwidth to Amazon Elastic Block Store (EBS).

To learn more, see Amazon EC2 C7g. To explore how to migrate your workloads to Graviton-based instances, see AWS Graviton Fast Start program and Porting Advisor for Graviton. To get started, see the AWS Management Console.
 

Read more


Amazon EC2 R8g instances now available in AWS Asia Pacific (Mumbai)

Starting today, Amazon Elastic Compute Cloud (Amazon EC2) R8g instances are available in AWS Asia Pacific (Mumbai) region. These instances are powered by AWS Graviton4 processors and deliver up to 30% better performance compared to AWS Graviton3-based instances. Amazon EC2 R8g instances are ideal for memory-intensive workloads such as databases, in-memory caches, and real-time big data analytics. These instances are built on the AWS Nitro System, which offloads CPU virtualization, storage, and networking functions to dedicated hardware and software to enhance the performance and security of your workloads.

AWS Graviton4-based Amazon EC2 instances deliver the best performance and energy efficiency for a broad range of workloads running on Amazon EC2. AWS Graviton4-based R8g instances offer larger instance sizes with up to 3x more vCPU (up to 48xlarge) and memory (up to 1.5TB) than Graviton3-based R7g instances. These instances are up to 30% faster for web applications, 40% faster for databases, and 45% faster for large Java applications compared to AWS Graviton3-based R7g instances. R8g instances are available in 12 different instance sizes, including two bare metal sizes. They offer up to 50 Gbps enhanced networking bandwidth and up to 40 Gbps of bandwidth to the Amazon Elastic Block Store (Amazon EBS).

To learn more, see Amazon EC2 R8g Instances. To explore how to migrate your workloads to Graviton-based instances, see AWS Graviton Fast Start program and Porting Advisor for Graviton. To get started, see the AWS Management Console.

Read more


Amazon EC2 Capacity Blocks now supports instant start times and extensions

Today, Amazon Web Services announces three new features for Amazon Elastic Compute Cloud (Amazon EC2) Capacity Blocks for ML that enable you to get near-instantaneous access to GPU and ML chip instances through Capacity Blocks, extend the durations of your Capacity Blocks, and reserve Capacity Blocks for longer periods of up to six months. With these new features, you have more options to provision GPU and ML chip capacity to meet your machine learning (ML) workload needs.

With Capacity Blocks, you can reserve GPU and ML chip capacity in cluster sizes of one to 64 instances (512 GPUs, or 1,024 Trainium chips), giving you the flexibility to run a wide variety of ML workloads. Starting today, you can provision Capacity Blocks that begin in just minutes, enabling you to quickly access GPU and ML chip capacity. You can also extend your Capacity Block when your ML job takes longer than you anticipated, ensuring uninterrupted access to capacity. Finally, for projects that require GPU or ML chip capacity for longer durations, you can now provision Capacity Blocks for up to six months, allowing you to get capacity for just the amount of time you need.
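
As an illustration with boto3, you might search for and purchase a near-term block as below; the instance type, count, dates, and duration are examples.

    import boto3
    from datetime import datetime, timedelta, timezone

    ec2 = boto3.client("ec2")

    now = datetime.now(timezone.utc)

    # Find Capacity Block offerings that can start within the next two days.
    offerings = ec2.describe_capacity_block_offerings(
        InstanceType="p5.48xlarge",  # example instance type
        InstanceCount=4,
        CapacityDurationHours=24,
        StartDateRange=now,
        EndDateRange=now + timedelta(days=2),
    )["CapacityBlockOfferings"]

    # Purchase the first matching offering.
    ec2.purchase_capacity_block(
        CapacityBlockOfferingId=offerings[0]["CapacityBlockOfferingId"],
        InstancePlatform="Linux/UNIX",
    )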

EC2 Capacity Blocks are available for P5e, P5, P4d, and Trn1 instances in the US East (N. Virginia, Ohio), US West (Oregon), and Asia Pacific (Tokyo, Melbourne) Regions. See the User Guide for a detailed breakdown of instance availability by Region.

To learn more, see the Amazon EC2 Capacity Blocks for ML User Guide.

Read more


Request future dated Amazon EC2 Capacity Reservations

Today, we are announcing that you can request Amazon EC2 Capacity Reservations that start on a future date. Capacity Reservations provide assurance for your critical workloads by allowing you to reserve compute capacity in a specific Availability Zone. With future dated Capacity Reservations, you can secure capacity for your future needs, giving you peace of mind for critical scaling events.

You can create future dated Capacity Reservations by specifying the capacity you need, start date, and the minimum duration you commit to use the reservation. Once EC2 approves the request, your reservation will be scheduled to go active on the chosen start date and upon activation, you can immediately launch instances.
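
A minimal boto3 sketch follows, assuming the future dated reservation is expressed through StartDate and CommitmentDuration parameters on CreateCapacityReservation; the parameter names and all values here are assumptions for illustration.

    import boto3
    from datetime import datetime, timezone

    ec2 = boto3.client("ec2")

    ec2.create_capacity_reservation(
        InstanceType="m5.xlarge",  # example instance type
        InstancePlatform="Linux/UNIX",
        AvailabilityZone="us-east-1a",
        InstanceCount=100,
        # Future dated reservation: when the capacity should become active,
        # and the minimum commitment in seconds (assumed parameter names).
        StartDate=datetime(2025, 3, 1, tzinfo=timezone.utc),
        CommitmentDuration=14 * 24 * 3600,  # 14 days
    )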

This new capability is available to all Capacity Reservations customers in all AWS commercial regions, AWS China regions, and the AWS GovCloud (US) Regions at no additional cost. To learn more about these features, please refer to the Capacity Reservations user guide.

Read more


Amazon EC2 Auto Scaling introduces highly responsive scaling policies

Today, we are launching two new capabilities for EC2 Auto Scaling that improve the responsiveness of Target Tracking scaling policies. Target Tracking now automatically adapts to the unique usage patterns of your individual applications, and it can be configured to monitor high-resolution CloudWatch metrics to make more timely scaling decisions. With this release, you can enhance your application performance while maintaining high utilization of your EC2 resources to save costs.

Scaling based on sub-minute CloudWatch metrics enables customers with volatile demand patterns, such as client-serving APIs, live streaming services, ecommerce websites, or on-demand data processing, to reduce the time it takes to detect and respond to changing demand. In addition, Target Tracking policies now self-tune their responsiveness, using historical usage data to determine the optimal balance between cost and performance for each application, saving customers time and effort.

Both new features are available in select commercial Regions, and Target Tracking policies will begin self-tuning once they have finished analyzing your application's usage patterns. You can use the AWS Management Console, CLI, SDKs, and CloudFormation to update your Target Tracking configurations. Refer to the EC2 Auto Scaling user guide to learn more.
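
As a sketch of a Target Tracking policy on a sub-minute custom metric (this assumes the metric-math form of CustomizedMetricSpecification accepts a Period on its MetricStat; the field name, namespace, and metric are illustrative, not a definitive API reference):

    import boto3

    autoscaling = boto3.client("autoscaling")

    autoscaling.put_scaling_policy(
        AutoScalingGroupName="my-asg",  # placeholder
        PolicyName="high-resolution-target-tracking",
        PolicyType="TargetTrackingScaling",
        TargetTrackingConfiguration={
            "TargetValue": 60.0,
            "CustomizedMetricSpecification": {
                "Metrics": [{
                    "Id": "m1",
                    "MetricStat": {
                        "Metric": {
                            "Namespace": "MyApp",            # hypothetical
                            "MetricName": "RequestBacklog",  # hypothetical
                        },
                        "Stat": "Average",
                        "Period": 10,  # sub-minute period (assumed field)
                    },
                    "ReturnData": True,
                }],
            },
        },
    )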

Read more


Amazon EC2 introduces provisioning control to launch instances on On-Demand Capacity

Amazon EC2 introduces a new capability that makes it easy for customers to target instance launches on their On-Demand Capacity Reservations (ODCRs). On-Demand Capacity Reservations help you reserve compute capacity for your workloads in a specific Availability Zone for any duration. This new feature allows you to better utilize your On-Demand Capacity Reservations by ensuring that launches from the RunInstances EC2 API and EC2 Auto Scaling groups will only be fulfilled by your targeted or open Capacity Reservations.

To get started, customers simply specify whether they want to launch only on ODCR capacity in their RunInstances EC2 API calls, Launch Templates, or Auto Scaling groups (ASGs).
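
For example, here is a boto3 sketch of a RunInstances call restricted to reservation capacity; the "capacity-reservations-only" preference value is our reading of this launch, and the AMI and instance type are placeholders.

    import boto3

    ec2 = boto3.client("ec2")

    # Launch only if targeted or open ODCR capacity is available; do not fall
    # back to regular On-Demand capacity.
    ec2.run_instances(
        ImageId="ami-0123456789abcdef0",  # placeholder AMI
        InstanceType="c6i.4xlarge",       # placeholder instance type
        MinCount=1,
        MaxCount=1,
        CapacityReservationSpecification={
            "CapacityReservationPreference": "capacity-reservations-only"  # assumed value
        },
    )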

This capability is now available in all AWS Regions, except China regions. To get started, please refer to the documentation for use with RunInstances API and ASG.
 

Read more


Neptune Analytics Adds Support for Seamless Graph Data Import and Export

Today, we’re launching a new feature that enables customers to easily import Parquet data and export Parquet/CSV data to and from their Neptune Analytics graphs. This new capability simplifies the process of loading Parquet data into Neptune Analytics for graph queries and analysis, while also allowing customers to export graph data as Parquet or CSV files. Exported data can then be moved seamlessly to Neptune DB, data lakes, or ML platforms for further exploration and analysis.

Previously, customers faced challenges with limited integration options, vendor lock-in concerns, limited cross-platform flexibility, and difficulty sharing graph data for collaborative analysis. The new export functionality addresses these pain points by providing a seamless, end-to-end experience. Data extraction occurs from a snapshot, so database performance remains unaffected. With the ability to import and export graph data via APIs, customers can use Neptune Analytics to run graph algorithms, update their graphs, and export the data for use in other databases such as Neptune, data processing frameworks such as Apache Spark, or query services such as Amazon Athena. This flexibility empowers customers to gain deeper insights from their graph data and use it across various tools and environments.
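
A rough boto3 sketch of the export path follows; the neptune-graph StartExportTask call and its parameters reflect our understanding of the Neptune Analytics API, and the graph ID, role ARN, KMS key, and bucket are placeholders.

    import boto3

    graph = boto3.client("neptune-graph")

    # Export graph data as Parquet to S3; extraction runs from a snapshot.
    task = graph.start_export_task(
        graphIdentifier="g-0123456789",  # placeholder graph ID
        roleArn="arn:aws:iam::111122223333:role/neptune-export-role",  # placeholder
        format="PARQUET",
        destination="s3://my-graph-exports/exports/",  # placeholder bucket
        kmsKeyIdentifier="arn:aws:kms:us-east-1:111122223333:key/abcd1234",  # placeholder
    )
    print(task)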

To learn more about Neptune Analytics and the native import/export capability, visit the features page and user guide.
 

Read more


Announcing static stability for Amazon EC2 instances backed by EC2 instance store on AWS Outposts

AWS Outposts now offers static stability for Amazon EC2 instances backed by EC2 instance store. This enables automatic recovery for workloads running on such EC2 instances from power failures or reboots, even when the connection to the parent AWS Region is temporarily unavailable. This means Outposts servers and Outposts racks can recover faster from power outages, minimizing downtime and data loss.

Outposts provides a consistent hybrid experience by bringing AWS services to customer premises and edge locations on fully managed AWS infrastructure. While Outposts typically runs connected to an AWS Region for resource management, access control, and software updates, the new static stability feature enables workloads running on EC2 instances backed by EC2 instance store to recover from power failures even when connectivity to the AWS Region is unavailable. Note that this capability is currently not available for EC2 instances backed by Amazon EBS volumes.

This capability is available in all AWS Regions where Outposts is supported. Check out the Outposts servers FAQs page and the Outposts rack FAQs page for the full list of supported Regions.

To get started, no customer-specific action is required. Static stability is now enabled for all EC2 instances backed by EC2 instance store.

Read more


Amazon EC2 adds new CPU performance attribute for instance type selection

Starting today, EC2 Auto Scaling and EC2 Fleet customers can express their EC2 instances' CPU performance requirements as part of the Attribute-Based Instance Type Selection (ABIS) configuration. With ABIS, customers can already choose a list of instance types by defining a set of desired resource requirements, such as the number of vCPU cores and memory per instance. Now, in addition to the quantitative resource requirements, customers can also identify an instance family that ABIS will use as a baseline to automatically select instance types offering similar or better CPU performance, enabling customers to further optimize their instance type selection.

ABIS is a powerful tool for customers looking to leverage instance type diversification to meet their capacity requirements. For example, customers who use Spot Instances, which launch into spare EC2 capacity at a discounted price, can access multiple instance types to fulfill larger capacity needs and experience fewer interruptions. With this release, customers can, for example, use ABIS in a launch request for instances in the C, M, and R instance classes, with a minimum of 4 vCPUs, that provide CPU performance in line with the C6i instance family or better.
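
Translated into an instance-requirements sketch for EC2 Fleet or an ASG mixed instances policy (assuming the baseline factor takes an instance-family reference as described above; exact field names may differ):

    # Instance requirements matching the example above: C/M/R classes,
    # at least 4 vCPUs, CPU performance in line with the C6i family or better.
    instance_requirements = {
        "VCpuCount": {"Min": 4},
        "MemoryMiB": {"Min": 8192},                  # illustrative floor
        "AllowedInstanceTypes": ["c*", "m*", "r*"],
        "BaselinePerformanceFactors": {              # assumed field name
            "Cpu": {"References": [{"InstanceFamily": "c6i"}]}
        },
    }
    # This dict plugs into the Overrides of a CreateFleet request or an ASG
    # mixed instances policy in place of an explicit instance type list.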

The feature is available in all AWS commercial and AWS GovCloud (US) Regions. You can use the AWS Management Console, CLI, SDKs, and CloudFormation to update your instance requirements. To get started, refer to the user guides for EC2 Auto Scaling and EC2 Fleet.

Read more


Amazon EC2 G6e instances now available in additional regions

Starting today, Amazon EC2 G6e instances powered by NVIDIA L40S Tensor Core GPUs are available in Asia Pacific (Tokyo) and Europe (Frankfurt, Spain). G6e instances can be used for a wide range of machine learning and spatial computing use cases. They deliver up to 2.5x better performance compared to G5 instances and up to 20% lower inference costs than P4d instances.

Customers can use G6e instances to deploy large language models (LLMs) with up to 13B parameters and diffusion models for generating images, video, and audio. Additionally, the G6e instances will unlock customers’ ability to create larger, more immersive 3D simulations and digital twins for spatial computing workloads. G6e instances feature up to 8 NVIDIA L40S Tensor Core GPUs with 384 GB of total GPU memory (48 GB of memory per GPU) and third generation AMD EPYC processors. They also support up to 192 vCPUs, up to 400 Gbps of network bandwidth, up to 1.536 TB of system memory, and up to 7.6 TB of local NVMe SSD storage. Developers can run AI inference workloads on G6e instances using AWS Deep Learning AMIs, AWS Deep Learning Containers, or managed services such as Amazon Elastic Kubernetes Service (Amazon EKS) and AWS Batch, with Amazon SageMaker support coming soon.

Amazon EC2 G6e instances are available today in the AWS US East (N. Virginia, Ohio), US West (Oregon), Asia Pacific (Tokyo), and Europe (Frankfurt, Spain) regions. Customers can purchase G6e instances as On-Demand Instances, Reserved Instances, Spot Instances, or as part of Savings Plans.

To get started, visit the AWS Management Console, AWS Command Line Interface (CLI), and AWS SDKs. To learn more, visit the G6e instance page.

Read more


Amazon EC2 C7i-flex and M7i-flex instances are now available in AWS Asia Pacific (Malaysia) Region

Starting today, Amazon Elastic Compute Cloud (Amazon EC2) Flex (C7i-flex, M7i-flex) instances powered by custom 4th Gen Intel Xeon Scalable processors (code-named Sapphire Rapids) are available in Asia Pacific (Malaysia) region. These custom processors, available only on AWS, offer up to 15% better performance over comparable x86-based Intel processors utilized by other cloud providers.

Flex instances are the easiest way for you to get price-performance benefits for a majority of general-purpose and compute intensive workloads. C7i-flex and M7i-flex instances deliver up to 19% better price-performance compared to C6i and M6i instances respectively. These instances offer the most common sizes, from large to 8xlarge, and are a great first choice for applications that don't fully utilize all compute resources such as web and application servers, virtual desktops, batch-processing, microservices, databases, caches, and more. For workloads that need larger instance sizes (up to 192 vCPUs and 768 GiB memory) or continuous high CPU usage, you can leverage C7i and M7i instances.

C7i-flex instances are available in the following AWS Regions: US East (N. Virginia, Ohio), US West (N. California, Oregon), Europe (Frankfurt, Ireland, London, Paris, Spain, Stockholm), Canada (Central), Asia Pacific (Malaysia, Mumbai, Seoul, Singapore, Sydney, Tokyo), and South America (São Paulo).

M7i-flex instances are available in the following AWS Regions: US East (N. Virginia, Ohio), US West (N. California, Oregon), Europe (Frankfurt, Ireland, London, Paris, Spain, Stockholm), Canada (Central), Asia Pacific (Malaysia, Mumbai, Seoul, Singapore, Sydney, Tokyo), South America (São Paulo), and the AWS GovCloud (US-East, US-West).
 

Read more


Amazon EC2 R8g instances now available in AWS Europe (Stockholm)

Starting today, Amazon Elastic Compute Cloud (Amazon EC2) R8g instances are available in AWS Europe (Stockholm) region. These instances are powered by AWS Graviton4 processors and deliver up to 30% better performance compared to AWS Graviton3-based instances. Amazon EC2 R8g instances are ideal for memory-intensive workloads such as databases, in-memory caches, and real-time big data analytics. These instances are built on the AWS Nitro System, which offloads CPU virtualization, storage, and networking functions to dedicated hardware and software to enhance the performance and security of your workloads.

AWS Graviton4-based Amazon EC2 instances deliver the best performance and energy efficiency for a broad range of workloads running on Amazon EC2. AWS Graviton4-based R8g instances offer larger instance sizes with up to 3x more vCPU (up to 48xlarge) and memory (up to 1.5TB) than Graviton3-based R7g instances. These instances are up to 30% faster for web applications, 40% faster for databases, and 45% faster for large Java applications compared to AWS Graviton3-based R7g instances. R8g instances are available in 12 different instance sizes, including two bare metal sizes. They offer up to 50 Gbps enhanced networking bandwidth and up to 40 Gbps of bandwidth to the Amazon Elastic Block Store (Amazon EBS).

To learn more, see Amazon EC2 R8g Instances. To explore how to migrate your workloads to Graviton-based instances, see AWS Graviton Fast Start program and Porting Advisor for Graviton. To get started, see the AWS Management Console.

Read more


Amazon EC2 now provides lineage information for your AMIs

Amazon EC2 now provides source details for your Amazon Machine Images (AMIs). With this lineage information, you can easily trace any copied or derived AMI back to their original AMI source.

Prior to today, you had to maintain a list of AMIs, use tags, and create custom scripts to track the origins of an AMI. This approach was time-consuming, hard to scale, and resulted in operational overhead. Now, you can easily view details of the source AMI, making it easier to understand where a particular AMI originated. When copying AMIs across AWS Regions, the lineage information clearly links the copied AMIs to their original AMIs. This capability provides a more streamlined and efficient way to manage and understand the lineage of AMIs within your AWS environment.

You can view these details by using the AWS CLI, SDKs, or Console. This capability is available at no additional cost in all AWS Regions, including the AWS GovCloud (US) and AWS China Regions. To learn more, please visit our documentation.

Read more


AWS Lambda supports application performance monitoring (APM) via CloudWatch Application Signals

AWS Lambda now supports Amazon CloudWatch Application Signals, an application performance monitoring (APM) solution, enabling developers and operators to easily monitor the health and performance of their serverless applications built using Lambda.

Customers want an easy way to quickly identify and troubleshoot performance issues to minimize the mean time to recovery (MTTR) and operational costs of running serverless applications. Now, Application Signals provides pre-built, standardized dashboards for critical application metrics (such as throughput, availability, latency, faults, and errors), correlated traces, and interactions between the Lambda function and its dependencies (such as other AWS services), without requiring any manual instrumentation or code changes from developers. This gives operators a single-pane-of-glass view of the health of the application and enables them to drill down to establish the root cause of performance anomalies. You can also create Service Level Objectives (SLOs) in Application Signals to closely track the performance KPIs of critical operations in your application, enabling you to easily identify and triage operations that do not meet your business KPIs. Application Signals auto-instruments your Lambda function using enhanced AWS Distro for OpenTelemetry (ADOT) libraries, delivering better performance (cold start latency and memory consumption) than before.

To get started, visit the Configuration tab in the Lambda console and enable Application Signals for your function with one click in the “Monitoring and operational tools” section. To learn more, visit the launch blog post, the Lambda developer guide, and the Application Signals developer guide.

Application Signals for Lambda is available in all commercial AWS Regions where Lambda and CloudWatch Application Signals are available.
 

Read more


AWS Elastic Beanstalk adds support for Ruby 3.3

AWS Elastic Beanstalk is a service that provides the ability to deploy and manage applications in AWS without worrying about the infrastructure that runs those applications. Ruby 3.3 on AL2023 adds a new parser, a new pure-Ruby just-in-time compiler, and several performance improvements. You can create Elastic Beanstalk environments running Ruby 3.3 on AL2023 using any of the Elastic Beanstalk interfaces, such as the Elastic Beanstalk Console, the Elastic Beanstalk CLI, and the Elastic Beanstalk API.

This platform is generally available in commercial regions where Elastic Beanstalk is available including the AWS GovCloud (US) Regions. For a complete list of regions and service offerings, see AWS Regions.

For more information about Ruby and Linux Platforms, see the Elastic Beanstalk developer guide. To learn more about Elastic Beanstalk, visit the Elastic Beanstalk product page.

Read more


Amazon EC2 C6a and R6a instances now available in additional AWS region

Starting today, compute optimized Amazon EC2 C6a and memory optimized Amazon EC2 R6a instances are available in the Asia Pacific (Hyderabad) region. C6a and R6a instances are powered by third-generation AMD EPYC processors with a maximum frequency of 3.6 GHz. C6a instances deliver up to 15% better price performance than comparable C5a instances, and R6a instances deliver up to 35% better price performance than comparable R5a instances. These instances are built on the AWS Nitro System, a combination of dedicated hardware and a lightweight hypervisor that delivers practically all of the compute and memory resources of the host hardware to your instances for better overall performance and security.

With this additional region, C6a instances are available in the following AWS Regions: US East (Northern Virginia, Ohio), US West (Oregon, N. California), Asia Pacific (Hong Kong, Mumbai, Singapore, Sydney, Tokyo, Hyderabad), Canada (Central), Europe (Frankfurt, Ireland, London), and South America (Sao Paulo). R6a instances are available in the following AWS Regions: US East (Northern Virginia, Ohio), US West (Oregon, N. California), Asia Pacific (Mumbai, Singapore, Sydney, Tokyo, Hyderabad), and Europe (Frankfurt, Ireland).

These instances can be purchased as Savings Plans, Reserved, On-Demand, and Spot instances. To get started, visit the AWS Management Console, AWS Command Line Interface (CLI), and AWS SDKs. To learn more, visit the C6a instances, and R6a instances pages.
 

Read more


Amazon CloudWatch Synthetics now automatically deletes Lambda resources associated with canaries

Amazon CloudWatch Synthetics, an outside-in monitoring capability that continually verifies your customers’ experience by running snippets of code called canaries on AWS Lambda, will now automatically delete the associated Lambda resources when you delete Synthetics canaries, minimizing the manual upkeep required to manage AWS resources in your account.

CloudWatch Synthetics creates Lambda functions to execute canaries that monitor the health and performance of your web applications or API endpoints. When you delete a canary, the Lambda function and its layers are no longer usable. With this feature, those Lambda resources are automatically removed when a canary is deleted, reducing the housekeeping needed to maintain your Synthetics canaries. Canaries deleted via the AWS console automatically clean up related Lambda resources. New canaries created via the CLI, SDK, or CloudFormation are automatically opted in to this feature, whereas canaries created before this launch need to be explicitly opted in.
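
For canaries created before this launch, the opt-in can be made explicit at deletion time; a small boto3 sketch (the canary name is a placeholder):

    import boto3

    synthetics = boto3.client("synthetics")

    # Delete a canary together with its underlying Lambda function and layers.
    synthetics.delete_canary(
        Name="my-canary",    # placeholder canary name
        DeleteLambda=True,   # opt in to automatic Lambda cleanup
    )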

This feature is available in all commercial Regions, the AWS GovCloud (US) Regions, and the China Regions at no additional cost.

To learn more about the delete behavior of canaries, see the documentation, or refer to the user guide and One Observability Workshop to get started with CloudWatch Synthetics.
 

Read more


AWS Elastic Beanstalk adds support for Node.js 22

AWS Elastic Beanstalk now supports building and deploying Node.js 22 applications on AL2023 Beanstalk environments.

AWS Elastic Beanstalk is a service that provides the ability to deploy and manage applications in AWS without worrying about the infrastructure that runs those applications. Node.js 22 on AL2023 provides updates to the V8 JavaScript engine, improved garbage collection, and performance improvements. You can create Elastic Beanstalk environments running Node.js 22 on AL2023 using any of the Elastic Beanstalk interfaces, such as the Elastic Beanstalk Console, the Elastic Beanstalk CLI, and the Elastic Beanstalk API.

This platform is generally available in commercial regions where Elastic Beanstalk is available including the AWS GovCloud (US) Regions. For a complete list of regions and service offerings, see AWS Regions.

For more information about Node.js and Linux Platforms, see the Elastic Beanstalk developer guide. To learn more about Elastic Beanstalk, visit the Elastic Beanstalk product page.

Read more


AWS announces support for predictive scaling for Amazon ECS services

Today, AWS announces support for predictive scaling for Amazon Elastic Container Service (Amazon ECS). Predictive scaling leverages advanced machine learning algorithms to proactively scale your Amazon ECS services ahead of demand surges, reducing overprovisioning costs while improving application responsiveness and availability.

Amazon ECS offers a rich set of service auto scaling options, including target tracking and step scaling policies that automatically adjust task counts in response to observed load, as well as scheduled scaling to manually define rules to adjust capacity for routine demand patterns. Many applications observe recurring patterns of steep demand changes, such as early morning spikes when business resumes, wherein a reactive scaling policy can be slow to respond. Predictive scaling is a new capability that harnesses advanced machine learning algorithms, pre-trained on millions of data points, to proactively scale out ECS services ahead of anticipated demand surges. You can use predictive scaling alongside your existing auto scaling policies, such as target tracking or step scaling, so that your applications scale based on both real-time and historic patterns. You can also choose a “forecast only” mode to evaluate its accuracy and suitability before enabling it to “forecast and scale”. Predictive scaling enhances responsiveness and availability for applications with recurring demand patterns, while also reducing the operational effort of manually configuring scaling policies and the costs from overprovisioning.
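
As a sketch via Application Auto Scaling (the PredictiveScaling policy type and the metric-pair name below are assumptions patterned on EC2 Auto Scaling's predictive scaling; consult the ECS documentation for the exact shapes):

    import boto3

    aas = boto3.client("application-autoscaling")

    aas.put_scaling_policy(
        PolicyName="ecs-predictive-scaling",
        ServiceNamespace="ecs",
        ResourceId="service/my-cluster/my-service",  # placeholder
        ScalableDimension="ecs:service:DesiredCount",
        PolicyType="PredictiveScaling",              # assumed policy type
        PredictiveScalingPolicyConfiguration={       # assumed configuration shape
            "Mode": "ForecastOnly",  # evaluate forecast accuracy before scaling
            "MetricSpecifications": [{
                "TargetValue": 70.0,
                "PredefinedMetricPairSpecification": {
                    "PredefinedMetricType": "ECSServiceCPUUtilization"  # assumed
                },
            }],
        },
    )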

You can use the AWS Management Console, SDK, CLI, CloudFormation, and CDK to configure predictive auto scaling for your ECS services. For a list of supported AWS Regions, see the documentation. To learn more, visit this blog post and the documentation.

Read more


Bottlerocket announces new AMIs that are preconfigured to use FIPS 140-3 validated cryptographic modules

Today, AWS has announced new AMIs for Bottlerocket that are preconfigured to use FIPS 140-3 validated cryptographic modules, including the Amazon Linux 2023 Kernel Crypto API and AWS-LC. Bottlerocket is a Linux-based operating system purpose-built for running containers, with a focus on security, minimal footprint, and safe updates.

With these FIPS-enabled Bottlerocket AMIs, the host software uses only FIPS-approved cryptographic algorithms for TLS connections. This includes connectivity to AWS services such as EC2 and Amazon Elastic Container Registry (ECR). Additionally, in regions where FIPS endpoints are available, the AMIs automatically use FIPS-compliant endpoints for these services by default, streamlining secure configurations for containerized workloads.

The FIPS-enabled Bottlerocket AMIs are now available in all commercial and AWS GovCloud (US) Regions. To see the regions where FIPS-endpoints are supported, visit the AWS FIPS 140-3 page.

To get started with Bottlerocket, see the Bottlerocket User Guide. You can also visit the Bottlerocket product page and explore the Bottlerocket GitHub repository for more information.

Read more


AWS Compute Optimizer now supports idle resource recommendation

Today, AWS announces that AWS Compute Optimizer now supports recommendations that help you identify idle AWS resources. With this new recommendation type, you can identify resources that are unused and may be candidates for stopping or deleting, resulting in cost savings.

With the new idle resource recommendations, you can identify idle EC2 instances, EC2 Auto Scaling groups, EBS volumes, ECS services running on Fargate, and RDS instances, and view the total savings potential of stopping or deleting these idle resources. Compute Optimizer analyzes 14 consecutive days of utilization history to validate that resources are idle, providing trustworthy savings opportunities. You can also view idle resource recommendations across all AWS accounts in your organization through the Cost Optimization Hub, with estimated savings de-duplicated against other recommendations on the same resources.
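
A small boto3 sketch follows, assuming the idle findings are surfaced through a GetIdleRecommendations API (response field names may differ):

    import boto3

    co = boto3.client("compute-optimizer")

    # List idle-resource findings across supported resource types.
    resp = co.get_idle_recommendations()
    for rec in resp.get("idleRecommendations", []):
        print(rec.get("resourceArn"), rec.get("finding"))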

For more information about the AWS Regions where Compute Optimizer is available, see AWS Region table.

For more information about Compute Optimizer, visit our product page and documentation. You can start using AWS Compute Optimizer through the AWS Management Console, AWS CLI, and AWS SDK.

Read more


Amazon EKS managed node groups now support AWS Local Zones

Amazon Elastic Kubernetes Service (Amazon EKS) now supports using managed node groups for Kubernetes workloads running on AWS Local Zones. This enhancement allows you to leverage the node provisioning and lifecycle automation of EKS managed node groups for EC2 instances in Local Zones, bringing your Kubernetes applications closer to end-users for improved latency. With this update, you can simplify cluster operations and unify your Kubernetes practices across AWS Local Zones and Regions.

Amazon EKS managed node groups provide an easy-to-use abstraction on top of Amazon EC2 instances and Auto Scaling groups, enabling streamlined creation, upgrading, and termination of Kubernetes cluster nodes (EC2 instances). You can now create EKS managed node groups for AWS Local Zones in new or existing EKS clusters using the Amazon EKS APIs, AWS Management Console, or infrastructure-as-code tools such as AWS CloudFormation and Terraform. This feature comes at no additional cost – you only pay for the AWS resources you provision.

To learn more about using Amazon EKS managed node groups with AWS Local Zones, please consult the EKS documentation.

Read more


Amazon ECS announces AZ rebalancing that reduces mean time to recovery after an infrastructure event

Amazon Web Services (AWS) has announced the launch of Availability Zone (AZ) rebalancing for Amazon Elastic Container Service (ECS), a new feature that automatically redistributes containerized workloads across AZs. This capability helps reduce the mean time to recovery after infrastructure events, enabling applications to maintain high availability without requiring manual intervention.

Customers spread tasks across multiple AZs to enhance application resilience and minimize the impact of AZ-level failures, following AWS best practices. However, infrastructure events (such as an AZ outage) can leave the task distribution for an ECS service in an uneven state, potentially causing an availability risk to customer applications. With AZ rebalancing, ECS now automatically adjusts task placement to maintain an even balance, ensuring your applications remain highly available even in the face of failure.

Starting today, customers can enable AZ rebalancing for new and existing ECS services through the AWS CLI or the ECS Console. The feature is available in all Commercial and AWS GovCloud (US) Regions, and supports ECS Fargate and Amazon EC2 launch types. To learn more about AZ rebalancing and how to get started, visit the Amazon ECS documentation page.
 

Read more


AWS Elastic Beanstalk adds support for Windows Bundled Logs

AWS Elastic Beanstalk now provides Windows Bundled Logs to enhance log collection capabilities for customers running applications on Windows platforms.

With Windows Bundled Logs, customers can request full logs and Beanstalk will automatically collect and bundle the most important log files into a single downloadable zip file. This bundled log set can include logs for the HealthD service, IIS, the Application Event log, Elastic Beanstalk, and CloudFormation.

Elastic Beanstalk support for Windows Bundled Logs is available in all of the AWS Commercial Regions and AWS GovCloud (US) Regions that Elastic Beanstalk supports. For a complete list of regions and service offerings, see AWS Regions.

For more information about Elastic Beanstalk and Windows Bundled Logs, see the AWS Elastic Beanstalk Developer Guide.

Read more


AWS Lambda supports Amazon S3 as a failed-event destination for asynchronous and stream event sources

AWS Lambda now supports Amazon S3 as a failed-event destination for asynchronous invocations, and for Amazon Kinesis and Amazon DynamoDB event source mappings (ESMs). This enables customers to route the failed batch of records and function execution results to S3 using a simple configuration, without the overhead of writing and managing additional code.

Customers building event-driven applications with asynchronous event sources or stream event sources for Lambda can configure services like Amazon Simple Queue Service (SQS) and Amazon Simple Notification Service (SNS) as failed-event destinations to store the results of failed invocations. However, in scenarios where existing failed-event destinations do not support the payload size requirements for the failed events, customers need to write custom logic to retrieve and redrive event payload data. With today’s announcement, customers can configure S3 as a failed-event destination for Lambda functions invoked via asynchronous invocations, Kinesis ESMs, and DynamoDB ESMs. This enables customers to deliver complete event payload data to the failed-event destination, and helps reduce the overhead of managing custom logic to reliably retrieve and redrive failed event data.
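
For asynchronous invocations, the configuration might look like this boto3 sketch; the function name and bucket are placeholders, and the S3 bucket ARN is used as the OnFailure destination as described above.

    import boto3

    lam = boto3.client("lambda")

    # Route the full payloads of failed asynchronous invocations to S3.
    lam.put_function_event_invoke_config(
        FunctionName="my-function",  # placeholder function name
        MaximumRetryAttempts=2,
        DestinationConfig={
            "OnFailure": {"Destination": "arn:aws:s3:::my-failed-events-bucket"}  # placeholder
        },
    )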

This feature is generally available in all AWS Commercial Regions where AWS Lambda and the configured event source or event destination are available.

To enable S3 as a failed-event destination, refer to our documentation for configuring destinations with asynchronous invocations, Kinesis ESMs, and DynamoDB ESMs. This feature incurs no additional charge to use. You pay for charges associated with Amazon S3 usage.

Read more


Amazon ECS now allows you to configure software version consistency

Amazon Elastic Container Service (Amazon ECS) now allows you to configure software version consistency for specific containers within your Amazon ECS services.

By default, Amazon ECS resolves container image tags to the image digest (the SHA256 hash of the image manifest) when you create a new Amazon ECS service or deploy an update to the service, ensuring that all tasks in the service are identical and launched with the same image digest(s). However, for certain containers within a task (for example, telemetry sidecars provided by a third party), customers may prefer not to enforce consistency and instead use a mutable container image tag (such as LATEST). Now, you can disable software version consistency for one or more containers in your ECS service by configuring the new versionConsistency attribute in the container definition. ECS applies changes to version consistency when you redeploy your ECS service with the new task definition revision.
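
As an illustration (image URIs and names are placeholders; versionConsistency is the attribute named above):

    import boto3

    ecs = boto3.client("ecs")

    ecs.register_task_definition(
        family="web-app",
        requiresCompatibilities=["FARGATE"],
        networkMode="awsvpc",
        cpu="512",
        memory="1024",
        containerDefinitions=[
            {
                "name": "app",
                # Default behavior: this tag is resolved to a digest at deploy time.
                "image": "111122223333.dkr.ecr.us-east-1.amazonaws.com/app:1.2.3",  # placeholder
            },
            {
                "name": "telemetry-sidecar",
                "image": "example-vendor/agent:latest",  # placeholder mutable tag
                "versionConsistency": "disabled",  # always pull the mutable tag
            },
        ],
    )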

You can disable software version consistency for your Amazon ECS services running on AWS Fargate platform version 1.4.0 or higher, or on version 1.70.0 or higher of the Amazon ECS agent, in all commercial and the AWS GovCloud (US) Regions. To learn more, please visit our documentation.
 

Read more


Amazon EC2 X2iezn instances are now available in additional AWS region

Starting today, memory optimized Amazon EC2 X2iezn instances are available in Middle East (UAE). Amazon EC2 X2iezn instances are powered by 2nd generation Intel Xeon Scalable processors with an all-core turbo frequency of up to 4.5 GHz, the fastest in the cloud. These instances are a great fit for electronic design automation (EDA) workloads as well as relational databases that benefit from high single-threaded processor performance and a large memory footprint. The combination of high single-threaded compute performance and a 32:1 ratio of memory to vCPU makes X2iezn instances an ideal fit for EDA workloads, including physical verification, static timing analysis, power sign-off, and full-chip gate-level simulation, as well as database workloads that are license-bound. These instances are built on the AWS Nitro System, a rich collection of building blocks that offloads many traditional virtualization functions to dedicated hardware, delivering high performance, high availability, and highly secure cloud instances.

With this additional region, X2iezn instances are now available in the AWS US West (Oregon), US East (Northern Virginia), Europe (Ireland), Asia Pacific (Tokyo), and Middle East (UAE) regions. X2iezn instances are available for purchase as Savings Plans, Reserved Instances, Convertible Reserved Instances, On-Demand Instances, and Spot Instances, or as Dedicated Instances or Dedicated Hosts.

To get started with X2iezn instances, visit the AWS Management Console, AWS Command Line Interface (CLI), and AWS SDKs. To learn more, visit the EC2 X2iezn instances Page, visit the AWS forum for EC2 or connect with your usual AWS Support contacts.

Read more


Self-service capacity management for AWS Outposts

AWS Outposts now supports self-service capacity management, making it easy for you to view and manage compute capacity on your Outposts. Outposts brings native AWS services, infrastructure, and operating models to virtually any data center, co-location space, or on-premises facility by providing the same services, tools, and partner solutions with EC2 on premises. Customers have evolving business requirements and often need to fine-tune their application needs as their business scales. Capacity management enables viewing and modifying the configuration of EC2 capacity installed on Outposts.

Customers define their capacity configuration when ordering a new Outpost to support a variety of instance types. With capacity management, customers can view the instances on their Outposts, their configured sizes, and their placement within the Outposts. Customers can also view, plan, and modify their capacity configuration through the new self-service UI and API.

These capacity management features are available in all AWS Regions where Outposts is supported. Check out the Outposts rack FAQs page and the Outposts servers FAQs page for the full list of supported Regions.

To learn more about these capacity management capabilities for Outposts, read the Outposts user guide. To discuss Outposts capacity needs for your on-premises workloads with an Outposts specialist, submit this form.
 

Read more


Amazon Application Recovery Controller zonal shift and zonal autoshift extend support for EC2 Auto Scaling

Amazon Application Recovery Controller (ARC) zonal shift and zonal autoshift have expanded their capabilities and now support EC2 Auto Scaling. ARC zonal shift helps you quickly recover an unhealthy application in an Availability Zone (AZ), and reduce the duration and severity of impact to the application due to events such as power outages and hardware or software failures. ARC zonal autoshift safely and automatically shifts your application’s traffic away from an AZ when AWS identifies a potential failure affecting that AZ.

EC2 Auto Scaling customers can now shift traffic away from an AZ in the event of a failure. Zonal shift works with EC2 Auto Scaling by stopping dynamic scale-in, so that capacity is not unnecessarily removed, and by launching new EC2 instances only in the healthy AZs. In addition, you can choose whether health checks remain enabled or are disabled in the impaired AZ; when disabled, unhealthy instance replacement is paused in the AZ that has an active zonal shift. Enable your EC2 Auto Scaling groups for zonal shift using the EC2 Auto Scaling console or API, and then trigger a zonal shift or enable autoshift via the ARC zonal shift console or API. To learn more, review the ARC documentation and read this launch blog.
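
For example, starting a shift with the boto3 arc-zonal-shift client (the ASG ARN and Availability Zone ID are placeholders):

    import boto3

    arc = boto3.client("arc-zonal-shift")

    # Shift the ASG's launches away from an impaired AZ for two hours.
    arc.start_zonal_shift(
        resourceIdentifier=(
            "arn:aws:autoscaling:us-east-1:111122223333:autoScalingGroup:"
            "example-uuid:autoScalingGroupName/my-asg"  # placeholder ARN
        ),
        awayFrom="use1-az1",  # zone ID of the impaired AZ (placeholder)
        expiresIn="2h",
        comment="Shift away from impaired AZ",
    )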

There is no additional charge for using zonal shift or zonal autoshift. See the AWS Regional Services List for the most up-to-date availability information.
 

Read more


EC2 Auto Scaling now supports Amazon Application Recovery Controller zonal shift and zonal autoshift

EC2 Auto Scaling now supports Amazon Application Recovery Controller (ARC) zonal shift and zonal autoshift to help you quickly recover an impaired application from failures in an Availability Zone (AZ). Starting today, you can shift the launches of EC2 instances in an Auto Scaling group (ASG) away from an impaired AZ to quickly recover your unhealthy application in another AZ, reducing the duration and severity of impact due to events such as power outages and hardware or software failures. This new integration also brings support for ARC zonal autoshift, which automatically starts a zonal shift for enabled ASGs when AWS identifies a potential failure affecting an AZ.

You can initiate a zonal shift for an ASG from the Amazon EC2 Auto Scaling or Application Recovery Controller console. You can also use the AWS SDK to start a zonal shift and programmatically shift the instances in your ASG away from an AZ, and shift it back once the affected AZ is healthy.

There is no additional charge for using zonal shift. Zonal shift is now available in all AWS Regions. To get started, read the launch blog, or refer to the documentation.
 

Read more


AWS Batch now supports multiple EC2 Launch Templates per Compute Environment

AWS Batch now supports associating multiple Launch Templates (LTs) with an AWS Batch Compute Environment (CE). You no longer need to create separate AWS Batch CEs to apply different configurations based on the size and type of your Amazon Elastic Compute Cloud (EC2) instances. With support for multiple LTs per CE, you can dynamically choose a unique Amazon Machine Image (AMI), provision the right amount of storage, apply unique resource tags, and more by associating different EC2 launch templates with the different EC2 instance types used by a CE, enabling you to define flexible configurations for running your workloads using fewer CEs.

You can associate multiple LTs while creating a new CE, or update an existing CE to use multiple LTs for different instance types. AWS Batch allows you to define up to 10 LTs, overriding the default LT, per CE for different EC2 instance families or instance family and size combinations. For more information, see the Launch Templates page in the AWS Batch User Guide.

AWS Batch supports developers, scientists, and engineers in running efficient batch processing for ML model training, simulations, and analysis at any scale. Multiple launch templates per compute environment are supported in any AWS Region where AWS Batch is available.
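To make the configuration concrete, here is a hedged boto3 sketch of a compute environment with a default launch template plus an override for memory-optimized instance types; the overrides field is assumed from this launch, and all names, subnets, and ARNs are placeholders.

```python
import boto3

batch = boto3.client("batch")

# One CE, two launch templates: "default-lt" applies unless an override's
# targetInstanceTypes matches the instance type being launched.
batch.create_compute_environment(
    computeEnvironmentName="my-ce",
    type="MANAGED",
    computeResources={
        "type": "EC2",
        "minvCpus": 0,
        "maxvCpus": 256,
        "subnets": ["subnet-0123456789abcdef0"],
        "instanceTypes": ["c6i", "r6i"],
        "instanceRole": "arn:aws:iam::123456789012:instance-profile/ecsInstanceRole",
        "launchTemplate": {
            "launchTemplateName": "default-lt",
            "version": "$Latest",
            "overrides": [
                {
                    "launchTemplateName": "memory-optimized-lt",
                    "version": "$Latest",
                    "targetInstanceTypes": ["r6i"],
                }
            ],
        },
    },
    serviceRole="arn:aws:iam::123456789012:role/AWSBatchServiceRole",
)
```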
 

Read more


Amazon VPC Lattice now supports Amazon Elastic Container Service (Amazon ECS)

Amazon VPC Lattice now provides native integration with Amazon ECS, Amazon's fully managed container orchestration service that makes it easy for you to deploy, manage, and scale containerized applications. This launch enables VPC Lattice to offer comprehensive support across all major AWS compute services, including Amazon Elastic Compute Cloud (Amazon EC2), Amazon Elastic Kubernetes Service (Amazon EKS), AWS Lambda, Amazon ECS, and AWS Fargate. VPC Lattice is a managed application networking service that simplifies the process of connecting, securing, and monitoring applications across AWS compute services, allowing developers to focus on building applications that matter to their business while reducing time and resources spent on network setup and maintenance.

With native ECS integration, you can now directly associate your ECS services with VPC Lattice target groups, eliminating the need for an intermediate Application Load Balancer (ALB). This streamlined integration reduces cost, operational overhead, and complexity, while enabling you to leverage the complete feature sets of both ECS and VPC Lattice. Organizations with diverse compute infrastructure, such as a mix of Amazon EC2, Amazon EKS, AWS Lambda, and Amazon ECS workloads, can benefit from this launch by unifying service-to-service connectivity, security, and observability across all compute platforms.

This new feature is available in all AWS Regions where Amazon VPC Lattice is available.

To get started, see the Amazon ECS and Amazon VPC Lattice documentation.
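As a sketch of what the direct association looks like, the boto3 call below creates an ECS service attached to a VPC Lattice target group; the vpcLatticeConfigurations parameter is assumed from this launch, and all ARNs and names are placeholders.

```python
import boto3

ecs = boto3.client("ecs")

# Create a service whose tasks register directly with a VPC Lattice target
# group, with no intermediate ALB. The portName must match a port name in
# the task definition.
ecs.create_service(
    cluster="my-cluster",
    serviceName="my-service",
    taskDefinition="my-task:1",
    desiredCount=2,
    vpcLatticeConfigurations=[
        {
            "roleArn": "arn:aws:iam::123456789012:role/ecsInfrastructureRole",
            "targetGroupArn": (
                "arn:aws:vpc-lattice:us-east-1:123456789012:"
                "targetgroup/tg-0123456789abcdef0"
            ),
            "portName": "web",
        }
    ],
)
```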

Read more


Amazon Time Sync Service supports Microsecond-Accurate Time in Stockholm Region

The Amazon Time Sync Service now supports clock synchronization within microseconds of UTC on Amazon EC2 instances in the Europe (Stockholm) region.

Built on Amazon's proven network infrastructure and the AWS Nitro System, this capability gives customers access to local, GPS-disciplined reference clocks on supported EC2 instances. These clocks can be used to more easily order application events, measure 1-way network latency, increase distributed application transaction speed, and incorporate in-region and cross-region scalability features, all while simplifying technical designs. This capability is an improvement over many on-premises time solutions, and it is the first microsecond-range time service offered by any cloud provider. Additionally, you can audit your clock accuracy from your instance to measure and monitor the expected microsecond-range accuracy. Customers already using the Amazon Time Sync Service on supported instances will see improved clock accuracy automatically, without needing to adjust their AMI or NTP client settings. Customers can also use standard PTP clients and configure a new PTP Hardware Clock (PHC) to get the best accuracy possible. Both NTP and PTP can be used without any updates to VPC configurations.

Amazon Time Sync’s microsecond-accurate time is available starting today in Europe (Stockholm), as well as in other supported Regions on supported EC2 instance types. We will be expanding support to more AWS Regions and EC2 instance types. There is no additional charge for using this service.

Configuration instructions and more information on the Amazon Time Sync Service are available in the EC2 User Guide.
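For readers who want the PTP path, a minimal chrony configuration sketch is shown below; the refclock line follows the pattern documented in the EC2 User Guide, and /dev/ptp0 is an assumption about the device name on a supported instance.

```
# /etc/chrony.d/timesync.conf (sketch)
# Use the PTP Hardware Clock exposed by the Nitro System as a reference clock.
refclock PHC /dev/ptp0 poll 0 delay 0.000010 prefer
```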

Read more


Amazon EC2 G6 instances now available in the AWS GovCloud (US-West) Region

Starting today, the Amazon Elastic Compute Cloud (Amazon EC2) G6 instances powered by NVIDIA L4 GPUs are now available in the AWS GovCloud (US-West) Region. G6 instances can be used for a wide range of graphics-intensive and machine learning use cases.

Customers can use G6 instances for deploying ML models for natural language processing, language translation, video and image analysis, speech recognition, and personalization as well as graphics workloads, such as creating and rendering real-time, cinematic-quality graphics and game streaming. G6 instances feature up to 8 NVIDIA L4 Tensor Core GPUs with 24 GB of memory per GPU and third generation AMD EPYC processors. They also support up to 192 vCPUs, up to 100 Gbps of network bandwidth, and up to 7.52 TB of local NVMe SSD storage.

Customers can purchase G6 instances as On-Demand Instances, Reserved Instances, Spot Instances, or as part of Savings Plans. To get started, visit the AWS Management Console, AWS Command Line Interface (CLI), and AWS SDKs. To learn more, visit the G6 instance page.

Read more


Amazon EC2 Mac instances now available in AWS Canada (Central) Region

Starting today, Amazon Elastic Compute Cloud (Amazon EC2) M2 Mac instances are generally available (GA) in the AWS Canada (Central) region. This marks the first time we are introducing Mac instances to an AWS Canadian region, providing customers with even greater global accessibility to Apple silicon hardware. Customers can now run their macOS workloads in the AWS Canada (Central) region to satisfy their data residency requirements, benefit from improved latency to end users, and integrate with their pre-existing AWS environment configurations within this region.

M2 Mac instances deliver up to 10% faster performance over M1 Mac instances when building and testing applications for Apple platforms such as iOS, macOS, iPadOS, tvOS, watchOS, visionOS, and Safari. M2 Mac instances are powered by the AWS Nitro System and are built on Apple M2 Mac Mini computers featuring 8 core CPU, 10 core GPU, 24 GiB of memory, and 16 core Apple Neural Engine.

With this expansion, EC2 M2 Mac instances are available across the US East (N. Virginia), US East (Ohio), US West (Oregon), Europe (Frankfurt), Asia Pacific (Sydney), and Canada (Central) regions. To learn more or get started, see Amazon EC2 Mac Instances or visit the EC2 Mac documentation reference.

Read more


Amazon EC2 Capacity Blocks expands to new regions

Today, Amazon Web Services announces that Amazon Elastic Compute Cloud (Amazon EC2) Capacity Blocks for ML is available for P5 instances in two new regions: US West (Oregon) and Asia Pacific (Tokyo). You can use EC2 Capacity Blocks to reserve highly sought-after GPU instances in Amazon EC2 UltraClusters for a future date for the amount of time that you need to run your machine learning (ML) workloads.

EC2 Capacity Blocks enable you to reserve GPU capacity up to eight weeks in advance for durations up to 28 days in cluster sizes of one to 64 instances (512 GPUs), giving you the flexibility to run a broad range of ML workloads. They are ideal for short duration pre-training and fine-tuning workloads, rapid prototyping, and for handling surges in inference demand. EC2 Capacity Blocks deliver low-latency, high-throughput connectivity through colocation in Amazon EC2 UltraClusters.

With this expansion, EC2 Capacity Blocks for ML are available for the following instance types and AWS Regions: P5 instances in US East (N. Virginia), US East (Ohio), US West (Oregon), and Asia Pacific (Tokyo); P5e instances in US East (Ohio); P4d instances in US East (Ohio) and US West (Oregon); Trn1 instances in Asia Pacific (Melbourne).

To get started, visit the AWS Management Console, AWS Command Line Interface (AWS CLI), and AWS SDKs. To learn more, see the Amazon EC2 Capacity Blocks for ML User Guide.
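As a hedged sketch of the reservation workflow, the boto3 calls below search for a Capacity Block offering and purchase the first match; instance counts, durations, and the region are illustrative.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-west-2")

# Find 4 x p5.48xlarge for 96 hours (4 days), then purchase the first offering.
offerings = ec2.describe_capacity_block_offerings(
    InstanceType="p5.48xlarge",
    InstanceCount=4,
    CapacityDurationHours=96,
)

if offerings["CapacityBlockOfferings"]:
    offering_id = offerings["CapacityBlockOfferings"][0]["CapacityBlockOfferingId"]
    ec2.purchase_capacity_block(
        CapacityBlockOfferingId=offering_id,
        InstancePlatform="Linux/UNIX",
    )
```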

Read more


AWS Lambda supports Customer Managed Key (CMK) encryption for Zip function code artifacts

AWS Lambda now supports encryption of Lambda function Zip code artifacts using customer managed keys instead of default AWS owned keys. Using keys that they create, own, and manage can satisfy customers’ organizational security and governance requirements.

AWS Lambda is widely adopted for its simple programming model, built-in event triggers, automatic scaling, and fault tolerance. Previously, Lambda supported customer-managed AWS Key Management Service (AWS KMS) key-based encryption for the configuration data stored inside Lambda, such as function environment variables and SnapStart-enabled function snapshots. With today’s launch, customers can provide their own key to encrypt function code in Zip artifacts, making it easy to audit or control access to the code deployed in the Lambda function.

Customers can encrypt new or existing function Zip code artifacts by supplying a KMS key when creating or updating a function using the AWS Lambda API, AWS Management Console, AWS Command Line Interface (AWS CLI), AWS SDK, AWS CloudFormation, or AWS Serverless Application Model (AWS SAM). When the KMS key is disabled, the Lambda service and any user calling the GetFunction API to fetch the deployment package will no longer have access to the Zip artifacts deployed with the Lambda function, providing customers with a convenient revocation control. If no key is provided, Lambda still secures the Zip code artifacts with AWS-managed encryption.

This feature is available in all AWS Regions where Lambda is available, except the China Regions. To learn more, visit the documentation.
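For illustration, the boto3 sketch below re-uploads a function's zip artifact encrypted with a customer managed key; the SourceKMSKeyArn parameter name is our reading of this launch, and the function name, file, and key ARN are placeholders.

```python
import boto3

lambda_client = boto3.client("lambda")

# Update the function code and encrypt the zip artifact with a CMK.
with open("function.zip", "rb") as f:
    lambda_client.update_function_code(
        FunctionName="my-function",
        ZipFile=f.read(),
        SourceKMSKeyArn=(
            "arn:aws:kms:us-east-1:123456789012:"
            "key/11111111-2222-3333-4444-555555555555"
        ),
    )
```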

Read more


EC2 Auto Scaling introduces provisioning control on strict availability zone balance

Amazon EC2 Auto Scaling introduces a new Auto Scaling group (ASG) capability that lets customers strictly balance their workloads across Availability Zones, enabling greater control over provisioning and management of their EC2 instances.

Previously, customers that wanted to strictly balance an ASG’s EC2 instances across Availability Zones had to override default behaviors of EC2 Auto Scaling and invest in custom code to modify the ASG’s existing behaviors with lifecycle hooks, or maintain multiple ASGs. With this feature, customers can now easily achieve strict Availability Zone balance and ensure higher levels of resiliency for their applications.

This capability is now available through the AWS Command Line Interface (CLI), AWS SDKs, or the AWS Console in all AWS Regions. To get started, please refer to the documentation.
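A minimal boto3 sketch of opting an existing group into strict balancing is shown below; the AvailabilityZoneDistribution parameter and the "balanced-only" strategy value are assumed from this launch, and the group name is a placeholder.

```python
import boto3

autoscaling = boto3.client("autoscaling")

# "balanced-only": launch capacity only in a way that keeps AZs strictly
# balanced, rather than rebalancing on a best-effort basis.
autoscaling.update_auto_scaling_group(
    AutoScalingGroupName="my-asg",
    AvailabilityZoneDistribution={"CapacityDistributionStrategy": "balanced-only"},
)
```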

Read more


Amazon EC2 High Memory instances now available in South America (Sao Paulo) Region

Starting today, Amazon EC2 High Memory instances with 9 TiB of memory (u-9tb1.112xlarge) and 18 TiB of memory (u-18tb1.112xlarge) are now available in the South America (Sao Paulo) region. Customers can start using these new High Memory instances with On Demand and Savings Plan purchase options.

Amazon EC2 High Memory instances are certified by SAP for running Business Suite on HANA, SAP S/4HANA, Data Mart Solutions on HANA, Business Warehouse on HANA, and SAP BW/4HANA in production environments. For details, see the Certified and Supported SAP HANA Hardware Directory.

For information on how to get started with your SAP HANA migration to EC2 High Memory instances, view the Migrating SAP HANA on AWS to an EC2 High Memory Instance documentation. To hear from Steven Jones, GM for SAP on AWS, on what this launch means for our SAP customers, you can read his launch blog.
 

Read more


Amazon EC2 High Memory instances now available in Asia Pacific (Mumbai) region

Starting today, Amazon EC2 High Memory instances with 9 TiB of memory (u-9tb1.112xlarge) are available in the Asia Pacific (Mumbai) region. Customers can start using these new High Memory instances with On Demand and Savings Plan purchase options.

Amazon EC2 High Memory instances are certified by SAP for running Business Suite on HANA, SAP S/4HANA, Data Mart Solutions on HANA, Business Warehouse on HANA, and SAP BW/4HANA in production environments. For details, see the Certified and Supported SAP HANA Hardware Directory.

For information on how to get started with your SAP HANA migration to EC2 High Memory instances, view the Migrating SAP HANA on AWS to an EC2 High Memory Instance documentation. To hear from Steven Jones, GM for SAP on AWS, on what this launch means for our SAP customers, you can read his launch blog.
 

Read more


AWS announces availability of Microsoft Windows Server 2025 images on Amazon EC2

Amazon EC2 now supports Microsoft Windows Server 2025 with License Included (LI) Amazon Machine Images (AMIs), providing customers with an easy and flexible way to launch the latest version of Windows Server. By running Windows Server 2025 on Amazon EC2, customers can take advantage of the security, performance, and reliability of AWS with the latest Windows Server features.

Amazon EC2 is the proven, reliable, and secure cloud for your Windows Server workloads. Amazon creates and manages Microsoft Windows Server 2025 AMIs providing a reliable and quick way to launch Windows Server 2025 on EC2 instances. These images support Nitro-based instances with Unified Extensible Firmware Interface (UEFI) to provide enhanced security. These images also come with features such as Amazon EBS gp3 as the default root volume and the AWS NVMe driver pre-installed, which give you faster throughput and maximize price-performance. In addition, you can seamlessly use these images with pre-qualified services such as AWS Systems Manager, Amazon EC2 Image Builder, and AWS License Manager.

Windows Server 2025 AMIs are available in all commercial AWS Regions and the AWS GovCloud (US) Regions. You can find and launch instances directly from the Amazon EC2 console or through API or CLI commands. All instances running Windows Server 2025 AMIs are billed under the EC2 pricing for Windows operating system (OS).

To learn more about the new AMIs, see AWS Windows AMI reference. To learn more about running Windows Server 2025 on Amazon EC2, visit the Windows Workloads on AWS page.
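As a sketch of the CLI/SDK path, the boto3 snippet below resolves the latest AMI through AWS Systems Manager and launches an instance; the public parameter name follows the existing Windows AMI naming scheme and should be treated as an assumption.

```python
import boto3

ssm = boto3.client("ssm")
ec2 = boto3.client("ec2")

# Resolve the latest Windows Server 2025 Full Base AMI, then launch it.
param = ssm.get_parameter(
    Name="/aws/service/ami-windows-latest/Windows_Server-2025-English-Full-Base"
)
ec2.run_instances(
    ImageId=param["Parameter"]["Value"],
    InstanceType="m6i.large",
    MinCount=1,
    MaxCount=1,
)
```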

Read more


Amazon EC2 R8g instances now available in AWS Europe (Ireland)

Starting today, Amazon Elastic Compute Cloud (Amazon EC2) R8g instances are available in AWS Europe (Ireland) region. These instances are powered by AWS Graviton4 processors and deliver up to 30% better performance compared to AWS Graviton3-based instances. Amazon EC2 R8g instances are ideal for memory-intensive workloads such as databases, in-memory caches, and real-time big data analytics. These instances are built on the AWS Nitro System, which offloads CPU virtualization, storage, and networking functions to dedicated hardware and software to enhance the performance and security of your workloads.

AWS Graviton4-based Amazon EC2 instances deliver the best performance and energy efficiency for a broad range of workloads running on Amazon EC2. AWS Graviton4-based R8g instances offer larger instance sizes with up to 3x more vCPU (up to 48xlarge) and memory (up to 1.5TB) than Graviton3-based R7g instances. These instances are up to 30% faster for web applications, 40% faster for databases, and 45% faster for large Java applications compared to AWS Graviton3-based R7g instances. R8g instances are available in 12 different instance sizes, including two bare metal sizes. They offer up to 50 Gbps enhanced networking bandwidth and up to 40 Gbps of bandwidth to the Amazon Elastic Block Store (Amazon EBS).

To learn more, see Amazon EC2 R8g Instances. To explore how to migrate your workloads to Graviton-based instances, see AWS Graviton Fast Start program and Porting Advisor for Graviton. To get started, see the AWS Management Console.

Read more


Amazon SageMaker Notebook Instances now support JupyterLab 4 notebooks

We're excited to announce the availability of JupyterLab 4 on Amazon SageMaker Notebook Instances, providing you with a powerful and modern interactive development environment (IDE) for your data science and machine learning (ML) workflows.

With this update, you can now leverage the latest features and improvements in JupyterLab 4, including faster performance and notebook windowing, making working with large notebooks much more efficient. The Extension Manager now includes both prebuilt Python extensions and extensions from PyPI, making it easier to discover and install the tools you need. The Search and Replace functionality has been improved with new features, including highlighting matches in rendered Markdown cells, searching in the current selection, and regular expression support for replacements. By providing JupyterLab 4 on Amazon SageMaker Notebook Instances, we're empowering you with a cutting-edge development environment to boost your productivity and efficiency when building ML models and exploring data.

JupyterLab 4 notebooks are available in all commercial AWS regions where SageMaker Notebook Instances are available. Visit the developer guide for instructions on setting up and using SageMaker notebook instances.

Read more


Introducing Amazon EC2 M8g instances in Dallas Local Zone

AWS announces the general availability of Amazon Elastic Compute Cloud (Amazon EC2) M8g instances in Dallas Local Zone. These instances are powered by AWS Graviton4 processors and built for general-purpose workloads, such as application servers, microservices, gaming servers, midsize data stores, and caching fleets. AWS Graviton4-based Amazon EC2 instances deliver the best performance and energy efficiency for a broad range of workloads running on Amazon EC2. M8g instances are available in 12 different instance sizes, including two bare metal sizes. They offer up to 50 Gbps enhanced networking bandwidth and up to 40 Gbps of bandwidth to the Amazon Elastic Block Store (Amazon EBS).

AWS Local Zones are a type of AWS infrastructure deployment that places compute, storage, database, and other select services closer to large population, industry, and IT centers where no AWS Region exists. You can use Local Zones to run applications that require single-digit millisecond latency for use cases such as real-time gaming, hybrid migrations, media and entertainment content creation, live video streaming, engineering simulations, and AR/VR at the edge.

To get started, you can enable the AWS Dallas Local Zone us-east-1-dfw-2a in the Amazon EC2 console or with the ModifyAvailabilityZoneGroup API, and deploy M8g instances. To learn more, visit the AWS Local Zones overview page and see Amazon EC2 M8g instances.
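For illustration, the boto3 sketch below opts in to the zone group and launches an M8g instance there; the group name is assumed to be the zone ID without its letter suffix, and the AMI ID is a placeholder.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Opt in to the Dallas Local Zone group, then place an instance in the zone.
ec2.modify_availability_zone_group(
    GroupName="us-east-1-dfw-2", OptInStatus="opted-in"
)
ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # placeholder AMI
    InstanceType="m8g.large",
    MinCount=1,
    MaxCount=1,
    Placement={"AvailabilityZone": "us-east-1-dfw-2a"},
)
```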

Read more


containers

VPC Lattice now includes TCP support with VPC Resources

With the launch of VPC Resources for Amazon VPC Lattice, you can now access all of your application dependencies through a VPC Lattice service network. You're able to connect to your application dependencies hosted in different VPCs, accounts, and on-premises using additional protocols, including TLS, HTTP, HTTPS, and now TCP. This new feature expands upon the existing HTTP-based services support, enabling you to share a wider range of resources across your organization.

With VPC Resource support, you can add your TCP resources, such as Amazon RDS databases, custom DNS, or IP endpoints, to a VPC Lattice service network. Now, you can share and connect to all your application dependencies, such as HTTP APIs and TCP databases, across thousands of VPCs, simplifying network management and providing centralized visibility with built-in access controls.

VPC Resources are generally available with VPC Lattice in Africa (Cape Town), Asia Pacific (Mumbai), Asia Pacific (Singapore), Asia Pacific (Sydney), Europe (Frankfurt), Europe (Ireland), Europe (Paris), Europe (Stockholm), Middle East (Bahrain), South America (Sao Paulo), US East (N. Virginia), US East (Ohio), US West (Oregon).

To get started, read the VPC Resources launch blog, architecture blog, and VPC Lattice User Guide. To learn more about VPC Lattice, visit Amazon VPC Lattice Getting Started.
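As a heavily hedged sketch, the boto3 call below registers a TCP database endpoint as a resource configuration that can then be associated with a service network; the request shape is our reading of the launch, and every identifier shown is a placeholder.

```python
import boto3

lattice = boto3.client("vpc-lattice")

# Register an RDS endpoint as a single TCP resource behind a resource gateway.
lattice.create_resource_configuration(
    name="orders-db",
    type="SINGLE",
    protocol="TCP",
    portRanges=["5432"],
    resourceGatewayIdentifier="rgw-0123456789abcdef0",
    resourceConfigurationDefinition={
        "dnsResource": {
            "domainName": "orders-db.cluster-abc123.us-east-1.rds.amazonaws.com",
            "ipAddressType": "IPV4",
        }
    },
)
```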
 

Read more


Announcing Amazon EKS Hybrid Nodes

Today, AWS announces the general availability of Amazon Elastic Kubernetes Service (Amazon EKS) Hybrid Nodes. With Amazon EKS Hybrid Nodes, you can use your on-premises and edge infrastructure as nodes in Amazon EKS clusters. Amazon EKS Hybrid Nodes unifies Kubernetes management across environments and offloads Kubernetes control plane management to AWS for your on-premises and edge applications.

You can now manage Kubernetes applications running on-premises and in edge environments to meet low-latency, local data processing, regulatory, or policy requirements using the same Amazon EKS clusters, features, and tools as applications running in AWS Cloud. Amazon EKS Hybrid Nodes works with any on-premises hardware or virtual machines, bringing the efficiency, scalability, and availability of Amazon EKS to wherever your applications need to run. You can use a wide range of Amazon EKS features with Amazon EKS Hybrid Nodes including Amazon EKS add-ons, EKS Pod Identity, cluster access management, cluster insights, and extended Kubernetes version support. Amazon EKS Hybrid Nodes is natively integrated with various AWS services including AWS Systems Manager, AWS IAM Roles Anywhere, Amazon Managed Service for Prometheus, Amazon CloudWatch, and Amazon GuardDuty for centralized monitoring, logging, and identity management.

Amazon EKS Hybrid Nodes is available in all AWS Regions, except the AWS GovCloud (US) Regions and the China Regions. Amazon EKS Hybrid Nodes is currently available for new Amazon EKS clusters. With Amazon EKS Hybrid Nodes, there are no upfront commitments or minimum fees, and you are charged per hour for the vCPU resources of your hybrid nodes when they are attached to your Amazon EKS clusters.

To get started and learn more about Amazon EKS Hybrid Nodes, see the Amazon EKS Hybrid Nodes User Guide, product webpage, pricing webpage, and AWS News Launch blog.
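To make the setup concrete, a hedged boto3 sketch of creating a cluster with remote (on-premises) networks declared is shown below; the remoteNetworkConfig shape is assumed from this launch, and roles, subnets, and CIDRs are placeholders.

```python
import boto3

eks = boto3.client("eks")

# Declare the on-premises node and pod CIDRs so the control plane can reach
# hybrid nodes over your VPN or Direct Connect connection.
eks.create_cluster(
    name="hybrid-cluster",
    roleArn="arn:aws:iam::123456789012:role/eksClusterRole",
    resourcesVpcConfig={
        "subnetIds": ["subnet-0123456789abcdef0", "subnet-0fedcba9876543210"]
    },
    accessConfig={"authenticationMode": "API_AND_CONFIG_MAP"},
    remoteNetworkConfig={
        "remoteNodeNetworks": [{"cidrs": ["10.200.0.0/16"]}],
        "remotePodNetworks": [{"cidrs": ["10.201.0.0/16"]}],
    },
)
```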

Read more


Announcing Amazon EKS Auto Mode

Today at re:Invent, AWS announced Amazon Elastic Kubernetes Service (Amazon EKS) Auto Mode, a new feature that fully automates compute, storage, and networking management for Kubernetes clusters. Amazon EKS Auto Mode simplifies running Kubernetes by offloading cluster operations to AWS, improves the performance and security of your applications, and helps optimize compute costs. 

You can use EKS Auto Mode to get Kubernetes conformant managed compute, networking, and storage for any new or existing EKS cluster. This makes it easier for you to leverage the security, scalability, availability, and efficiency of AWS for your Kubernetes applications. EKS Auto Mode removes the need for deep expertise, ongoing infrastructure management, or capacity planning by automatically selecting the best EC2 instances to run your application. It helps optimize compute costs while maintaining application availability by dynamically scaling EC2 instances based on demand. EKS Auto Mode provisions, operates, secures, and upgrades EC2 instances within your account using AWS-controlled access and lifecycle management. It handles OS patches and updates and limits security risks with ephemeral compute, which strengthens your security posture by default.

EKS Auto Mode is available today in all AWS Regions, except AWS GovCloud (US) and China Regions. You can enable EKS Auto Mode in any EKS cluster running Kubernetes 1.29 and above with no upfront fees or commitments—you pay for the management of the compute resources provisioned, in addition to your regular EC2 costs. 

To get started with EKS Auto Mode, use the EKS API, AWS Console, or your favorite infrastructure as code tooling to enable it in a new or existing EKS cluster. To learn more about EKS Auto Mode and how it can streamline your Kubernetes operations, visit the EKS Auto Mode feature page and see the AWS News launch blog.
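As a sketch of the API path, the boto3 call below enables Auto Mode on an existing cluster; the computeConfig, storageConfig, and kubernetesNetworkConfig shapes are assumed from this launch, and the node role ARN is a placeholder.

```python
import boto3

eks = boto3.client("eks")

# Enable Auto Mode's managed compute, load balancing, and block storage.
eks.update_cluster_config(
    name="my-cluster",
    computeConfig={
        "enabled": True,
        "nodePools": ["general-purpose", "system"],
        "nodeRoleArn": "arn:aws:iam::123456789012:role/eksAutoNodeRole",
    },
    kubernetesNetworkConfig={"elasticLoadBalancing": {"enabled": True}},
    storageConfig={"blockStorage": {"enabled": True}},
)
```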

Read more


Amazon ECR announces 10x increase in repository limit to 100,000

Amazon Elastic Container Registry (ECR) now supports a 10x increase in the default limit for repositories per region per account to 100,000, up from the previous limit of 10,000. This change better aligns with your growth needs and saves you the time of requesting limit increases until you reach 100,000 repositories. You still have the flexibility to adjust the new limit and request additional increases if you require more than 100,000 repositories per registry.

The new limit increase is already applied to your current registries and is available in all AWS commercial and AWS GovCloud (US) Regions. To learn more about default ECR service limits, please visit our documentation. You can learn more about storing, managing, and deploying container images and artifacts with Amazon ECR, including how to get started, from our product page and user guide.

Read more


Bottlerocket announces new AMIs that are preconfigured to use FIPS 140-3 validated cryptographic modules

Today, AWS has announced new AMIs for Bottlerocket that are preconfigured to use FIPS 140-3 validated cryptographic modules, including the Amazon Linux 2023 Kernel Crypto API and AWS-LC. Bottlerocket is a Linux-based operating system purpose-built for running containers, with a focus on security, minimal footprint, and safe updates.

With these FIPS-enabled Bottlerocket AMIs, the host software uses only FIPS-approved cryptographic algorithms for TLS connections. This includes connectivity to AWS services such as EC2 and Amazon Elastic Container Registry (ECR). Additionally, in regions where FIPS endpoints are available, the AMIs automatically use FIPS-compliant endpoints for these services by default, streamlining secure configurations for containerized workloads.

The FIPS-enabled Bottlerocket AMIs are now available in all commercial and AWS GovCloud (US) Regions. To see the regions where FIPS-endpoints are supported, visit the AWS FIPS 140-3 page.

To get started with Bottlerocket, see the Bottlerocket User Guide. You can also visit the Bottlerocket product page and explore the Bottlerocket GitHub repository for more information.
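For illustration, the boto3 sketch below resolves the latest FIPS-enabled Bottlerocket AMI through the public SSM parameters; the "-fips" variant segment in the parameter path is an assumption about how the new variants are named.

```python
import boto3

ssm = boto3.client("ssm")

# Look up the latest FIPS Bottlerocket AMI for Kubernetes 1.31 on x86_64.
param = ssm.get_parameter(
    Name="/aws/service/bottlerocket/aws-k8s-1.31-fips/x86_64/latest/image_id"
)
print(param["Parameter"]["Value"])
```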

Read more


Amazon CloudWatch Application Signals launches support for Runtime Metrics

Today, AWS announces the general availability of runtime metrics support in Amazon CloudWatch Application Signals, an OpenTelemetry (OTel) compatible application performance monitoring (APM) feature in CloudWatch. You can view runtime metrics like garbage collection, memory usage, and CPU usage for your Java or Python applications to troubleshoot issues such as high CPU utilization or memory leaks, which can disrupt the end-user experience.

Application Signals simplifies troubleshooting application performance against key business or service level objectives (SLOs) for AWS applications. Without any source code changes, Application Signals collects traces, application metrics (error/latency/throughput), logs, and now runtime metrics, bringing them together in a single-pane-of-glass view.

Runtime metrics enable real-time monitoring of your application’s resource consumption, such as memory and CPU usage. With Application Signals, you can understand whether anomalies in runtime metrics have any impact on your end users by correlating them with application metrics such as error/latency/throughput. For example, you can identify whether a service latency spike is the result of an increase in garbage collection pauses by viewing these metric graphs side by side. Additionally, you can identify thread contention, track memory allocation patterns, and pinpoint memory or CPU spikes that may lead to application slowdowns or crashes, impacting the end-user experience.

Runtime metrics support is available in all Regions where Application Signals is available. Runtime metrics are charged based on Application Signals pricing; see Amazon CloudWatch pricing.

To learn more, see the documentation to enable Amazon CloudWatch Application Signals.

Read more


Amazon EKS managed node groups now support AWS Local Zones

Amazon Elastic Kubernetes Service (Amazon EKS) now supports using managed node groups for Kubernetes workloads running on AWS Local Zones. This enhancement allows you to leverage the node provisioning and lifecycle automation of EKS managed node groups for EC2 instances in Local Zones, bringing your Kubernetes applications closer to end-users for improved latency. With this update, you can simplify cluster operations and unify your Kubernetes practices across AWS Local Zones and Regions.

Amazon EKS managed node groups provide an easy-to-use abstraction on top of Amazon EC2 instances and Auto Scaling groups, enabling streamlined creation, upgrading, and termination of Kubernetes cluster nodes (EC2 instances). You can now create EKS managed node groups for AWS Local Zones in new or existing EKS clusters using the Amazon EKS APIs, AWS Management Console, or infrastructure-as-code tools such as AWS CloudFormation and Terraform. This feature comes at no additional cost – you only pay for the AWS resources you provision.

To learn more about using Amazon EKS managed node groups with AWS Local Zones, please consult the EKS documentation.
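A minimal boto3 sketch of the new capability is shown below; the subnet must be in an opted-in Local Zone attached to the cluster's VPC, and all names, IDs, and ARNs are placeholders.

```python
import boto3

eks = boto3.client("eks")

# Create a managed node group whose instances run in a Local Zone subnet.
eks.create_nodegroup(
    clusterName="my-cluster",
    nodegroupName="dfw-local-zone-nodes",
    subnets=["subnet-0123456789abcdef0"],  # Local Zone subnet
    nodeRole="arn:aws:iam::123456789012:role/eksNodeRole",
    instanceTypes=["r5.2xlarge"],
    scalingConfig={"minSize": 1, "maxSize": 3, "desiredSize": 2},
)
```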

Read more


Amazon ECS now allows you to configure software version consistency

Amazon Elastic Container Service (Amazon ECS) now allows you to configure software version consistency for specific containers within your Amazon ECS services.

By default, Amazon ECS resolves container image tags to the image digest (SHA256 hash of the image manifest) when you create a new Amazon ECS service or deploy an update to the service. This ensures that all tasks in the service are identical and launched with the same image digests. However, for certain containers within the task (e.g., telemetry sidecars provided by a third party), customers may prefer not to enforce consistency and instead use a mutable container image tag (e.g., LATEST). Now, you can disable software version consistency for one or more containers in your ECS service by configuring the new versionConsistency attribute in the container definition. ECS applies changes to version consistency when you redeploy your ECS service with the new task definition revision.

You can disable software version consistency for your Amazon ECS services running on AWS Fargate platform version 1.4.0 or higher, or using version 1.70.0 or higher of the Amazon ECS agent, in all commercial and the AWS GovCloud (US) Regions. To learn more, please visit our documentation.
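For illustration, the task definition sketch below disables version consistency only for a third-party sidecar; the family and image names are placeholders.

```python
import boto3

ecs = boto3.client("ecs")

# Pin the app image by digest as usual, but let the sidecar keep resolving
# its mutable "latest" tag by disabling version consistency for it.
ecs.register_task_definition(
    family="web-app",
    requiresCompatibilities=["FARGATE"],
    networkMode="awsvpc",
    cpu="512",
    memory="1024",
    containerDefinitions=[
        {
            "name": "app",
            "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/app:1.4.2",
            "essential": True,
        },
        {
            "name": "telemetry-sidecar",
            "image": "example/telemetry-agent:latest",
            "essential": False,
            "versionConsistency": "disabled",
        },
    ],
)
```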
 

Read more


Amazon EKS enhances Kubernetes control plane monitoring

Amazon EKS enhances visibility into the Kubernetes control plane by offering new intuitive dashboards in the EKS console and providing a broader set of Kubernetes control plane metrics. This enables cluster administrators to quickly detect, troubleshoot, and remediate issues. All EKS clusters on Kubernetes version 1.28 and above will now automatically display a curated set of dashboards visualizing key control plane metrics within the EKS console, making it easy to observe the health and performance of the control plane. Additionally, a broader set of control plane metrics is made available in Amazon CloudWatch and in a Prometheus endpoint, providing customers with the flexibility to use their preferred monitoring solution, be it Amazon CloudWatch, Amazon Managed Service for Prometheus, or third-party monitoring tools.

Newly introduced pre-configured dashboards in the EKS console provide cluster administrators with visual representations of key control plane metrics, enabling rapid assessment of control plane health and performance. Additionally, the EKS console dashboards now integrate with Amazon CloudWatch Log Insights queries, surfacing critical insights from control plane logs directly within the console. Finally, customers now get access to Kubernetes control plane metrics from kube-scheduler and kube-controller-manager, in addition to the existing API server metrics.

The new set of dashboards and metrics are available at no additional charge in all AWS commercial regions and AWS GovCloud (US) Regions. To learn more, visit the launch blog post or EKS user guide.

Read more


Amazon EKS simplifies providing IAM permissions to EKS add-ons

Amazon Elastic Kubernetes Service (EKS) now offers a direct integration between EKS add-ons and EKS Pod Identity, streamlining the lifecycle management process for critical cluster operational software that needs to interact with AWS services outside the cluster.

EKS add-ons that enable integration with underlying AWS resources need IAM permissions to interact with AWS services. EKS Pod Identities simplify how Kubernetes applications obtain AWS IAM permissions. With today’s launch, you can directly manage EKS Pod Identities using EKS add-ons operations through the EKS console, CLI, API, eksctl, and IaC tools like AWS CloudFormation, simplifying usage of Pod Identities for EKS add-ons. This integration expands the selection of Pod Identity compatible EKS add-ons from AWS and AWS Marketplace available for installation through the EKS console during cluster creation.

EKS add-ons integration with Pod Identities is generally available in all commercial AWS regions. To get started, see the EKS user guide.
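As a sketch, the boto3 call below installs an add-on and creates its Pod Identity association in the same operation; the podIdentityAssociations parameter is assumed from this launch, and the service account and role ARN are illustrative.

```python
import boto3

eks = boto3.client("eks")

# Install the EBS CSI driver and grant it IAM permissions via Pod Identity.
eks.create_addon(
    clusterName="my-cluster",
    addonName="aws-ebs-csi-driver",
    podIdentityAssociations=[
        {
            "serviceAccount": "ebs-csi-controller-sa",
            "roleArn": "arn:aws:iam::123456789012:role/ebs-csi-pod-identity-role",
        }
    ],
)
```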

Read more


AWS Batch now supports multiple EC2 Launch Templates per Compute Environment

AWS Batch now supports associating multiple Launch Templates (LTs) with an AWS Batch Compute Environment (CE). You no longer need to create separate AWS Batch CEs to apply different configurations based on the size and type of your Amazon Elastic Compute Cloud (EC2) instances. With support for multiple LTs per CE, you can dynamically choose a unique Amazon Machine Image (AMI), provision the right amount of storage, apply unique resource tags, and more by associating different EC2 launch templates with the different EC2 instance types used by a CE, enabling you to define flexible configurations for running your workloads using fewer CEs.

You can associate multiple LTs while creating a new CE, or update an existing CE to use multiple LTs for different instance types. AWS Batch allows you to define up to 10 LTs, overriding the default LT, per CE for different EC2 instance families or instance family and size combinations. For more information, see the Launch Templates page in the AWS Batch User Guide.

AWS Batch supports developers, scientists, and engineers in running efficient batch processing for ML model training, simulations, and analysis at any scale. Multiple launch templates per compute environment are supported in any AWS Region where AWS Batch is available.
 

Read more


Split cost allocation data for Amazon EKS now supports metrics from Amazon CloudWatch Container Insights

Starting today, you can use CPU and memory metrics collected by Amazon CloudWatch Container Insights for your Amazon Elastic Kubernetes Service (EKS) clusters in split cost allocation data for Amazon EKS, so you can get granular Kubernetes pod-level costs and make them available in AWS Cost and Usage Reports (CUR). This provides more granular cost visibility for your clusters running multiple application containers on shared EC2 instances, enabling better cost allocation for the shared costs of your EKS clusters.

To enable this feature, you need to enable Container Insights with Enhanced Observability for Amazon Elastic Kubernetes Service (EKS). You can use either the Amazon CloudWatch Observability EKS add-on or the Amazon CloudWatch Observability Helm chart to install the CloudWatch agent and the Fluent Bit agent on an Amazon EKS cluster. You also need to enable split cost allocation data for Amazon EKS in the AWS Billing and Cost Management console, and choose Amazon CloudWatch as the metrics source. Once the feature is enabled, the pod-level usage data will be available in CUR within 24 hours.

This feature is available in all AWS Regions where split cost allocation data for Amazon EKS is available. To get started, visit Understanding split cost allocation data. To learn more about Container Insights product and pricing, visit Container Insights and Amazon CloudWatch Pricing.

Read more


AWS introduces service versioning and deployment history for Amazon ECS services

Amazon Elastic Container Service (Amazon ECS) now allows you to view the service revision and deployment history for your long-running applications deployed as Amazon ECS services. This capability makes it easier for you to track and view changes to applications deployed using Amazon ECS, monitor on-going deployments, and debug deployment failures.

Typically, customers deploy long-running applications as Amazon ECS services and deploy software updates using a rolling update mechanism, where tasks running the old software version are gradually replaced by tasks running the new version. With today’s release, you can now view the deployment history for your Amazon ECS services on the AWS Management Console as well as by using the new listServiceDeployments API. You can look at the details of a specific deployment, including whether it succeeded, when it started and completed, and the service revision information before and after the deployment, using the Console and the describeServiceDeployments API. Furthermore, you can look at the immutable configuration for a specific service revision, including the task definition, container image digests, load balancer, and service connect configuration, using the Console and the describeServiceRevisions API.

You can view the service revision and deployment history for your services deployed on or after October 25, 2024 using the AWS Management Console, API, SDK, and CLI in all AWS Regions. To learn more, visit this blog post and documentation.
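To show the programmatic path, a minimal boto3 sketch of listing and inspecting deployments is below; the cluster and service names are placeholders, and the response field names are our reading of the new APIs.

```python
import boto3

ecs = boto3.client("ecs")

# List recent deployments for a service, then describe the most recent one.
resp = ecs.list_service_deployments(cluster="my-cluster", service="my-service")
arns = [d["serviceDeploymentArn"] for d in resp["serviceDeployments"]]

if arns:
    detail = ecs.describe_service_deployments(serviceDeploymentArns=arns[:1])
    for d in detail["serviceDeployments"]:
        print(d.get("status"), d.get("startedAt"), d.get("finishedAt"))
```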

Read more


cost-management

Amazon Q Developer now provides natural language cost analysis

Today, AWS announces the addition of cost analysis capabilities to Amazon Q Developer, allowing customers to retrieve and interpret their AWS cost data through natural language interactions. Amazon Q Developer is a generative AI-powered assistant that helps customers build, deploy, and operate applications on AWS. The cost analysis capability helps users of all skill levels to better understand and manage their AWS spending without previous knowledge of AWS Cost Explorer.

Customers can now ask Amazon Q Developer questions about their AWS costs such as "Which region had the largest cost increase last month?" or "What services cost me the most last quarter?". Q interprets these questions, analyzes the relevant cost data, and provides easy-to-understand responses. Each answer includes transparency on the Cost Explorer parameters used and a link to visualize the data in Cost Explorer.

This feature is now available in all AWS Regions where Amazon Q Developer is supported. Customers can access it via the Amazon Q icon in the AWS Management Console. To get started, see the AWS Cost Management user guide.
 

Read more


AWS delivers enhanced root cause insights to help explain cost anomalies

Today, AWS announces new enhanced root cause analysis capabilities for AWS Cost Anomaly Detection, empowering you to better pinpoint and remediate underlying factors driving unplanned cost increases. By creating anomaly monitors, you can analyze spend across services, member accounts, Cost Allocation Tags, and Cost Categories. Once a cost anomaly is detected, Cost Anomaly Detection now analyzes and ranks all possible combinations of services, accounts, regions, and usage types by cost impact, surfacing up to the top 10 root causes with their corresponding cost contributions.

With more information on the key drivers behind an anomaly, you can better identify the specific factors that contributed the most to a cost spike, such as which combination of linked account, region, and usage type led to increased spend in a particular service. With the top root causes ranked by their cost impact, you can more easily take fast, targeted actions to address these key issues before unplanned costs accrue further.

The enhanced root cause analysis is available in all AWS Regions, except the AWS GovCloud (US) Regions and the China Regions. To learn more about this new feature, AWS Cost Anomaly Detection, and how to reduce your risk of spend surprises, visit the AWS Cost Anomaly Detection product page, documentation, and launch blog.

 

Read more


AWS Billing and Cost Management Data Exports for FOCUS 1.0 is now generally available

Today, AWS announces the general availability (GA) of Data Exports for FOCUS 1.0, which has been in public preview since June 2024. FOCUS 1.0 is an open-source cloud cost and usage specification that provides standardization to simplify cloud financial management across multiple sources. Data Exports for FOCUS 1.0 enables customers to export their AWS cost and usage data with the FOCUS 1.0 schema to Amazon S3. The GA release of FOCUS 1.0 is a new table in Data Exports in which key specification conformance gaps have been solved compared to the preview table.

With Data Exports for FOCUS 1.0 (GA), customers receive their costs in four standardized columns: ListCost, ContractedCost, BilledCost, and EffectiveCost. It provides a consistent treatment of discounts and amortization of Savings Plans and Reserved Instances. The standardized schema of FOCUS ensures data can be reliably referenced across sources.

Data Exports for FOCUS 1.0 (GA) is available in the US East (N. Virginia) Region, but includes cost and usage data covering all AWS Regions, except AWS GovCloud (US) Regions and AWS China (Beijing and Ningxia) Regions.

Learn more about Data Exports for FOCUS 1.0 in the User Guide, product details page, and at the FOCUS project webpage. Get started by visiting the Data Exports page in the AWS Billing and Cost Management console and creating an export of the new GA table named “FOCUS 1.0 with AWS columns”. After creating a FOCUS 1.0 GA export, you will no longer need your preview export. You can view the specification conformance of the GA release here.
 

Read more


Enhanced Pricing Calculator now supports discounts and purchase commitments (in preview)

Today, AWS announces the public preview of the enhanced AWS Pricing Calculator, which provides accurate cost estimates for new workloads or modifications to your existing AWS usage by incorporating eligible discounts. It also helps you estimate the cost impact of your commitment purchases on your organization's consolidated bill. With today’s launch, AWS Pricing Calculator now allows you to apply eligible discounts to your cost estimates, enabling you to make informed financial planning decisions.

The enhanced Pricing Calculator, available within the AWS Billing and Cost Management Console, provides two types of cost estimates: cost estimation for a workload, and estimation of a full AWS bill. Using the enhanced Pricing Calculator, you can import your historical usage or create net new usage when creating a cost estimate. You can also get started by importing existing Pricing Calculator estimates, and sharing an estimate with other AWS console users. Using the enhanced Pricing Calculator, you can confidently assess the cost impact and understand your return on investment for migrating workloads, planning new workloads or growth of existing workloads. You can plan for commitment purchases on the AWS cloud. You can also create or access cost estimates using a new public cost estimations API.

The enhanced Pricing Calculator is available in all AWS commercial regions, excluding China. To get started with the new Pricing Calculator, visit the AWS Billing and Cost Management Console. To learn more, visit the AWS Pricing Calculator user guide and blog.
 

Read more


AWS Billing and Cost Management announces Savings Plans Purchase Analyzer

Today, AWS announces Savings Plans Purchase Analyzer, a new AWS Billing and Cost Management feature that enables you to quickly estimate the cost, coverage, and utilization impact of your planned Savings Plan purchases, so you can make informed purchase decisions in just a few clicks.

Savings Plans Purchase Analyzer enables you to interactively model a wide range of Savings Plan purchase scenarios with customizable parameters, including commitment amounts, custom lookback periods, and the option to exclude expiring Savings Plans. You can compare estimated savings percentage, coverage, and utilization across different purchase scenarios, and evaluate the hourly impact of recommended or custom commitments for renewals or new purchases of Savings Plans.

Savings Plans Purchase Analyzer is available in all AWS Regions, except the AWS GovCloud (US) Regions and the China Regions.

To get started with Savings Plans Purchase Analyzer, visit the product details page and user guide.

Read more


Amazon S3 Express One Zone now supports S3 Lifecycle expirations

Amazon S3 Express One Zone, a high-performance S3 storage class for latency-sensitive applications, now supports object expiration using S3 Lifecycle. S3 Lifecycle can expire objects based on age to help you automatically optimize storage costs.

Now, you can configure S3 Lifecycle rules for S3 Express One Zone to expire objects on your behalf. You can configure an S3 Lifecycle expiration rule either for your entire bucket or for a subset of objects by filtering by prefix or object size. For example, you can create an S3 Lifecycle rule that expires all objects smaller than 512 KB after 3 days and another rule that expires all objects in a prefix after 10 days. Additionally, S3 Lifecycle logs S3 Express One Zone object expirations in AWS CloudTrail, giving you the ability to monitor, set alerts for, and audit them.

Amazon S3 Express One Zone support for S3 Lifecycle expiration is generally available in all AWS Regions where the storage class is available. You can get started with S3 Lifecycle using the Amazon S3 REST API, AWS Command Line Interface (CLI), or AWS Software Development Kit (SDK) client. To learn more about S3 Lifecycle, visit the S3 User Guide.
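A sketch mirroring the example above is shown below; the bucket name is a placeholder in the directory-bucket naming format, and 524288 bytes is 512 KB.

```python
import boto3

s3 = boto3.client("s3")

# Expire objects smaller than 512 KB after 3 days, and everything under the
# "staging/" prefix after 10 days.
s3.put_bucket_lifecycle_configuration(
    Bucket="my-bucket--use1-az4--x-s3",
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "expire-small-objects",
                "Status": "Enabled",
                "Filter": {"ObjectSizeLessThan": 524288},
                "Expiration": {"Days": 3},
            },
            {
                "ID": "expire-staging-prefix",
                "Status": "Enabled",
                "Filter": {"Prefix": "staging/"},
                "Expiration": {"Days": 10},
            },
        ]
    },
)
```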

Read more


AWS End User Messaging announces cost allocation tags for SMS

Today, AWS End User Messaging announces cost allocation tags for SMS resources, allowing you to track spend for each tag associated with a resource. AWS End User Messaging provides developers with a scalable and cost-effective messaging infrastructure without compromising the safety, security, or results of their communications.

You can now assign tags to each resource and summarize the spend of that resource using cost allocation tags in the AWS Billing and Cost Management console.

To learn more, visit the AWS End User Messaging SMS User Guide.
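For illustration, the boto3 sketch below tags an SMS phone number resource; the ARN is a placeholder, and the tag still needs to be activated as a cost allocation tag in the Billing console before it appears in cost reports.

```python
import boto3

sms = boto3.client("pinpoint-sms-voice-v2")  # AWS End User Messaging SMS

# Tag a phone number so its spend can be grouped by cost allocation tag.
sms.tag_resource(
    ResourceArn=(
        "arn:aws:sms-voice:us-east-1:123456789012:"
        "phone-number/phone-0123456789abcdef0"
    ),
    Tags=[{"Key": "CostCenter", "Value": "mobile-notifications"}],
)
```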
 

Read more


Split cost allocation data for Amazon EKS now supports metrics from Amazon CloudWatch Container Insights

Starting today, you can use CPU and memory metrics collected by Amazon CloudWatch Container Insights for your Amazon Elastic Kubernetes Service (EKS) clusters in split cost allocation data for Amazon EKS, so you can get granular Kubernetes pod-level costs and make them available in AWS Cost and Usage Reports (CUR). This provides more granular cost visibility for your clusters running multiple application containers on shared EC2 instances, enabling better cost allocation for the shared costs of your EKS clusters.

To enable this feature, you need to enable Container Insights with Enhanced Observability for Amazon Elastic Kubernetes Service (EKS). You can use either the Amazon CloudWatch Observability EKS add-on or the Amazon CloudWatch Observability Helm chart to install the CloudWatch agent and the Fluent Bit agent on an Amazon EKS cluster. You also need to enable split cost allocation data for Amazon EKS in the AWS Billing and Cost Management console, and choose Amazon CloudWatch as the metrics source. Once the feature is enabled, the pod-level usage data will be available in CUR within 24 hours.

This feature is available in all AWS Regions where split cost allocation data for Amazon EKS is available. To get started, visit Understanding split cost allocation data. To learn more about Container Insights product and pricing, visit Container Insights and Amazon CloudWatch Pricing.

Read more


Customers can now make payments using SEPA accounts in five EU countries

Customers in the UK, Spain, the Netherlands, and Belgium can now create an AWS account using their bank account. Upon signup, customers with a billing address in these countries can securely connect a bank account that supports the Single Euro Payments Area (SEPA) standard.

SEPA direct debit is a popular payment method in Europe, widely used to pay utility bills. Until today, this feature was available only for customers in Germany; customers in other countries needed to provide credit or debit card details to complete the signup. With this launch, customers in four additional countries can sign up and pay using their SEPA bank accounts.

If you’re a customer signing up for AWS from any of these five countries, you can choose "Bank Account" on the AWS signup page, followed by "Link your bank account". Select your bank from the list of available banks and sign in using your online banking credentials. Signing in to your bank allows you to securely add your bank account to your AWS account and verifies that you are the owner of the bank account. By default, this bank account will be used when paying your future AWS invoices. Signup with a bank account was first available in Germany.

To learn more, see Verify and link your bank account to your AWS Europe payment methods.

Read more


AWS SDK now supports the ListBillingViews API for AWS Billing Conductor users

Today, AWS announces the general availability of the ListBillingViews API in the AWS SDKs, enabling AWS Billing Conductor (ABC) users to create pro forma Cost and Usage Reports (CUR) programmatically.

Today, the CUR PutReportDefinition API requires a BillingViewArn (the Amazon Resource Name for a billing view) to populate the CUR with pro forma data. Prior to this launch, customers had to manually construct the BillingViewArn by retrieving the payer account and primary account IDs and adding the metadata to the string arn:aws:billing::payer-account-id:billingview/billing-group-primary-account-id. ABC users can now eliminate these manual steps to retrieve the BillingViewArn and automate the end-to-end CUR file configuration journey, based on each pro forma billing view available. As a result, the ListBillingViews API enables ABC users to simplify ABC onboarding and accelerate the setup of their rebilling operations.

ListBillingViews API is available in all commercial AWS Regions, excluding the Amazon Web Services China (Beijing) Region, operated by Sinnet and Amazon Web Services China (Ningxia) Region, operated by NWCD.

To learn more about this feature integration, visit the AWS Billing Conductor product page, or review the API Reference.
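As a hedged sketch of the flow, the snippet below lists billing views and prints the ARNs that can be passed as BillingViewArn to the CUR PutReportDefinition API; the activeTimeRange shape is an assumption about the initial SDK release.

```python
import boto3
from datetime import datetime, timezone

billing = boto3.client("billing")

# List billing views active during 2024; each ARN can feed PutReportDefinition.
resp = billing.list_billing_views(
    activeTimeRange={
        "activeAfterInclusive": datetime(2024, 1, 1, tzinfo=timezone.utc),
        "activeBeforeInclusive": datetime.now(timezone.utc),
    }
)
for view in resp["billingViews"]:
    print(view["arn"], view.get("name"))
```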

Read more


cost-usage-reports

AWS SDK now supports the ListBillingViews API for AWS Billing Conductor users

Today, AWS announces the general availability of the ListBillingViews API in the AWS SDKs, enabling AWS Billing Conductor (ABC) users to create pro forma Cost and Usage Reports (CUR) programmatically.

Today, the CUR PutReportDefinition API requires a BillingViewArn (the Amazon Resource Name for a billing view) to populate the CUR with pro forma data. Prior to this launch, customers had to manually construct the BillingViewArn by retrieving the payer account and primary account IDs and adding the metadata to the string arn:aws:billing::payer-account-id:billingview/billing-group-primary-account-id. ABC users can now eliminate these manual steps to retrieve the BillingViewArn and automate the end-to-end CUR file configuration journey, based on each pro forma billing view available. As a result, the ListBillingViews API enables ABC users to simplify ABC onboarding and accelerate the setup of their rebilling operations.

ListBillingViews API is available in all commercial AWS Regions, excluding the Amazon Web Services China (Beijing) Region, operated by Sinnet and Amazon Web Services China (Ningxia) Region, operated by NWCD.

To learn more about this feature integration, visit the AWS Billing Conductor product page, or review the API Reference.

Read more


customer-enablement

AWS re:Post Private is now integrated with Amazon Bedrock to offer contextual knowledge to organizations

Today, AWS re:Post Private announces its integration with Amazon Bedrock, ushering in a new era of contextualized knowledge management for customer organizations. This feature transforms traditional organizational knowledge practices into a dynamic system of collaborative intelligence, where human expertise and AI capabilities complement each other to build collective wisdom.

At the heart of this integration is re:Post Agent for re:Post Private, an AI-powered assistant that delivers highly contextual technical answers to customer questions, drawing from a rich repository of curated knowledge resources. re:Post Agent for re:Post Private uniquely combines customer-specific private knowledge with AWS's vast public knowledge base, ensuring responses are not only timely but also tailored to each organization's specific context and needs.

By adopting re:Post Private with this new integration, organizations can now harness the full potential of collaborative intelligence. This powerful alliance between human insight and AI efficiency opens up new avenues for problem-solving, innovation, and knowledge sharing within enterprises. Unlock the transformative possibilities of collaborative intelligence and elevate your organization's knowledge management capabilities with re:Post Private.

Read more


Amazon OpenSearch Serverless now supports Binary Vector and FP16 cost savings features

We are excited to announce that Amazon OpenSearch Serverless now supports binary vectors and FP16 compression, helping reduce costs by lowering memory requirements. These options also lower latency and improve performance with an acceptable accuracy tradeoff. OpenSearch Serverless is a serverless deployment option for Amazon OpenSearch Service that makes it simple to run search and analytics workloads without the complexities of infrastructure management. OpenSearch Serverless’ compute capacity used for data ingestion, search, and query is measured in OpenSearch Compute Units (OCUs).

Support for these features in OpenSearch Serverless is now available in 17 Regions globally: US East (Ohio), US East (N. Virginia), US West (Oregon), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Europe (Frankfurt), Europe (Ireland), Europe (Paris), Europe (London), Asia Pacific (Mumbai), South America (Sao Paulo), Canada (Central), Asia Pacific (Seoul), Europe (Zurich), AWS GovCloud (US-West), and AWS GovCloud (US-East). Please refer to the AWS Regional Services List for more information about Amazon OpenSearch Service availability. To learn more about OpenSearch Serverless, see the documentation.
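To make the feature concrete, a hedged sketch of a binary vector index mapping (using the opensearch-py client) is shown below; the collection endpoint is a placeholder, and authentication against a Serverless collection (SigV4 signing) is omitted for brevity.

```python
from opensearchpy import OpenSearch  # pip install opensearch-py

client = OpenSearch(
    hosts=["https://my-collection-id.us-east-1.aoss.amazonaws.com"]
)

# Binary vectors pack 8 dimensions per byte and use Hamming distance, which
# is what reduces memory requirements relative to FP32 vectors.
client.indices.create(
    index="binary-vectors",
    body={
        "settings": {"index.knn": True},
        "mappings": {
            "properties": {
                "embedding": {
                    "type": "knn_vector",
                    "dimension": 1024,
                    "data_type": "binary",
                    "method": {
                        "name": "hnsw",
                        "engine": "faiss",
                        "space_type": "hamming",
                    },
                }
            }
        },
    },
)
```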

Read more


databases

Amazon Aurora now available as a quick create vector store in Amazon Bedrock Knowledge Bases

Amazon Aurora PostgreSQL is now available as a quick create vector store in Amazon Bedrock Knowledge Bases. With the new Aurora quick create option, developers and data scientists building generative AI applications can select Aurora PostgreSQL as their vector store with one click to deploy an Aurora Serverless cluster preconfigured with pgvector in minutes. Aurora Serverless is an on-demand, autoscaling configuration where capacity is adjusted automatically based on application demand, making it ideal as a developer vector store.

Knowledge Bases securely connects foundation models (FMs) running in Bedrock to your company data sources for Retrieval Augmented Generation (RAG) to deliver more relevant, context-specific, and accurate responses that make your FM more knowledgeable about your business. To implement RAG, organizations must convert data into embeddings (vectors) and store these embeddings in a vector store for similarity search in generative artificial intelligence (AI) applications. Aurora PostgreSQL, with the pgvector extension, has been supported as a vector store in Knowledge Bases for existing Aurora databases. With the new quick create integration with Knowledge Bases, Aurora is now easier to set up as a vector store for use with Bedrock.

The quick create option in Bedrock Knowledge Bases is available in these regions, with the exception of AWS GovCloud (US-West), which is planned for Q4 2024. To learn more about RAG with Amazon Bedrock and Aurora, see Amazon Bedrock Knowledge Bases.

Amazon Aurora combines the performance and availability of high-end commercial databases with the simplicity and cost-effectiveness of open-source databases. To get started using Amazon Aurora PostgreSQL as a vector store for Amazon Bedrock Knowledge Bases, take a look at our documentation.

Read more


Amazon RDS Performance Insights extends On-demand Analysis to new regions

Amazon RDS (Relational Database Service) Performance Insights expands the availability of its on-demand analysis experience to 15 new regions. This feature is available for Aurora MySQL, Aurora PostgreSQL, and RDS for PostgreSQL engines.

This on-demand analysis experience, which was previously available in only 15 regions, is now available in all commercial regions. This feature allows you to analyze Performance Insights data for a time period of your choice. You can learn how the selected time period differs from normal, what went wrong, and get advice on corrective actions. Through simple-to-understand graphs and explanations, you can identify the chief contributors to performance issues and get guidance on the next steps to address them. This can reduce the mean time to diagnosis of database performance issues from hours to minutes.

Amazon RDS Performance Insights is a database performance tuning and monitoring feature of RDS that allows you to visually assess the load on your database and determine when and where to take action. With one click in the Amazon RDS Management Console, you can add a fully-managed performance monitoring solution to your Amazon RDS database.

To learn more about RDS Performance Insights, read the Amazon RDS User Guide and visit Performance Insights pricing for pricing details and region availability.
 

Read more


Amazon Bedrock Knowledge Bases now supports GraphRAG (preview)

Today, we are announcing support for GraphRAG, a new capability in Amazon Bedrock Knowledge Bases that enhances generative AI applications by providing more comprehensive, relevant, and explainable responses using RAG techniques combined with graph data. Amazon Bedrock Knowledge Bases offers fully managed, end-to-end Retrieval-Augmented Generation (RAG) workflows to create highly accurate, low-latency, custom generative AI applications by incorporating contextual information from your company's data sources. Amazon Bedrock Knowledge Bases now offers a fully managed GraphRAG capability with Amazon Neptune Analytics.

Previously, customers faced challenges in conducting exhaustive, multi-step searches across disparate content. By identifying key entities across documents, GraphRAG delivers insights that leverage relationships within the data, enabling improved responses to end users. For example, users can ask a travel application for family-friendly beach destinations with direct flights and good seafood restaurants. Developers building Generative AI applications can enable GraphRAG in just a few clicks by specifying their data sources and choosing Amazon Neptune Analytics as their vector store when creating a knowledge base. This will automatically generate and store vector embeddings in Amazon Neptune Analytics, along with a graph representation of entities and their relationships.

GraphRAG with Amazon Neptune is built right into Amazon Bedrock Knowledge Bases, offering an integrated experience with no additional setup or additional charges beyond the underlying services. GraphRAG is available in AWS Regions where Amazon Bedrock Knowledge Bases and Amazon Neptune Analytics are both available (see current list of supported regions). To learn more, visit the Amazon Bedrock User Guide.

Read more


Amazon DynamoDB zero-ETL integration with Amazon SageMaker Lakehouse

Amazon DynamoDB zero-ETL integration with Amazon SageMaker Lakehouse automates the extraction and loading of data from a DynamoDB table into SageMaker Lakehouse, an open and secure lakehouse. You can run analytics and machine learning workloads on your DynamoDB data using SageMaker Lakehouse, without impacting production workloads running on DynamoDB. With this launch, you now have the option to enable analytics workloads using SageMaker Lakehouse, in addition to the previously available Amazon OpenSearch Service and Amazon Redshift zero-ETL integrations.

Using the no-code interface, you can maintain an up-to-date replica of your DynamoDB data in the data lake by quickly setting up your integration to handle the complete process of replicating data and updating records. This zero-ETL integration reduces the complexity and operational burden of data replication to let you focus on deriving insights from your data. You can create and manage integrations using the AWS Management Console, the AWS Command Line Interface (AWS CLI), or the SageMaker Lakehouse APIs.

DynamoDB zero-ETL integration with SageMaker Lakehouse is now available in the US East (N. Virginia), US East (Ohio), US West (Oregon), Asia Pacific (Tokyo), Asia Pacific (Hong Kong), Asia Pacific (Singapore), Asia Pacific (Sydney), Europe (Stockholm), Europe (Frankfurt), and Europe (Ireland) AWS Regions. 

To learn more, visit DynamoDB integrations and read the documentation.

Read more


Amazon DynamoDB global tables previews multi-Region strong consistency

Starting today in preview, Amazon DynamoDB global tables now supports multi-Region strong consistency. DynamoDB global tables is a fully managed, serverless, multi-Region, and multi-active database used by tens of thousands of customers. With this new capability, you can now build highly available multi-Region applications with a Recovery Point Objective (RPO) of zero, achieving the highest level of resilience. 

Multi-Region strong consistency ensures your applications can always read the latest version of data from any Region in a global table, removing the undifferentiated heavy lifting of managing consistency across multiple Regions. It is useful for building global applications with strict consistency requirements, such as user profile management, inventory tracking, and financial transaction processing. 

The preview of DynamoDB global tables with multi-Region strong consistency is available in the following Regions: US East (N. Virginia), US East (Ohio), and US West (Oregon). DynamoDB global tables with multi-Region strong consistency is billed according to existing global tables pricing. To learn more about global tables multi-Region strong consistency, see the preview documentation. For information about DynamoDB global tables, see the global tables information page and the developer guide.  
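As a sketch, strong consistency is requested when adding replicas to a table; the MultiRegionConsistency parameter and the table name below are assumptions based on the preview:

    import boto3

    ddb = boto3.client("dynamodb", region_name="us-east-1")

    # Create replicas with strong multi-Region consistency (preview).
    ddb.update_table(
        TableName="orders",  # placeholder table
        ReplicaUpdates=[
            {"Create": {"RegionName": "us-east-2"}},
            {"Create": {"RegionName": "us-west-2"}},
        ],
        MultiRegionConsistency="STRONG",  # assumed parameter per preview docs
    )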

Read more


Amazon Q Business now provides insights from your databases and data warehouses (preview)

Today, AWS announces the public preview of the integration between Amazon Q Business and Amazon QuickSight, delivering a transformative capability that unifies answers from structured data sources (databases, warehouses) and unstructured data (documents, wikis, emails) in a single application.

Amazon Q Business is a generative AI–powered assistant that can answer questions, provide summaries, generate content, and securely complete tasks based on data and information in your enterprise systems. Amazon QuickSight is a business intelligence (BI) tool that helps you visualize and understand your structured data through interactive dashboards, reports, and analytics. While organizations want to leverage generative AI for business insights, they experience fragmented access to unstructured and structured data.

With the QuickSight integration, customers can now link their structured sources to Amazon Q Business through QuickSight’s extensive set of data source connectors. Amazon Q Business responds in real time, combining the QuickSight answer from your structured sources with any other relevant information found in documents. For example, users could ask about revenue comparisons, and Amazon Q Business will return an answer from PDF financial reports along with real-time charts and metrics from QuickSight. This integration unifies insights across knowledge sources, helping organizations make more informed decisions while reducing the time and complexity traditionally required to gather insights.

This integration is available to all Amazon Q Business Pro, Amazon QuickSight Reader Pro, and Author Pro users in the US East (N. Virginia) and US West (Oregon) AWS Regions. To learn more, visit the Amazon Q Business documentation site.

Read more


Announcing Amazon Aurora DSQL (Preview)

Today, AWS announces the preview of Amazon Aurora DSQL, a new serverless, distributed SQL database with active-active high availability. Aurora DSQL allows you to build always available applications with virtually unlimited scalability, the highest availability, and zero infrastructure management. It is designed to make scaling and resiliency effortless for your applications, and offers the fastest distributed SQL reads and writes.

Aurora DSQL provides virtually unlimited horizontal scaling with the flexibility to independently scale reads, writes, compute, and storage. It automatically scales to meet any workload demand without database sharding or instance upgrades. Its active-active distributed architecture is designed for 99.99% single-Region and 99.999% multi-Region availability with no single point of failure, and automated failure recovery. This ensures that all reads and writes to any Regional endpoint are strongly consistent and durable. Aurora DSQL is PostgreSQL compatible, offering an easy-to-use developer experience.

Aurora DSQL is now available in preview in the following AWS Regions: US East (N. Virginia), US East (Ohio), and US West (Oregon). 

To learn more about Aurora DSQL features and benefits, check out the Aurora DSQL overview page and documentation. Aurora DSQL is available at no charge during preview. Get started in only a few steps by going to the Aurora DSQL console or using the Aurora DSQL API or AWS CLI.
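Because Aurora DSQL is PostgreSQL compatible, you can connect with standard PostgreSQL tooling using an IAM-based auth token. A minimal sketch with psycopg2, assuming the dsql client's generate_db_connect_admin_auth_token helper and a placeholder cluster endpoint:

    import boto3
    import psycopg2

    region = "us-east-1"
    endpoint = "abc123.dsql.us-east-1.on.aws"  # placeholder cluster endpoint

    # Generate a short-lived IAM auth token (assumed helper on the dsql client).
    dsql = boto3.client("dsql", region_name=region)
    token = dsql.generate_db_connect_admin_auth_token(endpoint, region)

    conn = psycopg2.connect(
        host=endpoint, port=5432, user="admin",
        password=token, dbname="postgres", sslmode="require",
    )
    with conn.cursor() as cur:
        cur.execute("SELECT 1")
        print(cur.fetchone())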

Read more


Announcing Amazon EC2 I8g instances

AWS is announcing the general availability of Amazon Elastic Compute Cloud (Amazon EC2) storage optimized I8g instances. I8g instances offer the best performance in Amazon EC2 for storage-intensive workloads. I8g instances are powered by AWS Graviton4 processors that deliver up to 60% better compute performance compared to previous generation I4g instances. I8g instances use the latest third-generation AWS Nitro SSDs, local NVMe storage that delivers up to 65% better real-time storage performance per TB while offering up to 50% lower storage I/O latency and up to 60% lower storage I/O latency variability. These instances are built on the AWS Nitro System, which offloads CPU virtualization, storage, and networking functions to dedicated hardware and software, enhancing the performance and security of your workloads.

I8g instances offer instance sizes up to 24xlarge, with 768 GiB of memory and 22.5 TB of instance storage. They are ideal for real-time applications like relational databases, non-relational databases, streaming databases, search queries, and data analytics.

I8g instances are available in the following AWS Regions: US East (N. Virginia) and US West (Oregon).

To learn more, see Amazon EC2 I8g instances. To explore how to migrate your workloads to Graviton-based instances, see AWS Graviton Fast Start program and Porting Advisor for Graviton. To get started, see the AWS Management Console, AWS Command Line Interface (AWS CLI), and AWS SDKs.
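A minimal boto3 sketch for launching an I8g instance; the AMI ID is a placeholder and must be an arm64 image, since I8g instances run on Graviton4:

    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")
    ec2.run_instances(
        ImageId="ami-0123456789abcdef0",  # placeholder arm64 AMI
        InstanceType="i8g.4xlarge",
        MinCount=1,
        MaxCount=1,
    )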

Read more


Oracle Database@AWS is now in limited preview

Oracle Database@AWS, a new offering from AWS and Oracle, is now in limited preview. It enables customers to access Oracle Database Services on Oracle Cloud Infrastructure (OCI) managed Exadata infrastructure within AWS data centers. Customers can easily and quickly migrate their Oracle Database workloads, including Oracle Real Application Clusters (RAC) workloads, to Oracle Exadata Database Service within AWS with minimal to no changes. 

Customers can modernize mission-critical applications and develop new intelligent applications with a low-latency network connection between Oracle databases and AWS services. Oracle Database@AWS enables customers to maintain the same full-feature and architecture compatibility, performance, and availability as their on-premises environments. The new offering provides a unified experience between Oracle and AWS with purchasing, management, operations, and collaborative support. Customers’ usage of Oracle Database@AWS qualifies for their existing AWS commitments and Oracle license benefits, including Bring Your Own License (BYOL) and discount programs such as Oracle Support Rewards.

Oracle Database@AWS is available for limited preview in US East (N. Virginia) and will be available in additional AWS Regions in 2025. 

To get started, customers can request a private offer for Oracle Database@AWS from Oracle via the AWS Marketplace. Once customers subscribe to a private offer, they can use the AWS Management Console to provision and manage their Oracle Database@AWS resources. To learn more, visit the Oracle Database@AWS web page and User Guide.

Read more


Announcing the general availability of Amazon MemoryDB Multi-Region

Today, AWS announces the general availability of Amazon MemoryDB Multi-Region, a fully managed, active-active, multi-Region database that lets you build multi-Region applications with up to 99.999% availability and microsecond read and single-digit millisecond write latencies. MemoryDB is a fully managed, Valkey- and Redis OSS-compatible database service providing multi-AZ durability, microsecond read and single-digit millisecond write latency, and high throughput. Valkey is an open-source, high-performance key-value data store stewarded by the Linux Foundation and a drop-in replacement for Redis OSS.

With MemoryDB Multi-Region, you can build highly available multi-Region applications for increased resiliency. It offers active-active replication so you can serve reads and writes locally from the Regions closest to your customers with microsecond read and single-digit millisecond write latency. MemoryDB Multi-Region asynchronously replicates data between Regions and typically propagates data within a second. It automatically resolves update conflicts and corrects data divergence issues, so you can focus on building your application.       

Get started with MemoryDB Multi-Region from the AWS Management Console or using the latest AWS SDK or AWS Command Line Interface (AWS CLI). First, you need to identify the set of AWS Regions where you want to replicate your data. Then choose an AWS Region to create a new multi-Region cluster and a regional cluster. Once the first regional cluster is created, you can add up to four additional Regions to the multi-Region cluster.  
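A rough boto3 sketch of those steps; the create_multi_region_cluster operation, its parameter names, and the response shape are assumptions based on this launch, and the ACL and node type are placeholders:

    import boto3

    mdb = boto3.client("memorydb", region_name="us-east-1")

    # Step 1: create the multi-Region cluster (assumed operation/parameters).
    multi = mdb.create_multi_region_cluster(
        MultiRegionClusterNameSuffix="global-cache",
        NodeType="db.r7g.xlarge",
        Engine="valkey",
    )
    multi_name = multi["MultiRegionCluster"]["MultiRegionClusterName"]  # assumed shape

    # Step 2: create the first regional cluster attached to it.
    mdb.create_cluster(
        ClusterName="cache-use1",
        MultiRegionClusterName=multi_name,
        NodeType="db.r7g.xlarge",
        ACLName="open-access",  # placeholder ACL
    )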

MemoryDB Multi-Region is available for Valkey in the following AWS Regions: US East (N. Virginia), US East (Ohio), US West (N. California), US West (Oregon), Asia Pacific (Mumbai), Asia Pacific (Seoul), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Europe (Frankfurt), Europe (Ireland), and Europe (London). To learn more, please visit the MemoryDB features page, getting started blog, and documentation. For pricing, please refer to the MemoryDB pricing page.

Read more


AWS DMS Schema Conversion now uses generative AI

AWS Database Migration Service (AWS DMS) Schema Conversion with generative AI is now available. The feature is currently available for database schema conversion from commercial engines, such as Microsoft SQL Server, to Amazon Aurora PostgreSQL-Compatible Edition and Amazon Relational Database Service (Amazon RDS) for PostgreSQL.

Using generative AI recommendations, you can simplify and accelerate your database migration projects, particularly when converting complex code objects which typically require manual conversion, such as stored procedures, functions, or triggers. AWS DMS Schema Conversion with generative AI converts up to 90% of your schema.

AWS DMS Schema Conversion with generative AI is currently available in three AWS Regions: US East (N. Virginia), US West (Oregon), and Europe (Frankfurt).

You can use this feature in the AWS Management Console or AWS Command Line Interface (AWS CLI) by selecting a commercial database such as Microsoft SQL Server as your source database and Amazon Aurora PostgreSQL or Amazon RDS for PostgreSQL as your target when initiating a schema conversion project. When converting applicable objects, you will see an option to enable generative AI for conversion. To get started, visit the AWS DMS Schema Conversion User Guide and check out this blog post.

Read more


Valkey GLIDE 1.2 adds new features from Valkey 8.0, including AZ awareness

AWS adds support for Availability Zone (AZ) awareness in the open-source Valkey General Language Independent Driver for Enterprise (GLIDE) client library. Valkey GLIDE is a reliable, high-performance, and highly available client, and it’s pre-configured with best practices from over a decade of operating Amazon ElastiCache. Valkey GLIDE is compatible with versions 7.2 and 8.0 of Valkey, as well as versions 6.2, 7.0, and 7.2 of Redis OSS. With this update, Valkey GLIDE will direct requests to Valkey nodes within the same Availability Zone, minimizing cross-zone traffic and reducing response time. Java, Python, and Node.js are the currently supported languages for Valkey GLIDE, with further languages in development.

With this update, Valkey GLIDE 1.2 also supports Amazon ElastiCache and Amazon MemoryDB’s JavaScript Object Notation (JSON) data type, allowing customers to store and access JSON data within their clusters. In addition, it supports MemoryDB’s Vector Similarity Search, empowering customers to store, index, and search vectors for AI applications at single-digit millisecond speed.
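A sketch of enabling AZ affinity with the Python GLIDE client; the endpoint and AZ are placeholders, and the configuration field names are assumptions based on the 1.2 release:

    import asyncio
    from glide import (
        GlideClusterClient,
        GlideClusterClientConfiguration,
        NodeAddress,
        ReadFrom,
    )

    async def main():
        config = GlideClusterClientConfiguration(
            addresses=[NodeAddress("my-cluster.example.cache.amazonaws.com", 6379)],
            read_from=ReadFrom.AZ_AFFINITY,  # prefer replicas in the client's AZ
            client_az="us-east-1a",          # assumed field naming the client's AZ
        )
        client = await GlideClusterClient.create(config)
        print(await client.get("mykey"))

    asyncio.run(main())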

Valkey GLIDE is open-source, uses the Apache 2.0 license, and works with any Valkey or Redis OSS datastore, including Amazon ElastiCache and Amazon MemoryDB. Learn more about it in this blog post and submit contributions to the Valkey GLIDE GitHub repository.

Read more


Amazon Neptune Analytics now supports AWS PrivateLink

Today, we’re introducing a new feature for Neptune Analytics that allows customers to easily provision Amazon VPC interface endpoints (interface endpoints) in their Virtual Private Cloud (Amazon VPC). These endpoints provide direct access from on-premises applications over VPN or AWS Direct Connect, and across AWS Regions via VPC peering. With this feature, network engineers can create and manage VPC resources centrally. By leveraging AWS PrivateLink and interface endpoints, development teams can now establish private, secure network connectivity from their applications to Neptune Analytics with simplified configuration.

Previously, development teams had to manually configure complex network settings, leading to operational overhead and potential misconfigurations that could affect security and connectivity. With AWS PrivateLink support for Neptune Analytics, customers can now streamline private connectivity between VPCs, Neptune Analytics, and on-premises data centers using interface endpoints and private IP addresses. Central teams can create and manage PrivateLink endpoints, while development teams use those endpoints for their graphs without needing to manage them directly. This launch allows developers to concentrate on their graph workloads, reducing time-to-value and simplifying overall management.

Please see AWS PrivateLink pricing for cost details. You can get started with the feature by using the AWS API, AWS CLI, or AWS SDKs.
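Interface endpoints are created with the standard EC2 API; a minimal sketch, where the Neptune Analytics service name and all resource IDs are placeholders to verify against the documentation:

    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")
    ec2.create_vpc_endpoint(
        VpcEndpointType="Interface",
        VpcId="vpc-0123456789abcdef0",
        ServiceName="com.amazonaws.us-east-1.neptune-graph",  # assumed service name
        SubnetIds=["subnet-0123456789abcdef0"],
        SecurityGroupIds=["sg-0123456789abcdef0"],
        PrivateDnsEnabled=True,
    )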
 

Read more


Amazon Aurora now supports Graviton4-based R8g database instances

AWS Graviton4-based R8g database instances are now generally available for Amazon Aurora with PostgreSQL compatibility and Amazon Aurora with MySQL compatibility in the US East (N. Virginia, Ohio), US West (Oregon), and Europe (Frankfurt) Regions. R8g instances offer larger instance sizes, up to 48xlarge, and feature an 8:1 ratio of memory to vCPU and the latest DDR5 memory. Graviton4-based instances provide up to a 40% performance improvement and up to 29% price/performance improvement for on-demand pricing over Graviton3-based instances of equivalent sizes on Amazon Aurora databases, depending on database engine, version, and workload.

You can spin up R8g database instances in the Amazon RDS Management Console or using the AWS CLI. Upgrading a database instance to the R8g instance family requires a simple instance type modification. For more details, refer to the Aurora documentation.
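For example, a minimal boto3 sketch of that modification, using a placeholder instance identifier:

    import boto3

    rds = boto3.client("rds", region_name="us-east-1")
    rds.modify_db_instance(
        DBInstanceIdentifier="my-aurora-instance",  # placeholder
        DBInstanceClass="db.r8g.2xlarge",
        ApplyImmediately=True,  # otherwise applied in the next maintenance window
    )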

Amazon Aurora is designed for unparalleled high performance and availability at global scale with PostgreSQL compatibility. It provides built-in security, continuous backups, serverless compute, up to 15 read replicas, automated multi-Region replication, and integrations with other AWS services. To get started with Amazon Aurora, take a look at our getting started page.

Read more


Amazon RDS for PostgreSQL, MySQL, and MariaDB now supports M8g and R8g database instances

AWS Graviton4-based M8g and R8g database (DB) instances are now generally available for Amazon Relational Database Service (RDS) for PostgreSQL, MySQL, and MariaDB. Graviton4-based instances provide up to a 40% performance improvement and up to 29% price/performance improvement for on-demand pricing over Graviton3-based instances of equivalent sizes on Amazon RDS open source databases, depending on database engine, version, and workload.

AWS Graviton4 processors are the latest generation of custom-designed AWS Graviton processors built on the AWS Nitro System. Both M8g and R8g DB instances are available with new 24xlarge and 48xlarge sizes. With these new sizes, M8g and R8g DB instances offer up to 192 vCPUs, up to 50Gbps enhanced networking bandwidth, and up to 40Gbps of bandwidth to Amazon Elastic Block Store (Amazon EBS).

These instances are now available in the US East (N. Virginia, Ohio), US West (Oregon), and Europe (Frankfurt) Regions. For complete information on pricing and regional availability, please refer to the Amazon RDS pricing page. For information on specific engine versions that support these DB instance types, please see the Amazon RDS documentation.
 

Read more


Amazon RDS for SQL Server Supports Minor Versions in November 2024

New minor versions of Microsoft SQL Server are now available on Amazon RDS for SQL Server, providing performance enhancements and security fixes. Amazon RDS for SQL Server now supports these latest minor versions of SQL Server 2016, 2017, 2019 and 2022 across the Express, Web, Standard, and Enterprise editions.

We encourage you to upgrade your Amazon RDS for SQL Server database instances at your convenience. You can upgrade with just a few clicks in the Amazon RDS Management Console or by using the AWS CLI. Learn more about upgrading your database instances from the Amazon RDS User Guide. The new minor versions include:
 

  • SQL Server 2016 GDR for SP3 - 13.0.6455.2
  • SQL Server 2017 CU31 GDR - 14.0.3485.1
  • SQL Server 2019 CU29 GDR - 15.0.4410.1
  • SQL Server 2022 CU16 - 16.0.4165.4


These minor versions are available in all AWS commercial regions where Amazon RDS for SQL Server databases are available, including the AWS GovCloud (US) Regions.

Amazon RDS for SQL Server makes it simple to set up, operate, and scale SQL Server deployments in the cloud. See Amazon RDS for SQL Server Pricing for pricing details and regional availability.

Read more


AWS DMS now supports Data Masking

AWS Database Migration Service (AWS DMS) now supports Data Masking, enabling customers to transform sensitive data at the column level during migration and helping them comply with data protection regulations like GDPR. Using AWS DMS, you can now create copies of your data that redact the column-level information you need to protect.

AWS DMS Data Masking automatically masks the portions of data you specify. Data Masking offers three transformation techniques: digit randomization, digit masking, and hashing. It's available for all endpoints supported by DMS Classic and DMS Serverless in version 3.5.4.
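Masking is expressed as a column-level transformation rule in the task's table mappings. A hedged sketch, where the schema, table, column, and the exact rule-action string are assumptions to confirm against the DMS documentation:

    import json
    import boto3

    table_mappings = {
        "rules": [
            {
                "rule-type": "selection", "rule-id": "1", "rule-name": "include-employees",
                "object-locator": {"schema-name": "hr", "table-name": "employees"},
                "rule-action": "include",
            },
            {
                "rule-type": "transformation", "rule-id": "2", "rule-name": "mask-ssn",
                "rule-target": "column",
                "object-locator": {"schema-name": "hr", "table-name": "employees",
                                   "column-name": "ssn"},
                "rule-action": "data-masking-digits-mask",  # assumed action name
            },
        ]
    }

    dms = boto3.client("dms")
    dms.modify_replication_task(
        ReplicationTaskArn="arn:aws:dms:us-east-1:123456789012:task:EXAMPLE",  # placeholder
        TableMappings=json.dumps(table_mappings),
    )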

To learn more about Data Masking with AWS DMS, please refer to the AWS DMS technical documentation.

Read more


Announcing Provisioned Timestream Compute Units (TCUs) for Amazon Timestream for LiveAnalytics

Today, Amazon Timestream for LiveAnalytics announces the launch of Provisioned Timestream Compute Units (TCUs), a new feature that allows you to provision dedicated compute resources for your queries, providing predictable and cost-effective query performance.

Amazon Timestream for LiveAnalytics is a serverless time-series database that automatically scales to ingest and analyze gigabytes of time-series data. Provisioned TCUs provide an additional layer of control and flexibility for your query workloads: you can provision dedicated compute resources for your queries, guaranteeing consistent performance and predictable costs. As your workload evolves, you can easily adjust compute resources to maintain optimal performance and cost control, and accurately allocate resources to match your query needs. To get started with Provisioned TCUs, use the Amazon Timestream for LiveAnalytics console, AWS SDK, or AWS CLI to provision the desired number of TCUs for your account. You can provision TCUs in multiples of 4, with a minimum of 4 TCUs and a maximum of 1,000 TCUs.
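A rough boto3 sketch of opting into compute-unit pricing and setting a TCU value; the parameter names follow the timestream-query UpdateAccountSettings operation and should be treated as assumptions for the provisioned feature:

    import boto3

    tsq = boto3.client("timestream-query", region_name="ap-south-1")

    # Assumed parameters: opt into TCU-based pricing and provision 8 TCUs.
    tsq.update_account_settings(
        QueryPricingModel="COMPUTE_UNITS",
        MaxQueryTCU=8,
    )
    print(tsq.describe_account_settings())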

Provisioned Timestream Compute Units are currently supported in Asia Pacific (Mumbai) only. To learn more about pricing, visit the Amazon Timestream for LiveAnalytics pricing page. For more information about Provisioned TCUs, see the Amazon Timestream for LiveAnalytics Developer Guide.

Read more


Amazon Redshift Query Editor V2 Increases Maximum Result Set and Export size to 100MB

AWS announces that Amazon Redshift Query Editor V2 now supports a maximum result set and export size of 100MB, with no row limit. Previously, the limit on query result sets was 5MB or 100,000 rows. This enhancement provides greater flexibility for you and your team to work with large datasets, enabling you to generate, analyze, and export more comprehensive data without the previous constraints.

If you work with large datasets, such as security logs, gaming data, and other big data workloads, that require in-depth analysis, the previous 5MB or 100,000-row limit on result sets and exports often fell short of your needs, forcing you to piece together insights from multiple queries and downloads. With the new 100MB result set size and export capabilities in Amazon Redshift Query Editor, you can now generate a single, more complete view of your data, export it directly as a CSV or JSON file, and conduct richer analysis to drive better-informed business decisions.

The increased 100MB result set and export size capabilities for Amazon Redshift Query Editor V2 are available in all AWS commercial Regions. For more information about the AWS Regions where Redshift is available, please refer to the AWS Regions table.

To learn more, see the Amazon Redshift documentation.
 

Read more


Neptune Analytics Adds Support for Seamless Graph Data Import and Export

Today, we’re launching a new feature that enables customers to easily import Parquet data and export Parquet/CSV data to and from their Neptune Analytics graphs. This new capability simplifies the process of loading Parquet data into Neptune Analytics for graph queries and analysis, while also allowing customers to export graph data as Parquet or CSV files. Exported data can then be moved seamlessly to Neptune DB, data lakes, or ML platforms for further exploration and analysis.

Previously, customers faced challenges with limited integration options, vendor lock-in concerns, cross-platform flexibility, and sharing graph data for collaborative analysis. This new export functionality addresses these pain points by providing a seamless, end-to-end experience. The data extraction occurs from a snapshot, ensuring that database performance remains unaffected. With the ability to import and export graph data via APIs, customers can leverage Neptune Analytics to run graph algorithms, update their graphs, and export the data for use in other databases like Neptune or data processing frameworks like Apache Spark or query services like Amazon Athena. This enhanced flexibility empowers customers to gain deeper insights from their graph data and use it across various tools and environments.
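A hedged sketch of starting an export with the neptune-graph client; the operation is recent, and the parameter names, graph ID, role, and bucket below are assumptions to verify against the documentation:

    import boto3

    na = boto3.client("neptune-graph", region_name="us-east-1")
    task = na.start_export_task(
        graphIdentifier="g-0123456789",  # placeholder graph ID
        roleArn="arn:aws:iam::123456789012:role/NeptuneExportRole",  # placeholder
        format="PARQUET",
        destination="s3://my-bucket/neptune-exports/",  # placeholder bucket
        kmsKeyIdentifier="arn:aws:kms:us-east-1:123456789012:key/EXAMPLE",
    )
    print(task["taskId"])  # assumed response field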

To learn more about Neptune Analytics and native export capability, visit the features page, and user guide.
 

Read more


Amazon RDS Blue/Green Deployments support minor version upgrade for RDS for PostgreSQL

Amazon Relational Database Service (Amazon RDS) Blue/Green Deployments now supports safer, simpler, and faster minor version upgrades for your Amazon RDS for PostgreSQL databases using physical replication. The use of PostgreSQL physical replication for database change management, such as minor version upgrades, simplifies your RDS Blue/Green Deployments upgrade experience by overcoming the limitations of PostgreSQL community logical replication.

You can now use Amazon RDS Blue/Green Deployments to deploy multiple database changes to production, such as minor version upgrades, storage volume shrinks, maintenance updates, and instance scaling, in a single switchover event using physical replication. RDS Blue/Green Deployments for PostgreSQL relies on logical replication for major version upgrades.

Blue/Green Deployments for PostgreSQL creates a fully managed staging environment using physical replication for minor version upgrades, allowing you to deploy and test production changes while keeping your current production database safer. With a few clicks, you can switch over the staging environment to be the new production system in as fast as a minute, with no data loss and no changes to your application for database endpoint management.
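As a sketch, a Blue/Green deployment targeting a newer minor version can be created and later switched over with boto3; the source ARN and versions are placeholders:

    import boto3

    rds = boto3.client("rds", region_name="us-east-1")

    # Create the green environment on the newer minor version.
    bg = rds.create_blue_green_deployment(
        BlueGreenDeploymentName="pg-minor-upgrade",
        Source="arn:aws:rds:us-east-1:123456789012:db:my-postgres-db",  # placeholder
        TargetEngineVersion="16.6",
    )

    # After validating the green environment, promote it to production.
    rds.switchover_blue_green_deployment(
        BlueGreenDeploymentIdentifier=bg["BlueGreenDeployment"]["BlueGreenDeploymentIdentifier"],
        SwitchoverTimeout=300,
    )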

Amazon RDS Blue/Green Deployments is now available for Amazon RDS for PostgreSQL using physical replication for all minor versions for the major versions 11 and higher in all applicable AWS Regions. In a few clicks, update your databases using Amazon RDS Blue/Green Deployments via the Amazon RDS Console. Learn more about Blue/Green Deployments on the Amazon RDS features page.
 

Read more


AWS DMS now delivers improved performance for data validation

AWS Database Migration Service (AWS DMS) has enhanced data validation performance for database migrations, enabling customers to validate large datasets with significantly faster processing times.

This enhanced data validation is now available in version 3.5.4 of the replication engine for both full load and full load with CDC migration tasks. Currently, this enhancement supports migration paths from Oracle to PostgreSQL, SQL Server to PostgreSQL, Oracle to Oracle, and SQL Server to SQL Server, with additional migration paths planned for future releases.

To learn more about data validation performance improvements with AWS DMS, please refer to the AWS DMS Technical Documentation.

Read more


Amazon RDS for PostgreSQL supports pgvector 0.8.0

Amazon Relational Database Service (RDS) for PostgreSQL now supports pgvector 0.8.0, an open-source extension for PostgreSQL for storing and efficiently querying vector embeddings in your database, letting you use retrieval-augmented generation (RAG) when building your generative AI applications. The pgvector 0.8.0 release improves the PostgreSQL query planner's selection of indexes when filters are present, which can deliver better query performance and improve search result quality.

The pgvector 0.8.0 release includes a variety of improvements to how pgvector filters data using conditions in WHERE clauses and joins, which can improve query performance and usability. Additionally, iterative index scans help prevent 'overfiltering', ensuring generation of sufficient results to satisfy the conditions of a query: if an initial index scan doesn't satisfy the query conditions, pgvector continues to search the index until it hits a configurable threshold. This release also includes performance improvements for searching and building HNSW indexes.
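For example, iterative scans are controlled per session; a minimal psycopg2 sketch against a hypothetical items table, using pgvector 0.8.0's hnsw.iterative_scan setting:

    import psycopg2

    conn = psycopg2.connect("dbname=mydb user=myuser password=secret host=db.example.com")
    with conn.cursor() as cur:
        # relaxed_order keeps scanning past the initial window until enough
        # rows satisfy the filter, preventing overfiltered result sets.
        cur.execute("SET hnsw.iterative_scan = relaxed_order")
        cur.execute(
            """
            SELECT id FROM items
            WHERE category = %s
            ORDER BY embedding <-> %s::vector
            LIMIT 10
            """,
            ("books", "[0.1, 0.2, 0.3]"),
        )
        print(cur.fetchall())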

pgvector 0.8.0 is available on database instances in Amazon RDS running PostgreSQL 17.1 and higher, 16.5 and higher, 15.9 and higher, 14.14 and higher, and 13.17 and higher in all applicable AWS Regions.

Amazon RDS for PostgreSQL makes it simple to set up, operate, and scale PostgreSQL deployments in the cloud. See Amazon RDS for PostgreSQL Pricing for pricing details and regional availability. Create or update a fully managed Amazon RDS database in the Amazon RDS Management Console.

Read more


Amazon RDS Blue/Green Deployments Green storage fully performant prior to switchover

Amazon Relational Database Service (Amazon RDS) Blue/Green Deployments now support managed initialization of Green storage volumes, which accelerates the loading of storage blocks from Amazon S3 and ensures that the volumes are fully performant prior to switchover of the Green databases. Blue/Green Deployments create a fully managed staging environment, or Green database, by restoring the Blue database snapshot. The Green database allows you to deploy and test production changes, keeping your current production database, or Blue database, safer.

Previously, you had to manually initialize the storage volumes of the Green databases. With this launch, RDS Blue/Green Deployments will proactively manage and accelerate the storage initialization for your green database instances. You will be able to view the progress of storage initialization using the RDS Console and command line interface (CLI). Managed storage initialization of the Green databases is supported for Blue/Green deployments created for RDS for PostgreSQL, RDS for MySQL, and RDS for MariaDB engines.

Amazon RDS Blue/Green Deployments are available for Amazon RDS for PostgreSQL major versions 12 and higher, RDS for MySQL major versions 5.7 and higher, and Amazon RDS for MariaDB major versions 10.4 and higher.

In a few clicks, update your databases using Amazon RDS Blue/Green Deployments via the Amazon RDS Console. Learn more about RDS Blue/Green Deployments and the supported engine versions here.
 

Read more


Amazon ElastiCache version 8.0 for Valkey brings faster scaling and improved memory efficiency

Today, Amazon ElastiCache introduces support for Valkey 8.0, the latest Valkey major version. This release brings faster scaling for ElastiCache Serverless for Valkey and improved memory efficiency on node-based ElastiCache, compared to previous versions of ElastiCache for Valkey and Redis OSS. Valkey is an open-source, high-performance key-value datastore stewarded by the Linux Foundation and is a drop-in replacement for Redis OSS. Backed by over 40 companies, Valkey has seen rapid adoption since its inception in March 2024.

Hundreds of thousands of customers use ElastiCache to scale their applications, improve performance, and optimize costs. ElastiCache Serverless version 8.0 for Valkey scales to 5 million requests per second (RPS) per cache in minutes, up to 5x faster than Valkey 7.2, with microsecond read latency. With node-based ElastiCache, you can benefit from improved memory efficiency, with 32 bytes less memory per key compared to ElastiCache version 7.2 for Valkey and ElastiCache for Redis OSS. AWS has made significant contributions to open source Valkey in the areas of performance, scalability, and memory optimizations, and we are bringing these benefits into ElastiCache version 8.0 for Valkey.

ElastiCache version 8.0 for Valkey is now available in all AWS regions. You can upgrade from ElastiCache version 7.2 for Valkey or any ElastiCache for Redis OSS version to ElastiCache version 8.0 for Valkey in a few clicks without downtime. You can get started using the AWS Management Console, Software Development Kit (SDK), or Command Line Interface (CLI). For more information, please visit the ElastiCache features page, blog and documentation.
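An in-place upgrade can also be scripted; a minimal boto3 sketch for a node-based cluster, with a placeholder replication group ID (treat the Engine parameter as an assumption for cross-engine upgrades from Redis OSS):

    import boto3

    ec = boto3.client("elasticache", region_name="us-east-1")
    ec.modify_replication_group(
        ReplicationGroupId="my-cache",  # placeholder
        Engine="valkey",                # assumed for upgrades from Redis OSS
        EngineVersion="8.0",
        ApplyImmediately=True,
    )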
 

Read more


Amazon RDS for PostgreSQL supports minor versions 17.2, 16.6, 15.10, 14.15, 13.18, and 12.22

Amazon Relational Database Service (RDS) for PostgreSQL now supports the latest minor versions 17.2, 16.6, 15.10, 14.15, 13.18, and 12.22. We recommend that you upgrade to the latest minor versions to fix known security vulnerabilities in prior versions of PostgreSQL, and to benefit from the bug fixes added by the PostgreSQL community.

You can leverage automatic minor version upgrades to automatically upgrade your databases to more recent minor versions during the scheduled maintenance window. Learn more about upgrading your database instances in the Amazon RDS User Guide.
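For instance, both paths are a single call with boto3, using a placeholder instance identifier:

    import boto3

    rds = boto3.client("rds", region_name="us-east-1")

    # Opt in to automatic minor version upgrades during maintenance windows.
    rds.modify_db_instance(
        DBInstanceIdentifier="my-postgres-db",  # placeholder
        AutoMinorVersionUpgrade=True,
    )

    # Or upgrade to a specific minor version explicitly.
    rds.modify_db_instance(
        DBInstanceIdentifier="my-postgres-db",
        EngineVersion="17.2",
        ApplyImmediately=True,
    )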

Additionally, starting with PostgreSQL major version 18, Amazon RDS for PostgreSQL will deprecate the plcoffee and plls PostgreSQL extensions. We recommend that you stop using CoffeeScript and LiveScript in your applications to ensure you have an upgrade path for the future.

Amazon RDS for PostgreSQL makes it simple to set up, operate, and scale PostgreSQL deployments in the cloud. See Amazon RDS for PostgreSQL Pricing for pricing details and regional availability. Create or update a fully managed Amazon RDS database in the Amazon RDS Management Console.
 

Read more


Amazon RDS for MySQL now supports MySQL 8.4 LTS release

Amazon RDS for MySQL now supports MySQL major version 8.4, the latest long-term support (LTS) release from the MySQL community. RDS for MySQL 8.4 is integrated with the AWS Libcrypto (AWS-LC) FIPS module (Certificate #4816), and includes support for the multi-source replication plugin for analytics and the Group Replication plugin for continuous availability, as well as several performance and feature improvements added by the MySQL community. Learn more about the community enhancements in the MySQL 8.4 release notes.

You can leverage Amazon RDS Managed Blue/Green deployments to upgrade your databases from MySQL 8.0 to MySQL 8.4. Learn more about RDS for MySQL 8.4 features and upgrade options, including Managed Blue/Green deployments in the Amazon RDS User Guide.

Amazon RDS for MySQL 8.4 is now available in all AWS Commercial and the AWS GovCloud (US) Regions.

Amazon RDS for MySQL makes it straightforward to set up, operate, and scale MySQL deployments in the cloud. Learn more about pricing details and regional availability at Amazon RDS for MySQL. Create or update a fully managed Amazon RDS for MySQL 8.4 database in the Amazon RDS Management Console.
 

Read more


OpenSearch’s vector engine adds support for UltraWarm on Amazon OpenSearch Service

UltraWarm is a fully managed warm storage tier designed to deliver cost savings on Amazon OpenSearch Service. With OpenSearch 2.17+ domains, you can now store k-NN (vector) indexes in UltraWarm storage, reducing the cost of serving infrequently accessed k-NN indexes through warm and cold storage tiers. With UltraWarm storage, you can further cost-optimize vector search workloads on the OpenSearch vector engine. To learn more, refer to the documentation.

Read more


AWS Compute Optimizer now supports rightsizing recommendations for Amazon Aurora

AWS Compute Optimizer now provides recommendations for Amazon Aurora DB instances. These recommendations help you identify idle database instances and choose the optimal DB instance class, so you can reduce costs for unused resources and increase the performance of under-provisioned workloads.

AWS Compute Optimizer automatically analyzes Amazon CloudWatch metrics such as CPU utilization, network throughput, and database connections to generate recommendations for your DB instances running Amazon Aurora MySQL-Compatible Edition and Aurora PostgreSQL-Compatible Edition engines. If you enable Amazon RDS Performance Insights on your DB instances, Compute Optimizer will analyze additional metrics such as DBLoad and out-of-memory counters to give you more insights to choose the optimal DB instance configuration. With this launch, AWS Compute Optimizer now supports recommendations for Amazon RDS for MySQL, Amazon RDS for PostgreSQL, and Amazon Aurora database engines.
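A sketch of pulling the new recommendations with boto3; the get_rds_database_recommendations operation is recent, and the response field names below are assumptions:

    import boto3

    co = boto3.client("compute-optimizer", region_name="us-east-1")
    resp = co.get_rds_database_recommendations()
    for rec in resp.get("rdsDBRecommendations", []):  # assumed response key
        print(rec.get("resourceArn"), rec.get("instanceFinding"))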

This new feature is available in all AWS Regions where AWS Compute Optimizer is available except the AWS GovCloud (US) and the China Regions. To learn more about the new feature updates, please visit Compute Optimizer’s product page and user guide.

Read more


Announcing auto migration of EC2 databases to Amazon RDS using AWS Database Migration Service

AWS announces a “1-click move to managed” feature for Amazon Relational Database Service (Amazon RDS) that enables you to easily and seamlessly migrate your self-managed MySQL, PostgreSQL, or MariaDB databases to an equivalent Amazon RDS or Amazon Aurora database.

Using the 1-click move to managed functionality on the Amazon RDS console, you can migrate your self-managed databases running on an Amazon EC2 server to a managed Amazon RDS or Aurora database. This feature eliminates the infrastructure setup burden and makes it easy and seamless to re-platform your application’s database workload to Amazon RDS. Amazon RDS leverages AWS Database Migration Service (AWS DMS) homogeneous migration APIs to abstract and automate the entire process, including the networking and system configuration required to initiate and complete the migration. The process is flexible, scalable, and cost-effective because the entire migration is performed using a temporary environment and native database tools.

The RDS 1-click move to managed feature is now available on the RDS console in AWS commercial regions where homogeneous data migrations are supported. Get started today by visiting the Amazon RDS Console. Refer to the RDS user guide or Aurora user guide to learn more.

Read more


Amazon RDS Blue/Green Deployments support storage volume shrink

Amazon Relational Database Service (Amazon RDS) Blue/Green Deployments now supports the ability to shrink the storage volumes for your RDS database instances, allowing you to better utilize your storage resources and manage costs. You can now increase and decrease your storage volume size based on anticipated application demands.

Previously, to shrink a storage volume, you had to manually create a new database instance with a smaller volume size, manually migrate the data from your current database to the newly created database instance, and switch database endpoints, often resulting in extended downtime. Blue/Green Deployments create a fully managed staging environment, or Green databases, with your specified storage size, and keep the Blue and Green databases in sync. With a few clicks, you can promote the Green databases to be the new production system in as fast as a minute, with no data loss and no changes to your application to switch database endpoints.

Amazon RDS Blue/Green Deployments support for storage volume shrink is available for Amazon RDS for PostgreSQL major versions 12 and higher, RDS for MySQL major versions 5.7 and higher, and Amazon RDS for MariaDB major versions 10.4 and higher.

In a few clicks, update your databases using Amazon RDS Blue/Green Deployments via the Amazon RDS Console. Learn more about RDS Blue/Green Deployments and the supported engine versions here.

Read more


Amazon Aurora now supports PostgreSQL 17.0 in the Amazon RDS Database preview environment

Amazon Aurora PostgreSQL-Compatible Edition now supports PostgreSQL version 17.0 in the Amazon RDS Database Preview Environment, allowing you to evaluate PostgreSQL 17.0 on Amazon Aurora PostgreSQL. PostgreSQL 17.0 was released by the PostgreSQL community on September 26, 2024. PostgreSQL 17 adds new features like a new memory management system for VACUUM and new SQL/JSON capabilities, including constructors, identity functions, and the JSON_TABLE() function. To learn more about PostgreSQL 17, read here.

Database instances in the RDS Database Preview Environment allow testing of a new database engine without the hassle of having to self-install, provision, and manage a preview version of the Aurora PostgreSQL database software. Clusters are retained for a maximum period of 60 days and are automatically deleted after this retention period. Amazon RDS Database Preview Environment database instances are priced the same as production Aurora instances created in the US East (Ohio) Region.
 

Amazon Aurora is designed for unparalleled high performance and availability at global scale with full MySQL and PostgreSQL compatibility. It provides built-in security, continuous backups, serverless compute, up to 15 read replicas, automated multi-Region replication, and integrations with other AWS services. To get started with Amazon Aurora, take a look at our getting started page.

Read more


Amazon Aurora Serverless v2 supports scaling to zero capacity

Amazon Aurora Serverless v2 now supports scaling to 0 Aurora Capacity Units (ACUs). This launch enables the database to automatically pause after a period of inactivity based on database connections. When the first connection is requested, the database will automatically resume and scale to meet the application demand. Aurora Serverless v2 measures capacity in ACUs where each ACU is a combination of approximately 2 gibibytes (GiB) of memory, corresponding CPU, and networking. You specify the capacity range and the database scales within this range to support your application’s needs.

With 0 ACUs, customers can now save cost during periods of database inactivity. Instead of scaling down to 0.5 ACUs, the database can now scale down to 0 ACUs. You can get started with this feature on a new or existing cluster with just a few clicks in the AWS Management Console. For a new cluster, set 0 ACUs for the minimum capacity setting. For existing clusters, update to supported versions and then modify the minimum capacity setting to 0 ACUs. 0 ACUs is supported for Aurora PostgreSQL 13.15+, 14.12+, 15.7+, and 16.3+, and Aurora MySQL 3.08+.
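For an existing cluster, the change is a single modification; a minimal boto3 sketch with a placeholder cluster identifier:

    import boto3

    rds = boto3.client("rds", region_name="us-east-1")
    rds.modify_db_cluster(
        DBClusterIdentifier="my-aurora-cluster",  # placeholder
        ServerlessV2ScalingConfiguration={
            "MinCapacity": 0,  # allow automatic pause after inactivity
            "MaxCapacity": 16,
        },
        ApplyImmediately=True,
    )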

Aurora Serverless is an on-demand, automatic scaling configuration for Amazon Aurora. It adjusts capacity in fine-grained increments to provide just the right amount of database resources for an application’s needs. For pricing details and Region availability, visit Amazon Aurora Pricing. To learn more, read the documentation, and get started by creating an Aurora Serverless v2 database using only a few steps in the AWS Management Console.
 

Read more


AWS Advanced NodeJS Driver is Generally Available

The Amazon Web Services (AWS) Advanced NodeJS Driver is now generally available for use with Amazon RDS and Amazon Aurora PostgreSQL and MySQL-compatible database clusters. This database driver provides support for faster switchover and failover times, Federated Authentication, and authentication with AWS Secrets Manager or AWS Identity and Access Management (IAM).

The Amazon Web Services (AWS) Advanced NodeJS Driver is a standalone driver and supports the underlying NodeJS driver with the PostgreSQL Client or the MySQL2 Client. You can install the PostgreSQL and MySQL packages for Windows, Mac, or Linux by following the established installation guides on GitHub. The driver monitors the database cluster status and is aware of the cluster topology to determine the new writer. This approach reduces writer failover times to single-digit seconds compared to the open-source driver.

The AWS Advanced NodeJS Driver is released as an open-source project under the Apache 2.0 license. For more details, see the Getting Started instructions on GitHub, along with guidance on how to raise issues.

Read more


Disk-optimized vector engine now available on the Amazon OpenSearch Service

Amazon OpenSearch Service's vector engine can now run modern search applications at a third of the cost on OpenSearch 2.17 domains. When you configure a k-NN (vector) index for disk mode, it is optimized for operating in a low-memory environment. With disk mode on, the index is compressed using techniques like binary quantization, and search quality (recall) is retained through a disk-optimized rescoring mechanism using full-precision vectors. Disk mode is an excellent option for vector search workloads that require high accuracy and cost efficiency and are satisfied by low hundreds-of-milliseconds latency. It provides customers with a lower-cost alternative to the existing in-memory mode when single-digit latency is unnecessary. To learn more, refer to the documentation.
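Disk mode is set per field when creating the index; a minimal sketch with the OpenSearch Python client, where the domain endpoint, index name, and dimension are placeholders:

    from opensearchpy import OpenSearch

    client = OpenSearch(hosts=[{"host": "my-domain-endpoint", "port": 443}], use_ssl=True)
    client.indices.create(
        index="products",
        body={
            "settings": {"index": {"knn": True}},
            "mappings": {
                "properties": {
                    "embedding": {
                        "type": "knn_vector",
                        "dimension": 768,
                        # on_disk quantizes vectors in memory and rescores with
                        # full-precision vectors stored on disk.
                        "mode": "on_disk",
                    }
                }
            },
        },
    )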

Read more


Amazon Keyspaces (for Apache Cassandra) now supports adding Regions to existing Keyspaces

Amazon Keyspaces (for Apache Cassandra) is a scalable, serverless, highly available, and fully managed Apache Cassandra-compatible database service that offers 99.999% availability.

Today, Amazon Keyspaces added the capability to add Regions to existing keyspaces. With this launch, you can convert an existing single-Region keyspace to a multi-Region keyspace, or add a new Region to an existing multi-Region keyspace, without recreating it. As your application traffic and business needs evolve over time, you can easily add new Regions closest to your application to achieve lower read and write latencies. You can also improve the availability and resiliency of your workloads by adding Regions. Amazon Keyspaces fully manages all aspects of creating a new Region and populating it with the latest data from other Regions, enabling you to focus your resources on adding value for your customers rather than managing operational tasks. You can still perform read and write operations on your tables in the existing Region during the addition of a new Region. With this capability, you get the flexibility and ease to manage the regional footprint of your application based on your changing needs.

Support for adding Regions to existing keyspaces is available in all AWS Regions where Amazon Keyspaces offers multi-Region Replication. For more information on multi-Region Replication, see the documentation. If you’re new to Amazon Keyspaces, the Getting Started guide shows you how to provision a keyspace and explore the query and scaling capabilities of Amazon Keyspaces.
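A hedged sketch of adding a Region with the Keyspaces control-plane API; the UpdateKeyspace operation and the parameter shape are assumptions based on this launch:

    import boto3

    ks = boto3.client("keyspaces", region_name="us-east-1")
    ks.update_keyspace(
        keyspaceName="my_keyspace",  # placeholder
        replicationSpecification={   # assumed parameter shape
            "replicationStrategy": "MULTI_REGION",
            "regionList": ["us-east-1", "us-west-2", "eu-west-1"],  # existing + new Region
        },
    )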

Read more


Amazon Aurora MySQL 3.08 (compatible with MySQL 8.0.39) is generally available

Starting today, Amazon Aurora MySQL-Compatible Edition 3 (with MySQL 8.0 compatibility) supports MySQL 8.0.39. In addition to several security enhancements and bug fixes, MySQL 8.0.39 contains enhancements that improve database availability when handling a large number of tables and reduce InnoDB issues related to redo logging and index handling.

Aurora MySQL 3.08 also includes multiple availability improvements to reduce database restarts, memory management telemetry improvements with new CloudWatch metrics, major version upgrade optimizations for Aurora MySQL 2 to 3 upgrades, and general improvements around memory management and observability. For more details, refer to the Aurora MySQL 3.08 and MySQL 8.0.39 release notes.

To upgrade to Aurora MySQL 3.08, you can initiate a minor version upgrade manually by modifying your DB cluster, or you can enable the “Auto minor version upgrade” option when creating or modifying a DB cluster. This release is available in all AWS regions where Aurora MySQL is available.

Amazon Aurora is designed for unparalleled high performance and availability at global scale with full MySQL and PostgreSQL compatibility. It provides built-in security, continuous backups, serverless compute, up to 15 read replicas, automated multi-Region replication, and integrations with other AWS services. To get started with Amazon Aurora, take a look at our getting started page.

Read more


Amazon Aurora MySQL now supports R7i instances

Amazon Aurora with MySQL compatibility now supports R7i database instances powered by custom 4th Generation Intel Xeon Scalable processors. R7i instances offer larger instance sizes, up to 48xlarge, and feature an 8:1 ratio of memory to vCPU and the latest DDR5 memory. These instances are now available in the following AWS Regions: US East (N. Virginia, Ohio), US West (N. California, Oregon), Asia Pacific (Jakarta, Mumbai, Seoul, Singapore, Sydney, Tokyo), Canada (Central), and Europe (Frankfurt, Ireland, London, Milan, Paris, Spain, Stockholm).

You can spin up R7i database instances in the Amazon RDS Management Console or using the AWS CLI. Upgrading a database instance to the R7i instance family requires a simple instance type modification. For more details, refer to the Aurora documentation.

Amazon Aurora is designed for unparalleled high performance and availability at global scale with MySQL compatibility. It provides built-in security, continuous backups, serverless compute, up to 15 read replicas, automated multi-Region replication, and integrations with other AWS services. To get started with Amazon Aurora, take a look at our getting started page.
 

Read more


Amazon RDS for Oracle now supports M7i and R7i instance types

Amazon Relational Database Service (RDS) for Oracle now supports M7i and R7i database instance types. M7i and R7i are the latest Intel-based offerings and are available with a new maximum instance size of 48xlarge, which brings 50% more vCPUs and memory than the maximum size of the M6i and R6i instance types.

M7i and R7i instances are available for Amazon RDS for Oracle under the Bring Your Own License (BYOL) model for both Oracle Database Enterprise Edition (EE) and Oracle Database Standard Edition 2 (SE2). You can launch the new database instances in the Amazon RDS Management Console or using the AWS CLI.

Amazon RDS for Oracle is a fully managed commercial database that makes it easy to set up, operate, and scale Oracle deployments in the cloud. To learn more about Amazon RDS for Oracle, read RDS for Oracle User Guide and visit Amazon RDS for Oracle Pricing for available instance configurations, pricing details, and region availability.
 

Read more


Amazon Keyspaces (for Apache Cassandra) reduces prices by up to 75%

Amazon Keyspaces (for Apache Cassandra) is a scalable, serverless, highly available, and fully managed Apache Cassandra-compatible database service. Effective today, Amazon Keyspaces (for Apache Cassandra) is reducing prices by up to 75% across several pricing dimensions.

Amazon Keyspaces supports both on-demand and provisioned capacity modes for writing and reading data within a Region or across multiple Regions. Keyspaces’ on-demand mode provides a fully serverless experience with pay-as-you-go pricing and automatic scaling, eliminating the need for capacity planning. Many customers choose on-demand mode for its simplicity, enabling them to build modern, serverless applications that can start small and seamlessly scale to millions of requests per second.

Amazon Keyspaces has lowered prices for on-demand mode by up to 56% for single-Region and up to 65% for multi-Region usage, and for provisioned mode by up to 13% for single-Region and up to 20% for multi-Region usage. Additionally, to make data deletion more cost-effective, Keyspaces has lowered time-to-live (TTL) delete prices by 75%. Previously, on-demand was the cost-effective choice for spiky workloads, but with this pricing change, it now offers a lower cost for most provisioned capacity workloads as well. This change transforms on-demand mode into the recommended and default choice for the majority of Keyspaces workloads.

Together, these price reductions make Amazon Keyspaces even more cost-effective and simplify building, scaling, and managing Cassandra workloads. This pricing change is available in all AWS Regions where AWS offers Amazon Keyspaces. To learn more about the new price reductions, visit the Amazon Keyspaces pricing page.

Read more


Amazon DynamoDB reduces prices for on-demand throughput and global tables

Amazon DynamoDB is a serverless, NoSQL, fully managed database with single-digit millisecond performance at any scale. Starting today, we have made Amazon DynamoDB even more cost-effective by reducing prices for on-demand throughput by 50% and global tables by up to 67%.

DynamoDB on-demand mode offers a truly serverless experience with pay-per-request pricing and automatic scaling without the need for capacity planning. Many customers prefer the simplicity of on-demand mode to build modern, serverless applications that can start small and scale to millions of requests per second. While on-demand was previously cost effective for spiky workloads, with this pricing change, most provisioned capacity workloads on DynamoDB will achieve a lower price with on-demand mode. This pricing change is transformative as it makes on-demand the default and recommended mode for most DynamoDB workloads.

Global tables provide a fully managed, multi-active, multi-Region data replication solution that delivers increased resiliency, improved business continuity, and 99.999% availability for globally distributed applications at any scale. DynamoDB has reduced pricing for multi-Region replicated writes to match the pricing of single-Region writes, simplifying cost modeling for multi-Region applications. For on-demand tables, this price change lowers replicated write pricing by 67%, and for tables using provisioned capacity, replicated write pricing has been reduced by 33%.

These pricing changes took effect in all AWS Regions on November 1, 2024, and are automatically reflected in your AWS bill. To learn more about the new price reductions, see the AWS Database Blog, or visit the Amazon DynamoDB Pricing page.
 

Read more


Amazon RDS for PostgreSQL now supports major version 17

Amazon RDS for PostgreSQL now supports major version 17, starting with PostgreSQL version 17.1. The release includes support for the latest minor versions 16.5, 15.9, 14.14, 13.17, and 12.21. RDS for PostgreSQL comes with support for 94 PostgreSQL extensions such as pgvector 0.8.0, pg_tle 1.4.0, pgactive 2.1.4, and hypopg 1.4.1 that are updated to support PostgreSQL 17. This release also includes support for a new SQL function for monitoring autovacuum, providing insights to prevent transaction ID wraparound.

PostgreSQL 17 community updates include vacuum improvements that reduce memory usage, shorten the time to finish vacuuming, and show the progress of vacuuming indexes. With PostgreSQL 17, you no longer need to drop logical replication slots when performing a major version upgrade. PostgreSQL 17 continues to build on the SQL/JSON standard, adding support for `JSON_TABLE` features that can convert JSON to a standard PostgreSQL table. PostgreSQL 17 also includes general improvements to query performance and adds more flexibility to partition management with the ability to SPLIT/MERGE partitions.
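
To illustrate the JSON_TABLE addition, here is a minimal sketch run from Python with the psycopg2 driver (the endpoint and credentials are placeholders):

    import psycopg2  # assumes the psycopg2 driver is installed

    # Placeholder endpoint and credentials.
    conn = psycopg2.connect(
        host="mydb.abc123.us-east-1.rds.amazonaws.com",
        dbname="postgres", user="postgres", password="example",
    )
    with conn, conn.cursor() as cur:
        # JSON_TABLE (new in PostgreSQL 17) converts a JSON document
        # into a standard relational result set.
        cur.execute("""
            SELECT * FROM JSON_TABLE(
                '[{"item": "widget", "qty": 3}]'::jsonb,
                '$[*]' COLUMNS (item text PATH '$.item', qty int PATH '$.qty')
            ) AS jt;
        """)
        print(cur.fetchall())  # [('widget', 3)]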

You can upgrade your database using several options, including RDS Blue/Green deployments, in-place upgrades, and restoring from a snapshot. Learn more about upgrading your database instances in the Amazon RDS User Guide.
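
For example, a minimal boto3 sketch of an in-place major version upgrade (the instance identifier is hypothetical; take a snapshot first):

    import boto3

    rds = boto3.client("rds", region_name="us-east-1")

    rds.modify_db_instance(
        DBInstanceIdentifier="mydb",    # hypothetical instance
        EngineVersion="17.1",
        AllowMajorVersionUpgrade=True,  # required for a major version change
        ApplyImmediately=True,          # else applied in the next maintenance window
    )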

Amazon RDS for PostgreSQL makes it simple to set up, operate, and scale PostgreSQL deployments in the cloud. See Amazon RDS for PostgreSQL Pricing for pricing details and regional availability. Create or update a fully managed Amazon RDS database in the Amazon RDS Management Console.

Read more


Amazon DynamoDB introduces warm throughput for tables and indexes

Amazon DynamoDB now supports a new warm throughput value and the ability to easily pre-warm DynamoDB tables and indexes. The warm throughput value provides visibility into the number of read and write operations your DynamoDB tables can readily handle, while pre-warming lets you proactively increase the value to meet future traffic demands.

DynamoDB automatically scales to support workloads of virtually any size. However, when you have peak events like product launches or shopping events, request rates can surge 10x or even 100x in a short period of time. You can now check your tables’ warm throughput value to assess if your table can handle large traffic spikes for peak events. If you expect an upcoming peak event to exceed the current warm throughput value for a given table, you can pre-warm that table in advance of the peak event to ensure it scales instantly to meet demand.
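
As a rough boto3 sketch (the table name and values are illustrative), you can read the current warm throughput from DescribeTable and raise it with UpdateTable:

    import boto3

    dynamodb = boto3.client("dynamodb", region_name="us-east-1")

    # Inspect the current warm throughput value for a (hypothetical) table.
    desc = dynamodb.describe_table(TableName="orders")
    print(desc["Table"].get("WarmThroughput"))

    # Pre-warm the table ahead of an expected peak event.
    dynamodb.update_table(
        TableName="orders",
        WarmThroughput={
            "ReadUnitsPerSecond": 100000,
            "WriteUnitsPerSecond": 50000,
        },
    )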

Warm throughput values are available for all provisioned and on-demand tables and indexes at no cost. Pre-warming your table's throughput incurs a charge. See the Amazon DynamoDB Pricing page for pricing details. This capability is now available in all AWS commercial Regions. See the Developer Guide to learn more.

Read more


Amazon RDS for MySQL supports new minor version 8.0.40

Amazon Relational Database Service (Amazon RDS) for MySQL now supports MySQL minor version 8.0.40. We recommend that you upgrade to the latest minor versions to fix known security vulnerabilities in prior versions of MySQL, and to benefit from the bug fixes, performance improvements, and new functionality added by the MySQL community. Learn more about the enhancements in RDS for MySQL 8.0.40 in the Amazon RDS user guide.

You can leverage automatic minor version upgrades to automatically upgrade your databases to more recent minor versions during scheduled maintenance windows. You can also leverage Amazon RDS Managed Blue/Green deployments for safer, simpler, and faster updates to your MySQL instances. Learn more about upgrading your database instances, including automatic minor version upgrades and Blue/Green Deployments, in the Amazon RDS User Guide.
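
A minimal boto3 sketch of opting an instance into automatic minor version upgrades (the instance identifier is hypothetical):

    import boto3

    rds = boto3.client("rds", region_name="us-east-1")

    # Upgrades are then applied during the scheduled maintenance window.
    rds.modify_db_instance(
        DBInstanceIdentifier="my-mysql-db",
        AutoMinorVersionUpgrade=True,
    )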

Amazon RDS for MySQL makes it simple to set up, operate, and scale MySQL deployments in the cloud. Learn more about pricing details and regional availability at Amazon RDS for MySQL. Create or update a fully managed Amazon RDS for MySQL database in the Amazon RDS Management Console.

Read more


Amazon Timestream for InfluxDB is now available in China regions

You can now use Amazon Timestream for InfluxDB in the Amazon Web Services China (Beijing) Region, operated by Sinnet, and the Amazon Web Services China (Ningxia) Region, operated by NWCD. Timestream for InfluxDB makes it easy for application developers and DevOps teams to run fully managed InfluxDB databases on Amazon Web Services for real-time time-series applications using open-source APIs.

Timestream for InfluxDB offers the full feature set available in the InfluxDB 2.7 release of the open-source version, and adds deployment options with Multi-AZ high availability and enhanced durability. For high availability, Timestream for InfluxDB allows you to automatically create a primary database instance and synchronously replicate the data to an instance in a different Availability Zone. When it detects a failure, Timestream for InfluxDB automatically fails over to a standby instance without manual intervention.
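
Because Timestream for InfluxDB keeps the open-source APIs, existing client libraries work unchanged. A minimal sketch with the influxdb_client Python package (the endpoint, token, org, and bucket are placeholders):

    from influxdb_client import InfluxDBClient, Point
    from influxdb_client.client.write_api import SYNCHRONOUS

    client = InfluxDBClient(
        url="https://<your-db-endpoint>:8086",  # placeholder endpoint
        token="my-token",
        org="my-org",
    )
    write_api = client.write_api(write_options=SYNCHRONOUS)
    write_api.write(
        bucket="sensors",
        record=Point("temperature").tag("site", "lab-1").field("celsius", 21.5),
    )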

With the latest release, customers can use Amazon Timestream for InfluxDB in the following regions: US East (Ohio), US East (N. Virginia), US West (Oregon), Canada (Central), Asia Pacific (Mumbai), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Asia Pacific (Jakarta), Europe (Paris), Europe (Frankfurt), Europe (Ireland), Europe (London), Europe (Milan), Europe (Stockholm), Europe (Spain), Middle East (UAE), Amazon Web Services China (Beijing) Region, operated by Sinnet, and Amazon Web Services China (Ningxia) Region, operated by NWCD. To get started with Amazon Timestream, visit our product page.

Read more


Amazon DynamoDB announces user experience enhancements to organize your tables in the AWS GovCloud (US) Regions

Amazon DynamoDB now enables customers to easily find frequently used tables in the DynamoDB console in the AWS GovCloud (US) Regions. Customers can favorite their tables in the console’s tables page for quicker table access.

Customers can click the favorites icon to view their favorited tables in the console’s tables page. With this update, customers have a faster and more efficient way to find and work with tables that they often monitor, manage, and explore.

Customers can start using favorite tables at no additional cost. Get started with creating a DynamoDB table from the AWS Management Console.

Read more


AWS announces a new Apache Flink connector for Amazon DynamoDB

Today, AWS announced support for a new Apache Flink connector for Amazon DynamoDB. The new connector, contributed by AWS for the Apache Flink open source project, adds Amazon DynamoDB Streams as a new source for Apache Flink. You can now process DynamoDB Streams events with Apache Flink, a popular framework and engine for processing and analyzing streaming data.

Amazon DynamoDB is a serverless, NoSQL database service that enables you to develop modern applications at any scale. DynamoDB Streams provides a time-ordered sequence of item-level changes (insert, update, and delete) in a DynamoDB table. With Amazon Managed Service for Apache Flink, you can transform and analyze DynamoDB Streams data in real time using Apache Flink and integrate applications with other AWS services such as Amazon S3, Amazon OpenSearch Service, Amazon Managed Streaming for Apache Kafka, and more. Apache Flink connectors are software components that move data into and out of an Amazon Managed Service for Apache Flink application. You can use the new connector to read data from a DynamoDB stream starting with Apache Flink version 1.19. With Amazon Managed Service for Apache Flink there are no servers or clusters to manage, and there is no compute and storage infrastructure to set up.

The Apache Flink repo for AWS connectors can be found here. For detailed documentation and setup instructions, visit our Documentation Page.

Read more


Amazon Neptune Serverless is now available in 6 additional AWS Regions

Amazon Neptune Serverless is now available in the Europe (Paris), South America (Sao Paulo), Asia Pacific (Jakarta), Asia Pacific (Mumbai), Asia Pacific (Hong Kong), and Asia Pacific (Seoul) AWS Regions.

Amazon Neptune is a fast, reliable, and fully managed graph database service for building and running applications with highly connected datasets, such as knowledge graphs, fraud graphs, identity graphs, and security graphs. If you have unpredictable and variable workloads, Neptune Serverless automatically determines and provisions the compute and memory resources to run the graph database. Database capacity scales up and down based on the application’s changing requirements to maintain consistent performance, saving up to 90% in database costs compared to provisioning at peak capacity.

With today’s launch, Neptune Serverless is available in 19 AWS Regions. For pricing and region availability, please visit the Neptune pricing page.

You can create a Neptune Serverless cluster from the AWS Management Console, AWS Command Line Interface (CLI), or SDK. To learn more about Neptune Serverless, visit the product page, or the documentation.
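
As a sketch with boto3 (the identifier and capacity values are illustrative), creating a serverless cluster sets a ServerlessV2ScalingConfiguration, and a db.serverless instance is then added to the cluster:

    import boto3

    neptune = boto3.client("neptune", region_name="eu-west-3")

    neptune.create_db_cluster(
        DBClusterIdentifier="my-serverless-graph",  # hypothetical identifier
        Engine="neptune",
        ServerlessV2ScalingConfiguration={
            "MinCapacity": 1.0,   # Neptune capacity units (NCUs)
            "MaxCapacity": 16.0,  # scales automatically under load
        },
    )
    neptune.create_db_instance(
        DBInstanceIdentifier="my-serverless-graph-instance",
        DBClusterIdentifier="my-serverless-graph",
        Engine="neptune",
        DBInstanceClass="db.serverless",
    )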

Read more


Announcing AWS DMS Serverless improved Oracle to S3 full load throughput

AWS Database Migration Service Serverless (AWS DMSS) now offers improved throughput for Oracle to Amazon S3 full load migrations. With this enhancement, you can now migrate data from Oracle databases to S3 up to two times faster than previously possible with AWS DMSS.

AWS DMSS Oracle to Amazon S3 Full Load performance enhancements will be applied automatically whenever AWS DMSS detects a full load migration between an Oracle database and Amazon S3. For detailed information on these improvements, refer to the AWS DMSS enhanced throughput documentation.

To learn more, see the AWS DMS Full Load for Oracle databases documentation. For AWS DMS regional availability, please refer to the AWS Region Table.

Read more


Amazon RDS for SQL Server supports minor versions in October 2024

New minor versions of Microsoft SQL Server are now available on Amazon RDS for SQL Server, providing performance enhancements and security fixes. Amazon RDS for SQL Server now supports these latest minor versions of SQL Server 2016, 2017, 2019 and 2022 across the Express, Web, Standard, and Enterprise editions.

We encourage you to upgrade your Amazon RDS for SQL Server database instances at your convenience. You can upgrade with just a few clicks in the Amazon RDS Management Console or by using the AWS CLI. Learn more about upgrading your database instances from the Amazon RDS User Guide. The new minor versions include:

  • SQL Server 2016 SP3 GRD - 13.0.6450.1
  • SQL Server 2017 CU31 - 14.0.3480.1
  • SQL Server 2019 CU28 - 15.0.4395.2
  • SQL Server 2022 CU15 - 16.0.4150.1


These minor versions are available in all AWS commercial Regions where Amazon RDS for SQL Server databases are available, as well as in the AWS GovCloud (US) Regions.

Amazon RDS for SQL Server makes it simple to set up, operate, and scale SQL Server deployments in the cloud. See Amazon RDS for SQL Server Pricing for pricing details and regional availability.

Read more


Amazon RDS for Oracle now supports October 2024 Release Update

Amazon Relational Database Service (Amazon RDS) for Oracle now supports the October 2024 Release Update (RU) for Oracle Database versions 19c and 21c.

To learn more about Oracle RUs supported on Amazon RDS for each engine version, see the Amazon RDS for Oracle Release Notes. If the automatic minor version upgrade (AMVU) option is enabled, your DB instance is upgraded to the latest quarterly RU six to eight weeks after it is made available by Amazon RDS for Oracle in your AWS Region. These upgrades will happen during the maintenance window. To learn more, see the Amazon RDS maintenance window documentation.

For more information about the AWS Regions where Amazon RDS for Oracle is available, see the AWS Region table.

Read more


Amazon RDS Performance Insights now supports Data API for Aurora MySQL

Amazon RDS (Relational Database Service) Performance Insights now allows customers to monitor queries run through the RDS Data API for Aurora MySQL clusters. The RDS Data API provides an HTTP endpoint to run SQL statements on an Amazon Aurora DB cluster.

With this launch, customers are now able to use Performance Insights to monitor the impact of the queries run through the RDS Data API on their database performance. Additionally, customers can identify these queries and their related statistics by slicing the database load metric using the host name dimension, and filtering for 'RDS Data API'.
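
For reference, queries reach Aurora MySQL through the Data API as in this minimal boto3 sketch (the ARNs and names are placeholders); these are the calls whose load now surfaces in Performance Insights under the 'RDS Data API' host name:

    import boto3

    rds_data = boto3.client("rds-data", region_name="us-east-1")

    resp = rds_data.execute_statement(
        resourceArn="arn:aws:rds:us-east-1:123456789012:cluster:my-aurora-mysql",
        secretArn="arn:aws:secretsmanager:us-east-1:123456789012:secret:mydb-creds",
        database="mydb",
        sql="SELECT id, status FROM orders LIMIT 10",
    )
    print(resp["records"])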

Amazon RDS Performance Insights is a database performance tuning and monitoring feature of RDS that allows you to visually assess the load on your database and determine when and where to take action. With one click in the Amazon RDS Management Console, you can add a fully-managed performance monitoring solution to your Amazon RDS database.

To learn more about RDS Performance Insights, read the Amazon RDS User Guide and visit Performance Insights pricing for pricing details and region availability.
 

Read more


AWS announces CSV result format support for Amazon Redshift Data API

Amazon Redshift Data API enables you to access data efficiently from Amazon Redshift data warehouses by eliminating the need to manage database drivers, connections, network configurations, data buffering, and more. Data API now supports the comma-separated values (CSV) result format, which provides flexibility in how you access and process data, allowing you to choose between JSON and CSV formats based on your application needs.

With CSV result format, you can now specify whether you want your query results formatted as JSON or CSV through the --result-format parameter when calling ExecuteStatement and BatchExecuteStatement APIs. To retrieve CSV results, use the new GetStatementResultV2 API which supports CSV results, while GetStatementResult API continues to support only JSON. If not specified, the default format remains JSON.
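
A minimal boto3 sketch (the workgroup, database, and table are placeholders; in real code, poll DescribeStatement until the query finishes before fetching results):

    import boto3

    client = boto3.client("redshift-data", region_name="us-east-1")

    stmt = client.execute_statement(
        WorkgroupName="my-serverless-wg",  # placeholder Serverless workgroup
        Database="dev",
        Sql="SELECT venueid, venuename FROM venue LIMIT 5",
        ResultFormat="CSV",                # default is JSON if omitted
    )

    # Once the statement status is FINISHED:
    result = client.get_statement_result_v2(Id=stmt["Id"])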

CSV support with Data API is now generally available for both Amazon Redshift Provisioned and Amazon Redshift Serverless data warehouses in all AWS commercial Regions and the AWS GovCloud (US) Regions that support Data API. To get started and learn more, visit the Amazon Redshift Database Developer Guide.

Read more


desktop-and-app-streaming

Announcing Idle Disconnect Timeout for Amazon WorkSpaces

Amazon WorkSpaces now supports Idle Disconnect Timeout for Windows WorkSpaces Personal with the Amazon DCV protocol. WorkSpaces administrators can now configure how long a user can be inactive while connected to a personal WorkSpace before they are disconnected. This setting is already available for WorkSpaces Pools, but this launch includes end user notifications for idle users, warning that their session will be disconnected soon, for both Personal and Pools.

Idle Disconnect Timeout helps Amazon WorkSpaces administrators better optimize costs and resources for their fleet. This feature helps ensure that customers who pay for their resources hourly are only paying for the WorkSpaces that are actually in use. The notifications also provide improved overall user experience for both Personal and Pools end users, by warning them about the pending disconnection and giving them a chance to continue or save their work beforehand.

Idle Disconnect Timeout is available at no additional cost for Windows WorkSpaces running DCV, in all the AWS Regions where WorkSpaces is currently available. To get started with Amazon WorkSpaces, see Getting Started with Amazon WorkSpaces.

To enable this feature, you must be using Windows WorkSpaces Personal DCV host agent version 2.1.0.1554 or later. Your users must be on WorkSpaces Windows or macOS client versions 5.24 or later, WorkSpaces Linux client version 2024.7 or later, or on Web Access. Refer to the client version release notes for more details. To learn more, visit Manage your Windows WorkSpaces in the Amazon WorkSpaces Administrator Guide.

Read more


Amazon WorkSpaces Secure Browser now supports inline data redaction

Today, AWS End User Computing Services announced that customers can now redact specified data fields in web content accessed with Amazon WorkSpaces Secure Browser. With inline data redaction, administrators can create policies that help predict and redact certain data (e.g., Social Security numbers, credit card numbers) before it is displayed on the screen.

Inline data redaction helps customers raise the security bar for accessing certain data by automatically redacting data from strings of text displayed in web pages. Using the AWS Management Console, administrators can create redaction policies by choosing from 30 built-in data types (e.g., Social Security numbers, credit card numbers), or create their own custom data types. Administrators can set policies governing the strictness of enforcement and define the URLs where redaction should be enforced. For example, you can define redaction policies for your support agents to help prevent the visual display of credit card numbers from web-based payment systems. This way, you can help ensure that the credit card number field is redacted without restricting access to other data necessary to provide support.

Inline data redaction is available for your portal at no additional charge, in all the AWS Regions where WorkSpaces Secure Browser is available.

If you are new to WorkSpaces Secure Browser, you can get started by visiting the pricing page and adding the Free Trial offer to your AWS account. Then, go to the Amazon WorkSpaces Secure Browser management console and create a portal today.

Read more


Amazon WorkSpaces introduces support for Rocky Linux

Amazon Web Services today announced support for Rocky Linux from CIQ on Amazon WorkSpaces Personal, a fully managed virtual desktop offering. With this launch, organizations can provide their end users with an RPM Package Manager compatible environment, optimized for running compute-intensive applications, while helping to improve IT agility and reduce costs. Now WorkSpaces Personal customers have the flexibility to choose from a wider range of Linux distributions including Rocky Linux, Red Hat Enterprise Linux, and Ubuntu Desktop.

With Rocky Linux on WorkSpaces Personal, IT organizations can enable developers to work in an environment that is consistent with their production environment, and provide power users like engineers and data scientists with on-demand access to Rocky Linux environments as needed, quickly spinning up and tearing down instances and managing the entire fleet through the AWS Console, without the burden of capacity planning or license management. WorkSpaces Personal offers a wide range of high-performance, license-included, fully managed virtual desktop bundles, enabling organizations to only pay for the resources they use.

Rocky Linux on WorkSpaces Personal is available in all AWS Regions where WorkSpaces Personal is available, except for AWS China Regions. Depending on the WorkSpaces Personal running mode, you will be charged hourly or monthly for your virtual desktops. For more details on pricing, refer to Amazon WorkSpaces Pricing.

To get started with Rocky Linux on WorkSpaces Personal, sign in to the AWS Management Console and open the Amazon WorkSpaces console.  For more information, see the Amazon WorkSpaces Administration Guide.
 

Read more


Amazon WorkSpaces WSP enables desktop traffic over TCP/UDP port 443

Amazon WorkSpaces Amazon DCV-enabled desktop traffic now supports both TCP and UDP over port 443. This feature will be used automatically, requiring no configuration changes. Customers using port 4195 can continue to do so. The WorkSpaces client application prioritizes UDP (QUIC) for optimal performance, but will fall back to TCP if UDP is blocked. The WorkSpaces web client will connect over either TCP port 4195 or 443. If port 4195 is blocked, the client will exclusively use port 443.

The organization managing WorkSpaces may not be the same as the organization managing the client networks from which users connect to WorkSpaces. Because each network is managed independently, changing outbound access rules can involve administrative challenges, delays, or roadblocks. By carrying WorkSpaces DCV desktop traffic over TCP/UDP port 443, with fallback to TCP if UDP is not available, customers no longer need to open the unique TCP/UDP port 4195.

WorkSpaces DCV enabled desktop traffic over TCP/UDP Port 443 support is available in all AWS Regions where Amazon WorkSpaces is available. There is no additional charge for this feature. Please see the Amazon WorkSpaces Administration Guide for more information.

Read more


developer-tools

Amazon Q Developer now provides transformation capabilities for .NET porting (Preview)

Today, AWS announces new generative-AI powered transformation capabilities of Amazon Q Developer in public preview to accelerate porting of .NET Framework applications to cross-platform .NET. Using these capabilities, you can modernize your Windows .NET applications to be Linux-ready up to four times faster than traditional methods and realize up to 40% savings in licensing costs.

With this launch, Amazon Q Developer is now equipped with agentic capabilities for transformation that allow you to port hundreds of .NET Framework applications running on Windows to Linux-ready cross-platform .NET. Using Amazon Q Developer, you can delegate your tedious manual porting tasks and help free up your team’s precious time to focus on innovation.

You can chat with Amazon Q Developer in natural language to share high-level transformation objectives and connect it to your source code repositories. Amazon Q Developer then starts the transformation process with the assessment of your application code to identify .NET versions, supported project types, and their dependencies, and then ports the assessed application code along with their accompanying unit tests to cross-platform .NET. You and your team can collaboratively review, adjust, and approve the transformation process. Additionally, Amazon Q Developer provides a detailed work log as a documented trail of transformation decisions to support your organizational compliance objectives.

The transformation capabilities of Amazon Q Developer are available in public preview via a web experience and in your Visual Studio integrated development environment (IDE). To learn more, read the blogs on the web experience and the IDE experience, and visit Amazon Q Developer transformation capabilities webpage and documentation.
 

Read more


Amazon Q Developer can now automate code reviews

Starting today, Amazon Q Developer can also perform code reviews, automatically providing comments on your code in the IDE, flagging suspicious code patterns, providing patches where available, and even assessing deployment risk so you can get feedback on your code quickly.

Q Developer is a generative AI-powered assistant for designing, building, testing, deploying, and maintaining software. Its agents for software development have a deep understanding of your code repositories, so they can accelerate many tasks beyond coding. By automating the first round of code reviews and improving review consistency, Q Developer empowers code authors to fix issues faster, streamlining the process for both authors and reviewers. With this new capability, Q Developer can help you get immediate feedback on your code reviews and code fixes where available, so you can increase the speed of iteration and improve the quality of your code.

This capability is available in the integrated development environment (IDE) through a new chat command: /review. You can start automating code reviews via the Visual Studio Code and IntelliJ IDEA Integrated Development Environments (IDEs) with either an Amazon Q Developer Free Tier or Pro Tier subscription. For more details on pricing, see Amazon Q Developer pricing.

This capability is available in all AWS Regions where Amazon Q Developer is available. To get started with automated code reviews, visit Amazon Q Developer or read the news blog.

Read more


Amazon Q Developer adds operational investigation capability (Preview)

Amazon Q Developer now helps you accelerate operational investigations across your AWS environment in just a fraction of the time. With a deep understanding of your AWS cloud environment and resources, Amazon Q Developer looks for anomalies in your environment, surfaces related signals for you to explore, identifies potential root-cause hypotheses, and suggests next steps to help you remediate issues faster.

Amazon Q Developer works alongside you throughout your operational troubleshooting journey, from issue detection and triage through remediation. You can initiate an investigation by selecting the Investigate action on any Amazon CloudWatch data widget across the AWS Management Console. You can also configure Amazon Q to automatically investigate when a CloudWatch alarm is triggered. When an investigation starts, Amazon Q Developer sifts through various signals about your AWS environment including CloudWatch telemetry, AWS CloudTrail Logs, deployment information, changes to resource configuration, and AWS Health events.

CloudWatch now provides a dedicated investigation experience where teams can collaborate and add findings, view related signals and anomalies, and review suggestions for potential root cause hypotheses. This new capability also provides remediation suggestions for common operational issues across your AWS environment by surfacing relevant AWS Systems Manager Automation runbooks, AWS re:Post articles, and documentation. It also integrates with your existing operational workflows such as Slack via AWS Chatbot. 

The new operational investigation capability within Amazon Q Developer is available at no additional cost during preview in the US East (N. Virginia) Region. To learn more, see the getting started and best practices documentation.

Read more


Announcing GitLab Duo with Amazon Q (Preview)

Today, AWS announces a preview of GitLab Duo with Amazon Q, embedding advanced agent capabilities for software development and workload transformation directly in GitLab's enterprise DevSecOps platform. With this launch, GitLab Duo with Amazon Q delivers a seamless development experience across tasks and teams, automating complex, multi-step tasks for software development, security, and transformation, all using the familiar GitLab workflows developers already know.

Using GitLab Duo, developers can delegate issues to Amazon Q agents using quick actions to build new features faster, maximize quality and security with AI-assisted code reviews, create and execute unit tests, and upgrade a legacy Java codebase. GitLab’s unified data store across the software development life cycle (SDLC) gives Amazon Q project context to accelerate and automate end-to-end workflows for software development, simplifying the complex toolchains historically required for collaboration across teams.

  • Streamline software development: Go from new feature idea in an issue, to merge-ready code in minutes. Iterate directly from GitLab, using feedback in comments to accelerate development workflows from end-to-end.
  • Optimize code: Generate unit tests for new merge requests to save developer time and ensure consistent quality assurance practices are enforced across teams.
  • Maximize quality and security: Provide AI-driven code quality, security reviews and generated fixes to accelerate feedback cycles.
  • Transform enterprise workloads: Starting with Java 8 or 11 codebases, developers can upgrade to Java 17 directly from a GitLab project to improve application security, performance, and remove technical debt.

Visit the Amazon Q Developer integrations page to learn more.

Read more


Amazon Q Developer can now generate documentation within your source code

Starting today, Amazon Q Developer can document your code by automatically generating readme files and data-flow diagrams within your projects. 

Today, developers report they spend an average of just one hour per day coding. They spend most of their time on tedious, undifferentiated tasks such as learning codebases, writing and reviewing documentation, testing, managing deployments, troubleshooting issues, or finding and fixing vulnerabilities. Q Developer is a generative AI-powered assistant for designing, building, testing, deploying, and maintaining software. Its agents for software development have a deep understanding of your code repositories, so they can accelerate many tasks beyond coding. With this new capability, Q Developer can help you understand your existing code bases faster, or quickly document new features, so you can focus on shipping features for your customers.

This capability is available in the integrated development environment (IDE) through a new chat command: /doc. You can get started generating documentation within the Visual Studio Code and IntelliJ IDEA IDEs with an Amazon Q Developer Free Tier or Pro Tier subscription. For more details on pricing, see Amazon Q Developer pricing.

This capability is available in all AWS Regions where Amazon Q Developer is available. To get started with generating documentation, visit Amazon Q Developer or read the news blog.

Read more


Amazon Q Developer transformation capabilities for mainframe modernization are now available (Preview)

Today, AWS announces new generative AI–powered capabilities of Amazon Q Developer in public preview to help customers and partners accelerate large-scale assessment and modernization of mainframe applications.

Amazon Q Developer is enterprise-ready, offering a unified web experience tailored for large-scale modernization, federated identity, and easier collaboration. Keeping you in the loop, Amazon Q Developer agents analyze and document your code base, identify missing assets, decompose monolithic applications into business domains, plan modernization waves, and refactor code. You can chat with Amazon Q Developer in natural language to share high-level transformation objectives, source repository access, and project context. Amazon Q Developer agents autonomously classify and organize application assets and create comprehensive code documentation to understand and expand the knowledge base of your organization. The agents combine goal-driven reasoning using generative AI and modernization expertise to develop modernization plans customized for your code base and transformation objectives. You can then collaboratively review, adjust, and approve the plans through iterative engagement with the agents. Once you approve the proposed plan, Amazon Q Developer agents autonomously refactor the COBOL code into cloud-optimized Java code while preserving business logic.

By delegating tedious tasks to autonomous Amazon Q Developer agents with your review and approvals, you and your team can collaboratively drive faster modernization, larger project scale, and better transformation quality and performance using generative AI large language models. You can enhance governance and compliance by maintaining a well-documented and explainable trail of transformation decisions.

To learn more, read the blog and visit Amazon Q Developer transformation capabilities webpage and documentation.

Read more


Amazon Q Developer launches Java upgrade transformation CLI (Public Preview)

Amazon Q Developer launches public preview of Java upgrade transformation CLI (command line interface). The CLI allows you to invoke transformations from the command line and perform transformations at scale.

The CLI provides the following capabilities:

  • Transform your Java applications from Java 8 or Java 11 to Java 17 (available in the IDE and now in the CLI)
  • Custom transformations (new and only in CLI): The CLI will allow you to perform custom transformations you defined specific to your code bases in your organization. Prior to this launch, Amazon Q Developer would upgrade open-source libraries in your Java applications. With custom transformations in the CLI, you can define your own transformations specific to your code bases and internal libraries. You can define custom transformations using ast-grep, a code tool for structural search and replace. Amazon Q Developer can perform your custom transformations and leverage Q’s AI debugging capabilities.
  • Build on local environment (new and only in CLI): The CLI performs the verification build in your local environment, which allows unit tests and integration tests to run during build verification.

This capability is available in the command line, on Linux and macOS. You can learn more about the Code Transformation CLI and get started here.

Read more


Amazon Q Developer for the Eclipse IDE is now in public preview

The Amazon Q Developer plugin for the Eclipse IDE is now in public preview. With this launch, developers can leverage the power of Q Developer, the most capable generative AI-powered assistant for software development, within the Eclipse IDE.

Eclipse developers can now chat with Amazon Q Developer about their project and code faster with inline code suggestions within the IDE. Developers can also leverage Amazon Q Developer customization to receive tailored responses and code recommendations that conform to their team's internal libraries, proprietary algorithmic techniques, and enterprise code style. This helps users build faster while enhancing productivity across the entire software development lifecycle.

The Amazon Q Developer plugin for the Eclipse IDE Public Preview is available in all AWS regions where Q Developer is supported. Learn more and download the free Amazon Q Developer plugin for Eclipse to get started.

Read more


Amazon Q Developer can now provide more personalized chat answers based on console context

Today, AWS announces the general availability of console context awareness for the Amazon Q Developer chat within the AWS Management Console. This new capability allows Amazon Q Developer to dynamically understand and respond to inquiries based on the specific AWS service you are currently viewing or configuring and the region you are operating within. For example, if you are working within the Amazon Elastic Container Service (Amazon ECS) console, you can ask "How can I create a cluster?" and Amazon Q Developer will recognize the context and provide relevant guidance tailored to creating ECS clusters.

This update enables more natural conversations without providing repetitive context details, allowing you to arrive at the answers you seek faster. This capability is included at no additional cost in both the Amazon Q Developer Free Tier and the Pro Tier, which requires a paid subscription. For more information on pricing, please see the Amazon Q Developer Pricing page. You can access this feature in all Regions where Amazon Q Developer chat is available in the AWS Management Console. You can get started today by chatting with Amazon Q Developer in the AWS Management Console.
 

Read more


Amazon Q Java transformation launches Step-by-Step and Library Upgrades

Amazon Q Developer Java upgrade transformation now offers step-by-step upgrades and library upgrades for Java 17 applications. This new feature allows developers to review and accept code changes in multiple diffs, and to test proposed changes in each diff step-by-step. Additionally, Amazon Q can now upgrade libraries for applications already on Java 17, enabling continuous maintenance.

This launch significantly improves the code review and application modernization process. By allowing developers to review smaller sets of code changes at a time, it makes errors easier to fix when manual completion is required. The ability to upgrade apps already on Java 17 to the latest reliable libraries helps organizations save time and effort in maintaining their applications across the board.

This capability is available within the Visual Studio Code and IntelliJ IDEs.

To learn more and get started with these new features, see here.

Read more


Amazon Q Developer now provides natural language cost analysis

Today, AWS announces the addition of cost analysis capabilities to Amazon Q Developer, allowing customers to retrieve and interpret their AWS cost data through natural language interactions. Amazon Q Developer is a generative AI-powered assistant that helps customers build, deploy, and operate applications on AWS. The cost analysis capability helps users of all skill levels to better understand and manage their AWS spending without previous knowledge of AWS Cost Explorer.

Customers can now ask Amazon Q Developer questions about their AWS costs such as "Which region had the largest cost increase last month?" or "What services cost me the most last quarter?". Q interprets these questions, analyzes the relevant cost data, and provides easy-to-understand responses. Each answer includes transparency on the Cost Explorer parameters used and a link to visualize the data in Cost Explorer.

This feature is now available in all AWS Regions where Amazon Q Developer is supported. Customers can access it via the Amazon Q icon in the AWS Management Console. To get started, see the AWS Cost Management user guide.
 

Read more


AWS CodePipeline now supports publishing ECR images and AWS InspectorScan as new actions

AWS CodePipeline introduces the ECRBuildAndPublish action and the AWS InspectorScan action in its action catalog. The ECRBuildAndPublish action enables you to easily build a Docker image and publish it to ECR as part of your pipeline execution. The InspectorScan action enables you to scan your source code repository or Docker image as part of your pipeline execution.

Previously, if you wanted to build and publish a Docker image, or run a vulnerability scan, you had to create a CodeBuild project, configure the project with the appropriate commands, and add a CodeBuild action to your pipeline to run the project. Now, you can simply add these actions to your pipeline, and let the pipeline handle the rest for you.
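
As a rough sketch of what an action declaration might look like in a pipeline definition passed to boto3's update_pipeline (the configuration keys here are assumptions; consult the action reference for the exact schema):

    # Fragment of a stage's "actions" list; configuration keys are illustrative.
    ecr_build_and_publish_action = {
        "name": "BuildAndPublishImage",
        "actionTypeId": {
            "category": "Build",
            "owner": "AWS",
            "provider": "ECRBuildAndPublish",
            "version": "1",
        },
        "configuration": {
            # Assumed keys -- see the ECRBuildAndPublish action reference.
            "ECRRepositoryName": "my-repo",
            "DockerFilePath": "./Dockerfile",
        },
        "inputArtifacts": [{"name": "SourceOutput"}],
        "runOrder": 1,
    }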

To learn more about using the ECRBuildAndPublish action in your pipeline, visit our documentation. To learn more about using the InspectorScan action in your pipeline, visit our documentation. For more information about AWS CodePipeline, visit our product page. These new actions are available in all regions where AWS CodePipeline is supported, except the AWS GovCloud (US) Regions and the China Regions.

Read more


Application Signals provides OTEL support via X-Ray OTLP endpoint for traces

CloudWatch Application Signals, an application performance monitoring (APM) solution, enables developers and operators to easily monitor the health and performance of their applications hosted across different compute platforms such as EKS, ECS and more. Customers can now use OpenTelemetry Protocol (OTLP), an open-source protocol, to send traces to the X-Ray OTLP endpoint, and unlock application performance monitoring capabilities with Application Signals.

OpenTelemetry Protocol (OTLP) is a standardized protocol for transmitting telemetry data from your applications to monitoring solutions like CloudWatch. Developers who use OpenTelemetry to instrument their applications can now send traces to the X-Ray OTLP endpoint, unlocking, via Application Signals, pre-built, standardized dashboards for critical application metrics (throughput/latency/errors), correlated trace spans, and interactions between applications and their dependencies (such as other AWS services). This provides operators with a complete picture of the application's health, allowing them to pinpoint the source of performance issues. By creating Service Level Objectives (SLOs) within Application Signals, customers can track performance indicators of crucial application functions. This makes it simple to spot and address any operations falling short of their business goals. Finally, customers can also analyze application issues in a business context, such as troubleshooting customer support tickets or finding the top customers impacted by application disruptions, by searching and analyzing transaction (or trace) spans.

OTLP endpoint for traces is available in all regions where Application Signals is generally available. For pricing, see Amazon CloudWatch pricing. See documentation to learn more.

Read more


Amazon Q Developer Chat Customizations is now generally available

Today, Amazon Web Services (AWS) is excited to announce the general availability of customizable chat responses generated by Amazon Q Developer in the IDE. With this capability, you can securely connect Q Developer to your private codebases to receive more precise chat responses that take into account your organization’s internal APIs, libraries, classes, and methods. Readmes and best practices demonstrated within your code repositories are also utilized within your customization. You can use a customized version of Q Developer chat in the IDE to ask questions about how your internal codebase is structured, and where and how certain functions or libraries are used. With these capabilities, Q Developer can boost productivity by reducing the time builders spend examining previously written code and deciphering internal APIs, documentation, and other resources.

To get started, you first need to add your organization’s private repositories to Q Developer through the AWS Management Console, and then create and activate your customization. You can easily manage access to a customization from the AWS Management Console so that only specific developers have access. Each customization is isolated from other customers, and none of the customizations built with these new capabilities will be used to train the foundation models underlying Q Developer.

These capabilities are available as part of the Amazon Q Developer Pro subscription. To learn more about pricing, please visit Amazon Q Developer Pricing.

To learn more, see the Amazon Q Developer webpage.
 

Read more


Amazon CloudWatch Synthetics now supports Playwright runtime to create canaries with NodeJS

CloudWatch Synthetics, which continuously monitors web applications and APIs by running scripted canaries to help you detect issues before they impact end users, now supports the Playwright framework for creating NodeJS canaries, enabling comprehensive monitoring and diagnosis of complex user journeys and issues that are challenging to automate with other frameworks.

Playwright is an open-source automation library for testing web applications. You can now create multi-tab workflows in a canary using the Playwright runtime, which comes with the advantage of troubleshooting failed runs with logs stored directly in CloudWatch Logs in your AWS account. This replaces the previous method of storing logs as text files and enables you to leverage CloudWatch Logs Insights for query-based filtering, aggregation, and pattern analysis. You can now query CloudWatch logs for your canaries using the canary run ID or step name, making the troubleshooting process faster and more precise than one relying on timestamp correlation for searching logs. Playwright-based canaries also generate artifacts like reports, metrics, and HAR files, even when a canary times out, ensuring you have the data needed for root cause analysis in those scenarios. Additionally, the new runtime simplifies canary configuration by allowing customization through a JSON file, removing the need to call a library function in the canary code.
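
A hedged boto3 sketch of creating a Playwright canary (the runtime version name and handler format are assumptions, and all ARNs and S3 locations are placeholders):

    import boto3

    synthetics = boto3.client("synthetics", region_name="us-east-1")

    synthetics.create_canary(
        Name="checkout-flow",
        RuntimeVersion="syn-nodejs-playwright-1.0",  # assumed runtime name
        Code={
            "S3Bucket": "my-canary-code",
            "S3Key": "checkout.zip",
            "Handler": "checkout.handler",           # assumed handler format
        },
        ArtifactS3Location="s3://my-canary-artifacts/",
        ExecutionRoleArn="arn:aws:iam::123456789012:role/CanaryRole",
        Schedule={"Expression": "rate(5 minutes)"},
    )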

Playwright runtime is available for creating canaries in NodeJS in all commercial regions at no additional cost to users.

To learn more about the runtime, see documentation, or refer to the user guide to get started with CloudWatch Synthetics.

Read more


Mountpoint for Amazon S3 now supports a high performance shared cache

You can now use Amazon S3 Express One Zone as a high performance read cache with Mountpoint for Amazon S3. The cache can be shared by multiple compute instances and can elastically scale to any dataset size. Mountpoint for S3 is a file client that translates local file system API calls to REST API calls on S3 objects. With this launch, Mountpoint for S3 can cache data in S3 Express One Zone after it’s read, making subsequent read requests up to 7x faster compared to reading data from S3 Standard.

Previously, Mountpoint for S3 could cache recently accessed data in Amazon EC2 instance storage, EC2 instance memory, or an Amazon EBS volume. This improved performance for repeated read access from the same compute instance for dataset sizes up to the size of the available local storage. Starting today, you can also opt in to caching data in S3 Express One Zone, benefiting applications that repeatedly read a shared dataset across multiple compute instances, without any limits on the total dataset size. Once you opt in, Mountpoint for S3 retains objects with sizes up to one megabyte in S3 Express One Zone. This is ideal for compute-intensive use cases such as machine learning training for computer vision models where applications repeatedly read millions of small images from multiple instances.

Mountpoint for Amazon S3 is an open source project backed by AWS support, which means customers with AWS Business and Enterprise Support plans get 24/7 access to cloud support engineers. To get started, visit the GitHub page and product page.

Read more


Accelerate AWS CloudFormation troubleshooting with Amazon Q Developer assistance

AWS CloudFormation now offers generative AI assistance powered by Amazon Q Developer to help troubleshoot unsuccessful CloudFormation deployments. This new capability provides easy-to-understand analysis and actionable steps to simplify the resolution of the most common resource provisioning errors encountered during CloudFormation deployments.

When creating or modifying a CloudFormation stack, CloudFormation can encounter errors in resource provisioning, such as missing required parameters for an EC2 instance or inadequate permissions. Previously, troubleshooting a failed stack operation could be a time-consuming process. After identifying the root cause of the failure, you had to search through blogs and documentation for solutions and determine the next steps, leading to longer resolution times. Now, when you review a failed stack operation in the CloudFormation Console, CloudFormation automatically highlights the likely root cause of the failure. You can click the "Diagnose with Q" button in the error alert box and Amazon Q Developer will provide a human-readable analysis of the error, helping you understand what went wrong. If you need further assistance, you can click the "Help me resolve" button to receive actionable resolution steps tailored to your specific failure scenario, helping you accelerate resolution of the error.

To get started, open the CloudFormation Console and navigate to the stack events tab for a provisioned stack. This feature is available in AWS Regions where AWS CloudFormation and Amazon Q Developer are available. Refer to the AWS Region table for service availability details. Visit our user guide to learn more about this feature.
 

Read more


AWS CloudFormation Hooks now supports evaluating AWS Cloud Control API resource configurations

AWS CloudFormation Hooks now allow you to evaluate resource configurations from AWS Cloud Control API (CCAPI) create and update operations. Hooks allow you to invoke custom logic to enforce security, compliance, and governance policies on your resource configurations. CCAPI is a set of common application programming interfaces (APIs) that is designed to make it easy for developers to manage their cloud infrastructure in a consistent manner and leverage the latest AWS capabilities faster. By extending Hooks to CCAPI, customers can now inspect resource configurations prior to CCAPI create and update operations, and block or warn the operations if there is a non-compliant resource found.

Before this launch, customers would publish Hooks that would only be invoked during CloudFormation operations. Now, customers can extend their resource Hook evaluations beyond CloudFormation to CCAPI based operations. Customers with existing resource Hooks, or who are using the recently launched pre-built Lambda and Guard hooks, simply need to specify “Cloud_Control” as a target in the hooks’ configuration.

Hooks is available in all AWS Commercial Regions. The CCAPI support is available for customers who use CCAPI directly or through third-party IaC tools that support CCAPI providers.

To get started, refer to the Hooks user guide and the CCAPI user guide for more information. Learn the details of this feature in the AWS DevOps Blog.
 

Read more


Amazon CloudWatch Application Signals launches support for Runtime Metrics

Today, AWS announces the general availability of runtime metrics support in Amazon CloudWatch Application Signals, an OpenTelemetry (OTel) compatible application performance monitoring (APM) feature in CloudWatch. You can view runtime metrics like garbage collection, memory usage, and CPU usage for your Java or Python applications to troubleshoot issues such as high CPU utilization or memory leaks, which can disrupt the end-user experience.

Application Signals simplifies troubleshooting application performance against key business or service level objectives (SLOs) for AWS applications. Without any source code changes, Application Signals collects traces, application metrics (error/latency/throughput), logs, and now runtime metrics to bring them together in a single pane of glass view.

Runtime metrics enable real-time monitoring of your application’s resource consumption, such as memory and CPU usage. With Application Signals, you can understand whether anomalies in runtime metrics have any impact on your end users by correlating them with application metrics such as error/latency/throughput. For example, you will be able to identify if a service latency spike is a result of an increase in garbage collection pauses by viewing these metric graphs side by side. Additionally, you will be able to identify thread contention, track memory allocation patterns, and pinpoint memory or CPU spikes that may lead to application slowdowns or crashes, impacting the end user experience.

Runtime metrics support is available in all Regions where Application Signals is available. Runtime metrics are charged based on Application Signals pricing; see Amazon CloudWatch pricing.

To learn more, see documentation to enable Amazon CloudWatch Application Signals.

Read more


Author AWS CloudFormation Hooks using the CloudFormation Guard domain-specific language

AWS CloudFormation Hooks now allows customers to use the AWS CloudFormation Guard domain-specific language to author hooks. Customers use AWS CloudFormation Hooks to invoke custom logic to inspect resource configurations prior to a create, update, or delete AWS CloudFormation stack operation. If a non-compliant configuration is found, Hooks can block the operation or let the operation continue with a warning. With this launch, you can now author hooks by simply pointing to a Guard rule set stored as an S3 object.

Prior to this launch, customers authored hooks using a programming language and registered the hooks as extensions on the CloudFormation registry using the cfn-cli. This pre-built hook simplifies this authoring process and provides customers the ability to extend their existing Guard rules used for static template validation. Now, you can store your Guard rules, either as individual or compressed files in an S3 bucket, and provide your S3 URI in your hooks configuration.

The Guard hook is available at no additional charge in all AWS Commercial Regions. To get started, you can use the new Hooks console workflow within the CloudFormation console, or use the AWS CLI or CloudFormation.

To learn more about the Guard hook, check out the AWS DevOps Blog or refer to the Guard Hook User Guide. Refer to Guard User Guide to learn more about Guard including how to write Guard rules.
 

Read more


AWS CloudFormation Hooks now support custom AWS Lambda functions

AWS CloudFormation Hooks introduces a pre-built hook that allows you to simply point to an AWS Lambda function in your account. With CloudFormation Hooks, you can provide custom logic that proactively evaluates your resource configurations before provisioning. Today’s launch allows you to provide your custom logic as a Lambda function, offering a simpler way to author a hook while gaining the extended flexibility of hosting Lambda functions in your account.

Prior to this launch, customers used the CloudFormation CLI (cfn-cli) to author and publish hooks to the CloudFormation registry. Now, customers can simply activate the Lambda hook and pass a Lambda Amazon Resource Name (ARN) for hooks to invoke. This allows you to directly edit your Lambda function to make updates without re-configuring your hook. Additionally, you no longer have to register your custom logic to the CloudFormation registry.
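
For orientation, here is a loose sketch of what such a Lambda function might look like (the event and response shapes here are assumptions; the Lambda Hook User Guide defines the exact contract):

    # Illustrative hook: fail on S3 buckets created without versioning.
    # Event and response shapes are assumptions -- consult the
    # Lambda Hook User Guide for the exact contract.
    def handler(event, context):
        props = (
            event.get("requestData", {})
                 .get("targetModel", {})
                 .get("resourceProperties", {})
        )
        versioned = (
            props.get("VersioningConfiguration", {}).get("Status") == "Enabled"
        )
        return {
            "hookStatus": "SUCCESS" if versioned else "FAILED",
            "message": "Bucket versioning check",
        }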

The Lambda hook is available at no additional charge in all AWS Commercial Regions. Customers will incur a charge for Lambda usage. Refer to Lambda’s pricing guide for more information. To get started, you can use the new Hooks console workflow within the CloudFormation console, or use the AWS CLI or CloudFormation.

To learn more about the Lambda hook, check out the detailed feature walkthrough on the AWS DevOps Blog or refer to the Lambda Hook User Guide. To get started with creating your Lambda function, visit AWS Lambda User Guide.
 

Read more


CloudWatch RUM now supports percentile aggregations and simplified troubleshooting with web vitals metrics

CloudWatch RUM, which captures real-time data on web application performance and user interactions, helping you quickly detect and resolve issues impacting the user experience, now supports percentile aggregation of web vitals metrics and simplified event-based troubleshooting directly from a web vitals anomaly.

Google uses the 75th percentile (p75) of a web page’s Core Web Vitals—Largest Contentful Paint, First Input Delay, and Cumulative Layout Shift—to influence page ranking. With CloudWatch RUM, you can now monitor the p75 values of web page vitals and ensure that the majority of your visitors experience optimal performance, minimizing the impact of outliers. You can also click on any point in the Web Vitals graph to view correlated page events, allowing you to quickly dive into event details such as browser, device, and geolocation to identify specific conditions causing performance issues. Additionally, you can track affected users and sessions for in-depth analysis and quickly troubleshoot issues without the added steps of applying filters to retrieve correlated events in CloudWatch RUM.

These enhancements are available in all regions where CloudWatch RUM is available at no additional cost to users.

See documentation to learn more about the feature, or see user guide or AWS One Observability Workshop to get started with real user monitoring using CloudWatch RUM.

Read more


AWS Amplify launches the full-stack AI kit for Amazon Bedrock

Today, AWS announces the general availability of the AWS Amplify AI kit for Amazon Bedrock, the quickest way for fullstack developers to build web apps with AI capabilities such as chat, conversational search, and summarization. The Amplify AI kit allows developers to easily leverage their data to get customized responses from Amazon Bedrock AI models. The Amplify AI kit allows anyone with knowledge of JavaScript or TypeScript, and web frameworks like React or Next.js, to add AI experiences to their apps, without any prior machine learning expertise.

The AI kit offers the following capabilities:

  • A pre-built, fully customizable <AIConversation> React UI component that offers a real-time, streaming chat experience along with features like UI responses instead of plain-text, chat history, and resumable conversations.
  • A type-safe client that provides secure server-side access to Amazon Bedrock.
  • Secure, built-in capabilities to share user context (e.g. data the user can access) with Amazon Bedrock models.
  • The ability to define tools with additional context that can be invoked by the models.
  • A fullstack TypeScript developer experience layered on Amplify Gen 2 and AWS AppSync.


To get started with the AI kit, see our launch blog.

Read more


Easily troubleshoot NodeJS applications with Amazon CloudWatch Application Signals

Today, AWS announces the general availability of NodeJS applications monitoring on Amazon CloudWatch Application Signals, an OpenTelemetry (OTel) compatible application performance monitoring (APM) feature in CloudWatch. Application Signals simplifies the process of automatically tracking application performance against key business or service level objectives (SLOs) for AWS applications. Service operators can access a pre-built, standardized dashboard for AWS application metrics through Application Signals.

Customers already use Application Signals to monitor their Java, Python, and .NET applications deployed on EKS, EC2, and other platforms. With this release, they can now easily onboard and troubleshoot issues in their NodeJS applications with no additional code. NodeJS application developers can quickly triage current operational health and determine whether their applications are meeting their longer-term performance goals. Customers can ensure high availability of their NodeJS applications through Application Signals’ easy navigation flow, starting with an alert for a service level indicator (SLI) that has become unhealthy and drilling down from there to an error or a spike in the auto-generated graphs for application metrics (latency, errors, requests). In a single-pane-of-glass view, they can correlate application metrics with traces, application logs, and infrastructure metrics to troubleshoot issues with their application in a few clicks.

Application Signals is available in all commercial AWS Regions except CA West (Calgary) and Asia Pacific (Malaysia); it is not available in the AWS GovCloud (US) Regions or the China Regions. For pricing, see Amazon CloudWatch pricing.

To learn more, see the documentation to enable Amazon CloudWatch Application Signals for Amazon EKS, Amazon EC2, native Kubernetes, and custom instrumentation for other platforms.

Read more


AWS App Studio is now generally available

AWS App Studio, a generative AI–powered app-building service that uses natural language to build enterprise-grade applications, is now generally available. App Studio helps technical professionals (such as IT project managers, data engineers, enterprise architects, and solution architects) build intelligent, secure, and scalable applications without requiring deep software development skills. App Studio handles deployments, operations, and maintenance, allowing users to focus on solving business challenges and boosting productivity.

App Studio is the fastest and easiest way to build enterprise-grade applications. Getting started is simple. Users describe the application they need in natural language, and App Studio’s generative AI-powered assistant creates an application with a multipage UI, a data model, and business logic. Builders can easily modify applications using natural language or with App Studio’s visual canvas. They can also enhance their applications with generative AI using built-in components to generate content, summarize information, and analyze files. Applications can connect to existing data using built-in connectors for AWS (such as Amazon Aurora, Amazon DynamoDB, and Amazon S3) and Salesforce, as well as hundreds of third-party services (such as HubSpot, Jira, Twilio, and Zendesk) using an API connector. Users can customize the look and feel of their applications to align with brand guidelines by selecting their logo and company color palette. With App Studio, building is free; you pay only for the time employees spend using the published applications, saving up to 80% compared to comparable offerings.

App Studio is generally available in the following AWS Regions: US West (Oregon) and Europe (Ireland).

To learn more and get started, visit AWS App Studio, review the documentation, and read the announcement.

Read more


Amazon Q Developer in the AWS Management Console now uses the service you’re viewing as context for your chat

Amazon Q Developer in the AWS Management Console now provides context-aware assistance for your questions about resources in your account. This feature allows you to ask questions directly related to the console page you're viewing, eliminating the need to specify the service or resource in your query. Q Developer uses the current page as additional context to provide more accurate and relevant responses, streamlining your interaction with AWS services and resources. When the service or resource cannot be inferred, Q Developer now prompts for clarification about the specific resource in question. It presents a list of potentially relevant resources, allowing you to select the appropriate one.

Customers use AWS Management Console’s curated experiences to investigate and act on their resources. Q Developer chat in the console allows customers to ask questions about AWS services and resources. Now, Q Developer uses the resource you’re currently viewing as context, reducing the need to specify resource identifiers to Q. For example, if you are viewing an EC2 instance and ask Amazon Q, “what is the AMI of this instance?”, you will not need to specify the instance you are referring to. For ambiguous questions without clear context, Q Developer offers potentially relevant resource options. Q can now count up to 500 resources of a specific type to assist with quantification.

Start gaining deeper insight into your resources using the AWS resource inspection capabilities with Amazon Q in the AWS console. Learn more about Amazon Q Developer here.
 

Read more


Amazon Q Developer plugins for Datadog and Wiz now generally available

Today’s launch extends Q Developer’s abilities to access trusted AWS partner services that customers know and love. Administrators on the Q Developer Pro Tier can enable plugins in the AWS Management Console by configuring the credentials to access these third-party services. Builders can now easily query and interact with Datadog and Wiz services directly in the console using Q Developer, helping them find information faster and stay in the flow longer. Customers can access a subset of information from Datadog and Wiz using natural language by asking “@datadog are there any active alerts?” or “@wiz what are my top 3 security issues today?”

Datadog, an AWS Advanced Technology Partner and the observability and security platform for cloud applications, provides AWS customers with unified, real-time observability and security across their entire technology stack.

With Wiz, organizations can democratize security across the development lifecycle, empowering them to build fast and securely. As an AWS Security Competency Partner, Wiz is committed to effectively reducing risk for AWS customers by seamlessly integrating into AWS services.

When starting a new conversation with Q Developer, use the commands @datadog or @wiz to quickly learn more about these services in the context of your AWS resources. Q Developer will call out to these service APIs, assemble a natural language response, and return a summary with deep links to the Datadog and Wiz resources.

To learn more about Amazon Q Developer, visit the service overview page.

Read more


Application Signals now supports burn rate for application performance goals

Amazon CloudWatch Application Signals, an application performance monitoring (APM) feature in CloudWatch, makes it easy to automatically instrument and track application performance against your most important business or service level objectives (SLOs). Customers can now receive alerts when these SLOs reach a critical burn rate. This new feature allows you to calculate how quickly your service is consuming its error budget relative to the SLO’s attainment goal. Burn rate metrics provide a clear indication of whether you’re meeting, exceeding, or at risk of failing your SLO goals.

Today, with burn rate metrics, you can configure CloudWatch alarms to notify you automatically when your error budget consumption exceeds specified thresholds. This allows for proactive management of service reliability, empowering your teams to take prompt action to achieve long-term performance targets. By setting multiple alarms with varying look-back windows, you can identify sudden error rate spikes and gradual shifts that could affect your error budget.
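As an illustration, a minimal boto3 sketch of such an alarm might look like the following (the namespace, metric name, and dimension keys are assumptions based on this announcement; verify them against the SLO documentation):

    import boto3

    cloudwatch = boto3.client("cloudwatch")

    # Alarm when the SLO's error budget burns more than twice as fast
    # as its attainment goal allows, sustained for 15 minutes.
    cloudwatch.put_metric_alarm(
        AlarmName="checkout-latency-slo-burn-rate",
        Namespace="AWS/ApplicationSignals",  # assumed namespace
        MetricName="BurnRate",               # assumed metric name
        Dimensions=[
            {"Name": "SloName", "Value": "checkout-latency-slo"},       # assumed key
            {"Name": "BurnRateWindowMinutes", "Value": "60"},           # assumed key
        ],
        Statistic="Average",
        Period=300,
        EvaluationPeriods=3,
        Threshold=2.0,
        ComparisonOperator="GreaterThanThreshold",
        AlarmActions=["arn:aws:sns:us-east-1:123456789012:oncall"],  # hypothetical topic
    )

Pairing a short look-back window (to catch sudden spikes) with a longer one (to catch slow drift) is the usual multi-window pattern this feature is designed for.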

Burn rates are available in all Regions where Application Signals is generally available: 28 commercial AWS Regions, excluding the CA West (Calgary) and Asia Pacific (Malaysia) Regions. For pricing, see Amazon CloudWatch pricing. See the SLO documentation to learn more, or refer to the user guide and the AWS One Observability Workshop to get started with Application Signals.

Read more


AWS CodeBuild now supports Windows Docker builds in reserved capacity fleets

AWS CodeBuild now supports building Windows Docker images in reserved capacity fleets. AWS CodeBuild is a fully managed continuous integration service that compiles source code, runs tests, and produces software packages ready for deployment.

Additionally, you can bring your own Amazon Machine Images (AMIs) to reserved capacity for Linux and Windows platforms. This enables you to customize your build environment, including building and testing with different kernel modules, for greater flexibility.
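A hedged boto3 sketch of creating a Windows fleet with a custom AMI follows (the environment type value and the imageId parameter are assumptions; confirm the supported values in the reserved capacity documentation):

    import boto3

    codebuild = boto3.client("codebuild")

    # Create a reserved capacity fleet for Windows Docker builds.
    codebuild.create_fleet(
        name="windows-docker-fleet",
        baseCapacity=2,
        environmentType="WINDOWS_SERVER_2022_CONTAINER",  # assumed value
        computeType="BUILD_GENERAL1_MEDIUM",
        imageId="ami-0123456789abcdef0",  # hypothetical custom AMI
    )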

The feature is now available in US East (N. Virginia), US East (Ohio), US West (Oregon), South America (Sao Paulo), Asia Pacific (Singapore), Asia Pacific (Tokyo), Asia Pacific (Sydney), Asia Pacific (Mumbai), Europe (Ireland), and Europe (Frankfurt) where reserved capacity fleets are supported.

You can follow the Windows Docker image sample to get started. To configure your own AMIs in reserved capacity fleets, please visit the reserved capacity documentation. To learn more about how to get started with CodeBuild, visit the AWS CodeBuild product page.

Read more


AWS Fault Injection Service now generates experiment reports

AWS Fault Injection Service (AWS FIS) now generates reports for experiments, reducing the time and effort needed to produce evidence of resilience testing. The report summarizes experiment actions and captures the application’s response from a customer-provided Amazon CloudWatch dashboard.

With AWS FIS, you can run fault injection experiments to create realistic failure conditions under which to practice your disaster recovery and failover tests. To provide evidence of this testing and your application’s recovery response, you can configure experiments to generate a report that you can download from the AWS FIS Console and that is automatically delivered to an Amazon S3 bucket of your choice. After the experiment completes, you can review the report to evaluate the impact of the experiment on your key application and resource metrics. Additionally, you can share the reports with stakeholders, including your compliance teams and auditors as evidence of required testing.
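For illustration, enabling reports on an existing experiment template might look like this boto3 sketch (the experimentReportConfiguration field names are assumptions drawn from this announcement; check the FIS API reference for the exact shape):

    import boto3

    fis = boto3.client("fis")

    # Configure report generation: capture a CloudWatch dashboard and
    # deliver the report to an S3 bucket of your choice.
    fis.update_experiment_template(
        id="EXT123456789abcdef",  # hypothetical template ID
        experimentReportConfiguration={  # assumed field names throughout
            "outputs": {
                "s3Configuration": {
                    "bucketName": "my-resilience-evidence",  # hypothetical bucket
                    "prefix": "fis-reports/",
                }
            },
            "dataSources": {
                "cloudWatchDashboards": [
                    {"dashboardIdentifier": "arn:aws:cloudwatch::123456789012:dashboard/app-health"}
                ]
            },
            "preExperimentDuration": "PT10M",
            "postExperimentDuration": "PT10M",
        },
    )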

Experiment reports are generally available in all commercial AWS Regions where FIS is available. To get started, you can log into the AWS FIS Console, or you can use the FIS API, SDK, or AWS CLI. For detailed pricing information, please visit the FIS pricing page. To learn more, view the documentation.

Read more


AWS CodePipeline open-sources starter templates for a simplified getting-started experience

Today, AWS CodePipeline open-sourced its starter templates library, which allows you to view the CloudFormation templates that power the different pipeline scenarios available in CodePipeline.

The starter template library is a valuable resource if you are new to CodePipeline. With the starter templates, you can see the resources being provisioned, understand how different pipeline stages are configured, and use these templates as a starting point for building more advanced pipelines. This increased transparency allows you to take a more hands-on approach to your CI/CD workflows and align them with your specific business requirements.

AWS CodePipeline starter templates library is released as an open-source project under the Apache 2.0 license. You can access the source code in the GitHub repository here. For more information about AWS CodePipeline, visit our product page.

Read more


Configure Route 53 CIDR block rules based on Internet Monitor suggestions

With Amazon CloudWatch Internet Monitor’s new traffic optimization suggestions feature, you can configure your Amazon Route 53 CIDR blocks to map your application’s client users to an optimal AWS Region based on network behavior.

Internet Monitor now provides actionable suggestions to help you optimize your Route 53 IP-based routing configurations. By leveraging the new traffic insights for your application, you can easily identify the optimal AWS Regions for routing your end user traffic, and then configure your Route 53 IP-based routing based on these recommendations.

Internet Monitor collects performance data and measures latency for your client subnets behind each DNS resolver. This enables Internet Monitor to recommend the AWS Region that will provide the lowest latency for your users, based on their locations, so that you can fine-tune your DNS routing to provide the best performance for users.
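For example, once Internet Monitor suggests a better Region for a set of client subnets, you could record those subnets in a Route 53 CIDR collection; a minimal boto3 sketch follows (the CIDR blocks and names are illustrative placeholders):

    import boto3

    route53 = boto3.client("route53")

    # Create a CIDR collection for the client subnets that Internet
    # Monitor identified as better served from eu-west-1.
    collection = route53.create_cidr_collection(
        Name="client-subnets",
        CallerReference="imn-suggestion-2024-12-01",
    )
    collection_id = collection["Collection"]["Id"]

    # Add the suggested subnets as a named location in the collection.
    route53.change_cidr_collection(
        Id=collection_id,
        Changes=[{
            "LocationName": "eu-west-1-clients",
            "Action": "PUT",
            "CidrList": ["203.0.113.0/24", "198.51.100.0/24"],
        }],
    )

You can then reference the location from an IP-based routing record set through its CidrRoutingConfig to steer those clients to the recommended Region.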

To learn more, visit the CloudWatch Internet Monitor user guide documentation.

Read more


AWS CodeBuild now supports additional compute types for reserved capacity

AWS CodeBuild now supports 18 new compute options for your reserved capacity fleets. You can select up to 96 vCPUs and 192 GB of memory to build and test your software applications on Linux x86, Arm, and Windows platforms. AWS CodeBuild is a fully managed continuous integration service that compiles source code, runs tests, and produces software packages ready for deployment.

Customers using reserved capacity can now access the new compute types by configuring vCPU, memory size, and disk space attributes on the fleets. With the addition of these new types, you now have a wider range of compute options across different Linux and Windows platforms for your workloads.
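For illustration, a hedged boto3 sketch of requesting a large Linux fleet by attributes rather than a named compute type (the ATTRIBUTE_BASED_COMPUTE value and the computeConfiguration keys are assumptions; verify them in the CodeBuild documentation):

    import boto3

    codebuild = boto3.client("codebuild")

    # Ask for capacity by vCPU, memory, and disk attributes instead of
    # a fixed BUILD_GENERAL1_* size.
    codebuild.create_fleet(
        name="large-linux-fleet",
        baseCapacity=1,
        environmentType="LINUX_CONTAINER",
        computeType="ATTRIBUTE_BASED_COMPUTE",  # assumed value
        computeConfiguration={                   # assumed key names
            "vCpu": 96,
            "memory": 192,  # GB
            "disk": 824,    # GB; illustrative
            "machineType": "GENERAL",
        },
    )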

The new compute types are now available in US East (N. Virginia), US East (Ohio), US West (Oregon), South America (Sao Paulo), Asia Pacific (Singapore), Asia Pacific (Tokyo), Asia Pacific (Sydney), Asia Pacific (Mumbai), Europe (Ireland), and Europe (Frankfurt) where reserved capacity fleets are supported.

To learn more about compute options in reserved capacity, please visit our documentation. To learn more about how to get started with CodeBuild, visit the AWS CodeBuild product page.
 

Read more


Amazon SageMaker Notebook Instances now support JupyterLab 4 notebooks

We're excited to announce the availability of JupyterLab 4 on Amazon SageMaker Notebook Instances, providing you with a powerful and modern interactive development environment (IDE) for your data science and machine learning (ML) workflows.

With this update, you can now leverage the latest features and improvements in JupyterLab 4, including faster performance and notebook windowing, making working with large notebooks much more efficient. The Extension Manager now includes both prebuilt Python extensions and extensions from PyPI, making it easier to discover and install the tools you need. The Search and Replace functionality has been improved with new features, including highlighting matches in rendered Markdown cells, searching in the current selection, and regular expression support for replacements. By providing JupyterLab 4 on Amazon SageMaker Notebook Instances, we're empowering you with a cutting-edge development environment to boost your productivity and efficiency when building ML models and exploring data.

JupyterLab 4 notebooks are available in all commercial AWS Regions where SageMaker Notebook Instances are available. Visit the developer guides for instructions on setting up and using SageMaker notebook instances.
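For illustration, launching a notebook instance on the new platform might look like this boto3 sketch (the platform identifier below is an assumption; check the developer guide for the identifier that maps to JupyterLab 4):

    import boto3

    sagemaker = boto3.client("sagemaker")

    # Create a notebook instance on the JupyterLab 4 platform.
    sagemaker.create_notebook_instance(
        NotebookInstanceName="jl4-notebook",
        InstanceType="ml.t3.medium",
        RoleArn="arn:aws:iam::123456789012:role/sagemaker-notebook-role",  # placeholder role
        PlatformIdentifier="notebook-al2-v3",  # assumed JupyterLab 4 platform identifier
    )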

Read more


game-development

Amazon CloudWatch Internet Monitor adds AWS Local Zones support for VPC subnets

Today, Amazon CloudWatch Internet Monitor introduces support for select AWS Local Zones. Now, you can monitor internet traffic performance for VPC subnets deployed in Local Zones.

With this new feature, you can also view optimization suggestions that include Local Zones. On the Optimize tab in the Internet Monitor console, select the toggle to include Local Zones in traffic optimization suggestions for your application. Additionally, you can compare your current configuration with other supported Local Zones. Select the option to see more optimization suggestions, and then choose specific Local Zones to compare. By comparing latency differences, you can determine the proposed best configuration for your traffic.

At launch, CloudWatch Internet Monitor supports the following Local Zones: us-east-1-dfw-2a, us-east-1-mia-2a, us-east-1-qro-1a, us-east-1-lim-1a, us-east-1-atl-2a, us-east-1-bue-1a, us-east-1-mci-1a, us-west-2-lax-1a, us-west-2-lax-1b, and af-south-1-los-1a.

To learn more, visit the Internet Monitor user guide documentation.

Read more


Amazon GameLift adds containers for faster dev iteration and simplified management

We are excited to announce that Amazon GameLift now supports containers for building, deploying, and running game server packages. Amazon GameLift is a fully managed service that allows developers to quickly manage and scale dedicated game servers for multiplayer games. With this new capability, Amazon GameLift supports end-to-end development of containerized workloads, including deployment and scaling on premises, in the cloud, or in hybrid configurations. This reduces the time it takes to deploy a new version to approximately 5 minutes, makes production updates faster, and removes the need to host separate customized development environments for quick iteration.

Containers package the entire runtime environment needed to run game servers, including code, dependencies, and configuration files. This allows developers to seamlessly move game server builds between local machines, staging, and production deployments without worrying about missing dependencies or configuration drift. Containers also enable efficient resource utilization by running multiple isolated game servers on the same host machine. Overall, containerization simplifies deployment, ensures consistent and secure environments, and optimizes resource usage for game servers. Containers integrate with AWS Graviton instances and Spot Instances, and run games designed for containerized environments, including those built with popular game engines like Unreal and Unity.

Amazon GameLift managed containers support is now generally available in all Amazon GameLift Regions except the AWS China Regions. To get started with Amazon GameLift managed containers, visit the Amazon GameLift managed containers documentation.

Read more


internet-of-things

Announcing Commands feature for AWS IoT Device Management

Today, AWS IoT Device Management announced the general availability of the Commands feature, a managed capability that allows developers to build innovative applications where users can perform remote command and control actions on targeted devices and track the status of those executions. With this feature, you can send instructions, trigger device actions, or modify device configuration settings on demand, simplifying the development of consumer-facing applications.

Using the Commands feature, you can set fine-grained access controls, timeout settings, and receive real-time updates and notifications for each command execution, without having to manually create and manage MQTT topics, payload formats, Rules, Lambda functions, and status tracking. In addition, the feature supports custom payload formats, allowing you to define and store command entities as AWS resources for recurring use.
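A hedged boto3 sketch of the flow might look like the following (parameter names and the data-plane call are assumptions based on this announcement; check the AWS IoT API reference):

    import boto3

    iot = boto3.client("iot")

    # Define a reusable command resource with a custom JSON payload.
    command = iot.create_command(
        commandId="reboot-device",
        namespace="AWS-IoT",
        displayName="Reboot device",
        payload={
            "content": b'{"action": "reboot"}',
            "contentType": "application/json",
        },
    )

    # Execute the command against a target thing from the jobs data plane
    # (call name and parameters are assumptions).
    iot_data = boto3.client("iot-jobs-data")
    execution = iot_data.start_command_execution(
        commandArn=command["commandArn"],
        targetArn="arn:aws:iot:us-east-1:123456789012:thing/my-device",  # hypothetical thing
        executionTimeoutSeconds=60,
    )
    print(execution["executionId"])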

The AWS IoT Device Management commands feature is available in all AWS Regions where AWS IoT Device Management is offered. To learn more, see technical documentation. To get started, log in to the AWS IoT Management Console or use the CLI.
 

Read more


AWS IoT SiteWise announces new generative AI-powered industrial assistant

AWS IoT SiteWise is a managed service that simplifies the collection, organization, and monitoring of industrial equipment data at scale. Today, we are excited to announce the general availability of AWS IoT SiteWise Assistant, a generative AI-powered assistant in AWS IoT SiteWise that allows industrial users to gain insights, solve problems, and take actions from their operational data and other data sources intuitively using natural language queries.

With the AWS IoT SiteWise Assistant, you can easily interact with your operational data by clicking on alarms in the SiteWise Monitor dashboard to get summaries or by asking questions like "What assets have active alarms?" or "How do I fix the wind turbine's low RPM issue?". The assistant understands the context of your industrial data in AWS IoT SiteWise from sources like sensors, machines, and related processes, and then contextualizes the data with your centralized knowledge base using Amazon Kendra to provide useful insights, empowering faster decision making to reduce downtime, optimize processes, and improve productivity.

AWS IoT SiteWise Assistant introduces new APIs that allow industrial solutions to access these insights on-demand. Developers can integrate capabilities of the Assistant into their industrial applications using updated IoT AppKit widgets like Chatbots, Line Charts, and KPI Gauges. Additionally, a Preview of the new Assistant-aware AWS IoT SiteWise Monitor portal offers a no-code experience for visualizing key data-driven insights.
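As a sketch of the on-demand API, you might call the assistant like this with boto3 (the parameter names and the streamed response shape are assumptions; see the InvokeAssistant API reference):

    import boto3

    sitewise = boto3.client("iotsitewise")

    # Ask the assistant a question about your operational data.
    response = sitewise.invoke_assistant(
        conversationId="turbine-troubleshooting-1",  # hypothetical conversation ID
        message="What assets have active alarms?",
    )

    # The answer comes back as an event stream; print whatever chunks
    # arrive (the event shape here is an assumption).
    for event in response["body"]:
        print(event)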

AWS IoT SiteWise Assistant is now available in US East (N. Virginia), US West (Oregon), Asia Pacific (Sydney), and Europe (Ireland). Check out the user guide, API reference, and launch blog to learn more.

Read more


AWS IoT Core adds capabilities to enrich MQTT messages and simplify permission management

AWS IoT Core, a managed cloud service that lets you securely connect Internet of Things (IoT) devices to the cloud and manage them at scale, announces two new capabilities: the ability to enrich MQTT messages with additional data, and thing-to-connection association for simplified permission management. The message enrichment capability enables developers to augment MQTT messages from devices with additional information from the thing registry, without modifying their devices. Thing-to-connection association enables mapping an MQTT client to a registry thing for client IDs that don’t match the thing name. This enables developers to leverage registry information in IoT policies, easily associate device actions with lifecycle events, and utilize existing capabilities like custom cost allocation and resource-specific logging, previously available only when client IDs and thing names matched.

To enrich all messages from devices, developers can define a subset of registry attributes as propagating attributes, and then customize their message routing and processing workflows using this appended data. For example, in automotive applications, developers can selectively route messages to the desired backend depending on the appended metadata, such as the vehicle make and type stored in the thing registry. Additionally, with thing-to-connection association, developers can leverage existing features such as using registry metadata in IoT policies, associating AWS IoT Core lifecycle events with a thing, performing custom cost allocation through billing groups, and enabling resource-specific logging, even if the MQTT client ID and thing name differ.

These new features are available in all AWS Regions where AWS IoT Core is available. For more information, refer to the developer guide and API documentation.

Read more


management-and-governance

Amazon CloudWatch now provides centralized visibility into telemetry configurations

Amazon CloudWatch now offers centralized visibility into critical AWS service telemetry configurations, such as Amazon VPC Flow Logs, Amazon EC2 Detailed Metrics, and AWS Lambda Traces. This enhanced visibility enables central DevOps teams, system administrators, and service teams to identify potential gaps in their infrastructure monitoring setup. The telemetry configuration auditing experience seamlessly integrates with AWS Config to discover AWS resources, and can be turned on for the entire organization using the new AWS Organizations integration with Amazon CloudWatch.

With visibility into telemetry configurations, you can identify monitoring gaps that might have been missed in your current setup. For example, this helps you identify gaps in your EC2 detailed metrics so that you can address them and easily detect short-lived performance spikes and build responsive auto-scaling policies. You can audit telemetry configuration coverage at both resource type and individual resource levels, refining the view by filtering across specific accounts, resource types, or resource tags to focus on critical resources.

The telemetry configurations auditing experience is available in US East (N. Virginia), US West (Oregon), US East (Ohio), Asia Pacific (Tokyo), Asia Pacific (Singapore), Asia Pacific (Sydney), Europe (Frankfurt), Europe (Ireland), and Europe (Stockholm) regions. There is no additional cost to turn on the new experience, including for AWS Config.

You can get started with auditing your telemetry configurations using the Amazon CloudWatch Console, by clicking on Telemetry config in the navigation panel, or programmatically using the API/CLI. To learn more, visit our documentation.

Read more


AWS Config now supports a service-linked recorder

AWS Config added support for a service-linked recorder, a new type of AWS Config recorder that is managed by an AWS service and can record configuration data on service-specific resources, such as the new Amazon CloudWatch telemetry configurations audit. By enabling the service-linked recorder in Amazon CloudWatch, you gain centralized visibility into critical AWS service telemetry configurations, such as Amazon VPC Flow Logs, Amazon EC2 Detailed Metrics, and AWS Lambda Traces.

With service-linked recorders, an AWS service can deploy and manage an AWS Config recorder on your behalf to discover resources and utilize the configuration data to provide differentiated features. For example, an Amazon CloudWatch managed service-linked recorder helps you identify monitoring gaps within specific critical resources within your organization, providing a centralized, single-pane view of telemetry configuration status. Service-linked recorders are immutable, ensuring consistency, preventing configuration drift, and simplifying the experience. Service-linked recorders operate independently of any existing AWS Config recorder, if one is enabled. This allows you to independently manage your AWS Config recorder for your specific use cases while authorized AWS services manage the service-linked recorder for feature-specific requirements.

Amazon CloudWatch managed service-linked recorder is now available in the US East (N. Virginia), US West (Oregon), US East (Ohio), Asia Pacific (Tokyo), Asia Pacific (Singapore), Asia Pacific (Sydney), Europe (Frankfurt), Europe (Ireland), and Europe (Stockholm) Regions. The AWS Config service-linked recorder specific to the Amazon CloudWatch telemetry configuration feature is available to customers at no additional cost.

To learn more, please refer to our documentation.
 

Read more


Amazon Web Services announces declarative policies

Today, AWS announces the general availability of declarative policies, a new management policy type within AWS Organizations. These policies simplify the way customers enforce durable intent, such as baseline configuration for AWS services within their organization. For example, customers can configure EC2 to allow instance launches using AMIs vended by specific providers and block public access in their VPC with a few simple clicks or commands for their entire organization using declarative policies.

Declarative policies are designed to prevent actions that are non-compliant with the policy. The configuration defined in the declarative policy is maintained even when services add new APIs or features, or when customers add new principals or accounts to their organization. With declarative policies, governance teams have access to the account status report, which provides insight into the current configuration for an AWS service across their organization. This helps them assess readiness to enforce configuration at scale. Administrators can provide additional transparency to end users by configuring custom error messages to redirect them to internal wikis or ticketing systems through declarative policies.

To get started, navigate to the AWS Organizations console to create and attach declarative policies. You can also use AWS Control Tower, AWS CLI or CloudFormation templates to configure these policies. Declarative policies today support EC2, EBS and VPC configurations with support for other services coming soon. To learn more see documentation and blog post.
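As an illustration, a hedged boto3 sketch of creating and attaching an EC2 declarative policy follows (the policy document syntax is an assumption based on this announcement; the DECLARATIVE_POLICY_EC2 type and the root ID are placeholders to verify against the documentation):

    import json
    import boto3

    orgs = boto3.client("organizations")

    # Create an EC2 declarative policy that blocks new public sharing
    # of AMIs (document syntax is an illustrative assumption).
    policy = orgs.create_policy(
        Name="block-public-amis",
        Description="Block new public sharing of AMIs org-wide",
        Type="DECLARATIVE_POLICY_EC2",
        Content=json.dumps({
            "ec2_attributes": {
                "image_block_public_access": {
                    "state": {"@@assign": "block_new_sharing"}
                }
            }
        }),
    )

    # Attach it to the organization root (or an OU or account).
    orgs.attach_policy(
        PolicyId=policy["Policy"]["PolicySummary"]["Id"],
        TargetId="r-examplerootid",  # hypothetical root ID
    )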

Read more


Amazon CloudWatch and Amazon OpenSearch Service launch an integrated analytics experience

Amazon Web Services announces a new integrated analytics experience and zero-ETL integration between Amazon CloudWatch and Amazon OpenSearch Service for customers to get the best of both services. CloudWatch customers can now leverage OpenSearch’s Piped Processing Language (PPL) and OpenSearch SQL. Additionally, CloudWatch customers can accelerate troubleshooting with out-of-the-box curated dashboards for vended logs like Amazon Virtual Private Cloud (VPC), AWS CloudTrail, and AWS WAF. OpenSearch customers can now analyze CloudWatch Logs without having to duplicate data.

With this integration, CloudWatch Logs customers have two more query languages for log analytics, in addition to CloudWatch Logs Insights QL. Customers can use SQL to analyze data, correlate logs using JOINs and sub-queries, and use SQL functions (JSON, mathematical, datetime, and string functions) for intuitive log analytics. They can also use OpenSearch PPL to filter, aggregate, and analyze their data. With a few clicks, CloudWatch Logs customers can create OpenSearch dashboards for VPC, WAF, and CloudTrail logs to monitor, analyze, and troubleshoot using visualizations derived from the logs. OpenSearch customers no longer have to copy logs from CloudWatch for analysis or create ETL pipelines. Now, they can use OpenSearch Discover to analyze CloudWatch logs in place, and build indexes and dashboards on CloudWatch Logs.
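For illustration, a hedged boto3 sketch of running a PPL query against a log group (the queryLanguage parameter value and the query text are assumptions based on this announcement):

    import time
    from datetime import datetime, timedelta, timezone
    import boto3

    logs = boto3.client("logs")
    now = datetime.now(timezone.utc)

    # Start a PPL query over the last hour of a log group; the query
    # text here is illustrative.
    query = logs.start_query(
        logGroupNames=["/vpc/flow-logs"],  # hypothetical log group
        startTime=int((now - timedelta(hours=1)).timestamp()),
        endTime=int(now.timestamp()),
        queryString="fields `@message` | head 10",
        queryLanguage="PPL",  # assumed value; 'SQL' also expected
    )

    # Poll until the query finishes, then print the rows.
    while True:
        result = logs.get_query_results(queryId=query["queryId"])
        if result["status"] in ("Complete", "Failed", "Cancelled"):
            break
        time.sleep(1)
    print(result["results"])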

This is now available in the regions where OpenSearch Service direct query is available. Please read pricing and free tier details on Amazon CloudWatch Pricing, and OpenSearch Service Pricing. To get started, please refer to Amazon CloudWatch Logs vended dashboard and Amazon OpenSearch Service Developer Guide.

Read more


Amazon CloudWatch Container Insights launches enhanced observability for Amazon ECS

Amazon CloudWatch Container Insights introduces enhanced observability for Amazon Elastic Container Service (ECS) running on Amazon EC2 and Amazon Fargate with out-of-the-box detailed metrics, from cluster level down to container level to deliver faster problem isolation and troubleshooting.

Enhanced observability enables customers to visually drill up and down across various container layers and directly spot issues like memory leaks in individual containers, reducing mean time to resolution. With enhanced observability, customers can now view their clusters, services, tasks, or containers sorted by resource consumption, quickly identify anomalies, and mitigate risks proactively before the end-user experience is impacted. Using Container Insights’ new landing page, customers can now easily understand the overall health and performance of clusters across multiple accounts, identify the ones operating under high utilization, and pinpoint the root cause by browsing directly to the related detailed dashboard views, saving time and effort.

You can get started with enhanced observability at the cluster level or account level by selecting the “Enhanced” radio button in the Amazon ECS console, or through the AWS CLI, CloudFormation, and the CDK. You can also collect instance-level metrics from EC2 by launching the CloudWatch agent as a daemon service on your Container Insights-enabled clusters.
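For example, with boto3 the cluster-level setting might be applied like this minimal sketch (the cluster name is a placeholder):

    import boto3

    ecs = boto3.client("ecs")

    # Turn on Container Insights with enhanced observability for an
    # existing ECS cluster.
    ecs.update_cluster_settings(
        cluster="my-cluster",  # hypothetical cluster name
        settings=[{"name": "containerInsights", "value": "enhanced"}],
    )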

Container Insights is available in all public AWS Regions, including the AWS GovCloud (US) Regions, China (Beijing, operated by Sinnet), and China (Ningxia, operated by NWCD). Container Insights with enhanced observability for ECS comes with flat metric pricing; see the pricing page for details. For further information, visit the Container Insights documentation.

Read more


Amazon CloudWatch adds network performance monitoring for AWS workloads using flow monitors

Amazon CloudWatch Network Monitoring now allows you to monitor network performance of your AWS workloads by using flow monitors. The new feature provides near real-time visibility of network performance for workloads between compute instances such as Amazon EC2 and Amazon EKS, and AWS services such as Amazon S3, Amazon RDS, and Amazon DynamoDB, enabling you to rapidly detect and attribute network-driven impairments for your workloads.

CloudWatch Network Monitoring uses flow monitors to provide TCP-based performance metrics for packet loss and latency, and network health indicators of your AWS workloads to help you quickly pinpoint the root cause of issues. Flow monitors help you determine if a problem is caused by your application stack or by the underlying AWS infrastructure, so that you can proactively monitor your end user experience. If you need to contact AWS Support, Network Monitoring provides AWS Support with the same network health information, along with details about the underlying infrastructure, to help accelerate troubleshooting and resolution.

We are consolidating CloudWatch Internet Monitor and CloudWatch Network Monitor within CloudWatch Network Monitoring, which now includes flow monitors, synthetic monitors, and internet monitors. Use flow monitors to passively monitor the network performance of AWS workloads, synthetic monitors to actively monitor hybrid network segments, and internet monitors to monitor internet segments.

For the full list of AWS Regions where Network Monitoring for AWS workloads is available, visit the Regions list. To learn more, visit the Amazon CloudWatch Network Monitoring documentation.
 

Read more


AWS announces Amazon CloudWatch Database Insights

AWS announces the general availability of Amazon CloudWatch Database Insights with support for Amazon Aurora PostgreSQL and Amazon Aurora MySQL. Database Insights is a database observability solution that provides a curated experience designed for DevOps engineers, application developers, and database administrators (DBAs) to expedite database troubleshooting and gain a holistic view into their database fleet health.

Database Insights consolidates logs and metrics from your applications, your databases, and the operating systems on which they run into a unified view in the console. Using its pre-built dashboards, recommended alarms, and automated telemetry collection, you can monitor the health of your database fleets and use a guided troubleshooting experience to drill down to individual instances for root-cause analysis. Application developers can correlate the impact of database dependencies with the performance and availability of their business-critical applications, because they can drill down from the context of their application performance view in Amazon CloudWatch Application Signals to the specific dependent database in Database Insights.

You can get started with Database Insights by enabling it on your Aurora clusters using the Aurora service console, AWS APIs, and SDKs. Database Insights delivers database health monitoring aggregated at the fleet level, as well as instance-level dashboards for detailed database and SQL query analysis.
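For illustration, a hedged boto3 sketch of enabling the advanced mode on an Aurora cluster (the DatabaseInsightsMode value and the Performance Insights settings it requires are assumptions; check the Database Insights documentation):

    import boto3

    rds = boto3.client("rds")

    # Switch an Aurora cluster to advanced Database Insights mode.
    rds.modify_db_cluster(
        DBClusterIdentifier="my-aurora-cluster",  # hypothetical cluster
        DatabaseInsightsMode="advanced",           # assumed value
        EnablePerformanceInsights=True,
        PerformanceInsightsRetentionPeriod=465,    # assumed required retention, in days
        ApplyImmediately=True,
    )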

Database Insights is available in all public AWS Regions and applies a new vCPU-based pricing – see pricing page for details. For further information, visit the Database Insights documentation.
 

Read more


AWS Control Tower launches managed controls using declarative policies

Today, we are excited to announce the general availability of managed, preventive controls implemented using declarative policies in AWS Control Tower. These policies are a set of new optional controls that help you consistently enforce the desired configuration for a service. For example, customers can deploy a declarative, policy-based preventive control that disallows public sharing of Amazon Machine Images (AMIs). Declarative policies help you ensure that the controls configured are always enforced regardless of the introduction of new APIs, or when new principals or accounts are added.

Today, AWS Control Tower is releasing declarative, policy-based preventive controls for Amazon Elastic Compute Cloud (Amazon EC2), Amazon Virtual Private Cloud (Amazon VPC), and Amazon Elastic Block Store (Amazon EBS). These controls help you achieve control objectives such as limiting network access, enforcing least privilege, and managing vulnerabilities. AWS Control Tower’s new declarative policy-based preventive controls complement AWS Control Tower’s existing control capabilities, enabling you to disallow actions that lead to policy violations.

The combination of preventive, proactive, and detective controls helps you monitor whether your multi-account AWS environment is secure and managed in accordance with best practices. For a full list of AWS regions where AWS Control Tower is available, see AWS Region Table.

Read more


AWS Control Tower adds prescriptive backup plans to landing zone capabilities

Today, AWS Control Tower added AWS Backup to the list of AWS services you can optionally configure with prescriptive guidance. This configuration option allows you to select from a range of recommended backup plans, seamlessly integrating data backup and recovery workflows into your Control Tower landing zone and organizational units. A landing zone is a well-architected, multi-account AWS environment based on security and compliance best practices. AWS Control Tower automates the setup of a new landing zone using best-practices blueprints for identity, federated access, logging, account structure, and with this launch adds data retention.

When you choose to enable AWS Backup on your landing zone and then select applicable organizational units, Control Tower creates a backup plan with predefined rules, like retention days, frequency, and the time window during which backups occur, that define how to back up AWS resources across all governed member accounts. Applying the backup plan at the Control Tower landing zone level ensures it is consistent for all member accounts, in line with best-practice recommendations from AWS Backup.

For a full list of Regions where AWS Control Tower is available, see the AWS Region Table. To learn more, visit the AWS Control Tower homepage or see the AWS Control Tower User Guide.

Read more


AWS AppConfig supports automatic rollback safety from third-party alerts

AWS AppConfig has added support for third-party monitors to trigger automatic rollbacks when there are problems with updates to feature flags, experimental flags, or configuration data. Customers can now connect AWS AppConfig to third-party application performance monitoring (APM) solutions; previously, monitoring required Amazon CloudWatch. This monitoring gives customers more confidence and additional safety controls when making any change in production.

Unexpected downtime or degraded performance can occur from faulty changes to feature flags or configuration data. AWS AppConfig provides safety guardrails to reduce this risk. One key safety guardrail for AWS AppConfig is the ability to have AWS AppConfig immediately roll back a change when a monitor alerts during the rollout of a feature flag or configuration change. This automation can typically remediate problems faster than a human operator can. Customers can use AWS AppConfig Extensions to connect to any API-enabled APM, including proprietary solutions.
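As an illustration, registering a third-party monitor as an extension might look like this boto3 sketch (the AT_DEPLOYMENT_TICK action point name is an assumption based on this announcement; the Lambda function here is a hypothetical shim that queries your APM and raises an error to trigger rollback):

    import boto3

    appconfig = boto3.client("appconfig")

    # Register an extension that checks a third-party APM on each
    # deployment tick and fails the deployment if an alert is active.
    appconfig.create_extension(
        Name="apm-rollback-monitor",
        Description="Roll back deployments when the APM reports an alert",
        Actions={
            "AT_DEPLOYMENT_TICK": [  # assumed action point name
                {
                    "Name": "check-apm-alerts",
                    "Uri": "arn:aws:lambda:us-east-1:123456789012:function:check-apm",  # hypothetical
                    "RoleArn": "arn:aws:iam::123456789012:role/appconfig-extension",    # hypothetical
                }
            ]
        },
    )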

Third-party alarm rollback for AWS AppConfig is available in all AWS Regions, including the AWS GovCloud (US) Regions. To get started, use the AWS AppConfig Getting Started Guide, or learn about AWS AppConfig automatic rollback.
 

Read more


Amazon CloudWatch adds context to observability data in service consoles, accelerating analysis

Amazon CloudWatch now adds context to observability data, making it much easier for IT operators, application developers, and Site Reliability Engineers (SREs) to navigate related telemetry, visualize relationships between resources, and accelerate analysis. This new feature transforms disparate metrics and logs into real-time insights, to identify root cause of issues faster and improve operational efficiency.

With this feature, Amazon CloudWatch now automatically visualizes the relationships within observability data and underlying AWS resources, such as Amazon EC2 instances and AWS Lambda functions. This feature is integrated across the AWS Management Console, accessible from multiple entry points including CloudWatch widgets, CloudWatch alarms, CloudWatch Application Signals, and CloudWatch Container Insights. Selecting this feature opens a side panel where you can explore and dive deeper into related metrics and logs, all without leaving your current view. By selecting other metrics or resources of interest within the panel, you can streamline your troubleshooting process.

This new capability is enabled by default in all commercial AWS Regions. To view and explore related telemetry and resources, we recommend updating to the latest version of the Amazon CloudWatch agent.

To learn more, visit the Amazon CloudWatch product page or view the documentation.

Read more


Find security, compliance, and operating metrics in AWS Resource Explorer

Today, AWS announced the general availability of a new console experience in AWS Resource Explorer that centralizes resource insights and properties from AWS services. With this release, you now have a single console experience to use simple keyword-based search for your AWS resources, view relevant resource properties, and confidently take action to organize your resources.

You can now inspect resource properties, resource-level cost with AWS Cost Explorer, AWS Security Hub findings, AWS Config compliance and configuration history, event timelines with AWS CloudTrail, and a relationship graph showing connected resources. You can also take actions on resources directly from the Resource Explorer console, such as manage tags, add resources to applications, and get additional information about a resource with Amazon Q. For example, now you can use Resource Explorer to search for untagged AWS Lambda functions, inspect the properties and tags of a specific function, examine a relationship graph to see what other resources it is connected to, and tag the function accordingly – all from a single console.
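For example, the untagged-function search from that walkthrough might look like this boto3 sketch (the "tag:none" query term follows the Resource Explorer search syntax; verify it against the search query reference):

    import boto3

    explorer = boto3.client("resource-explorer-2")

    # Find Lambda functions that have no tags at all.
    resp = explorer.search(QueryString="resourcetype:lambda:function tag:none")

    for resource in resp["Resources"]:
        print(resource["Arn"])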

Resource Explorer is available at no additional charge, though features such as compliance information and configuration history require use of AWS Config, which is charged separately. These features are available in all AWS Regions where Resource Explorer is generally available. For more information on Resource Explorer, please visit our documentation. To learn more about how to configure Resource Explorer for your organization, view our multi-account search getting started guide.

Read more


Application Signals provides OTEL support via X-Ray OTLP endpoint for traces

CloudWatch Application Signals, an application performance monitoring (APM) solution, enables developers and operators to easily monitor the health and performance of their applications hosted across different compute platforms such as EKS, ECS and more. Customers can now use OpenTelemetry Protocol (OTLP), an open-source protocol, to send traces to the X-Ray OTLP endpoint, and unlock application performance monitoring capabilities with Application Signals.

OpenTelemetry Protocol (OTLP) is a standardized protocol for transmitting telemetry data from your applications to monitoring solutions like CloudWatch. Developers who use OpenTelemetry to instrument their applications can now send traces to the X-Ray OTLP endpoint, unlocking, via Application Signals, pre-built, standardized dashboards for critical application metrics (throughput, latency, errors), correlated trace spans, and interactions between applications and their dependencies (such as other AWS services). This provides operators with a complete picture of the application’s health, allowing them to pinpoint the source of performance issues. By creating Service Level Objectives (SLOs) within Application Signals, customers can track performance indicators of crucial application functions. This makes it simple to spot and address any operations falling short of their business goals. Finally, customers can also analyze application issues in business context, such as troubleshooting customer support tickets or finding the top customers impacted by application disruptions, by searching and analyzing transaction (or trace) spans.

OTLP endpoint for traces is available in all regions where Application Signals is generally available. For pricing, see Amazon CloudWatch pricing. See documentation to learn more.

Read more


AWS Systems Manager now supports Windows Server 2025, Ubuntu Server 24.04, and Ubuntu Server 24.10

AWS Systems Manager now supports instances running Windows Server 2025, Ubuntu Server 24.04, and Ubuntu Server 24.10. Systems Manager customers running these operating system versions now have access to all AWS Systems Manager Node Management capabilities, including Fleet Manager, Compliance, Inventory, Hybrid Activations, Session Manager, Run Command, State Manager, Patch Manager, and Distributor. For a full list of supported operating systems and machine types for AWS Systems Manager, see the user guide. Patch Manager enables you to automatically patch instances with both security-related and other types of updates across your infrastructure for a variety of common operating systems, including Windows Server, Amazon Linux, and Red Hat Enterprise Linux (RHEL). For a full list of supported operating systems for AWS Systems Manager Patch Manager, see the Patch Manager prerequisites user guide page.

This feature is available in all AWS Regions where AWS Systems Manager is available. For more information, visit the Systems Manager product page and Systems Manager documentation.
 

Read more


Announcing features to favorite applications and quickly access your recently used applications

Today, we’re excited to launch application favoriting and quick access features in the AWS Management Console. Now you can pin your most-used applications as favorites and quickly return to recently visited applications.

You can easily designate favorite applications with a single click and sort your most important applications, bringing favorites to the top of your list. Recently visited applications can now be accessed in the Recently Visited widget on Console Home, streamlining your workflow and reducing the time spent searching for frequently used resources. You can also access favorites, recently visited applications, and a list of all applications in the Services menu in the navigation bar from anywhere in the AWS Console.

These new features are available in all public AWS Regions.

To start using recently visited and favorited applications, visit the Applications widget on Console Home by signing into the AWS Management Console and use the star icon to designate favorite applications.

Read more


AWS Announces Amazon Q account resources chat in the AWS Console Mobile App

Today, Amazon Web Services (AWS) is announcing the general availability of Amazon Q Developer’s AWS account resources chat capability in the AWS Console Mobile Application. With this capability, you can use your device’s voice input and output capabilities along with natural language prompts to list resources in your AWS account, get specific resource details, and ask about related resources while on-the-go.

From the Amazon Q tab in the AWS Console Mobile App, you can ask Q to “list my running EC2 instances in us-east-1” or “list my S3 buckets” and Amazon Q returns a list of resource details, along with a summary. You can ask “what Amazon EC2 instances is Amazon CloudWatch alarm <name> monitoring” or ask “what related resources does my ec2 instance <id> have?” and Amazon Q will respond with specific resource details in a mobile friendly format.

The Console Mobile App lets users view and manage a select set of resources to stay informed and connected with their AWS resources while on-the-go. Visit the product page for more information about the Console Mobile Application.
 

Read more


Announcing general availability of AWS Chatbot SDK

AWS announces the general availability of AWS Chatbot SDKs. This launch provides developers with access to AWS Chatbot’s control plane APIs through the AWS SDK.

With this launch, customers can programmatically implement ChatOps workflows in their chat channels. They can now use the SDK to configure Microsoft Teams and Slack channels for monitoring and diagnosing issues, configure action buttons and command aliases so that channel members can fetch telemetry and diagnose issues quickly, and programmatically tag resources to enforce tag-based controls in their environments.
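For illustration, configuring a Slack channel from code might look like this boto3 sketch (the team ID, channel ID, role, and topic are hypothetical placeholders):

    import boto3

    chatbot = boto3.client("chatbot")

    # Wire an SNS alarm topic into a Slack channel for ChatOps.
    chatbot.create_slack_channel_configuration(
        ConfigurationName="prod-incident-channel",
        SlackTeamId="T0123456789",     # placeholder
        SlackChannelId="C0123456789",  # placeholder
        IamRoleArn="arn:aws:iam::123456789012:role/chatbot-channel-role",  # placeholder
        SnsTopicArns=["arn:aws:sns:us-east-1:123456789012:prod-alarms"],   # placeholder
    )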

AWS Chatbot SDKs are available at no additional cost in AWS Regions where AWS Chatbot is offered. Visit the AWS Chatbot product page and API guide in AWS Chatbot documentation to learn more.
 

Read more


AWS Resilience Hub introduces a summary view

AWS Resilience Hub introduces a new summary view, providing an executive-level view of the resilience posture of the application portfolio defined in Resilience Hub. The new summary view allows you to visualize the state of your application portfolio, so you can efficiently manage and improve your applications’ ability to withstand and recover from disruptions.

Understanding the current state of application resilience can be a challenge, especially when it comes to identifying which applications need attention and communicating this information across your organization. The new summary view in Resilience Hub helps you quickly identify applications that require remediation and streamline resilience management across your application portfolio. In addition to the new summary view, we are providing the ability to export the data powering the summary view so you can create custom reports for stakeholder communication. The summary and export functions allow teams to quickly assess the current state of application resilience and take the actions necessary to improve it.

The new summary view is available in all of the AWS Regions where AWS Resilience Hub is supported. For the most up-to-date availability information, see the AWS Regional Services List.

To learn more about AWS Resilience Hub, visit our product page. To get started with AWS Resilience Hub, sign into the AWS console.

Read more


Announcing the new AWS User Notifications SDK

Today, we announced the general availability of the AWS User Notifications SDK, which enables you to programmatically configure and receive notifications (e.g., AWS Health events, EC2 instance state changes, or CloudWatch alarms). The User Notifications SDK makes it easy to automate the creation of notification configurations in your accounts; for example, a Cloud Center of Excellence (CCoE) can set up AWS Health notifications for each provisioned account.

With the User Notifications SDK, you specify which events you want to be notified about and in which channels (email, AWS Chatbot for Microsoft Teams and Slack notifications, and AWS Console Mobile App push notifications), with no need to access the Management Console. Users with User Notifications permissions can enable notifications for use cases like AWS Health events, Amazon CloudWatch alarms, or Amazon EC2 instance state changes. For example, notify your team’s Slack channel whenever an EC2 instance in US East (N. Virginia) or Europe (Frankfurt) with the tag ‘production’ changes state to “stopped”.
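A hedged boto3 sketch of that example follows (the client name follows the new SDK, but the parameter names and event pattern are assumptions; verify them against the AWS User Notifications API reference):

    import boto3

    notifications = boto3.client("notifications")

    # Create a notification configuration, then subscribe it to EC2
    # state-change events in the two Regions of interest.
    config = notifications.create_notification_configuration(
        name="ec2-stopped-production",
        description="Notify when tagged production instances stop",
    )

    notifications.create_event_rule(
        notificationConfigurationArn=config["arn"],           # assumed response key
        source="aws.ec2",
        eventType="EC2 Instance State-change Notification",
        regions=["us-east-1", "eu-central-1"],
        eventPattern='{"detail": {"state": ["stopped"]}}',    # assumed pattern shape
    )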

The User Notifications SDK is offered at no additional cost.

For more information, visit the AWS User Notifications product page and documentation. To get started, go to AWS User Notifications API reference and AWS User Notifications Contacts API reference. CloudFormation support will be coming soon.

Read more


Amazon CloudWatch Synthetics now supports Playwright runtime to create canaries with NodeJS

CloudWatch Synthetics, which continuously monitors web applications and APIs by running scripted canaries to help you detect issues before they impact end users, now supports the Playwright framework for creating NodeJS canaries, enabling comprehensive monitoring and diagnosis of complex user journeys and of issues that are challenging to automate with other frameworks.

Playwright is an open-source automation library for testing web applications. You can now create multi-tab workflows in a canary using the Playwright runtime, which comes with the advantage of troubleshooting failed runs with logs stored directly in CloudWatch Logs in your AWS account. This replaces the previous method of storing logs as text files and enables you to leverage CloudWatch Logs Insights for query-based filtering, aggregation, and pattern analysis. You can now query CloudWatch Logs for your canaries using the canary run ID or step name, making the troubleshooting process faster and more precise than one relying on timestamp correlation to search logs. Playwright-based canaries also generate artifacts like reports, metrics, and HAR files, even when a canary times out, ensuring you have the data needed for root-cause analysis in those scenarios. Additionally, the new runtime simplifies canary configuration by allowing customization through a JSON file, removing the need to call a library function in the canary code.
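For illustration, creating a canary on the new runtime might look like this boto3 sketch (the runtime version string is an assumption; you can list supported runtimes with describe_runtime_versions(), and the bucket, key, and role are placeholders):

    import boto3

    synthetics = boto3.client("synthetics")

    # Create a Playwright NodeJS canary that runs every 5 minutes.
    synthetics.create_canary(
        Name="checkout-flow",
        Code={
            "S3Bucket": "my-canary-code",        # placeholder bucket
            "S3Key": "checkout-flow.zip",
            "Handler": "checkoutFlow.handler",
        },
        ArtifactS3Location="s3://my-canary-artifacts/checkout-flow",  # placeholder
        ExecutionRoleArn="arn:aws:iam::123456789012:role/canary-role",  # placeholder
        Schedule={"Expression": "rate(5 minutes)"},
        RuntimeVersion="syn-nodejs-playwright-1.0",  # assumed runtime name
    )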

The Playwright runtime is available for creating canaries in NodeJS in all commercial Regions at no additional cost to users.

To learn more about the runtime, see documentation, or refer to the user guide to get started with CloudWatch Synthetics.

Read more


Amazon CloudWatch Logs launches the ability to transform and enrich logs

Amazon CloudWatch Logs announces log transformation and enrichment to improve log analytics at scale with a consistent, context-rich format. Customers can add structure to their logs using pre-configured templates for common AWS services such as AWS Web Application Firewall (WAF) and Route 53, or build custom transformers with native parsers such as Grok. Customers can also rename existing attributes and add metadata to their logs, such as accountId and region.

Logs emitted from various sources vary widely in format and attribute names, which makes analysis across sources cumbersome. With today’s launch, customers can simplify their log analytics experience by transforming all their logs into a standardized JSON structure. Transformed logs accelerate the analytics experience through field indexes and discovered fields in CloudWatch Logs Insights, and provide flexibility for alarming via metric filters and forwarding via subscription filters. Customers can manage log transformations natively within CloudWatch without needing to set up complex pipelines.
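As an illustration, attaching a transformer that parses JSON events and enriches them with account metadata might look like this boto3 sketch (the processor names follow the transformation documentation but should be treated as assumptions; the log group is a placeholder):

    import boto3

    logs = boto3.client("logs")

    # Parse JSON events and append accountId/region keys to each event.
    logs.put_transformer(
        logGroupIdentifier="/services/checkout",  # hypothetical log group
        transformerConfig=[
            {"parseJSON": {}},
            {"addKeys": {
                "entries": [
                    {"key": "accountId", "value": "123456789012", "overwriteIfExists": False},
                    {"key": "region", "value": "us-east-1", "overwriteIfExists": False},
                ]
            }},
        ],
    )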

The log transformation and enrichment capability is available in all AWS Commercial Regions and is included with the existing Standard log class ingestion price. Logs Store (Archival) costs will be based on log size after transformation, which may exceed the original log volume. With a few clicks in the Amazon CloudWatch Console, customers can configure transformers at the log group level. Alternatively, customers can set up transformers at the account or log group level using the AWS Command Line Interface (AWS CLI), AWS CloudFormation, the AWS Cloud Development Kit (AWS CDK), and AWS SDKs. Read the documentation to learn more about this capability.
 

Read more


AWS Lambda supports application performance monitoring (APM) via CloudWatch Application Signals

AWS Lambda now supports Amazon CloudWatch Application Signals, an application performance monitoring (APM) solution, enabling developers and operators to easily monitor the health and performance of their serverless applications built using Lambda.

Customers want an easy way to quickly identify and troubleshoot performance issues to minimize the mean time to recovery (MTTR) and operational costs of running serverless applications. Now, Application Signals provides pre-built, standardized dashboards for critical application metrics (such as throughput, availability, latency, faults, and errors), correlated traces, and interactions between the Lambda function and its dependencies (such as other AWS services), without requiring any manual instrumentation or code changes from developers. This gives operators a single-pane-of-glass view of the health of the application and enables them to drill down to establish the root cause of performance anomalies. You can also create Service Level Objectives (SLOs) in Application Signals to closely track the performance KPIs of critical operations in your application, enabling you to easily identify and triage operations that do not meet your business KPIs. Application Signals auto-instruments your Lambda function using enhanced AWS Distro for OpenTelemetry (ADOT) libraries, delivering better performance (cold start latency and memory consumption) than before.

To get started, visit the Configuration tab in the Lambda console and enable Application Signals for your function with just one click in the “Monitoring and operational tools” section. To learn more, visit the launch blog post, the Lambda developer guide, and the Application Signals developer guide.
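If you manage functions programmatically rather than through the console, enablement amounts to attaching the instrumentation layer and its wrapper, along the lines of the hedged sketch below. The layer ARN shown is a placeholder; look up the current AWS-managed ADOT layer for your runtime and Region in the Application Signals documentation, and note that additional settings (such as active tracing) may also be required.

```python
import boto3

lam = boto3.client("lambda")

# Placeholder ARN: substitute the current AWS-managed ADOT layer for your
# runtime and Region from the Application Signals documentation.
ADOT_LAYER = "arn:aws:lambda:us-east-1:123456789012:layer:AWSOpenTelemetryDistroPython:1"

lam.update_function_configuration(
    FunctionName="my-function",
    Layers=[ADOT_LAYER],
    Environment={
        "Variables": {
            # Wrapper script shipped in the ADOT layer that auto-instruments
            # the function at startup.
            "AWS_LAMBDA_EXEC_WRAPPER": "/opt/otel-instrument",
        }
    },
)
```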

Application Signals for Lambda is available in all commercial AWS Regions where Lambda and CloudWatch Application Signals are available.
 

Read more


Introducing an AWS Management Console Visual Update (Preview)

Now available in Preview, the visual update in the AWS Management Console helps customers scan content, focus on the key information, and find what they are looking for more effectively, while preserving the familiar and consistent experience. The new, modern layout also provides easy access to contextual tools.

Customers now benefit from optimized information density that maximizes available content on screen, allowing them to see more at a glance. Thanks to reduced visual complexity, crisper styles, and improved use of color, the experience is more intuitive, readable, and efficient. We modernized the interface with rounder shapes and a new family of illustrations, complemented by added motion to bring moments of delight. While introducing these visual enhancements, we continue to offer a predictable experience that adheres to the highest accessibility standards.

The visual update is available in selected consoles across all AWS Regions, with the latest version of Cloudscape Design System. We will be extending the update across all services. Visit the AWS Management Console to experience the visual update.

Read more


Amazon CloudWatch launches full visibility into application transactions

AWS announces the general availability of an enhanced search and analytics experience in CloudWatch Application Signals. This feature empowers developers and on-call engineers with complete visibility into application transaction spans, which are the building blocks of distributed traces that capture detailed interactions between users and various application components.

This feature offers three core benefits. First, developers can answer any question related to application performance or end-user impact through an interactive visual editor and enhancements to Logs Insights queries. They can correlate spans with end-user issues using attributes like customer name or order number. With the new JSON parse and unnest functions in Logs Insights, they can link transactions to business events such as failed payments and troubleshoot them. Second, developers can diagnose rarely occurring issues, such as p99 latency spikes in APIs, with the enhanced troubleshooting capabilities in Amazon CloudWatch Application Signals that correlate application metrics with comprehensive transaction spans. Finally, CloudWatch Logs offers advanced features for transaction spans, including data masking, forwarding via subscription filters, and metric extraction. You can enable these capabilities for existing spans sent to X-Ray or by sending spans to a new OTLP (OpenTelemetry Protocol) endpoint for traces. This allows you to enhance your observability while maintaining flexibility in your setup.
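As a rough sketch of querying spans from code, the boto3 snippet below runs a Logs Insights query against span data. The log group name ("aws/spans") and the attribute field names are assumptions for illustration; adjust them to match what Application Signals emits in your account.

```python
import time
import boto3

logs = boto3.client("logs")

# "aws/spans" and the attribute names below are assumptions; check where
# Application Signals stores transaction spans in your account.
query_id = logs.start_query(
    logGroupNames=["aws/spans"],
    startTime=int(time.time()) - 3600,
    endTime=int(time.time()),
    queryString=(
        "fields @timestamp, attributes.order_number, durationNano "
        "| filter attributes.order_number = '12345' "
        "| sort @timestamp desc | limit 20"
    ),
)["queryId"]

# Poll until the query completes, then print the matching spans.
while (result := logs.get_query_results(queryId=query_id))["status"] in ("Scheduled", "Running"):
    time.sleep(1)
print(result["results"])
```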

You can search and analyze spans in all Regions where Application Signals is available. A new pricing option is also available, encompassing Application Signals, X-Ray traces, and complete visibility into transaction spans; see Amazon CloudWatch pricing. Refer to the documentation for more details.
 

Read more


The new AWS Systems Manager experience: Simplifying node management

The new AWS Systems Manager experience helps you scale operational efficiency by simplifying node management, making it easier to manage nodes running anywhere, whether they are EC2 instances, hybrid servers, or servers running in a multicloud environment. The new AWS Systems Manager experience gives you a comprehensive, centralized view to easily manage all of your nodes at scale.

With this launch, you can now see all managed and unmanaged nodes across your organization’s AWS accounts and Regions from a single place. You can also identify, diagnose, and remediate unmanaged nodes. Once remediated, meaning they are managed by Systems Manager, you can leverage the full suite of Systems Manager tools to patch nodes with security updates, securely connect to nodes without managing SSH keys or bastion hosts, automate operational commands at scale, and gain comprehensive visibility across your entire fleet. Systems Manager is also now integrated with Amazon Q Developer, which extends your ability to see and control your nodes from anywhere in the AWS console. For example, you can ask Amazon Q to “show me managed instances running Amazon Linux 1” to quickly get the information you need for operational investigations. It's the same powerful Systems Manager many customers rely on, improved and simplified to help you save time and effort.

The new Systems Manager experience is available in AWS Regions found here.

Get started now at no additional cost and easily enable the new experience in Systems Manager. For more information, visit the Systems Manager product page and user guide.
 

Read more


AWS CloudTrail Lake launches enhanced analytics and cross-account data access

AWS announces two significant enhancements to CloudTrail Lake, a managed data lake that enables you to aggregate, immutably store, and analyze your activity logs at scale:

  • Comprehensive dashboard capabilities: A new "Highlights" dashboard provides an at-a-glance overview of your AWS activity logs, including AI-powered insights (currently in preview). Additionally, we have added 14 new pre-built dashboards catering to various use cases such as security and operational monitoring. These dashboards provide a starting point to analyze trends, detect anomalies, and conduct efficient investigations across your AWS environments. For example, the security dashboard displays top access denied events, failed console login attempts, and more. You can also create custom dashboards with scheduled refreshes, tailoring your monitoring to specific needs.
  • Cross-account sharing of event data stores: This feature allows you to securely share your event data stores with select IAM identities using resource-based policies (RBPs). These identities can then query the shared event data store within the same AWS Region where the event data store was created, facilitating more comprehensive analysis across your organization while maintaining security. A minimal policy sketch follows this list.
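For example, the following boto3 sketch attaches a resource-based policy to an event data store so an analyst role in another account can query it. The actions listed are illustrative and should be checked against the CloudTrail documentation.

```python
import json
import boto3

cloudtrail = boto3.client("cloudtrail")

eds_arn = ("arn:aws:cloudtrail:us-east-1:111122223333:"
           "eventdatastore/EXAMPLE-f852-4e8f-8bd1-bcf6cEXAMPLE")

# Illustrative policy: the action list is an assumption to verify in the
# CloudTrail Lake documentation.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "AllowCrossAccountQuery",
        "Effect": "Allow",
        "Principal": {"AWS": "arn:aws:iam::444455556666:role/SecurityAnalyst"},
        "Action": ["cloudtrail:StartQuery", "cloudtrail:GetQueryResults"],
        "Resource": eds_arn,
    }],
}

cloudtrail.put_resource_policy(ResourceArn=eds_arn, ResourcePolicy=json.dumps(policy))
```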

These features are available in all AWS Regions where AWS CloudTrail Lake is supported, except AI-powered insights on the “Highlights" dashboard, which is in preview in N. Virginia, Oregon, and Tokyo Regions. While these enhancements are available at no additional cost, standard CloudTrail Lake query charges apply when running queries to generate results or create visualizations for the CloudTrail Lake dashboards. To learn more, visit the AWS CloudTrail documentation or read our News Blog.

Read more


Amazon CloudWatch Synthetics now automatically deletes Lambda resources associated with canaries

Amazon CloudWatch Synthetics, an outside-in monitoring capability that continually verifies your customers’ experience by running snippets of code called canaries on AWS Lambda, will now automatically delete the associated Lambda resources when you delete Synthetics canaries, minimizing the manual upkeep required to manage AWS resources in your account.

CloudWatch Synthetics creates Lambda functions to execute canaries that monitor the health and performance of your web applications or API endpoints. When you delete a canary, the Lambda function and its layers are no longer usable. With this feature, those Lambda resources are automatically removed when a canary is deleted, reducing the housekeeping involved in maintaining your Synthetics canaries. Canaries deleted via the AWS console automatically clean up related Lambda resources. Any new canaries created via the CLI, SDK, or CloudFormation are automatically opted in to this feature, whereas canaries created before this launch need to be explicitly opted in.
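Programmatically, opting in comes down to one flag on the delete call. A minimal boto3 sketch, assuming the DeleteLambda parameter described in the documentation:

```python
import boto3

synthetics = boto3.client("synthetics")

# A canary must be stopped before it can be deleted; then delete it
# together with its underlying Lambda function and layers.
synthetics.stop_canary(Name="checkout-flow-canary")
synthetics.delete_canary(Name="checkout-flow-canary", DeleteLambda=True)
```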

This feature is available in all commercial Regions, the AWS GovCloud (US) Regions, and the China Regions at no additional cost.

To learn more about the delete behavior of canaries, see the documentation, or refer to the user guide and One Observability Workshop to get started with CloudWatch Synthetics.
 

Read more


Accelerate AWS CloudFormation troubleshooting with Amazon Q Developer assistance

AWS CloudFormation now offers generative AI assistance powered by Amazon Q Developer to help troubleshoot unsuccessful CloudFormation deployments. This new capability provides easy-to-understand analysis and actionable steps to simplify the resolution of the most common resource provisioning errors encountered during CloudFormation deployments.

When creating or modifying a CloudFormation stack, CloudFormation can encounter errors in resource provisioning, such as missing required parameters for an EC2 instance or inadequate permissions. Previously, troubleshooting a failed stack operation could be a time-consuming process. After identifying the root cause of the failure, you had to search through blogs and documentation for solutions and determine the next steps, leading to longer resolution times. Now, when you review a failed stack operation in the CloudFormation Console, CloudFormation automatically highlights the likely root cause of the failure. You can click the "Diagnose with Q" button in the error alert box and Amazon Q Developer will provide a human-readable analysis of the error, helping you understand what went wrong. If you need further assistance, you can click the "Help me resolve" button to receive actionable resolution steps tailored to your specific failure scenario, helping you accelerate resolution of the error.

To get started, open the CloudFormation Console and navigate to the stack events tab for a provisioned stack. This feature is available in AWS Regions where AWS CloudFormation and Amazon Q Developer are available. Refer to the AWS Region table for service availability details. Visit our user guide to learn more about this feature.
 

Read more


AWS CloudFormation Hooks now supports evaluating AWS Cloud Control API resource configurations

AWS CloudFormation Hooks now allow you to evaluate resource configurations from AWS Cloud Control API (CCAPI) create and update operations. Hooks allow you to invoke custom logic to enforce security, compliance, and governance policies on your resource configurations. CCAPI is a set of common application programming interfaces (APIs) designed to make it easy for developers to manage their cloud infrastructure in a consistent manner and leverage the latest AWS capabilities faster. By extending Hooks to CCAPI, customers can now inspect resource configurations prior to CCAPI create and update operations, and block the operation or emit a warning if a non-compliant resource is found.

Before this launch, Hooks that customers published were invoked only during CloudFormation operations. Now, customers can extend their resource Hook evaluations beyond CloudFormation to CCAPI-based operations. Customers with existing resource Hooks, or who are using the recently launched pre-built Lambda and Guard hooks, simply need to specify “Cloud_Control” as a target in the hook’s configuration.
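A hedged sketch of what that opt-in might look like via boto3 follows; the configuration keys and target-name casing are assumptions to verify against the Hooks user guide.

```python
import json
import boto3

cfn = boto3.client("cloudformation")

# Sketch only: the HookConfiguration schema and the "CLOUD_CONTROL"
# target value are assumptions -- confirm in the Hooks user guide.
config = {
    "CloudFormationConfiguration": {
        "HookConfiguration": {
            "TargetStacks": "ALL",
            "FailureMode": "FAIL",
            # Opt the hook into Cloud Control API create/update operations
            # in addition to CloudFormation resource operations.
            "TargetOperations": ["RESOURCE", "CLOUD_CONTROL"],
            "Properties": {},
        }
    }
}

cfn.set_type_configuration(
    TypeArn="arn:aws:cloudformation:us-east-1:111122223333:type/hook/MyCompany-Security-Hook",
    Configuration=json.dumps(config),
)
```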

Hooks is available in all AWS Commercial Regions. CCAPI support is available for customers who use CCAPI directly or who use third-party IaC tools that provide CCAPI support.

To get started, refer to Hooks user guide and CCAPI user guide for more information. Learn the detail of this feature from this AWS DevOps Blog.
 

Read more


Amazon CloudWatch Logs announces field indexes and enhanced log group selection in Logs Insights

Amazon CloudWatch Logs introduces field indexes and enhanced log group selection to accelerate log analysis. You can now index critical log attributes, like requestId and transactionId, to accelerate query performance by scanning only the relevant indexed data. This means faster troubleshooting and easier identification of trends. You can create up to 20 field indexes per log group, and once defined, all future logs matching the defined fields remain indexed for up to 30 days. Additionally, CloudWatch Logs Insights now supports querying up to 10,000 log groups across one or more accounts linked via cross-account observability.

Customers using field indexes will benefit from faster query execution times while searching across vast amounts of logs. CloudWatch Logs Insights queries using the “filter field = value” syntax automatically leverage indexes when available. Combined with enhanced log group selection, customers can now gain faster insights across a much larger set of logs in Logs Insights. Customers can select up to 10,000 log groups via a log group prefix or the "All" log groups option. To further optimize query performance and costs, customers can use the new "filterIndex" command to limit queries to indexed data only.

Field indexes are available in all AWS Regions where CloudWatch Logs is available and are included as part of standard log class ingestion at no additional cost.

To get started, define an index policy at the account level or per log group within the AWS console, or programmatically via the API or CLI. See the documentation to learn more about field indexes.
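A minimal boto3 sketch of both steps, assuming the PutIndexPolicy document shape and the filterIndex syntax described above:

```python
import json
import time
import boto3

logs = boto3.client("logs")

# Index the requestId field for one log group. The policyDocument shape
# is an assumption -- see the PutIndexPolicy API reference.
logs.put_index_policy(
    logGroupIdentifier="my-application-logs",
    policyDocument=json.dumps({"Fields": ["requestId"]}),
)

# Queries filtering on an indexed field use the index automatically; the
# filterIndex command restricts the scan to indexed data only.
logs.start_query(
    logGroupNames=["my-application-logs"],
    startTime=int(time.time()) - 3600,
    endTime=int(time.time()),
    queryString='filterIndex requestId = "abc-123" | fields @timestamp, @message',
)
```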
 

Read more


AWS Compute Optimizer now supports rightsizing recommendations for Amazon Aurora

AWS Compute Optimizer now provides recommendations for Amazon Aurora DB instances. These recommendations help you identify idle database instances and choose the optimal DB instance class, so you can reduce costs for unused resources and increase the performance of under-provisioned workloads.

AWS Compute Optimizer automatically analyzes Amazon CloudWatch metrics such as CPU utilization, network throughput, and database connections to generate recommendations for your DB instances running Amazon Aurora MySQL-Compatible Edition and Aurora PostgreSQL-Compatible Edition engines. If you enable Amazon RDS Performance Insights on your DB instances, Compute Optimizer will analyze additional metrics such as DBLoad and out-of-memory counters to give you more insights to choose the optimal DB instance configuration. With this launch, AWS Compute Optimizer now supports recommendations for Amazon RDS for MySQL, Amazon RDS for PostgreSQL, and Amazon Aurora database engines.
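For programmatic access, here is a hedged sketch that assumes Aurora recommendations surface through the same RDS database recommendations API as RDS for MySQL and PostgreSQL; the response field names are illustrative and should be confirmed against the Compute Optimizer API reference.

```python
import boto3

co = boto3.client("compute-optimizer")

# Assumption: Aurora DB instance recommendations are returned by the RDS
# database recommendations API; field names below are illustrative.
resp = co.get_rds_database_recommendations()
for rec in resp.get("rdsDBRecommendations", []):
    print(rec.get("resourceArn"),
          rec.get("currentDBInstanceClass"),
          rec.get("instanceFinding"))
```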

This new feature is available in all AWS Regions where AWS Compute Optimizer is available except the AWS GovCloud (US) and the China Regions. To learn more about the new feature updates, please visit Compute Optimizer’s product page and user guide.

Read more


AWS Compute Optimizer now supports idle resource recommendation

Today, AWS announces that AWS Compute Optimizer now supports recommendations to help you identify idle AWS resources. With this new recommendation type, you will be able to identify resources that are unused and may be candidates for turning off or deleting, resulting in cost savings.

With the new idle resource recommendation, you will be able to identify idle EC2 instances, EC2 Auto Scaling groups, EBS volumes, ECS services running on Fargate, and RDS instances. You can view the total savings potential of stopping or deleting these idle resources. Compute Optimizer analyzes 14 consecutive days of utilization history to validate that resources are idle, providing trustworthy savings opportunities. You can also view idle resource recommendations across all AWS accounts in your organization through the Cost Optimization Hub, with estimated savings de-duplicated against other recommendations on the same resources.
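A hedged boto3 sketch, assuming this launch exposes a GetIdleRecommendations API with the response fields shown; verify the operation and field names against the Compute Optimizer API reference.

```python
import boto3

co = boto3.client("compute-optimizer")

# Assumption: the idle-resource launch adds a GetIdleRecommendations
# operation; all field names below are illustrative.
resp = co.get_idle_recommendations()
for rec in resp.get("idleRecommendations", []):
    print(rec.get("resourceArn"),
          rec.get("finding"),
          rec.get("savingsOpportunity", {}).get("estimatedMonthlySavings"))
```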

For more information about the AWS Regions where Compute Optimizer is available, see AWS Region table.

For more information about Compute Optimizer, visit our product page and documentation. You can start using AWS Compute Optimizer through the AWS Management Console, AWS CLI, and AWS SDK.

Read more


Amazon CloudFront now supports additional log formats and destinations for access logs

Amazon CloudFront announces enhancements to its standard access logging capabilities, providing customers with new log configuration and delivery options. Customers can now deliver CloudFront access logs directly to two new destinations: Amazon CloudWatch Logs and Amazon Data Firehose. Customers can select from an expanded list of log output formats, including JSON and Apache Parquet (for logs delivered to S3). Additionally, they can directly enable automatic partitioning of logs delivered to S3, select specific log fields, and set the order in which they are included in the logs.

Until today, customers had to write custom logic to partition logs, convert log formats, or deliver logs to CloudWatch Logs or Data Firehose. The new logging capabilities provide native log configurations, eliminating the need for custom log processing. For example, customers can now directly enable features like Apache Parquet format for CloudFront logs delivered to S3 to improve query performance when using services like Amazon Athena and AWS Glue.

Additionally, customers enabling access log delivery to CloudWatch Logs will receive 750 bytes of logs free for each CloudFront request. Standard access log delivery to Amazon S3 remains free. Please refer to the 'Additional Features' section of the CloudFront pricing page for more details.

Customers can now enable CloudFront standard logs to S3, CloudWatch Logs and Data Firehose through the CloudFront console or APIs. CloudFormation support will be coming soon. For detailed information about the new access log features, please refer to the Amazon CloudFront Developer Guide.
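Under the hood this appears to use the CloudWatch Logs vended-log delivery APIs; a hedged boto3 sketch of wiring a distribution's access logs to a log group follows. The logType value and the response key names are assumptions to confirm in the CloudFront Developer Guide.

```python
import boto3

logs = boto3.client("logs")

# Register the distribution as a log-producing source. The logType
# string is an assumption to verify in the documentation.
logs.put_delivery_source(
    name="cf-access-logs-source",
    resourceArn="arn:aws:cloudfront::111122223333:distribution/EDFDVBD6EXAMPLE",
    logType="ACCESS_LOGS",
)

# Register the CloudWatch Logs log group as a delivery destination.
dest = logs.put_delivery_destination(
    name="cf-access-logs-destination",
    deliveryDestinationConfiguration={
        "destinationResourceArn":
            "arn:aws:logs:us-east-1:111122223333:log-group:/cloudfront/access-logs",
    },
)

# Connect the source to the destination to start delivery.
logs.create_delivery(
    deliverySourceName="cf-access-logs-source",
    deliveryDestinationArn=dest["deliveryDestination"]["arn"],
)
```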

Read more


AWS Control Tower improves Hooks management for proactive controls and extends proactive controls support in additional regions

Today, we are excited to release an improved AWS CloudFormation Hooks management capability for AWS Control Tower proactive controls. With this release, Hooks deployed for proactive controls are managed by AWS Control Tower. Additionally, we are releasing proactive controls in the AWS Canada West (Calgary) and Asia Pacific (Malaysia) Regions. These controls help you meet control objectives such as establishing logging and monitoring, encrypting data at rest, and improving resiliency. To see a full list of the proactive controls, see the Controls Reference Guide.

AWS Control Tower’s proactive control capabilities leverage AWS CloudFormation Hooks to identify and block non-compliant resources before AWS CloudFormation provisions them. Previously, Hooks deployed for proactive controls were protected so that only AWS Control Tower could modify them, preventing customers from authoring their own Hooks. With this release, these Hooks are managed directly by the AWS Control Tower service, allowing customers to author their own Hooks while still benefiting from AWS Control Tower proactive controls.

AWS Control Tower’s proactive controls are available in all AWS commercial Regions where AWS Control Tower is available. For a full list of AWS Regions where AWS Control Tower is available, see AWS Region Table. You can start deploying the AWS Control Tower controls from the console or using AWS Control Tower control APIs.
 

Read more


Amazon CloudWatch Application Signals launches support for Runtime Metrics

Today, AWS announces the general availability of runtime metrics support in Amazon CloudWatch Application Signals, an OpenTelemetry (OTel) compatible application performance monitoring (APM) feature in CloudWatch. You can view runtime metrics like garbage collection, memory usage, and CPU usage for your Java or Python applications to troubleshoot issues such as high CPU utilization or memory leaks, which can disrupt the end-user experience.

Application Signals simplifies troubleshooting application performance against key business or service level objectives (SLOs) for AWS applications. Without any source code changes, Application Signals collects traces, application metrics (errors, latency, throughput), logs, and now runtime metrics, bringing them together in a single-pane-of-glass view.
Runtime metrics enable real-time monitoring of your application’s resource consumption, such as memory and CPU usage. With Application Signals, you can understand whether anomalies in runtime metrics have any impact on your end users by correlating them with application metrics such as errors, latency, and throughput. For example, you will be able to identify whether a service latency spike is the result of an increase in garbage collection pauses by viewing these metric graphs side by side. Additionally, you will be able to identify thread contention, track memory allocation patterns, and pinpoint memory or CPU spikes that may lead to application slowdowns or crashes that impact the end-user experience.

Runtime metrics support is available in all Regions where Application Signals is available. Runtime metrics are charged based on Application Signals pricing; see Amazon CloudWatch pricing.

To learn more, see the documentation to enable Amazon CloudWatch Application Signals.

Read more


Author AWS CloudFormation Hooks using the CloudFormation Guard domain specific language

AWS CloudFormation Hooks now allow customers to use the AWS CloudFormation Guard domain-specific language to author hooks. Customers use AWS CloudFormation Hooks to invoke custom logic that inspects resource configurations prior to a create, update, or delete AWS CloudFormation stack operation. If a non-compliant configuration is found, Hooks can block the operation or let it continue with a warning. With this launch, you can now author hooks by simply pointing to a Guard rule set stored as an S3 object.

Prior to this launch, customers authored hooks using a programming language and registered the hooks as extensions on the CloudFormation registry using the cfn-cli. This pre-built hook simplifies this authoring process and provides customers the ability to extend their existing Guard rules used for static template validation. Now, you can store your Guard rules, either as individual or compressed files in an S3 bucket, and provide your S3 URI in your hooks configuration.
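A speculative boto3 sketch of activating and configuring the Guard hook follows. The type name, publisher ID, and configuration keys (such as ruleLocation) are assumptions, so treat this as a shape to verify against the Guard Hook User Guide rather than a working recipe.

```python
import json
import boto3

cfn = boto3.client("cloudformation")

# Assumed values throughout: the type name, publisher ID, role ARN, and
# configuration property names must be checked in the Guard Hook User Guide.
hook = cfn.activate_type(
    Type="HOOK",
    TypeName="AWS::Hooks::GuardHook",        # assumed pre-built type name
    PublisherId="aws-hooks",                 # assumed publisher ID
    TypeNameAlias="MyOrg::Policy::GuardHook",
    ExecutionRoleArn="arn:aws:iam::111122223333:role/GuardHookRole",
)

cfn.set_type_configuration(
    TypeArn=hook["Arn"],
    Configuration=json.dumps({
        "CloudFormationConfiguration": {
            "HookConfiguration": {
                "TargetStacks": "ALL",
                "FailureMode": "FAIL",
                "Properties": {
                    # Point the hook at Guard rules stored in S3.
                    "ruleLocation": "s3://my-policy-bucket/rules/s3-rules.guard",
                },
            }
        }
    }),
)
```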

The Guard hook is available at no additional charge in all AWS Commercial Regions. To get started, use the new Hooks workflow in the CloudFormation console, the AWS CLI, or CloudFormation itself.

To learn more about the Guard hook, check out the AWS DevOps Blog or refer to the Guard Hook User Guide. Refer to Guard User Guide to learn more about Guard including how to write Guard rules.
 

Read more


Announcing AWS CloudFormation support for Recycle Bin rules

Today, AWS announces AWS CloudFormation support for Recycle Bin, a data recovery feature that enables restoration of accidentally deleted Amazon EBS Snapshots and EBS-backed AMIs. You can now use Recycle Bin rules as a resource in your AWS CloudFormation templates, stacks, and stack sets.

Using AWS CloudFormation, you can now create, edit, and delete Recycle Bin rules as part of your CloudFormation templates and incorporate Recycle Bin rules into your automated infrastructure deployments. For example, a region-level Recycle Bin rule protects all resources of the specified type in the AWS Region in which the rule is created. If you have a template that automates the provisioning of new accounts, you can now add a region-level Recycle Bin rule to it. This ensures that all EBS Snapshots and/or EBS-backed AMIs in those accounts are automatically protected from accidental deletion and stored in the Recycle Bin according to the region-level rule.
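For illustration, here is a minimal template carrying an AWS::Rbin::Rule resource, deployed with boto3. The property names follow the Recycle Bin API and should be checked against the CloudFormation resource reference.

```python
import boto3

# Region-level rule: keep deleted EBS snapshots in the Recycle Bin for
# 7 days. Property names are based on the Recycle Bin API and should be
# verified against the AWS::Rbin::Rule resource reference.
TEMPLATE = """
AWSTemplateFormatVersion: '2010-09-09'
Resources:
  SnapshotRecycleBinRule:
    Type: AWS::Rbin::Rule
    Properties:
      Description: Retain deleted EBS snapshots for 7 days
      ResourceType: EBS_SNAPSHOT
      RetentionPeriod:
        RetentionPeriodValue: 7
        RetentionPeriodUnit: DAYS
"""

boto3.client("cloudformation").create_stack(
    StackName="recycle-bin-baseline",
    TemplateBody=TEMPLATE,
)
```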

This feature is now available in all AWS Commercial Regions and the AWS GovCloud (US) Regions.

To get started using Recycle Bin in AWS CloudFormation, visit the AWS CloudFormation console. Please refer to the AWS CloudFormation user guide for information on using Recycle Bin rules as a resource in your templates, stacks, and stack sets. Learn more about Recycle Bin here.
 

Read more


AWS CloudFormation Hooks introduces stack and change set target invocation points

AWS CloudFormation Hooks announces the general availability of new target invocation points: stack and change set. CloudFormation Hooks allows you to invoke custom logic to inspect resource configurations prior to CloudFormation operations to enforce organizational best practices and ensure only compliant resources are provisioned. Today’s launch extends this capability beyond resource properties, enabling expressive safety checks that consider the entire context of a stack and the planned CloudFormation operation changes.

Customers previously used Hooks to run validation checks on resource properties before provisioning. Now, by targeting the stack as the control point, you can run hooks against the entire template payload and target multiple resources at once. This allows you to examine resource relationships and their dependencies. Moreover, you can use the change set invocation point to run Hooks when a change set is created, to evaluate the updated template and the change set payload. This allows you to automate your change set review and reduce the end-to-end time to resolve issues. You can set Hooks to fail the deployment or warn about the operation if any non-compliant configurations are found.

The stack and change set target control points are now available in all AWS Commercial Regions. Refer to Hooks developer guide to learn more.

Read more


AWS CloudFormation Hooks now support custom AWS Lambda functions

AWS CloudFormation Hooks introduces a pre-built hook that allows you to simply point to an AWS Lambda function in your account. With CloudFormation Hooks, you can provide custom logic that proactively evaluates your resource configurations before provisioning. Today’s launch allows you to provide your custom logic as a Lambda function, offering a simpler way to author a hook while gaining the flexibility of hosting the function in your account.

Prior to this launch, customers used the CloudFormation CLI (cfn-cli) to author and publish hooks to the CloudFormation registry. Now, customers can simply activate the Lambda hook and pass the Amazon Resource Name (ARN) of the Lambda function for the hook to invoke. This allows you to edit your Lambda function directly to make updates without re-configuring your hook. Additionally, you no longer have to register your custom logic in the CloudFormation registry.
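To give a feel for the model, here is a hypothetical handler. The event and response shapes are assumptions based on the Hooks documentation; verify the exact contract in the Lambda Hook User Guide.

```python
# Hypothetical Lambda hook handler: the event and response shapes below
# are assumptions -- confirm the real contract in the Lambda Hook User Guide.
def handler(event, context):
    request_data = event.get("requestData", {})
    props = request_data.get("targetModel", {}).get("resourceProperties", {})

    # Example policy: S3 buckets must not allow public ACLs.
    if request_data.get("targetName") == "AWS::S3::Bucket":
        if props.get("AccessControl") in ("PublicRead", "PublicReadWrite"):
            return {
                "hookStatus": "FAILED",
                "errorCode": "NonCompliant",
                "message": "Public bucket ACLs are not allowed.",
            }

    return {"hookStatus": "SUCCESS", "message": "Resource is compliant."}
```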

The Lambda hook is available at no additional charge in all AWS Commercial Regions. Customers will incur a charge for Lambda usage; refer to Lambda’s pricing guide for more information. To get started, use the new Hooks workflow in the CloudFormation console, the AWS CLI, or CloudFormation itself.

To learn more about the Lambda hook, check out the detailed feature walkthrough on the AWS DevOps Blog or refer to the Lambda Hook User Guide. To get started with creating your Lambda function, visit AWS Lambda User Guide.
 

Read more


CloudWatch RUM now supports percentile aggregations and simplified troubleshooting with web vitals metrics

CloudWatch RUM, which captures real-time data on web application performance and user interactions to help you quickly detect and resolve issues impacting the user experience, now supports percentile aggregation of web vitals metrics and simplified event-based troubleshooting directly from a web vitals anomaly.

Google uses the 75th percentile (p75) of a web page’s Core Web Vitals—Largest Contentful Paint, First Input Delay, and Cumulative Layout Shift—to influence page ranking. With CloudWatch RUM, you can now monitor the p75 values of web page vitals and ensure that the majority of your visitors experience optimal performance, minimizing the impact of outliers. You can also click on any point in the Web Vitals graph to view correlated page events, allowing you to quickly dive into event details such as browser, device, and geolocation to identify the specific conditions causing performance issues. Additionally, you can track affected users and sessions for in-depth analysis and quickly troubleshoot issues without the added steps of applying filters to retrieve correlated events in CloudWatch RUM.

These enhancements are available in all Regions where CloudWatch RUM is available, at no additional cost.

See documentation to learn more about the feature, or see user guide or AWS One Observability Workshop to get started with real user monitoring using CloudWatch RUM.

Read more


AWS support case management is now available in AWS Chatbot for Microsoft Teams and Slack

AWS Chatbot announces the general availability of AWS Support case management in Microsoft Teams and Slack. AWS customers can now use AWS Chatbot to monitor AWS Support case updates and respond to them from chat channels.

When troubleshooting issues, customers need to stay up to date on the latest support case updates in the place where they are collaborating. Previously, customers had to install a separate app or navigate to the console to manage support cases. Now, customers can monitor and manage support cases from Microsoft Teams and Slack with AWS Chatbot.

To manage support cases from chat channels with AWS Chatbot, customers subscribe a chat channel to support case events published in EventBridge. As new case correspondences are added, AWS Chatbot sends support case update notifications to the configured chat channels. Channel members can then use action buttons on the notifications to view the latest case updates and respond to them without leaving the chat channel.
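On the EventBridge side, the subscription might look like the hedged sketch below, routing support case events to the SNS topic your Chatbot channel configuration subscribes to. The "aws.support" source and the detail-type string are assumptions to verify in the documentation.

```python
import json
import boto3

events = boto3.client("events")

# Match AWS Support case events; source and detail-type values are
# assumptions to confirm against the EventBridge documentation.
events.put_rule(
    Name="support-case-updates",
    EventPattern=json.dumps({
        "source": ["aws.support"],
        "detail-type": ["Support Case Update"],
    }),
)

# Forward matched events to the SNS topic wired to AWS Chatbot.
events.put_targets(
    Rule="support-case-updates",
    Targets=[{
        "Id": "chatbot-sns",
        "Arn": "arn:aws:sns:us-east-1:111122223333:chatbot-notifications",
    }],
)
```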

To interact with support cases in chat channels, you must have a Business, Enterprise On-Ramp, or Enterprise Support plan. Case management in chat applications is available at no additional cost in AWS Regions where AWS Chatbot is offered. Get started with AWS Chatbot by visiting the AWS Chatbot console and by downloading the AWS Chatbot app from the Microsoft Teams marketplace or the Slack App Directory. Visit the AWS Chatbot product page and the "Managing AWS Support cases from chat channels" topic in the AWS Chatbot documentation to learn more.
 

Read more


AWS Chatbot adds support for chatting about AWS resources with Amazon Q Developer in Microsoft Teams and Slack

We are excited to announce the general availability of Amazon Q Developer in AWS Chatbot, which provides answers to customers’ AWS resource related queries in Microsoft Teams and Slack.

When issues occur, customers need to quickly find relevant resources to troubleshoot. Customers can now ask questions in natural language in chat channels to list resources in AWS accounts, get specific resource details, and ask about related resources using Amazon Q Developer.

With Amazon Q Developer in AWS Chatbot, customers find AWS resources by typing queries such as “@aws show ec2 instances in running state in us-east-1” or “@aws what is the size of the auto scaling group XX in us-east-2?”

Get started with AWS Chatbot by visiting the Chatbot Console and by downloading the AWS Chatbot app from the Microsoft Teams marketplace or Slack App Directory. To get started with chatting with Amazon Q in AWS Chatbot, visit the Asking Amazon Q questions in AWS Chatbot in AWS Chatbot documentation.

Read more


Easily troubleshoot NodeJS applications with Amazon CloudWatch Application Signals

Today, AWS announces the general availability of NodeJS applications monitoring on Amazon CloudWatch Application Signals, an OpenTelemetry (OTel) compatible application performance monitoring (APM) feature in CloudWatch. Application Signals simplifies the process of automatically tracking application performance against key business or service level objectives (SLOs) for AWS applications. Service operators can access a pre-built, standardized dashboard for AWS application metrics through Application Signals.

Customers already use Application Signals to monitor their Java, Python, and .NET applications deployed on EKS, EC2, and other platforms. With this release, they can now easily onboard and troubleshoot issues in their NodeJS applications with no additional code. NodeJS application developers can quickly triage current operational health and whether their applications are meeting their longer-term performance goals. Customers can ensure high availability of their NodeJS applications through Application Signals’ easy navigation flow, starting with an alert for a service level indicator (SLI) that has become unhealthy and deep-diving from there to an error or a spike in the auto-generated graphs for application metrics (latency, errors, requests). In a single-pane-of-glass view, they can correlate application metrics with traces, application logs, and infrastructure metrics to troubleshoot application issues in a few clicks.

Application Signals is available in all commercial AWS Regions except the CA West (Calgary) and Asia Pacific (Malaysia) Regions, the AWS GovCloud (US) Regions, and the China Regions. For pricing, see Amazon CloudWatch pricing.

To learn more, see documentation to enable Amazon CloudWatch Application Signals for Amazon EKS, Amazon EC2, native Kubernetes and custom instrumentation for other platforms.

Read more


Amazon Application Recovery Controller zonal shift and zonal autoshift extend support for EC2 Auto Scaling

Amazon Application Recovery Controller (ARC) zonal shift and zonal autoshift have expanded their capabilities and now support EC2 Auto Scaling. ARC zonal shift helps you quickly recover an unhealthy application in an Availability Zone (AZ), and reduce the duration and severity of impact to the application due to events such as power outages and hardware or software failures. ARC zonal autoshift safely and automatically shifts your application’s traffic away from an AZ when AWS identifies a potential failure affecting that AZ.

EC2 Auto Scaling customers can now shift traffic away from an AZ in the event of a failure. Zonal shift works with EC2 Auto Scaling by stopping dynamic scale-in, so that capacity is not unnecessarily removed, and by launching new EC2 instances only in the healthy AZs. In addition, you can choose whether health checks remain enabled or are disabled in the impaired AZ; when disabled, unhealthy instance replacement is paused in the AZ that has an active zonal shift. Enable your EC2 Auto Scaling groups for zonal shift using the EC2 Auto Scaling console or API, and then trigger a zonal shift or enable autoshift via the ARC zonal shift console or API. To learn more, review the ARC documentation and read the launch blog.
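Once a group is enabled, starting a shift is a single call. A boto3 sketch follows; the Auto Scaling group resource identifier format shown is an assumption to confirm in the ARC documentation.

```python
import boto3

arc = boto3.client("arc-zonal-shift")

# Shift traffic for an enabled Auto Scaling group away from an impaired
# AZ for one hour. The resourceIdentifier format is an assumption.
arc.start_zonal_shift(
    resourceIdentifier=("arn:aws:autoscaling:us-east-1:111122223333:"
                        "autoScalingGroup:uuid:autoScalingGroupName/my-asg"),
    awayFrom="use1-az1",
    expiresIn="1h",
    comment="Shifting away from impaired AZ during power event",
)
```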

There is no additional charge for using zonal shift or zonal autoshift. See the AWS Regional Services List for the most up-to-date availability information.
 

Read more


Announcing Amazon CloudWatch Metrics support in AWS End User Messaging

Today, AWS announces the general availability of 10 new Amazon CloudWatch metrics in AWS End User Messaging for the SMS and MMS channels. AWS End User Messaging provides developers with a scalable and cost-effective messaging infrastructure without compromising the safety, security, or results of their communications.

You can now use CloudWatch metrics to monitor SMS and MMS message performance. The new metrics allow you to track the number of messages sent and delivered, message feedback rates such as one-time passcode conversions, and messages blocked by SMS Protect. Customers can use CloudWatch Metrics Insights to graph and identify trends in real time and monitor those trends directly in the AWS End User Messaging console or in Amazon CloudWatch.

To learn more, visit the AWS End User Messaging SMS User Guide.
 

Read more


AWS Command Line Interface adds PKCE-based authorization for single sign-on

The AWS Command Line Interface (AWS CLI) v2 now supports OAuth 2.0 authorization code flows using the Proof Key for Code Exchange (PKCE) standard. This provides a simple and safe way to retrieve credentials for AWS CLI commands.

The AWS CLI is a unified tool that enables you to control multiple AWS services from the command line and to automate them through scripts. AWS CLI v2 offers integration with AWS IAM Identity Center, the recommended service for managing workforce access to AWS applications and multiple AWS accounts. The authorization code flow with PKCE is the recommended best practice for access to AWS resources from desktops and mobile devices with web browsers. It is now the default behavior when running the aws sso login or aws configure sso commands.

To learn more, see Configuring IAM Identity Center authentication with the AWS CLI in the AWS CLI User Guide. Share your questions, comments, and issues with us on GitHub. AWS IAM Identity Center is available at no additional cost in all AWS Regions where it is offered.
 

Read more


Amazon Managed Service for Prometheus collector adds support for update and AWS console

Amazon Managed Service for Prometheus collector, a fully managed agentless collector for Prometheus metrics, adds support for updating the scrape configuration in place and for configuration via the Amazon Managed Service for Prometheus AWS console. Starting today, you can update collector parameters, including the scrape configuration and the destination Amazon Managed Service for Prometheus workspace. Further, you can view and edit collectors from within the Amazon Managed Service for Prometheus console.

Customers can now quickly iterate on the scrape configuration of Amazon Managed Service for Prometheus collectors. With this launch, customers can add, remove, and update scrape targets and jobs without downtime. In addition, you can now use the Amazon Managed Service for Prometheus AWS console to list, create, edit, and delete collectors.
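A short boto3 sketch of an in-place update, assuming UpdateScraper accepts the same scrape-configuration blob shape as CreateScraper; the scraper ID and file name are placeholders.

```python
import boto3

amp = boto3.client("amp")

# Load a revised Prometheus scrape configuration (YAML) from disk.
with open("prometheus-scrape-config.yaml", "rb") as f:
    new_config = f.read()

# Assumption: UpdateScraper mirrors CreateScraper's configuration blob.
amp.update_scraper(
    scraperId="scraper-123abc",  # placeholder scraper ID
    scrapeConfiguration={"configurationBlob": new_config},
)
```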

Amazon Managed Service for Prometheus collector is available in all regions where Amazon Managed Service for Prometheus is available. To learn more about Amazon Managed Service for Prometheus collector, visit the user guide or product page.

Read more


Amazon SageMaker now provides a new setup experience for Amazon DataZone projects

Amazon SageMaker now provides a new setup experience for Amazon DataZone projects, making it easier for customers to govern access to data and machine learning (ML) assets. With this capability, administrators can now set up Amazon DataZone projects by importing their existing authorized users, security configurations, and policies from Amazon SageMaker domains.

Today, Amazon SageMaker customers use domains to organize lists of authorized users and a variety of security, application, policy, and Amazon Virtual Private Cloud configurations. With this launch, administrators can accelerate the process of setting up governance for data and ML assets in Amazon SageMaker. They can import users and configurations from existing SageMaker domains into Amazon DataZone projects, mapping SageMaker users to corresponding Amazon DataZone project members. This enables project members to search, discover, and consume ML and data assets within Amazon SageMaker capabilities such as Studio, Canvas, and notebooks. Project members can also publish these assets from Amazon SageMaker to the DataZone business catalog, enabling other project members to discover and request access to them.

This capability is available in all Amazon Web Services regions where Amazon SageMaker and Amazon DataZone are currently available. To get started, see the Amazon SageMaker administrator guide.

Read more


AWS Control Tower launches configurable managed controls implemented using resource control policies

Today we are excited to announce the launch of AWS managed controls implemented using resource control policies (RCPs) in AWS Control Tower. These new optional preventive controls help you centrally apply organization-wide access controls around AWS resources in your organization. Additionally, you can now configure the new RCP and existing service control policy (SCP) preventive controls to specify AWS IAM (principal and resource) exemptions where applicable. Exemptions can be configured when you don’t want a principal or a resource to be governed by the control. To see a full list of the new controls, see the controls reference guide.

With this addition, AWS Control Tower now supports over 30 configurable preventive controls, providing off-the-shelf AWS-managed controls to help you scale your business using new AWS workloads and services. At launch, you can enable AWS Control Tower RCPs for Amazon Simple Storage Service, AWS Security Token Service, AWS Key Management Service, Amazon Simple Queue Service, and AWS Secrets Manager. For example, an RCP can require that the organization's Amazon S3 resources be accessible only by IAM principals that belong to the organization, regardless of the permissions granted in individual S3 bucket policies.

AWS Control Tower’s new RCP based preventive controls are available in all AWS commercial Regions where AWS Control Tower is available. For a full list of AWS regions where AWS Control Tower is available, see AWS Region Table.
 

Read more


AWS launches user-based subscription of Microsoft Remote Desktop Services

Today, AWS announces the general availability of Microsoft Remote Desktop Services with AWS provided licenses. Customers can now purchase user-based subscription of Microsoft Remote Desktop Services licenses directly from AWS. This new offering provides licensing flexibility and business continuity for customers running graphical user interface (GUI) based applications on Amazon Elastic Compute Cloud (Amazon EC2) Windows instances.

Thousands of customers use Windows Server on Amazon EC2 to host custom applications or independent software vendor (ISV) products that require remote connectivity via Microsoft Remote Desktop Services. Previously, customers had to procure the licenses through various Microsoft licensing agreements. With the AWS provided subscription, customers can now access Microsoft Remote Desktop Services licenses from AWS on a per-user, per-month basis, eliminating the need for separate licensing agreements and reducing operational overhead. Unlike AWS provided Microsoft Office and Visual Studio, customers can continue using their existing Active Directory domains for managing user access to GUI-based applications on Amazon EC2. Moreover, customers can have more than two concurrent user sessions on Windows Server instances. Lastly, AWS License Manager enables centralized tracking of license usage, simplifying governance and cost management. Customers can start using AWS provided Microsoft Remote Desktop Services licenses without rebuilding their existing Amazon EC2 instances, providing a seamless migration path for existing workloads.

The AWS provided user-based subscription of Microsoft Remote Desktop Services licenses is available in all AWS Regions that AWS License Manager currently supports. For further questions, visit the user guide. To learn more and get started, visit here.
 

Read more


AWS CloudTrail Lake enhances log analysis with AI-powered features

AWS announces two AI-powered enhancements to AWS CloudTrail Lake, a managed data lake that helps you capture, immutably store, access, and analyze your activity logs, as well as AWS Config configuration items. These new capabilities simplify log analysis, enabling deeper insights and quicker investigations across your AWS environments:

  • AI-powered natural language query generation in CloudTrail Lake is now generally available in seven AWS Regions: Mumbai, N. Virginia, London, Tokyo, Oregon, Sydney, and Canada (Central). This feature allows you to ask questions about your AWS activity in plain English, without writing complex SQL queries. For example, you can ask, "Which API events failed in the last week due to missing permissions?" CloudTrail Lake then generates the corresponding SQL query, streamlining your analysis of AWS activity logs (management and data events).
  • AI-powered query result summarization is now available in preview in the N. Virginia, Oregon, and Tokyo Regions. This feature provides natural language summaries of your query results, regardless of whether the query was generated through the natural language query generation feature or manually written in SQL. This capability significantly reduces the time and effort required to extract meaningful insights from your AWS activity logs (management, data, and network activity events). For example, after running a query to find users with the most access denied requests, you can click "Summarize" to get a concise overview of the key findings.

Please note that running queries will incur CloudTrail Lake query charges. Refer to CloudTrail pricing for details. To learn more, visit the AWS CloudTrail documentation.
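As a sketch of the natural-language flow from code, assuming the GenerateQuery parameters shown here; the event data store ARN is a placeholder.

```python
import time
import boto3

cloudtrail = boto3.client("cloudtrail")

eds_arn = ("arn:aws:cloudtrail:us-east-1:111122223333:"
           "eventdatastore/EXAMPLE-f852-4e8f-8bd1-bcf6cEXAMPLE")

# Generate SQL from a plain-English prompt, then run it against the
# event data store.
generated = cloudtrail.generate_query(
    EventDataStores=[eds_arn],
    Prompt="Which API events failed in the last week due to missing permissions?",
)

query_id = cloudtrail.start_query(QueryStatement=generated["QueryStatement"])["QueryId"]

# Poll until the query finishes before reading the full result set.
while cloudtrail.describe_query(QueryId=query_id)["QueryStatus"] in ("QUEUED", "RUNNING"):
    time.sleep(2)
print(cloudtrail.get_query_results(QueryId=query_id)["QueryResultRows"])
```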

Read more


Application Signals now supports burn rate for application performance goals

Amazon CloudWatch Application Signals, an application performance monitoring (APM) feature in CloudWatch, makes it easy to automatically instrument and track application performance against your most important business or service level objectives (SLOs). Customers can now receive alerts when these SLOs reach a critical burn rate. This new feature allows you to calculate how quickly your service is consuming its error budget relative to the SLO's attainment goal. Burn rate metrics provide a clear indication of whether you're meeting, exceeding, or at risk of failing your SLO goals.

Today, with burn rate metrics, you can configure CloudWatch alarms to notify you automatically when your error budget consumption exceeds specified thresholds. This allows for proactive management of service reliability, empowering your teams to take prompt action to achieve long-term performance targets. By setting multiple alarms with varying look-back windows, you can identify sudden error rate spikes and gradual shifts that could affect your error budget.

Burn rates are available in all Regions where Application Signals is generally available: 28 commercial AWS Regions, excluding the CA West (Calgary) and Asia Pacific (Malaysia) Regions. For pricing, see Amazon CloudWatch pricing. See the SLO documentation to learn more, or refer to the user guide and the AWS One Observability Workshop to get started with Application Signals.

Read more


AWS Control Tower launches the ability to resolve drift for optional controls

AWS Control Tower customers can now use the ResetEnabledControl API to programmatically resolve control drift or re-deploy a control to its intended configuration. Control drift occurs when an AWS Control Tower managed control is modified outside of AWS Control Tower governance. Resolving drift helps you adhere to your governance and compliance requirements. You can use this API with all AWS Control Tower optional controls except service control policy (SCP)-based preventive controls. AWS Control Tower APIs enhance the end-to-end developer experience by enabling automation for integrated workflows and managing workloads at scale.
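A minimal boto3 sketch of resolving drift and polling the resulting asynchronous operation; the enabled-control ARN is a placeholder.

```python
import boto3

ct = boto3.client("controltower")

# Re-deploy a drifted optional control to its intended configuration.
operation = ct.reset_enabled_control(
    enabledControlIdentifier=("arn:aws:controltower:us-east-1:111122223333:"
                              "enabledcontrol/EXAMPLE1234567890"),
)

# The reset runs asynchronously; check its status via the operation ID.
status = ct.get_control_operation(
    operationIdentifier=operation["operationIdentifier"],
)
print(status["controlOperation"]["status"])
```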

Below is the list of AWS Control Tower control APIs that are now supported in the regions where AWS Control Tower is available. Please visit the AWS Control Tower API reference for more information.

  • AWS Control Tower control APIs: EnableControl, DisableControl, GetControlOperation, GetEnabledControl, ListEnabledControls, UpdateEnabledControl, TagResource, UntagResource, ListTagsForResource, and ResetEnabledControl.

To learn more, visit the AWS Control Tower homepage. For more information about the AWS Regions where AWS Control Tower is available, see the AWS Region table.
 

Read more


Amazon SageMaker Model Registry now supports defining machine learning model lifecycle stages

Today, we are excited to announce that Amazon SageMaker Model Registry now supports custom machine learning (ML) model lifecycle stages. This capability further improves model governance by enabling data scientists and ML engineers to define and control the progression of their models across various stages, from development to production.

Customers use Amazon SageMaker Model Registry as a purpose-built metadata store to manage the entire lifecycle of ML models. With this launch, data scientists and ML engineers can now define custom stages such as development, testing, and production for ML models in the model registry. This makes it easy to track and manage models as they transition across different stages in the model lifecycle from training to inference. They can also track stage approval status such as Pending Approval, Approved, and Rejected to check when the model is ready to move to the next stage. These custom stages and approval status help data scientists and ML engineers define and enforce model approval workflows, ensuring that models meet specific criteria before advancing to the next stage. By implementing these custom stages and approval processes, customers can standardize their model governance practices across their organization, maintain better oversight of model progression, and ensure that only approved models reach production environments.

This capability is available in all AWS Regions where Amazon SageMaker Model Registry is currently available, except the AWS GovCloud (US) Regions. To learn more, see Staging Construct for your Model Lifecycle.

Read more


Get x-ray vision into AWS CloudFormation deployments with a timeline view

AWS CloudFormation now offers a capability called deployment timeline view that allows customers to monitor and visualize the sequence of actions CloudFormation takes in a stack operation. This capability provides visibility into the ordering and duration of resource provisioning actions for a stack operation. This empowers developers to optimize their CloudFormation templates and speed up troubleshooting of deployment issues.

When you create, update, or delete a stack, CloudFormation initiates resource-level provisioning actions based on a resource dependency graph. For example, if you submit a CloudFormation template with an EC2 instance, Security Group, and VPC, CloudFormation creates the VPC, Security Group, and EC2 instance in that order. Previously, you could only see a chronological list of stack operation events, which provided limited visibility into dependencies between resources and the ordering of provisioning actions. Now, you can see a graphical visualization that shows the order in which CloudFormation provisions resources within a stack, color-codes the status of each resource, and shows the duration of each provisioning action. If a resource provisioning action encounters an error, the view highlights the likely root cause. This helps you determine the optimal grouping of resources into templates to minimize deployment times and improve maintainability.

The new capability is available in all AWS Regions where CloudFormation is supported. Refer to the AWS Region table for service availability details.

Get started by initiating a stack operation and accessing the deployment timeline view from the stack events tab in the CloudFormation Console. To learn more about the deployment timeline view, visit the AWS CloudFormation User Guide.
 

Read more


AWS CloudTrail Lake announces enhanced event filtering

AWS enhances event filtering in AWS CloudTrail Lake, a managed data lake that helps you capture, immutably store, access, and analyze your activity logs, as well as AWS Config configuration items. Enhanced event filtering expands upon existing filtering capabilities, giving you even greater control over which CloudTrail events are ingested into your event data stores. This enhancement increases the efficiency and precision of your security, compliance, and operational investigations while helping reduce costs.

You can now filter both management and data events by the following new attributes:

  • eventSource: The service that the request was made to
  • eventType: The type of event that generated the event record (e.g., AwsApiCall, AwsServiceEvent)
  • userIdentity.arn: IAM entity that made the request
  • sessionCredentialFromConsole: Whether the event originated from an AWS Management Console session or not

For management events, you can additionally filter by eventName which identifies the requested API action.

For each of these attributes, you can specify values to include or exclude. For example, you can now filter CloudTrail events based on the userIdentity.arn attribute to exclude events generated by specific IAM roles or users. You can exclude a dedicated IAM role used by a service that performs frequent API calls for monitoring purposes. This allows you to significantly reduce the volume of CloudTrail events ingested into CloudTrail Lake, lowering costs while maintaining visibility into relevant user and system activities.
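A boto3 sketch of the exclusion example above, using an advanced event selector on userIdentity.arn; the exact operator support for this field is worth confirming in the documentation.

```python
import boto3

cloudtrail = boto3.client("cloudtrail")

# Create an event data store that ingests management events but excludes
# activity from a high-volume monitoring role.
cloudtrail.create_event_data_store(
    Name="filtered-management-events",
    AdvancedEventSelectors=[{
        "Name": "Management events minus monitoring role",
        "FieldSelectors": [
            {"Field": "eventCategory", "Equals": ["Management"]},
            # Operator support for userIdentity.arn is an assumption to
            # verify against the CloudTrail Lake documentation.
            {"Field": "userIdentity.arn",
             "NotEquals": ["arn:aws:iam::111122223333:role/MonitoringRole"]},
        ],
    }],
)
```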

Enhanced event filtering is available in all AWS Regions where AWS CloudTrail Lake is supported, at no additional charge. To learn more, visit the AWS CloudTrail documentation.

Read more


Amazon DataZone updates pricing and removes the user-level subscription fee

Today, Amazon DataZone announced updates to its pricing that make the service more accessible and cost-effective for customers. Customers will no longer be charged a monthly subscription fee for every configured user. Instead, Amazon DataZone now offers a pay-as-you-go model, where you are charged only for the resources you use. Additionally, DataZone has reduced the price of metadata storage from $0.417 per GB to $0.40 per GB. Finally, Amazon DataZone has introduced free access to some of the core DataZone APIs that power key user experiences, such as creating and managing domains, blueprints, and projects.

These price updates are part of Amazon's ongoing commitment to providing flexible, transparent, and cost-effective data management and data governance capabilities to customers. Customers can now scale their usage without being constrained by per-user costs, and make the service accessible to a wider user base.

These pricing changes will be applicable starting Nov 1, 2024 in all AWS Regions where Amazon DataZone is available, including: US East (Ohio), US East (N. Virginia), US West (Oregon), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Asia Pacific (Seoul), Canada (Central), Europe (Frankfurt), Europe (Ireland), Europe (Stockholm), Europe (London), and South America (São Paulo).

Visit Amazon DataZone’s pricing page for more details.
 

Read more


EC2 Auto Scaling introduces provisioning control on strict availability zone balance

Amazon EC2 Auto Scaling groups (ASGs) introduce a new capability for customers to strictly balance their workloads across Availability Zones, enabling greater control over the provisioning and management of their EC2 instances.

Previously, customers who wanted to strictly balance an ASG's EC2 instances across Availability Zones had to override the default behaviors of EC2 Auto Scaling and invest in custom code, modifying the ASG’s behavior with lifecycle hooks or maintaining multiple ASGs. With this feature, customers can now easily achieve strict Availability Zone balance and ensure higher levels of resiliency for their applications.
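A hedged sketch of opting an existing group in via boto3; the parameter names follow our reading of the EC2 Auto Scaling API reference for this launch and should be verified there.

```python
import boto3

autoscaling = boto3.client("autoscaling")

# Opt an existing group into strict AZ balance. The parameter and value
# names are assumptions to confirm in the API reference.
autoscaling.update_auto_scaling_group(
    AutoScalingGroupName="my-asg",
    AvailabilityZoneDistribution={
        "CapacityDistributionStrategy": "balanced-only",
    },
)
```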

This capability is now available through the AWS Command Line Interface (CLI), AWS SDKs, or the AWS Console in all AWS Regions. To get started, please refer to the documentation.

Read more


AWS Well-Architected adds enhanced implementation guidance

Today, we are announcing updates to the AWS Well-Architected Framework, featuring comprehensive guidance to help customers build and operate secure, high-performing, resilient, and efficient workloads on AWS. This update includes 14 newly refreshed best practices, including updates to the Reliability Pillar, its first major improvements since 2022.

The refreshed Framework offers prescriptive guidance, expanded best practices, and updated resources to help customers tailor AWS recommendations to their specific needs, accelerating cloud adoption and applying best practices more effectively.

These updates strengthen workload security, reliability, and efficiency, empowering organizations to scale confidently and build resilient, sustainable architectures. The Reliability Pillar, in particular, provides deeper insights for creating dependable cloud solutions.

What partners are saying about the updated guidance: Lorenzo Modesto, CEO of Well-Architected Partner 6Pillar, says: “While the updated content that the AWS Well-Architected Team is generating is massively helpful for both WA Partners and those AWS Consulting Partners who want to become WA Partners, what’s most powerful is the focus on partners automating their WA practices.”

The updated AWS Well-Architected Framework is available now for all AWS customers. Updates in this release will be incorporated into the AWS Well-Architected Tool in future releases, which you can use to review your workloads, address important design considerations, and help you follow the AWS Well-Architected Framework guidance. To learn more about the AWS Well-Architected Framework, visit the AWS Well-Architected Framework documentation.
 

Read more


media-services

AWS announces Media Quality-Aware Resiliency for live streaming

Starting today, you can enable Media Quality-Aware Resiliency (MQAR), an integrated capability between Amazon CloudFront and AWS Media Services that provides dynamic, cross-region origin selection and failover based on a dynamically generated video quality score. Built for customers that need always-on ‘eyes-on-glass’ to deliver live events and 24/7 programming channels, MQAR automatically switches between regions in seconds to recover from video quality degradation in one of the regions. This is designed to help deliver a high quality of experience to viewers.

Previously, you could use a CloudFront origin group to fail over between two AWS Elemental MediaPackage origins in different AWS Regions based on HTTP error codes. Now with MQAR, your live event streaming workflow has the resiliency to withstand video quality issues, including black frames, frozen or dropped frames, and repeated frames. AWS Elemental MediaLive analyzes the video input delivered from the source and dynamically generates a quality score reflecting perceived changes in video quality. Your CloudFront distribution then continuously selects the MediaPackage origin that reports the highest quality score. You can create CloudWatch alarms to be notified of quality issues using the provided quality-indicator metrics.
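
For the alerting piece, a sketch with the boto3 cloudwatch client is below. The metric and dimension names are placeholders rather than the documented MQAR metric names, so check the launch blog for the metrics your channel actually emits:

    import boto3

    cloudwatch = boto3.client("cloudwatch")

    # Alarm when the (hypothetical) per-channel quality score stays low
    # for three consecutive minutes.
    cloudwatch.put_metric_alarm(
        AlarmName="mqar-quality-score-low",
        Namespace="AWS/MediaLive",
        MetricName="VideoQualityScore",     # placeholder metric name
        Dimensions=[{"Name": "ChannelId", "Value": "1234567"}],
        Statistic="Minimum",
        Period=60,
        EvaluationPeriods=3,
        Threshold=70,
        ComparisonOperator="LessThanThreshold",
        AlarmActions=["arn:aws:sns:us-east-1:111122223333:ops-alerts"],
    )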

To get started with MQAR, deploy a cross-region channel delivery using AWS Media Services and configure CloudFront to use MQAR in the origin group. CloudFormation support will be coming soon. There is no additional cost for enabling MQAR; standard pricing applies for CloudFront and AWS Media Services. To learn more about MQAR, refer to the launch blog and the CloudFront Developer Guide.

Read more


Amazon IVS introduces Multitrack Video to save input costs

Today we are launching Multitrack Video, a new capability in Amazon Interactive Video Service (Amazon IVS) which can save you up to 75% on live video input costs with standard channels. With Multitrack Video, you send multiple video quality renditions directly from your own device instead of using Amazon IVS for transcoding.

Multitrack Video is supported in OBS Studio. Once you enable Multitrack Video on your IVS channels, your broadcasters can simply check a box in OBS to automatically send an optimal set of video qualities based on their hardware and network capabilities. This enables viewers to watch in the best quality for their connection, while you pay $0.50 an hour for standard channel input compared to $2.00 an hour without Multitrack Video. For more pricing information, visit the Amazon IVS pricing page.
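
Enabling the capability on a channel might look like the following boto3 sketch; the multitrack configuration shape shown here is an assumption, so consult the Multitrack Video documentation for the exact fields:

    import boto3

    ivs = boto3.client("ivs")

    # Create a standard channel that allows multitrack input from OBS.
    channel = ivs.create_channel(
        name="multitrack-demo",
        type="STANDARD",
        multitrackInputConfiguration={   # assumed field names
            "enabled": True,
            "maximumResolution": "FULL_HD",
            "policy": "ALLOW",
        },
    )
    print(channel["channel"]["ingestEndpoint"])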

Amazon IVS is a managed live streaming solution that is designed to make low-latency or real-time video available to viewers around the world. Video ingest and delivery are available over a managed network of infrastructure optimized for live video. Visit the AWS region table for a full list of AWS Regions where the Amazon IVS console and APIs for control and creation of video streams are available.

To get started, see the Multitrack Video documentation.

Read more


AWS Deadline Cloud now supports GPU accelerated EC2 Instance Types

Today, AWS announces support for NVIDIA GPU accelerated instances in service-managed fleets (SMF) in AWS Deadline Cloud. AWS Deadline Cloud is a fully managed service that simplifies render management for teams creating computer-generated 2D/3D graphics and visual effects for films, TV shows, commercials, games, and industrial design.

Now you can use Deadline Cloud SMF to create auto-scaling fleets of GPU accelerated instances without having to set up, configure, or manage the worker infrastructure yourself. Deadline Cloud SMF can be set up in minutes to deploy NVIDIA GPU accelerated EC2 Instance Types (G4dn, G5, G6, Gr6, G6e) with NVIDIA GRID drivers and Windows Server 2022 or Linux (AL2023) operating systems. This expands the digital content creation software you can use within a fully managed render farm.

NVIDIA GPU accelerated instances are supported in service-managed fleets in all AWS Regions where Deadline Cloud is available.

For more information, please visit the Deadline Cloud product page, and see the Deadline Cloud pricing page for price details.

Read more


messaging

Introducing the AWS Digital Sovereignty Competency

Digital sovereignty has been a priority for AWS since its inception. AWS remains committed to offering customers the most advanced sovereignty controls and features in the cloud. With the increasing importance of digital sovereignty for public sector organizations and regulated industries, AWS is excited to announce the launch of the AWS Digital Sovereignty Competency.

The AWS Digital Sovereignty Competency curates and validates a community of AWS Partners with advanced sovereignty capabilities and solutions, including deep experience in helping customers address sovereignty and compliance requirements. These partners can assist customers with residency control, access control, resilience, survivability, and self-sufficiency.

Through this competency, customers can search for and engage with trusted local and global AWS Partners that have technically validated experience in addressing customers’ sovereignty requirements. Many partners have built sovereign solutions that leverage AWS innovations and built-in controls and security features.

In addition to these offerings, AWS Digital Sovereignty Partners provide skills and knowledge of local compliance requirements and regulations, making it easier for customers to meet their digital sovereignty requirements while benefiting from the performance, agility, security, and scale of the AWS Cloud.

Read more


Amazon Connect launches AI guardrails for Amazon Q in Connect

Amazon Q in Connect, a generative AI powered assistant for customer service, now enables customers to natively configure AI guardrails to implement safeguards based on their use cases and responsible AI policies. Contact center administrators can configure company-specific guardrails for Amazon Q in Connect to filter harmful and inappropriate responses, redact sensitive personal information, and limit incorrect information in the responses due to potential large language model (LLM) hallucination.

For end-customer self-service scenarios, guardrails can be used to ensure Amazon Q in Connect responses are constrained to only company-related topics and maintain professional communication standards. Additionally, when agents leverage Amazon Q in Connect to help solve customer issues, these guardrails can prevent accidental exposure of personally identifiable information (PII) to agents. Contact center administrators will have the flexibility to configure these guardrails and selectively apply them to different contact types.

For region availability, please see the availability of Amazon Connect features by Region. To learn more about Amazon Q in Connect, please visit the website or see the help documentation.
 

Read more


Amazon Connect now provides the ability to record audio during IVR and other automated interactions

Amazon Connect now enables you to record audio when your customers engage with self-service interactive voice response (IVR) and other automated interactions. On the Contact details page, you can listen to the recording or review logs, which include information such as the bot transcription or touch-tone menu selection. Recording settings can be configured using the “Set recording and analytics behavior” block in the Amazon Connect drag-and-drop workflow designer, allowing you to specify which portions of the experience to record. For example, you can pause and resume recording around sensitive exchanges, such as when a customer shares their credit card or social security number. These new capabilities make it easy for you to monitor and audit the quality of your self-service experiences or to record interactions for compliance or policy purposes.
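
Alongside the flow block, the existing Amazon Connect APIs can pause and resume recording programmatically mid-contact. A minimal boto3 sketch (instance and contact IDs are placeholders):

    import boto3

    connect = boto3.client("connect")

    ids = {
        "InstanceId": "11111111-2222-3333-4444-555555555555",
        "ContactId": "aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee",
        "InitialContactId": "aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee",
    }

    # Pause recording while the customer reads out a card number...
    connect.suspend_contact_recording(**ids)

    # ...then resume once the sensitive exchange is over.
    connect.resume_contact_recording(**ids)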

These features are available in all AWS regions where Amazon Connect is available. To learn more, see the Amazon Connect Administrator Guide and Amazon Connect API Reference. To learn more about Amazon Connect, the AWS contact center as a service solution on the cloud, please visit the Amazon Connect website.

Read more


Amazon Connect Contact Lens now supports external voice

Amazon Connect now integrates with other voice systems for real-time and post-call analytics, so you can use Amazon Connect Contact Lens with your existing voice system to help improve customer experience and agent performance.

Amazon Connect Contact Lens provides call recordings, conversational analytics (including contact transcripts, sensitive data redaction, content categorization, theme detection, sentiment analysis, real-time alerts, and post-contact summaries), and agent performance evaluations (including evaluation forms, automated evaluation, and supervisor review), with a rich user experience to display, search, and filter customer interactions, plus programmatic access to data streams and the data lake. If you are an existing Amazon Connect customer, you can expand your use of Contact Lens to other voice systems for consistent analytics in a single data warehouse. If you want to migrate your contact center to Amazon Connect, you can start with Contact Lens analytics and performance insights before migrating your agents.

Contact Lens supports external voice in the US East (N. Virginia) and US West (Oregon) AWS Regions.

To learn more about Amazon Connect and call transfers, review the following resources:

Read more


Amazon Connect now supports external voice transfers

Amazon Connect now integrates with other voice systems to directly transfer voice calls and metadata without using the public telephone network. You can use Amazon Connect telephony and Interactive Voice Response (IVR) with your existing voice systems to help improve customer experience and reduce costs.

Amazon Connect IVR provides conversational voice bots in 30+ languages with natural language processing, automated speech recognition, and text-to-speech to help personalize customer service, provide self-service for complex tasks, and collect information to reduce agent handling time. Now, you can use Amazon Connect to modernize the IVR experience of your existing contact center and your enterprise and branch voice systems. Additionally, enterprises migrating their contact center to Amazon Connect can start with Connect telephony and IVR for immediate modernization ahead of agent migration.

External voice transfer is available in the US East (N. Virginia) and US West (Oregon) AWS Regions.

To learn more about Amazon Connect and call transfers, review the following resources:

Read more


Amazon Connect Contact Lens launches built-in dashboards to analyze conversational AI bot performance

Amazon Connect Contact Lens now offers built-in dashboards to monitor the performance of your conversational AI bots, making it easy for you to analyze and continuously improve your self-service and automated experiences. From the Contact Lens flows performance dashboard, you can view Amazon Lex and Amazon Q in Connect bot analytics, including how your customers communicate their issues, the most common contact reasons, and the outcomes of the interactions. From the dashboard, you can navigate to the bot management page and make updates in a couple of clicks to improve bot accuracy. These new capabilities make it easy for you to analyze the performance of your conversational AI experiences, all within the Connect web UI.

These features are available in all AWS regions where Amazon Connect and Amazon Lex are available. To learn more about these metrics and the flows performance dashboard, see the Amazon Connect Administrator Guide and Amazon Connect API Reference. To learn more about Amazon Connect, the AWS contact center as a service solution on the cloud, please visit the Amazon Connect website.

Read more


Amazon Connect launches simplified conversational AI bot creation

Amazon Connect now makes it as easy as a few clicks for you to create, edit, and continuously improve conversational AI bots for interactive voice response (IVR) and chatbot self-service experiences. Now, you can configure and design your bots (powered by Amazon Lex) directly from the Connect web UI, allowing you to deliver dynamic, conversational AI experiences to understand your customer’s intent, ask follow-on questions, and automate resolution of their issues.

By using Amazon Connect’s drag-and-drop workflow designer, you can enhance your bots with Amazon Connect Customer Profiles, making it easy to deliver personalized experiences with no code. For example, you can upgrade your touch-tone menu (e.g., Press 1 for Account Support) with a bot to greet your customer by name, proactively offer to help them pay an upcoming bill, and offer them additional support options. In a few clicks, you can also customize and launch the Connect widget to further enhance your customer’s digital experience. These new bot building capabilities in Amazon Connect make it easy for you to create and launch bot-powered self-service experiences by reducing the need to manage multiple applications or custom integrations.

To learn more, refer to our public documentation. This new feature is available in all AWS regions where Amazon Connect and Amazon Lex are available. To learn more about Amazon Connect, the AWS contact center as a service solution on the cloud, please visit the Amazon Connect website.

Read more


Amazon Connect now supports WhatsApp Business messaging

Amazon Connect now supports WhatsApp Business messaging, enabling you to deliver personalized experiences to your customers who use WhatsApp, one of the world's most popular messaging platforms, increasing customer satisfaction and reducing costs. Rich messaging features such as inline images and videos, list messages, and quick replies allow your customers to browse product recommendations, check order status, or schedule appointments.

Amazon Connect for WhatsApp Business messaging makes it easy for your customers to initiate a conversation by simply tapping on WhatsApp-enabled phone numbers or chat buttons published on your website or mobile app, or by scanning a QR code. As a result, you are able to reduce call volumes and lower operational costs by deflecting calls to chats. WhatsApp Business messaging uses the same generative AI-powered chatbots, routing, configuration, analytics, and agent experience as voice, chat, SMS, Apple Messages for Business, tasks, web calling, and email in Amazon Connect, making it easy for you to deliver seamless omnichannel customer experiences.

Amazon Connect for WhatsApp Business messaging is available in US East (N. Virginia), US West (Oregon), Europe (Frankfurt), Europe (London), and Asia Pacific (Singapore) regions.

To learn more and get started, please refer to the help documentation, pricing page, or visit the Amazon Connect website.

Read more


Amazon Connect launches generative AI-powered self-service with Amazon Q in Connect

Amazon Q in Connect, a generative-AI powered assistant for customer service, now supports end-customer self-service interactions across Interactive Voice Response (IVR) and digital channels. With this launch, businesses can augment their existing self-service experiences with generative AI capabilities to create more personalized and dynamic experiences to improve customer satisfaction and first contact resolution.

Amazon Q in Connect can directly converse with end-customers, reasoning over undefined intents in ambiguous scenarios to provide accurate responses. For example, Amazon Q in Connect can help end-customers complete actions such as booking trips, applying for loans, or scheduling doctor appointments. Amazon Q in Connect also supports Q&A, helping end-customers get the information they need and asking follow-up questions to determine the right answers. If a customer requires additional support, Amazon Q in Connect provides a seamless transition to customer service agents, preserving the full conversation context to ensure a cohesive customer experience.

For region availability, please see the availability of Amazon Connect features by Region. To learn more about Amazon Q in Connect, please visit the website or see the help documentation.

Read more


Amazon Connect now makes it easier to collect sensitive customer data within chats

Amazon Connect now makes it easier for you to collect sensitive customer data and deliver seamless transactional experiences within chats, enhancing the overall customer experience. You can now support inline chat interactions such as processing payments, updating customer information (e.g., address changes), or collecting customer data (e.g., account details) without requiring the customer to switch channels or navigate to another page on your website.

To get started, use Amazon Connect’s No-code UI builder to create step-by-step guides with forms, enable the ‘This view has sensitive data’ option in the Show view flow block to ensure compliance with data protection and privacy standards, and use a Lambda function to send the collected customer data to any application (e.g., a payment processor).
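
A minimal sketch of that last step, assuming a Lambda handler that receives the collected fields through the standard Connect Lambda event shape and hands them to a hypothetical payment processor (the field names are illustrative):

    import json

    def handler(event, context):
        # Connect passes flow parameters under Details.Parameters; the
        # specific keys depend on the form fields you defined.
        fields = event.get("Details", {}).get("Parameters", {})

        # Forward to your payment processor here (omitted); never log
        # the sensitive values themselves.
        print(json.dumps({"received_fields": sorted(fields)}))
        return {"status": "ok"}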

This feature is supported in all commercial AWS regions where Amazon Connect is offered. To learn more and get started please refer to the help documentation or read the blog post.

Read more


Amazon Connect now allows agents to self-assign tasks

Amazon Connect now allows agents to create and assign a task to themselves by checking a box in the agent workspace or contact control panel (CCP). For example, an agent can schedule a follow-up action, such as providing an update to a customer, by creating a task for a preferred time and checking the self-assignment option. Amazon Connect Tasks empowers you to prioritize, assign, and track all contact center agent tasks to completion, improving agent productivity and ensuring customer issues are quickly resolved.

This feature is supported in all AWS regions where Amazon Connect is offered. To learn more, see our documentation. To learn more about Amazon Connect, the easy-to-use cloud contact center, visit the Amazon Connect website.

Read more


Amazon Connect Contact Lens launches calibrations for agent performance evaluations

You can now perform calibrations to drive consistency and accuracy in how managers evaluate agent performance, so that agents receive consistent feedback. During a calibration, multiple managers evaluate the same contact using the same evaluation form. You can then review differences between the evaluations completed by different managers, both to align them on evaluation best practices and to identify opportunities to improve the evaluation form (e.g., rephrasing an evaluation question to be more specific so that managers answer it consistently). You can also compare managers' answers with an approved evaluation to measure and improve manager accuracy in evaluating agent performance.

This feature is available in all regions where Contact Lens performance evaluations is already available. To learn more, please visit our documentation and our webpage. For information about Contact Lens pricing, please visit our pricing page.
 

Read more


Amazon Connect now provides granular disconnect reasons for chats

The Amazon Connect contact record now includes granular disconnect reasons for chats, enabling you to improve and personalize customer experiences based on how a chat is ended. For example, if the agent disconnects due to a network issue, you can route the chat to the next best agent, or if the customer disconnects due to idleness, you can proactively send an SMS to re-engage them.

Disconnect reasons are available for chats in all AWS regions where Amazon Connect is offered. To learn more refer to the help documentation.

Read more


Amazon Connect Email is now generally available

Amazon Connect Email provides built-in capabilities that make it easy for you to prioritize, assign, and automate the resolution of customer service emails, improving customer satisfaction and agent productivity. With Amazon Connect Email, you can receive and respond to emails sent by customers to business addresses or submitted via web forms on your website or mobile app. You can configure auto-responses, prioritize emails, create or update cases, and route emails to the best available agent when agent assistance is required. Additionally, these capabilities work seamlessly with Amazon Connect outbound campaigns enabling you to deliver proactive and personalized email communications.

To get started, configure an email address using the Amazon Connect-provided domain or integrate your own email domain using Amazon Simple Email Service (Amazon SES). Amazon Connect Email uses the same configuration, routing, analytics, and agent experience as voice, chat, SMS, tasks, and web-calling in Amazon Connect, making it easy for you to deliver seamless omnichannel customer experiences.

Amazon Connect Email is available in the US East (N. Virginia), US West (Oregon), Africa (Cape Town), Asia Pacific (Seoul), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Canada (Central), Europe (Frankfurt), and Europe (London) regions. To learn more and get started, please refer to the help documentation, pricing page, or visit the Amazon Connect website.
 

Read more


AWS re:Post Private is now integrated with Amazon Bedrock to offer contextual knowledge to organizations

Today, AWS re:Post Private announces its integration with Amazon Bedrock, ushering in a new era of contextualized knowledge management for customer organizations. This feature transforms traditional organizational knowledge practices into a dynamic system of collaborative intelligence, where human expertise and AI capabilities complement each other to build collective wisdom.

At the heart of this integration is re:Post Agent for re:Post Private, an AI-powered assistant that delivers highly contextual technical answers to customer questions, drawing from a rich repository of curated knowledge resources. re:Post Agent for re:Post Private uniquely combines customer-specific private knowledge with AWS's vast public knowledge base, ensuring responses are not only timely but also tailored to each organization's specific context and needs.

By adopting re:Post Private with this new integration, organizations can now harness the full potential of collaborative intelligence. This powerful alliance between human insight and AI efficiency opens up new avenues for problem-solving, innovation, and knowledge sharing within enterprises. Unlock the transformative possibilities of collaborative intelligence and elevate your organization's knowledge management capabilities with re:Post Private.

Read more


Amazon Connect Contact Lens launches custom dashboards

Amazon Connect Contact Lens now supports creating custom dashboards, as well as adding or removing widgets from existing dashboards. With these dashboards, you can view and compare real-time and historical aggregated performance, trends, and insights using custom-defined time periods (e.g., week over week), summary charts, and time-series charts. Now, you can further customize these dashboards by changing widgets to create the view that best fits your specific business need. For example, if you want to monitor self-service, queue, and agent performance, you can add all three types of widgets to your dashboard for a single end-to-end view of contact center performance.

This feature is available in all commercial AWS regions where Amazon Connect is offered. To learn more about dashboards, see the Amazon Connect Administrator Guide. To learn more about Amazon Connect, the AWS cloud-based contact center, please visit the Amazon Connect website.

Read more


AWS End User Messaging launches message feedback tracking

Today, AWS End User Messaging allows you to track feedback for messages sent through the SMS and MMS channels. AWS End User Messaging provides developers with a scalable and cost-effective messaging infrastructure without compromising the safety, security, or results of their communications.

For each SMS and MMS message you send, you can now track feedback rates such as one-time passcode conversions, promotional offer link clicks, or online shopping cart additions. Message feedback rates let you track leading indicators of message performance specific to your use case.

To learn more, visit the AWS End User Messaging SMS User Guide.
 

Read more


AWS End User Messaging announces integration with Amazon EventBridge

Today, AWS End User Messaging announces an integration with Amazon EventBridge. AWS End User Messaging provides developers with a scalable and cost-effective messaging infrastructure without compromising the safety, security, or results of their communications.

Now your SMS, MMS, and voice delivery events, which contain information like the status of the message, price, and carrier information, will be available in EventBridge. You can then send your SMS events to other AWS services and the many SaaS applications that EventBridge integrates with. EventBridge also allows you to create rules that filter and route your SMS events to event destinations you specify.
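
A sketch of such a rule with the boto3 events client is below; the source and detail-type strings are assumptions, so check the SMS User Guide for the exact values your events carry:

    import boto3
    import json

    events = boto3.client("events")

    # Match SMS delivery events (assumed pattern values)...
    events.put_rule(
        Name="sms-delivery-events",
        EventPattern=json.dumps({
            "source": ["aws.sms-voice"],
            "detail-type": ["SMS Delivery Status"],
        }),
    )

    # ...and route them to an existing SQS queue (placeholder ARN; the
    # queue policy must allow EventBridge to send messages).
    events.put_targets(
        Rule="sms-delivery-events",
        Targets=[{"Id": "sms-events-queue",
                  "Arn": "arn:aws:sqs:us-east-1:111122223333:sms-events"}],
    )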

To learn more, visit the AWS End User Messaging SMS User Guide.
 

Read more


Amazon Connect launches support for callbacks when using Chats and Tasks

Amazon Connect now enables you to request callbacks from Chats and Tasks in addition to voice calls. For example, if a customer reaches out after hours when no agent is available, they can request a callback by sending a chat message or completing a webform request (via Tasks). Callbacks allow end-customers to get a call from an available agent during normal business hours, without requiring them to stay on the line.

This feature is supported in all AWS regions where Amazon Connect is offered. To learn more, see our documentation. To learn more about Amazon Connect, the easy-to-use cloud contact center, visit the Amazon Connect website.

Read more


migration

Announcing AWS Transfer Family web apps

AWS Transfer Family web apps are a new resource that you can use to create a simple interface for accessing your data in Amazon S3 through a web browser. With Transfer Family web apps, you can provide your workforce with a fully managed, branded, and secure portal for your end users to browse, upload, and download data in S3.

Transfer Family offers fully managed file transfers over SFTP, FTPS, FTP, and AS2, enabling seamless workload migrations with no need to change your third-party clients or their configurations. Now, you can also enable browser-based transfers for non-technical users in your organization through a user-friendly interface. Transfer Family web apps are integrated with AWS IAM Identity Center and S3 Access Grants, enabling fine-grained access controls that map corporate identities in your existing directories directly to S3 datasets. With a few clicks in the Transfer Family console, you can generate a shareable URL for your web app. Then, your authenticated users can access the data you authorize through their web browsers.

Transfer Family web apps are available in select AWS Regions. You can get started with Transfer Family web apps in the Transfer Family console. For pricing, visit the Transfer Family pricing page. To learn more, read the AWS News Blog or visit the Transfer Family User Guide.
 

Read more


Announcing Amazon Q Developer transformation capabilities for VMware (Preview)

Today, AWS announces the preview of Amazon Q Developer transformation capabilities for VMware, the first generative AI–powered assistant that can simplify and accelerate the migration and modernization of VMware workloads to Amazon Elastic Compute Cloud (EC2). These new capabilities help you streamline complex VMware transformation tasks, reducing the time and effort required to move VMware workloads to the cloud. Using advanced AI techniques to automate critical steps in the migration process, Amazon Q Developer helps accelerate your cloud journey, reduce costs, and drive innovation.

Amazon Q Developer transformation agents simplify and automate VMware transformation tasks, including on-premises application data discovery, wave planning, network translation and deployment, and orchestration of the overall migration process. Two of the most challenging aspects of VMware transformations, wave planning and network translation, are now automated using VMware domain-expert agents and large language models (LLMs). These AI-powered tools convert VMware networking configurations and firewall rules into native AWS network constructs, significantly reducing complexity and potential errors. Importantly, Amazon Q Developer maintains a balance between automation and human oversight, proactively prompting for user input at key decision points to ensure accuracy and control throughout the migration and modernization process.

The preview of Amazon Q Developer transformation capabilities for VMware is available in the US East (N. Virginia) AWS Region. To learn more about Amazon Q Developer and how it can accelerate your migration to AWS, visit Amazon Q Developer.

Read more


AWS Application Discovery Service now supports data from commercially available discovery tools

Today, AWS announces additional file support for AWS Application Discovery Service (ADS), adding the ability to import VMware data generated by third-party data center tools. With today’s launch, you can directly take an export from Dell Technologies’ RVTools and load it into ADS without any file manipulation.

ADS provides a system of record for the configuration, performance, tags, network connections, and application grouping of your existing on-premises workloads. With support for additional file formats, you now have the option to kick off your migration journey using the data you already have. At any later time, you can deploy either ADS Discovery Agents or the ADS Agentless Collector, and the data will automatically be merged into a unified view of your data center.

These new capabilities are available in all AWS Regions where AWS Application Discovery Service is available.

To learn more, please see the user guide for AWS Application Discovery Service. For more information on using the ADS import action via the AWS SDK or CLI, please see the API reference.

Read more


AWS Application Discovery Service adds integration with AWS Application Migration Service

Today, AWS announces an integration between AWS Application Discovery Service (ADS) and AWS Application Migration Service (MGN), which allows data collected about your on-premises workloads to directly feed into your migration execution plan. This new capability provides a one-click export of the on-premises server configuration, tags, application grouping, and Amazon EC2 recommendations gathered during planning, in a format supported by MGN.

ADS provides a system of record for the configuration, performance, tags, and application groupings of your existing on-premises workloads. Now, when using the Amazon EC2 instance recommendations feature, you are also provided an MGN-ready inventory file. This file can then be directly imported into MGN, removing the need to rediscover your workloads.

This new no-cost capability is available in all AWS Regions where AWS Application Discovery Service is available.

To learn more, please see the user guides for AWS Application Discovery Service and AWS Application Migration Service.
 

Read more


AWS Application Discovery Service now supports AWS PrivateLink

AWS Application Discovery Service (ADS) now supports AWS PrivateLink, providing private connectivity between virtual private clouds (VPCs), on-premises networks, and ADS without exposing traffic to the public internet. With this integration, administrators can use VPC endpoint policies to seamlessly route their discovery data from either the ADS Agentless Collector or the ADS Discovery Agent directly into ADS for analysis and migration planning.

This new feature is available in all AWS Regions where AWS Application Discovery Service and AWS PrivateLink are available.

To get started, see the AWS PrivateLink section of AWS Application Discovery Service user guide.

Read more


AWS DMS now delivers improved performance for data validation

AWS Database Migration Service (AWS DMS) has enhanced data validation performance for database migrations, enabling customers to validate large datasets with significantly faster processing times.

This enhanced data validation is now available in version 3.5.4 of the replication engine for both full load and full load with CDC migration tasks. Currently, this enhancement supports migration paths from Oracle to PostgreSQL, SQL Server to PostgreSQL, Oracle to Oracle, and SQL Server to SQL Server, with additional migration paths planned for future releases.

To learn more about data validation performance improvements with AWS DMS, please refer to the AWS DMS Technical Documentation.

Read more


AWS Transfer Family is now available in the AWS Asia Pacific (Malaysia) Region

Customers in the AWS Asia Pacific (Malaysia) Region can now use AWS Transfer Family.

AWS Transfer Family provides fully managed file transfers for Amazon Simple Storage Service (Amazon S3) and Amazon Elastic File System (Amazon EFS) over the SSH File Transfer Protocol (SFTP), File Transfer Protocol (FTP), FTP over SSL (FTPS), and Applicability Statement 2 (AS2). In addition to file transfers, Transfer Family enables common file processing and event-driven automation for managed file transfer (MFT) workflows, helping customers modernize and migrate their business-to-business file transfers to AWS.

To learn more about AWS Transfer Family, visit our product page and user guide. See the AWS Region Table for complete regional availability information.

Read more


Network connections are now discoverable with AWS Application Discovery Service Agentless Collector

Starting today, the AWS Application Discovery Service Agentless Collector supports the discovery of on-premises network connections, allowing you to understand your on-premises dependencies and plan your AWS migration. With the Agentless Collector, one virtual appliance deployed within your on-premises data center can discover and monitor the performance of VMware virtual machines, database metadata and utilization metrics, and now network connections.

Using network connection data to group servers into applications is an important step in building a migration plan to the AWS Cloud. By using AWS Migration Hub to explore the relationships and dependencies between servers, migration practitioners can be confident about which servers should be part of a migration wave or application.

The network connections capability is now generally available, and can be used in all AWS Regions where AWS Application Discovery Service is available. Customers already running the Agentless Collector with active auto-updates only need to provide read-only credentials to enable the feature.

To learn more, read the user guide. Accelerate your migration with AWS Application Discovery Service today.

Read more


AWS Mainframe Modernization achieves FedRAMP Moderate and SOC compliance

AWS Mainframe Modernization has added approval for Federal Risk and Authorization Management Program (FedRAMP) Moderate and System and Organization Controls (SOC) reports.

AWS Mainframe Modernization has achieved Federal Risk and Authorization Management Program (FedRAMP) Moderate authorization, approved by the FedRAMP Joint Authorization Board (JAB) and listed on the FedRAMP Marketplace, for the AWS US East / West Region, which includes the US East (Ohio), US East (N. Virginia), US West (N. California), and US West (Oregon) Regions. FedRAMP is a US government-wide program that delivers a standard approach to security assessment, authorization, and continuous monitoring for cloud products and services.

AWS Mainframe Modernization is now System and Organization Controls (SOC) compliant. AWS System and Organization Controls (SOC) Reports are independent third-party examination reports that demonstrate how AWS achieves key compliance controls and objectives. The purpose of these reports is to help you and your auditors understand the AWS controls established to support operations and compliance. AWS Mainframe Modernization is SOC compliant in all AWS regions where it is generally available, including the AWS GovCloud (US) Regions.

The AWS Mainframe Modernization service allows customers and partners to modernize and migrate on-premises mainframe applications and to test, run, and operate them on AWS Cloud native managed runtimes. It enables modernization patterns like refactor and replatform, as well as augmentation patterns supported by data replication and file transfer. To learn more, please visit the AWS Mainframe Modernization product and documentation pages.
 

Read more


mobile-services

Storage Browser for Amazon S3 is now generally available

Amazon S3 is announcing the general availability of Storage Browser for S3, an open source component that you can add to your web applications to provide your end users with a simple interface for data stored in S3. With Storage Browser for S3, you can provide authorized end users, such as customers, partners, and employees, with access to easily browse, download, and upload data in S3 directly from your own applications. Storage Browser for S3 is available in the AWS Amplify React and JavaScript client libraries.

With the general availability of Storage Browser for S3, your end users can now search for their data based on file name and can copy and delete data they have access to. Additionally, Storage Browser for S3 now automatically calculates checksums of the data your end users upload and blocks requests that do not pass these durability checks.

We welcome your contributions and feedback on our roadmap, which outlines the plan for adding new capabilities to Storage Browser for S3. Storage Browser for S3 is backed by AWS Support, which means customers with AWS Business and Enterprise Support plans get 24/7 access to cloud support engineers. To learn more and get started, visit the AWS News Blog and the UI documentation.
 

Read more


Amazon Connect now supports WhatsApp Business messaging

Amazon Connect now supports WhatsApp Business messaging, enabling you to deliver personalized experiences to your customers who use WhatsApp, one of the world's most popular messaging platforms, increasing customer satisfaction and reducing costs. Rich messaging features such as inline images and videos, list messages, and quick replies allow your customers to browse product recommendations, check order status, or schedule appointments.

Amazon Connect for WhatsApp Business messaging makes it easy for your customers to initiate a conversation by simply tapping on WhatsApp-enabled phone numbers or chat buttons published on your website or mobile app, or by scanning a QR code. As a result, you are able to reduce call volumes and lower operational costs by deflecting calls to chats. WhatsApp Business messaging uses the same generative AI-powered chatbots, routing, configuration, analytics, and agent experience as voice, chat, SMS, Apple Messages for Business, tasks, web calling, and email in Amazon Connect, making it easy for you to deliver seamless omnichannel customer experiences.

Amazon Connect for WhatsApp Business messaging is available in US East (N. Virginia), US West (Oregon), Europe (Frankfurt), Europe (London), and Asia Pacific (Singapore) regions.

To learn more and get started, please refer to the help documentation, pricing page, or visit the Amazon Connect website.

Read more


AWS Amplify introduces passwordless authentication with Amazon Cognito

AWS Amplify is excited to announce support for Amazon Cognito's new passwordless authentication features, enabling developers to implement secure sign-in methods using SMS one-time passwords, email one-time passwords, and WebAuthn passkeys in their applications with Amplify client libraries for JavaScript, Swift, and Android. This update simplifies the implementation of passwordless authentication flows, addressing the growing demand for more secure and user-friendly login experiences while reducing the risks associated with traditional password-based systems.

This new capability enhances application security and user experience by eliminating the need for traditional passwords, reducing the risk of credential-based attacks while streamlining the login process. Passwordless authentication is ideal for organizations aiming to strengthen security and increase user adoption across various sectors, including e-commerce, finance, and healthcare. By removing the frustration of remembering complex passwords, this feature can significantly improve user engagement and simplify account management for both users and organizations.

The passwordless authentication feature is now available in all AWS regions where Amazon Cognito is supported, enabling developers worldwide to leverage this functionality in their applications.

To get started with passwordless authentication in AWS Amplify, visit the AWS Amplify documentation for detailed guides and examples.

Read more


Amazon Q Business now available as browser extension

Today, Amazon Web Services announces the general availability of Amazon Q Business browser extensions for Google Chrome, Mozilla Firefox, and Microsoft Edge. Users can now supercharge their browsers’ intelligence and receive context-aware, generative AI assistance, making it easy to get on-the-go help for their daily tasks.

The Amazon Q Business browser extension makes it easy for users to summarize web pages, ask questions about web content or uploaded files, and leverage large language model knowledge directly within their browser. With the browser extension, users can maximize reading productivity, streamline their research and analysis of complex information, and get instant help when creating content.

The Amazon Q Business browser extension is now available in the following AWS Regions: US East (N. Virginia), US East (Ohio), US West (N. California), and US West (Oregon).

Learn how to boost your productivity with AI-powered assistance within your browser by visiting the Amazon Q Business product page and the Amazon Q Business documentation site.

Read more


AWS AppSync now supports cross account sharing of GraphQL APIs

AWS AppSync is a fully managed API management service that connects applications to events, data, and AI models. AppSync now supports sharing GraphQL APIs across AWS accounts using AWS Resource Access Manager (RAM). This new feature allows customers to securely share their AppSync GraphQL APIs configured with IAM authorization, including private APIs, with other AWS accounts within their organization or with third parties.

Before today, customers had to set up additional networking infrastructure to share their private GraphQL APIs between their organization accounts. With this enhancement, customers can now centralize their GraphQL API management in a dedicated account and share access to these APIs with other accounts. For example, a central API team can create and manage private GraphQL APIs, then share them with different application or networking teams in different accounts. This approach simplifies API governance, improves security, and enables more flexible and scalable architectures for multi-account environments. Customers can optionally enable CloudTrail to capture API activities related to AWS AppSync GraphQL APIs as events for additional security and visibility.
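
Sharing an API this way is a standard AWS RAM resource share. A boto3 sketch, with a placeholder API ARN and consumer account ID:

    import boto3

    ram = boto3.client("ram")

    # Share an IAM-authorized GraphQL API with another account in the
    # organization.
    share = ram.create_resource_share(
        name="shared-graphql-apis",
        resourceArns=[
            "arn:aws:appsync:us-east-1:111122223333:apis/EXAMPLEAPIID",
        ],
        principals=["444455556666"],
        allowExternalPrincipals=False,
    )
    print(share["resourceShare"]["resourceShareArn"])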

This feature is now available in all AWS Regions where AWS AppSync is available.

To get started, refer to the AWS AppSync GraphQL documentation, and visit the AWS RAM console to start sharing your APIs. For more information about sharing resources with AWS RAM, see the AWS RAM User Guide.

Read more


Amazon CloudWatch Synthetics now supports Playwright runtime to create canaries with NodeJS

CloudWatch Synthetics, which continuously monitors web applications and APIs by running scripted canaries to help you detect issues before they impact end users, now supports the Playwright framework for creating NodeJS canaries, enabling comprehensive monitoring and diagnosis of complex user journeys and of issues that are challenging to automate with other frameworks.

Playwright is an open-source automation library for testing web applications. You can now create multi-tab workflows in a canary using the Playwright runtime, which stores the logs of failed runs directly in CloudWatch Logs in your AWS account. This replaces the previous method of storing logs as text files and enables you to leverage CloudWatch Logs Insights for query-based filtering, aggregation, and pattern analysis. You can query CloudWatch Logs for your canaries using the canary run ID or step name, making troubleshooting faster and more precise than relying on timestamp correlation to search logs. Playwright-based canaries also generate artifacts like reports, metrics, and HAR files, even when a canary times out, ensuring you have the data needed for root cause analysis in those scenarios. Additionally, the new runtime simplifies canary configuration by allowing customization through a JSON file, removing the need to call a library function in the canary code.
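
For example, once a canary's logs are in CloudWatch Logs, they can be queried with Logs Insights. The sketch below assumes the boto3 logs client; the log group name and filter string are placeholders rather than the documented Synthetics names:

    import time

    import boto3

    logs = boto3.client("logs")
    now = int(time.time())

    query_id = logs.start_query(
        logGroupName="/aws/lambda/cwsyn-my-canary",   # placeholder
        startTime=now - 3600,
        endTime=now,
        queryString=("fields @timestamp, @message "
                     "| filter @message like /canaryRunId/ | limit 20"),
    )["queryId"]

    # Poll until the query finishes, then print the matched log lines.
    result = logs.get_query_results(queryId=query_id)
    while result["status"] in ("Scheduled", "Running"):
        time.sleep(1)
        result = logs.get_query_results(queryId=query_id)
    print(result["results"])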

Playwright runtime is available for creating canaries in NodeJS in all commercial regions at no additional cost to users.

To learn more about the runtime, see documentation, or refer to the user guide to get started with CloudWatch Synthetics.

Read more


AWS AppSync launches AI gateway capabilities with new Amazon Bedrock integration in AppSync GraphQL

AWS AppSync is a fully managed API management service that connects applications to events, data, and AI models. Today, customers use AppSync as an AI gateway to trigger generative AI workflows and use subscriptions, powered by WebSockets, to return progressive updates from long-running invocations, implementing asynchronous patterns. In some cases, however, customers need to make short synchronous invocations to their models. AWS AppSync now supports the Amazon Bedrock runtime as a data source for GraphQL APIs, enabling seamless integration of generative AI capabilities. This new feature allows developers to make short synchronous invocations (10 seconds or less) to foundation models and inference profiles in Amazon Bedrock directly from their AppSync GraphQL APIs.

The integration supports calling the converse and invokeModel APIs. Developers can interact with Anthropic models like Claude 3.5 Haiku and Claude 3.5 Sonnet for data analysis and structured object generation tasks. They can also use Amazon Titan models to generate embeddings, create summaries, or extract action items from meeting minutes.

For longer-running invocations, customers can continue using AWS Lambda functions in event mode to interact with Bedrock models and send progressive updates to clients via subscriptions.

This new data source is available in all AWS Regions where AWS AppSync is available. To get started, customers can visit the AWS AppSync console and refer to the AWS AppSync documentation for more information.
 

Read more


AWS AppSync GraphQL APIs now support data plane logging to AWS CloudTrail

Today, AWS AppSync announced support for logging GraphQL data plane operations (query, mutation, and subscription operations and connect requests to your real-time WebSocket endpoint) using AWS CloudTrail, enabling customers to have greater visibility into GraphQL API activity in their AWS account for best practices in security and operational troubleshooting. AWS AppSync GraphQL is a serverless GraphQL service that gives application developers the ability to access data from multiple databases, micro-services, and AI models with a single GraphQL API request.

CloudTrail captures API activities related to AWS AppSync GraphQL APIs as events, including calls from the AWS console and calls made programmatically to the AWS AppSync GraphQL API endpoints. Using the information that CloudTrail collects, you can identify a specific request to an AWS AppSync GraphQL API, the IP address of the requester, the requester's identity, and the date and time of the request. Logging AWS AppSync GraphQL APIs using CloudTrail helps you enable operational and risk auditing, governance, and compliance of your AWS account.

To opt in to CloudTrail logging, you can simply configure logging for your GraphQL API using the AWS CloudTrail console or the CloudTrail APIs.
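
As a sketch, enabling data events for a trail uses the same advanced event selectors as other data plane sources; the resources.type string below is an assumption, so confirm it in the CloudTrail documentation:

    import boto3

    cloudtrail = boto3.client("cloudtrail")

    cloudtrail.put_event_selectors(
        TrailName="my-trail",   # placeholder trail name
        AdvancedEventSelectors=[
            {
                "Name": "Log AppSync GraphQL data plane operations",
                "FieldSelectors": [
                    {"Field": "eventCategory", "Equals": ["Data"]},
                    {"Field": "resources.type",
                     "Equals": ["AWS::AppSync::GraphQLApi"]},  # assumed
                ],
            }
        ],
    )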

Logging data plane AWS AppSync GraphQL APIs using AWS CloudTrail is now available in all commercial AWS Regions where AppSync is available. To learn more about logging data plane APIs using AWS CloudTrail, see AWS Documentation. For more information about CloudTrail, see the AWS CloudTrail User Guide.

Read more


Amazon Location Service launches Enhanced Places, Routes, and Maps

Amazon Location Service now offers enhanced Places, Routes, and Maps functionality, enabling developers to add advanced location capabilities to their applications more easily. These improvements introduce new capabilities and a streamlined developer experience to support location-based use cases across industries such as healthcare, transportation and logistics, and retail.

The enhancements include powerful search functions like Geocode to search addresses, Search Nearby to find local businesses, and Autocomplete to predict typed addresses, as well as richer place details including opening hours and contact information. This release also introduces advanced route planning capabilities such as Toll Cost calculation, Waypoint Optimization for multi-stop delivery, Isoline (serviceable area) calculation, and support for a variety of travel restrictions. For example, a food delivery app can use Search Nearby to find and recommend local restaurants, Waypoint Optimization to plan efficient driver routes for multiple orders, and Snap-to-Road to visualize the driver's traveled path on a map. These enhancements are accompanied by new standalone SDKs, making it easier for developers to start new mapping projects or migrate existing workloads to Amazon Location Service to benefit from cost reduction, privacy protection, and ease of integration with other AWS services.
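
A small sketch of the new Geocode function via the standalone Places SDK; the client name, parameter, and response fields below are assumptions based on this launch, so verify them in the Developer Guide:

    import boto3

    places = boto3.client("geo-places")   # assumed standalone client name

    result = places.geocode(QueryText="510 W Georgia St, Vancouver, BC")
    for item in result.get("ResultItems", []):
        # Title and Position are assumed response fields.
        print(item.get("Title"), item.get("Position"))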

Enhanced Places, Routes, and Maps are available in the following AWS Regions: US East (Ohio), US East (N. Virginia), US West (Oregon), Asia Pacific (Mumbai), Asia Pacific (Sydney), Asia Pacific (Tokyo), Canada (Central), Europe (Frankfurt), Europe (Ireland), Europe (London), Europe (Stockholm), Europe (Spain), and South America (São Paulo). To learn more, please visit the Developer Guide.
 

Read more


networking

VPC Lattice now includes TCP support with VPC Resources

With the launch of VPC Resources for Amazon VPC Lattice, you can now access all of your application dependencies through a VPC Lattice service network. You're able to connect to your application dependencies hosted in different VPCs, accounts, and on-premises using additional protocols, including TLS, HTTP, HTTPS, and now TCP. This new feature expands upon the existing HTTP-based services support, enabling you to share a wider range of resources across your organization.

With VPC Resource support, you can add your TCP resources, such as Amazon RDS databases, custom DNS, or IP endpoints, to a VPC Lattice service network. Now, you can share and connect to all your application dependencies, such as HTTP APIs and TCP databases, across thousands of VPCs, simplifying network management and providing centralized visibility with built-in access controls.

VPC Resources are generally available with VPC Lattice in Africa (Cape Town), Asia Pacific (Mumbai), Asia Pacific (Singapore), Asia Pacific (Sydney), Europe (Frankfurt), Europe (Ireland), Europe (Paris), Europe (Stockholm), Middle East (Bahrain), South America (São Paulo), US East (N. Virginia), US East (Ohio), and US West (Oregon).

To get started, read the VPC Resources launch blog, the architecture blog, and the VPC Lattice User Guide. To learn more about VPC Lattice, visit Amazon VPC Lattice Getting Started.
 

Read more


AWS announces AWS Data Transfer Terminal for high-speed data uploads

Today, AWS announces the launch of AWS Data Transfer Terminal, a secure physical location where you can bring your storage devices, connect directly to the AWS network, and upload data to AWS, including Amazon Simple Storage Service (Amazon S3), Amazon Elastic File System (Amazon EFS), and others, using a high-throughput connection. Currently, Data Transfer Terminals are located in Los Angeles and New York. You can reserve a time slot to visit your nearest Data Transfer Terminal facility to upload data.

AWS Data Transfer Terminals are ideal for customer scenarios that create or collect large amounts of data that need to be transferred to the AWS cloud quickly and securely on an as-needed basis. These use cases span various industries and applications, including video production data for processing in the media and entertainment industry, training data for Advanced Driver Assistance Systems (ADAS) in the automotive industry, migrating legacy data in the financial services industry, and uploading equipment sensor data in the industrial and agricultural sectors. By using Data Transfer Terminal, you can significantly reduce the time it takes to upload large amounts of data, enabling you to process ingested data within minutes, as opposed to days or weeks. Once data is uploaded to AWS, you can efficiently analyze large datasets with Amazon Athena, train and run machine learning models with ingested data using Amazon SageMaker, or build scalable applications using Amazon Elastic Compute Cloud (Amazon EC2).

To learn more, visit the Data Transfer Terminal product page and documentation. To get started, make a reservation at your nearby Data Transfer Terminal in the AWS Console.

Read more


AWS Verified Access now supports secure access to resources over non-HTTP(S) protocols (Preview)

Today, AWS announces the preview of a new AWS Verified Access feature that supports secure access to resources that connect over protocols such as TCP, SSH, and RDP. With this launch, Verified Access enables you to provide secure, VPN-less access to your corporate applications and resources using AWS zero trust principles. This feature eliminates the need to manage separate access and connectivity solutions for your non-HTTP(S) resources on AWS and simplifies security operations.

Verified Access evaluates each access request in real time based on the user’s identity and device posture, using fine-grained policies. With this feature, you can extend your existing Verified Access policies to enable secure access to non-HTTP(S) resources such as git-repositories, databases, and a group of EC2 instances. For example, you can create centrally managed policies that grant SSH access across your EC2 fleet to only authenticated members of the system administration team, while ensuring that connections are permitted only from compliant devices. This simplifies your security operations by allowing you to create, group, and manage access policies for applications and resources with similar security requirements from a single interface.

This feature of AWS Verified Access is available in preview in the following AWS Regions: US East (Ohio), US East (N. Virginia), US West (N. California), US West (Oregon), Canada (Central), Asia Pacific (Sydney), Asia Pacific (Jakarta), Asia Pacific (Tokyo), Asia Pacific (Mumbai), Asia Pacific (Singapore), Europe (Ireland), Europe (London), Europe (Frankfurt), Europe (Milan), Europe (Stockholm), South America (São Paulo), and Israel (Tel Aviv).

To learn more, visit the product page, launch blog and documentation.

Read more


AWS PrivateLink now supports native cross-region connectivity

Until now, Interface VPC endpoints only supported connectivity to VPC endpoint services in the same Region. This launch enables customers to connect to VPC endpoint services hosted in other AWS Regions in the same AWS partition over Interface endpoints.

As a service provider, you can enable access to your VPCE service for customers in all existing and upcoming AWS Regions without the need to set up additional infrastructure in each Region. As a service consumer, you can privately connect to VPCE services in other AWS Regions without setting up cross-Region peering or exposing your data over the public internet. Cross-Region enabled VPCE services can be accessed through Interface endpoints at a private IP address in your VPC, enabling simpler and more secure inter-Region connectivity.
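
As a minimal consumer-side sketch, the boto3 call below creates an interface endpoint in one Region that targets an endpoint service in another. The ServiceRegion parameter reflects this launch; the service name, VPC, subnet, and security group identifiers are placeholders, so treat this as an illustration rather than a drop-in script.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Create an interface endpoint in us-east-1 that connects to an endpoint
# service hosted in eu-west-1 (same AWS partition). All identifiers are
# placeholders; ServiceRegion is the new cross-Region parameter.
response = ec2.create_vpc_endpoint(
    VpcEndpointType="Interface",
    VpcId="vpc-0123456789abcdef0",
    ServiceName="com.amazonaws.vpce.eu-west-1.vpce-svc-0123456789abcdef0",
    ServiceRegion="eu-west-1",
    SubnetIds=["subnet-0123456789abcdef0"],
    SecurityGroupIds=["sg-0123456789abcdef0"],
)
print(response["VpcEndpoint"]["VpcEndpointId"])
```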

To learn about pricing for this feature, please see the AWS PrivateLink pricing page. The capability is available in US East (N. Virginia), US West (Oregon), Europe (Ireland), Asia Pacific (Singapore), South America (São Paulo), Asia Pacific (Tokyo) and Asia Pacific (Sydney) Regions. To learn more, visit AWS PrivateLink in the Amazon VPC Developer Guide.

Read more


AWS Cloud WAN simplifies on-premises connectivity via AWS Direct Connect

AWS Cloud WAN now supports native integration with AWS Direct Connect, simplifying connectivity between your on-premises networks and the AWS cloud. The new capability enables you to directly attach your Direct Connect gateways to Cloud WAN without the need for an intermediate AWS Transit Gateway, allowing seamless connectivity between your data centers or offices and your AWS Virtual Private Clouds (VPCs) across AWS Regions globally.

Cloud WAN allows you to build, monitor, and manage a unified global network that interconnects your resources in the AWS cloud and your on-premises environments. Direct Connect allows you to create a dedicated network connection to AWS, bypassing the public Internet. Until today, customers needed to deploy an intermediate transit gateway to interconnect their Direct Connect-based networks with Cloud WAN. Starting today, you can directly attach your Direct Connect gateway to a Cloud WAN core network, simplifying connectivity between your on-premises locations and VPCs. The new Cloud WAN Direct Connect attachment adds support for automatic route propagation between AWS and on-premises networks using Border Gateway Protocol (BGP). Direct Connect attachments also support existing Cloud WAN features such as central policy-based management, tag-based attachment automation, and segmentation for advanced security.
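
A minimal boto3 sketch of creating the new attachment type through the Network Manager API, assuming the CreateDirectConnectGatewayAttachment operation described for this launch; the core network ID, gateway ARN, and edge location are placeholders.

```python
import boto3

# Cloud WAN is managed through the Network Manager API.
nm = boto3.client("networkmanager", region_name="us-west-2")

# Attach an existing Direct Connect gateway directly to a Cloud WAN core
# network, with no intermediate transit gateway. Identifiers are placeholders.
response = nm.create_direct_connect_gateway_attachment(
    CoreNetworkId="core-network-0123456789abcdef0",
    DirectConnectGatewayArn=(
        "arn:aws:directconnect::123456789012:"
        "dx-gateway/0f1a2b3c-4d5e-6f70-8192-a3b4c5d6e7f8"
    ),
    EdgeLocations=["us-west-2"],
)
print(response)
```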

The new Direct Connect attachment for Cloud WAN is initially available in eleven commercial regions. Pricing for Direct Connect attachment is the same as any other Cloud WAN attachment. For additional information, please visit Cloud WAN documentation, pricing page and blog post.

Read more


AWS Application Load Balancer introduces Certificate Authority advertisement to simplify client behavior while using Mutual TLS

Application Load Balancer (ALB) now supports advertising the Certificate Authority (CA) subject names stored in its associated trust store to simplify the certificate selection experience. When you enable this feature, the ALB sends a list of CA subject names to clients attempting to connect to the load balancer. Clients can use this list to identify which of their certificates will be accepted by the ALB, which reduces connection errors during mutual authentication.
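
For listeners that already use mutual TLS in verify mode, this is a flag on the listener's mutual authentication settings. A minimal boto3 sketch follows, assuming the AdvertiseTrustStoreCaNames key introduced with this launch; the ARNs are placeholders.

```python
import boto3

elbv2 = boto3.client("elbv2")

# Advertise the CA subject names from the listener's trust store during
# the TLS handshake. ARNs are placeholders.
elbv2.modify_listener(
    ListenerArn=(
        "arn:aws:elasticloadbalancing:us-east-1:123456789012:"
        "listener/app/my-alb/0123456789abcdef/0123456789abcdef"
    ),
    MutualAuthentication={
        "Mode": "verify",
        "TrustStoreArn": (
            "arn:aws:elasticloadbalancing:us-east-1:123456789012:"
            "truststore/my-trust-store/0123456789abcdef"
        ),
        "AdvertiseTrustStoreCaNames": "on",
    },
)
```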

You can optionally configure the Advertise CA subject name feature using AWS APIs, AWS CLI, or the AWS Management Console. This feature is available for ALBs in all commercial AWS Regions, the AWS GovCloud (US) Regions and China Regions. To learn more, refer to the ALB documentation.

Read more


Cross-zone enabled Application Load Balancer now supports zonal shift and zonal autoshift

AWS Application Load Balancer (ALB) now supports Amazon Application Recovery Controller’s zonal shift and zonal autoshift features on load balancers with cross-zone load balancing enabled. Zonal shift allows you to quickly shift traffic away from an impaired Availability Zone (AZ) and recover from events such as bad application deployments and gray failures. Zonal autoshift safely and automatically shifts your traffic away from an AZ when AWS identifies potential impact to it.

Enabling cross-zone on ALBs is a popular configuration for customers that require an even distribution of traffic across application targets in multiple AZs. With this launch, customers can shift traffic away from an AZ in the event of a failure just like they are able to for cross-zone disabled load balancers. When zonal shift or autoshift is triggered, the ALB will block all traffic to targets in the AZ that is impacted and remove the zonal IP from DNS. You can configure this feature in two steps: First, enable configuration to allow zonal shift to act on your load balancer(s) using the ALB console or API. Second, trigger zonal shift or enable zonal autoshift for the chosen ALBs via Amazon Application Recovery Controller console or API.
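
Both steps can be scripted. The sketch below assumes the zonal_shift.config.enabled load balancer attribute key (our reading of the launch description) and uses the ARC zonal shift API; the ARN and zone ID are placeholders.

```python
import boto3

elbv2 = boto3.client("elbv2")
arc = boto3.client("arc-zonal-shift")

alb_arn = (
    "arn:aws:elasticloadbalancing:us-east-1:123456789012:"
    "loadbalancer/app/my-alb/0123456789abcdef"
)

# Step 1: allow zonal shift to act on this cross-zone enabled ALB.
# The attribute key is an assumption based on the launch description.
elbv2.modify_load_balancer_attributes(
    LoadBalancerArn=alb_arn,
    Attributes=[{"Key": "zonal_shift.config.enabled", "Value": "true"}],
)

# Step 2: shift traffic away from an impaired AZ for two hours.
arc.start_zonal_shift(
    resourceIdentifier=alb_arn,
    awayFrom="use1-az1",  # Availability Zone ID to drain
    expiresIn="2h",
    comment="Shift away from impaired AZ during incident",
)
```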

Zonal shift and zonal autoshift support on ALB is available in all commercial AWS Regions, including the AWS GovCloud (US) Regions. To learn more, please refer to the ALB zonal shift documentation.

Read more


AWS Application Load Balancer introduces header modification for enhanced traffic control and security

Application Load Balancer (ALB) now supports HTTP request and response header modification, giving you greater control over your application’s traffic and security posture without having to alter your application code.

This feature introduces three key capabilities: renaming specific load balancer generated headers, inserting specific response headers, and disabling the server response header. With header rename, you can now rename all ALB generated Transport Layer Security (TLS) headers that the load balancer adds to requests, which include the six mTLS headers and two TLS headers (version and cipher). This capability enables seamless integration with existing applications that expect headers in a specific format, thereby minimizing changes to your backends while using TLS features on the ALB. With header insertion, you can insert custom headers related to Cross-Origin Resource Sharing (CORS) and critical security headers like HTTP Strict-Transport-Security (HSTS). Finally, the capability to disable the ALB generated “Server” header in responses reduces exposure of server-specific information, adding an extra layer of protection to your application. These response header modification features give you the ability to centrally enforce your organization’s security posture at the load balancer instead of at individual applications, where enforcement can be prone to errors.
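
A minimal boto3 sketch of the insertion and disable cases using listener attributes. The attribute keys are our assumptions based on the launch description, and the ARN is a placeholder; consult the ALB documentation for the authoritative names.

```python
import boto3

elbv2 = boto3.client("elbv2")

# Insert an HSTS response header and suppress the ALB-generated "Server"
# header. Attribute keys are assumptions from the launch description.
elbv2.modify_listener_attributes(
    ListenerArn=(
        "arn:aws:elasticloadbalancing:us-east-1:123456789012:"
        "listener/app/my-alb/0123456789abcdef/0123456789abcdef"
    ),
    Attributes=[
        {
            "Key": "routing.http.response.strict_transport_security.header_value",
            "Value": "max-age=31536000; includeSubDomains",
        },
        {"Key": "routing.http.response.server.enabled", "Value": "false"},
    ],
)
```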

You can configure the header modification feature using AWS APIs, the AWS CLI, or the AWS Management Console. This feature is available for ALBs in all commercial AWS Regions, AWS GovCloud (US) Regions, and China Regions. To learn more, refer to the ALB documentation.
 

Read more


Amazon VPC IPAM now supports enabling IPAM for organizational units within AWS Organizations

Today, AWS announced the ability to enable and use Amazon VPC IP Address Manager (IPAM) for specific organizational units (OUs) within AWS Organizations. This allows you to enable IPAM for specific types of workloads, such as production workloads, or for specific business subsidiaries that are grouped as OUs in your organization.

VPC IPAM makes it easier for you to plan, track, and monitor IP addresses for your AWS workloads. Typically, you would enable IPAM for the entire organization, giving you a unified view of all the IP addresses. In some cases, you may want to enable IPAM only for parts of your organization. For example, you may want to enable IPAM for all types of workloads except your sandbox environment, which is isolated from your core network and contains only experimental workloads. Or, you may want to onboard selected business subsidiaries that need IPAM ahead of others in the organization. In such cases, you can use this new feature to enable IPAM for specific parts of your organization that are grouped as OUs.

Amazon VPC IPAM is available in all AWS Regions, including China (Beijing, operated by Sinnet), and China (Ningxia, operated by NWCD), and the AWS GovCloud (US) Regions.

To learn more about this feature, view the service documentation. For details on IPAM pricing, refer to the IPAM tab on the Amazon VPC Pricing page.

Read more


Amazon CloudWatch Internet Monitor adds AWS Local Zones support for VPC subnets

Today, Amazon CloudWatch Internet Monitor introduces support for select AWS Local Zones. Now, you can monitor internet traffic performance for VPC subnets deployed in Local Zones.

With this new feature, you can also view optimization suggestions that include Local Zones. On the Optimize tab in the Internet Monitor console, select the toggle to include Local Zones in traffic optimization suggestions for your application. Additionally, you can compare your current configuration with other supported Local Zones. Select the option to see more optimization suggestions, and then choose specific Local Zones to compare. By comparing latency differences, you can determine the proposed best configuration for your traffic.

At launch, CloudWatch Internet Monitor supports the following Local Zones: us-east-1-dfw-2a, us-east-1-mia-2a, us-east-1-qro-1a, us-east-1-lim-1a, us-east-1-atl-2a, us-east-1-bue-1a, us-east-1-mci-1a, us-west-2-lax-1a, us-west-2-lax-1b, and af-south-1-los-1a.

To learn more, visit the Internet Monitor user guide documentation.

Read more


Amazon CloudFront now supports Anycast Static IPs

Amazon CloudFront introduces Anycast Static IPs, providing customers with a dedicated list of IP addresses for connecting to all CloudFront edge locations worldwide.

Typically, CloudFront uses rotating IP addresses to serve traffic. Customers implementing Anycast Static IPs will receive a dedicated list of static IP addresses for their workloads. CloudFront Anycast Static IPs enables customers to provide a dedicated list of IP addresses to partners and their customers for enhancing security and simplifying network management across various use cases. For example, a common use case is allow-listing the static IP addresses in network firewalls.
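
A minimal boto3 sketch of requesting an Anycast static IP list, assuming the CreateAnycastIpList operation introduced with this launch; the name and IP count are placeholder values.

```python
import boto3

cloudfront = boto3.client("cloudfront")

# Request a dedicated Anycast static IP list. The returned addresses can
# be allow-listed in network firewalls and then associated with a
# distribution. Name and IpCount are placeholders.
response = cloudfront.create_anycast_ip_list(
    Name="my-anycast-ips",
    IpCount=21,
)
print(response["AnycastIpList"]["Id"])
```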

CloudFront supports Anycast Static IPs from all edge locations, excluding the Amazon Web Services China (Beijing) Region, operated by Sinnet, and the Amazon Web Services China (Ningxia) Region, operated by NWCD. CloudFormation support will be coming soon. Learn more about Anycast Static IPs here, and refer to the Amazon CloudFront Developer Guide for more information. For pricing, please see CloudFront Pricing.

Read more


Amazon Application Recovery Controller zonal shift and zonal autoshift now support Amazon EKS in the GovCloud (US) Regions

Amazon Application Recovery Controller (ARC) zonal shift and zonal autoshift have expanded their capabilities and now support Amazon Elastic Kubernetes Service (Amazon EKS) in the GovCloud (US) Regions. ARC zonal shift helps customers quickly recover an unhealthy application in an Availability Zone (AZ), and reduce the duration and severity of impact to the application due to events such as power outages and hardware or software failures. ARC zonal autoshift safely and automatically shifts an application’s traffic away from an AZ when AWS identifies a potential failure affecting that AZ.

Amazon EKS customers can now shift traffic away from an AZ in the event of a failure. Zonal shift works with Amazon EKS by shifting in-cluster traffic to healthy AZs and ensuring Pods aren’t scheduled in the impaired AZ. You can enable EKS clusters for zonal shift using the EKS console or API.

There is no additional charge for using zonal shift or zonal autoshift. Amazon EKS support for zonal shift is now available in all commercial AWS Regions and the AWS GovCloud (US) Regions. To get started, read the documentation.
 

Read more


Introducing Amazon Route 53 Resolver DNS Firewall Advanced

Today, AWS announced Amazon Route 53 Resolver DNS Firewall Advanced, a new set of capabilities on Route 53 Resolver DNS Firewall that allow you to monitor and block suspicious DNS traffic associated with advanced DNS threats, such as DNS tunneling and Domain Generation Algorithms (DGAs), which are designed to avoid detection by threat intelligence feeds or are difficult for such feeds to track and block in time.

Today, Route 53 Resolver DNS Firewall helps you block DNS queries made for domains identified as low-reputation or suspected to be malicious, and allow queries for trusted domains. With DNS Firewall Advanced, you can now enforce additional protections that monitor and block your DNS traffic in real time based on anomalies identified in the domain names being queried from your VPCs. To get started, you can configure one or more DNS Firewall Advanced rules, specifying the type of threat (DGA or DNS tunneling) to be inspected. You can add the rules to a DNS Firewall rule group, and enforce them on your VPCs by associating the rule group with each desired VPC directly or by using AWS Firewall Manager, AWS Resource Access Manager (RAM), AWS CloudFormation, or Route 53 Profiles.
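
A minimal boto3 sketch of adding an Advanced rule to an existing rule group. The DnsThreatProtection and ConfidenceThreshold parameter names are our assumptions based on the announcement, and the rule group ID is a placeholder.

```python
import boto3
import uuid

r53r = boto3.client("route53resolver")

# Add a DNS tunneling detection rule to an existing DNS Firewall rule
# group. Advanced-rule parameter names are assumptions; IDs are placeholders.
r53r.create_firewall_rule(
    CreatorRequestId=str(uuid.uuid4()),
    FirewallRuleGroupId="rslvr-frg-0123456789abcdef0",
    Name="block-dns-tunneling",
    Priority=100,
    Action="BLOCK",
    BlockResponse="NODATA",
    DnsThreatProtection="DNS_TUNNELING",
    ConfidenceThreshold="HIGH",
)
```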

Route 53 Resolver DNS Firewall Advanced is available in all AWS Regions, including the AWS GovCloud (US) Regions. To learn more about the new capabilities and the pricing, visit the Route 53 Resolver DNS Firewall webpage and the Route 53 pricing page. To get started, visit the Route 53 documentation.

Read more


Configure Route 53 CIDR block rules based on Internet Monitor suggestions

With Amazon CloudWatch Internet Monitor’s new traffic optimization suggestions feature, you can configure your Amazon Route 53 CIDR blocks to map your application’s client users to an optimal AWS Region based on network behavior.

Internet Monitor now provides actionable suggestions to help you optimize your Route 53 IP-based routing configurations. By leveraging the new traffic insights for your application, you can easily identify the optimal AWS Regions for routing your end user traffic, and then configure your Route 53 IP-based routing based on these recommendations.

Internet Monitor collects performance data and measures latency for your client subnets behind each DNS resolver. This enables Internet Monitor to recommend the AWS Region that will provide the lowest latency for your users, based on their locations, so that you can fine-tune your DNS routing to provide the best performance for users.

To learn more, visit the CloudWatch Internet Monitor user guide documentation.

Read more


networking-and-content-delivery

AWS customers can now use VPC endpoints, powered by AWS PrivateLink, to privately and securely access VPC resources. These resources, such as databases or clusters, can be in your VPC or on-premises network, need not be load balanced, and can be shared with other teams in your organization or with external independent software vendor (ISV) partners.

AWS PrivateLink is a highly available and scalable technology that enables your VPCs to make private, unidirectional connections to VPC endpoint services, including supported AWS services and AWS Marketplace services, and now to VPC resources. Prior to this launch, customers could only access or share services that use a Network Load Balancer or Gateway Load Balancer. Now, customers can share any VPC resource using AWS Resource Access Manager (AWS RAM). This resource can be an AWS-native resource such as an RDS database, a domain name, or an IP address in another VPC or on-premises environment. Once shared, the intended users can access these resources privately using VPC endpoints. They can use a resource VPC endpoint to access one resource, or pool multiple resources in an Amazon VPC Lattice service network and access the service network using a service network VPC endpoint. There are standard charges for sharing and accessing VPC resources; please see the pricing pages for AWS PrivateLink and VPC Lattice.
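
A minimal consumer-side boto3 sketch, assuming the "Resource" endpoint type and ResourceConfigurationArn parameter introduced with this launch; all identifiers are placeholders.

```python
import boto3

ec2 = boto3.client("ec2")

# Consumer side: create a resource VPC endpoint that targets a resource
# configuration shared through AWS RAM. IDs and the ARN are placeholders.
ec2.create_vpc_endpoint(
    VpcEndpointType="Resource",
    VpcId="vpc-0123456789abcdef0",
    SubnetIds=["subnet-0123456789abcdef0"],
    SecurityGroupIds=["sg-0123456789abcdef0"],
    ResourceConfigurationArn=(
        "arn:aws:vpc-lattice:us-east-1:123456789012:"
        "resourceconfiguration/rcfg-0123456789abcdef0"
    ),
)
```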

This capability is available in the following AWS Regions: US East (N. Virginia), US East (Ohio), US West (N. California), US West (Oregon), Africa (Cape Town), Asia Pacific (Hong Kong), Asia Pacific (Mumbai), Asia Pacific (Osaka), Asia Pacific (Seoul), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Canada (Central), Europe (Frankfurt), Europe (Ireland), Europe (London), Europe (Milan), Europe (Paris), Europe (Stockholm), Middle East (Bahrain), and South America (Sao Paulo).

To learn more about this capability and get started, please read our launch blog or refer to the AWS PrivateLink documentation.

Read more


AWS Network Firewall expands the list of supported protocols and keywords in firewall rules

Today, we are excited to announce support for new protocols in AWS Network Firewall so you can protect your Amazon VPCs using application-specific inspection rules. With this launch, AWS Network Firewall will detect protocols like HTTP2, QUIC, and PostgreSQL so you can apply firewall inspection rules to these protocols. You can also use new rule keywords in TLS, SNMP, DHCP, and Kerberos rules to apply granular security controls to your stateful inspection rules.

AWS Network Firewall is a managed firewall service that makes it easy to deploy essential network protections for all your Amazon VPCs. Its flexible rules engine lets you define firewall rules that give you fine-grained control over network traffic. You can also enable AWS Managed Rules for intrusion detection and prevention signatures that protect against threats such as botnets, scanners, web attacks, phishing, and emerging events.
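
As a sketch of the new protocol keywords in practice, the following boto3 call creates a stateful rule group from Suricata-format rules that drop QUIC and alert on HTTP/2; the rule group name, capacity, and rule content are illustrative only, not a production policy.

```python
import boto3

nfw = boto3.client("network-firewall")

# Suricata-format rules exercising the newly supported protocol keywords.
rules = """
drop quic any any -> any any (msg:"Block QUIC"; sid:1000001; rev:1;)
alert http2 any any -> any any (msg:"HTTP/2 observed"; sid:1000002; rev:1;)
"""

# Create a stateful rule group holding these rules. Name and capacity are
# placeholder values.
nfw.create_rule_group(
    RuleGroupName="protocol-controls",
    Type="STATEFUL",
    Capacity=100,
    RuleGroup={"RulesSource": {"RulesString": rules}},
)
```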

You can create AWS Network Firewall rules using the Amazon VPC console, the AWS CLI, or the Network Firewall API. To see which Regions AWS Network Firewall is available in, visit the AWS Region Table. For more information, please see the AWS Network Firewall product page and the service documentation.
 

Read more


Amazon Application Recovery Controller zonal shift and zonal autoshift support Application Load Balancers

Amazon Application Recovery Controller (ARC) zonal shift and zonal autoshift have expanded their capabilities and now support Application Load Balancers (ALB) with cross-zone configuration enabled. ARC zonal shift helps you quickly recover an unhealthy application in an Availability Zone (AZ), and reduce the duration and severity of impact to the application due to events such as power outages and hardware or software failures. ARC zonal autoshift safely and automatically shifts your application’s traffic away from an AZ when AWS identifies a potential failure affecting that AZ.

All ALB customers with cross-zone enabled load balancers can now shift traffic away from an AZ in the event of a failure. Zonal shift works with ALB by blocking all traffic to targets in the impaired AZ and removing the zonal IP from DNS. You need to first enable your ALBs for zonal shift using the ALB console or API, and then trigger a zonal shift or enable autoshift via the ARC console or API. Read this launch blog to see how zonal shift can be used with ALB.

Zonal shift and zonal autoshift support for ALB with cross-zone configuration enabled is now available in all commercial AWS Regions and the AWS GovCloud (US) Regions.

There is no additional charge for using zonal shift or zonal autoshift. To get started, visit the product page or read the documentation.

Read more


AWS announces Media Quality-Aware Resiliency for live streaming

Starting today, you can enable Media Quality-Aware Resiliency (MQAR), an integrated capability between Amazon CloudFront and AWS Media Services that provides dynamic, cross-region origin selection and failover based on a dynamically generated video quality score. Built for customers that need always-on ‘eyes-on-glass’ to deliver live events and 24/7 programming channels, MQAR automatically switches between regions in seconds to recover from video quality degradation in one of the regions. This is designed to help deliver a high quality of experience to viewers.

Previously, you could use a CloudFront origin group to fail over between two AWS Elemental MediaPackage origins in different AWS Regions based on HTTP error codes. Now with MQAR, your live event streaming workflow has the resiliency to withstand video quality issues including black frames, frozen or dropped frames, and repeated frames. AWS Elemental MediaLive analyzes the video input delivered from the source and dynamically generates a quality score reflecting perceived changes in video quality. Subsequently, your CloudFront distribution continuously selects the MediaPackage origin that reports the highest quality score. You can create CloudWatch alerts to be notified of quality issues using the provided metrics for quality indicators.

To get started with MQAR, deploy a cross-region channel delivery using AWS Media Services and configure CloudFront to use MQAR in the origin group. CloudFormation support will be coming soon. There is no additional cost for enabling MQAR, standard pricing applies for CloudFront and AWS Media Services. To learn more about MQAR, refer to the launch blog and the CloudFront Developer Guide.

Read more


Amazon CloudFront announces origin modifications using CloudFront Functions

Amazon CloudFront now supports origin modification within CloudFront Functions, enabling you to conditionally change or update origin servers on each request. You can now write custom logic in CloudFront Functions to overwrite origin properties, use another origin in your CloudFront distribution, or forward requests to any public HTTP endpoint.

Origin modification allows you to create custom routing policies for how traffic should be forwarded to your application servers on cache misses. For example, you can use origin modification to determine the geographic location of a viewer and then forward the request, on cache misses, to the closest AWS Region running your application. This ensures the lowest possible latency for your application. Previously, you had to use AWS Lambda@Edge to modify origins, but now this same capability is available in CloudFront Functions with better performance and lower costs. Origin modification supports updating all existing origin capabilities such as setting custom headers, adjusting timeouts, setting Origin Shield, or changing the primary origin in origin groups.

Origin modification is now available within CloudFront Functions at no additional charge. For more information, see the CloudFront Developer Guide. For examples of how to use origin modification, see our GitHub examples repository.

Read more


Amazon API Gateway now supports Custom Domain Name for private REST APIs

Amazon API Gateway (APIGW) now gives you the ability to manage your private REST APIs using custom, user-friendly private DNS names like private.example.com, simplifying API discovery. This feature enhances your security posture by continuing to encrypt your private API traffic with Transport Layer Security (TLS), while providing full control over managing the lifecycle of the TLS certificate associated with your domain.

API providers can get started with this feature in four simple steps using the API Gateway console or APIs. First, create a private custom domain name. Second, configure an AWS Certificate Manager (ACM) provided or imported certificate for the domain. Third, map one or more private APIs to the domain using base path mappings. Fourth, control invocations of the domain using resource policies. API providers can optionally share the domain across accounts using AWS Resource Access Manager (RAM) to give consumers the ability to access APIs from different accounts. Once a domain is shared using RAM, a consumer can use VPC endpoints to invoke multiple private custom domains across accounts.
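
A minimal boto3 sketch of the first and third steps, assuming the PRIVATE endpoint type introduced with this launch; the parameter choices, certificate ARN, API ID, and domain are placeholders and assumptions, and the resource policy step is omitted for brevity.

```python
import boto3

apigw = boto3.client("apigateway")

# Step 1: create a private custom domain name with an ACM certificate.
# The PRIVATE endpoint type reflects this launch; values are placeholders.
apigw.create_domain_name(
    domainName="private.example.com",
    certificateArn=(
        "arn:aws:acm:us-east-1:123456789012:"
        "certificate/0f1a2b3c-4d5e-6f70-8192-a3b4c5d6e7f8"
    ),
    endpointConfiguration={"types": ["PRIVATE"]},
)

# Step 3: map a private REST API stage onto the domain under a base path.
apigw.create_base_path_mapping(
    domainName="private.example.com",
    basePath="orders",
    restApiId="a1b2c3d4e5",
    stage="prod",
)
```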

Custom domain name for private REST APIs is now available on API Gateway in all AWS Regions, including the AWS GovCloud (US) Regions. Please visit the API Gateway documentation and AWS blog post to learn more.
 

Read more


Amazon CloudFront now supports additional log formats and destinations for access logs

Amazon CloudFront announces enhancements to its standard access logging capabilities, providing customers with new log configuration and delivery options. Customers can now deliver CloudFront access logs directly to two new destinations: Amazon CloudWatch Logs and Amazon Data Firehose. Customers can select from an expanded list of log output formats, including JSON and Apache Parquet (for logs delivered to S3). Additionally, they can directly enable automatic partitioning of logs delivered to S3, select specific log fields, and set the order in which they are included in the logs.

Until today, customers had to write custom logic to partition logs, convert log formats, or deliver logs to CloudWatch Logs or Data Firehose. The new logging capabilities provide native log configurations, eliminating the need for custom log processing. For example, customers can now directly enable features like Apache Parquet format for CloudFront logs delivered to S3 to improve query performance when using services like Amazon Athena and AWS Glue.
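
As a sketch, the CloudWatch Logs vended-log delivery APIs can wire a distribution's access logs to S3 in Parquet. The delivery-API calls below exist today for vended logs generally; the ACCESS_LOGS log type string and the ARNs are our assumptions and placeholders for the CloudFront case.

```python
import boto3

logs = boto3.client("logs", region_name="us-east-1")

# Register the distribution as a log delivery source. The logType value
# is an assumption; ARNs are placeholders.
logs.put_delivery_source(
    name="cf-access-logs",
    resourceArn="arn:aws:cloudfront::123456789012:distribution/E1234567890ABC",
    logType="ACCESS_LOGS",
)

# Define an S3 destination with Parquet output.
destination = logs.put_delivery_destination(
    name="cf-logs-to-s3",
    outputFormat="parquet",
    deliveryDestinationConfiguration={
        "destinationResourceArn": "arn:aws:s3:::my-log-bucket"
    },
)

# Connect the source to the destination.
logs.create_delivery(
    deliverySourceName="cf-access-logs",
    deliveryDestinationArn=destination["deliveryDestination"]["arn"],
)
```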

Additionally, customers enabling access log delivery to CloudWatch Logs will receive 750 bytes of logs free for each CloudFront request. Standard access log delivery to Amazon S3 remains free. Please refer to the 'Additional Features' section of the CloudFront pricing page for more details.

Customers can now enable CloudFront standard logs to S3, CloudWatch Logs and Data Firehose through the CloudFront console or APIs. CloudFormation support will be coming soon. For detailed information about the new access log features, please refer to the Amazon CloudFront Developer Guide.

Read more


Amazon CloudFront now supports gRPC delivery

Amazon CloudFront now supports delivery for gRPC applications. gRPC is a modern, open-source remote procedure call (RPC) framework that allows bidirectional communication between a client and a server over HTTP/2 connections. Applications built with gRPC benefit from improved latency using efficient bidirectional streaming and a binary message format, called Protocol Buffers, that produces smaller payloads than traditional formats like the JSON used with RESTful APIs.

gRPC reduces communication latency for applications that require continuous client-server interactions for a responsive user experience. For example, a ride-sharing application can use a gRPC service to automatically update the location of the requested vehicles on the user's device without the user having to request updates each time. gRPC addresses some of the latency challenges associated with using REST APIs for bidirectional communication. With REST APIs, clients establish a connection to the server, make a request, receive a response, and then terminate the connection, which introduces extra latency on each request. With gRPC, the client and server can send multiple messages independently and concurrently using a single connection. Using CloudFront to deliver gRPC applications, customers receive the full advantages of gRPC, plus CloudFront's worldwide reach, speed, and security.
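
A minimal boto3 sketch of enabling gRPC on an existing distribution's default cache behavior. gRPC requires HTTP/2 toward viewers; the GrpcConfig field name is our assumption based on the launch, and the distribution ID is a placeholder.

```python
import boto3

cloudfront = boto3.client("cloudfront")

dist_id = "E1234567890ABC"  # placeholder

# Fetch the current configuration and its ETag, required for updates.
current = cloudfront.get_distribution_config(Id=dist_id)
config = current["DistributionConfig"]

# Ensure HTTP/2 is offered to viewers and enable gRPC on the default
# cache behavior (field name is an assumption from the launch).
config["HttpVersion"] = "http2and3"
config["DefaultCacheBehavior"]["GrpcConfig"] = {"Enabled": True}

cloudfront.update_distribution(
    Id=dist_id,
    IfMatch=current["ETag"],
    DistributionConfig=config,
)
```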

CloudFront supports gRPC from all edge locations. This excludes Amazon Web Services China (Beijing) region, operated by Sinnet, and the Amazon Web Services China (Ningxia) region, operated by NWCD. Requests and data transfer fees apply to this feature. For further details, visit the CloudFront pricing page and the Developer Guide.
 

Read more


Load Balancer Capacity Unit Reservation for Application and Network Load Balancers

Application Load Balancer (ALB) and Network Load Balancer (NLB) now support Load Balancer Capacity Unit (LCU) Reservation, which allows you to proactively set a minimum capacity for your load balancer, complementing its existing ability to auto-scale based on your traffic pattern.

With this feature, you can prepare for anticipated traffic surges by reserving a guaranteed minimum capacity in advance, providing customers increased scale and availability during high-demand events. LCU Reservation is ideal for scenarios such as event ticket sales, new product launches, or release of popular content. When using this feature, you pay only for the reserved LCUs and any additional usage above the reservation. You can easily configure this feature through the ELB console or API.
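
A minimal boto3 sketch of reserving capacity ahead of a surge, assuming the ModifyCapacityReservation API shape implied by the launch description; the ARN and capacity value are placeholders.

```python
import boto3

elbv2 = boto3.client("elbv2")

# Reserve a minimum of 1,000 LCUs ahead of an anticipated traffic surge.
# The ARN and capacity value are placeholders.
elbv2.modify_capacity_reservation(
    LoadBalancerArn=(
        "arn:aws:elasticloadbalancing:us-east-1:123456789012:"
        "loadbalancer/app/my-alb/0123456789abcdef"
    ),
    MinimumLoadBalancerCapacity={"CapacityUnits": 1000},
)
```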

The feature is available for ALB in all commercial AWS Regions and the AWS GovCloud (US) Regions, and for NLB in US East (N. Virginia), US East (Ohio), US West (Oregon), Asia Pacific (Hong Kong), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Europe (Frankfurt), Europe (Ireland), and Europe (Stockholm). To learn more, please refer to the ALB Documentation and NLB Documentation.

Read more


Amazon CloudFront announces VPC origins

Amazon CloudFront announces Virtual Private Cloud (VPC) origins, a new feature that allows customers to use CloudFront to deliver content from applications hosted in VPC private subnets. With VPC origins, customers can have their Application Load Balancers (ALB), Network Load Balancers (NLB), and EC2 instances in a private subnet that is accessible only through their CloudFront distributions. This makes it easy for customers to secure their web applications, allowing them to focus on growing their businesses while improving security and maintaining high performance and global scalability with CloudFront.

AWS customers use CloudFront to deliver highly performant and globally scalable applications. Customers serving content from Amazon S3, AWS Elemental Services, and Lambda Function URLs can use Origin Access Control as a managed solution to secure their origins. For origins in VPCs, however, customers had to keep their origins in public subnets and use Access Control Lists and other mechanisms to restrict access to them. Customers had to spend ongoing effort implementing and maintaining these solutions, leading to undifferentiated work. VPC origins streamline security management and reduce operational complexity, making it easy to use CloudFront as the single front door for applications.
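
A minimal boto3 sketch of creating a VPC origin that targets an internal ALB, assuming the CreateVpcOrigin operation and field names introduced with this launch; the ARN and name are placeholders. The resulting origin can then be referenced from a distribution in place of a public endpoint.

```python
import boto3

cloudfront = boto3.client("cloudfront")

# Create a VPC origin pointing at an internal ALB in a private subnet.
# Field names follow the launch description; the ARN is a placeholder.
response = cloudfront.create_vpc_origin(
    VpcOriginEndpointConfig={
        "Name": "private-alb-origin",
        "Arn": (
            "arn:aws:elasticloadbalancing:us-east-1:123456789012:"
            "loadbalancer/app/internal-alb/0123456789abcdef"
        ),
        "HTTPPort": 80,
        "HTTPSPort": 443,
        "OriginProtocolPolicy": "https-only",
    }
)
print(response["VpcOrigin"]["Id"])
```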

VPC origins are available in AWS Commercial Regions only, and the full list of supported AWS Regions is available here. There is no additional cost for using VPC origins with CloudFront. CloudFormation support will be coming soon. To learn more, visit CloudFront VPC origins.

Read more


Amazon VPC IP Address Manager is now available in Asia Pacific (Malaysia) Region

Amazon Virtual Private Cloud IP Address Manager (Amazon VPC IPAM), which makes it easier for you to plan, track, and monitor IP addresses for your AWS workloads, is now available in the Asia Pacific (Malaysia) Region.

Amazon VPC IPAM allows you to easily organize your IP addresses based on your routing and security needs, and set simple business rules to govern IP address assignments. Using VPC IPAM, you can automate IP address assignment to Amazon VPCs and subnets, eliminating the need for spreadsheet-based or homegrown IP address planning applications, which can be hard to maintain and time-consuming. VPC IPAM automatically tracks critical IP address information, eliminating the need to manually track or do bookkeeping for IP addresses. VPC IPAM retains your IP address monitoring data for up to three years, which you can use for retrospective analysis and audits of your network security and routing policies.

With this Region expansion, Amazon VPC IPAM is available in all AWS Regions, including the AWS GovCloud (US) Regions and China Regions.

To learn more about IPAM, view the IPAM documentation. For details on pricing, refer to the IPAM tab on the Amazon VPC Pricing Page.

Read more


AWS announces Block Public Access for Amazon Virtual Private Cloud

Today, AWS announced Virtual Private Cloud (VPC) Block Public Access (BPA), a new centralized declarative control that enables network and security administrators to authoritatively block Internet traffic for their VPCs. VPC BPA supersedes any other setting and ensures your VPC resources are protected from unfettered Internet access in compliance with your organization's security and governance policies.

Amazon VPC allows customers to launch AWS resources in a logically isolated virtual network. Oftentimes, customers have thousands of AWS accounts and VPCs that are owned by multiple business units or application developer teams. Central administrators have the critical responsibility of ensuring that resources in their VPCs are accessible to the public Internet only in a highly controlled fashion. VPC BPA offers a single declarative control that allows admins to easily block Internet access to VPCs via the Internet Gateway or the Egress-only Internet Gateway, and ensures that there is no unintended public exposure of their AWS resources regardless of their routing and security configuration. Admins can apply BPA across all or select VPCs in their account, block bi-directional or ingress-only Internet connectivity, and exclude select subnets for resources that need Internet access. VPC BPA is integrated with AWS Network Access Analyzer and VPC Flow Logs to support impact analysis, provide advanced visibility, and help customers meet audit and compliance requirements.
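
A minimal boto3 sketch of turning BPA on for an account and Region and carving out one exclusion, assuming the EC2 BPA operations introduced with this launch; the subnet ID is a placeholder.

```python
import boto3

ec2 = boto3.client("ec2")

# Block bi-directional internet access for all VPCs in this account and
# Region.
ec2.modify_vpc_block_public_access_options(
    InternetGatewayBlockMode="block-bidirectional"
)

# Exclude one subnet that legitimately needs internet access.
# The subnet ID is a placeholder.
ec2.create_vpc_block_public_access_exclusion(
    SubnetId="subnet-0123456789abcdef0",
    InternetGatewayExclusionMode="allow-bidirectional",
)
```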

VPC BPA is available in all AWS Regions where Amazon VPC is offered. There is no additional charge for using this feature. For additional information, visit the Amazon VPC documentation and blog post.
 

Read more


Amazon VPC Lattice now supports Amazon Elastic Container Service (Amazon ECS)

Amazon VPC Lattice now provides native integration with Amazon ECS, Amazon's fully managed container orchestration service that makes it easy for you to deploy, manage, and scale containerized applications. This launch enables VPC Lattice to offer comprehensive support across all major AWS compute services, including Amazon Elastic Compute Cloud (Amazon EC2), Amazon Elastic Kubernetes Service (Amazon EKS), AWS Lambda, Amazon ECS, and AWS Fargate. VPC Lattice is a managed application networking service that simplifies the process of connecting, securing, and monitoring applications across AWS compute services, allowing developers to focus on building applications that matter to their business while reducing time and resources spent on network setup and maintenance.

With native ECS integration, you can now directly associate your ECS services with VPC Lattice target groups, eliminating the need for an intermediate Application Load Balancer (ALB). This streamlined integration reduces cost, operational overhead, and complexity, while enabling you to leverage the complete feature sets of both ECS and VPC Lattice. Organizations with diverse compute infrastructure, such as a mix of Amazon EC2, Amazon EKS, AWS Lambda, and Amazon ECS workloads, can benefit from this launch by unifying service-to-service connectivity, security, and observability across all compute platforms.
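
A minimal boto3 sketch of the direct association, assuming the vpcLatticeConfigurations parameter introduced with this launch; the cluster, task definition, role ARN, target group ARN, and port name are placeholders.

```python
import boto3

ecs = boto3.client("ecs")

# Create an ECS service whose tasks register directly with a VPC Lattice
# target group, with no intermediate load balancer. The parameter shape
# is an assumption based on the launch; identifiers are placeholders.
ecs.create_service(
    cluster="my-cluster",
    serviceName="checkout",
    taskDefinition="checkout:1",
    desiredCount=2,
    vpcLatticeConfigurations=[
        {
            "roleArn": "arn:aws:iam::123456789012:role/ecsInfrastructureRole",
            "targetGroupArn": (
                "arn:aws:vpc-lattice:us-east-1:123456789012:"
                "targetgroup/tg-0123456789abcdef0"
            ),
            "portName": "http",
        }
    ],
)
```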

This new feature is available in all AWS Regions where Amazon VPC Lattice is available.

To get started, see the following resources:

Read more


AWS Application Load Balancer announces CloudFront integration with built-in WAF

We are announcing a new one-click integration on Application Load Balancer (ALB) to attach an Amazon CloudFront distribution from the ALB console. This enables the easy use of CloudFront as a distributed single point of entry for your application that ingests, absorbs, and filters all inbound traffic before it reaches your ALB. The feature also enables an AWS WAF preconfigured web ACL with basic security protections as a first line of defense against common web threats. Overall, you can easily enable seamless protections from ALB, CloudFront, and AWS WAF with minimal configuration to secure your application.

Previously, to accelerate and secure your applications, you had to configure a CloudFront distribution with proper caching, request forwarding, and security protections that connected to your ALB on the right port and protocol. This required navigating between multiple services and manual configuration. With this new integration, the ALB console handles the creation and configuration of ALB, CloudFront, and AWS WAF. CloudFront honors your application’s Cache-Control headers to cache content like HTML, CSS/JavaScript, and images close to viewers, improving performance and reducing load on your application. With an additional checkbox, you can attach a security group configured to allow traffic from CloudFront IP addresses; if maintained as the only inbound rule, it ensures all requests are processed and inspected by CloudFront and WAF.

This new integration is available for both new and existing Application Load Balancers. Standard ALB, CloudFront, and AWS WAF pricing apply. The feature is available in all commercial AWS Regions. To learn more about this feature, visit the ALB and CloudFront sections in the AWS User Guide.

Read more


AWS Client VPN now supports the latest Ubuntu OS versions - 22.04 LTS and 24.04 LTS

AWS Client VPN now supports the Linux desktop client on Ubuntu versions 22.04 LTS and 24.04 LTS. You can now run the AWS-supplied VPN client on the latest Ubuntu OS versions. AWS Client VPN desktop clients are available free of charge, and can be downloaded here.

AWS Client VPN is a managed service that securely connects your remote workforce to AWS or on-premises networks. It supports desktop clients for macOS, Windows, and Ubuntu Linux. With this release, Client VPN now supports the latest Ubuntu client versions (22.04 LTS and 24.04 LTS). It already supports macOS versions 12.0, 13.0, and 14.0, and Windows 10 and 11.

This client version is available in all regions where AWS Client VPN is generally available with no additional cost.

To learn more about Client VPN:

Read more


Amazon CloudFront no longer charges for requests blocked by AWS WAF

Effective October 25, 2024, all CloudFront requests blocked by AWS WAF are free of charge. With this change, CloudFront customers will never incur request fees or data transfer charges for requests blocked by AWS WAF. This update requires no changes to your applications and applies to all CloudFront distributions using AWS WAF.

AWS WAF will continue billing for evaluating and blocking these requests. To learn more about using AWS WAF with CloudFront, visit Use AWS WAF protections in the CloudFront Developer Guide.

Read more


AWS announces new edge location in Qatar

Amazon Web Services (AWS) announces its expansion in Qatar with the launch of a new Amazon CloudFront edge location in Doha. The new AWS edge location brings the full suite of benefits provided by Amazon CloudFront, a secure, highly distributed, and scalable content delivery network (CDN) that delivers static and dynamic content, APIs, and live and on-demand video with low latency and high performance.

All Amazon CloudFront edge locations are protected against infrastructure-level DDoS threats with AWS Shield that uses always-on network flow monitoring and in-line mitigation to minimize application latency and downtime. You also have the ability to add additional layers of security for applications to protect them against common web exploits and bot attacks by enabling AWS Web Application Firewall (WAF).

Traffic delivered from this edge location is included within the Middle East region pricing. To learn more about AWS edge locations, see CloudFront edge locations.

Read more


nice-dcv

Announcing Idle Disconnect Timeout for Amazon WorkSpaces

Amazon WorkSpaces now supports Idle Disconnect Timeout for Windows WorkSpaces Personal with the Amazon DCV protocol. WorkSpaces administrators can now configure how long a user can be inactive while connected to a personal WorkSpace before being disconnected. This setting is already available for WorkSpaces Pools, but this launch adds end-user notifications that warn idle users their session will soon be disconnected, for both Personal and Pools.

Idle Disconnect Timeout helps Amazon WorkSpaces administrators better optimize costs and resources for their fleet. This feature helps ensure that customers who pay for their resources hourly are only paying for the WorkSpaces that are actually in use. The notifications also provide an improved overall user experience for both Personal and Pools end users by warning them about the pending disconnection and giving them a chance to continue or save their work beforehand.

Idle Disconnect Timeout is available at no additional cost for Windows WorkSpaces running DCV, in all the AWS Regions where WorkSpaces is currently available. To get started with Amazon WorkSpaces, see Getting Started with Amazon WorkSpaces.

To enable this feature, you must be using Windows WorkSpaces Personal DCV host agent version 2.1.0.1554 or later. Your users must be on WorkSpaces Windows or macOS client versions 5.24 or later, WorkSpaces Linux client version 2024.7 or later, or on Web Access. Refer to the client version release notes for more details. To learn more, visit Manage your Windows WorkSpaces in the Amazon WorkSpaces Administrator Guide.

Read more


partner-network

Buy with AWS accelerates solution discovery and procurement on AWS Partner websites

Today, AWS Marketplace announces Buy with AWS, a new feature that helps accelerate discovery and procurement on AWS Partners’ websites for products available in AWS Marketplace. Partners that sell or resell products in AWS Marketplace can now offer new experiences on their websites that are powered by AWS Marketplace. Customers can more quickly identify solutions from Partners that are available in AWS Marketplace and use their AWS accounts to access a streamlined purchasing experience.

Customers browsing on Partner websites can explore products that are “Available in AWS Marketplace” and request demos, access free trials, and request custom pricing. Customers can conveniently and securely make purchases by clicking the Buy with AWS button and completing transactions by logging in to their AWS accounts. All purchases made through Buy with AWS are transacted and managed within AWS Marketplace, allowing customers to take advantage of benefits such as consolidated AWS billing, centralized subscriptions management, and access to cost optimization tools.

For AWS Partners, Buy with AWS provides a new way to engage website visitors and accelerate the path-to-purchase for customers. By adding Buy with AWS buttons to Partner websites, Partners can give website visitors the ability to subscribe to free trials, make purchases, and access custom pricing using their AWS accounts. Partners can complete an optional integration and build new experiences on websites that allow customers to search curated product listings and filter products from the AWS Marketplace catalog.

Learn more about making purchases using Buy with AWS. Learn how AWS Partners can start selling using Buy with AWS.

Read more


Introducing the Amazon Security Lake Ready Specialization

We are excited to announce the new Amazon Security Lake Ready Specialization, which recognizes AWS Partners who have technically validated their software solutions to integrate with Amazon Security Lake and demonstrated successful customer deployments. AWS Partner Solutions Architects validate these solutions for sound architecture and proven customer success. Security Lake Ready software solutions can either contribute data to Security Lake or consume this data and provide analytics, delivering a cohesive security solution for AWS customers.

Amazon Security Lake automates data management tasks for customers, reducing costs and consolidating security data that customers own. It uses the Open Cybersecurity Schema Framework (OCSF), an open standard that helps customers address the challenges of data normalization and schema mapping across multiple log sources. With Amazon Security Lake Ready software solutions, customers now have a single place with verified partner solutions where security data can be stored in an open-source format, ready for identifying potential threats and vulnerabilities, and for security investigations and analytics.

Explore Amazon Security Lake Ready software solutions that can help your organization improve the protection of workloads, applications, and data by significantly reducing the operational overhead of managing security data. To learn more about how to become an Amazon Security Lake Ready Partner, visit the AWS Service Ready Program webpage.
 

Read more


Respond and recover more quickly with AWS Security Incident Response Partners

Today, AWS Security Incident Response launches a new AWS Specialization with approved partners from the AWS Partner Network (APN). AWS customers today rely on various third-party tools and services to support their internal security incident response capabilities. To better help both customers and partners, AWS introduced AWS Security Incident Response, a new service that helps customers prepare for, respond to, and recover from security events. Alongside approved AWS Partners, AWS Security Incident Response monitors, investigates, and escalates triaged security findings from Amazon GuardDuty and other threat detection tools through AWS Security Hub. Security Incident Response identifies and escalates only high-priority incidents. Partners and customers can also leverage collaboration and communication features to streamline coordinated incident response for faster reaction and recovery. For example, service members can create a predefined "Incident Response Team" that is automatically alerted whenever a security case is escalated. Alerted members, which include customers and partners, can then communicate and collaborate in a centralized format, with native feature integrations such as in-console messaging, video conferencing, and quick and secure data transfer.

Customers can access the service alongside AWS Partners that have been vetted and approved to use Security Incident Response. Learn more and explore AWS Security Incident Response Partners with specialized expertise to help you respond when it matters most.

Read more


Start collaborating on multi-partner opportunities with Partner Connections (Preview)

Today, AWS Partner Central announces the preview of Partner Connections, a new feature allowing AWS Partners to discover and connect with other Partners for collaboration on shared customer opportunities. With Partner Connections, Partners can co-sell joint solutions, accelerate deal progression, and expand their reach by teaming with other AWS Partners.

At the core of Partner Connections are two key capabilities: connections discovery and multi-partner opportunities. The connections discovery feature uses AI-powered recommendations to streamline Partner matchmaking, making it easier for Partners to find suitable collaborators and add them to their network. With multi-partner opportunities, Partners can create and manage joint customer opportunities in APN Customer Engagements (ACE). This integrated approach allows Partners to work seamlessly with AWS and other Partners on shared opportunities, reducing the operational overhead of managing multi-partner opportunities.

Partners can also create, update, and share multi-partner opportunities using the Partner Central API for Selling. This allows Partners to collaborate with other Partners and AWS on joint sales opportunities from their own customer relationship management (CRM) system.

Partner Connections (Preview) is available to all eligible AWS Partners who have signed the ACE Terms and Conditions and have linked their AWS account to their Partner Central account. To get started, log in to AWS Partner Central and review the ACE user guide for more information. To see how Partner Connections works, read the blog.

Read more


Introducing the AWS Digital Sovereignty Competency

Digital sovereignty has been a priority for AWS since its inception. AWS remains committed to offering customers the most advanced sovereignty controls and features in the cloud. With the increasing importance of digital sovereignty for public sector organizations and regulated industries, AWS is excited to announce the launch of the AWS Digital Sovereignty Competency.

The AWS Digital Sovereignty Competency curates and validates a community of AWS Partners with advanced sovereignty capabilities and solutions, including deep experience in helping customers address sovereignty and compliance requirements. These partners can assist customers with residency control, access control, resilience, survivability, and self-sufficiency.

Through this competency, customers can search for and engage with trusted local and global AWS Partners that have technically validated experience in addressing customers’ sovereignty requirements. Many partners have built sovereign solutions that leverage AWS innovations and built-in controls and security features.

In addition to these offerings, AWS Digital Sovereignty Partners provide skills and knowledge of local compliance requirements and regulations, making it easier for customers to meet their digital sovereignty requirements while benefiting from the performance, agility, security, and scale of the AWS Cloud.

Read more


AWS Security Competency Update: New AI Security Category

Introducing a new AI Security category in the AWS Security Competency to help customers easily identify AWS Partners with deep experience in securing AI environments and defending AI workloads against advanced threats and attacks. Partners in this new category are validated for their capabilities in areas like prevention of sensitive data disclosure, prevention of injection attacks, security posture management, implementing responsible AI filtering, and more.

The rapid adoption of AI, and especially generative AI, is transforming how customers build applications, but it also introduces new security risks that require specialized expertise. Customers need solutions that can secure AI models, tools, datasets, and other deployment resources used in these applications. Unlock the power of AI while keeping your AI applications and data safe with validated partner solutions.

Learn more about the AWS Security competency and explore validated partners with customer success in the new AI Security category.
 

Read more


Deploy GROW with SAP on AWS from AWS Marketplace

GROW with SAP on AWS is now available for subscription from AWS Marketplace. As a complete offering of solutions, best practices, adoption acceleration services, community, and learning, GROW with SAP helps organizations of any size adopt cloud enterprise resource planning (ERP) with speed, predictability, and continuous innovation. GROW with SAP on AWS can be implemented in months, instead of the years typical of traditional on-premises ERP implementations.

By implementing GROW with SAP on AWS, you can simplify everyday work, grow your business, and secure your success. At the core of GROW with SAP is SAP S/4HANA Cloud, a full-featured SaaS ERP suite built on the learnings of SAP’s 50+ years of industry best practices. GROW with SAP allows your organization to gain end-to-end process visibility and control with integrated systems across HR, procurement, sales, finance, supply chain, and manufacturing. It also includes SAP Business AI-powered processes leveraging AWS to provide data-driven insights and recommendations. Customers can also innovate with generative AI using their SAP data through Amazon Bedrock models in the SAP generative AI hub. GROW with SAP on AWS takes advantage of AWS Graviton processors, which use up to 60% less energy than comparable cloud instances for the same performance.

GROW with SAP on AWS is initially available in the US East Region.

To subscribe to GROW with SAP on AWS, visit the AWS Marketplace listing. Or, to learn more, visit the GROW with SAP on AWS detail page.

Read more


New streamlined deployment experience for Databricks on AWS

Today, AWS introduces an enhanced version of SaaS Quick Launch for Databricks Data Intelligence Platform in AWS Marketplace, delivering a streamlined Databricks workspace deployment experience on AWS. Databricks is a unified data analytics platform that enables organizations to accelerate data-driven innovation. SaaS Quick Launch for Databricks automates installation and configuration steps, simplifying the process of launching Databricks workspaces on AWS, where data professionals manage notebooks, clusters, and data engineering jobs.

Previously, deploying Databricks on AWS required manual configuration and knowledge of AWS infrastructure provisioning tools. Now all users, including data engineers, data scientists, and business analysts, can quickly and easily deploy Databricks on AWS through AWS Marketplace in three guided steps. When subscribing to Databricks in AWS Marketplace, customers can use the new streamlined deployment experience to rapidly configure, deploy, and access their Databricks workspaces and accelerate their data analytics, machine learning, and data science initiatives on AWS. Through this simplified process, the necessary AWS resources are automatically provisioned and integrated with Databricks following AWS best practices for security and high availability.

This streamlined deployment experience in AWS Marketplace is currently available for all AWS Regions supported by Databricks.

To get started with the new streamlined deployment experience for Databricks, visit Databricks Data Intelligence Platform in AWS Marketplace.

Read more


Introducing the AWS Consumer Goods Competency

In the ever-evolving consumer goods industry, innovation and agility are paramount. AWS has launched the AWS Consumer Goods Competency to support digital transformation. This initiative connects businesses with top validated AWS Partners offering specialized industry solutions.

These partners provide expertise across six critical areas: product development, manufacturing, supply chain, marketing, unified commerce, and digital transformation. To earn the designation, partners must complete a rigorous technical validation process based on the AWS Well-Architected Framework, ensuring reliable, secure, and efficient cloud operations.

By collaborating with these validated partners, consumer goods companies can drive innovation, enhance customer experiences, and gain competitive market advantages. The AWS Competency Partner program is a comprehensive framework that identifies partners with exceptional technical expertise and proven customer success. This formal AWS Specialization recognizes partners' capabilities in advancing industry technology.

With this new AWS Competency, AWS reinforces its commitment to supporting digital transformation in the consumer goods sector. Businesses can now accelerate their innovation, streamline operations, and deliver exceptional customer experiences in the highly competitive market.

Read more


Colombian Sellers and Channel Partners now available in AWS Marketplace

AWS Marketplace now enables customers to discover and subscribe to software from Colombian Independent Software Vendors (ISVs) and Channel Partners. This expansion increases the breadth of software and data offerings, adding to the 20,000+ software listings and data products from 5,000+ sellers.

Starting today, AWS Marketplace customers around the world can directly procure software and data products from ISVs in Colombia, making it easier than ever to reach data-driven decisions and build operations in the cloud. In addition, AWS Marketplace customers can now purchase software through regional and local Channel Partners in Colombia, who offer knowledge of their business, localized support, and trusted expertise, through Channel Partner Private Offers (CPPO).

Software from Colombian ISVs such as Software Colombia, CARI AI, and Nuevosmedios is now available in AWS Marketplace. In addition, Channel Partners such as Ikusi, Axity Colombia, Netdata, and AndeanTrade are now able to sell software in AWS Marketplace through CPPO. ISVs and Channel Partners from Colombia join the ever-growing offerings in AWS Marketplace, and more products are added regularly.

AWS Marketplace is a curated digital catalog of third-party software that makes it easy for customers to find, buy, and deploy solutions that run on Amazon Web Services (AWS).

For more information on listing in AWS Marketplace, please visit the AWS Marketplace Seller Guide.
For more information on purchasing solutions through AWS Marketplace, please visit the AWS Marketplace Buyer Guide.

Read more


Announcing AWS Partner Assistant, a generative AI-powered virtual assistant for AWS Partners

AWS Partner Assistant, a generative AI–powered virtual assistant built on Amazon Q Business, is now available for Partners in AWS Partner Central and the AWS Marketplace Management Portal. Partner Assistant makes it easier for you to get quick answers to common questions—helping you boost productivity and accelerate your AWS Partner journey to unlock benefits faster.

Partner Assistant enables you to reduce the need for manual searches by generating real-time guidance and concise summaries from guides and documentation that are available specifically for AWS Partners. For example, you can ask Partner Assistant how to list a software as a service (SaaS) product in AWS Marketplace, for details about available funding programs for Partners, or how to obtain the Generative AI Competency. The assistant’s responses include links to resources available in Partner Central and AWS Docs for further details.

AWS Partner Assistant is available to all Partners who have linked their Partner Central and AWS accounts.

Get started using AWS Partner Assistant by logging in to AWS Partner Central or the AWS Marketplace Management Portal and accessing the chat from the bottom right of your screen. Learn more about becoming an AWS Partner.
 

Read more


Self-Service Know Your Customer (KYC) for AWS Marketplace Sellers

AWS Marketplace now offers a self-service Know Your Customer (KYC) feature for all sellers wishing to transact via the AWS Europe, Middle East, and Africa (EMEA) Marketplace Operator. The KYC verification process is required for sellers to receive disbursements via the AWS EMEA Marketplace Operator. This new self-service feature helps sellers complete the KYC process quickly and easily, unblocking their business growth in the EMEA region.

Completing KYC and onboarding to the EMEA Marketplace Operator allows sellers to provide a more localized experience for their customers. Customers will see consistent Value Added Tax (VAT) charges across all their AWS purchases. They can also pay using their local bank accounts through the Single Euro Payments Area (SEPA) for AWS Marketplace invoices. Additionally, customers will receive invoices for all their AWS services and Marketplace purchases from a single entity, AWS EMEA. This makes billing and procurement much simpler for customers in Europe, the Middle East, and Africa.

The new self-service KYC experience empowers sellers to complete verification independently, reducing onboarding time and eliminating the need to coordinate with the AWS Marketplace support team.

We invite all AWS Marketplace sellers to take advantage of this new feature to expand their reach in the EMEA region and provide an improved purchasing experience for their customers. To get started, please visit the AWS Marketplace Seller Guide.

Read more


AWS Marketplace introduces AI-powered product summaries and comparisons

AWS Marketplace now provides AI-powered product summaries and comparisons for popular software as a service (SaaS) products, helping you make faster and more informed software purchasing decisions. Use this feature to compare similar SaaS products across key evaluation criteria such as customer reviews, product popularity, features, and security credentials. Additionally, you can gain AI-summarized insights into key decision factors like ease of use, customer support, and cost effectiveness.

Sifting through thousands of options on the web to find software products that best fit your business needs can be challenging and time-consuming. The new product comparisons feature in AWS Marketplace simplifies this process for you. It leverages machine learning to recommend similar SaaS products for consideration. It then uses generative AI to summarize product information and customer reviews, highlight unique aspects of products, and help you understand key differences to identify the best product for your use cases. You can also customize the comparison sets and download comparison tables to share with colleagues.

The product comparisons feature is available for popular SaaS products in all commercial AWS Regions where AWS Marketplace is available.

Check out AI-generated product summaries in AWS Marketplace. Find the new experience on popular SaaS product pages such as Databricks Data Intelligence Platform and Trend Cloud One. To learn more about how the experience works, visit the AWS Marketplace Buyer Guide.

Read more


Announcing enhanced purchase order support for AWS Marketplace

Today, AWS Marketplace is extending transaction purchase order number support to products with pay-as-you-go pricing, including Amazon Bedrock subscriptions, software as a service (SaaS) contracts with consumption pricing, and annual AMI subscriptions. Additionally, you can update purchase order numbers post-subscription, prior to invoice creation, to ensure your invoices reflect the proper purchase order. This launch helps you allocate costs and makes it easier to process and pay invoices.

The purchase order feature in AWS Marketplace allows the purchase order (PO) number that you provide at the time of the transaction to appear on all invoices related to that purchase. Now, you can provide a purchase order at the time of purchase for most products available in AWS Marketplace, including products with pay-as-you-go pricing. You can add or update purchase orders post-subscription, prior to invoice generation, within the AWS Marketplace console. You can also provide more than one PO for products appearing on your monthly AWS Marketplace invoice and receive a unique invoice for each purchase order. Additionally, you can add a unique PO for each fixed charge and its associated AWS Marketplace monthly usage charges, either at the time of purchase or post-subscription in the AWS Marketplace console.

You can update purchase orders for existing subscriptions under manage subscriptions in the AWS Marketplace console. To enable transaction purchase orders for AWS Marketplace, sign in to the management account (for AWS Organizations) and enable the AWS Billing integration in the AWS Marketplace Console settings. To learn more, read the AWS Marketplace Buyer Guide.

Read more


AWS Marketplace announces improved offer and agreement management capabilities for sellers

AWS Marketplace now offers improved capabilities to help sellers manage agreements and create new offers more efficiently. Sellers can access an improved agreements navigation experience, export details to PDF, and clone past private offers in the AWS Marketplace Management Portal.

The new agreements experience makes it easier to find agreements for a specific offer or customer and take action based on the agreement's status: active, expiring, expired, replaced, or cancelled. This holistic view enables you to retrieve agreements faster, helping you prepare for customer engagements and identify renewal or expansion opportunities. To simplify sharing and offline collaboration, you can now export agreement details to PDF. Additionally, the new offer cloning capability enables you to replicate common offer configurations from past direct private offers, so you can quickly make adjustments for renewals and revisions to ongoing offers.

These features are available for all AWS Partners selling SaaS, Amazon Machine Images (AMI), containers, and professional services products in AWS Marketplace. To learn more, visit the AWS Marketplace Seller Guide, or access the AWS Marketplace Management Portal to try the new capabilities.

Read more


AWS Partner Network automates Foundational Technical Reviews using Amazon Bedrock

Today, AWS is announcing automation for the Foundational Technical Review (FTR) process using Amazon Bedrock. The new generative AI-driven automation process for the FTR optimizes the review timeline for AWS Partners, offering review decisions in minutes, accelerating a process that previously could take weeks. Gaining FTR approval allows Partners to fast-track their AWS Partner journey, unlocking access to AWS Partner Network (APN) programs and co-sell opportunities with AWS.

Partners seeking access to AWS funding programs, the AWS Competency Program to validate expertise, and the AWS ISV Accelerate Program for co-sell support must qualify their solutions by completing the FTR. With this launch, AWS has automated the FTR and enhanced the experience for Partners, with successful reviews being approved in minutes. Unsuccessful reviews will be forwarded for manual review, and an AWS expert will make contact within two weeks to remediate potential gaps. Partners will receive an email notification informing them of the review result, reducing wait time from weeks to minutes. Additionally, partners will be able to submit responses in several non-English languages, saving time for translation and improving the accuracy of their submissions. This generative AI-based automation accelerates the technical validation step, allowing Partners to spend more time on business initiatives.

AWS Partners can request the FTR for their solution in AWS Partner Central. To learn more about the FTR, sign in to AWS Partner Central and download the FTR Guide (software or service solution).
 

Read more


Enhanced account linking experience across AWS Marketplace and AWS Partner Central

Today, AWS announces an improved account linking experience that helps AWS Partners create and connect their AWS Marketplace accounts with AWS Partner Central and onboard associated users. Account linking allows Partners to seamlessly navigate between Partner Central and the Marketplace Management Portal using single sign-on (SSO), connect Partner Central solutions to AWS Marketplace listings, link private offers to opportunities to track deals from pipeline to customer offers, and access AWS Marketplace insights within the centralized AWS Partner Analytics Dashboard. Linking accounts also unlocks access to valuable AWS Partner Network (APN) program benefits such as ISV Accelerate and accelerated sales cycles.

The new account linking experience introduces three major improvements to streamline the self-guided linking workflow. First, it simplifies the process of associating your AWS account with AWS Marketplace by registering your legal business name. Second, it automates the creation and bulk assignment of Identity and Access Management (IAM) roles to AWS Partner Central users, eliminating the need for manual creation in the AWS IAM console. Third, it introduces three new AWS managed policies to simplify permission management for AWS Partner Central and Marketplace access. The new policies offer fine-grained access options, ranging from full Partner Central access to personalized access for co-sell or Marketplace offer management.

This new experience is available for all AWS Partners. To get started, navigate to the “Account Linking” feature on the AWS Partner Central homepage. To learn more, review the AWS Partner Central documentation.

Read more


New visualizations available in AWS Partner Central Analytics and Insights Dashboards

Amazon Web Services, Inc. (AWS) announces the launch of three new data visualizations in the Analytics and Insights dashboard experience directly accessible in AWS Partner Central, helping Partners maintain ACE eligibility and improve customer opportunity prioritization.

Prior to this launch, AWS Partners relied on AWS Sales teams to communicate ACE eligibility requirements and to understand which AWS integrations were in use. In addition, Partners manually assessed customer engagement levels to determine the likelihood of potential sales. Partners can now use the “Opportunity Summary data” table and the “At a glance” tab to gain actionable insights, meet requirements faster, and drive customer success.

The first insight is the AWS Marketplace solution engagement score, which indicates a customer's likelihood to purchase Partner solutions and helps partners identify and prioritize high-potential opportunities. The second insight provides Partners with visibility into the criteria that determine ACE eligibility for AWS to share customer opportunities with them. The third insight provides Partners with visibility into whether their organization has completed the AWS Partner CRM integration for ACE, eliminating duplicative efforts for regional partner teams.

To learn more about the AWS Marketplace Solution engagement score, ACE eligibility status, and CRM integration status for Analytics and Insights, log in to Partner Central and explore the Analytics and Insights User Guide.

Read more


Gain new insights into your sales pipeline

Today, Amazon Web Services, Inc. (AWS) announces new pipeline performance data visualizations in the Analytics and Insights Dashboard. Partners can now inspect the win rate of closed opportunities, assess top-performing segments, and identify required actions on open opportunities.

Drill-downs by customer region, segment, and industry are available for key metrics including open opportunity count, opportunities that require updates, and win rates. Additionally, AWS Specialization partners in the APN Customer Engagements (ACE) program get more insights with co-sell recommendation scores. The co-sell recommendation score assesses how well their solutions are positioned to meet customer needs. By combining top-performing benchmarks and co-sell recommendation scores, partners can see where they are best positioned for co-selling and delivering on AWS customer use cases.

To get started, log into your AWS Partner Central account and navigate to the Opportunities tab within the Analytics and Insights Dashboard. Here, you'll find new visuals for pipeline performance and co-sell recommendation scores.

To learn about all the new features the dashboard has to offer, log into AWS Partner Central and explore the Analytics and Insights User Guide!
 

Read more


Announcing business planning feature in AWS Partner Central

AWS Partner Central is launching a business planning feature to help AWS Partners create successful partnerships and accelerate co-sell with AWS.

Currently, Partners have multiple touchpoints, conversations, and emails with AWS Partner management and sales teams as part of business planning exercises. AWS is making this collaboration easier and more efficient by centralizing the business planning process and standardizing templates in Partner Central. This will provide a central mechanism to help track progress toward business goals with AWS.

Partners can create joint business plans with AWS that are tailor-made for their unique business needs. Partners can review and edit inputs, set goals, and track progress in a single experience. Comprehensive reporting provides year-to-date actual performance, current-year attainment, and year-over-year changes for selected business metrics, reducing manual effort for collecting data from various sources.

The business planning feature is available to AWS Partners who are actively engaged with AWS Partner management teams to create joint business plans. To get started, reach out to your AWS Partner contact to initiate a business plan. Once a draft plan is shared, log in to AWS Partner Central, navigate to “My company,” and click on “Business plan” to start collaborating.

Read more


AWS Partner CRM Connector Adds Partner Central API Support

Starting today, the AWS Partner CRM Connector further simplifies co-sell actions between Salesforce and AWS Partner Central through APN Customer Engagement (ACE) integration. Partners can now share and receive AWS opportunities faster through the Partner Central API, use multi-object mapping to simplify related field mapping and reduce redundant data between Salesforce and ACE Pipeline Manager, and receive submission updates via EventBridge, making it easier than ever to supercharge co-selling and sales motions.

These new capabilities enable partners to manage AWS co-sell opportunities with increased speed and flexibility. The Partner Central API accelerates information sharing, while EventBridge pushes real-time update notifications for key actions as they occur. Multi-object mapping adds another layer of efficiency, giving partners control over data flow by simplifying account look-ups and reducing repetitive entries across Salesforce fields and business workflows.
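
For teams wiring up these notifications, a minimal sketch of an EventBridge rule that forwards opportunity updates to a Lambda target might look like the following; the event source and detail-type strings are assumptions for illustration, the target ARN is hypothetical, and granting EventBridge permission to invoke the function is omitted.

    import json
    import boto3

    events = boto3.client("events")

    # Event source and detail-type are assumptions; check the ACE integration docs
    events.put_rule(
        Name="ace-opportunity-updates",
        EventPattern=json.dumps({
            "source": ["aws.partnercentral-selling"],
            "detail-type": ["Opportunity Updated"],
        }),
        State="ENABLED",
    )

    # Hypothetical Lambda target that syncs the update into Salesforce
    events.put_targets(
        Rule="ace-opportunity-updates",
        Targets=[{
            "Id": "crm-sync",
            "Arn": "arn:aws:lambda:us-east-1:111122223333:function:crm-sync",
        }],
    )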

This modular connector provides greater governance, visibility, and effectiveness in management of ACE opportunities and leads, and AWS Marketplace private offers and resale authorizations. It enables automation through sales process alignment, and accelerates adoption through the extension of capabilities to field sales teams.

The AWS Partner CRM Connector for Salesforce is available as an application to install at no cost from the Salesforce AppExchange.

To learn more, visit the AWS Partner Central documentation, and see the AWS Partner CRM Integration documentation for details on the CRM Connector.

Read more


AWS Partner Central now provides API for Selling with AWS

Today, AWS introduces the AWS Partner Central API for Selling, enabling AWS Partners to integrate their Customer Relationship Management (CRM) systems with AWS Partner Central. This API allows partners to streamline and scale their co-selling process by automating the creation and management of APN Customer Engagements (ACE) opportunities within their own CRM. This API provides improved efficiency, scale, and error handling compared to the existing Amazon S3-based CRM integration, and is available to all AWS Partners.

AWS Partner Central API for Selling enables partners to create, update, view, and assign opportunities, as well as accept invitations to engage on AWS referrals. Additionally, partners can retrieve a list of their solutions on AWS Partner Central and associate specific solutions, AWS products, or AWS Marketplace offers with opportunities as needed. Real-time notifications via Amazon EventBridge keep partners up to date on any changes to an opportunity. The API also integrates with AWS services, enabling partners to monitor co-selling via Amazon CloudWatch and audit activity with AWS CloudTrail. Partners can use this API in combination with the AWS Marketplace Catalog API to manage the entire opportunity-to-offer process directly within their CRM.
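
As a rough sketch of what the API looks like from Python, the snippet below lists opportunities with boto3; the client name follows the AWS SDK, while the "AWS" catalog value and the response field names are assumptions to verify against the API reference.

    import boto3

    # Client name per the AWS SDK; Region availability is described below
    client = boto3.client("partnercentral-selling", region_name="us-east-1")

    # "Catalog" is assumed to distinguish production ("AWS") from testing
    # ("Sandbox") -- confirm in the API reference
    response = client.list_opportunities(Catalog="AWS", MaxResults=20)

    for summary in response.get("OpportunitySummaries", []):
        print(summary.get("Id"), summary.get("LifeCycle", {}).get("Stage"))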

AWS Partner Central API for Selling is now available in the US East (N. Virginia) Region and is accessible through AWS SDKs in .NET, Python, Java, Go, and other programming languages. Partners can also use this API via the AWS Partner CRM Connector or one of our multiple integration partners.

Learn more on Automations for Partners. To get started, visit AWS Partner Central API documentation.

Read more


Announcing an improved self-guided experience for AWS Partner Central

AWS is improving the self-guided experience for AWS Partners by adding task categorization and grouping. The new experience helps partners prioritize the key actions needed to accelerate their journey from onboarding to AWS Partner Central to selling on AWS Marketplace.

This new experience makes it easier to quickly understand the benefits of a task, the time required, and the additional resources available to complete it. This helps Partners better triage, prioritize, and delegate tasks as needed. We are also introducing task categories: Account, Solution, and Program tasks. Account tasks help partners set up or link their AWS Marketplace accounts and onboard new Partner Central users. Program tasks recommend relevant programs, guide partners through onboarding, and prompt partners to complete any pending requirements to qualify for program benefits. Solution tasks allow partners to track the progress of their solution development across the build, market, sell, and grow stages of the Partner Profitability Framework as they complete their solution-based journey and list in AWS Marketplace.

The new Task experience is available to all AWS Partners globally by logging in to AWS Partner Central and accessing “My tasks” from the AWS Partner Central top navigation. Visit the AWS Partner Network site to learn more about becoming an AWS Partner.

Read more


quantum-technologies

Announcing the Quantum Embark advisory program for customers new to quantum computing

AWS announces Quantum Embark, a new program aimed at getting customers ready for quantum computing by providing an expert-led approach as they begin their quantum computing journey. With this program, customers can explore the value of quantum computing for their business, understand the pace of development of the technology, and prepare for its impact. Quantum Embark is designed to cut through the hype and focus on actionable outcomes.

Quantum computing has the potential to revolutionize industries by solving problems that are beyond the ability of even the most powerful computers. However, to get buy-in from internal stakeholders and establish a long-term quantum roadmap, customers need trustworthy guidance specific to their most important use cases. Quantum Embark is a program of advisory services consisting of three modules: (1) Use Case Discovery, which focuses on the most tangible opportunities; (2) Technical Enablement, where users get hands-on experience with quantum computing via Amazon Braket; and (3) Deep Dive, which deepens customers’ understanding of mapping quantum algorithms to target applications identified in the Use Case Discovery module. Upon completion, customers have a reusable runbook consisting of recommended tooling, a projected roadmap and documentation to engage leadership and line of business teams for target application areas.
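
To give a flavor of the hands-on work in the Technical Enablement module, here is a minimal Bell-state circuit using the Amazon Braket SDK's free local simulator (no AWS charges apply):

    from braket.circuits import Circuit
    from braket.devices import LocalSimulator

    # Prepare a Bell pair: Hadamard on qubit 0, then CNOT from qubit 0 to 1
    bell = Circuit().h(0).cnot(0, 1)

    # Run 1,000 shots on the local simulator; counts should split
    # roughly evenly between '00' and '11'
    result = LocalSimulator().run(bell, shots=1000).result()
    print(result.measurement_counts)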

With Quantum Embark, you only pay for the modules you choose with no long-term commitments. Check out our blog to learn how some customers are already getting value out of this program. Visit the Braket console or contact your AWS Account Team to get started.

Read more


security-identity-and-compliance

AWS Config now supports a service-linked recorder

AWS Config added support for a service-linked recorder, a new type of AWS Config recorder that is managed by an AWS service and can record configuration data on service-specific resources, such as the new Amazon CloudWatch telemetry configurations audit. By enabling the service-linked recorder in Amazon CloudWatch, you gain centralized visibility into critical AWS service telemetry configurations, such as Amazon VPC Flow Logs, Amazon EC2 Detailed Metrics, and AWS Lambda Traces.

With service-linked recorders, an AWS service can deploy and manage an AWS Config recorder on your behalf to discover resources and use the configuration data to provide differentiated features. For example, the Amazon CloudWatch managed service-linked recorder helps you identify monitoring gaps within specific critical resources in your organization, providing a centralized, single-pane view of telemetry configuration status. Service-linked recorders are immutable to ensure consistency, prevent configuration drift, and simplify the experience. Service-linked recorders operate independently of any existing AWS Config recorder, if one is enabled. This allows you to manage your own AWS Config recorder for your specific use cases while authorized AWS services manage the service-linked recorder for feature-specific requirements.
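
To see how a service-linked recorder appears alongside a customer-managed one, you can list your recorders with boto3; note that the servicePrincipal field in the sketch is an assumption about how service-linked recorders are surfaced.

    import boto3

    config = boto3.client("config")

    # Lists both customer-managed and service-linked recorders in the account;
    # a service-linked recorder is expected to carry an owning service
    # principal (field name is an assumption -- verify in the API reference)
    for recorder in config.describe_configuration_recorders()["ConfigurationRecorders"]:
        print(recorder["name"], recorder.get("servicePrincipal", "customer-managed"))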

Amazon CloudWatch managed service-linked recorder is now available in the US East (N. Virginia), US West (Oregon), US East (Ohio), Asia Pacific (Tokyo), Asia Pacific (Singapore), Asia Pacific (Sydney), Europe (Frankfurt), Europe (Ireland), and Europe (Stockholm) Regions. The AWS Config service-linked recorder specific to the Amazon CloudWatch telemetry configuration feature is available to customers at no additional cost.

To learn more, please refer to our documentation.
 

Read more


Amazon Bedrock Guardrails supports multimodal toxicity detection for image content (Preview)

Organizations are increasingly using applications with multimodal data to drive business value, improve decision-making, and enhance customer experiences. Amazon Bedrock Guardrails now supports multimodal toxicity detection for image content, enabling organizations to apply content filters to images. This new capability, now in public preview, removes the heavy lifting customers otherwise face in building their own safeguards for image data or spending cycles on manual evaluation that can be error-prone and tedious.

Bedrock Guardrails helps customers build and scale their generative AI applications responsibly for a wide range of use cases across industry verticals including healthcare, manufacturing, financial services, media and advertising, transportation, marketing, education, and much more. With this new capability, Amazon Bedrock Guardrails offers a comprehensive solution, enabling the detection and filtration of undesirable and potentially harmful image content while retaining safe and relevant visuals. Customers can now use content filters for both text and image data in a single solution with configurable thresholds to detect and filter undesirable content across categories such as hate, insults, sexual, and violence, and build generative AI applications based on their responsible AI policies.
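
A minimal sketch of applying a guardrail to mixed text-and-image input with the ApplyGuardrail API follows; the guardrail ID and input file are hypothetical, and the exact shape of the image content block is an assumption to check against the preview documentation.

    import boto3

    runtime = boto3.client("bedrock-runtime", region_name="us-east-1")

    with open("user-upload.png", "rb") as f:  # hypothetical input image
        image_bytes = f.read()

    response = runtime.apply_guardrail(
        guardrailIdentifier="gr-exampleid123",  # hypothetical guardrail ID
        guardrailVersion="DRAFT",
        source="INPUT",
        content=[
            {"text": {"text": "Describe this image."}},
            # Image block shape is an assumption from the preview docs
            {"image": {"format": "png", "source": {"bytes": image_bytes}}},
        ],
    )

    # "GUARDRAIL_INTERVENED" indicates the content filters blocked or masked content
    print(response["action"])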

This new capability in preview is available with all foundation models (FMs) on Amazon Bedrock that support images including fine-tuned FMs in 11 AWS regions globally: US East (N. Virginia), US East (Ohio), US West (Oregon), Europe (Ireland), Europe (Frankfurt), Europe (London), Asia Pacific (Seoul), Asia Pacific (Singapore), Asia Pacific (Tokyo), and Asia Pacific (Mumbai), and AWS GovCloud (US-West).

To learn more, visit the Amazon Bedrock Guardrails product page, read the News blog, and see the documentation.

Read more


Introducing the Amazon Security Lake Ready Specialization

We are excited to announce the new Amazon Security Lake Ready Specialization, which recognizes AWS Partners who have technically validated their software solutions to integrate with Amazon Security Lake and demonstrated successful customer deployments. These solutions have been technically validated by AWS Partner Solutions Architects for their sound architecture and proven customer success. Security Lake Ready software solutions can either contribute data to the Security Lake or consume this data and provide analytics, delivering a cohesive security solution for AWS customers.

Amazon Security Lake automates data management tasks for customers, reducing costs and consolidating security data that customers own. It uses the Open Cybersecurity Schema Framework (OCSF), an open standard that helps customers address the challenges of data normalization and schema mapping across multiple log sources. With Amazon Security Lake Ready software solutions, customers now have a single place with verified partner solutions where security data can be stored in an open standard format, ready for identifying potential threats and vulnerabilities, and for security investigations and analytics.

Explore Amazon Security Lake Ready software solutions that can help your organization improve the protection of workloads, applications, and data by significantly reducing the operational overhead of managing security data. To learn more about how to become an Amazon Security Lake Ready Partner, visit the AWS Service Ready Program webpage.
 

Read more


Respond and recover more quickly with AWS Security Incident Response Partners

Today, AWS Security Incident Response launches a new AWS Specialization with approved partners from the AWS Partner Network (APN). AWS customers today rely on a variety of third-party tools and services to support their internal security incident response capabilities. To better help both customers and partners, AWS introduced AWS Security Incident Response, a new service that helps customers prepare for, respond to, and recover from security events. Alongside approved AWS Partners, AWS Security Incident Response monitors, investigates, and escalates triaged security findings from Amazon GuardDuty and other threat detection tools through AWS Security Hub. Security Incident Response identifies and escalates only high-priority incidents. Partners and customers can also leverage collaboration and communication features to streamline coordinated incident response for faster reaction and recovery. For example, service members can create a predefined "Incident Response Team" that is automatically alerted whenever a security case is escalated. Alerted members, who include customers and partners, can then communicate and collaborate in a centralized format, with native feature integrations such as in-console messaging, video conferencing, and quick and secure data transfer.

Customers can access the service alongside AWS Partners that have been vetted and approved to use Security Incident Response. Learn more and explore AWS Security Incident Response Partners with specialized expertise to help you respond when it matters most.

Read more


Introducing the AWS Digital Sovereignty Competency

Digital sovereignty has been a priority for AWS since its inception. AWS remains committed to offering customers the most advanced sovereignty controls and features in the cloud. With the increasing importance of digital sovereignty for public sector organizations and regulated industries, AWS is excited to announce the launch of the AWS Digital Sovereignty Competency.

The AWS Digital Sovereignty Competency curates and validates a community of AWS Partners with advanced sovereignty capabilities and solutions, including deep experience in helping customers address sovereignty and compliance requirements. These partners can assist customers with residency control, access control, resilience, survivability, and self-sufficiency.

Through this competency, customers can search for and engage with trusted local and global AWS Partners that have technically validated experience in addressing customers’ sovereignty requirements. Many partners have built sovereign solutions that leverage AWS innovations and built-in controls and security features.

In addition to these offerings, AWS Digital Sovereignty Partners provide skills and knowledge of local compliance requirements and regulations, making it easier for customers to meet their digital sovereignty requirements while benefiting from the performance, agility, security, and scale of the AWS Cloud.

Read more


AWS Security Competency Update: New AI Security Category

AWS is introducing a new AI Security category in the AWS Security Competency to help customers easily identify AWS Partners with deep experience securing AI environments and defending AI workloads against advanced threats and attacks. Partners in this new category are validated for their capabilities in areas such as preventing sensitive data disclosure, preventing injection attacks, managing security posture, implementing responsible AI filtering, and more.

The rapid adoption of AI, especially generative AI, is transforming how customers build applications, but it also introduces new security risks that require specialized expertise. Customers need solutions that can secure AI models, tools, datasets, and other deployment resources used in these applications. Unlock the power of AI while keeping your AI applications and data safe with validated partner solutions.

Learn more about the AWS Security competency and explore validated partners with customer success in the new AI Security category.
 

Read more


Amazon Bedrock Guardrails now supports Automated Reasoning checks (Preview)

With the launch of the Automated Reasoning checks safeguard in Amazon Bedrock Guardrails, AWS becomes the first and only major cloud provider to integrate automated reasoning into our generative AI offerings. Automated Reasoning checks help detect hallucinations and provide verifiable proof that a large language model (LLM) response is accurate. Automated Reasoning tools do not guess or predict accuracy. Instead, they rely on sound mathematical techniques to definitively verify compliance with expert-created Automated Reasoning Policies, consequently improving transparency. Organizations increasingly use LLMs to improve user experiences and reduce operational costs by enabling conversational access to relevant, contextualized information. However, LLMs are prone to hallucinations, and because LLMs generate compelling answers, these hallucinations are often difficult to detect. The possibility of hallucinations, and an inability to explain why they occurred, slows generative AI adoption for use cases where accuracy is critical.

With Automated Reasoning checks, domain experts can more easily build specifications called Automated Reasoning Policies that encapsulate their knowledge in fields such as operational workflows and HR policies. Users of Amazon Bedrock Guardrails can validate generated content against an Automated Reasoning Policy to identify inaccuracies and unstated assumptions, and explain why statements are accurate in a verifiable way. For example, you can configure Automated Reasoning checks to validate answers on topics defined in complex HR policies (which can include constraints on employee tenure, location, and performance) and explain why an answer is accurate with supporting evidence.

Contact your AWS account team to request access to Automated Reasoning checks in Amazon Bedrock Guardrails in US East (N. Virginia) and US West (Oregon) AWS regions. To learn more, visit Amazon Bedrock Guardrails and read the News blog.
 

Read more


Amazon Web Services announces declarative policies

Today, AWS announces the general availability of declarative policies, a new management policy type within AWS Organizations. These policies simplify the way customers enforce durable intent, such as baseline configurations for AWS services within their organization. For example, using declarative policies, customers can configure EC2 to allow instance launches only from AMIs vended by specific providers, and block public access in their VPCs, with a few clicks or commands applied across their entire organization.

Declarative policies are designed to prevent actions that are non-compliant with the policy. The configuration defined in a declarative policy is maintained even when services add new APIs or features, or when customers add new principals or accounts to their organization. With declarative policies, governance teams have access to an account status report that provides insight into the current configuration of an AWS service across their organization, helping them assess readiness to enforce configurations at scale. Administrators can also provide additional transparency to end users by configuring custom error messages that redirect them to internal wikis or ticketing systems.
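
A minimal sketch of creating and attaching an EC2 declarative policy with boto3 follows; the DECLARATIVE_POLICY_EC2 type matches this launch, while the attribute names inside the policy document and the root ID are illustrative assumptions.

    import json
    import boto3

    org = boto3.client("organizations")

    # Attribute names in the policy document are illustrative -- consult the
    # declarative policies documentation for the supported EC2 attributes
    policy = org.create_policy(
        Name="ec2-baseline",
        Description="Baseline EC2 image-sharing configuration",
        Type="DECLARATIVE_POLICY_EC2",
        Content=json.dumps({
            "ec2_attributes": {
                "image_block_public_access": {
                    "state": {"@@assign": "block_new_sharing"}
                }
            }
        }),
    )

    # Attach to the organization root (hypothetical root ID)
    org.attach_policy(
        PolicyId=policy["Policy"]["PolicySummary"]["Id"],
        TargetId="r-examplerootid",
    )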

To get started, navigate to the AWS Organizations console to create and attach declarative policies. You can also use AWS Control Tower, the AWS CLI, or CloudFormation templates to configure these policies. Declarative policies today support EC2, EBS, and VPC configurations, with support for other services coming soon. To learn more, see the documentation and blog post.

Read more


Amazon OpenSearch Service zero-ETL integration with Amazon Security Lake

Amazon OpenSearch Service now offers a zero-ETL integration with Amazon Security Lake, enabling you to query and analyze security data in-place directly through OpenSearch. This integration allows you to efficiently explore voluminous data sources that were previously cost-prohibitive to analyze, helping you streamline security investigations and obtain comprehensive visibility of your security landscape. By offering the flexibility to selectively ingest data and eliminating the need to manage complex data pipelines, you can now focus on effective security operations while potentially lowering your analytics costs.

Using the powerful analytics and visualization capabilities in OpenSearch Service, you can perform deeper investigations, enhance threat hunting, and proactively monitor your security posture. Pre-built queries and dashboards using the Open Cybersecurity Schema Framework (OCSF) can further accelerate your analysis. The built-in query accelerator boosts performance and enables fast-loading dashboards, enhancing your overall experience. This integration empowers you to accelerate investigations, uncover insights from previously inaccessible data sources, and optimize analytics efficiency and costs, all with minimal data migration.

OpenSearch Service zero-ETL integration with Security Lake is now generally available in 13 regions globally: Asia Pacific (Mumbai), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Europe (Frankfurt), Europe (Ireland), Europe (London), US East (Ohio), US East (N. Virginia), US West (Oregon), South America (São Paulo), Europe (Paris), and Canada (Central).

To learn more on using this capability, see the OpenSearch Service Integrations page and the OpenSearch Service Developer Guide. To learn more about how to configure and share Security Lake, see the Get Started Guide.
 

Read more


Amazon GuardDuty introduces GuardDuty Extended Threat Detection

Today, Amazon Web Services (AWS) announces the general availability of Amazon GuardDuty Extended Threat Detection. This new capability allows you to identify sophisticated, multi-stage attacks targeting your AWS accounts, workloads, and data. You can now use new attack sequence findings that cover multiple resources and data sources over an extensive time period, allowing you to spend less time on first-level analysis and more time responding to critical severity threats to minimize business impact.

GuardDuty Extended Threat Detection uses artificial intelligence and machine learning algorithms trained at AWS scale and automatically correlates security signals from across AWS services to detect critical threats. This capability allows for the identification of attack sequences, such as credential compromise followed by data exfiltration, and represents them as a single, critical-severity finding. The finding includes an incident summary, a detailed events timeline, mapping to MITRE ATT&CK® tactics and techniques, and remediation recommendations.
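
Since attack sequence findings surface as single critical-severity findings, one way to pull them programmatically is to filter by severity with boto3; the numeric threshold of 9 used here as a proxy for critical severity is an assumption.

    import boto3

    guardduty = boto3.client("guardduty")
    detector_id = guardduty.list_detectors()["DetectorIds"][0]

    # Severity >= 9 as a proxy for critical findings (threshold is an assumption)
    finding_ids = guardduty.list_findings(
        DetectorId=detector_id,
        FindingCriteria={"Criterion": {"severity": {"Gte": 9}}},
    )["FindingIds"]

    if finding_ids:
        findings = guardduty.get_findings(
            DetectorId=detector_id, FindingIds=finding_ids
        )["Findings"]
        for finding in findings:
            print(finding["Type"], finding["Severity"], finding["Title"])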

GuardDuty Extended Threat Detection is available in all AWS commercial Regions where GuardDuty is available. This new capability is automatically enabled for all new and existing GuardDuty customers at no additional cost. You do not need to enable all GuardDuty protection plans. However, enabling additional protection plans will increase the breadth of security signals, allowing for more comprehensive threat analysis and coverage of attack scenarios. You can take action on findings directly from the GuardDuty console or via its integrations with AWS Security Hub and Amazon EventBridge.

To get started, visit the Amazon GuardDuty product page or try GuardDuty free for 30 days on the AWS Free Tier.
 

Read more


AWS Verified Access now supports secure access to resources over non-HTTP(S) protocols (Preview)

Today, AWS announces the preview of a new AWS Verified Access feature that supports secure access to resources that connect over protocols such as TCP, SSH, and RDP. With this launch, Verified Access enables you to provide secure, VPN-less access to your corporate applications and resources using AWS zero trust principles. This feature eliminates the need to manage separate access and connectivity solutions for your non-HTTP(S) resources on AWS and simplifies security operations.

Verified Access evaluates each access request in real time based on the user’s identity and device posture, using fine-grained policies. With this feature, you can extend your existing Verified Access policies to enable secure access to non-HTTP(S) resources such as git-repositories, databases, and a group of EC2 instances. For example, you can create centrally managed policies that grant SSH access across your EC2 fleet to only authenticated members of the system administration team, while ensuring that connections are permitted only from compliant devices. This simplifies your security operations by allowing you to create, group, and manage access policies for applications and resources with similar security requirements from a single interface.

This feature of AWS Verified Access is available in preview in 17 AWS Regions: US East (Ohio), US East (N. Virginia), US West (N. California), US West (Oregon), Canada (Central), Asia Pacific (Sydney), Asia Pacific (Jakarta), Asia Pacific (Tokyo), Asia Pacific (Mumbai), Asia Pacific (Singapore), Europe (Ireland), Europe (London), Europe (Frankfurt), Europe (Milan), Europe (Stockholm), South America (São Paulo), and Israel (Tel Aviv).

To learn more, visit the product page, launch blog and documentation.

Read more


AWS announces AWS Security Incident Response for general availability

Today, AWS announces the general availability of AWS Security Incident Response, a new service that helps you prepare for, respond to, and recover from security events. This service offers automated monitoring and investigation of security findings to free up your resources from routine tasks, communication and collaboration features to streamline response coordination, and direct 24/7 access to the AWS Customer Incident Response Team (CIRT).

Security Incident Response integrates with existing detection services, such as Amazon GuardDuty, and third-party tools through AWS Security Hub to rapidly review security alerts, escalate high-priority findings, and, with your permission, implement containment actions. It reduces the number of alerts your team needs to analyze, saving time and allowing your security personnel to focus on strategic initiatives. The service centralizes all incident-related communications, documentation, and actions, making coordinated incident response across internal and external stakeholders possible and reducing the time to coordinate from hours to minutes. You can preconfigure incident response team members, set up automatic notifications, manage case permissions, and use communication tools like video conferencing and in-console messaging during security events. By accessing the service through a single, centralized dashboard in the AWS Management Console, you can monitor active cases, review resolved security incident cases, and track key metrics, such as the number of triaged events and mean time to resolution, in real time. If you require specialized expertise, you can connect 24/7 to the AWS CIRT in only one step.

For more information about the AWS Regions where Security Incident Response is available, refer to the service documentation.

To get started, visit the Security Incident Response console, and explore the overview page to learn more. For configuration details, refer to the Security Incident Response User Guide.

Read more


AWS Network Firewall expands the list of supported protocols and keywords in firewall rules

Today, we are excited to announce support for new protocols in AWS Network Firewall so you can protect your Amazon VPCs using application-specific inspection rules. With this launch, AWS Network Firewall detects protocols such as HTTP2, QUIC, and PostgreSQL so you can apply firewall inspection rules to that traffic. You can also use new rule keywords in TLS, SNMP, DHCP, and Kerberos rules to apply granular security controls to your stateful inspection rules.

AWS Network Firewall is a managed firewall service that makes it easy to deploy essential network protections for all your Amazon VPCs. Its flexible rules engine lets you define firewall rules that give you fine-grained control over network traffic. You can also enable AWS Managed Rules for intrusion detection and prevention signatures that protect against threats such as botnets, scanners, web attacks, phishing, and emerging events.
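
As an illustration of the newly supported protocol keywords, a stateful rule group that drops outbound HTTP/2 traffic might be created as follows; the Suricata-compatible rule string and the capacity value are illustrative.

    import boto3

    firewall = boto3.client("network-firewall")

    # Suricata-compatible rule using the http2 protocol keyword (illustrative)
    rules = ('drop http2 $HOME_NET any -> $EXTERNAL_NET any '
             '(msg:"Block outbound HTTP/2"; sid:1000001; rev:1;)')

    firewall.create_rule_group(
        RuleGroupName="block-outbound-http2",
        Type="STATEFUL",
        Capacity=10,  # illustrative capacity
        RuleGroup={"RulesSource": {"RulesString": rules}},
    )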

You can create AWS Network Firewall rules using the Amazon VPC console, the AWS CLI, or the Network Firewall API. To see which Regions AWS Network Firewall is available in, visit the AWS Region Table. For more information, please see the AWS Network Firewall product page and the service documentation.
 

Read more


AWS Artifact enhances agreements with improved access control and tracking

We are excited to announce enhancements to the agreement functionality on AWS Artifact that will improve how you manage and track agreement execution.

You can now provide fine-grained access to agreements in AWS Artifact at the AWS Identity and Access Management (IAM) action and resource level. To make it easy for you to configure IAM permissions, we have introduced the “AWSArtifactAgreementsReadOnlyAccess” and “AWSArtifactAgreementsFullAccess” managed policies for AWS Artifact agreements, which provide read-only and full permissions respectively. We have also implemented CloudTrail logging for agreement activities on AWS Artifact, enabling you to easily track and audit user activity and API calls related to agreements. To take advantage of the new features through the Artifact console, please update your IAM policies and opt in to the new fine-grained permissions by selecting that option on the Artifact Agreements console.

We also introduced a new API called listCustomerAgreements that allows you to list active customer agreements for each AWS Account. This API enables automation and efficient tracking of active agreements for customers, especially for those managing a large number of accounts or complex compliance requirements.
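
A minimal sketch of calling the new API from boto3 follows; the response field names are assumptions to verify against the AWS Artifact API reference.

    import boto3

    artifact = boto3.client("artifact", region_name="us-east-1")

    # Lists active customer agreements for the account; field names in the
    # response are assumptions -- check the API reference
    response = artifact.list_customer_agreements()
    for agreement in response.get("customerAgreements", []):
        print(agreement.get("name"), agreement.get("state"))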

These features are available in all AWS commercial regions. To learn more about AWS Artifact and how to manage agreements, refer to the documentation and AWS Artifact API reference.
 

Read more


Amazon OpenSearch Ingestion now supports writing security data to Amazon Security Lake

Amazon OpenSearch Ingestion now allows you to write data into Amazon Security Lake in real time, so you can ingest security data from both AWS and custom sources and uncover valuable insights into potential security issues in near real time. Amazon Security Lake centralizes security data from AWS environments, SaaS providers, and on-premises sources into a purpose-built data lake. With this integration, customers can now seamlessly ingest and normalize security data from all popular custom sources before writing it into Amazon Security Lake.

Amazon Security Lake uses the Open Cybersecurity Schema Framework (OCSF) to normalize and combine security data from a broad range of enterprise security data sources in the Apache Parquet format. With this feature, you can now use Amazon OpenSearch Ingestion to ingest and transform security data from popular third-party sources like Palo Alto, CrowdStrike, and SentinelOne into OCSF format before writing the data into Security Lake. Once the data is written to Security Lake, it is available in the AWS Glue Data Catalog and AWS Lake Formation tables for the respective source.

This feature is available in all 15 AWS commercial Regions where Amazon OpenSearch Ingestion is currently available: US East (Ohio), US East (N. Virginia), US West (Oregon), US West (N. California), Europe (Ireland), Europe (London), Europe (Frankfurt), Asia Pacific (Tokyo), Asia Pacific (Sydney), Asia Pacific (Singapore), Asia Pacific (Mumbai), Asia Pacific (Seoul), Canada (Central), South America (São Paulo), and Europe (Stockholm).

To learn more, see the Amazon OpenSearch Ingestion webpage and the Amazon OpenSearch Service Developer Guide.

Read more


Announcing AWS STS support for ECDSA-based signatures of OIDC tokens

Today, AWS Security Token Service (STS) is announcing support for digitally signing OpenID Connect (OIDC) JSON Web Tokens (JWTs) using Elliptic Curve Digital Signature Algorithm (ECDSA) keys. A digital signature guarantees the JWT’s authenticity and integrity, and ECDSA is a popular, NIST-approved digital signature algorithm. When your identity provider (IdP) authenticates a user, it crafts a signed OIDC JWT representing that user’s identity. When your authenticated user calls the AssumeRoleWithWebIdentity API and passes their OIDC JWT, STS vends short-term credentials that enable access to your protected AWS resources.

You now have a choice between using RSA and ECDSA keys when your IdP digitally signs an OIDC JWT. To begin using ECDSA keys with your OIDC IdP, update your IdP’s JWKS document with the new key information. No change to your AWS Identity and Access Management (IAM) configuration is needed to use ECDSA-based signatures of your OIDC JWTs.
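
To make the flow concrete, here is a sketch of minting an ES256-signed JWT with PyJWT and exchanging it for AWS credentials; the issuer, audience, key file, key ID, and role ARN are all hypothetical, and in practice your IdP signs the token with a key published in its JWKS document.

    import time
    import boto3
    import jwt  # PyJWT, with the cryptography package installed

    # Hypothetical ES256 (ECDSA P-256) private key; in practice your IdP holds this
    with open("es256-private.pem") as f:
        private_key = f.read()

    token = jwt.encode(
        {
            "iss": "https://idp.example.com",  # hypothetical issuer
            "sub": "user-1234",
            "aud": "my-app",                   # hypothetical audience
            "exp": int(time.time()) + 300,
        },
        private_key,
        algorithm="ES256",                     # ECDSA signature, now accepted by STS
        headers={"kid": "example-key-id"},     # must match a key in your JWKS
    )

    sts = boto3.client("sts")
    credentials = sts.assume_role_with_web_identity(
        RoleArn="arn:aws:iam::111122223333:role/web-identity-role",  # hypothetical
        RoleSessionName="ecdsa-demo",
        WebIdentityToken=token,
    )["Credentials"]
    print(credentials["Expiration"])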

Support for ECDSA-based signatures of OIDC JWTs is available in all AWS Regions, including the AWS GovCloud (US) Regions.

To learn more about using OIDC to authenticate your users and workloads, please visit OIDC Federation in the IAM Users Guide.

Read more


Announcing new feature tiers: Essentials and Plus for Amazon Cognito

Amazon Cognito launches new user pool feature tiers: Essentials and Plus. The Essentials tier offers comprehensive and flexible user authentication and access control features, allowing customers to implement secure, scalable, and customized sign-up and sign-in experiences for their application within minutes. It supports password-based log-in, multi-factor authentication (email, SMS, TOTP), and log-in with social identity providers, along with recently announced Managed Login and passwordless log-in (passkeys, email, SMS) features. Essentials also supports customizing access tokens and disallowing password reuse. The Plus tier is geared toward customers with elevated security needs for their applications by offering threat protection capabilities against suspicious log-ins. Plus includes all Essentials features and additionally supports risk-based adaptive authentication, compromised credentials detection, and exporting user authentication event logs to analyze threat signals.

Essentials will be the default tier for new user pools created by customers. Customers also have the flexibility to switch between all available tiers at any time based on their application needs. For existing user pools, customers can enable the new tiers or continue using their current user pool configurations without making any changes. Customers using advanced security features (ASF) in Amazon Cognito should consider the Plus tier, which includes all ASF capabilities, additional capabilities such as passwordless log-in, and up to 60% savings compared to using ASF.
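
For API-driven setups, a sketch of creating a user pool on a specific tier follows; the UserPoolTier parameter name and its ESSENTIALS value are assumptions to confirm in the CreateUserPool API reference.

    import boto3

    cognito = boto3.client("cognito-idp")

    # Parameter name and tier value are assumptions -- verify in the API reference
    pool = cognito.create_user_pool(
        PoolName="my-app-users",
        UserPoolTier="ESSENTIALS",
    )
    print(pool["UserPool"]["Id"])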

The Essentials and Plus tiers are available at new pricing. Essentials and Plus are available in all AWS Regions where Amazon Cognito is available except AWS GovCloud (US) Regions.

To learn more, refer to the Amazon Cognito documentation.

Read more


AWS Shield Advanced is now available in Asia Pacific (Malaysia) Region

Starting today, you can use AWS Shield Advanced in the AWS Asia Pacific (Malaysia) Region. AWS Shield Advanced is a managed application security service that safeguards applications running on AWS from distributed denial of service (DDoS) attacks. Shield Advanced provides always-on detection and automatic inline mitigations that minimize application downtime and latency from DDoS attacks. It also provides protection against more sophisticated and larger attacks for your applications running on Amazon Elastic Compute Cloud (EC2), Elastic Load Balancing (ELB), Amazon CloudFront, AWS Global Accelerator, and Amazon Route 53. To learn more, visit the AWS Shield Advanced product page.

For a full list of AWS regions where AWS Shield Advanced is available, visit the AWS Regional Services page. AWS Shield Advanced pricing may vary between regions. For more information about pricing, visit the AWS Shield Pricing page.
 

Read more


Amazon Cognito introduces Managed Login to support rich branding for end user journeys

Amazon Cognito introduces Managed Login, a fully-managed, hosted sign-in and sign-up experience that customers can personalize to align with their company or application branding. Amazon Cognito provides millions of users with secure, scalable, and customizable sign-up and sign-in experiences. With Managed Login, Cognito customers can now use its no-code visual editor to customize the look and feel of the user journey from signup and login to password recovery and multi-factor authentication.

Managed Login helps customers offload the undifferentiated heavy lifting of designing and maintaining custom implementations such as passwordless authentication and localization. For example, Managed Login offers pre-built integrations for passwordless login, including sign-in with passkeys, email, or text message. This provides customers the flexibility to implement low-friction and secure authentication methods without the need to author custom code. With Managed Login, customers now design and manage their end-user sign-up and sign-in experience through the AWS Management Console. Additionally, Cognito has also revamped its getting started experience with application-specific (e.g., for web applications) guidance for customers to swiftly configure their user pools. Together with Managed Login and a simplified getting started experience, customers can now get their applications to end users faster than ever before with Amazon Cognito.

Managed Login is offered as part of the Cognito Essentials tier and can be used in all AWS Regions where Amazon Cognito is available except the AWS GovCloud (US) Regions. To get started, refer to the Amazon Cognito documentation.

Read more


Amazon Cognito now supports passwordless authentication for low-friction and secure logins

Amazon Cognito now allows you to secure user access to your applications with passwordless authentication, including sign-in with passkeys, email, and text message. Passkeys are based on FIDO standards and use public key cryptography, which enables strong, phishing-resistant authentication. With passwordless authentication, you can reduce the friction associated with traditional password-based authentication and thus simplify the user log-in experience for your applications. For example, if your users choose to use passkeys to log in, they can do so using a built-in authenticator, such as Touch ID on Apple MacBooks or Windows Hello facial recognition on PCs.

Amazon Cognito provides millions of users with secure, scalable, and customizable sign-up and sign-in experiences within minutes. With this launch, AWS is now extending the support for passwordless authentication to the applications you build. This enables your end-users to log in to your applications with a low-friction and secure approach.

Passwordless authentication is offered as part of the Cognito Essentials tier and can be used in all AWS Regions where Amazon Cognito is available except the AWS GovCloud (US) Regions. To get started, see the Amazon Cognito documentation.

Read more


AWS CloudFormation Hooks now allow evaluation of AWS Cloud Control API resource configurations

AWS CloudFormation Hooks now allow you to evaluate resource configurations from AWS Cloud Control API (CCAPI) create and update operations. Hooks allow you to invoke custom logic to enforce security, compliance, and governance policies on your resource configurations. CCAPI is a set of common application programming interfaces (APIs) designed to make it easy for developers to manage their cloud infrastructure in a consistent manner and leverage the latest AWS capabilities faster. By extending Hooks to CCAPI, customers can now inspect resource configurations prior to CCAPI create and update operations, and block the operations or issue a warning when a non-compliant resource is found.

Before this launch, customers would publish Hooks that would only be invoked during CloudFormation operations. Now, customers can extend their resource Hook evaluations beyond CloudFormation to CCAPI based operations. Customers with existing resource Hooks, or who are using the recently launched pre-built Lambda and Guard hooks, simply need to specify “Cloud_Control” as a target in the hooks’ configuration.
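
To see where such a hook fires, consider a direct Cloud Control API create; a hook configured with the Cloud_Control target would evaluate the desired state below before the bucket is created (the bucket name is hypothetical).

    import json
    import boto3

    ccapi = boto3.client("cloudcontrol")

    # A resource hook targeting Cloud_Control evaluates this desired state
    # before the create operation proceeds
    response = ccapi.create_resource(
        TypeName="AWS::S3::Bucket",
        DesiredState=json.dumps({"BucketName": "example-hook-evaluated-bucket"}),
    )
    print(response["ProgressEvent"]["OperationStatus"])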

Hooks are available in all AWS commercial Regions. CCAPI support is available for customers who use CCAPI directly or through third-party IaC tools that support CCAPI providers.

To get started, refer to the Hooks user guide and the CCAPI user guide for more information. You can learn more about this feature in this AWS DevOps Blog post.
 

Read more


Author AWS CloudFormation Hooks using the CloudFormation Guard domain specific language

AWS CloudFormation Hooks now allow customers to use the AWS CloudFormation Guard domain specific language to author hooks. Customers use AWS CloudFormation Hooks to invoke custom logic that inspects resource configurations prior to a create, update, or delete AWS CloudFormation stack operation. If a non-compliant configuration is found, Hooks can block the operation or let the operation continue with a warning. With this launch, you can now author hooks by simply pointing to a Guard rule set stored as an S3 object.

Prior to this launch, customers authored hooks using a programming language and registered the hooks as extensions on the CloudFormation registry using the cfn-cli. The new pre-built Guard hook simplifies the authoring process and lets customers extend the Guard rules they already use for static template validation. Now, you can store your Guard rules, either as individual or compressed files in an S3 bucket, and provide your S3 URI in your hooks configuration.
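
As a sketch, the following Python (boto3) snippet stores an illustrative Guard rule in S3 so a Guard hook configuration can reference it. The bucket name is a placeholder, and the rule itself is an example (requiring encryption on S3 buckets in a template), not an official sample.

```python
import boto3

# Illustrative Guard rule: every AWS::S3::Bucket in a template must
# declare BucketEncryption.
GUARD_RULES = """
let s3_buckets = Resources.*[ Type == 'AWS::S3::Bucket' ]

rule s3_buckets_must_be_encrypted when %s3_buckets !empty {
    %s3_buckets.Properties.BucketEncryption exists
}
"""

s3 = boto3.client("s3")
s3.put_object(
    Bucket="my-guard-rules-bucket",   # placeholder bucket
    Key="hooks/s3-encryption.guard",
    Body=GUARD_RULES.encode("utf-8"),
)

# The resulting URI, s3://my-guard-rules-bucket/hooks/s3-encryption.guard,
# is what you reference in the Guard hook's configuration.
```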

The Guard hook is available at no additional charge in all AWS Commercial Regions. To get started, you can use the new Hooks console workflow in the CloudFormation console, the AWS CLI, or CloudFormation templates.

To learn more about the Guard hook, check out the AWS DevOps Blog or refer to the Guard Hook User Guide. Refer to the Guard User Guide to learn more about Guard, including how to write Guard rules.
 

Read more


AWS CloudFormation Hooks now support custom AWS Lambda functions

AWS CloudFormation Hooks introduces a pre-built hook that allows you to simply point to an AWS Lambda function in your account. With CloudFormation Hooks, you can provide custom logic that proactively evaluates your resource configurations before provisioning. Today’s launch allows you to provide your custom logic as a Lambda function, giving you a simpler way to author a hook while retaining the flexibility of hosting the function in your own account.

Prior to this launch, customers used the CloudFormation CLI (cfn-cli) to author and publish hooks to the CloudFormation registry. Now, customers can simply activate the Lambda hook and pass the Amazon Resource Name (ARN) of a Lambda function for the hook to invoke. This allows you to directly edit your Lambda function to make updates without re-configuring your hook. Additionally, you no longer have to register your custom logic with the CloudFormation registry.
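
To make the flow concrete, here is a hedged sketch of a Lambda function that such a hook could invoke. The event and response shapes are assumptions based on the announcement; consult the Lambda Hook User Guide for the exact contract before relying on these field names.

```python
# Hypothetical Lambda hook handler: fail operations on S3 buckets that
# lack encryption settings. Field names below are assumptions.
def handler(event, context):
    # Assume the proposed configuration arrives under
    # requestData.targetModel.resourceProperties.
    props = (
        event.get("requestData", {})
        .get("targetModel", {})
        .get("resourceProperties", {})
    )

    if "BucketEncryption" not in props:
        return {
            "hookStatus": "FAILED",
            "errorCode": "NonCompliant",
            "message": "S3 buckets must define BucketEncryption.",
        }

    return {"hookStatus": "SUCCESS", "message": "Resource is compliant."}
```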

The Lambda hook is available at no additional charge in all AWS Commercial Regions. Customers will incur a charge for Lambda usage; refer to Lambda’s pricing guide for more information. To get started, you can use the new Hooks console workflow in the CloudFormation console, the AWS CLI, or CloudFormation templates.

To learn more about the Lambda hook, check out the detailed feature walkthrough on the AWS DevOps Blog or refer to the Lambda Hook User Guide. To get started with creating your Lambda function, visit AWS Lambda User Guide.
 

Read more


Amazon EKS simplifies providing IAM permissions to EKS add-ons

Amazon Elastic Kubernetes Service (EKS) now offers a direct integration between EKS add-ons and EKS Pod Identity, streamlining the lifecycle management process for critical cluster operational software that needs to interact with AWS services outside the cluster.

EKS add-ons that enable integration with underlying AWS resources need IAM permissions to interact with AWS services. EKS Pod Identities simplify how Kubernetes applications obtain AWS IAM permissions. With today’s launch, you can directly manage EKS Pod Identities using EKS add-ons operations through the EKS console, CLI, API, eksctl, and IaC tools like AWS CloudFormation, simplifying usage of Pod Identities for EKS add-ons. This integration expands the selection of Pod Identity compatible EKS add-ons from AWS and AWS Marketplace available for installation through the EKS console during cluster creation.
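
For example, the following Python (boto3) sketch installs an add-on with a Pod Identity association in one call. The cluster, add-on, service account, and role names are placeholders, and the role is assumed to trust the EKS Pod Identity service principal.

```python
import boto3

eks = boto3.client("eks")

eks.create_addon(
    clusterName="my-cluster",
    addonName="aws-ebs-csi-driver",
    # Associate the add-on's Kubernetes service account with an IAM role
    # through EKS Pod Identity (no IRSA annotations required).
    podIdentityAssociations=[
        {
            "serviceAccount": "ebs-csi-controller-sa",
            "roleArn": "arn:aws:iam::123456789012:role/EbsCsiPodIdentityRole",
        }
    ],
)
```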

EKS add-ons integration with Pod Identities is generally available in all commercial AWS regions. To get started, see the EKS user guide.

Read more


AWS Controllers for Kubernetes for AWS Private CA now generally available

AWS Controllers for Kubernetes (ACK) service controller for AWS Private Certificate Authority (AWS Private CA) has graduated to generally available status.

By using ACK service controller for AWS Private CA, customers can now provision and manage AWS Private CA certificate authorities (CAs) and private certificates directly from Kubernetes. You can use private certificates to secure containers with encryption and identify workloads. AWS Private CA enables creation of private CA hierarchies, including root and subordinate CAs, without the investment and maintenance costs of operating an on-premises CA. With AWS Private CA, you can issue certificates automatically and at scale from a highly-available, managed cloud CA that is backed by hardware security modules.

To get started using ACK service controller for AWS Private CA visit the documentation. You can learn more about ACK and other service controllers here.

Read more


AWS Command Line Interface adds PKCE-based authorization for single sign-on

The AWS Command Line Interface (AWS CLI) v2 now supports OAuth 2.0 authorization code flows using the Proof Key for Code Exchange (PKCE) standard. This provides a simple and safe way to retrieve credentials for AWS CLI commands.

The AWS CLI is a unified tool that enables you to control multiple AWS services from the command line and to automate them through scripts. AWS CLI v2 offers integration with AWS IAM Identity Center, the recommended service for managing workforce access to AWS applications and multiple AWS accounts. The authorization code flow with PKCE is the recommended best practice for access to AWS resources from desktops and mobile devices with web browsers. It is now the default behavior when running the aws sso login or aws configure sso commands.

To learn more, see Configuring IAM Identity Center authentication with the AWS CLI in the AWS CLI User Guide. Share your questions, comments, and issues with us on GitHub. AWS IAM Identity Center is available at no additional cost in all AWS Regions where it is supported.
 

Read more


Introducing Amazon Route 53 Resolver DNS Firewall Advanced

Today, AWS announced Amazon Route 53 Resolver DNS Firewall Advanced, a new set of capabilities on Route 53 Resolver DNS Firewall that allows you to monitor and block suspicious DNS traffic associated with advanced DNS threats, such as DNS tunneling and Domain Generation Algorithms (DGAs). These threats are designed to avoid detection by threat intelligence feeds, or are difficult for threat intelligence feeds alone to track and block in time.

Today, Route 53 Resolver DNS Firewall helps you block DNS queries made for domains identified as low-reputation or suspected to be malicious, and allow queries for trusted domains. With DNS Firewall Advanced, you can now enforce additional protections that monitor and block your DNS traffic in real time based on anomalies identified in the domain names being queried from your VPCs. To get started, you can configure one or more DNS Firewall Advanced rules, specifying the type of threat (DGA or DNS tunneling) to inspect for. You can add the rules to a DNS Firewall rule group and enforce them on your VPCs by associating the rule group with each desired VPC directly, or by using AWS Firewall Manager, AWS Resource Access Manager (RAM), AWS CloudFormation, or Route 53 Profiles.
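
As an illustration, the Python (boto3) sketch below adds a DNS Firewall Advanced rule that blocks suspected DGA domains to an existing rule group. The threat-type and confidence parameter names are assumptions based on the announcement; IDs are placeholders.

```python
import uuid
import boto3

r53r = boto3.client("route53resolver")

r53r.create_firewall_rule(
    CreatorRequestId=str(uuid.uuid4()),
    FirewallRuleGroupId="rslvr-frg-EXAMPLE1111",  # existing rule group
    Priority=101,
    Action="BLOCK",
    BlockResponse="NODATA",
    Name="block-dga-domains",
    DnsThreatProtection="DGA",   # or "DNS_TUNNELING" (assumed parameter)
    ConfidenceThreshold="HIGH",  # assumed parameter; tunes false positives
)

# Associate the rule group with each VPC directly, or via Firewall Manager,
# RAM, CloudFormation, or Route 53 Profiles, to enforce the rule.
```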

Route 53 Resolver DNS Firewall Advanced is available in all AWS Regions, including the AWS GovCloud (US) Regions. To learn more about the new capabilities and the pricing, visit the Route 53 Resolver DNS Firewall webpage and the Route 53 pricing page. To get started, visit the Route 53 documentation.

Read more


Centrally manage root access in AWS Identity and Access Management (IAM)

Today, AWS Identity and Access Management (IAM) is launching a new capability allowing customers to centrally manage their root credentials, simplify auditing of credentials, and perform tightly scoped privileged tasks across their AWS member accounts managed using AWS Organizations.

Now, administrators can remove unnecessary root credentials for member accounts in AWS Organizations and then, if needed, perform tightly scoped privileged actions using temporary credentials. By removing unnecessary credentials, administrators have fewer highly privileged root credentials that they must secure with multi-factor authentication (MFA), making it easier to effectively meet MFA compliance requirements. This helps administrators control highly privileged access in their accounts, reduces operational effort, and makes it easier for them to secure their AWS environment.
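
Here is a hedged Python (boto3) sketch of the workflow, assuming you are signed in to the organization's management account or a delegated administrator; the member account ID is a placeholder, and the task policy shown is one of the AWS-managed root task policies named in this launch.

```python
import boto3

iam = boto3.client("iam")
sts = boto3.client("sts")

# One-time opt-in: centralize root credentials management for the
# organization's member accounts.
iam.enable_organizations_root_credentials_management()

# Later, perform a tightly scoped privileged task in a member account
# using short-lived root credentials.
response = sts.assume_root(
    TargetPrincipal="111122223333",  # placeholder member account ID
    TaskPolicyArn={
        "arn": "arn:aws:iam::aws:policy/root-task/IAMDeleteRootUserCredentials"
    },
    DurationSeconds=900,
)
print(response["Credentials"]["Expiration"])
```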

The capability to manage root access in AWS member accounts is available in all AWS Regions, including the AWS GovCloud (US) Regions and China Regions. To get started managing your root access in IAM, visit the list of resources below:

Read more


Customize scope of IAM Access Analyzer unused access analysis

Customers use Identity and Access Management (IAM) Access Analyzer unused access findings to identify overly permissive access granted to IAM roles and users in their accounts or AWS organization. Now, customers can optionally customize the analysis to meet their needs. Customers can select accounts, roles, and users to exclude from analysis and focus on specific areas to identify and remediate unused access. They can use identifiers such as account IDs, or scale the configuration using role tags. By scoping the analyzer to monitor a subset of accounts and roles, customers can streamline findings review and optimize the costs of unused access analysis. Customers can update the configuration at any time to change the scope of analysis. With the new offering, IAM Access Analyzer provides enhanced controls to help customers tailor the analysis more closely to their organization’s security needs.
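
For illustration, here is a Python (boto3) sketch that creates an organization-level unused access analyzer with exclusions. The analysisRule configuration keys follow the announcement and should be checked against the IAM Access Analyzer API reference; account IDs and tags are placeholders.

```python
import boto3

aa = boto3.client("accessanalyzer")

aa.create_analyzer(
    analyzerName="org-unused-access",
    type="ORGANIZATION_UNUSED_ACCESS",
    configuration={
        "unusedAccess": {
            "unusedAccessAge": 90,  # days of inactivity before flagging
            "analysisRule": {
                "exclusions": [
                    # Skip sandbox accounts entirely.
                    {"accountIds": ["111122223333"]},
                    # Skip roles tagged for break-glass use.
                    {"resourceTags": [{"usage": "break-glass"}]},
                ]
            },
        }
    },
)
```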

This new feature is available in all AWS Commercial Regions. To learn more about IAM Access Analyzer unused access analysis, see the documentation.

Read more


Introducing resource control policies (RCPs) to centrally restrict access to AWS resources

AWS is excited to announce resource control policies (RCPs) in AWS Organizations to help you centrally establish a data perimeter across your AWS environment. With RCPs, you can centrally restrict external access to your AWS resources at scale. At launch, RCPs apply to resources of the following AWS services: Amazon Simple Storage Service (Amazon S3), AWS Security Token Service, AWS Key Management Service, Amazon Simple Queue Service, and AWS Secrets Manager.

RCPs are a type of organization policy that can be used to centrally create and enforce preventative controls on AWS resources in your organization. Using RCPs, you can centrally set the maximum available permissions to your AWS resources as you scale your workloads on AWS. For example, an RCP can help enforce the requirement that “no principal outside my organization can access Amazon S3 buckets in my organization,” regardless of the permissions granted through individual bucket policies. RCPs complement service control policies (SCPs), an existing type of organization policy. While SCPs offer central control over the maximum permissions for IAM roles and users in your organization, RCPs offer central control over the maximum permissions on AWS resources in your organization.
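
The S3 example above can be expressed as a short Python (boto3) sketch that creates and attaches such an RCP. The organization ID and root ID are placeholders, and the RCP policy type must already be enabled in AWS Organizations.

```python
import json
import boto3

org = boto3.client("organizations")

# Deny S3 actions for principals outside the organization.
rcp = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "EnforceOrgIdentities",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:*",
            "Resource": "*",
            "Condition": {
                "StringNotEqualsIfExists": {"aws:PrincipalOrgID": "o-exampleorgid"}
            },
        }
    ],
}

policy = org.create_policy(
    Name="s3-org-only-access",
    Description="Deny S3 access from principals outside the organization",
    Type="RESOURCE_CONTROL_POLICY",
    Content=json.dumps(rcp),
)

# Attach at the root, an OU, or an individual account, as with SCPs.
org.attach_policy(
    PolicyId=policy["Policy"]["PolicySummary"]["Id"],
    TargetId="r-examplerootid",
)
```

A production data perimeter would typically add condition exceptions for trusted AWS service principals; this sketch shows only the core pattern.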

Customers that use AWS IAM Access Analyzer to identify external access can review the impact of RCPs on their resource permissions. For an updated list of AWS services that support RCPs, refer to the list of services supporting RCPs. RCPs are available in all AWS commercial Regions. To learn more, visit the RCPs documentation.
 

Read more


AWS Directory Service is available in the AWS Asia Pacific (Malaysia) Region

AWS Directory Service for Microsoft Active Directory, also known as AWS Managed Microsoft AD, and AD Connector are now available in the AWS Asia Pacific (Malaysia) Region.

Built on actual Microsoft Active Directory (AD), AWS Managed Microsoft AD enables you to migrate AD-aware applications while reducing the work of managing AD infrastructure in the AWS Cloud. You can use your Microsoft AD credentials to connect to AWS applications such as Amazon Relational Database Service (RDS) for SQL Server, Amazon RDS for PostgreSQL, and Amazon RDS for Oracle. You can keep your identities in your existing Microsoft AD or create and manage identities in your AWS managed directory.

AD Connector is a proxy that enables AWS applications to use your existing on-premises AD identities without requiring AD infrastructure in the AWS Cloud. You can also use AD Connector to join Amazon EC2 instances to your on-premises AD domain and manage these instances using your existing group policies.

Please see all AWS Regions where AWS Managed Microsoft AD and AD Connector are available. To learn more, see AWS Directory Service.
 

Read more


AWS IAM Identity Center now supports search by permission set name

Today, AWS IAM Identity Center announced support for permission set search, enabling you to filter existing permission sets based on their names. This simplifies managing access to AWS accounts via IAM Identity Center, allowing you to use any substring in the permission set name to quickly look up a permission set.

IAM Identity Center is where you create, or connect, your workforce users once and centrally manage their access to multiple AWS accounts and applications. Now, you can filter and find a permission set using any part of the name that you gave to the permission set, in addition to using the Amazon Resource Name (ARN).

IAM Identity Center enables you to connect your existing source of workforce identities to AWS once and manage access to multiple AWS accounts from a central place, as well as access the personalized experiences offered by AWS applications, such as Amazon Q; and define and audit user-aware access to data in AWS services, such as Amazon Redshift. IAM Identity Center is available at no additional cost in all AWS Regions where it is supported. To learn more, see the AWS IAM Identity Center User Guide.

Read more


AWS Firewall Manager is now available in the AWS Asia Pacific (Malaysia) Region

AWS Firewall Manager is now available in the AWS Asia Pacific (Malaysia) region, enabling customers to create policies to manage their VPC Security Groups, VPC network access control lists (NACLs), and AWS WAF protections for applications running in this region. Support for other policy types will be available in the coming months. Firewall Manager is now available in a total of 32 AWS commercial regions, 2 GovCloud regions, and all Amazon CloudFront edge locations.

AWS Firewall Manager is a security management service that enables customers to centrally configure and manage firewall rules across their accounts and resources. Using AWS Firewall Manager, customers can manage AWS WAF rules, AWS Shield Advanced protections, AWS Network Firewall, Amazon Route 53 Resolver DNS Firewall, VPC security groups, and VPC network access control lists (NACLs) across their AWS Organizations. AWS Firewall Manager makes it easier for customers to ensure that all firewall rules are consistently enforced and compliant, even as new accounts and resources are created.

To get started, see the AWS Firewall Manager documentation for more details and the AWS Region Table for the list of regions where AWS Firewall Manager is currently available. To learn more about AWS Firewall Manager, its features, and its pricing, visit the AWS Firewall Manager website.
 

Read more


AWS Identity and Access Management now supports AWS PrivateLink in the AWS GovCloud (US) Regions

Starting today, AWS Identity and Access Management (IAM) supports AWS PrivateLink in the AWS GovCloud (US) Regions. With IAM, you can specify who or what can access services and resources in AWS by creating and managing resources such as IAM roles and policies. You can now establish a private connection between your virtual private cloud (VPC) and IAM to manage IAM resources, helping you meet your compliance and regulatory requirements to limit public internet connectivity.

By using PrivateLink with both IAM and the AWS Security Token Service (STS), which already supports PrivateLink, you can now manage your IAM resources such as IAM roles and request temporary credentials to access your AWS resources end to end without going through the public Internet. Interface VPC endpoints for IAM in the AWS GovCloud (US) Regions can only be created in the AWS GovCloud (US-West) Region, where the IAM control plane is located. If your VPC is in a different Region, use AWS Transit Gateway to allow access to the IAM interface VPC endpoint from another Region.
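
Below is a minimal Python (boto3) sketch of creating the interface endpoint, assuming the IAM PrivateLink service name follows the usual com.amazonaws.<region>.<service> pattern (confirm it in the IAM User Guide); all IDs are placeholders.

```python
import boto3

# Endpoints for IAM in GovCloud can only be created in US-West,
# where the IAM control plane lives.
ec2 = boto3.client("ec2", region_name="us-gov-west-1")

ec2.create_vpc_endpoint(
    VpcEndpointType="Interface",
    VpcId="vpc-0123456789abcdef0",
    ServiceName="com.amazonaws.us-gov-west-1.iam",  # assumed service name
    SubnetIds=["subnet-0123456789abcdef0"],
    SecurityGroupIds=["sg-0123456789abcdef0"],
    PrivateDnsEnabled=True,
)
```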

For more information about AWS PrivateLink and IAM, please see the IAM User Guide.

Read more


Amazon Verified Permissions launches new API to get multiple policies

Amazon Verified Permissions has launched a new API called batchGetPolicies. Customers can now make a single API call that returns multiple policies, for example to populate a list of policies that apply to a specific principal or resource. Amazon Verified Permissions is a permissions management and fine-grained authorization service for the applications that you build. Amazon Verified Permissions uses the Cedar policy language to enable developers and admins to define policy-based access controls based on roles and attributes. For example, a patient management application might call Amazon Verified Permissions (AVP) to determine if Alice is permitted access to Bob’s patient records.

The new API accepts up to 100 policy IDs and returns the corresponding set of policies from across one or more policy stores. This simplifies integration and reduces latency by cutting the number of calls an application needs to make to Verified Permissions. For example, when building a permissions management UX that lists Cedar policies, the application now needs to make only one call to get 50 policies, rather than making 50 calls.
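
A hedged Python (boto3) sketch of the call, with the operation and request shape inferred from the announcement (up to 100 policy IDs, optionally spanning policy stores); the IDs are placeholders, so verify the exact names in the Verified Permissions API reference.

```python
import boto3

avp = boto3.client("verifiedpermissions")

# Fetch several policies in one round trip instead of one call each.
response = avp.batch_get_policy(
    requests=[
        {"policyStoreId": "PSEXAMPLE1111", "policyId": "SPEXAMPLE1111"},
        {"policyStoreId": "PSEXAMPLE1111", "policyId": "SPEXAMPLE2222"},
    ]
)

for result in response.get("results", []):
    print(result.get("policyId"), result.get("policyType"))
```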

This feature is available in all regions where Verified Permissions is available. Pricing is based on the number of policies requested. For more information on pricing, visit the Amazon Verified Permissions pricing page. For more information on the service, visit the Amazon Verified Permissions product page.
 

Read more


Amazon CloudFront no longer charges for requests blocked by AWS WAF

Effective October 25, 2024, all CloudFront requests blocked by AWS WAF are free of charge. With this change, CloudFront customers will never incur request fees or data transfer charges for requests blocked by AWS WAF. This update requires no changes to your applications and applies to all CloudFront distributions using AWS WAF.

AWS WAF will continue billing for evaluating and blocking these requests. To learn more about using AWS WAF with CloudFront, visit Use AWS WAF protections in the CloudFront Developer Guide.

Read more


AWS Security Hub launches 7 new security controls

AWS Security Hub has released 7 new security controls, increasing the total number of controls offered to 437. Security Hub released new controls for Amazon Simple Notification Service (Amazon SNS) topic and AWS Key Management Service (AWS KMS) keys checking for public access. Security Hub now supports additional controls for encryption checks for key AWS services such as AWS AppSync and Amazon Elastic File System (Amazon EFS). For the full list of recently released controls and the AWS Regions in which they are available, visit the Security Hub user guide.

To use the new controls, turn on the standard they belong to. Security Hub will then start evaluating your security posture and monitoring your resources for the relevant security controls. You can use central configuration to do so across all your organization accounts and linked Regions with a single action. If you are already using the relevant standards and have Security Hub configured to automatically enable new controls, these new controls will run without taking any additional action.

To get started, consult the following list of resources:

Read more


AWS Incident Detection and Response now available in 16 additional AWS regions

Starting today, AWS Incident Detection and Response is available in 16 additional AWS regions. This service provides AWS Enterprise Support customers with proactive engagement and incident management, aimed at minimizing the risk of failures and accelerating the recovery of your critical workloads. AWS experts will assess your workloads for resilience and observability, and create customized runbooks for incident management. AWS Incident Management Engineers (IMEs) are on call 24/7 to detect incidents and engage you within 5 minutes of an alarm to offer guidance for mitigation and recovery.

With this release, AWS Incident Detection and Response is now available in the following AWS regions: Africa (Cape Town), Asia Pacific (Seoul), Asia Pacific (Osaka), Middle East (Bahrain), Asia Pacific (Hong Kong), Middle East (UAE), Asia Pacific (Jakarta), Asia Pacific (Hyderabad), Asia Pacific (Melbourne), Europe (Zurich), Europe (Spain), Canada West (Calgary), Israel (Tel Aviv), Europe (Milan), North America (Calgary), and Asia Pacific (Malaysia).

Visit the eligible AWS regions to see the full list of all supported regions. Visit the AWS Incident Detection and Response product page to get started.
 

Read more


serverless

Amazon OpenSearch Service zero-ETL integration with Amazon Security Lake

Amazon OpenSearch Service now offers a zero-ETL integration with Amazon Security Lake, enabling you to query and analyze security data in-place directly through OpenSearch. This integration allows you to efficiently explore voluminous data sources that were previously cost-prohibitive to analyze, helping you streamline security investigations and obtain comprehensive visibility of your security landscape. By offering the flexibility to selectively ingest data and eliminating the need to manage complex data pipelines, you can now focus on effective security operations while potentially lowering your analytics costs.

Using the powerful analytics and visualization capabilities in OpenSearch Service, you can perform deeper investigations, enhance threat hunting, and proactively monitor your security posture. Pre-built queries and dashboards using the Open Cybersecurity Schema Framework (OCSF) can further accelerate your analysis. The built-in query accelerator boosts performance and enables fast-loading dashboards, enhancing your overall experience. This integration empowers you to accelerate investigations, uncover insights from previously inaccessible data sources, and optimize analytics efficiency and costs, all with minimal data migration.

OpenSearch Service zero-ETL integration with Security Lake is now generally available in 13 regions globally: Asia Pacific (Mumbai), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Europe (Frankfurt), Europe (Ireland), Europe (London), US East (Ohio), US East (N. Virginia), US West (Oregon), South America (São Paulo), Europe (Paris), and Canada (Central).

To learn more on using this capability, see the OpenSearch Service Integrations page and the OpenSearch Service Developer Guide. To learn more about how to configure and share Security Lake, see the Get Started Guide.
 

Read more


AWS Lambda announces Provisioned Mode for Kafka event source mappings (ESMs)

AWS Lambda announces Provisioned Mode for event source mappings (ESMs) that subscribe to Apache Kafka event sources, a feature that allows you to optimize the throughput of your Kafka ESM by provisioning event polling resources that remain ready to handle sudden spikes in traffic. Provisioned Mode helps you build highly responsive and scalable event-driven Kafka applications with stringent performance requirements.

Customers building streaming data applications often use Kafka as an event source for Lambda functions, and use Lambda's fully-managed MSK ESM or self-managed Kafka ESM, which automatically scale polling resources in response to events. However, for event-driven Kafka applications that need to handle unpredictable bursts of traffic, the lack of control over ESM throughput can lead to delays in your users’ experience. Provisioned Mode for Kafka ESM allows you to fine-tune the throughput of the ESM by provisioning and auto-scaling between a minimum and maximum number of polling resources called event pollers, and is ideal for real-time applications with stringent performance requirements.

This feature is generally available in all AWS Commercial Regions where AWS Lambda is available, except Israel (Tel Aviv), Asia Pacific (Malaysia), and Canada West (Calgary).

You can activate Provisioned Mode for MSK ESM or self-managed Kafka ESM by configuring a minimum and maximum number of event pollers in the ESM API, AWS Console, AWS CLI, AWS SDK, AWS CloudFormation, and AWS SAM. You pay for the usage of event pollers, measured in a billing unit called Event Poller Units (EPUs). To learn more, read the Lambda ESM documentation and AWS Lambda pricing.
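
For example, this Python (boto3) sketch turns on Provisioned Mode for an existing Kafka event source mapping; the UUID is a placeholder.

```python
import boto3

lam = boto3.client("lambda")

lam.update_event_source_mapping(
    UUID="91eaeb7e-c976-93b2-12f5-1111EXAMPLE",  # placeholder ESM ID
    # Keep at least 2 event pollers warm and allow scaling up to 10.
    ProvisionedPollerConfig={
        "MinimumPollers": 2,
        "MaximumPollers": 10,
    },
)
```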

Read more


AWS Lambda adds support for Node.js 22

AWS Lambda now supports creating serverless applications using Node.js 22. Developers can use Node.js 22 as both a managed runtime and a container base image, and AWS will automatically apply updates to the managed runtime and base image as they become available.

Node.js 22 is the latest long-term support (LTS) release of Node.js and is expected to be supported for security and bug fixes until April 2027. It provides access to the latest Node.js language features, such as the ‘fetch’ API. You can use Node.js 22 with Lambda@Edge in supported Regions, allowing you to customize low-latency content delivered through Amazon CloudFront. Powertools for AWS Lambda (TypeScript), a developer toolkit to implement serverless best practices and increase developer velocity, also supports Node.js 22.

The Node.js 22 runtime is available in all Regions where Lambda is available, including China and the AWS GovCloud (US) Regions.

You can use the full range of AWS deployment tools, including the Lambda console, AWS CLI, AWS Serverless Application Model (AWS SAM), AWS CDK, and AWS CloudFormation to deploy and manage serverless applications written in Node.js 22. For more information, including guidance on upgrading existing Lambda functions, see our blog post. For more information about AWS Lambda, visit our product page.

Read more


Announcing new Amazon CloudWatch Metrics for AWS Lambda Event Source Mappings (ESMs)

AWS Lambda announces new Amazon CloudWatch metrics for Lambda Event Source Mappings (ESMs), which provide customers visibility into the processing state of events read by ESMs that subscribe to Amazon SQS, Amazon Kinesis, and Amazon DynamoDB event sources. This enables customers to easily monitor issues or delays in event processing and take corrective actions.

Customers use ESMs to read events from event sources and invoke Lambda functions. Lack of visibility into the processing state of events ingested by ESMs delays diagnosis of event processing issues. Customers can now use the following CloudWatch metrics to monitor the processing state of events ingested by ESMs: PolledEventCount, InvokedEventCount, FilteredOutEventCount, FailedInvokeEventCount, DeletedEventCount, DroppedEventCount, and OnFailureDestinationDeliveredEventCount. PolledEventCount counts the events read by an ESM, and InvokedEventCount counts the events that invoked a Lambda function. FilteredOutEventCount counts the events filtered out by an ESM. FailedInvokeEventCount counts the events that attempted to invoke a Lambda function but encountered a failure. DeletedEventCount counts the events that have been deleted from the SQS queue by Lambda upon successful processing. DroppedEventCount counts the events dropped due to event expiry or exhaustion of retry attempts. OnFailureDestinationDeliveredEventCount counts the events successfully sent to an on-failure destination.

This feature is generally available in all AWS Commercial Regions where AWS Lambda is available.

You can enable ESM metrics using Lambda event source mapping API, AWS Console, AWS CLI, AWS SDK, AWS CloudFormation, and AWS SAM. To learn more about these metrics, visit Lambda developer guide. These new metrics are charged at standard CloudWatch pricing for metrics.
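
As a sketch, enabling the new metrics on an existing event source mapping looks like this in Python (boto3); the UUID is a placeholder.

```python
import boto3

lam = boto3.client("lambda")

lam.update_event_source_mapping(
    UUID="91eaeb7e-c976-93b2-12f5-1111EXAMPLE",  # placeholder ESM ID
    # "EventCount" turns on the per-event metrics described above.
    MetricsConfig={"Metrics": ["EventCount"]},
)
```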

Read more


Amazon MQ is now available in the AWS Asia Pacific (Malaysia) region

Amazon MQ is now available in the AWS Asia Pacific (Malaysia) region. With this launch, Amazon MQ is now available in 34 regions.

Amazon MQ is a managed message broker service for Apache ActiveMQ and RabbitMQ that makes it easier to set up and operate message brokers on AWS. Amazon MQ reduces your operational responsibilities by managing the provisioning, setup, and maintenance of message brokers. Because Amazon MQ connects to your current applications with industry-standard APIs and protocols, you can more easily migrate to AWS without having to rewrite or modify your applications.

For more information, please visit the Amazon MQ product page, and see the AWS Region Table for complete regional availability.

Read more


Amazon OpenSearch Serverless now supports point in time (PIT) search

Amazon OpenSearch Serverless has added support for Point in Time (PIT) search, enabling you to run multiple queries against a dataset fixed at a specific moment. This feature allows you to maintain consistent search results even as your data continues to change, making it particularly useful for applications that require deep pagination or need to preserve a stable view of data across multiple queries.

Point in time search supports both forward and backward navigation through search results, ensuring consistency even during ongoing data ingestion. This feature is ideal for e-commerce applications, content management systems, and analytics platforms that require reliable and consistent search capabilities across large datasets.

Point in time search on Amazon OpenSearch Serverless is now available in 15 regions globally: US East (Ohio), US East (N. Virginia), US West (Oregon), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Europe (Frankfurt), Europe (Ireland), Europe (Paris), Europe (London), Asia Pacific (Mumbai), South America (São Paulo), Canada (Central), Asia Pacific (Seoul), and Europe (Zurich). Please refer to the AWS Regional Services List for more information about Amazon OpenSearch Service availability. To learn more about OpenSearch Serverless, see the documentation.

Read more


Amazon OpenSearch Serverless now supports Binary Vector and FP16 cost savings features

We are excited to announce that Amazon OpenSearch Serverless now supports binary vector and FP16 compression, helping reduce costs by lowering memory requirements. These features also lower latency and improve performance, with an acceptable accuracy tradeoff. OpenSearch Serverless is a serverless deployment option for Amazon OpenSearch Service that makes it simple to run search and analytics workloads without the complexities of infrastructure management. OpenSearch Serverless’ compute capacity used for data ingestion, search, and query is measured in OpenSearch Compute Units (OCUs).
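
To make the idea concrete, here is a hedged sketch of an index mapping that stores binary-quantized vectors. The field names and parameters follow OpenSearch's knn_vector conventions and should be verified against the OpenSearch Serverless documentation.

```python
# Mapping for a vector index using binary vectors (1 bit per dimension
# instead of 32-bit floats), which cuts memory and therefore OCU cost.
binary_vector_index = {
    "settings": {"index": {"knn": True}},
    "mappings": {
        "properties": {
            "embedding": {
                "type": "knn_vector",
                "dimension": 1024,      # binary vectors use 1 bit per dimension
                "data_type": "binary",  # assumed parameter per OpenSearch docs
            }
        }
    },
}
# Create the index with this body using any SigV4-signed OpenSearch client.
```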

This support is now available in 17 regions globally: US East (Ohio), US East (N. Virginia), US West (Oregon), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Europe (Frankfurt), Europe (Ireland), Europe (Paris), Europe (London), Asia Pacific (Mumbai), South America (São Paulo), Canada (Central), Asia Pacific (Seoul), Europe (Zurich), AWS GovCloud (US-West), and AWS GovCloud (US-East). Please refer to the AWS Regional Services List for more information about Amazon OpenSearch Service availability. To learn more about OpenSearch Serverless, see the documentation.

Read more


AWS Lambda now supports SnapStart for Python and .NET functions

Starting today, you can use Lambda SnapStart with your functions that use the Python and .NET managed runtimes to deliver startup performance as fast as sub-second. Lambda SnapStart is an opt-in capability that makes it easier for you to build highly responsive and scalable applications without provisioning resources or implementing complex performance optimizations.

For latency sensitive applications that support unpredictable bursts of traffic, high startup latencies—known as cold starts—can cause delays in your users’ experience. Lambda SnapStart can improve startup times by initializing the function’s code ahead of time, taking a snapshot of the initialized execution environment, and caching it. When the function is invoked and subsequently scales up, Lambda SnapStart resumes new execution environments from the cached snapshot instead of initializing them from scratch, significantly improving startup latency. Lambda SnapStart is ideal for applications such as synchronous APIs, interactive microservices, data processing, and ML inference.

Lambda SnapStart for Python and .NET is generally available in the following AWS Regions: US East (Ohio, N. Virginia), US West (Oregon), Asia Pacific (Singapore, Tokyo, Sydney), and Europe (Frankfurt, Ireland, Stockholm).

You can activate SnapStart for new or existing Lambda functions running on Python 3.12 (and newer) and .NET 8 (and newer) using the AWS Lambda API, AWS Management Console, AWS Command Line Interface (AWS CLI), AWS CloudFormation, AWS Serverless Application Model (AWS SAM), AWS SDK, and AWS Cloud Development Kit (AWS CDK). For more information, see the Lambda documentation or the launch blog post. To learn more about pricing for SnapStart on Python and .NET, visit AWS Lambda Pricing.
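
Here is a minimal Python (boto3) sketch of opting an existing Python function into SnapStart; the function name is a placeholder, and SnapStart applies to published versions.

```python
import boto3

lam = boto3.client("lambda")

# Enable SnapStart on the function configuration.
lam.update_function_configuration(
    FunctionName="my-python-function",           # placeholder
    SnapStart={"ApplyOn": "PublishedVersions"},
)

# Wait for the update, then publish a version; Lambda initializes it,
# snapshots the execution environment, and caches the snapshot.
lam.get_waiter("function_updated_v2").wait(FunctionName="my-python-function")
version = lam.publish_version(FunctionName="my-python-function")
print(version["Version"])
```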

Read more


AWS Lambda adds support for Python 3.13

AWS Lambda now supports creating serverless applications using Python 3.13. Developers can use Python 3.13 as both a managed runtime and a container base image, and AWS will automatically apply updates to the managed runtime and base image as they become available.

Python 3.13 is the latest long-term support (LTS) release of Python and is expected to be supported for security and bug fixes until October 2029. This release provides Lambda customers access to the latest Python 3.13 language features. You can use Python 3.13 with Lambda@Edge (in supported Regions), allowing you to customize low-latency content delivered through Amazon CloudFront. Powertools for AWS Lambda (Python), a developer toolkit to implement serverless best practices and increase developer velocity, also supports Python 3.13.

The Python 3.13 runtime is available in all Regions where Lambda is available, including China and the AWS GovCloud (US) Regions.

You can use the full range of AWS deployment tools, including the Lambda console, AWS CLI, AWS Serverless Application Model (AWS SAM), AWS CDK, and AWS CloudFormation to deploy and manage serverless applications written in Python 3.13. For more information, including guidance on upgrading existing Lambda functions, read our blog post. For more information about AWS Lambda, visit the product page.
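
For instance, here is a minimal Python (boto3) sketch that deploys a hello-world function on the new runtime; the execution role ARN is a placeholder.

```python
import io
import zipfile
import boto3

# Build a deployment package in memory.
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as zf:
    zf.writestr("handler.py", "def handler(event, context):\n    return 'hello'\n")
buf.seek(0)

lam = boto3.client("lambda")
lam.create_function(
    FunctionName="hello-py313",
    Runtime="python3.13",                               # new managed runtime
    Role="arn:aws:iam::123456789012:role/lambda-exec",  # placeholder role
    Handler="handler.handler",
    Code={"ZipFile": buf.read()},
)
```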

Read more


Amazon Kinesis Data Streams launches CloudFormation support for resource policies

Amazon Kinesis Data Streams now provides AWS CloudFormation support for managing resource policies for data streams and consumers. You can use CloudFormation templates to programmatically deploy resource policies in a secure, efficient, and repeatable way, reducing the risk of human error from manual configuration.

Kinesis Data Streams allows users to capture, process, and store data streams in real time at any scale. CloudFormation uses stacks to manage AWS resources, allowing you to track changes, apply updates automatically, and easily roll back changes when needed.
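
As an illustration, the following Python (boto3) sketch deploys a stack containing a stream resource policy that grants a second account read access. The AWS::Kinesis::ResourcePolicy property names follow the announcement and should be confirmed in the CloudFormation documentation; ARNs and account IDs are placeholders.

```python
import json
import boto3

template = {
    "AWSTemplateFormatVersion": "2010-09-09",
    "Resources": {
        "StreamSharingPolicy": {
            "Type": "AWS::Kinesis::ResourcePolicy",
            "Properties": {
                "ResourceArn": "arn:aws:kinesis:us-east-1:111122223333:stream/my-stream",
                "ResourcePolicy": {
                    "Version": "2012-10-17",
                    "Statement": [{
                        "Effect": "Allow",
                        "Principal": {"AWS": "arn:aws:iam::444455556666:root"},
                        "Action": [
                            "kinesis:DescribeStreamSummary",
                            "kinesis:GetShardIterator",
                            "kinesis:GetRecords",
                        ],
                        "Resource": "arn:aws:kinesis:us-east-1:111122223333:stream/my-stream",
                    }],
                },
            },
        }
    },
}

cfn = boto3.client("cloudformation")
cfn.create_stack(
    StackName="kinesis-resource-policy",
    TemplateBody=json.dumps(template),
)
```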

CloudFormation support for resource policies is available in all AWS regions where Amazon Kinesis Data Streams is offered, including the AWS GovCloud (US) Regions and China Regions. To learn more about Amazon Kinesis Data Streams resource policies, visit the developer guide.

Read more


AWS introduces service versioning and deployment history for Amazon ECS services

Amazon Elastic Container Service (Amazon ECS) now allows you to view the service revision and deployment history for your long-running applications deployed as Amazon ECS services. This capability makes it easier for you to track and view changes to applications deployed using Amazon ECS, monitor ongoing deployments, and debug deployment failures.

Typically, customers deploy long running applications as Amazon ECS services and deploy software updates using a rolling update mechanism where tasks running the old software version are gradually replaced by tasks running the new version. With today’s release, you can now view the deployment history for your Amazon ECS services on the AWS Management Console as well as using the new listServiceDeployments API. You can look at the details of a specific deployment, including whether it succeeded, when it started and completed, and service revision information before and after the deployment using the Console and describeServiceDeployment API. Furthermore, you can look at the immutable configuration for a specific service version, including the task definition, container image digests, load balancer, service connect configuration, etc. using the Console and describeServiceRevision API.
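
Below is a hedged Python (boto3) sketch of browsing that history; the operation names and response fields are inferred from the announcement (the SDK may pluralize them), and the cluster and service names are placeholders.

```python
import boto3

ecs = boto3.client("ecs")

# List recent deployments for one service.
deployments = ecs.list_service_deployments(
    cluster="prod-cluster",
    service="web-service",
)

arns = [d["serviceDeploymentArn"] for d in deployments.get("serviceDeployments", [])]

# Inspect the most recent deployment's outcome and timing.
if arns:
    detail = ecs.describe_service_deployments(serviceDeploymentArns=arns[:1])
    for dep in detail.get("serviceDeployments", []):
        print(dep.get("status"), dep.get("startedAt"), dep.get("finishedAt"))
```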

You can view the service version and deployment history for your services deployed on or after October 25, 2024 using the AWS Management Console, API, SDK, and CLI in all AWS Regions. To learn more, visit this blog post and documentation.

Read more


AWS Lambda announces JSON logging support for .NET managed runtime

AWS Lambda now enables you to natively capture application logs in JSON structured format for Lambda functions that use .NET Lambda managed runtime. JSON format allows logs to be structured as a series of key-value pairs, enabling you to quickly search, filter, and analyze large volumes of logs to easily troubleshoot failures and understand the performance of your Lambda functions.

We previously announced support for natively capturing application logs (logs generated by your Lambda function code) and system logs (logs generated by the Lambda service while executing your function code) in JSON structured format for Python, Node.js, and Java managed runtimes. However, for .NET managed runtime, you could only natively capture system logs in JSON structured format. To capture application logs in JSON structured format, you had to manually configure logging libraries. This launch enables you to capture application logs in JSON structured format for functions that use .NET managed runtime without having to use your own logging libraries.

To get started, you can set log format to JSON for Lambda functions that use any .NET managed runtime using Lambda API, Lambda console, AWS CLI, AWS Serverless Application Model (SAM), and AWS CloudFormation. To learn more, visit the launch blog post. You can learn about Lambda logging in the Lambda logging controls blog post or Lambda Developer Guide.
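
For example, switching an existing .NET function to JSON logs takes one call in Python (boto3); the function name is a placeholder, and the log-level fields are optional.

```python
import boto3

lam = boto3.client("lambda")

lam.update_function_configuration(
    FunctionName="my-dotnet-function",  # placeholder
    LoggingConfig={
        "LogFormat": "JSON",
        "ApplicationLogLevel": "INFO",  # filter application logs
        "SystemLogLevel": "WARN",       # filter Lambda system logs
    },
)
```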

JSON structured logging support for .NET is now available in all AWS Regions where Lambda is available, except for China and GovCloud Regions, at no additional cost. For more information, see the AWS Region table.

Read more


New Kinesis Client Library 3.0 reduces stream processing compute costs by up to 33%

You can now reduce compute costs to process streaming data with Kinesis Client Library (KCL) 3.0 by up to 33% compared to previous KCL versions. KCL 3.0 introduces an enhanced load balancing algorithm that continuously monitors resource utilization of the stream processing workers and automatically redistributes the load from over-utilized workers to other underutilized workers. This ensures even CPU utilization across workers and removes the need to over-provision the stream processing compute workers, which reduces cost. Additionally, KCL 3.0 is built with the AWS SDK for Java 2.x for improved performance and security features, fully removing the dependency on the AWS SDK for Java 1.x.

KCL is an open-source library that simplifies the development of stream processing applications with Amazon Kinesis Data Streams. It manages complex tasks associated with distributed computing such as load balancing, fault tolerance, and service coordination, allowing you to solely focus on your core business logic. You can upgrade your stream processing application running on KCL 2.x by simply replacing the current library with KCL 3.0, without any changes in your application code. KCL 3.0 supports stream processing applications running on Amazon EC2 instances or containers such as Amazon ECS, Amazon EKS, or AWS Fargate.

KCL 3.0 is available with Amazon Kinesis Data Streams in all AWS regions. To learn more, see the Amazon Kinesis Data Streams developer guide, KCL 3.0 release notes, and launch blog.

Read more


storage

Amazon S3 Access Grants now integrate with AWS Glue

Amazon S3 Access Grants now integrate with AWS Glue for analytics, machine learning (ML), and application development workloads in AWS. S3 Access Grants map identities from your Identity Provider (IdP), such as Entra ID and Okta, or AWS Identity and Access Management (IAM) principals, to datasets stored in Amazon S3. This integration gives you the ability to manage S3 permissions for end users running jobs with Glue 5.0 or later, without the need to write and maintain bucket policies or individual IAM roles.

AWS Glue provides a data integration service that simplifies data exploration, preparation, and integration from multiple sources, including S3. Using S3 Access Grants, you can grant permissions to buckets or prefixes in S3 to users and groups in an existing corporate directory, or to IAM users and roles. When end users in the appropriate user groups access S3 using Glue ETL for Apache Spark, they will then automatically have the necessary permissions to read and write data. S3 Access Grants also automatically update S3 permissions as users are added and removed from user groups in the IdP.
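
For illustration, the Python (boto3) sketch below grants a directory group read/write access to a prefix, assuming S3 Access Grants is already configured with a registered location and an IAM Identity Center instance; all IDs are placeholders.

```python
import boto3

s3control = boto3.client("s3control")

s3control.create_access_grant(
    AccountId="111122223333",
    AccessGrantsLocationId="a1b2c3d4-5678-90ab-cdef-EXAMPLE11111",
    AccessGrantsLocationConfiguration={"S3SubPrefix": "analytics/*"},
    # Grant an IdP directory group access; Glue 5.0 jobs run by members
    # of this group then obtain S3 credentials automatically.
    Grantee={
        "GranteeType": "DIRECTORY_GROUP",
        "GranteeIdentifier": "c1e2d3c4-5678-90ab-cdef-EXAMPLE22222",
    },
    Permission="READWRITE",
)
```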

Amazon S3 Access Grants support is available when using AWS Glue 5.0 and later, and is available in all commercial AWS Regions where AWS Glue 5.0 and AWS IAM Identity Center are available. For pricing details, visit Amazon S3 pricing and AWS Glue pricing. To learn more about S3 Access Grants, refer to the S3 User Guide.
 

Read more


Announcing Amazon S3 Metadata (Preview) – Easiest and fastest way to manage your metadata

Amazon S3 Metadata is the easiest and fastest way to help you instantly discover and understand your S3 data with automated, easily queried metadata that updates in near real-time. This helps you to curate, identify, and use your S3 data for business analytics, real-time inference applications, and more. S3 Metadata supports object metadata, which includes system-defined details like size and the source of the object, and custom metadata, which allows you to use tags to annotate your objects with information like product SKU, transaction ID, or content rating.

S3 Metadata is designed to automatically capture metadata from objects as they are uploaded into a bucket, and to make that metadata queryable in a read-only table. As data in your bucket changes, S3 Metadata updates the table within minutes to reflect the latest changes. These metadata tables are stored in S3 Tables, the new S3 storage offering optimized for tabular data. S3 Tables integration with AWS Glue Data Catalog is in preview, allowing you to stream, query, and visualize data, including S3 Metadata tables, using AWS Analytics services such as Amazon Data Firehose, Athena, Redshift, EMR, and QuickSight. Additionally, S3 Metadata integrates with Amazon Bedrock, allowing for the annotation of AI-generated videos with metadata that specifies their AI origin, creation timestamp, and the specific model used for their generation.

Amazon S3 Metadata is currently available in preview in the US East (N. Virginia), US East (Ohio), and US West (Oregon) Regions, and coming soon to additional Regions. For pricing details, visit the S3 pricing page. To learn more, visit the product page, documentation, and AWS News Blog.

Read more


Announcing Amazon S3 Tables – Fully managed Apache Iceberg tables optimized for analytics workloads

Amazon S3 Tables deliver the first cloud object store with built-in Apache Iceberg support, and the easiest way to store tabular data at scale. S3 Tables are specifically optimized for analytics workloads, resulting in up to 3x faster query throughput and up to 10x higher transactions per second compared to self-managed tables. With S3 Tables support for the Apache Iceberg standard, your tabular data can be easily queried by popular AWS and third-party query engines. Additionally, S3 Tables are designed to perform continual table maintenance to automatically optimize query efficiency and storage cost over time, even as your data lake scales and evolves. S3 Tables integration with AWS Glue Data Catalog is in preview, allowing you to stream, query, and visualize data—including S3 Metadata tables—using AWS Analytics services such as Amazon Data Firehose, Athena, Redshift, EMR, and QuickSight.

S3 Tables introduce table buckets, a new bucket type that is purpose-built to store tabular data. With table buckets, you can quickly create tables and set up table-level permissions to manage access to your data lake. You can then load and query data in your tables with standard SQL, and take advantage of Apache Iceberg’s advanced analytics capabilities such as row-level transactions, queryable snapshots, schema evolution, and more. Table buckets also provide policy-driven table maintenance, helping you to automate operational tasks such as compaction, snapshot management, and unreferenced file removal.
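
A hedged Python (boto3) sketch of the basic workflow, creating a table bucket, a namespace, and an Iceberg table; the s3tables operation names follow the launch documentation, and all names are placeholders.

```python
import boto3

s3tables = boto3.client("s3tables")

# A table bucket is the new bucket type purpose-built for tabular data.
bucket = s3tables.create_table_bucket(name="analytics-tables")
bucket_arn = bucket["arn"]

# Namespaces group related tables, similar to database schemas.
s3tables.create_namespace(tableBucketARN=bucket_arn, namespace=["sales"])

s3tables.create_table(
    tableBucketARN=bucket_arn,
    namespace="sales",
    name="daily_orders",
    format="ICEBERG",
)
```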

Amazon S3 Tables are now available in the US East (N. Virginia), US East (Ohio), and US West (Oregon) Regions, and coming soon to additional Regions. For pricing details, visit the S3 pricing page. To learn more, visit the product page, documentation, and AWS News Blog.

Read more


Announcing Amazon EC2 I8g instances

AWS is announcing the general availability of Amazon Elastic Compute Cloud (Amazon EC2) storage optimized I8g instances. I8g instances offer the best performance in Amazon EC2 for storage-intensive workloads. I8g instances are powered by AWS Graviton4 processors that deliver up to 60% better compute performance compared to previous generation I4g instances. I8g instances use the latest third-generation AWS Nitro SSDs, local NVMe storage that delivers up to 65% better real-time storage performance per TB, up to 50% lower storage I/O latency, and up to 60% lower storage I/O latency variability. These instances are built on the AWS Nitro System, which offloads CPU virtualization, storage, and networking functions to dedicated hardware and software, enhancing the performance and security of your workloads.

I8g instances offer instance sizes up to 24xlarge, 768 GiB of memory, and 22.5 TB of instance storage. They are ideal for real-time applications like relational databases, non-relational databases, streaming databases, search queries, and data analytics.

I8g instances are available in the following AWS Regions: US East (N. Virginia) and US West (Oregon).

To learn more, see Amazon EC2 I8g instances. To explore how to migrate your workloads to Graviton-based instances, see AWS Graviton Fast Start program and Porting Advisor for Graviton. To get started, see the AWS Management Console, AWS Command Line Interface (AWS CLI), and AWS SDKs.

Read more


Amazon S3 adds new default data integrity protections

Amazon S3 updates the default behavior of object upload requests with new data integrity protections that build upon S3’s existing durability posture. The latest AWS SDKs now automatically calculate CRC-based checksums for uploads as data is transmitted over the network. S3 independently verifies these checksums and accepts objects after confirming that data integrity was maintained in transit over the public internet. Additionally, S3 now stores a CRC-based whole-object checksum in object metadata, even for multipart uploads, which helps you to verify the integrity of an object stored in S3 at any time.

S3 has always validated the integrity of object uploads from the S3 API to storage by calculating MD5 checksums and allowed customers to provide their own pre-calculated MD5 checksums for integrity validation. S3 also supports five additional checksum algorithms, CRC64NVME, CRC32, CRC32C, SHA-1, and SHA-256, for integrity validations on upload and download. Using checksums for data validation is a best practice for data durability, and this new default behavior adds additional data integrity protections with no changes to your applications and at no additional cost.
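
A minimal Python (boto3) sketch of the behavior: with a current SDK, put_object computes a CRC-based checksum client-side and S3 verifies it on receipt; you can also pin a specific algorithm and read the stored checksum back later. The bucket and key are placeholders.

```python
import boto3

s3 = boto3.client("s3")

s3.put_object(
    Bucket="my-bucket",          # placeholder
    Key="reports/2024.csv",
    Body=b"col1,col2\n1,2\n",
    ChecksumAlgorithm="CRC32",   # optional; recent SDKs default to CRC-based
)

# Re-verify integrity at rest by reading the stored checksum.
head = s3.head_object(
    Bucket="my-bucket",
    Key="reports/2024.csv",
    ChecksumMode="ENABLED",
)
print(head.get("ChecksumCRC32"))
```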

Default checksum protections are rolling out across all AWS Regions in the next few weeks. To get started, you can use the AWS Management Console or the latest AWS SDKs to upload objects. To learn more about checksums in S3, visit the AWS News Blog and the S3 User Guide.

Read more


Storage Browser for Amazon S3 is now generally available

Amazon S3 is announcing the general availability of Storage Browser for S3, an open source component that you can add to your web applications to provide your end users with a simple interface for data stored in S3. With Storage Browser for S3, you can provide authorized end users, such as customers, partners, and employees, with access to easily browse, download, and upload data in S3 directly from your own applications. Storage Browser for S3 is available in the AWS Amplify React and JavaScript client libraries.

With the general availability of Storage Browser for S3, your end users can now search for their data based on file name and can copy and delete data they have access to. Additionally, Storage Browser for S3 now automatically calculates checksums of the data your end users upload and blocks requests that do not pass these durability checks.

We welcome your contributions and feedback on our roadmap, which outlines the plan for adding new capabilities to Storage Browser for S3. Storage Browser for S3 is backed by AWS Support, which means customers with AWS Business and Enterprise Support plans get 24/7 access to cloud support engineers. To learn more and get started, visit the AWS News Blog and the UI documentation.
 

Read more


Introducing Amazon EC2 next generation high density Storage Optimized I7ie instances

Amazon Web Services is announcing general availability for next generation high density Storage Optimized I7ie instances. Designed for large storage I/O intensive workloads, I7ie instances are powered by 5th generation Intel Xeon Scalable processors with an all-core turbo frequency of 3.2 GHz, offering up to 40% better compute performance and 20% better price performance over existing I3en instances. I7ie instances have the highest local NVMe storage density in the cloud for storage optimized instances and offer up to twice as many vCPUs and memory compared to prior generation instances. Powered by 3rd generation AWS Nitro SSDs, I7ie instances deliver up to 65% better real-time storage performance, up to 50% lower storage I/O latency, and 65% lower storage I/O latency variability compared to I3en instances.

I7ie are high density storage optimized instances, ideal for workloads requiring fast local storage with high random read/write performance and consistently very low latency for accessing large data sets. I7ie instances also deliver 40% better compute performance to run more complex queries without increasing the storage density per vCPU. Additionally, the 16 KB torn write prevention feature enables customers to eliminate performance bottlenecks.

I7ie instances deliver up to 100Gbps of network bandwidth and 60Gbps of bandwidth for Amazon Elastic Block Store (EBS).

I7ie instances are available in the US East (N. Virginia) AWS Region today. Customers can use these instances with On Demand and Savings Plan purchase options. To learn more, visit the I7ie instances page.

Read more


Announcing Amazon FSx Intelligent-Tiering, a new storage class for FSx

Today, AWS announces the general availability of Amazon FSx Intelligent-Tiering, a new storage class for Amazon FSx that costs up to 85% less than the FSx SSD storage class and up to 20% less than traditional HDD-based NAS storage on premises, and that brings full elasticity and intelligent tiering to network-attached storage (NAS). The new storage class is available today on Amazon FSx for OpenZFS.

Using Amazon FSx, customers can launch and run fully managed cloud file systems that have familiar NAS capabilities such as point-in-time snapshots, data clones, and user quotas. Before today, customers have been moving NAS data sets for mission-critical and performance-intensive workloads to FSx for OpenZFS, using the existing SSD storage class for predictable high performance. With the new FSx Intelligent-Tiering storage class, customers can now bring to FSx for OpenZFS a broad range of general-purpose data sets, including those with a large proportion of infrequently accessed data stored on low-cost HDD on premises. With FSx Intelligent-Tiering, customers no longer need to provision or manage storage, and they get automatic storage cost optimization as data access patterns change. There are no upfront costs or commitments to use the storage class, and customers pay only for the resources used.

FSx Intelligent-Tiering can be used when creating a new FSx for OpenZFS file system in the following AWS Regions: US East (N. Virginia, Ohio), US West (Oregon), Canada (Central), Europe (Frankfurt, Ireland), and Asia Pacific (Mumbai, Singapore, Sydney, Tokyo).

For more information about this feature, visit the FSx for OpenZFS documentation page.

Read more


Announcing AWS Transfer Family web apps

AWS Transfer Family web apps are a new resource that you can use to create a simple interface for accessing your data in Amazon S3 through a web browser. With Transfer Family web apps, you can provide your workforce with a fully managed, branded, and secure portal for your end users to browse, upload, and download data in S3.

Transfer Family offers fully managed file transfers over SFTP, FTPS, FTP, and AS2, enabling seamless workload migrations with no need to change your third-party clients or their configurations. Now, you can also enable browser-based transfers for non-technical users in your organization through a user-friendly interface. Transfer Family web apps are integrated with AWS IAM Identity Center and S3 Access Grants, enabling fine-grained access controls that map corporate identities in your existing directories directly to S3 datasets. With a few clicks in the Transfer Family console, you can generate a shareable URL for your web app. Then, your authenticated users can start accessing data you authorize them to access through their web browsers.

Transfer Family web apps are available in select AWS Regions. You can get started with Transfer Family web apps in the Transfer Family console. For pricing, visit the Transfer Family pricing page. To learn more, read the AWS News Blog or visit the Transfer Family User Guide.
 

Read more


Amazon S3 launches storage classes for AWS Dedicated Local Zones

You can now use the Amazon S3 Express One Zone and S3 One Zone-Infrequent Access storage classes in AWS Dedicated Local Zones. Dedicated Local Zones are a type of AWS infrastructure that is fully managed by AWS, built for exclusive use by you or your community, and placed in a location or data center specified by you to help you comply with regulatory requirements.

In Dedicated Local Zones, these storage classes are purpose-built to store data in a specific data perimeter, helping to support your data isolation and data residency use cases. To learn more, visit the S3 User Guide.

Read more


Amazon FSx for Lustre now supports Elastic Fabric Adapter and NVIDIA GPUDirect Storage

Amazon FSx for Lustre, a service that provides high-performance, cost-effective, and scalable file storage for compute workloads, now supports Elastic Fabric Adapter (EFA) and NVIDIA GPUDirect Storage (GDS). With this launch, Amazon FSx for Lustre now provides the fastest storage performance for GPU instances in the cloud, delivering up to 12x higher throughput per client instance (1200 Gbps) compared to previous FSx for Lustre systems, so you can complete machine learning training jobs faster and reduce workload costs.

EFA improves workload performance by using the AWS Scalable Reliable Datagram (SRD) protocol to increase network throughput utilization and by bypassing the operating system during data transfer. For applications powered by high-performance computing instances such as Trn1 and Hpc7a, you can use EFA to achieve higher throughput per client instance. GDS support builds on EFA to further enhance performance by enabling direct data transfer between the file system and the GPU memory. This direct path eliminates memory copies and CPU involvement in data transfer operations. With the combination of EFA and GDS support, applications using P5 GPU instances and NVIDIA Compute Unified Device Architecture (CUDA) can achieve up to 12x higher throughput (up to 1200 Gbps) per client instance.
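
As a sketch of how this might look when provisioning programmatically, the boto3 call below assumes EFA is enabled at creation time via an EfaEnabled flag on the Lustre configuration (GDS then works from CUDA-enabled clients without additional file system settings); the capacity, subnet, and throughput values are placeholders.

```python
import boto3

fsx = boto3.client("fsx")

# Sketch only: EfaEnabled is the assumed flag for this launch; confirm it
# against the CreateFileSystem Lustre configuration reference.
response = fsx.create_file_system(
    FileSystemType="LUSTRE",
    StorageCapacity=2400,  # GiB, placeholder
    SubnetIds=["subnet-0123456789abcdef0"],  # placeholder
    LustreConfiguration={
        "DeploymentType": "PERSISTENT_2",
        "PerUnitStorageThroughput": 500,  # MB/s per TiB, placeholder
        "EfaEnabled": True,
    },
)
print(response["FileSystem"]["FileSystemId"])
```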

EFA and GDS support is available at no additional cost on new FSx for Lustre Persistent-2 file systems in all commercial AWS Regions where Persistent-2 file systems are available. For more information about this new feature, see the Amazon FSx for Lustre documentation and the AWS News Blog, Amazon FSx for Lustre increases throughput to GPU instances by up to 12x.

Read more


Amazon EBS announces Time-based Copy for EBS Snapshots

Today, Amazon Elastic Block Store (Amazon EBS), a high-performance block storage service, announces the general availability of Time-based Copy. This new feature helps you meet your business and compliance requirements by ensuring that your EBS Snapshots are copied within and across AWS Regions within a specified timeframe.

Customers use EBS Snapshots to back up their EBS volumes and copy them across multiple AWS Regions and accounts for disaster recovery, data migration, and compliance purposes. Time-based Copy gives you predictability when copying your snapshots across Regions. With this feature, you can specify a desired completion duration, ranging from 15 minutes to 48 hours, for individual copy requests, ensuring that your EBS Snapshots meet their duration requirements or Recovery Point Objectives (RPOs). You can also monitor your copy operations via EventBridge and the new SnapshotCopyBytesTransferred CloudWatch metric, available by default at a 1-minute frequency at no additional charge.
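
As a sketch of an individual copy request with boto3, assuming the duration is expressed through a CompletionDurationMinutes parameter (the snapshot ID and Regions are placeholders):

```python
import boto3

# Time-based copies are requested from the destination Region.
ec2 = boto3.client("ec2", region_name="us-west-2")

# Sketch only: CompletionDurationMinutes is assumed to accept a value
# between 15 minutes and 48 hours (2880 minutes).
response = ec2.copy_snapshot(
    SourceRegion="us-east-1",
    SourceSnapshotId="snap-0123456789abcdef0",  # placeholder
    CompletionDurationMinutes=60,  # copy must finish within 1 hour
    Description="Time-based cross-Region copy for DR",
)
print(response["SnapshotId"])
```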

Amazon EBS Time-based Copy is available in all AWS commercial Regions and the AWS GovCloud (US) Regions, through the AWS Console, AWS Command Line Interface (CLI), and AWS SDKs. For pricing information, please visit the EBS pricing page. To learn more, see the technical documentation for Time-based Copy for Snapshots.
 

Read more


Amazon EFS now supports up to 2.5 million IOPS per file system

Amazon EFS now supports up to 2.5 million read IOPS and up to 500,000 write IOPS per file system, a 10x increase over the previous limits, making it easier to power machine learning (ML) research, multi-tenant SaaS, genomics, and other data-intensive workloads on AWS.

Amazon EFS provides serverless, fully elastic file storage that makes it simple to set up and run file workloads on AWS. With this 10x increase, applications that demand millions of IOPS and tens of GiB per second of throughput, such as analytics user shares supporting hundreds of data scientists, multi-tenant SaaS applications supporting thousands of customers, and distributed applications processing petabytes of genomics data, can now scale to the performance levels they require.

The increased IOPS limits are available for all new EFS General Purpose file systems using Elastic Throughput mode in all AWS commercial Regions, except the AWS China Regions. For new file systems, you can request an IOPS limit increase in the Amazon EFS Service Quotas console. To learn more, see the Amazon EFS Documentation, or create a file system using the Amazon EFS Console, API, or AWS CLI.
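
The higher limits apply to General Purpose file systems using Elastic Throughput; as a quick sketch, such a file system can be created with boto3 like this (the tag value is a placeholder):

```python
import boto3

efs = boto3.client("efs")

# Create a General Purpose file system with Elastic Throughput, the
# configuration the increased IOPS limits apply to.
response = efs.create_file_system(
    PerformanceMode="generalPurpose",
    ThroughputMode="elastic",
    Encrypted=True,
    Tags=[{"Key": "Name", "Value": "high-iops-workload"}],
)
print(response["FileSystemId"])
```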

Read more


Amazon S3 now supports enforcement of conditional write operations for S3 general purpose buckets

Amazon S3 now supports enforcement of conditional write operations for S3 general purpose buckets using bucket policies. With enforcement of conditional writes, you can now mandate that S3 check the existence of an object before creating it in your bucket. Similarly, you can also mandate that S3 check the state of the object’s content before updating it in your bucket. This helps you to simplify distributed applications by preventing unintentional data overwrites, especially in high-concurrency, multi-writer scenarios.

To enforce conditional write operations, you can now use s3:if-none-match or s3:if-match condition keys to write a bucket policy that mandates the use of HTTP if-none-match or HTTP if-match conditional headers in S3 PutObject and CompleteMultipartUpload API requests. With this bucket policy in place, any attempt to write an object to your bucket without the required conditional header will be rejected. You can use this to centrally enforce the use of conditional writes across all the applications that write to your bucket.
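
A minimal sketch of such a policy is below, assuming the documented pattern of testing the s3:if-none-match condition key for null to deny requests that omit the header; the bucket name is a placeholder, and you can extend the action list to cover multipart uploads.

```python
import json
import boto3

s3 = boto3.client("s3")

# Sketch only: deny object creation unless the request carries an
# HTTP if-none-match conditional header. Bucket name is a placeholder.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "RequireConditionalWrites",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:PutObject",
            "Resource": "arn:aws:s3:::amzn-s3-demo-bucket/*",
            "Condition": {"Null": {"s3:if-none-match": "true"}},
        }
    ],
}
s3.put_bucket_policy(Bucket="amzn-s3-demo-bucket", Policy=json.dumps(policy))
```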

You can use bucket policies to enforce conditional writes at no additional charge in all AWS Regions. You can use the AWS SDK, API, or CLI to perform conditional writes. To learn more about conditional writes, visit the S3 User Guide.

Read more


Amazon S3 adds new functionality for conditional writes

Amazon S3 can now perform conditional writes that evaluate if an object is unmodified before updating it. This helps you coordinate simultaneous writes to the same object and prevents multiple concurrent writers from unintentionally overwriting the object without knowing the state of its content. You can use this capability by providing the ETag of an object using S3 PutObject or CompleteMultipartUpload API requests in both S3 general purpose and directory buckets.

Conditional writes simplify how distributed applications with multiple clients concurrently update data across shared datasets. Similar to using the HTTP if-none-match conditional header to check for the existence of an object before creating it, clients can now perform conditional-write checks on an object’s ETag, which reflects changes to the object, by specifying it via the HTTP if-match header in the API request. S3 then evaluates whether the object's ETag matches the value provided in the API request before committing the write, and prevents your clients from overwriting the object until the condition is satisfied. This new conditional header can help improve the efficiency of your large-scale analytics, distributed machine learning, and other highly parallelized workloads by reliably offloading compare-and-swap operations to S3.
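
A minimal boto3 sketch of the read-modify-write pattern follows, assuming boto3 surfaces the if-match header as an IfMatch parameter on put_object; the bucket, key, and "update" step are placeholders for real application logic.

```python
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")
bucket, key = "amzn-s3-demo-bucket", "shared/state.json"  # placeholders

# Read the object and remember the ETag of the version we saw.
obj = s3.get_object(Bucket=bucket, Key=key)
etag = obj["ETag"]
updated = obj["Body"].read() + b"\n"  # stand-in for real update logic

try:
    # Sketch only: IfMatch is assumed to map to the HTTP if-match header.
    # The write succeeds only if the object is still unchanged.
    s3.put_object(Bucket=bucket, Key=key, Body=updated, IfMatch=etag)
except ClientError as err:
    if err.response["Error"]["Code"] == "PreconditionFailed":
        print("Object changed since read; re-read and retry.")
```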

This new conditional-write functionality is available at no additional charge in all AWS Regions. You can use the AWS SDK, API, or CLI to perform conditional writes. To learn more about conditional writes, visit the S3 User Guide.

Read more


Amazon S3 Express One Zone now supports conditional deletes

Amazon S3 Express One Zone, a high-performance S3 storage class for latency-sensitive applications, can now evaluate whether an object is unchanged before deleting it. This conditional delete capability helps you improve data durability and reduce errors from accidental deletions in high-concurrency, multiple-writer scenarios.

Conditional writes simplify how distributed applications with multiple clients concurrently update data across shared datasets, helping to prevent unintentional overwrites. Now, in directory buckets, clients can perform conditional delete checks on an object’s last modified time, size, and ETag using the x-amz-if-match-last-modified-time, x-amz-if-match-size, and HTTP if-match headers, respectively, in the DeleteObject and DeleteObjects APIs. S3 Express One Zone then evaluates whether each of these object attributes matches the value provided in these headers and prevents your clients from deleting the object until the condition is satisfied. You can use these headers together or individually in a delete request to reliably offload object-state evaluation to S3 Express One Zone and efficiently protect your distributed and highly parallelized workloads against unintended deletions.
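
The sketch below assumes boto3 exposes the three headers as IfMatch, IfMatchLastModifiedTime, and IfMatchSize parameters on delete_object; the directory bucket and key are placeholders.

```python
import boto3

s3 = boto3.client("s3")
bucket = "amzn-s3-demo-bucket--usw2-az1--x-s3"  # directory bucket, placeholder
key = "jobs/output.parquet"  # placeholder

# Capture the state of the object we believe we are deleting.
head = s3.head_object(Bucket=bucket, Key=key)

# Sketch only: parameter names are assumed mappings of the headers above.
# The delete succeeds only if the object is still in this exact state.
s3.delete_object(
    Bucket=bucket,
    Key=key,
    IfMatch=head["ETag"],
    IfMatchLastModifiedTime=head["LastModified"],
    IfMatchSize=head["ContentLength"],
)
```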

S3 Express One Zone support for conditional deletes is available at no additional charge in all AWS Regions where the storage class is available. You can use the S3 API, SDKs, and CLI to perform conditional deletes. To learn more, visit the S3 documentation.
 

Read more


AWS Backup now supports Amazon Timestream in Asia Pacific (Mumbai)

Today, we are announcing the availability of AWS Backup support for Amazon Timestream for LiveAnalytics in the Asia Pacific (Mumbai) Region. AWS Backup is a policy-based, fully managed and cost-effective solution that enables you to centralize and automate data protection of Amazon Timestream for LiveAnalytics along with other AWS services (spanning compute, storage, and databases) and third-party applications. Together with AWS Organizations, AWS Backup enables you to centrally deploy policies to configure, manage, and govern your data protection activity.

With this launch, AWS Backup support for Amazon Timestream for LiveAnalytics is available in the following Regions: US East (N. Virginia, Ohio), US West (Oregon), Asia Pacific (Mumbai, Sydney, Tokyo), and Europe (Frankfurt, Ireland). For more information on regional availability, feature availability, and pricing, see the AWS Backup pricing page and the AWS Backup Feature Availability page.

To learn more about AWS Backup support for Amazon Timestream for LiveAnalytics, visit AWS Backup’s technical documentation. To get started, visit the AWS Backup console.
 

Read more


Amazon S3 Connector for PyTorch now supports Distributed Checkpoint

Amazon S3 Connector for PyTorch now supports Distributed Checkpoint (DCP), improving the time to write checkpoints to Amazon S3. DCP is a PyTorch feature for saving and loading machine learning (ML) models from multiple training processes in parallel. PyTorch is an open source ML framework used to build and train ML models.

Distributed training jobs often run for several hours or even days, and checkpoints are written frequently to improve fault tolerance. For example, jobs training large foundation models often run for several days and generate checkpoints that are hundreds of gigabytes in size. Using DCP with Amazon S3 Connector for PyTorch helps you reduce the time to write these large checkpoints to Amazon S3, keeping your compute resources utilized, ultimately resulting in lower compute cost.
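
As a minimal sketch of saving a checkpoint through the connector, the code below follows the module layout used in the project's published examples (an S3StorageWriter in its dcp module); the Region, URI, and toy model are placeholders, and the argument names should be verified against the connector's documentation.

```python
import torch
import torch.distributed.checkpoint as dcp
from s3torchconnector.dcp import S3StorageWriter  # assumed module layout

# Sketch only: save a (toy) model's state as a distributed checkpoint
# directly to S3. In a real job, every training rank calls dcp.save()
# and the shard writes proceed in parallel.
model = torch.nn.Linear(128, 64)

dcp.save(
    {"model": model.state_dict()},
    storage_writer=S3StorageWriter(
        region="us-east-1",  # placeholder
        path="s3://amzn-s3-demo-bucket/checkpoints/step-1000/",  # placeholder
    ),
)
```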

Amazon S3 Connector for PyTorch is an open source project. To get started, visit the GitHub page.

Read more


Amazon S3 Express One Zone is now available in three additional AWS Regions

The Amazon S3 Express One Zone storage class is now available in three additional AWS Regions: Asia Pacific (Mumbai), Europe (Ireland), and US East (Ohio).

S3 Express One Zone is a high-performance, single-Availability Zone storage class purpose-built to deliver consistent single-digit millisecond data access for your most frequently accessed data and latency-sensitive applications. S3 Express One Zone delivers data access up to 10x faster and request costs up to 50% lower than S3 Standard. It enables workloads such as machine learning training, interactive analytics, and media content creation to achieve single-digit millisecond data access with high durability and availability.

S3 Express One Zone is now generally available in seven AWS Regions. For information on AWS service and AWS Partner integrations with S3 Express One Zone, visit the S3 Express One Zone integrations page. To learn more about S3 Express One Zone, visit the S3 User Guide.

Read more


Amazon S3 Express One Zone now supports the ability to append data to an object

Amazon S3 Express One Zone now supports the ability to append data to an object. For the first time, applications can add data to an existing object in S3.

Applications that continuously receive data over a period of time need the ability to add data to existing objects. For example, log-processing applications continuously add new log entries to the end of existing log files. Similarly, media-broadcasting applications add new video segments to video files as they are transcoded and then immediately stream the video to viewers. Previously, these applications needed to combine data in local storage before copying the final object to S3. Now, applications can directly append new data to existing objects and then immediately read the object, all within S3 Express One Zone.
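
A minimal boto3 sketch of the append pattern follows, assuming the append offset is supplied through a WriteOffsetBytes parameter on put_object; the directory bucket and key are placeholders.

```python
import boto3

s3 = boto3.client("s3")
bucket = "amzn-s3-demo-bucket--usw2-az1--x-s3"  # directory bucket, placeholder
key = "logs/app.log"  # placeholder

# Find the current end of the object, then append starting at that offset.
size = s3.head_object(Bucket=bucket, Key=key)["ContentLength"]

# Sketch only: WriteOffsetBytes is the assumed parameter carrying the
# append offset; writing at the current size appends to the object.
s3.put_object(
    Bucket=bucket,
    Key=key,
    Body=b"new log entries\n",
    WriteOffsetBytes=size,
)
```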

You can append data to objects in S3 Express One Zone in all AWS Regions where the storage class is available. You can get started using the AWS SDK, the AWS CLI, or Mountpoint for Amazon S3 (version 1.12.0 or higher). To learn more, visit the S3 User Guide.

Read more


Amazon S3 Express One Zone now supports S3 Lifecycle expirations

Amazon S3 Express One Zone, a high-performance S3 storage class for latency-sensitive applications, now supports object expiration using S3 Lifecycle. S3 Lifecycle can expire objects based on age to help you automatically optimize storage costs.

Now, you can configure S3 Lifecycle rules for S3 Express One Zone to expire objects on your behalf. You can configure an S3 Lifecycle expiration rule either for your entire bucket or for a subset of objects by filtering by prefix or object size. For example, you can create an S3 Lifecycle rule that expires all objects smaller than 512 KB after 3 days and another rule that expires all objects in a prefix after 10 days. Additionally, S3 Lifecycle logs S3 Express One Zone object expirations in AWS CloudTrail, giving you the ability to monitor, set alerts for, and audit them.
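
The example rules above translate directly into a lifecycle configuration; a sketch with boto3 is below (the bucket name and prefix are placeholders, and 512 KB is expressed in bytes):

```python
import boto3

s3 = boto3.client("s3")

# Two rules mirroring the example above: expire small objects after 3 days,
# and everything under the staging/ prefix (placeholder) after 10 days.
s3.put_bucket_lifecycle_configuration(
    Bucket="amzn-s3-demo-bucket--usw2-az1--x-s3",
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "expire-small-objects",
                "Status": "Enabled",
                "Filter": {"ObjectSizeLessThan": 512 * 1024},
                "Expiration": {"Days": 3},
            },
            {
                "ID": "expire-staging-prefix",
                "Status": "Enabled",
                "Filter": {"Prefix": "staging/"},
                "Expiration": {"Days": 10},
            },
        ]
    },
)
```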

Amazon S3 Express One Zone support for S3 Lifecycle expiration is generally available in all AWS Regions where the storage class is available. You can get started with S3 Lifecycle using the Amazon S3 REST API, AWS Command Line Interface (CLI), or AWS Software Development Kit (SDK) client. To learn more about S3 Lifecycle, visit the S3 User Guide.

Read more


Mountpoint for Amazon S3 now supports a high performance shared cache

You can now use Amazon S3 Express One Zone as a high performance read cache with Mountpoint for Amazon S3. The cache can be shared by multiple compute instances and can elastically scale to any dataset size. Mountpoint for S3 is a file client that translates local file system API calls to REST API calls on S3 objects. With this launch, Mountpoint for S3 can cache data in S3 Express One Zone after it’s read, making subsequent read requests up to 7x faster compared to reading data from S3 Standard.

Previously, Mountpoint for S3 could cache recently accessed data in Amazon EC2 instance storage, EC2 instance memory, or an Amazon EBS volume. This improved performance for repeated read access from the same compute instance for dataset sizes up to the size of the available local storage. Starting today, you can also opt in to caching data in S3 Express One Zone, benefiting applications that repeatedly read a shared dataset across multiple compute instances, without any limits on the total dataset size. Once you opt in, Mountpoint for S3 retains objects with sizes up to one megabyte in S3 Express One Zone. This is ideal for compute-intensive use cases such as machine learning training for computer vision models where applications repeatedly read millions of small images from multiple instances.

Mountpoint for Amazon S3 is an open source project backed by AWS support, which means customers with AWS Business and Enterprise Support plans get 24/7 access to cloud support engineers. To get started, visit the GitHub page and product page.

Read more


AWS Backup for Amazon S3 adds new restore parameter

AWS Backup introduces a new restore parameter for Amazon S3 backups, offering you the ability to choose how many versions of an object to restore.

By default, AWS Backup restores only the latest version of objects from the version stack at any point in time. The new parameter allows you to recover all versions of your data by restoring the entire version stack. You can also recover just the latest version(s) of an object without the overhead of restoring all older versions. With this feature, you have more flexibility to control the data recovery process for Amazon S3 buckets and prefixes from your Amazon S3 backups, tailoring restore jobs to your requirements.

This feature is available in all Regions where AWS Backup for Amazon S3 is available. For more information on Regional availability and pricing, see the AWS Backup pricing page.

To learn more about AWS Backup for Amazon S3, visit the product page and technical documentation. To get started, visit the AWS Backup console.
 

Read more


The next generation of Amazon FSx for Lustre file systems is now available in US West (N. California)

You can now create the next generation Amazon FSx for Lustre file systems in the US West (N. California) AWS Region.

The next generation of Amazon FSx for Lustre file systems is built on AWS Graviton processors and provides higher throughput per terabyte (up to 1 GB/s per terabyte) and lower cost of throughput compared to previous generation file systems. Using the next generation of FSx for Lustre file systems, you can accelerate execution of machine learning, high-performance computing, media & entertainment, and financial simulations workloads while reducing your cost of storage.

For more information, please visit the Amazon FSx for Lustre product page, and see the AWS Region table for complete regional availability information.

Read more


Announcing customized delete protection for Amazon EBS Snapshots and EBS-backed AMIs

Customers can now further customize Recycle Bin rules to exclude EBS Snapshots and EBS-backed Amazon Machine Images (AMIs) based on tags. Customers use Recycle Bin to protect their resources from accidental deletion by retaining them for a period that they specify before permanent deletion. The newly launched feature helps customers save costs by limiting delete protection to critical data, while excluding non-critical data that does not require it.

Creating Region-level retention rules is a simple way to have peace of mind that all EBS Snapshots and EBS-backed AMIs in an AWS Region are protected from accidental deletion by Recycle Bin. However, in some cases, customers have security scanning workflows that create temporary EBS Snapshots that are not used for recovery. Customers may also have backup automation that does not require additional delete protection. The new ability to add resource exclusion tags to Recycle Bin rules can help you reduce storage costs by keeping resources that do not require deletion protection from moving to the Recycle Bin.

This feature is now available in all AWS commercial Regions and AWS GovCloud (US) Regions. Customers can add exclusion tags to their Recycle Bin rules via EC2 Console, API/CLI, or SDK.
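
As a sketch of the SDK path, the boto3 call below creates a Region-level snapshot retention rule; ExcludeResourceTags is the assumed parameter name for the new exclusion feature, and the tag key/value are placeholders.

```python
import boto3

rbin = boto3.client("rbin")

# Sketch only: snapshots tagged scan-temp=true (placeholder tag) bypass
# the Recycle Bin; everything else is retained for 7 days after deletion.
rbin.create_rule(
    ResourceType="EBS_SNAPSHOT",
    RetentionPeriod={"RetentionPeriodValue": 7, "RetentionPeriodUnit": "DAYS"},
    Description="Region-level delete protection, minus temporary scan snapshots",
    ExcludeResourceTags=[
        {"ResourceTagKey": "scan-temp", "ResourceTagValue": "true"}
    ],
)
```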

To learn more about using Recycle Bin with exclusion tags, please refer to the technical documentation.

Read more


AWS Lambda supports Amazon S3 as a failed-event destination for asynchronous and stream event sources

AWS Lambda now supports Amazon S3 as a failed-event destination for asynchronous invocations, and for Amazon Kinesis and Amazon DynamoDB event source mappings (ESMs). This enables customers to route the failed batch of records and function execution results to S3 using a simple configuration, without the overhead of writing and managing additional code.

Customers building event-driven applications with asynchronous event sources or stream event sources for Lambda can configure services like Amazon Simple Queue Service (SQS) and Amazon Simple Notification Service (SNS) as failed-event destinations to store the results of failed invocations. However, in scenarios where existing failed-event destinations do not support the payload size requirements for the failed events, customers need to write custom logic to retrieve and redrive event payload data. With today’s announcement, customers can configure S3 as a failed-event destination for Lambda functions invoked via asynchronous invocations, Kinesis ESMs, and DynamoDB ESMs. This enables customers to deliver complete event payload data to the failed-event destination, and helps reduce the overhead of managing custom logic to reliably retrieve and redrive failed event data.
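
For asynchronous invocations, the destination is set with the existing event invoke configuration API; the sketch below assumes an S3 bucket ARN is now accepted as the OnFailure destination. The function name and bucket ARN are placeholders, and the function's execution role needs permission to write objects to the bucket.

```python
import boto3

lam = boto3.client("lambda")

# Sketch only: route failed asynchronous events to an S3 bucket instead of
# SQS or SNS. The same DestinationConfig shape applies to Kinesis and
# DynamoDB event source mappings.
lam.put_function_event_invoke_config(
    FunctionName="order-processor",  # placeholder
    MaximumRetryAttempts=2,
    DestinationConfig={
        "OnFailure": {"Destination": "arn:aws:s3:::amzn-s3-demo-failed-events"}
    },
)
```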

This feature is generally available in all AWS Commercial Regions where AWS Lambda and the configured event source or event destination are available.

To enable S3 as a failed-event destination, refer to our documentation for configuring destinations with asynchronous invocations, Kinesis ESMs, and DynamoDB ESMs. This feature incurs no additional charge to use. You pay for charges associated with Amazon S3 usage.

Read more


Amazon EFS now supports cross-account Replication

Amazon EFS now supports cross-account Replication, allowing customers to replicate file systems between AWS accounts. EFS Replication enables you to easily maintain an up-to-date replica of your file system in the AWS Region of your choice. With this launch, EFS Replication customers can meet business continuity, multi-account disaster recovery, and compliance requirements by automatically keeping replicas of their file data in separate accounts.

Customers often use multiple AWS accounts to help isolate and manage business applications and data for operational excellence, security, and reliability. Starting today, you can use EFS Replication to replicate your file system to another account in any AWS Region. This eliminates the need to set up custom processes to synchronize EFS data across accounts, enhancing resilience and reliability in distributed environments.
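
A minimal sketch of configuring this with boto3 follows. The destination-side fields shown here (a destination FileSystemId in the other account plus a RoleArn for cross-account access) are assumptions about how cross-account targets are expressed, so verify them against the CreateReplicationConfiguration reference; all IDs are placeholders.

```python
import boto3

efs = boto3.client("efs")

# Sketch only: replicate a file system to another account and Region.
efs.create_replication_configuration(
    SourceFileSystemId="fs-0123456789abcdef0",  # placeholder
    Destinations=[
        {
            "Region": "us-west-2",
            "FileSystemId": "fs-0fedcba9876543210",  # in the other account
            "RoleArn": "arn:aws:iam::444455556666:role/EFSReplicationRole",
        }
    ],
)
```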

EFS cross-account Replication is available for all existing and new EFS file systems in all commercial AWS Regions. To learn more, visit the Amazon EFS Documentation and get started by configuring EFS Replication in just a few clicks using the Amazon EFS Console, AWS CLI, AWS CloudFormation, and APIs.
 

Read more


AWS Organizations member accounts can now regain access to accidentally locked Amazon S3 buckets

AWS Organizations member accounts can now use a simple process through AWS Identity and Access Management (IAM) to regain access to accidentally locked Amazon S3 buckets. With this capability, you can repair misconfigured S3 bucket policies while improving your organization’s security and compliance posture.

IAM now provides centralized management of long-term root credentials, helping you prevent unintended access and improving your account security at scale in your organization. You can also perform a curated set of root-only tasks, using short-lived and privileged root sessions. For example, you can centrally delete an S3 bucket policy in just a few steps. First, navigate to the Root access management page in the IAM console, select an account, and choose Take privileged action. Next, select Delete bucket policy and select your chosen S3 bucket.
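
The same flow can be scripted; the sketch below assumes the STS AssumeRoot API with a task policy scoped to unlocking bucket policies (the task policy ARN shown is an assumption, and the account ID and bucket name are placeholders).

```python
import boto3

sts = boto3.client("sts")

# Sketch only: take a short-lived, task-scoped root session in a member
# account, then delete the misconfigured bucket policy.
session = sts.assume_root(
    TargetPrincipal="111122223333",  # member account, placeholder
    TaskPolicyArn={"arn": "arn:aws:iam::aws:policy/root-task/S3UnlockBucketPolicy"},
    DurationSeconds=900,
)
creds = session["Credentials"]

s3 = boto3.client(
    "s3",
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretAccessKey"],
    aws_session_token=creds["SessionToken"],
)
s3.delete_bucket_policy(Bucket="accidentally-locked-bucket")  # placeholder
```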

AWS Organizations member accounts can use this capability in all AWS Regions, including the AWS GovCloud (US) Regions and AWS China Regions. Customers can use this new capability via the IAM console or programmatically using the AWS CLI or SDK. For more information, visit the AWS News Blog and IAM documentation.

Read more


AWS Backup now supports resource type and multiple tag selections in backup policies

Today, AWS Backup announces additional options to assign resources to a backup policy in AWS Organizations. Customers can now select specific resources by resource type and exclude them based on resource type or tag. They can also combine multiple tags within the same resource selection.

With additional options to select resources, customers can implement flexible backup strategies across their organizations by combining multiple resource types and/or tags. They can also exclude resources they do not want to back up using resource type or tag, optimizing cost on non-critical resources.

To get started, use your AWS Organizations management account to create or edit an AWS Backup policy. Then, create or modify a resource selection using the AWS Organizations API, CLI, or the JSON editor in either the AWS Organizations or AWS Backup console.

AWS Backup support for enhanced resource selection in backup policies is available in all commercial Regions where AWS Backup cross-account management is available. For more information, visit our documentation and launch blog.

Read more


Amazon S3 Access Grants now integrate with Amazon Redshift

Amazon S3 Access Grants now integrate with Amazon Redshift. S3 Access Grants map identities from your Identity Provider (IdP), such as Entra ID and Okta, to datasets stored in Amazon S3, helping you to easily manage data permissions at scale. This integration gives you the ability to manage S3 permissions for AWS IAM Identity Center users and groups when using Redshift, without the need to write and maintain bucket policies or individual IAM roles.

Using S3 Access Grants, you can grant permissions to buckets or prefixes in S3 to users and groups in your IdP by connecting S3 with IAM Identity Center. Then, when you use Identity Center authentication for Redshift, end users in the appropriate user groups will automatically have permission to read and write data in S3 using COPY, UNLOAD, and CREATE LIBRARY SQL commands. S3 Access Grants then automatically update S3 permissions as users are added and removed from user groups in the IdP.

Amazon S3 Access Grants with Amazon Redshift are available for users federated via IdP in all AWS Regions where AWS IAM Identity Center is available. For pricing details, visit Amazon S3 pricing and Amazon Redshift pricing. To learn more about S3 Access Grants, refer to the documentation.

Read more


Amazon S3 now supports up to 1 million buckets per AWS account

Amazon S3 has increased the default bucket quota from 100 to 10,000 per AWS account. Additionally, any customer can request a quota increase of up to 1 million buckets. As a result, customers can create new buckets for individual datasets stored in S3, making it easier to apply capabilities such as default encryption, security policies, and S3 Replication per dataset, removing barriers to scaling and helping optimize their S3 storage architecture.

Amazon S3’s new default bucket quota of 10,000 buckets is now applied to all AWS accounts and requires no action by customers. To increase your bucket quota from 10,000 to up to 1 million buckets, simply request a quota increase via Service Quotas. You can create your first 2,000 buckets at no cost. Above 2,000 buckets, you are charged a small monthly fee.
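
A sketch of requesting the increase with the Service Quotas API is below; the quota is located by name rather than by a hard-coded quota code, and the desired value and match string are illustrative.

```python
import boto3

sq = boto3.client("service-quotas")

# Find the S3 bucket quota by name, then request a higher value.
# (If the quota has never been raised, it may only appear via
# list_aws_default_service_quotas instead.)
quotas = sq.list_service_quotas(ServiceCode="s3")["Quotas"]
bucket_quota = next(q for q in quotas if "bucket" in q["QuotaName"].lower())

sq.request_service_quota_increase(
    ServiceCode="s3",
    QuotaCode=bucket_quota["QuotaCode"],
    DesiredValue=100000.0,  # illustrative; up to 1 million
)
```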

The increased default general purpose bucket limit per account now applies to all AWS Regions. To learn more about general purpose bucket quotas, visit the S3 User Guide.
 

Read more


AWS Backup now supports Amazon Neptune in three new Regions

Today, we are announcing the availability of AWS Backup support for Amazon Neptune in the Asia Pacific (Jakarta, Osaka) and Africa (Cape Town) Regions. AWS Backup is a policy-based, fully managed and cost-effective solution that enables you to centralize and automate data protection of Amazon Neptune along with other AWS services (spanning compute, storage, and databases) and third-party applications. Together with AWS Organizations, AWS Backup enables you to centrally deploy policies to configure, manage, and govern your data protection activity.

With this launch, AWS Backup support for Amazon Neptune is available in the following Regions: US East (N. Virginia, Ohio), US West (N. California, Oregon), Canada (Central), Europe (Frankfurt, Ireland, London, Paris, Stockholm), Asia Pacific (Hong Kong, Jakarta, Mumbai, Osaka, Seoul, Singapore, Sydney, Tokyo), Middle East (Bahrain, UAE), Africa (Cape Town), Israel (Tel Aviv), South America (Sao Paulo), AWS GovCloud (US-East, US-West), and China (Beijing, Ningxia). For more information on Region availability, feature availability, and pricing, see the AWS Backup pricing page and the AWS Backup feature availability page.

To learn more about AWS Backup support for Amazon Neptune, visit AWS Backup’s technical documentation. To get started, visit the AWS Backup console.

Read more


Amazon EBS now supports detailed performance statistics on EBS volume health

Today, Amazon announced the availability of detailed performance statistics for Amazon Elastic Block Store (EBS) volumes. This new capability provides you with real-time visibility into the performance of your EBS volumes, making it easier to monitor the health of your storage resources and take actions sooner.

With detailed performance statistics, you can access 11 metrics at up to a per-second granularity to monitor input/output (I/O) statistics of your EBS volumes, including driven I/O and I/O latency histograms. The granular visibility provided by these metrics helps you quickly identify and proactively troubleshoot application performance bottlenecks that may be caused by factors such as reaching an EBS volume's provisioned IOPS or throughput limits, enabling you to enhance application performance and resiliency.

Detailed performance statistics for EBS volumes are available by default for all EBS volumes attached to a Nitro-based EC2 instance in all AWS Commercial, China, and the AWS GovCloud (US) Regions, at no additional charge.

To get started with EBS detailed performance statistics, please visit the documentation to learn more about the available metrics and how to access them using NVMe-CLI tools.

Read more


AWS Backup now supports copying Amazon S3 backups across Regions and accounts in opt-in Regions

AWS Backup for Amazon S3 adds support to copy your Amazon S3 backups across AWS Regions and accounts in AWS opt-in Regions (Regions that are disabled by default).

With support for Amazon S3 backup copies in multiple AWS Regions, you can maintain separate, protected copies of your backup data to help meet compliance requirements for data protection and disaster recovery. Support for Amazon S3 backups across accounts adds a further layer of protection against inadvertent or unauthorized actions.

The ability to copy Amazon S3 backups across AWS Regions and accounts is now available in all commercial AWS Regions. For more information on regional availability and pricing, see the AWS Backup pricing page.

To learn more about AWS Backup for Amazon S3, visit the product page and technical documentation. To get started, visit the AWS Backup console.
 

Read more


Amazon S3 Access Grants is now available in the AWS Canada West (Calgary) Region

You can now create Amazon S3 Access Grants in the AWS Canada West (Calgary) Region.

Amazon S3 Access Grants map identities in directories such as Microsoft Entra ID, or AWS Identity and Access Management (IAM) principals, to datasets in S3. This helps you manage data permissions at scale by automatically granting S3 access to end users based on their corporate identity.

To learn more about Amazon S3 Access Grants, visit our product detail page, and see the S3 Access Grants Region Table for complete regional availability information.
 

Read more


tag-policies

Amazon DynamoDB announces general availability of attribute-based access control

Amazon DynamoDB is a serverless, NoSQL, fully managed database with single-digit millisecond performance at any scale. Today, we are announcing the general availability of attribute-based access control (ABAC) support for tables and indexes in all AWS Commercial Regions and the AWS GovCloud (US) Regions. ABAC is an authorization strategy that lets you define access permissions based on tags attached to users, roles, and AWS resources. Using ABAC with DynamoDB helps you simplify permission management with your tables and indexes as your applications and organizations scale.

ABAC uses tag-based conditions in your AWS Identity and Access Management (IAM) policies or other policies to allow or deny specific actions on your tables or indexes when IAM principals’ tags match the tags for the tables. Using tag-based conditions, you can also set more granular access permissions based on your organizational structures. ABAC automatically applies your tag-based permissions to new employees and changing resource structures, without rewriting policies as organizations grow.
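
As a minimal identity-policy sketch of the pattern, the statement below allows DynamoDB data access only when the table's "project" tag matches the calling principal's "project" tag; the tag key, actions, and account ID are illustrative.

```python
import json

# ABAC sketch: attach a policy like this to a role or user. Access is
# granted only when the table's "project" tag equals the principal's
# "project" tag, so new tables and new hires inherit permissions via tags.
abac_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["dynamodb:GetItem", "dynamodb:PutItem", "dynamodb:Query"],
            "Resource": "arn:aws:dynamodb:*:111122223333:table/*",
            "Condition": {
                "StringEquals": {
                    "aws:ResourceTag/project": "${aws:PrincipalTag/project}"
                }
            },
        }
    ],
}
print(json.dumps(abac_policy, indent=2))
```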

There is no additional cost to use ABAC. You can get started with ABAC using the AWS Management Console, AWS API, AWS CLI, AWS SDK, or AWS CloudFormation. Learn more at Using attribute-based access control with DynamoDB.

Read more