SAA-C03 Online Practice Questions


Latest SAA-C03 Exam Practice Questions

The practice questions for the SAA-C03 exam were last updated on 2026-02-24.


Question#1

A company wants to create a payment processing application. The application must run when a payment record arrives in an existing Amazon S3 bucket. The application must process each payment record exactly once. The company wants to use an AWS Lambda function to process the payments.
Which solution will meet these requirements?

A. Configure the existing S3 bucket to send object creation events to Amazon EventBridge. Configure EventBridge to route events to an Amazon Simple Queue Service (Amazon SQS) FIFO queue. Configure the Lambda function to run when a new event arrives in the SQS queue.
B. Configure the existing S3 bucket to send object creation events to an Amazon Simple Notification Service (Amazon SNS) topic. Configure the Lambda function to run when a new event arrives in the SNS topic.
C. Configure the existing S3 bucket to send object creation events to an Amazon Simple Queue Service (Amazon SQS) queue. Configure the Lambda function to run when a new event arrives in the SQS queue.
D. Configure the existing S3 bucket to send object creation events directly to the Lambda function. Configure the Lambda function to handle object creation events and to process the payments.

Question#2

A company is using microservices to build an ecommerce application on AWS. The company wants to preserve customer transaction information after customers submit orders. The company wants to store transaction data in an Amazon Aurora database. The company expects sales volumes to vary throughout each year.
Which solution will meet these requirements with the LEAST operational overhead?

A. Use an Amazon API Gateway REST API to invoke an AWS Lambda function to send transaction data to the Aurora database. Send transaction data to an Amazon Simple Queue Service (Amazon SQS) queue that has a dead-letter queue. Use a second Lambda function to read from the SQS queue and to update the Aurora database.
B. Use an Amazon API Gateway HTTP API to send transaction data to an Application Load Balancer (ALB). Use the ALB to send the transaction data to Amazon Elastic Container Service (Amazon ECS) on Amazon EC2. Use ECS tasks to store the data in the Aurora database.
C. Use an Application Load Balancer (ALB) to route transaction data to Amazon Elastic Kubernetes Service (Amazon EKS). Use Amazon EKS to send the data to the Aurora database.
D. Use Amazon Data Firehose to send transaction data to Amazon S3. Use AWS Database Migration Service (AWS DMS) to migrate the data from Amazon S3 to the Aurora database.

Explanation:
The solution must handle variable sales volumes, preserve transaction information, and store data in an Amazon Aurora database with minimal operational overhead. Using API Gateway, AWS Lambda, and Amazon SQS is the best option because it provides scalability, reliability, and resilience.
Why Option A is Correct:
API Gateway: Serves as an entry point for transaction data in a serverless, scalable manner.
AWS Lambda: Processes the transactions and sends them to Amazon SQS for queuing.
Amazon SQS: Buffers the transaction data, ensuring durability and resilience against spikes in transaction volume.
Second Lambda Function: Processes messages from the SQS queue and updates the Aurora database, decoupling the workflow for better scalability.
Dead-Letter Queue (DLQ): Ensures failed transactions are logged for later debugging or reprocessing.
Why Other Options Are Not Ideal:
Option B:
Using an ALB with ECS on EC2 introduces operational overhead, such as managing EC2 instances and scaling ECS tasks. Not cost-effective.
Option C:
EKS is highly operationally intensive and requires Kubernetes cluster management, which is unnecessary for this use case. Too complex.
Option D:
Amazon Data Firehose and DMS are not designed for real-time transactional workflows. They are better suited for data analytics pipelines. Not suitable.
Reference:
Amazon API Gateway: AWS Documentation - API Gateway
AWS Lambda: AWS Documentation - Lambda
Amazon SQS: AWS Documentation - SQS
Amazon Aurora: AWS Documentation - Aurora
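The second Lambda function in option A can be sketched as a standard SQS batch consumer. The code below is a hypothetical, minimal sketch: persist_transaction stands in for the real Aurora write (for example, via the RDS Data API or a MySQL client), and the partial-batch response assumes ReportBatchItemFailures is enabled on the event source mapping so that only failed messages are retried and eventually routed to the dead-letter queue.

```python
import json

def persist_transaction(record_body, db_conn=None):
    """Placeholder for the Aurora write (e.g., via pymysql or the
    RDS Data API). Here we only validate and return the parsed record."""
    txn = json.loads(record_body)
    if "order_id" not in txn:
        raise ValueError("transaction missing order_id")
    return txn

def lambda_handler(event, context):
    """Consume a batch of SQS messages and report per-message failures.
    With ReportBatchItemFailures enabled, SQS redelivers only the failed
    messages; repeated failures land in the dead-letter queue."""
    failures = []
    for record in event["Records"]:
        try:
            persist_transaction(record["body"])
        except Exception:
            failures.append({"itemIdentifier": record["messageId"]})
    return {"batchItemFailures": failures}
```

A malformed message is reported back to SQS rather than failing the whole batch, which is what makes the DLQ pattern in option A work for debugging and reprocessing.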

Question#3

A company runs a critical Amazon RDS for MySQL DB instance in a single Availability Zone. The company must improve the availability of the DB instance.
Which solution will meet this requirement?

A. Configure the DB instance to use a multi-Region DB instance deployment.
B. Create an Amazon Simple Queue Service (Amazon SQS) queue in the AWS Region where the company hosts the DB instance to manage writes to the DB instance.
C. Configure the DB instance to use a Multi-AZ DB instance deployment.
D. Create an Amazon Simple Queue Service (Amazon SQS) queue in a different AWS Region than the Region where the company hosts the DB instance to manage writes to the DB instance.

Explanation:
To improve availability and fault tolerance of an Amazon RDS instance, the recommended approach is to configure a Multi-AZ deployment.
Multi-AZ deployments for RDS automatically replicate data to a standby instance in a different Availability Zone (AZ).
If a failure occurs in the primary AZ (due to hardware, network, or power), RDS will automatically failover to the standby instance with minimal downtime, without administrative intervention.
This is an AWS-managed feature and does not require application modification.
It does not provide scalability or load balancing; it's designed for high availability and resiliency.
Options A, B, and D are incorrect:
A refers to cross-Region, which is used for disaster recovery, not high availability.
B and D with SQS do not address high availability directly for the RDS instance; queues help decouple systems but do not make a database more resilient.
Reference: Amazon RDS Multi-AZ Deployments
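Converting an existing Single-AZ instance to a Multi-AZ deployment is a single modification call. A minimal AWS CLI sketch (the instance identifier is a placeholder; a brief I/O suspension can occur while the standby is provisioned):

```shell
# --multi-az provisions a synchronous standby replica in a different AZ.
# --apply-immediately applies the change now instead of waiting for the
# next maintenance window.
aws rds modify-db-instance \
    --db-instance-identifier my-mysql-db \
    --multi-az \
    --apply-immediately
```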

Question#4

A company's SAP application has a backend SQL Server database in an on-premises environment. The company wants to migrate its on-premises application and database server to AWS. The company needs an instance type that meets the high demands of its SAP database. On-premises performance data shows that both the SAP application and the database have high memory utilization.
Which solution will meet these requirements?

A. Use the compute optimized instance family for the application. Use the memory optimized instance family for the database.
B. Use the storage optimized instance family for both the application and the database.
C. Use the memory optimized instance family for both the application and the database.
D. Use the high performance computing (HPC) optimized instance family for the application. Use the memory optimized instance family for the database.

Explanation:
Memory Optimized Instances: These instances are designed to deliver fast performance for workloads that process large data sets in memory. They are ideal for high-performance databases like SAP and applications with high memory utilization.
High Memory Utilization: Both the SAP application and the SQL Server database have high memory demands as per the on-premises performance data. Memory optimized instances provide the necessary memory capacity and performance.
Instance Types:
For the SAP application, using a memory optimized instance ensures the application has sufficient memory to handle the high workload efficiently.
For the SQL Server database, memory optimized instances ensure optimal database performance with high memory throughput.
Operational Efficiency: Using the same instance family for both the application and the database simplifies management and ensures both components meet performance requirements.
Reference: Amazon EC2 Instance Types
SAP on AWS
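The practical difference between the families is largely the memory-to-vCPU ratio: compute optimized (C family) sizes offer roughly 2 GiB per vCPU, general purpose (M family) roughly 4 GiB, and memory optimized (R family) roughly 8 GiB. A small Python sketch using published specs for three representative sizes (verify current values in the EC2 docs) shows why a memory-bound workload lands on the R family:

```python
# Published specs for three representative instance sizes.
INSTANCE_SPECS = {
    "c5.xlarge": {"vcpus": 4, "memory_gib": 8},    # compute optimized
    "m5.xlarge": {"vcpus": 4, "memory_gib": 16},   # general purpose
    "r5.xlarge": {"vcpus": 4, "memory_gib": 32},   # memory optimized
}

def memory_per_vcpu(instance_type):
    """GiB of memory per vCPU for a known instance type."""
    spec = INSTANCE_SPECS[instance_type]
    return spec["memory_gib"] / spec["vcpus"]

def pick_for_memory_bound_workload(candidates):
    """Pick the candidate with the highest memory-to-vCPU ratio,
    a rough proxy for 'memory optimized'."""
    return max(candidates, key=memory_per_vcpu)
```

Both the SAP application tier and the SQL Server database in this question are memory bound, so both tiers select the same family.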

Question#5

A company is migrating some of its applications to AWS. The company wants to migrate and modernize the applications quickly after it finalizes networking and security strategies. The company has set up an AWS Direct Connect connection in a central network account.
The company expects to have hundreds of AWS accounts and VPCs in the near future. The corporate network must be able to access the resources on AWS seamlessly and also must be able to communicate with all the VPCs. The company also wants to route its cloud resources to the internet through its on-premises data center.
Which combination of steps will meet these requirements? (Select THREE.)

A. Create a Direct Connect gateway in the central account. In each of the accounts, create an association proposal by using the Direct Connect gateway and the account ID for every virtual private gateway.
B. Create a Direct Connect gateway and a transit gateway in the central network account. Attach the transit gateway to the Direct Connect gateway by using a transit VIF.
C. Provision an internet gateway. Attach the internet gateway to subnets. Allow internet traffic through the gateway.
D. Share the transit gateway with other accounts. Attach VPCs to the transit gateway.
E. Provision VPC peering as necessary.
F. Provision only private subnets. Open the necessary route on the transit gateway and customer gateway to allow outbound internet traffic from AWS to flow through NAT services that run in the data center.

Explanation:
For a large-scale multi-account AWS environment with many VPCs and centralized Direct Connect, AWS recommends using a Transit Gateway (TGW) architecture combined with a Direct Connect gateway (DXGW). This setup allows scalable, centralized connectivity between on-premises and multiple VPCs across accounts.
Step B: Creating a Direct Connect gateway and Transit Gateway in a central network account and connecting them via a transit VIF enables the on-premises network to access all connected VPCs.
Step D: Sharing the transit gateway with other accounts via AWS Resource Access Manager (RAM) allows the central TGW to attach VPCs in multiple accounts, simplifying multi-account connectivity.
Step F: To route cloud resources’ internet traffic back through the on-premises data center (for centralized egress), provisioning only private subnets and routing outbound internet traffic through NAT or firewall services in the data center is necessary. This requires configuring transit gateway and customer gateway routes appropriately.
Option A correctly uses a Direct Connect gateway, but creating an association proposal for every virtual private gateway does not scale to hundreds of VPCs and accounts the way a transit gateway does.
Option C (internet gateway) is irrelevant here as traffic egress is required via on-premises data center, not directly to the internet.
Option E (VPC peering) is not scalable for hundreds of VPCs.
Reference: AWS Transit Gateway Overview (https://docs.aws.amazon.com/vpc/latest/tgw/what-is-transit-gateway.html)
AWS Direct Connect Gateway (https://docs.aws.amazon.com/directconnect/latest/UserGuide/direct-connect-gateways.html)
Centralized Egress Architecture with Transit Gateway (https://aws.amazon.com/blogs/networking-and-content-delivery/how-to-set-up-centralized-egress-with-transit-gateway/)
AWS Well-Architected Framework - Reliability Pillar (https://d1.awsstatic.com/whitepapers/architecture/AWS_Well-Architected_Framework.pdf)
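Steps B and D above can be sketched with the AWS CLI. All IDs and ARNs below are placeholders, the transit VIF itself is created on the Direct Connect side, and the route configuration for step F (centralized egress) is omitted:

```shell
# In the central network account: create the transit gateway and the
# Direct Connect gateway (they are later linked through a transit VIF).
aws ec2 create-transit-gateway --description "central-tgw"
aws directconnect create-direct-connect-gateway \
    --direct-connect-gateway-name central-dxgw

# Share the transit gateway with the organization via AWS RAM so
# spoke accounts can attach their VPCs.
aws ram create-resource-share \
    --name tgw-share \
    --resource-arns arn:aws:ec2:us-east-1:111111111111:transit-gateway/tgw-0123456789abcdef0 \
    --principals arn:aws:organizations::111111111111:organization/o-exampleorgid

# In each spoke account: attach the VPC to the shared transit gateway.
aws ec2 create-transit-gateway-vpc-attachment \
    --transit-gateway-id tgw-0123456789abcdef0 \
    --vpc-id vpc-0abc1234 \
    --subnet-ids subnet-0abc1234
```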

Disclaimer

This page is for educational and exam preparation reference only. It is not affiliated with Amazon, AWS Certified Associate, or the official exam provider. Candidates should refer to official documentation and training for authoritative information.

Exam Code: SAA-C03 | Q&As: 724 | Updated: 2026-02-24
