SAP-C02 Online Practice Questions


Latest SAP-C02 Exam Practice Questions

The practice questions for the SAP-C02 exam were last updated on 2025-12-14.


Question#1

A company deploys workloads in multiple AWS accounts. Each account has a VPC with VPC flow logs published in text log format to a centralized Amazon S3 bucket. Each log file is compressed with gzip compression. The company must retain the log files indefinitely.
A security engineer occasionally analyzes the logs by using Amazon Athena to query the VPC flow logs. The query performance is degrading over time as the number of ingested logs grows. A solutions architect must improve the performance of the log analysis and reduce the storage space that the VPC flow logs use.
Which solution will meet these requirements with the LARGEST performance improvement?

A. Create an AWS Lambda function to decompress the gzip files and to compress the files with bzip2 compression. Subscribe the Lambda function to an s3:ObjectCreated:Put S3 event notification for the S3 bucket.
B. Enable S3 Transfer Acceleration for the S3 bucket. Create an S3 Lifecycle configuration to move files to the S3 Intelligent-Tiering storage class as soon as the files are uploaded.
C. Update the VPC flow log configuration to store the files in Apache Parquet format. Specify hourly partitions for the log files.
D. Create a new Athena workgroup without data usage control limits. Use Athena engine version 2.

Explanation:
Storing VPC flow logs in Apache Parquet format and specifying hourly partitions significantly improves query performance and reduces storage space usage. Apache Parquet is a columnar storage file format optimized for analytical queries, allowing Athena to scan less data and improve query performance. Partitioning logs by hour further enhances query efficiency by limiting the amount of data scanned during queries, addressing the issue of degrading performance over time due to the growing volume of ingested logs.
AWS Documentation on VPC Flow Logs and Amazon Athena provides insights into configuring VPC flow logs in Apache Parquet format and using Athena for querying log data. This approach is recommended for efficient log analysis and storage optimization.
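For illustration, here is a minimal boto3 sketch of this configuration, assuming a hypothetical VPC ID and centralized bucket ARN (all identifiers below are placeholders, not values from the question). Destination options cannot be edited on an existing flow log, so a new flow log is created that writes Parquet files with per-hour, Hive-compatible partitions:

```python
import boto3

ec2 = boto3.client("ec2")

# Hypothetical VPC ID and centralized flow-log bucket ARN, for illustration only.
VPC_ID = "vpc-0123456789abcdef0"
LOG_BUCKET_ARN = "arn:aws:s3:::central-flow-log-bucket"

# Create a flow log that publishes to S3 in Parquet format with
# Hive-compatible, hourly partitions (existing flow logs cannot be updated
# in place, so a new flow log is created instead).
response = ec2.create_flow_logs(
    ResourceIds=[VPC_ID],
    ResourceType="VPC",
    TrafficType="ALL",
    LogDestinationType="s3",
    LogDestination=LOG_BUCKET_ARN,
    DestinationOptions={
        "FileFormat": "parquet",
        "HiveCompatiblePartitions": True,
        "PerHourPartition": True,
    },
)
print(response["FlowLogIds"])
```

Athena can then query the partitioned Parquet data directly, scanning far less data per query than with gzip-compressed text logs.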

Question#2

A large company is migrating its entire IT portfolio to AWS. Each business unit in the company has a standalone AWS account that supports both development and test environments. New accounts to support production workloads will be needed soon.
The finance department requires a centralized method for payment but must maintain visibility into each group's spending to allocate costs.
The security team requires a centralized mechanism to control IAM usage in all the company's accounts.
Which combination of the following options meets the company's needs with the LEAST effort? (Select TWO.)

A. Use a collection of parameterized AWS CloudFormation templates defining common IAM permissions that are launched into each account. Require all new and existing accounts to launch the appropriate stacks to enforce the least privilege model.
B. Use AWS Organizations to create a new organization from a chosen payer account and define an organizational unit hierarchy. Invite the existing accounts to join the organization and create new accounts using Organizations.
C. Require each business unit to use its own AWS accounts. Tag each AWS account appropriately and enable Cost Explorer to administer chargebacks.
D. Enable all features of AWS Organizations and establish appropriate service control policies that filter IAM permissions for sub-accounts.
E. Consolidate all of the company's AWS accounts into a single AWS account. Use tags for billing purposes and the IAM Access Advisor feature to enforce the least privilege model.

Explanation:
Option B is correct because AWS Organizations allows a company to create a new organization from a chosen payer account and define an organizational unit hierarchy. This way, the finance department can have a centralized method for payment but also maintain visibility into each group’s spending to allocate costs. The company can also invite the existing accounts to join the organization and create new accounts using Organizations, which simplifies the account management process.
Option D is correct because enabling all features of AWS Organizations and establishing appropriate service control policies (SCPs) that filter IAM permissions for sub-accounts allows the security team to have a centralized mechanism to control IAM usage in all the company’s accounts. SCPs are policies that specify the maximum permissions for an organization or organizational unit (OU), and they can be used to restrict access to certain services or actions across all accounts in an organization.
Option A is incorrect because using a collection of parameterized AWS CloudFormation templates defining common IAM permissions that are launched into each account requires more effort than using SCPs. Moreover, it does not provide a centralized mechanism to control IAM usage, as each account would have to launch the appropriate stacks to enforce the least privilege model.
Option C is incorrect because requiring each business unit to use its own AWS accounts does not provide a centralized method for payment or a centralized mechanism to control IAM usage. Tagging each AWS account appropriately and enabling Cost Explorer to administer chargebacks may help with cost allocation, but it is not as efficient as using AWS Organizations.
Option E is incorrect because consolidating all of the company’s AWS accounts into a single AWS account does not provide visibility into each group’s spending or a way to control IAM usage for different business units. Using tags for billing purposes and the IAM Access Advisor feature to enforce the least privilege model may help with cost optimization and security, but it is not as scalable or flexible as using AWS Organizations.
AWS Organizations
Service Control Policies
AWS CloudFormation
Cost Explorer
IAM Access Advisor
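To make the SCP mechanism concrete, here is a minimal boto3 sketch, assuming the organization already exists with all features enabled; the policy content, policy name, and OU ID are hypothetical placeholders:

```python
import json
import boto3

org = boto3.client("organizations")

# Hypothetical SCP that caps IAM usage in member accounts by denying
# creation of IAM users and access keys.
scp_document = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Deny",
            "Action": ["iam:CreateUser", "iam:CreateAccessKey"],
            "Resource": "*",
        }
    ],
}

# Create the SCP and attach it to a hypothetical OU that holds the
# business-unit accounts (requires all features enabled on the organization).
policy = org.create_policy(
    Content=json.dumps(scp_document),
    Description="Restrict IAM user and access key creation",
    Name="restrict-iam-usage",
    Type="SERVICE_CONTROL_POLICY",
)
org.attach_policy(
    PolicyId=policy["Policy"]["PolicySummary"]["Id"],
    TargetId="ou-examp-12345678",  # hypothetical OU ID
)
```

An SCP does not grant permissions; it only caps what identity-based policies in the member accounts can allow, which is why it provides centralized control with little per-account effort.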

Question#3

A company hosts a data-processing application on Amazon EC2 instances. The application polls an Amazon Elastic File System (Amazon EFS) file system for newly uploaded files. When a new file is detected, the application extracts data from the file and runs logic to select a Docker container image to process the file. The application starts the appropriate container image and passes the file location as a parameter.
The data processing that the container performs can take up to 2 hours. When the processing is complete, the code that runs inside the container writes the file back to Amazon EFS and exits.
The company needs to refactor the application to eliminate the EC2 instances that are running the containers.
Which solution will meet these requirements?

A. Create an Amazon Elastic Container Service (Amazon ECS) cluster. Configure the processing to run as AWS Fargate tasks. Extract the container selection logic to run as an Amazon EventBridge rule that starts the appropriate Fargate task. Configure the EventBridge rule to run when files are added to the EFS file system.
B. Create an Amazon Elastic Container Service (Amazon ECS) cluster. Configure the processing to run as AWS Fargate tasks. Update and containerize the container selection logic to run as a Fargate service that starts the appropriate Fargate task. Configure an EFS event notification to invoke the Fargate service when files are added to the EFS file system.
C. Create an Amazon Elastic Container Service (Amazon ECS) cluster. Configure the processing to run as AWS Fargate tasks. Extract the container selection logic to run as an AWS Lambda function that starts the appropriate Fargate task. Migrate the storage of file uploads to an Amazon S3 bucket. Update the processing code to use Amazon S3. Configure an S3 event notification to invoke the Lambda function when objects are created.
D. Create AWS Lambda container images for the processing. Configure Lambda functions to use the container images. Extract the container selection logic to run as a decision Lambda function that invokes the appropriate Lambda processing function. Migrate the storage of file uploads to an Amazon S3 bucket. Update the processing code to use Amazon S3. Configure an S3 event notification to invoke the decision Lambda function when objects are created.
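For reference, here is a minimal sketch of the event-driven pattern that options C and D describe, in which an S3 ObjectCreated event invokes a Lambda function that applies the selection logic and starts processing on AWS Fargate; the cluster, task definition, container, and subnet names are hypothetical:

```python
import boto3

ecs = boto3.client("ecs")

# Hypothetical mapping from file extension to ECS task definition; the real
# selection logic would be extracted from the existing application.
TASK_DEFINITIONS = {"csv": "process-csv:1", "xml": "process-xml:1"}

def handler(event, context):
    """Lambda handler for an S3 ObjectCreated event notification."""
    record = event["Records"][0]
    bucket = record["s3"]["bucket"]["name"]
    key = record["s3"]["object"]["key"]

    # Pick a task definition based on the uploaded file's extension.
    task_def = TASK_DEFINITIONS.get(key.rsplit(".", 1)[-1], "process-generic:1")

    # Start a Fargate task and pass the object location as environment variables.
    ecs.run_task(
        cluster="processing-cluster",                     # placeholder cluster
        taskDefinition=task_def,
        launchType="FARGATE",
        networkConfiguration={
            "awsvpcConfiguration": {
                "subnets": ["subnet-0123456789abcdef0"],  # placeholder subnet
                "assignPublicIp": "DISABLED",
            }
        },
        overrides={
            "containerOverrides": [
                {
                    "name": "processor",                  # placeholder container
                    "environment": [
                        {"name": "INPUT_BUCKET", "value": bucket},
                        {"name": "INPUT_KEY", "value": key},
                    ],
                }
            ]
        },
    )
```

A Fargate task is not subject to Lambda's 15-minute execution limit, which matters here because the processing can take up to 2 hours.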

Question#4

A solutions architect is creating an application that stores objects in an Amazon S3 bucket. The solutions architect must deploy the application in two AWS Regions that will be used simultaneously. The objects in the two S3 buckets must remain synchronized with each other.
Which combination of steps will meet these requirements with the LEAST operational overhead? (Select THREE.)

A. Create an S3 Multi-Region Access Point. Change the application to refer to the Multi-Region Access Point.
B. Configure two-way S3 Cross-Region Replication (CRR) between the two S3 buckets.
C. Modify the application to store objects in each S3 bucket.
D. Create an S3 Lifecycle rule for each S3 bucket to copy objects from one S3 bucket to the other S3 bucket.
E. Enable S3 Versioning for each S3 bucket.
F. Configure an event notification for each S3 bucket to invoke an AWS Lambda function to copy objects from one S3 bucket to the other S3 bucket.

Explanation:
https://docs.aws.amazon.com/AmazonS3/latest/userguide/MultiRegionAccessPointRequestRouting.html
https://stackoverflow.com/questions/60947157/aws-s3-replication-without-versioning#:~:text=The%20automated%20Same%20Region%20Replication,is%20replicated%20between%20S3%20buckets.
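To make the replication mechanics concrete, here is a minimal boto3 sketch of two-way CRR between two buckets, assuming hypothetical bucket names and an existing replication IAM role (in practice a client per bucket Region may be needed); creating the Multi-Region Access Point via the S3 Control API and pointing the application at it are separate steps:

```python
import boto3

s3 = boto3.client("s3")

# Hypothetical bucket names and replication role; real values would differ.
BUCKET_PAIRS = [
    ("app-bucket-us-east-1", "app-bucket-eu-west-1"),
    ("app-bucket-eu-west-1", "app-bucket-us-east-1"),
]
REPLICATION_ROLE_ARN = "arn:aws:iam::123456789012:role/s3-crr-role"

# S3 replication requires versioning on both buckets, so enable it first.
for bucket in {b for pair in BUCKET_PAIRS for b in pair}:
    s3.put_bucket_versioning(
        Bucket=bucket,
        VersioningConfiguration={"Status": "Enabled"},
    )

# One replication rule per direction gives two-way (bidirectional) CRR.
for source, destination in BUCKET_PAIRS:
    s3.put_bucket_replication(
        Bucket=source,
        ReplicationConfiguration={
            "Role": REPLICATION_ROLE_ARN,
            "Rules": [
                {
                    "ID": f"replicate-to-{destination}",
                    "Status": "Enabled",
                    "Priority": 1,
                    "Filter": {},
                    "DeleteMarkerReplication": {"Status": "Disabled"},
                    "Destination": {"Bucket": f"arn:aws:s3:::{destination}"},
                }
            ],
        },
    )
```

Because replication cannot be configured on an unversioned bucket, enabling S3 Versioning on each bucket is a prerequisite for the two-way CRR setup.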

Question#5

A solutions architect needs to copy data from an Amazon S3 bucket in an AWS account to a new S3 bucket in a new AWS account. The solutions architect must implement a solution that uses the AWS CLI.
Which combination of steps will successfully copy the data? (Choose three.)

A. Create a bucket policy to allow the source bucket to list its contents and to put objects and set object ACLs in the destination bucket. Attach the bucket policy to the destination bucket.
B. Create a bucket policy to allow a user in the destination account to list the source bucket's contents and read the source bucket's objects. Attach the bucket policy to the source bucket.
C. Create an IAM policy in the source account. Configure the policy to allow a user in the source account to list contents and get objects in the source bucket, and to list contents, put objects, and set object ACLs in the destination bucket. Attach the policy to the user.
D. Create an IAM policy in the destination account. Configure the policy to allow a user in the destination account to list contents and get objects in the source bucket, and to list contents, put objects, and set object ACLs in the destination bucket. Attach the policy to the user.
E. Run the aws s3 sync command as a user in the source account. Specify the source and destination buckets to copy the data.
F. Run the aws s3 sync command as a user in the destination account. Specify the source and destination buckets to copy the data.

Explanation:
Step B is necessary so that the user in the destination account has permission to list the source bucket's contents and read its objects. Step D is needed so that the same user has permission to list contents, put objects, and set object ACLs in the destination bucket. Step F is necessary because the aws s3 sync command must be run with the IAM user credentials from the destination account, so that the objects have the appropriate permissions for the user in the destination account once they are copied.
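As an illustration of step B, here is a minimal boto3 sketch of the bucket policy on the source bucket, assuming hypothetical account, user, and bucket names:

```python
import json
import boto3

s3 = boto3.client("s3")  # run with credentials from the source account

# Hypothetical destination-account user ARN and source bucket name.
DEST_ACCOUNT_USER_ARN = "arn:aws:iam::222222222222:user/copy-user"
SOURCE_BUCKET = "source-bucket"

# Bucket policy that lets the destination-account user list the source
# bucket and read its objects (step B).
source_bucket_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {"AWS": DEST_ACCOUNT_USER_ARN},
            "Action": ["s3:ListBucket", "s3:GetObject"],
            "Resource": [
                f"arn:aws:s3:::{SOURCE_BUCKET}",
                f"arn:aws:s3:::{SOURCE_BUCKET}/*",
            ],
        }
    ],
}
s3.put_bucket_policy(Bucket=SOURCE_BUCKET, Policy=json.dumps(source_bucket_policy))
```

Step D's identity policy in the destination account would grant the same user read access to the source bucket and write access to the destination bucket, and step F then runs, for example, aws s3 sync s3://source-bucket s3://destination-bucket with that user's credentials.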

Exam Code: SAP-C02 | Q&As: 569 | Updated: 2025-12-14
