DVA-C02 Certification Exam Guide + Practice Questions


Comprehensive DVA-C02 certification exam guide covering exam overview, skills measured, preparation tips, and practice questions with detailed explanations.

DVA-C02 Exam Guide

The DVA-C02 exam focuses on practical knowledge and real-world application scenarios related to the subject area. It evaluates your ability to understand core concepts, apply best practices, and make informed decisions in realistic situations rather than relying solely on memorization.

This page provides a structured exam guide, including exam focus areas, skills measured, preparation recommendations, and practice questions with explanations to support effective learning.


Exam Overview

The DVA-C02 exam typically emphasizes how concepts are used in professional environments, testing both theoretical understanding and practical problem-solving skills.


Skills Measured

  • Understanding of core concepts and terminology
  • Ability to apply knowledge to practical scenarios
  • Analysis and evaluation of solution options
  • Identification of best practices and common use cases


Preparation Tips

Successful candidates combine conceptual understanding with hands-on practice. Reviewing measured skills and working through scenario-based questions is strongly recommended.


Practice Questions for DVA-C02 Exam

The following practice questions are designed to reinforce key DVA-C02 exam concepts and reflect common scenario-based decision points tested in the certification.

Question#1

A developer is configuring an application's deployment environment in AWS CodePipeline. The application code is stored in a GitHub repository. The developer wants to ensure that the repository package's unit tests run in the new deployment environment. The developer has already set the pipeline's source provider to GitHub and has specified the repository and branch to use in the deployment.
Which combination of steps should the developer take next to meet these requirements with the LEAST overhead? (Select TWO.)

A. Create an AWS CodeCommit project. Add the repository package's build and test commands to the project's buildspec.
B. Create an AWS CodeBuild project. Add the repository package's build and test commands to the project's buildspec.
C. Create an AWS CodeDeploy project. Add the repository package's build and test commands to the project's buildspec.
D. Add an action to the source stage. Specify the newly created project as the action provider. Specify the build artifact as the action's input artifact.
E. Add a new stage to the pipeline after the source stage. Add an action to the new stage. Specify the newly created project as the action provider. Specify the source artifact as the action's input artifact.

Explanation:
This solution will ensure that the repository package’s unit tests run in the new deployment environment with the least overhead because it uses AWS CodeBuild to build and test the code in a fully managed service, and AWS CodePipeline to orchestrate the deployment stages and actions.
Option A is not optimal because AWS CodeCommit is a source control service, not a build and test service.
Option C is not optimal because AWS CodeDeploy is a deployment service, not a build and test service.
Option D is not optimal because it adds the action to the source stage instead of a separate stage, which does not follow the best practice of separating different deployment phases.
Reference: AWS CodeBuild, AWS CodePipeline
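As an illustration, a minimal CodeBuild buildspec that runs a package's unit tests might look like the following sketch. The npm commands assume a Node.js project and are hypothetical; substitute the build and test commands for your own toolchain.

```yaml
version: 0.2

phases:
  install:
    commands:
      - npm ci            # install dependencies (assumed Node.js project)
  build:
    commands:
      - npm run build     # hypothetical build script
      - npm test          # run the repository package's unit tests
```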

Question#2

A developer is trying to make API calls using the AWS SDK. The IAM user credentials used by the application require multi-factor authentication for all API calls.
Which method should the developer use to access the multi-factor authentication-protected API?

A. GetFederationToken
B. GetCallerIdentity
C. GetSessionToken
D. DecodeAuthorizationMessage

Explanation:
When IAM user credentials require MFA for API access, the correct approach is to obtain temporary security credentials from AWS Security Token Service (STS) that are validated with an MFA code. AWS documentation describes using STS to issue temporary credentials that applications can use instead of long-term access keys, especially when MFA is required.
The specific STS API operation used for an IAM user to obtain temporary credentials is GetSessionToken. This call supports MFA by accepting the user’s MFA device serial number and a time-based one-time password (TOTP) code. STS then returns a set of temporary credentials: AccessKeyId, SecretAccessKey, and SessionToken, which the SDK can use to sign subsequent API requests. This is the standard method for enabling MFA-protected API access for IAM users.
Why the other options are wrong:
GetFederationToken is used to obtain temporary credentials for a federated user, often for scenarios where you want to grant access to resources for users who do not have IAM users. It’s not the typical method for IAM-user MFA enforcement for all calls.
GetCallerIdentity simply returns identity details for the current credentials; it does not generate credentials.
DecodeAuthorizationMessage is used to decode encoded authorization failure messages returned by AWS, not to authenticate.
Therefore, to access an API protected by MFA requirements for an IAM user, the developer should call GetSessionToken and then use the returned temporary credentials in the AWS SDK.
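To make the flow concrete, here is a hedged boto3 sketch. Only the helper that assembles the GetSessionToken parameters is executed; the actual STS call (which needs real IAM credentials and an MFA device) is shown in comments, and the device ARN and token code are hypothetical.

```python
# Hypothetical sketch: MFA-protected API access via STS GetSessionToken.
# This helper only builds the request keyword arguments, which mirror the
# GetSessionToken API parameters (SerialNumber, TokenCode, DurationSeconds).

def build_get_session_token_request(mfa_serial, token_code, duration_seconds=3600):
    """Assemble keyword arguments for sts.get_session_token()."""
    return {
        "SerialNumber": mfa_serial,          # ARN of the user's MFA device
        "TokenCode": token_code,             # current TOTP code from the device
        "DurationSeconds": duration_seconds, # lifetime of the temporary credentials
    }

# Example flow (requires boto3 and valid IAM credentials; not run here):
# import boto3
# sts = boto3.client("sts")
# creds = sts.get_session_token(
#     **build_get_session_token_request(
#         "arn:aws:iam::123456789012:mfa/dev-user",  # hypothetical device ARN
#         "123456",                                  # hypothetical TOTP code
#     )
# )["Credentials"]
# session = boto3.session.Session(
#     aws_access_key_id=creds["AccessKeyId"],
#     aws_secret_access_key=creds["SecretAccessKey"],
#     aws_session_token=creds["SessionToken"],
# )
```

The returned session signs subsequent SDK calls with the temporary, MFA-validated credentials instead of the user's long-term access keys.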

Question#3

A software company is migrating a single-page application from on-premises servers to the AWS Cloud by using AWS Amplify Hosting. The application relies on an API that was created with an existing GraphQL schema. The company needs to migrate the API along with the application.
Which solution will meet this requirement with the LEAST amount of configuration?

A. Create a new API by using the Amplify CLI's amplify import api command. Select REST as the service to use. Add the existing schema to the new API.
B. Create a new API in Amazon API Gateway by using the existing schema. Use the Amplify CLI's amplify add api command. Select the API as the application's backend environment.
C. Create a new API in AWS AppSync by using the existing schema. Use the Amplify CLI's amplify import api command. Select the API as the application's backend environment.
D. Create a new API by using the Amplify CLI's amplify add api command. Select GraphQL as the service to use. Add the existing schema to the new API.

Explanation:
AWS Amplify’s most direct support for GraphQL APIs is through AWS AppSync, and the Amplify CLI can generate and configure an AppSync GraphQL API directly from a schema with minimal setup. The requirement says the API already has an existing GraphQL schema, and the goal is to migrate it with the least configuration effort.
Option D is the simplest: run amplify add api, choose GraphQL, and provide the existing schema. Amplify then provisions the AppSync API, sets up the schema, creates the backend resources (depending on chosen data sources), and wires the configuration into the Amplify project so the SPA can consume the API.
Option A is incorrect because it selects REST and does not align with an existing GraphQL schema.
Option B is incorrect because API Gateway is not the native GraphQL service and would require additional mapping or proxy logic, meaning more configuration.
Option C can be valid if an AppSync API already exists and you want to import it, but the question asks to “migrate the API along with the application” with least configuration. Creating it directly in Amplify is typically less configuration than creating separately and importing.
Therefore, using Amplify CLI to add a GraphQL API and supply the existing schema is the least-config approach.

Question#4

A developer manages a website that distributes its content by using Amazon CloudFront. The website's static artifacts are stored in an Amazon S3 bucket.
The developer deploys some changes and can see the new artifacts in the S3 bucket. However, the changes do not appear on the webpage that the CloudFront distribution delivers.
How should the developer resolve this issue?

A. Configure S3 Object Lock to update to the latest version of the files every time an S3 object is updated.
B. Configure the S3 bucket to clear all old objects from the bucket before new artifacts are uploaded.
C. Set CloudFront to invalidate the cache after the artifacts have been deployed to Amazon S3.
D. Set CloudFront to modify the distribution origin after the artifacts have been deployed to Amazon S3.

Explanation:
CloudFront is a content delivery network that caches objects at edge locations to reduce latency and origin load. When the developer updates static artifacts in the origin S3 bucket, CloudFront may continue serving cached versions until the objects expire based on cache-control headers or the distribution’s TTL settings. That is why the developer sees the updated files in S3 but not on the site.
The standard fix is to perform a CloudFront cache invalidation for the updated object paths (for example, /app.js, /styles.css, or /* if needed). An invalidation forces CloudFront edge locations to remove the cached objects so the next viewer request fetches the latest version from the S3 origin.
Option C directly addresses this behavior and is the correct operational practice when cache TTLs would otherwise delay updates.
Option A (Object Lock) is for WORM retention/compliance and has nothing to do with CloudFront cache refresh.
Option B might change what exists in S3 but does not guarantee CloudFront will stop serving cached content; the cache can still serve objects even if deleted from the origin (until it revalidates/refreshes).
Option D is unnecessary; the origin does not need to change to fetch updated content.
Therefore, invalidate the CloudFront cache after deployment to ensure the new artifacts are served.
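As a hedged boto3 sketch, the invalidation request can be assembled as below. Only the payload-building helper runs here; the create_invalidation call itself is commented out because it needs a real distribution ID (the one shown is hypothetical).

```python
# Hypothetical sketch: building a CloudFront invalidation request after a deploy.
import time

def build_invalidation_batch(paths, caller_reference=None):
    """Assemble the InvalidationBatch payload for create_invalidation()."""
    return {
        "Paths": {"Quantity": len(paths), "Items": list(paths)},
        # CallerReference must be unique per invalidation request;
        # a timestamp is a common choice.
        "CallerReference": caller_reference or str(int(time.time())),
    }

# Example flow (requires boto3 and a real distribution ID; not run here):
# import boto3
# cf = boto3.client("cloudfront")
# cf.create_invalidation(
#     DistributionId="E1EXAMPLE",  # hypothetical distribution ID
#     InvalidationBatch=build_invalidation_batch(["/index.html", "/assets/*"]),
# )
```

Invalidating specific paths (rather than /*) keeps costs down when only a few artifacts changed.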

Question#5

When a developer tries to run an AWS CodeBuild project, the build fails with an error because the combined length of all environment variables exceeds the maximum allowed number of characters.
What is the recommended solution?

A. Add the export LC_ALL="en_US.utf8" command to the pre_build section to ensure POSIX localization.
B. Use Amazon Cognito to store key-value pairs for large numbers of environment variables.
C. Update the settings for the build project to use an Amazon S3 bucket for large numbers of environment variables.
D. Use AWS Systems Manager Parameter Store to store large numbers of environment variables.

Explanation:
Using AWS Systems Manager Parameter Store allows the developer to overcome the limit on the combined length of environment variables in AWS CodeBuild. Parameter Store provides secure, hierarchical storage for configuration data management and secrets management. The developer can store large numbers of environment variables as parameters in Parameter Store and reference them in the buildspec file. Adding the export LC_ALL="en_US.utf8" command to the pre_build section does not affect the environment variable limit, and using Amazon Cognito or an Amazon S3 bucket to store key-value pairs would require additional configuration and integration.
Reference: [Build Specification Reference for AWS CodeBuild], [What Is AWS Systems Manager Parameter Store?]
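A buildspec can pull values from Parameter Store through its env/parameter-store section, so the variables never count against the project's plaintext environment variable limit. A minimal sketch, with hypothetical parameter names:

```yaml
version: 0.2

env:
  parameter-store:
    # Keys become environment variables in the build; values are the
    # Parameter Store parameter names (hypothetical examples).
    DB_URL: /myapp/prod/db-url
    API_KEY: /myapp/prod/api-key

phases:
  build:
    commands:
      - echo "Connecting to $DB_URL"
```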

Disclaimer

This page is for educational and exam preparation reference only. It is not affiliated with Amazon or the official provider of the AWS Certified Developer - Associate exam. Candidates should refer to official documentation and training for authoritative information.

Exam Code: DVA-C02 | Q&As: 425 | Updated: 2026-02-24
