AIF-C01 Online Practice Questions


Latest AIF-C01 Exam Practice Questions

The practice questions for the AIF-C01 exam were last updated on 2026-02-24.


Question#1

A company has built an image classification model to predict plant diseases from photos of plant leaves. The company wants to evaluate how many images the model classified correctly.
Which evaluation metric should the company use to measure the model's performance?

A. R-squared score
B. Accuracy
C. Root mean squared error (RMSE)
D. Learning rate

Explanation:
Accuracy is the most appropriate metric for evaluating an image classification model: it measures the proportion of images classified correctly out of the total. For classifying plant diseases from leaf photos, accuracy directly answers the company's question of how many images the model got right.
Option A: "R-squared score" is incorrect; it measures goodness of fit in regression analysis, not classification performance.
Option B (Correct): "Accuracy" measures the proportion of correct predictions made by the model, which is exactly what the company wants to evaluate for a classification task.
Option C: "Root mean squared error (RMSE)" is incorrect; it measures prediction error in regression tasks, not classification accuracy.
Option D: "Learning rate" is incorrect; it is a training hyperparameter, not a performance metric.
Reference: AWS AI Practitioner — Evaluating Machine Learning Models on AWS: AWS documentation emphasizes using appropriate metrics, such as accuracy, for classification tasks.
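To make the metric concrete, here is a minimal sketch of how accuracy is computed. The labels are hypothetical (0 = healthy leaf, 1 = diseased leaf) and are not from any real dataset:

```python
def accuracy(y_true, y_pred):
    """Fraction of predictions that match the true labels."""
    assert len(y_true) == len(y_pred)
    correct = sum(t == p for t, p in zip(y_true, y_pred))
    return correct / len(y_true)

# Hypothetical leaf-disease labels: 0 = healthy, 1 = diseased
y_true = [1, 0, 1, 1, 0, 0, 1, 0]
y_pred = [1, 0, 0, 1, 0, 1, 1, 0]
print(accuracy(y_true, y_pred))  # 6 of 8 correct -> 0.75
```

In practice a library routine such as scikit-learn's `accuracy_score` does the same computation; the point is that accuracy is simply correct predictions divided by total predictions.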

Question#2

Which strategy will prevent model hallucinations?

A. Fact-check the output of the large language model (LLM).
B. Compare the output of the large language model (LLM) to the results of an internet search.
C. Use contextual grounding.
D. Use relevance grounding.

Question#3

A company wants to use Amazon Bedrock. The company needs to review which security aspects it is responsible for when using Amazon Bedrock under the AWS shared responsibility model.
Which security aspect is the company's responsibility?

A. Patching and updating the versions of Amazon Bedrock
B. Protecting the infrastructure that hosts Amazon Bedrock
C. Securing the company's data in transit and at rest
D. Provisioning Amazon Bedrock within the company network

Explanation:
With Amazon Bedrock, AWS manages and secures the underlying infrastructure, including patching and updating the service, under the shared responsibility model (options A and B are AWS responsibilities).
Customers are responsible for securing their own data, both in transit and at rest, using controls such as encryption and IAM policies (option C).
Option D does not apply: Amazon Bedrock is a managed service accessed through AWS APIs, so customers do not provision it within their own network.
Reference: AWS Shared Responsibility Model
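As one illustration of a customer-side control for data in transit, the sketch below builds an IAM policy statement that denies any Bedrock API call not made over TLS, using the standard `aws:SecureTransport` condition key. The policy shape and scope are examples only, not an official AWS-recommended policy:

```python
import json

# Illustrative customer-side control: deny Bedrock API calls made without
# TLS via the aws:SecureTransport condition key. This is an example policy,
# not an official AWS recommendation.
deny_insecure_transport = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyBedrockWithoutTLS",
            "Effect": "Deny",
            "Action": "bedrock:*",
            "Resource": "*",
            "Condition": {"Bool": {"aws:SecureTransport": "false"}},
        }
    ],
}

print(json.dumps(deny_insecure_transport, indent=2))
```

Encrypting stored prompts, model outputs, and customization data with AWS KMS keys is the analogous customer-side control for data at rest.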

Question#4

In which stage of the generative AI model lifecycle are tests performed to examine the model's accuracy?

A. Deployment
B. Data selection
C. Fine-tuning
D. Evaluation

Explanation:
The evaluation stage of the generative AI model lifecycle involves testing the model to assess its performance, including accuracy, coherence, and other metrics. This stage ensures the model meets the desired quality standards before deployment.
Exact Extract from AWS AI Documents:
From the AWS AI Practitioner Learning Path:
"The evaluation phase in the machine learning lifecycle involves testing the model against validation or test datasets to measure its performance metrics, such as accuracy, precision, recall, or task-specific metrics for generative AI models."
(Source: AWS AI Practitioner Learning Path, Module on Machine Learning Lifecycle)
Detailed breakdown of options:
Option A (Deployment): Deployment makes the model available for use in production. While monitoring occurs post-deployment, accuracy testing is performed earlier, in the evaluation stage.
Option B (Data selection): Data selection involves choosing and preparing data for training, not testing the model's accuracy.
Option C (Fine-tuning): Fine-tuning adjusts a pre-trained model to improve performance on a specific task, but it is not the stage where accuracy is formally tested.
Option D (Evaluation): This is the correct answer. The evaluation stage is where tests are conducted to examine the model's accuracy and other performance metrics, ensuring the model meets requirements.
Reference: AWS AI Practitioner Learning Path: Module on Machine Learning Lifecycle
Amazon SageMaker Developer Guide: Model Evaluation (https://docs.aws.amazon.com/sagemaker/latest/dg/model-evaluation.html)
AWS Documentation: Generative AI Lifecycle (https://aws.amazon.com/machine-learning/)
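The metrics named in the quoted extract (accuracy, precision, recall) are what the evaluation stage computes against a held-out test set. The sketch below shows that computation on hypothetical binary labels; real evaluations would use a proper test split and, for generative models, task-specific metrics as well:

```python
def evaluate(y_true, y_pred, positive=1):
    """Compute accuracy, precision, and recall for binary predictions."""
    tp = sum(t == positive and p == positive for t, p in zip(y_true, y_pred))
    fp = sum(t != positive and p == positive for t, p in zip(y_true, y_pred))
    fn = sum(t == positive and p != positive for t, p in zip(y_true, y_pred))
    accuracy = sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return {"accuracy": accuracy, "precision": precision, "recall": recall}

# Hypothetical held-out test set, scored before deployment
y_true = [1, 1, 0, 0, 1, 0]
y_pred = [1, 0, 0, 1, 1, 0]
print(evaluate(y_true, y_pred))
```

Running the evaluation on data the model never saw during training or fine-tuning is what distinguishes this stage from the earlier lifecycle stages in options B and C.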

Question#5

A company needs to use Amazon SageMaker AI for model training and inference. The company must comply with regulatory requirements to run SageMaker jobs in an isolated environment without internet access.
Which solution will meet these requirements?

A. Run SageMaker training and inference by using SageMaker Experiments.
B. Run SageMaker training and inference by using network isolation.
C. Encrypt the data at rest by using encryption for SageMaker geospatial capabilities.
D. Associate appropriate AWS Identity and Access Management (IAM) roles with the SageMaker jobs.

Explanation:
Network isolation is a key security feature for SageMaker. When enabled, training and inference containers cannot make outbound network calls, and it can be combined with a VPC configuration so that jobs run without internet access. Per the official SageMaker documentation:
“When you enable network isolation, your model can’t make any outbound network calls. This is useful for security and regulatory compliance when working with sensitive data.”
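For concreteness, the sketch below shows the shape of a SageMaker `CreateTrainingJob` request with `EnableNetworkIsolation` set, plus a `VpcConfig` to keep traffic inside the company's network. All names (role ARN, image URI, bucket, subnet, and security group IDs) are placeholders, and the request is only constructed, not sent:

```python
# Sketch of the CreateTrainingJob fields that enforce an isolated run:
# EnableNetworkIsolation blocks outbound calls from the training container,
# and VpcConfig confines traffic to the company's VPC. All identifiers
# below are placeholders.
training_job_request = {
    "TrainingJobName": "isolated-training-job",
    "AlgorithmSpecification": {
        "TrainingImage": "123456789012.dkr.ecr.us-east-1.amazonaws.com/example-image:latest",
        "TrainingInputMode": "File",
    },
    "RoleArn": "arn:aws:iam::123456789012:role/ExampleSageMakerRole",
    "OutputDataConfig": {"S3OutputPath": "s3://example-bucket/output/"},
    "ResourceConfig": {
        "InstanceType": "ml.m5.xlarge",
        "InstanceCount": 1,
        "VolumeSizeInGB": 50,
    },
    "StoppingCondition": {"MaxRuntimeInSeconds": 3600},
    "EnableNetworkIsolation": True,  # container cannot make outbound calls
    "VpcConfig": {
        "Subnets": ["subnet-0example"],
        "SecurityGroupIds": ["sg-0example"],
    },
}

# With boto3, this request would be submitted as:
#   boto3.client("sagemaker").create_training_job(**training_job_request)
print(training_job_request["EnableNetworkIsolation"])
```

The other options do not address isolation: SageMaker Experiments (A) tracks runs, geospatial encryption (C) covers data at rest for a specific capability, and IAM roles (D) control permissions rather than network access.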

Disclaimer

This page is for educational and exam preparation reference only. It is not affiliated with Amazon, AWS Certified AI Practitioner, or the official exam provider. Candidates should refer to official documentation and training for authoritative information.

Exam Code: AIF-C01 · Q&A: 365 Q&As · Updated: 2026-02-24
