AIF-C01 Online Practice Questions

Home / Amazon / AIF-C01

Latest AIF-C01 Exam Practice Questions

The practice questions for AIF-C01 exam was last updated on 2025-12-14 .

Viewing page 1 out of 19 pages.

Viewing questions 1 out of 99 questions.

Question#1

A company wants to use a large language model (LLM) to develop a conversational agent. The company needs to prevent the LLM from being manipulated with common prompt engineering techniques to perform undesirable actions or expose sensitive information.
Which action will reduce these risks?

A. Create a prompt template that teaches the LLM to detect attack patterns.
B. Increase the temperature parameter on invocation requests to the LL
C. Avoid using LLMs that are not listed in Amazon SageMaker.
D. Decrease the number of input tokens on invocations of the LL

Explanation:
Creating a prompt template that teaches the LLM to detect attack patterns is the most effective way to reduce the risk of the model being manipulated through prompt engineering.
Prompt Templates for Security:
A well-designed prompt template can guide the LLM to recognize and respond appropriately to potential manipulation attempts.
This strategy helps prevent the model from performing undesirable actions or exposing sensitive information by embedding security awareness directly into the prompts.
Why Option A is Correct:
Teaches Model Security Awareness: Equips the LLM to handle potentially harmful inputs by recognizing suspicious patterns.
Reduces Manipulation Risk: Helps mitigate risks associated with prompt engineering attacks by proactively preparing the LLM.
Why Other Options are Incorrect:
B. Increase the temperature parameter: This increases randomness in responses, potentially making the LLM more unpredictable and less secure.
C. Avoid LLMs not listed in SageMaker: Does not directly address the risk of prompt manipulation.
D. Decrease the number of input tokens: Does not mitigate risks related to prompt manipulation.

Question#2

A loan company is building a generative AI-based solution to offer new applicants discounts based on specific business criteria. The company wants to build and use an AI model responsibly to minimize bias that could negatively affect some customers.
Which actions should the company take to meet these requirements? (Select TWO.)

A. Detect imbalances or disparities in the data.
B. Ensure that the model runs frequently.
C. Evaluate the model's behavior so that the company can provide transparency to stakeholders.
D. Use the Recall-Oriented Understudy for Gisting Evaluation (ROUGE) technique to ensure that the model is 100% accurate.
E. Ensure that the model's inference time is within the accepted limits.

Explanation:
To build an AI model responsibly and minimize bias, it is essential to ensure fairness and transparency throughout the model development and deployment process. This involves detecting and mitigating data imbalances and thoroughly evaluating the model's behavior to understand its impact on different groups.
Option A (Correct): "Detect imbalances or disparities in the data": This is correct because identifying and addressing data imbalances or disparities is a critical step in reducing bias. AWS provides tools like Amazon SageMaker Clarify to detect bias during data preprocessing and model training.
Option C (Correct): "Evaluate the model's behavior so that the company can provide transparency to stakeholders": This is correct because evaluating the model's behavior for fairness and accuracy is key to ensuring that stakeholders understand how the model makes decisions. Transparency is a crucial aspect of responsible AI.
Option B: "Ensure that the model runs frequently" is incorrect because the frequency of model runs does not address bias.
Option D: "Use the Recall-Oriented Understudy for Gisting Evaluation (ROUGE) technique to ensure that the model is 100% accurate" is incorrect because ROUGE is a metric for evaluating the quality of text summarization models, not for minimizing bias.
Option E: "Ensure that the model's inference time is within the accepted limits" is incorrect as it relates to performance, not bias reduction.
AWS AI Practitioner
Reference: Amazon SageMaker Clarify: AWS offers tools such as SageMaker Clarify for detecting bias in datasets and models, and for understanding model behavior to ensure fairness and transparency.
Responsible AI Practices: AWS promotes responsible AI by advocating for fairness, transparency, and inclusivity in model development and deployment.

Question#3

A company is developing an ML model to make loan approvals. The company must implement a solution to detect bias in the model. The company must also be able to explain the model's predictions.
Which solution will meet these requirements?

A. Amazon SageMaker Clarify
B. Amazon SageMaker Data Wrangler
C. Amazon SageMaker Model Cards
D. AWS AI Service Cards

Explanation:
Amazon SageMaker Clarify provides built-in tools to detect bias in data and models, and to generate detailed explainability reports for model predictions, including SHAP values and feature importance.
A is correct:
“Amazon SageMaker Clarify provides bias detection, explainability for ML models, and comprehensive reports to satisfy regulatory and ethical requirements.”
(Reference: Amazon SageMaker Clarify Overview)
B (Data Wrangler) is for data preparation, not bias/explainability.
C (Model Cards) document models, but don’t detect bias or explain predictions.
D (AI Service Cards) provide transparency for AWS AI services, not custom model explainability.

Question#4

A company wants to fine-tune an ML model that is hosted on Amazon Bedrock. The company wants to use its own sensitive data that is stored in private databases in a VPC. The data needs to stay within the company's private network.
Which solution will meet these requirements?

A. Restrict access to Amazon Bedrock by using an AWS Identity and Access Management (IAM) service role.
B. Restrict access to Amazon Bedrock by using an AWS Identity and Access Management (IAM) resource policy.
C. Use AWS PrivateLink to connect the VPC and Amazon Bedrock.
D. Use AWS Key Management Service (AWS KMS) keys to encrypt the data.

Explanation:
The company wants to fine-tune an ML model on Amazon Bedrock using sensitive data stored in private databases within a VPC, ensuring the data remains within its private network. AWS PrivateLink provides a secure, private connection between a VPC and AWS services like Amazon Bedrock, allowing data to stay within the company’s network without traversing the public internet. This meets the requirement for maintaining data privacy during fine-tuning.
Exact Extract from AWS AI Documents:
From the AWS Bedrock User Guide:
"AWS PrivateLink enables you to securely connect your VPC to Amazon Bedrock without exposing data to the public internet. This is particularly useful for fine-tuning models with sensitive data, as it ensures that data remains within your private network."
(Source: AWS Bedrock User Guide, Security and Networking)
Detailed
Option A: Restrict access to Amazon Bedrock by using an AWS Identity and Access Management (IAM) service role.While IAM service roles control access to Amazon Bedrock, they do not address the requirement of keeping data within the private network during data transfer. This option is insufficient.
Option B: Restrict access to Amazon Bedrock by using an AWS Identity and Access Management (IAM) resource policy.IAM resource policies define permissions for Bedrock resources but do not ensure that data stays within the private network. This option is incorrect.
Option C: Use AWS PrivateLink to connect the VPC and Amazon Bedrock. This is the correct answer. AWS PrivateLink creates a secure, private connection between the VPC and Amazon Bedrock, ensuring that sensitive data does not leave the private network during fine-tuning, as required.
Option D: Use AWS Key Management Service (AWS KMS) keys to encrypt the data. While AWS KMS can encrypt data, encryption alone does not guarantee that data remains within the private network during transfer. This option does not fully meet the requirement.
Reference: AWS Bedrock User Guide: Security and Networking (https://docs.aws.amazon.com/bedrock/latest/userguide/security.html)
AWS Documentation: AWS PrivateLink (https://aws.amazon.com/privatelink/)
AWS AI Practitioner Learning Path: Module on Security and Networking for AI/ML Services

Question#5

A company runs a website for users to make travel reservations. The company wants an AI solution to help create consistent branding for hotels on the website. The AI solution needs to generate hotel descriptions for the website in a consistent writing style.
Which AWS service will meet these requirements?

A. Amazon Comprehend
B. Amazon Personalize
C. Amazon Rekognition
D. Amazon Bedrock

Explanation:
The correct answer is D because Amazon Bedrock provides access to foundation models (FMs) from various providers for generative AI use cases, including text generation. It supports generating content in a consistent tone, voice, or writing style using prompts or few-shot examples.
From AWS documentation:
"Amazon Bedrock allows you to build and scale generative AI applications using foundation models from AI21 Labs, Anthropic, Cohere, Meta, Mistral, Stability AI, and Amazon. These models can generate text with controlled tone and style for applications like branding, content creation, and copywriting."
Explanation of other options:
A. Amazon Comprehend is for natural language understanding, such as sentiment analysis and entity recognition, not generation.
B. Amazon Personalize is for building recommendation systems, not content generation. C. Amazon Rekognition is for image and video analysis, not text generation. Referenced AWS AI/ML Documents and Study Guides: Amazon Bedrock Developer Guide C Generative AI Use Cases
AWS Certified Machine Learning Specialty Guide C Content Generation with FMs

Exam Code: AIF-C01Q & A: 282 Q&AsUpdated:  2025-12-14

 Get All AIF-C01 Q&As