AAISM Online Practice Questions


Latest AAISM Exam Practice Questions

The practice questions for the AAISM exam were last updated on 2025-09-28.


Question#1

Which of the following is the MOST effective way to mitigate the risk of deepfake attacks?

A. Relying on human judgment for oversight
B. Limiting employee access to AI tools
C. Validating the provenance of the data source
D. Using a general-purpose large language model (LLM) to detect fraud

Explanation:
AAISM study content identifies validating the provenance of data sources as the most effective way to counter deepfake risks. Provenance validation ensures that content is authentic, verifiable, and traceable, preventing malicious synthetic media from being trusted as legitimate. Human oversight helps but cannot reliably detect sophisticated fakes. Limiting tool access reduces exposure but does not prevent external attacks. General-purpose LLMs are not optimized for fraud detection. The strongest control is verifying the origin and authenticity of data before acceptance.
Reference: AAISM Study Guide – AI Risk Management (Deepfake and Content Integrity Risks)
ISACA AI Security Management – Provenance Validation as a Defense

Question#2

Which of the following is the BEST approach for minimizing risk when integrating acceptable use policies for AI foundation models into business operations?

A. Limit model usage to predefined scenarios specified by the developer
B. Rely on the developer's enforcement mechanisms
C. Establish AI model life cycle policy and procedures
D. Implement responsible development training and awareness

Explanation:
The AAISM guidance defines risk minimization for AI deployment as requiring a formalized AI model life cycle policy and associated procedures. This ensures oversight from design to deployment, covering data handling, bias testing, monitoring, retraining, decommissioning, and acceptable use. Limiting usage to developer-defined scenarios or relying on vendor mechanisms transfers responsibility away from the organization and fails to meet governance expectations. Training and awareness support cultural alignment but cannot substitute for structured life cycle controls. Therefore, establishing a documented life cycle policy and procedures is the most comprehensive way to minimize operational, compliance, and ethical risks when integrating foundation models.
Reference: AAISM Study Guide – AI Governance and Program Management (Model Life Cycle Governance)
ISACA AI Security Guidance – Policies and Life Cycle Management

Question#3

Which area of intellectual property law presents the GREATEST challenge in determining copyright protection for AI-generated content?

A. Enforcing trademark rights associated with AI systems
B. Determining the rightful ownership of AI-generated creations
C. Protecting trade secrets in AI technologies
D. Establishing licensing frameworks for AI-generated works

Explanation:
AAISM governance content highlights that the greatest intellectual property challenge in the context of AI-generated works is determining rightful ownership. Traditional copyright law requires human authorship, but AI-generated creations blur authorship and ownership boundaries, raising legal uncertainty about who can claim rights. Trademark enforcement, trade secret protection, and licensing frameworks are established areas of IP law but do not present the same fundamental challenge as ownership attribution. For AI-generated content, the central legal dilemma is ownership of the creation.
Reference: AAISM Study Guide – AI Governance and Program Management (Intellectual Property and AI)
ISACA AI Security Management – Copyright and Ownership Challenges

Question#4

An organization utilizes AI-enabled mapping software to plan routes for delivery drivers. A driver following the AI route drives the wrong way down a one-way street, despite numerous signs.
Which of the following biases does this scenario demonstrate?

A. Selection
B. Reporting
C. Confirmation
D. Automation

Explanation:
AAISM defines automation bias as the tendency of individuals to over-rely on AI-generated outputs even when contradictory real-world evidence is available. In this scenario, the driver ignores traffic signs and follows the AI’s instructions, showing blind reliance on automation. Selection bias relates to data sampling, reporting bias refers to misrepresentation of results, and confirmation bias involves interpreting information to fit pre-existing beliefs. The most accurate description is automation bias.
Reference: AAISM Exam Content Outline – AI Risk Management (Bias Types in AI)
AI Security Management Study Guide – Automation Bias in AI Use

Question#5

Which of the following should be done FIRST when developing an acceptable use policy for generative AI?

A. Determine the scope and intended use of AI
B. Review AI regulatory requirements
C. Consult with risk management and legal
D. Review existing company policies

Explanation:
According to the AAISM framework, the first step in drafting an acceptable use policy is defining the scope and intended use of the AI system. This ensures that governance, regulatory considerations, risk assessments, and alignment with organizational policies are all tailored to the specific applications and functions the AI will serve. Once scope and intended use are clearly defined, legal, regulatory, and risk considerations can be systematically applied. Without this step, policies risk being generic and misaligned with business objectives.
Reference: AAISM Study Guide – AI Governance and Program Management (Policy Development Lifecycle)
ISACA AI Governance Guidance – Defining Scope and Use Priorities

Exam Code: AAISM | Q&As: 90 | Updated: 2025-09-28
