DEA-C02 Exam Questions 2026 – Real Practice Test with Verified Answers


Latest DEA-C02 Exam Practice Questions

The practice questions for the DEA-C02 exam were last updated on 2026-04-24.


Question#1

A Snowflake Administrator is reviewing the credit consumption of a new virtual warehouse. They notice the warehouse frequently suspends and resumes, adding latency to the first query of each new analytics session.
To balance cost and performance for this ad-hoc query workload, which warehouse parameter is most appropriate to adjust?

A. `STATEMENT_TIMEOUT_IN_SECONDS`
B. `MAX_CLUSTER_COUNT`
C. `SCALING_POLICY`
D. `AUTO_SUSPEND`
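As a reference point, the auto-suspend window is set per warehouse. A minimal sketch, assuming a hypothetical warehouse name and an illustrative value:

```sql
-- Hypothetical warehouse name; 300 seconds keeps the warehouse warm longer
-- between ad-hoc queries, trading some idle credits for lower first-query latency.
ALTER WAREHOUSE analytics_wh SET AUTO_SUSPEND = 300;
```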

Question#2

A data architect is designing a large table to store historical sales data. The table will contain over 5 billion rows and will be primarily queried by `order_date` and `customer_id`, often together. The table will be updated daily with new batches of data. The architect wants to ensure optimal query performance for filtering and joining on these columns.
Which actions should the architect take? (Select all that apply.)

A. Create a materialized view that aggregates the data by `order_date`.
B. Periodically run `ALTER TABLE ... RECLUSTER` to maintain clustering.
C. Define a clustering key on a high-cardinality column like `transaction_uuid`.
D. Ensure the daily data ingestion is ordered by `order_date` before loading.
E. Define a clustering key on `(order_date, customer_id)`.
F. Rely on Snowflake's natural data clustering to optimize performance.
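For context, a composite clustering key is declared directly on the table, and clustering health can be inspected with a system function. A sketch, assuming a hypothetical table name:

```sql
-- Illustrative DDL: a composite clustering key on the two filter/join columns.
ALTER TABLE sales_history CLUSTER BY (order_date, customer_id);

-- Report how well the table is clustered on those columns.
SELECT SYSTEM$CLUSTERING_INFORMATION('sales_history', '(order_date, customer_id)');
```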

Question#3

A data engineer is working with two large tables, `ORDERS` and `SHIPMENTS`, both containing a `REGION` column. A common query joins these two tables on their primary keys and filters for a specific region. To improve performance, the data engineer decides to create a multi-cluster warehouse.
How does a multi-cluster warehouse help optimize the performance of these queries, especially when multiple analysts are running them concurrently?

A. It automatically provisions additional, separate clusters to handle incoming queries in parallel, reducing queuing time.
B. It ensures that data for the same region from both tables is stored on the same micro-partition, speeding up the join.
C. It automatically rewrites the query to use a more efficient join algorithm.
D. It increases the amount of memory available for each individual query, preventing remote disk spilling.
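For reference, a multi-cluster warehouse is defined by its minimum and maximum cluster counts. A sketch with illustrative names and values:

```sql
-- Illustrative multi-cluster warehouse: when concurrent queries queue,
-- Snowflake spins up additional clusters (up to MAX_CLUSTER_COUNT)
-- and runs the queued queries in parallel.
CREATE WAREHOUSE IF NOT EXISTS bi_wh
  WAREHOUSE_SIZE = 'MEDIUM'
  MIN_CLUSTER_COUNT = 1
  MAX_CLUSTER_COUNT = 3
  SCALING_POLICY = 'STANDARD';
```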

Question#4

A table is created with a clustering key on `EVENT_TIMESTAMP`. The data is ingested in chronological order. An analyst runs a query with the filter `WHERE EVENT_TIMESTAMP BETWEEN '2023-01-01' AND '2023-01-02'`.
The query profile shows that a very small number of micro-partitions were scanned out of a very large total.
This is an example of:

A. Remote Spilling
B. Query Pruning
C. Concurrency Scaling
D. Result Cache
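The scenario above can be restated as a query sketch (table name is hypothetical). Because the data was loaded in chronological order and clustered on `EVENT_TIMESTAMP`, the min/max metadata of most micro-partitions falls entirely outside the filter range, so Snowflake can skip them without scanning:

```sql
-- The range predicate lets Snowflake prune micro-partitions whose
-- per-partition min/max EVENT_TIMESTAMP metadata lies outside the range.
SELECT *
FROM events
WHERE event_timestamp BETWEEN '2023-01-01' AND '2023-01-02';
```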

Question#5

A data scientist has written a Python UDF to process data using a third-party library that is not included in the default Anaconda package channel provided by Snowflake.
What are the necessary steps the data scientist must take to make this UDF work in Snowflake? (Choose 2.)

A. Upload the third-party library's Wheel or Zip file to a Snowflake internal stage.
B. The UDF must be rewritten in Java, as Python UDFs do not support third-party libraries.
C. Contact Snowflake support to have the package added to the default Anaconda channel.
D. Use the `IMPORTS` clause in the `CREATE FUNCTION` statement, referencing the path to the library in the stage.
E. Use `pip install` directly within the Python UDF code.
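To illustrate the staging-plus-`IMPORTS` pattern, here is a sketch assuming a hypothetical stage, package file, and library name (`my_int_stage`, `mylib.zip`, `mylib` are all placeholders):

```sql
-- Upload the packaged library to an internal stage first, e.g.:
--   PUT file://mylib.zip @my_int_stage;

CREATE OR REPLACE FUNCTION process_data(s STRING)
RETURNS STRING
LANGUAGE PYTHON
RUNTIME_VERSION = '3.10'
IMPORTS = ('@my_int_stage/mylib.zip')  -- staged third-party package
HANDLER = 'process'
AS
$$
# Staged .py and .zip imports are made importable inside the UDF sandbox.
import mylib  # hypothetical third-party library from the staged zip

def process(s):
    return mylib.transform(s)  # hypothetical library call
$$;
```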

Disclaimer

This page is for educational and exam preparation reference only. It is not affiliated with Snowflake, SnowPro Advanced: Data Engineer, or the official exam provider. Candidates should refer to official documentation and training for authoritative information.

Exam Code: DEA-C02 · Q&As: 91 · Updated: 2026-04-24
