A data architect is designing a large table to store historical sales data. The table will contain over 5 billion rows and will be queried primarily by `order_date` and `customer_id`, often together. The table is updated daily with new batches of data. The architect wants to ensure optimal query performance for filtering and joining on these columns.
Which actions should the architect take? (Select all that apply.)
A. Create a materialized view that aggregates the data by `order_date`.
B. Periodically run `ALTER TABLE ... RECLUSTER` to maintain clustering.
C. Define a clustering key on a high-cardinality column like `transaction_uuid`.
D. Ensure the daily data ingestion is ordered by `order_date` before loading.
E. Define a clustering key on `(order_date, customer_id)`.
F. Rely on Snowflake's natural data clustering to optimize performance.
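For context, the clustering key described in option E could be defined with Snowflake DDL along these lines (the table name `sales` and column types are illustrative, not from the question):

```sql
-- Define the clustering key at creation time (illustrative schema)
CREATE TABLE sales (
    order_date       DATE,
    customer_id      NUMBER,
    transaction_uuid VARCHAR,
    amount           NUMBER(12, 2)
)
CLUSTER BY (order_date, customer_id);

-- Or add it to an existing table
ALTER TABLE sales CLUSTER BY (order_date, customer_id);

-- Check how well the table is clustered on those columns
SELECT SYSTEM$CLUSTERING_INFORMATION('sales', '(order_date, customer_id)');
```

Listing `order_date` first follows the common guidance of ordering clustering key columns from lower to higher cardinality, which helps Snowflake prune micro-partitions on the most frequently filtered column.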