Analytics-Con-301 Online Practice Questions

Latest Analytics-Con-301 Exam Practice Questions

The practice questions for the Analytics-Con-301 exam were last updated on 2026-01-07.

Question#1

A Tableau Server customer is interested in measuring content and platform usage.
Which two features should the consultant use? Choose two.

A. Tableau Pulse
B. Tableau Server repository
C. Admin Insights page
D. Server Status page

Explanation:
Comprehensive and Detailed Explanation From Exact Extract:
Two Tableau Server features provide usage and adoption insights:
Tableau Server Repository
Stores all metadata about:
Workbooks
Data sources
User activity
View traffic
Can be queried directly for content usage and platform metrics.
Admin Insights Page
Built-in dashboards showing:
User activity
Content usage
Data source usage
Performance metrics
Designed specifically for monitoring platform adoption.
These two together give complete content and usage visibility.
Why A and D are incorrect:
A. Tableau Pulse
Available only in Tableau Cloud, not Tableau Server.
Focuses on personalized metric insights, not platform reporting.
D. Server Status Page
Shows node health and process status, not content usage or adoption analytics.
Thus, correct answers are B and C.
Tableau Server auditing and usage documentation describing repository tables.
Admin Insights documentation describing built-in content and user monitoring.
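As a rough illustration of the kind of usage question the repository and Admin Insights answer, here is a minimal pure-Python sketch. The event records, workbook names, and field names below are invented for illustration; the real repository stores comparable records in its historical event tables, which can be queried with SQL.

```python
from collections import Counter

# Hypothetical view-traffic events, invented for illustration only.
events = [
    {"workbook": "Sales Overview", "user": "amy"},
    {"workbook": "Sales Overview", "user": "ben"},
    {"workbook": "Claims Trends", "user": "amy"},
    {"workbook": "Sales Overview", "user": "amy"},
]

# Content usage: how often each workbook was viewed.
views_per_workbook = Counter(e["workbook"] for e in events)

# Platform usage: how many distinct users were active.
active_users = len({e["user"] for e in events})

print(views_per_workbook, active_users)
```

The same two aggregations (views per content item, distinct active users) are the core of what the Admin Insights dashboards surface without any SQL.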

Question#2

A company uses an extract built from Custom SQL joining Claims and Members.
Members have multiple records in both tables, causing data duplication that inflates claim cost trends.
Which approach meets performance and maintenance goals?

A. Replace the Custom SQL with a relationship between two Logical Tables: Members and Claims.
B. Replace the Custom SQL with a join between two Physical Tables: Members and Claims.
C. Use LOD calculations to ensure that claim costs are captured at the right granularity.
D. Use Table Calculations to ensure that claim costs are captured at the right granularity.

Explanation:
The problem:
Custom SQL joins two multi-row tables, causing many-to-many duplication.
This artificially multiplies claim costs.
The extract becomes heavy and slow due to Custom SQL.
Tableau’s recommended solution:
Use relationships in the logical layer instead of physical joins.
Tableau resolves many-to-many issues automatically.
Queries are generated at the appropriate granularity to avoid duplication.
This is exactly Option A.
Relationships allow the Claims facts to remain at the claim grain and Members to remain at the member grain. Tableau resolves aggregations correctly, preventing inflated values.
Why the others are incorrect:
B ― Physical Join
Would continue the same duplication problem because multi-row joins multiply rows.
C ― LODs
Would require complex calculations and are error-prone. They do NOT fix the duplication in the underlying extract.
D ― Table Calculations
Happen after Tableau aggregates the duplicated data ― too late to fix the inflated baseline numbers.
Thus, the only correct and modern solution is relationships.
Relationships documentation explaining resolution of many-to-many granularity issues.
Guidance recommending avoiding Custom SQL for performance reasons.
Logical Layer behavior preventing row-duplication errors.
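The duplication mechanism can be shown concretely. The sketch below uses invented rows (not real claims data) to mimic the scenario: a physical join multiplies matching rows, while keeping the fact table at its own grain (which is effectively what a relationship does at query time) preserves the correct total.

```python
# Hypothetical rows, invented for illustration.
claims = [
    {"member_id": 1, "claim_cost": 100.0},
    {"member_id": 1, "claim_cost": 50.0},
    {"member_id": 2, "claim_cost": 200.0},
]
members = [
    {"member_id": 1, "plan": "HMO"},   # e.g. one row per enrollment period
    {"member_id": 1, "plan": "PPO"},
    {"member_id": 2, "plan": "HMO"},
]

# Physical join (what the Custom SQL does): every claim row pairs with
# every matching member row, so claim costs are counted more than once.
joined = [
    {**c, **m}
    for c in claims
    for m in members
    if c["member_id"] == m["member_id"]
]
inflated_total = sum(row["claim_cost"] for row in joined)   # 500.0

# Relationship-style behavior: claims stay at the claim grain, so each
# claim cost is summed exactly once.
true_total = sum(c["claim_cost"] for c in claims)           # 350.0

print(f"joined total: {inflated_total}, correct total: {true_total}")
```

Member 1's two claims pair with member 1's two enrollment rows, so the joined total overstates costs by 150; summing at the claim grain avoids this.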

Question#3

A consultant used Tableau Data Catalog to determine which workbooks will be affected by a field change.
Catalog shows:
Published Data Source → 7 connected workbooks
Field search (Lineage tab) → 6 impacted workbooks
The client asks: Why 7 connected, but only 6 impacted?

A. The field is used twice in a single workbook.
B. The consultant lacked sufficient permissions to see the seventh workbook.
C. The field being altered is not used in the seventh workbook.
D. The seventh workbook is connected via Custom SQL so it didn't appear in the list.

Explanation:
Key Tableau Catalog behaviors:
Connected workbooks = any workbook linked to the published data source.
Impacted workbooks = only workbooks that use the specific field.
If a workbook connects to the data source but never uses the field, it appears as “connected” but not impacted.
This explains EXACTLY why:
7 workbooks are connected
Only 6 use the changed field
Therefore only 6 are impacted
This matches Option C.
Why the other options are incorrect:
A. Field used twice
Still counts as one workbook ― does not explain discrepancy.
B. Permission issue
If permissions blocked visibility, the data source would not list 7 connections.
D. Custom SQL use
Catalog can still detect field usage through metadata lineage; Custom SQL does NOT hide workbook dependency.
Thus, only Option C logically explains the scenario.
Data Catalog lineage rules: “Connected vs. Impacted” distinction.
Field-level impact analysis documentation.
Workbook dependency logic within Tableau Catalog.
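The connected-versus-impacted distinction reduces to a simple filter. The sketch below uses invented workbook names and a hypothetical field to mirror the scenario: seven workbooks connect to the data source, but only six reference the field being changed.

```python
# Hypothetical metadata, invented to mirror the scenario.
field = "ClaimAmount"
workbooks = {
    "WB1": ["ClaimAmount", "MemberID"],
    "WB2": ["ClaimAmount"],
    "WB3": ["ClaimAmount", "Region"],
    "WB4": ["ClaimAmount"],
    "WB5": ["ClaimAmount", "PlanType"],
    "WB6": ["ClaimAmount"],
    "WB7": ["MemberID", "Region"],   # connected, but never uses the field
}

# Connected = every workbook linked to the published data source.
connected = list(workbooks)

# Impacted = only the workbooks that actually use the changed field.
impacted = [wb for wb, fields in workbooks.items() if field in fields]

print(len(connected), len(impacted))
```

WB7 is the seventh workbook: it appears in the connected count but drops out of the impact analysis, exactly as in Option C.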

Question#4

A client is migrating their data warehouse. They visualize the data in workbooks hosted on Tableau Server with Tableau Data Management enabled and want to see how many workbooks will be impacted.
What should the consultant do to quickly identify how many workbooks will be impacted?

A. In Tableau Server, select the database from External Assets, then select the Lineage tab.
B. Leverage the Tableau Developer API to query the workbooks' metadata.
C. Complete the migration and let users report errors as they are noticed.
D. Open each workbook and identify the data source.

Explanation:
When Tableau Data Management is enabled, Tableau Catalog provides Lineage capabilities that map connections between:
External databases
Tables
Data sources
Workbooks
Fields
Tableau documentation states that the Lineage tab for any external asset (such as a database or table):
Shows all connected workbooks
Shows dependencies and impact analysis
Allows admins to instantly assess how many analytics assets will be affected by a data warehouse migration
Option A directly uses Tableau Catalog to perform exactly this task.
Option B is unnecessary because the Catalog lineage tool already provides this information without development effort.
Option C is completely inappropriate because it offers no analysis or planning.
Option D is too time-consuming and unnecessary, especially when Tableau Catalog provides an automated dependency map.
Therefore, the correct method is to use the Lineage tab in External Assets.
Tableau Catalog lineage documentation showing how to view impacted workbooks.
External Assets and data source dependency features in Tableau Data Management.
Impact analysis best practices for data warehouse migration using Tableau Catalog.

Question#5

A Tableau consultant is tasked with creating a line graph that shows daily temperature fluctuations.
The data set below is to be used to create the dashboard.
How should the consultant manipulate the data to support the business need?

[Data set not shown: five monthly rows with Avg High Temp and Avg Low Temp columns]
A. Pivot the data before the requested visualization can be created.
B. Request a new set of data that is aggregated to the day level.
C. Create a Level of Detail (LOD) calculation that will aggregate the data at the requested daily level.

Explanation:
The business requirement is:
“Create a line graph that shows daily temperature fluctuations.”
The dataset provided contains:
Only 5 rows, one per month
Two aggregated columns: Avg High Temp and Avg Low Temp
No daily values in the dataset
Tableau’s documentation states that:
Tableau cannot generate artificial granularity that does not exist in the underlying data.
LOD calculations cannot create detail that isn’t present in the source. They can only roll up or fix existing grain; they cannot fabricate lower-grain data.
Pivoting only reshapes data; it does not create missing days or introduce new rows.
When the visualization requires detail that the dataset does not contain, the correct solution is to obtain data at the required level of granularity.
Because the dataset contains monthly averages, it is impossible to show day-to-day fluctuations without having the actual daily temperatures.
Therefore, the only way to support the business need is to request daily-level data from the data provider (Option B).
Why the other options are incorrect:
A. Pivot the data
Pivoting would convert the dataset from wide format to long format (e.g., “Avg High Temp” and “Avg Low Temp” into a single “Temperature Type” field).
This does not add daily rows, so the required daily line graph still cannot be built.
C. Create an LOD calculation
LOD expressions cannot create new lower-level detail.
They only aggregate or fix existing detail.
Because the dataset contains only monthly values, an LOD cannot generate daily temperatures.
Tableau granularity and data modeling guidance stating that detail must exist in the data to be visualized.
LOD expression documentation explaining that LODs cannot create lower granularity than the source data.
Pivoting documentation explaining pivots reshape fields but do not generate new rows or finer-grain data.
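The "pivoting reshapes but cannot add rows" point can be demonstrated directly. The monthly values below are invented stand-ins for the question's data set; pivoting from wide to long turns two measure columns into one type column, yet the grain remains monthly.

```python
# Hypothetical monthly dataset, as in the question: one row per month.
rows = [
    {"month": "Jan", "avg_high": 40, "avg_low": 25},
    {"month": "Feb", "avg_high": 44, "avg_low": 28},
    {"month": "Mar", "avg_high": 55, "avg_low": 35},
    {"month": "Apr", "avg_high": 65, "avg_low": 44},
    {"month": "May", "avg_high": 75, "avg_low": 53},
]

# Pivot from wide to long: the two measure columns become a single
# "temperature_type" column with one row per (month, measure) pair.
pivoted = [
    {"month": r["month"], "temperature_type": kind, "temp": r[kind]}
    for r in rows
    for kind in ("avg_high", "avg_low")
]

# 5 months x 2 measures = 10 rows, but still zero daily rows:
# no reshaping or calculation can manufacture the missing daily detail.
print(len(rows), len(pivoted))
```

Ten long-format rows are still ten monthly values; a daily fluctuation line would need roughly 150 daily rows that simply do not exist in the source.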

Disclaimer

This page is for educational and exam preparation reference only. It is not affiliated with Salesforce, Salesforce Consultant, or the official exam provider. Candidates should refer to official documentation and training for authoritative information.

Exam Code: Analytics-Con-301 | Q&As: 100 | Updated: 2026-01-07
