
ARA-R01 Exam Dumps - SnowPro Advanced: Architect Recertification Exam

Question # 4

A Developer is having a performance issue with a Snowflake query. The query receives up to 10 different values for one parameter, performs an aggregation over the majority of a fact table, and then joins against a smaller dimension table. The parameter value is selected by the different query users when they execute the query during business hours. Both the fact and dimension tables are loaded with new data in an overnight import process.

On a Small or Medium-sized virtual warehouse, the query performs slowly. Performance is acceptable on a size Large or bigger warehouse, but there is no budget to increase costs. The Developer needs a recommendation that does not increase compute costs to run this query.

What should the Architect recommend?

A.

Create a task that will run the 10 different variations of the query corresponding to the 10 different parameters before the users come in to work. The query results will then be cached and ready to respond quickly when the users re-issue the query.

B.

Create a task that will run the 10 different variations of the query corresponding to the 10 different parameters before the users come in to work. The task will be scheduled to align with the users' working hours in order to allow the warehouse cache to be used.

C.

Enable the search optimization service on the table. When the users execute the query, the search optimization service will automatically adjust the query execution plan based on the frequently-used parameters.

D.

Create a dedicated size Large warehouse for this particular set of queries. Create a new role that has USAGE permission on this warehouse and has the appropriate read permissions over the fact and dimension tables. Have users switch to this role and use this warehouse when they want to access this data.
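
For illustration, options A and B both depend on pre-running the query via a task before business hours. A minimal sketch, assuming hypothetical warehouse, table, and column names (a task body holds a single statement, so the 10 parameter variations would need 10 tasks or a Snowflake Scripting block):

CREATE TASK warm_report_cache
  WAREHOUSE = report_wh
  SCHEDULE = 'USING CRON 0 7 * * MON-FRI UTC'  -- before users arrive
AS
  SELECT d.region_name, SUM(f.amount)
  FROM fact_sales f
  JOIN dim_region d ON f.region_id = d.region_id
  WHERE f.param_col = 'value_1'   -- one of the 10 parameter values
  GROUP BY d.region_name;

ALTER TASK warm_report_cache RESUME;  -- tasks are created suspended

Because the tables are only reloaded overnight, a user re-issuing an identical query later in the day can be served from the result cache without consuming warehouse credits.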

Question # 5

An Architect runs the following SQL query:

How can this query be interpreted?

A.

FILEROWS is a stage. FILE_ROW_NUMBER is the line number in the file.

B.

FILEROWS is the table. FILE_ROW_NUMBER is the line number in the table.

C.

FILEROWS is a file. FILE_ROW_NUMBER is the file format location.

D.

FILEROWS is the file format location. FILE_ROW_NUMBER is a stage.
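
The SQL query referenced above is not reproduced in this dump. A query of the general shape being tested, assuming FILEROWS is a named stage and MY_CSV_FORMAT is a hypothetical file format, would be:

SELECT METADATA$FILENAME,
       METADATA$FILE_ROW_NUMBER,  -- row number of each record within its staged file
       t.$1, t.$2
FROM @FILEROWS (FILE_FORMAT => 'MY_CSV_FORMAT') t;

Here @FILEROWS is a stage reference, and METADATA$FILE_ROW_NUMBER returns the line number of each record in its source file.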

Question # 6

An Architect needs to automate the daily import of two files from an external stage into Snowflake. One file has Parquet-formatted data; the other has CSV-formatted data.

How should the data be joined and aggregated to produce a final result set?

A.

Use Snowpipe to ingest the two files, then create a materialized view to produce the final result set.

B.

Create a task using Snowflake scripting that will import the files, and then call a User-Defined Function (UDF) to produce the final result set.

C.

Create a JavaScript stored procedure to read, join, and aggregate the data directly from the external stage, and then store the results in a table.

D.

Create a materialized view to read, join, and aggregate the data directly from the external stage, and use the view to produce the final result set.
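
A sketch of the scripted-task approach described in option B, with all object names hypothetical:

CREATE TASK daily_import
  WAREHOUSE = load_wh
  SCHEDULE = 'USING CRON 0 2 * * * UTC'
AS
BEGIN
  -- each file format is loaded with its own COPY statement
  COPY INTO raw_parquet
    FROM @ext_stage/parquet/
    FILE_FORMAT = (TYPE = PARQUET)
    MATCH_BY_COLUMN_NAME = CASE_INSENSITIVE;
  COPY INTO raw_csv
    FROM @ext_stage/csv/
    FILE_FORMAT = (TYPE = CSV SKIP_HEADER = 1);
END;

A UDF (or an ordinary query) can then join and aggregate RAW_PARQUET and RAW_CSV to produce the final result set.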

Question # 7

A Snowflake Architect is designing an application and tenancy strategy for an organization where strong legal isolation rules as well as multi-tenancy are requirements.

Which approach will meet these requirements if Role-Based Access Control (RBAC) is a viable option for isolating tenants?

A.

Create accounts for each tenant in the Snowflake organization.

B.

Create an object-for-each-tenant strategy if row-level security is viable for isolating tenants.

C.

Create an object-for-each-tenant strategy if row-level security is not viable for isolating tenants.

D.

Create a multi-tenant table strategy if row-level security is not viable for isolating tenants.

Question # 8

A data platform team creates two multi-cluster virtual warehouses, with the AUTO_SUSPEND value set to NULL on one and '0' on the other. What would be the execution behavior of these virtual warehouses?

A.

Setting a '0' or NULL value means the warehouses will never suspend.

B.

Setting a '0' or NULL value means the warehouses will suspend immediately.

C.

Setting a '0' or NULL value means the warehouses will suspend after the default of 600 seconds.

D.

Setting a '0' value means the warehouses will suspend immediately, and NULL means the warehouses will never suspend.
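
For reference, AUTO_SUSPEND is documented to behave the same for both settings: 0 and NULL each disable auto-suspension. Warehouse names below are hypothetical:

-- both statements leave the warehouse running until it is explicitly suspended
ALTER WAREHOUSE wh_one SET AUTO_SUSPEND = NULL;
ALTER WAREHOUSE wh_two SET AUTO_SUSPEND = 0;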

Question # 9

Why might a Snowflake Architect use a star schema model rather than a 3NF model when designing a data architecture to run in Snowflake? (Select TWO).

A.

Snowflake cannot handle the joins implied in a 3NF data model.

B.

The Architect wants to remove data duplication from the data stored in Snowflake.

C.

The Architect is designing a landing zone to receive raw data into Snowflake.

D.

The BI tool needs a data model that allows users to summarize facts across different dimensions, or to drill down from the summaries.

E.

The Architect wants to present a simple flattened single view of the data to a particular group of end users.

Question # 10

Based on the Snowflake object hierarchy, what securable objects belong directly to a Snowflake account? (Select THREE).

A.

Database

B.

Schema

C.

Table

D.

Stage

E.

Role

F.

Warehouse

Question # 11

Which Snowflake objects can be used in a data share? (Select TWO).

A.

Standard view

B.

Secure view

C.

Stored procedure

D.

External table

E.

Stream
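
A sketch of building a share from sharable objects, using hypothetical names (secure views and external tables can be granted to a share; standard views, streams, and stored procedures cannot):

CREATE SHARE market_share;
GRANT USAGE ON DATABASE market_db TO SHARE market_share;
GRANT USAGE ON SCHEMA market_db.public TO SHARE market_share;
GRANT SELECT ON VIEW market_db.public.sales_secure_v TO SHARE market_share;  -- a secure view
ALTER SHARE market_share ADD ACCOUNTS = consumer_account;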

Question # 12

A Snowflake Architect is setting up database replication to support a disaster recovery plan. The primary database has external tables.

How should the database be replicated?

A.

Create a clone of the primary database then replicate the database.

B.

Move the external tables to a database that is not replicated, then replicate the primary database.

C.

Replicate the database ensuring the replicated database is in the same region as the external tables.

D.

Share the primary database with an account in the same region that the database will be replicated to.
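
Note that replication of a database fails when the primary database contains external tables, which is why relocating them is part of the discussion. A minimal replication sketch with hypothetical organization and account names:

-- on the primary account
ALTER DATABASE prod_db ENABLE REPLICATION TO ACCOUNTS myorg.dr_account;

-- on the secondary (DR) account
CREATE DATABASE prod_db AS REPLICA OF myorg.primary_account.prod_db;
ALTER DATABASE prod_db REFRESH;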

Question # 13

An Architect is troubleshooting a query with poor performance using the QUERY_HISTORY function. The Architect observes that the COMPILATION_TIME is greater than the EXECUTION_TIME.

What is the reason for this?

A.

The query is processing a very large dataset.

B.

The query has overly complex logic.

C.

The query is queued for execution.

D.

The query is reading from remote storage.
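
The two timing columns can be compared directly, as in this sketch:

SELECT query_id,
       compilation_time,  -- milliseconds spent compiling/optimizing
       execution_time     -- milliseconds spent executing
FROM TABLE(INFORMATION_SCHEMA.QUERY_HISTORY())
WHERE compilation_time > execution_time
ORDER BY start_time DESC;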

Question # 14

Which query will identify the specific days and virtual warehouses that would benefit from a multi-cluster warehouse to improve the performance of a particular workload?

(The four candidate queries, labeled A through D, are provided as images that are not reproduced here.)

A.

Option A

B.

Option B

C.

Option C

D.

Option D

Question # 15

A company has a source system that provides JSON records for various IoT operations. The JSON is loaded directly into a persistent table with a VARIANT field. The data is quickly growing to 100s of millions of records and performance is becoming an issue. There is a generic access pattern that is used to filter on the create_date key within the VARIANT field.

What can be done to improve performance?

A.

Alter the target table to include additional fields pulled from the JSON records. This would include a create_date field with a datatype of timestamp. When this field is used in the filter, partition pruning will occur.

B.

Alter the target table to include additional fields pulled from the JSON records. This would include a create_date field with a datatype of varchar. When this field is used in the filter, partition pruning will occur.

C.

Validate the size of the warehouse being used. If the record count is approaching 100s of millions, size XL will be the minimum size required to process this amount of data.

D.

Incorporate the use of multiple tables partitioned by date ranges. When a user or process needs to query a particular date range, ensure the appropriate base table is used.
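
A sketch of materializing the key into its own typed column (option A's approach), with hypothetical table and column names:

ALTER TABLE iot_events ADD COLUMN create_date TIMESTAMP_NTZ;

UPDATE iot_events
SET create_date = payload:create_date::TIMESTAMP_NTZ;  -- backfill from the VARIANT field

-- filters on the typed column can now benefit from partition pruning
SELECT COUNT(*)
FROM iot_events
WHERE create_date >= '2024-01-01'::TIMESTAMP_NTZ;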

Question # 16

A group of Data Analysts have been granted the role ANALYST_ROLE. They need a Snowflake database where they can create and modify tables, views, and other objects to load with their own data. The Analysts should not have the ability to give other Snowflake users outside of their role access to this data.

How should these requirements be met?

A.

Grant ANALYST_ROLE OWNERSHIP on the database, but make sure that ANALYST_ROLE does not have the MANAGE GRANTS privilege on the account.

B.

Grant SYSADMIN ownership of the database, but grant the create schema privilege on the database to the ANALYST_ROLE.

C.

Make every schema in the database a managed access schema, owned by SYSADMIN, and grant create privileges on each schema to the ANALYST_ROLE for each type of object that needs to be created.

D.

Grant ANALYST_ROLE ownership on the database, but grant the OWNERSHIP on future [object type]s in the database privilege to SYSADMIN.
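
For reference, the ownership grant in option A would be issued as something like the following (the database name is hypothetical):

GRANT OWNERSHIP ON DATABASE analyst_db TO ROLE analyst_role COPY CURRENT GRANTS;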

Question # 17

A company has several sites in different regions from which the company wants to ingest data.

Which of the following will enable this type of data ingestion?

A.

The company must have a Snowflake account in each cloud region to be able to ingest data to that account.

B.

The company must replicate data between Snowflake accounts.

C.

The company should provision a reader account to each site and ingest the data through the reader accounts.

D.

The company should use a storage integration for the external stage.

Question # 18

An Architect has a VPN_ACCESS_LOGS table in the SECURITY_LOGS schema containing timestamps of the connection and disconnection, username of the user, and summary statistics.

What should the Architect do to enable the Snowflake search optimization service on this table?

A.

Assume role with OWNERSHIP on future tables and ADD SEARCH OPTIMIZATION on the SECURITY_LOGS schema.

B.

Assume role with ALL PRIVILEGES including ADD SEARCH OPTIMIZATION in the SECURITY_LOGS schema.

C.

Assume role with OWNERSHIP on VPN_ACCESS_LOGS and ADD SEARCH OPTIMIZATION in the SECURITY_LOGS schema.

D.

Assume role with ALL PRIVILEGES on VPN_ACCESS_LOGS and ADD SEARCH OPTIMIZATION in the SECURITY_LOGS schema.
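
Enabling the service itself is a single statement; the question turns on the privileges the executing role must hold (OWNERSHIP on the table and ADD SEARCH OPTIMIZATION on its schema):

ALTER TABLE security_logs.vpn_access_logs ADD SEARCH OPTIMIZATION;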

Question # 19

An Architect needs to improve the performance of reports that pull data from multiple Snowflake tables, join, and then aggregate the data. Users access the reports using several dashboards. There are performance issues on Monday mornings between 9:00am and 11:00am, when many users check the sales reports.

The size of the group has increased from 4 to 8 users. Waiting times to refresh the dashboards have increased significantly. Currently this workload is being served by a virtual warehouse with the following parameters:

AUTO_RESUME = TRUE
AUTO_SUSPEND = 60
SIZE = Medium

What is the MOST cost-effective way to increase the availability of the reports?

A.

Use materialized views and pre-calculate the data.

B.

Increase the warehouse to size Large and set auto_suspend = 600.

C.

Use a multi-cluster warehouse in maximized mode with 2 size Medium clusters.

D.

Use a multi-cluster warehouse in auto-scale mode with 1 size Medium cluster, and set min_cluster_count = 1 and max_cluster_count = 4.
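
A sketch of converting the existing warehouse to auto-scale mode as described in option D (the warehouse name is hypothetical; multi-cluster warehouses require Enterprise edition or higher):

ALTER WAREHOUSE report_wh SET
  MIN_CLUSTER_COUNT = 1
  MAX_CLUSTER_COUNT = 4;  -- extra clusters start only while queries queue

Size, AUTO_SUSPEND, and AUTO_RESUME stay as already configured, so costs rise only during the Monday-morning peak.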

Question # 20

Which feature provides the capability to define an alternate cluster key for a table with an existing cluster key?

A.

External table

B.

Materialized view

C.

Search optimization

D.

Result cache
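
For illustration, a materialized view can carry its own CLUSTER BY, giving an alternate physical ordering of the same data (all names below are hypothetical):

-- base table SALES is clustered on ORDER_DATE
CREATE MATERIALIZED VIEW sales_by_customer
  CLUSTER BY (customer_id)
AS
  SELECT customer_id, order_date, amount
  FROM sales;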

Question # 21

The following table exists in the production database:

A regulatory requirement states that the company must mask the username for events that are older than six months based on the current date when the data is queried.

How can the requirement be met without duplicating the event data, while making sure it is applied when creating views using the table or cloning the table?

A.

Use a masking policy on the username column using an entitlement table with valid dates.

B.

Use a row level policy on the user_events table using an entitlement table with valid dates.

C.

Use a masking policy on the username column with event_timestamp as a conditional column.

D.

Use a secure view on the user_events table using a case statement on the username column.
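
A sketch of a conditional masking policy (option C's approach), with a hypothetical policy name; the conditional column lets the policy inspect event_timestamp while masking username:

CREATE MASKING POLICY mask_old_usernames
  AS (username STRING, event_timestamp TIMESTAMP_NTZ)
  RETURNS STRING ->
  CASE
    WHEN event_timestamp < DATEADD(month, -6, CURRENT_DATE()) THEN '*****'
    ELSE username
  END;

ALTER TABLE user_events MODIFY COLUMN username
  SET MASKING POLICY mask_old_usernames USING (username, event_timestamp);

Because the policy is attached to the column, it follows the data into views and clones without duplicating the event data.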

Question # 22

A company has an external vendor who puts data into Google Cloud Storage. The company's Snowflake account is set up in Azure.

What would be the MOST efficient way to load data from the vendor into Snowflake?

A.

Ask the vendor to create a Snowflake account, load the data into Snowflake and create a data share.

B.

Create an external stage on Google Cloud Storage and use the external table to load the data into Snowflake.

C.

Copy the data from Google Cloud Storage to Azure Blob storage using external tools and load data from Blob storage to Snowflake.

D.

Create a Snowflake Account in the Google Cloud Platform (GCP), ingest data into this account and use data replication to move the data from GCP to Azure.
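
For context, a Snowflake account running on Azure can stage data directly from Google Cloud Storage. A minimal sketch with hypothetical object names and bucket:

CREATE STORAGE INTEGRATION gcs_int
  TYPE = EXTERNAL_STAGE
  STORAGE_PROVIDER = 'GCS'
  ENABLED = TRUE
  STORAGE_ALLOWED_LOCATIONS = ('gcs://vendor-bucket/');

CREATE STAGE vendor_stage
  URL = 'gcs://vendor-bucket/data/'
  STORAGE_INTEGRATION = gcs_int;

COPY INTO vendor_table
FROM @vendor_stage
FILE_FORMAT = (TYPE = CSV);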

Question # 23

Which of the following ingestion methods can be used to load near real-time data by using the messaging services provided by a cloud provider?

A.

Snowflake Connector for Kafka

B.

Snowflake streams

C.

Snowpipe

D.

Spark
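
A Snowpipe definition that relies on the cloud provider's messaging service for event notifications might look like this sketch (names are hypothetical):

CREATE PIPE iot_pipe
  AUTO_INGEST = TRUE  -- triggered by S3/SQS, Azure Event Grid, or GCS Pub/Sub notifications
AS
  COPY INTO iot_raw
  FROM @iot_stage
  FILE_FORMAT = (TYPE = JSON);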

Question # 24

A company has a Snowflake account named ACCOUNTA in AWS us-east-1 region. The company stores its marketing data in a Snowflake database named MARKET_DB. One of the company’s business partners has an account named PARTNERB in Azure East US 2 region. For marketing purposes the company has agreed to share the database MARKET_DB with the partner account.

Which of the following steps MUST be performed for the account PARTNERB to consume data from the MARKET_DB database?

A.

Create a new account (called AZABC123) in Azure East US 2 region. From account ACCOUNTA create a share of database MARKET_DB, create a new database out of this share locally in AWS us-east-1 region, and replicate this new database to AZABC123 account. Then set up data sharing to the PARTNERB account.

B.

From account ACCOUNTA create a share of database MARKET_DB, and create a new database out of this share locally in AWS us-east-1 region. Then make this database the provider and share it with the PARTNERB account.

C.

Create a new account (called AZABC123) in Azure East US 2 region. From account ACCOUNTA replicate the database MARKET_DB to AZABC123 and from this account set up the data sharing to the PARTNERB account.

D.

Create a share of database MARKET_DB, and create a new database out of this share locally in AWS us-east-1 region. Then replicate this database to the partner’s account PARTNERB.

Question # 25

An Architect with the ORGADMIN role wants to change a Snowflake account from an Enterprise edition to a Business Critical edition.

How should this be accomplished?

A.

Run an ALTER ACCOUNT command and create a tag of EDITION and set the tag to Business Critical.

B.

Use the account's ACCOUNTADMIN role to change the edition.

C.

Failover to a new account in the same region and specify the new account's edition upon creation.

D.

Contact Snowflake Support and request that the account's edition be changed.

Question # 26

When using the Snowflake Connector for Kafka, what data formats are supported for the messages? (Choose two.)

A.

CSV

B.

XML

C.

Avro

D.

JSON

E.

Parquet

Question # 27

A user has the appropriate privilege to see unmasked data in a column.

If the user loads this column data into another column that does not have a masking policy, what will occur?

A.

Unmasked data will be loaded in the new column.

B.

Masked data will be loaded into the new column.

C.

Unmasked data will be loaded into the new column but only users with the appropriate privileges will be able to see the unmasked data.

D.

Unmasked data will be loaded into the new column and no users will be able to see the unmasked data.

Question # 28

An Architect for a multi-national transportation company has a system that is used to check the weather conditions along vehicle routes. The data is provided to drivers.

The weather information is delivered regularly by a third-party company, and this information is generated as a JSON structure. The data is then loaded into Snowflake in a column with a VARIANT data type. This table is directly queried to deliver the statistics to the drivers with minimum time lapse.

A single entry includes (but is not limited to):

- Weather condition: cloudy, sunny, rainy, etc.

- Degree

- Longitude and latitude

- Timeframe

- Location address

- Wind

The table holds more than 10 years' worth of data in order to deliver statistics from different years and locations. The amount of data in the table increases every day.

The drivers report that they are not receiving the weather statistics for their locations in time.

What can the Architect do to deliver the statistics to the drivers faster?

A.

Create an additional table in the schema for longitude and latitude. Determine a regular task to fill this information by extracting it from the JSON dataset.

B.

Add search optimization service on the variant column for longitude and latitude in order to query the information by using specific metadata.

C.

Divide the table into several tables for each year by using the timeframe information from the JSON dataset in order to process the queries in parallel.

D.

Divide the table into several tables for each location by using the location address information from the JSON dataset in order to process the queries in parallel.

Question # 29

When loading data into a table that captures the load time in a column with a default value of either CURRENT_TIME() or CURRENT_TIMESTAMP(), what will occur?

A.

All rows loaded using a specific COPY statement will have varying timestamps based on when the rows were inserted.

B.

Any rows loaded using a specific COPY statement will have varying timestamps based on when the rows were read from the source.

C.

Any rows loaded using a specific COPY statement will have varying timestamps based on when the rows were created in the source.

D.

All rows loaded using a specific COPY statement will have the same timestamp value.
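
A default-value column of this kind is evaluated once per statement, which the following sketch illustrates (names are hypothetical):

CREATE TABLE load_audit (
  id NUMBER,
  msg STRING,
  load_ts TIMESTAMP_LTZ DEFAULT CURRENT_TIMESTAMP()
);

-- every row inserted by this single COPY receives the same load_ts value
COPY INTO load_audit (id, msg)
FROM (SELECT $1, $2 FROM @my_stage)
FILE_FORMAT = (TYPE = CSV);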

Question # 30

A company is storing large numbers of small JSON files (ranging from 1-4 bytes) that are received from IoT devices and sent to a cloud provider. In any given hour, 100,000 files are added to the cloud provider.

What is the MOST cost-effective way to bring this data into a Snowflake table?

A.

An external table

B.

A pipe

C.

A stream

D.

A copy command at regular intervals
