
Associate-Data-Practitioner Exam Dumps - Google Cloud Associate Data Practitioner (ADP Exam)

Question # 17

Your company’s customer support audio files are stored in a Cloud Storage bucket. You plan to analyze the audio files’ metadata and file content in BigQuery and to run inference on them by using BigQuery ML. You need to create a corresponding table in BigQuery that represents the bucket containing the audio files. What should you do?

A. Create an external table.
B. Create a temporary table.
C. Create a native table.
D. Create an object table.
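
Option D refers to a BigQuery object table: the table type that exposes unstructured Cloud Storage objects, such as audio files, to SQL and BigQuery ML. Below is a minimal sketch of the object-table DDL issued through the Python client; the project, dataset, connection, and bucket names are all hypothetical.

```python
from google.cloud import bigquery

client = bigquery.Client()

# An object table surfaces each Cloud Storage object as a row with metadata
# columns (uri, size, content_type, ...), making the audio files queryable
# and usable as input to BigQuery ML inference.
ddl = """
CREATE OR REPLACE EXTERNAL TABLE `my_dataset.support_audio`
WITH CONNECTION `my-project.us.my-connection`
OPTIONS (
  object_metadata = 'SIMPLE',
  uris = ['gs://support-audio-bucket/*']
)
"""
client.query(ddl).result()
```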

Question # 18

Your organization has highly sensitive data that gets updated once a day and is stored across multiple datasets in BigQuery. You need to provide a new data analyst access to query specific data in BigQuery while preventing access to sensitive data. What should you do?

A. Grant the data analyst the BigQuery Job User IAM role in the Google Cloud project.
B. Create a materialized view with the limited data in a new dataset. Grant the data analyst the BigQuery Data Viewer IAM role on the dataset and the BigQuery Job User IAM role in the Google Cloud project.
C. Create a new Google Cloud project, and copy the limited data into a BigQuery table. Grant the data analyst the BigQuery Data Owner IAM role in the new Google Cloud project.
D. Grant the data analyst the BigQuery Data Viewer IAM role in the Google Cloud project.
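
To illustrate option B, here is a minimal sketch using the BigQuery Python client: a materialized view that exposes only the non-sensitive columns, plus a dataset-level read grant (the dataset ACL entry READER corresponds to the BigQuery Data Viewer role). The dataset, table, column, and email names are hypothetical, and the project-level BigQuery Job User grant would be made separately in IAM.

```python
from google.cloud import bigquery

client = bigquery.Client()

# Materialized view over only the non-sensitive columns; it refreshes
# automatically as the daily updates land in the base table.
client.query("""
CREATE MATERIALIZED VIEW `analyst_ds.daily_orders` AS
SELECT order_id, order_date, region
FROM `prod_ds.orders`
""").result()

# Dataset-level read access on the new dataset only; the analyst never
# gets access to the datasets holding sensitive data.
dataset = client.get_dataset("analyst_ds")
entries = list(dataset.access_entries)
entries.append(bigquery.AccessEntry("READER", "userByEmail", "analyst@example.com"))
dataset.access_entries = entries
client.update_dataset(dataset, ["access_entries"])
```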

Question # 19

You recently inherited the task of managing Dataflow streaming pipelines in your organization and noticed that the proper access had not been provisioned to you. You need to request a Google-provided IAM role so you can restart the pipelines. You want to follow the principle of least privilege. What should you do?

A. Request the Dataflow Developer role.
B. Request the Dataflow Viewer role.
C. Request the Dataflow Worker role.
D. Request the Dataflow Admin role.
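
For context on option A: the Dataflow Developer role (roles/dataflow.developer) allows managing Dataflow jobs, including restarting pipelines, without the broader resource permissions bundled into Dataflow Admin. A minimal sketch of granting it at the project level through the Resource Manager client; the project ID and user email are hypothetical.

```python
from google.cloud import resourcemanager_v3
from google.iam.v1 import policy_pb2

client = resourcemanager_v3.ProjectsClient()
resource = "projects/my-project"  # hypothetical project ID

# Read-modify-write of the project IAM policy to add the binding.
policy = client.get_iam_policy(request={"resource": resource})
policy.bindings.append(
    policy_pb2.Binding(
        role="roles/dataflow.developer",
        members=["user:data-eng@example.com"],
    )
)
client.set_iam_policy(request={"resource": resource, "policy": policy})
```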

Question # 20

You are designing a pipeline to process data files that arrive in Cloud Storage by 3:00 am each day. Data processing is performed in stages, where the output of one stage becomes the input of the next. Each stage takes a long time to run. Occasionally a stage fails, and you have to address the problem. You need to ensure that the final output is generated as quickly as possible. What should you do?

A. Design a Spark program that runs under Dataproc. Code the program to wait for user input when an error is detected. Rerun the last action after correcting any stage output data errors.
B. Design the pipeline as a set of PTransforms in Dataflow. Restart the pipeline after correcting any stage output data errors.
C. Design the workflow as a Cloud Workflows instance. Code the workflow to jump to a given stage based on an input parameter. Rerun the workflow after correcting any stage output data errors.
D. Design the processing as a directed acyclic graph (DAG) in Cloud Composer. Clear the state of the failed task after correcting any stage output data errors.
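
To illustrate option D: in Cloud Composer (managed Apache Airflow), each stage becomes a task in a DAG, and clearing a failed task re-runs only that task and its downstream dependents, so stages that already succeeded are not repeated. A minimal sketch; the stage commands are placeholders.

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator

# If stage_2 fails, fix the stage output data, clear stage_2 in the Airflow
# UI, and only stage_2 and stage_3 re-run; stage_1's work is preserved.
with DAG(
    dag_id="daily_file_processing",
    start_date=datetime(2024, 1, 1),
    schedule_interval="0 3 * * *",  # data files arrive by 3:00 am
    catchup=False,
) as dag:
    stage_1 = BashOperator(task_id="stage_1", bash_command="echo run-stage-1")
    stage_2 = BashOperator(task_id="stage_2", bash_command="echo run-stage-2")
    stage_3 = BashOperator(task_id="stage_3", bash_command="echo run-stage-3")

    stage_1 >> stage_2 >> stage_3
```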

Question # 21

Your organization is building a new application on Google Cloud. Several data files will need to be stored in Cloud Storage. Your organization has approved only two specific cloud regions where these data files can reside. You need to determine a Cloud Storage bucket strategy that includes automated high availability. What should you do?

A. Create a dual-region bucket, and upload the files to this bucket.
B. Create a single-region bucket in each of the two regions, and use the gcloud storage command to replicate the data across the buckets in both regions.
C. Create a multi-region bucket, and upload the files to this bucket.
D. Create a single-region bucket in each of the two regions, and use Storage Transfer Service to replicate the data across the buckets in both regions.
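
For option A: a dual-region bucket pairs two specific regions of your choosing and replicates objects between them automatically, which satisfies the two-approved-regions constraint with built-in high availability and no replication jobs to manage. A minimal sketch with the Cloud Storage Python client (recent versions accept a data_locations argument); the bucket name and region pair are hypothetical.

```python
from google.cloud import storage

client = storage.Client()

# Objects are stored redundantly across exactly the two named regions;
# replication between them is automatic.
bucket = client.create_bucket(
    "my-approved-data-files",  # hypothetical bucket name
    location="US",
    data_locations=["US-EAST1", "US-WEST1"],
)
print(bucket.location_type)  # "dual-region"
```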

Question # 22

You manage a web application that stores data in a Cloud SQL database. You need to improve the read performance of the application by offloading read traffic from the primary database instance. You want to implement a solution that minimizes effort and cost. What should you do?

A. Use Cloud CDN to cache frequently accessed data.
B. Store frequently accessed data in a Memorystore instance.
C. Migrate the database to a larger Cloud SQL instance.
D. Enable automatic backups, and create a read replica of the Cloud SQL instance.
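
For option D: a read replica is a managed, continuously updated copy of the primary that can serve read traffic without re-architecting the application. A minimal sketch using the Cloud SQL Admin API through the Google API Python client; the instance names, region, and tier are hypothetical. Note that the primary must have automated backups enabled before a replica can be created.

```python
from googleapiclient import discovery

sqladmin = discovery.build("sqladmin", "v1")

# Creating an instance with masterInstanceName set makes it a read replica
# of the named primary; point the app's read-only queries at the replica.
sqladmin.instances().insert(
    project="my-project",
    body={
        "name": "web-db-replica",
        "masterInstanceName": "web-db",  # the existing primary instance
        "region": "us-central1",
        "settings": {"tier": "db-custom-2-7680"},
    },
).execute()
```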

Question # 23

Your company is migrating their batch transformation pipelines to Google Cloud. You need to choose a solution that supports programmatic transformations using only SQL. You also want the technology to support Git integration for version control of your pipelines. What should you do?

A. Use Cloud Data Fusion pipelines.
B. Use Dataform workflows.
C. Use Dataflow pipelines.
D. Use Cloud Composer operators.
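
For option B: Dataform defines transformations as SQL in .sqlx files and keeps the entire workflow in a Git-linked repository, so pipelines are both SQL-only and version controlled. A sketch of what one such file contains, written out from Python here for illustration; the file, table, and column names are hypothetical.

```python
# A Dataform transformation is a .sqlx file: a small config header plus SQL.
# ${ref(...)} declares a dependency on an upstream table in the same repo.
sqlx = """
config { type: "table" }

SELECT
  customer_id,
  SUM(order_total) AS lifetime_value
FROM ${ref("raw_orders")}
GROUP BY customer_id
"""

# In a real repository this would live under definitions/ and be committed
# to the Git remote that Dataform is linked to.
with open("customer_ltv.sqlx", "w", encoding="utf-8") as f:
    f.write(sqlx)
```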

Question # 24

Your organization uses a BigQuery table that is partitioned by ingestion time. You need to remove data that is older than one year to reduce your organization’s storage costs. You want to use the most efficient approach while minimizing cost. What should you do?

A. Create a scheduled query that periodically runs an UPDATE statement in SQL that sets the “deleted” column to “yes” for data that is more than one year old. Create a view that filters out rows that have been marked deleted.
B. Create a view that filters out rows that are older than one year.
C. Require users to specify a partition filter using the ALTER TABLE statement in SQL.
D. Set the table partition expiration period to one year using the ALTER TABLE statement in SQL.
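
For option D: setting a partition expiration makes BigQuery drop ingestion-time partitions older than the threshold automatically, so storage genuinely shrinks with no scheduled queries or per-row updates to pay for. A minimal sketch of the DDL issued through the Python client; the dataset and table names are hypothetical.

```python
from google.cloud import bigquery

client = bigquery.Client()

# Partitions whose ingestion time is older than 365 days are deleted
# automatically; no view, scheduled query, or manual cleanup is needed.
client.query("""
ALTER TABLE `my_dataset.events`
SET OPTIONS (partition_expiration_days = 365)
""").result()
```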
