
Associate-Data-Practitioner Exam Dumps - Google Cloud Associate Data Practitioner (ADP Exam)

Question # 4

You work for a retail company that collects customer data from various sources:

    Online transactions: Stored in a MySQL database

    Customer feedback: Stored as text files on a company server

    Social media activity: Streamed in real-time from social media platforms

You need to design a data pipeline to extract and load the data into the appropriate Google Cloud storage system(s) for further analysis and ML model training. What should you do?

A.

Copy the online transactions data into Cloud SQL for MySQL. Import the customer feedback into BigQuery. Stream the social media activity into Cloud Storage.

B.

Extract and load the online transactions data into BigQuery. Load the customer feedback data into Cloud Storage. Stream the social media activity by using Pub/Sub and Dataflow, and store the data in BigQuery.

C.

Extract and load the online transactions data, customer feedback data, and social media activity into Cloud Storage.

D.

Extract and load the online transactions data into Bigtable. Import the customer feedback data into Cloud Storage. Store the social media activity in Cloud SQL for MySQL.
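
For reference, the ingestion paths these options mention can be sketched with the Google Cloud Python clients: a batch load into BigQuery, a file copy into Cloud Storage, and a streamed event published to Pub/Sub (which a Dataflow job would then read and write to BigQuery). This is a minimal illustration only; the project, bucket, topic, table, and file names are hypothetical placeholders, not values from the question.

```python
# Illustrative only; project, bucket, topic, and table names are placeholders.
from google.cloud import bigquery, pubsub_v1, storage

PROJECT = "my-project"

# Batch-load an exported transactions file into BigQuery.
bq = bigquery.Client(project=PROJECT)
load_job = bq.load_table_from_uri(
    "gs://my-landing-bucket/transactions/export.csv",
    "my-project.retail.online_transactions",
    job_config=bigquery.LoadJobConfig(
        source_format=bigquery.SourceFormat.CSV,
        autodetect=True,
        skip_leading_rows=1,
    ),
)
load_job.result()  # wait for the load job to finish

# Copy a customer feedback text file into Cloud Storage.
gcs = storage.Client(project=PROJECT)
gcs.bucket("my-feedback-bucket").blob("feedback/2024-01.txt").upload_from_filename(
    "/data/feedback/2024-01.txt"
)

# Publish a social media event to Pub/Sub; a downstream Dataflow pipeline
# would consume the subscription and write rows to BigQuery.
publisher = pubsub_v1.PublisherClient()
topic_path = publisher.topic_path(PROJECT, "social-media-activity")
publisher.publish(topic_path, data=b'{"user_id": 123, "event": "share"}').result()
```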

Question # 5

You are predicting customer churn for a subscription-based service. You have a 50 PB historical customer dataset in BigQuery that includes demographics, subscription information, and engagement metrics. You want to build a churn prediction model with minimal overhead. You want to follow the Google-recommended approach. What should you do?

A.

Export the data from BigQuery to a local machine. Use scikit-learn in a Jupyter notebook to build the churn prediction model.

B.

Use Dataproc to create a Spark cluster. Use Spark MLlib within the cluster to build the churn prediction model.

C.

Create a Looker dashboard that is connected to BigQuery. Use LookML to predict churn.

D.

Use the BigQuery Python client library in a Jupyter notebook to query and preprocess the data in BigQuery. Use the CREATE MODEL statement in BigQuery ML to train the churn prediction model.
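
For reference, a BigQuery ML model is trained with a CREATE MODEL statement that runs inside BigQuery, and that statement can be issued from the Python client library. A minimal sketch follows, assuming a hypothetical dataset, feature table, and label column; it is not tied to any particular option being correct.

```python
# Illustrative only; dataset, table, and column names are placeholders.
from google.cloud import bigquery

client = bigquery.Client()

create_model_sql = """
CREATE OR REPLACE MODEL `my-project.customer_data.churn_model`
OPTIONS (
  model_type = 'LOGISTIC_REG',
  input_label_cols = ['churned']
) AS
SELECT
  age,
  subscription_plan,
  monthly_engagement_score,
  churned
FROM `my-project.customer_data.customer_features`
"""

client.query(create_model_sql).result()  # training runs inside BigQuery
```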

Question # 6

Your company has developed a website that allows users to upload and share video files. These files are most frequently accessed and shared when they are initially uploaded. Over time, the files are accessed and shared less frequently, although some old video files may remain very popular. You need to design a storage system that is simple and cost-effective. What should you do?

A.

Create a single-region bucket with custom Object Lifecycle Management policies based on upload date.

B.

Create a single-region bucket with Autoclass enabled.

C.

Create a single-region bucket. Configure a Cloud Scheduler job that runs every 24 hours and changes the storage class based on upload date.

D.

Create a single-region bucket with Archive as the default storage class.
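
For comparison, Object Lifecycle Management rules of the kind option A describes can be configured with the google-cloud-storage client, whereas Autoclass (option B) is a bucket-level setting rather than a rule set. A minimal sketch of the lifecycle approach follows; the bucket name and age thresholds are hypothetical.

```python
# Illustrative only; the bucket name and age thresholds are placeholders.
from google.cloud import storage

client = storage.Client()
bucket = client.get_bucket("my-video-bucket")

# Move objects to colder storage classes as they age since upload.
bucket.add_lifecycle_set_storage_class_rule("NEARLINE", age=30)
bucket.add_lifecycle_set_storage_class_rule("COLDLINE", age=90)
bucket.add_lifecycle_set_storage_class_rule("ARCHIVE", age=365)
bucket.patch()  # persist the updated lifecycle configuration
```

Note that lifecycle rules like these key off object age and similar conditions rather than how often an object is actually accessed, which is the trade-off the question's options turn on.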

Question # 7

You are a Looker analyst. You need to add a new field to your Looker report that generates SQL that will run against your company's database. You do not have the Develop permission. What should you do?

A.

Create a new field in the LookML layer, refresh your report, and select your new field from the field picker.

B.

Create a calculated field using the Add a field option in Looker Studio, and add it to your report.

C.

Create a table calculation from the field picker in Looker, and add it to your report.

D.

Create a custom field from the field picker in Looker, and add it to your report.

Question # 8

You manage a BigQuery table that is used for critical end-of-month reports. The table is updated weekly with new sales data. You want to prevent data loss and reporting issues if the table is accidentally deleted. What should you do?

A.

Configure the time travel duration on the table to be exactly seven days. On deletion, re-create the deleted table solely from the time travel data.

B.

Schedule the creation of a new snapshot of the table once a week. On deletion, re-create the deleted table using the snapshot and time travel data.

C.

Create a clone of the table. On deletion, re-create the deleted table by copying the content of the clone.

D.

Create a view of the table. On deletion, re-create the deleted table from the view and time travel data.
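
For reference, a table snapshot of the kind option B describes is created with BigQuery DDL, and a deleted base table can be re-created from it with a table clone. A minimal sketch using the Python client follows; the project, dataset, and table names and the expiration window are hypothetical.

```python
# Illustrative only; project, dataset, and table names are placeholders.
from google.cloud import bigquery

client = bigquery.Client()

# Take a snapshot of the reporting table (the weekly schedule itself would
# live in a scheduled query or an external scheduler). A snapshot is a
# read-only, point-in-time copy of the base table.
client.query("""
CREATE SNAPSHOT TABLE `my-project.sales.monthly_report_snapshot`
CLONE `my-project.sales.monthly_report`
OPTIONS (expiration_timestamp = TIMESTAMP_ADD(CURRENT_TIMESTAMP(), INTERVAL 35 DAY))
""").result()

# If the base table is accidentally deleted, re-create it from the snapshot.
client.query("""
CREATE TABLE `my-project.sales.monthly_report`
CLONE `my-project.sales.monthly_report_snapshot`
""").result()
```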
