
Professional-Cloud-Architect Exam Dumps - Google Certified Professional - Cloud Architect (GCP)

Question # 9

For this question, refer to the TerramEarth case study. You need to implement a reliable, scalable GCP solution for the data warehouse for your company, TerramEarth. Considering the TerramEarth business and technical requirements, what should you do?

A.

Replace the existing data warehouse with BigQuery. Use table partitioning.

B.

Replace the existing data warehouse with a Compute Engine instance with 96 CPUs.

C.

Replace the existing data warehouse with BigQuery. Use federated data sources.

D.

Replace the existing data warehouse with a Compute Engine instance with 96 CPUs. Add an additional Compute Engine pre-emptible instance with 32 CPUs.
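For reference, BigQuery table partitioning (option A) can be set up from the `bq` CLI. The sketch below is illustrative only; the project, dataset, table, and schema names are hypothetical and not taken from the case study.

```shell
# Illustrative sketch: create a dataset and an ingestion-time-partitioned
# table in BigQuery. All names below are hypothetical.
bq mk --dataset my_project:telemetry

# Partition by day, expiring partitions after 90 days (7776000 seconds).
bq mk --table \
  --time_partitioning_type=DAY \
  --time_partitioning_expiration=7776000 \
  my_project:telemetry.vehicle_events \
  vehicle_id:STRING,event_time:TIMESTAMP,payload:STRING
```

Partitioning keeps queries over recent data cheap, since BigQuery scans only the partitions a query touches rather than the full table.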

Question # 10

For this question, refer to the Dress4Win case study.

At Dress4Win, an operations engineer wants to create a low-cost solution to remotely archive copies of database backup files. The database files are compressed tar files stored in their current data center. How should they proceed?

A.

Create a cron script using gsutil to copy the files to a Coldline Storage bucket.

B.

Create a cron script using gsutil to copy the files to a Regional Storage bucket.

C.

Create a Cloud Storage Transfer Service job to copy the files to a Coldline Storage bucket.

D.

Create a Cloud Storage Transfer Service job to copy the files to a Regional Storage bucket.
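A cron-plus-`gsutil` setup (options A and B) can be sketched as below. The bucket name, region, and paths are hypothetical; this is illustrative, not a production script.

```shell
# Illustrative crontab entry: run the archive script nightly at 02:00.
# 0 2 * * * /opt/scripts/archive_backups.sh

# /opt/scripts/archive_backups.sh -- hypothetical paths and bucket name.
#!/bin/bash
set -euo pipefail

# Create the Coldline bucket once (the -c flag sets the storage class).
gsutil mb -c coldline -l us-central1 gs://example-db-archive || true

# Sync the local backup directory to the bucket in parallel.
gsutil -m rsync -r /var/backups/db gs://example-db-archive
```

Coldline pricing suits rarely accessed archives, which is why it beats Regional storage for a low-cost backup target in this scenario.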

Question # 11

You have deployed an application on Anthos clusters (formerly Anthos GKE). According to the SRE practices at your company, you need to be alerted if the request latency is above a certain threshold for a specified amount of time. What should you do?

A.

Enable the Cloud Trace API on your project and use Cloud Monitoring Alerts to send an alert based on the Cloud Trace metrics.

B.

Configure Anthos Config Management on your cluster and create a YAML file that defines the SLO and alerting policy you want to deploy in your cluster.

C.

Use Cloud Profiler to track the request latency. Create a custom metric in Cloud Monitoring based on the results of Cloud Profiler, and create an Alerting Policy in case this metric exceeds the threshold.

D.

Install Anthos Service Mesh on your cluster. Use the Google Cloud Console to define a Service Level Objective (SLO).

Question # 12

Your company has multiple on-premises systems that serve as sources for reporting. The data has not been maintained well and has become degraded over time. You want to use Google-recommended practices to detect anomalies in your company data. What should you do?

A.

Upload your files into Cloud Storage. Use Cloud Datalab to explore and clean your data.

B.

Upload your files into Cloud Storage. Use Cloud Dataprep to explore and clean your data.

C.

Connect Cloud Datalab to your on-premises systems. Use Cloud Datalab to explore and clean your data.

D.

Connect Cloud Dataprep to your on-premises systems. Use Cloud Dataprep to explore and clean your data.
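Staging the files in Cloud Storage (the first step in options A and B) can be sketched as follows. The bucket name, region, and source paths are hypothetical.

```shell
# Illustrative sketch: stage on-premises report extracts into Cloud Storage
# so Cloud Dataprep can profile and clean them. Names are hypothetical.
gsutil mb -l us-central1 gs://example-report-staging

# Copy the source files in parallel into a raw/ prefix.
gsutil -m cp /data/reports/*.csv gs://example-report-staging/raw/
```

Cloud Dataprep can then read the staged files directly from the bucket; its visual profiling is the Google-recommended way to spot anomalies in degraded data without writing cleanup code by hand.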

Question # 13

You need to develop procedures to verify the resilience of disaster recovery for remote recovery using GCP. Your production environment is hosted on-premises. You need to establish a secure, redundant connection between your on-premises network and the GCP network.

What should you do?

A.

Verify that Dedicated Interconnect can replicate files to GCP. Verify that direct peering can establish a secure connection between your networks if Dedicated Interconnect fails.

B.

Verify that Dedicated Interconnect can replicate files to GCP. Verify that Cloud VPN can establish a secure connection between your networks if Dedicated Interconnect fails.

C.

Verify that the Transfer Appliance can replicate files to GCP. Verify that direct peering can establish a secure connection between your networks if the Transfer Appliance fails.

D.

Verify that the Transfer Appliance can replicate files to GCP. Verify that Cloud VPN can establish a secure connection between your networks if the Transfer Appliance fails.
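Setting up Cloud VPN as the redundant path (option B) can be sketched with `gcloud`, as below. This uses Classic VPN commands for brevity; the gateway names, region, peer address, and shared secret are all hypothetical, and a real deployment also needs the ESP and UDP 500/4500 forwarding rules plus routes.

```shell
# Illustrative sketch: a Classic Cloud VPN tunnel as the backup path to
# Dedicated Interconnect. All names and addresses are hypothetical.
gcloud compute target-vpn-gateways create on-prem-backup-gw \
  --network default --region us-central1

gcloud compute vpn-tunnels create on-prem-backup-tunnel \
  --peer-address 203.0.113.10 \
  --shared-secret "example-secret" \
  --target-vpn-gateway on-prem-backup-gw \
  --region us-central1
```

Cloud VPN traverses the public internet but is IPsec-encrypted, which is what makes it a valid secure fallback; direct peering carries unencrypted traffic, which is why options A and C fail the "secure" requirement.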

Question # 14

For this question, refer to the Mountkirk Games case study.

Mountkirk Games' gaming servers are not automatically scaling properly. Last month, they rolled out a new feature, which suddenly became very popular. A record number of users are trying to use the service, but many of them are getting 503 errors and very slow response times. What should they investigate first?

A.

Verify that the database is online.

B.

Verify that the project quota hasn't been exceeded.

C.

Verify that the new feature code did not introduce any performance bugs.

D.

Verify that the load-testing team is not running their tool against production.
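Checking project quota (option B) is a quick first step and can be sketched from the CLI; the project ID below is hypothetical.

```shell
# Illustrative sketch: list per-project Compute Engine quota metrics with
# their current usage and limits. The project ID is hypothetical.
gcloud compute project-info describe --project my-game-project \
  --format="flattened(quotas[])"
```

A 503 spike during a sudden popularity surge most often means an autoscaling resource (for example, instance count or CPUs) has hit a quota ceiling, so quota is the cheapest thing to rule out before debugging code.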

Question # 15

For this question, refer to the Mountkirk Games case study.

Mountkirk Games wants to set up a real-time analytics platform for their new game. The new platform must meet their technical requirements. Which combination of Google technologies will meet all of their requirements?

A.

Container Engine, Cloud Pub/Sub, and Cloud SQL

B.

Cloud Dataflow, Cloud Storage, Cloud Pub/Sub, and BigQuery

C.

Cloud SQL, Cloud Storage, Cloud Pub/Sub, and Cloud Dataflow

D.

Cloud Dataproc, Cloud Pub/Sub, Cloud SQL, and Cloud Dataflow

E.

Cloud Pub/Sub, Compute Engine, Cloud Storage, and Cloud Dataproc
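The streaming-analytics stack in option B can be wired together from the CLI, roughly as below. The topic, dataset, job, and project names are hypothetical; the Google-provided `PubSub_to_BigQuery` Dataflow template is used here as a shorthand for a custom Dataflow pipeline.

```shell
# Illustrative sketch of a real-time analytics path:
# Pub/Sub (ingest) -> Dataflow (streaming transform) -> BigQuery (analysis).
# All names are hypothetical.
gcloud pubsub topics create game-events

bq mk --dataset my_project:game_analytics

# Launch a streaming Dataflow job from the Google-provided template.
gcloud dataflow jobs run events-to-bq \
  --gcs-location gs://dataflow-templates/latest/PubSub_to_BigQuery \
  --region us-central1 \
  --parameters inputTopic=projects/my_project/topics/game-events,outputTableSpec=my_project:game_analytics.events
```

Cloud Storage rounds out the stack for batch uploads and staging, which is why option B covers both the real-time and late-arriving-data requirements in the case study.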

Question # 16

For this question, refer to the Mountkirk Games case study.

Mountkirk Games wants you to design their new testing strategy. How should the test coverage differ from their existing backends on the other platforms?

A.

Tests should scale well beyond the prior approaches.

B.

Unit tests are no longer required, only end-to-end tests.

C.

Tests should be applied after the release is in the production environment.

D.

Tests should include directly testing the Google Cloud Platform (GCP) infrastructure.
