
Professional-Cloud-Architect Exam Dumps - Google Certified Professional - Cloud Architect (GCP)

Question # 4

TerramEarth has a legacy web application that you cannot migrate to the cloud. However, you still want to build a cloud-native way to monitor the application. If the application goes down, you want the URL to point to a "Site is unavailable" page as soon as possible. You also want your Ops team to receive a notification for the issue. You need to build a reliable solution at minimum cost.

What should you do?

A.

Create a scheduled job in Cloud Run to invoke a container every minute. The container will check the application URL. If the application is down, switch the URL to the "Site is unavailable" page, and notify the Ops team.

B.

Create a cron job on a Compute Engine VM that runs every minute. The cron job invokes a Python program to check the application URL. If the application is down, switch the URL to the "Site is unavailable" page, and notify the Ops team.

C.

Create a Cloud Monitoring uptime check to validate the application URL. If it fails, put a message in a Pub/Sub queue that triggers a Cloud Function to switch the URL to the "Site is unavailable" page, and notify the Ops team.

D.

Use Cloud Error Reporting to check the application URL. If the application is down, switch the URL to the "Site is unavailable" page, and notify the Ops team.

Question # 5

For this question, refer to the Helicopter Racing League (HRL) case study. HRL wants better prediction accuracy from their ML prediction models. They want you to use Google's AI Platform so HRL can understand and interpret the predictions. What should you do?

A.

Use Explainable AI.

B.

Use Vision AI.

C.

Use Google Cloud’s operations suite.

D.

Use Jupyter Notebooks.

Question # 6

For this question, refer to the Helicopter Racing League (HRL) case study. Your team is in charge of creating a payment card data vault for card numbers used to bill tens of thousands of viewers, merchandise consumers, and season ticket holders. You need to implement a custom card tokenization service that meets the following requirements:

• It must provide low latency at minimal cost.

• It must be able to identify duplicate credit cards and must not store plaintext card numbers.

• It should support annual key rotation.

Which storage approach should you adopt for your tokenization service?

A.

Store the card data in Secret Manager after running a query to identify duplicates.

B.

Encrypt the card data with a deterministic algorithm and store it in Firestore using Datastore mode.

C.

Encrypt the card data with a deterministic algorithm and shard it across multiple Memorystore instances.

D.

Use column-level encryption to store the data in Cloud SQL.

Question # 7

You have an App Engine application that needs to be updated. You want to test the update with production traffic before replacing the current application version.

What should you do?

A.

Deploy the update using the Instance Group Updater to create a partial rollout, which allows for canary testing.

B.

Deploy the update as a new version in the App Engine application, and split traffic between the new and current versions.

C.

Deploy the update in a new VPC, and use Google’s global HTTP load balancing to split traffic between the update and current applications.

D.

Deploy the update as a new App Engine application, and use Google’s global HTTP load balancing to split traffic between the new and current applications.
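The versioned-deployment approach in option B maps directly to App Engine's built-in traffic splitting. A minimal gcloud sketch, assuming the default service and illustrative version names:

```shell
# Deploy the update as a new version without routing any traffic to it yet.
gcloud app deploy app.yaml --version=v2 --no-promote

# Send 10% of production traffic to the new version for canary testing,
# keeping 90% on the current version.
gcloud app services set-traffic default --splits=v1=0.9,v2=0.1
```

Once the new version proves healthy, traffic can be shifted fully with `--splits=v2=1`.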

Question # 8

You are migrating a Linux-based application from your private data center to Google Cloud. The TerramEarth security team sent you several recent Linux vulnerabilities published by Common Vulnerabilities and Exposures (CVE). You need assistance in understanding how these vulnerabilities could impact your migration. What should you do?

A.

Open a support case regarding the CVE and chat with the support engineer.

B.

Read the CVEs from the Google Cloud Status Dashboard to understand the impact.

C.

Read the CVEs from the Google Cloud Platform Security Bulletins to understand the impact.

D.

Post a question regarding the CVE on Stack Overflow to get an explanation.

E.

Post a question regarding the CVE in a Google Cloud discussion group to get an explanation.

Question # 9

For this question, refer to the TerramEarth case study. You need to implement a reliable, scalable GCP solution for the data warehouse for your company, TerramEarth. Considering the TerramEarth business and technical requirements, what should you do?

A.

Replace the existing data warehouse with BigQuery. Use table partitioning.

B.

Replace the existing data warehouse with a Compute Engine instance with 96 CPUs.

C.

Replace the existing data warehouse with BigQuery. Use federated data sources.

D.

Replace the existing data warehouse with a Compute Engine instance with 96 CPUs. Add an additional Compute Engine pre-emptible instance with 32 CPUs.

Question # 10

For this question, refer to the Dress4Win case study.

At Dress4Win, an operations engineer wants to create a low-cost solution to remotely archive copies of database backup files. The database files are compressed tar files stored in their current data center. How should he proceed?

A.

Create a cron script using gsutil to copy the files to a Coldline Storage bucket.

B.

Create a cron script using gsutil to copy the files to a Regional Storage bucket.

C.

Create a Cloud Storage Transfer Service Job to copy the files to a Coldline Storage bucket.

D.

Create a Cloud Storage Transfer Service job to copy the files to a Regional Storage bucket.
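The cron-plus-gsutil approach in option A can be sketched as follows; the bucket name, region, and backup path are placeholders:

```shell
# One-time setup: create the archive bucket with the Coldline storage class.
gsutil mb -c coldline -l us-central1 gs://example-dress4win-db-archive

# archive-backups.sh, invoked from cron (e.g. "0 2 * * *" for 02:00 daily):
# copy the compressed tar backups to the Coldline bucket.
gsutil cp /data/backups/*.tar.gz gs://example-dress4win-db-archive/
```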

Question # 11

You have deployed an application on Anthos clusters (formerly Anthos GKE). According to the SRE practices at your company, you need to be alerted if the request latency is above a certain threshold for a specified amount of time. What should you do?

A.

Enable the Cloud Trace API on your project and use Cloud Monitoring Alerts to send an alert based on the Cloud Trace metrics

B.

Configure Anthos Config Management on your cluster and create a yaml file that defines the SLO and alerting policy you want to deploy in your cluster

C.

Use Cloud Profiler to follow up the request latency. Create a custom metric in Cloud Monitoring based on the results of Cloud Profiler, and create an Alerting Policy in case this metric exceeds the threshold

D.

Install Anthos Service Mesh on your cluster. Use the Google Cloud Console to define a Service Level Objective (SLO)

Question # 12

Your company has multiple on-premises systems that serve as sources for reporting. The data has not been maintained well and has become degraded over time. You want to use Google-recommended practices to detect anomalies in your company data. What should you do?

A.

Upload your files into Cloud Storage. Use Cloud Datalab to explore and clean your data.

B.

Upload your files into Cloud Storage. Use Cloud Dataprep to explore and clean your data.

C.

Connect Cloud Datalab to your on-premises systems. Use Cloud Datalab to explore and clean your data.

D.

Connect Cloud Dataprep to your on-premises systems. Use Cloud Dataprep to explore and clean your data.

Question # 13

You need to develop procedures to verify resilience of disaster recovery for remote recovery using GCP. Your production environment is hosted on-premises. You need to establish a secure, redundant connection between your on-premises network and the GCP network.

What should you do?

A.

Verify that Dedicated Interconnect can replicate files to GCP. Verify that direct peering can establish a secure connection between your networks if Dedicated Interconnect fails.

B.

Verify that Dedicated Interconnect can replicate files to GCP. Verify that Cloud VPN can establish a secure connection between your networks if Dedicated Interconnect fails.

C.

Verify that the Transfer Appliance can replicate files to GCP. Verify that direct peering can establish a secure connection between your networks if the Transfer Appliance fails.

D.

Verify that the Transfer Appliance can replicate files to GCP. Verify that Cloud VPN can establish a secure connection between your networks if the Transfer Appliance fails.

Question # 14

For this question, refer to the Mountkirk Games case study.

Mountkirk Games' gaming servers are not automatically scaling properly. Last month, they rolled out a new feature, which suddenly became very popular. A record number of users are trying to use the service, but many of them are getting 503 errors and very slow response times. What should they investigate first?

A.

Verify that the database is online.

B.

Verify that the project quota hasn't been exceeded.

C.

Verify that the new feature code did not introduce any performance bugs.

D.

Verify that the load-testing team is not running their tool against production.

Question # 15

For this question, refer to the Mountkirk Games case study.

Mountkirk Games wants to set up a real-time analytics platform for their new game. The new platform must meet their technical requirements. Which combination of Google technologies will meet all of their requirements?

A.

Container Engine, Cloud Pub/Sub, and Cloud SQL

B.

Cloud Dataflow, Cloud Storage, Cloud Pub/Sub, and BigQuery

C.

Cloud SQL, Cloud Storage, Cloud Pub/Sub, and Cloud Dataflow

D.

Cloud Dataproc, Cloud Pub/Sub, Cloud SQL, and Cloud Dataflow

E.

Cloud Pub/Sub, Compute Engine, Cloud Storage, and Cloud Dataproc

Question # 16

For this question, refer to the Mountkirk Games case study.

Mountkirk Games wants you to design their new testing strategy. How should the test coverage differ from their existing backends on the other platforms?

A.

Tests should scale well beyond the prior approaches.

B.

Unit tests are no longer required, only end-to-end tests.

C.

Tests should be applied after the release is in the production environment.

D.

Tests should include directly testing the Google Cloud Platform (GCP) infrastructure.

Question # 17

For this question, refer to the Mountkirk Games case study

Mountkirk Games needs to create a repeatable and configurable mechanism for deploying isolated application environments. Developers and testers can access each other's environments and resources, but they cannot access staging or production resources. The staging environment needs access to some services from production.

What should you do to isolate development environments from staging and production?

A.

Create a project for development and test and another for staging and production.

B.

Create a network for development and test and another for staging and production.

C.

Create one subnetwork for development and another for staging and production.

D.

Create one project for development, a second for staging and a third for production.

Question # 18

For this question, refer to the Dress4Win case study. Dress4Win is expected to grow to 10 times its size in 1 year with a corresponding growth in data and traffic that mirrors the existing patterns of usage. The CIO has set the target of migrating production infrastructure to the cloud within the next 6 months. How will you configure the solution to scale for this growth without making major application changes and still maximize the ROI?

A.

Migrate the web application layer to App Engine, MySQL to Cloud Datastore, and NAS to Cloud Storage. Deploy RabbitMQ, and deploy Hadoop servers using Deployment Manager.

B.

Migrate RabbitMQ to Cloud Pub/Sub, Hadoop to BigQuery, and NAS to Compute Engine with Persistent Disk storage. Deploy Tomcat, and deploy Nginx using Deployment Manager.

C.

Implement managed instance groups for Tomcat and Nginx. Migrate MySQL to Cloud SQL, RabbitMQ to Cloud Pub/Sub, Hadoop to Cloud Dataproc, and NAS to Compute Engine with Persistent Disk storage.

D.

Implement managed instance groups for Tomcat and Nginx. Migrate MySQL to Cloud SQL, RabbitMQ to Cloud Pub/Sub, Hadoop to Cloud Dataproc, and NAS to Cloud Storage.

Question # 19

For this question, refer to the Dress4Win case study. To be legally compliant during an audit, Dress4Win must be able to give insight into all administrative actions that modify the configuration or metadata of resources on Google Cloud.

What should you do?

A.

Use Stackdriver Trace to create a trace list analysis.

B.

Use Stackdriver Monitoring to create a dashboard on the project’s activity.

C.

Enable Cloud Identity-Aware Proxy in all projects, and add the group of Administrators as a member.

D.

Use the Activity page in the GCP Console and Stackdriver Logging to provide the required insight.

Question # 20

For this question, refer to the Dress4Win case study. Considering the given business requirements, how would you automate the deployment of web and transactional data layers?

A.

Deploy Nginx and Tomcat using Cloud Deployment Manager to Compute Engine. Deploy a Cloud SQL server to replace MySQL. Deploy Jenkins using Cloud Deployment Manager.

B.

Deploy Nginx and Tomcat using Cloud Launcher. Deploy a MySQL server using Cloud Launcher. Deploy Jenkins to Compute Engine using Cloud Deployment Manager scripts.

C.

Migrate Nginx and Tomcat to App Engine. Deploy a Cloud Datastore server to replace the MySQL server in a high-availability configuration. Deploy Jenkins to Compute Engine using Cloud Launcher.

D.

Migrate Nginx and Tomcat to App Engine. Deploy a MySQL server using Cloud Launcher. Deploy Jenkins to Compute Engine using Cloud Launcher.

Question # 21

For this question, refer to the Dress4Win case study. You are responsible for the security of data stored in Cloud Storage for your company, Dress4Win. You have already created a set of Google Groups and assigned the appropriate users to those groups. You should use Google best practices and implement the simplest design to meet the requirements.

Considering Dress4Win's business and technical requirements, what should you do?

A.

Assign custom IAM roles to the Google Groups you created in order to enforce security requirements. Encrypt data with a customer-supplied encryption key when storing files in Cloud Storage.

B.

Assign custom IAM roles to the Google Groups you created in order to enforce security requirements. Enable default storage encryption before storing files in Cloud Storage.

C.

Assign predefined IAM roles to the Google Groups you created in order to enforce security requirements. Utilize Google's default encryption at rest when storing files in Cloud Storage.

D.

Assign predefined IAM roles to the Google Groups you created in order to enforce security requirements. Ensure that the default Cloud KMS key is set before storing files in Cloud Storage.
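The predefined-role pattern in option C can be sketched with gsutil iam; the group and bucket names are hypothetical:

```shell
# Grant a predefined Cloud Storage role to an existing Google Group.
gsutil iam ch group:storage-admins@dress4win.com:roles/storage.objectAdmin \
    gs://example-dress4win-data

# No extra encryption configuration is needed: Cloud Storage encrypts
# all objects at rest by default.
```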

Question # 22

For this question, refer to the TerramEarth case study.

To speed up data retrieval, more vehicles will be upgraded to cellular connections and be able to transmit data to the ETL process. The current FTP process is error-prone and restarts the data transfer from the start of the file when connections fail, which happens often. You want to improve the reliability of the solution and minimize data transfer time on the cellular connections. What should you do?

A.

Use one Google Container Engine cluster of FTP servers. Save the data to a Multi-Regional bucket. Run the ETL process using data in the bucket.

B.

Use multiple Google Container Engine clusters running FTP servers located in different regions. Save the data to Multi-Regional buckets in us, eu, and asia. Run the ETL process using the data in the bucket.

C.

Directly transfer the files to different Google Cloud Multi-Regional Storage bucket locations in us, eu, and asia using Google APIs over HTTP(S). Run the ETL process using the data in the bucket.

D.

Directly transfer the files to different Google Cloud Regional Storage bucket locations in us, eu, and asia using Google APIs over HTTP(S). Run the ETL process to retrieve the data from each Regional bucket.

Question # 23

For this question, refer to the TerramEarth case study.

TerramEarth's 20 million vehicles are scattered around the world. Based on the vehicle's location, its telemetry data is stored in a Google Cloud Storage (GCS) regional bucket (US, Europe, or Asia). The CTO has asked you to run a report on the raw telemetry data to determine why vehicles are breaking down after 100K miles. You want to run this job on all the data. What is the most cost-effective way to run this job?

A.

Move all the data into 1 zone, then launch a Cloud Dataproc cluster to run the job.

B.

Move all the data into 1 region, then launch a Google Cloud Dataproc cluster to run the job.

C.

Launch a cluster in each region to preprocess and compress the raw data, then move the data into a multi-region bucket and use a Cloud Dataproc cluster to finish the job.

D.

Launch a cluster in each region to preprocess and compress the raw data, then move the data into a regional bucket and use a Cloud Dataproc cluster to finish the job.

Question # 24

For this question, refer to the TerramEarth case study.

Which of TerramEarth's legacy enterprise processes will experience significant change as a result of increased Google Cloud Platform adoption?

A.

Opex/capex allocation, LAN changes, capacity planning

B.

Capacity planning, TCO calculations, opex/capex allocation

C.

Capacity planning, utilization measurement, data center expansion

D.

Data Center expansion, TCO calculations, utilization measurement

Question # 25

For this question, refer to the TerramEarth case study.

You analyzed TerramEarth's business requirement to reduce downtime, and found that they can achieve a majority of time savings by reducing customers' wait time for parts. You decided to focus on reduction of the 3 weeks aggregate reporting time. Which modifications to the company's processes should you recommend?

A.

Migrate from CSV to binary format, migrate from FTP to SFTP transport, and develop machine learning analysis of metrics.

B.

Migrate from FTP to streaming transport, migrate from CSV to binary format, and develop machine learning analysis of metrics.

C.

Increase fleet cellular connectivity to 80%, migrate from FTP to streaming transport, and develop machine learning analysis of metrics.

D.

Migrate from FTP to SFTP transport, develop machine learning analysis of metrics, and increase dealer local inventory by a fixed factor.

Question # 26

For this question, refer to the Dress4Win case study.

Dress4Win has configured a new uptime check with Google Stackdriver for several of their legacy services. The Stackdriver dashboard is not reporting the services as healthy. What should they do?

A.

Install the Stackdriver agent on all of the legacy web servers.

B.

In the Cloud Platform Console, download the list of the uptime servers' IP addresses and create an inbound firewall rule.

C.

Configure their load balancer to pass through the User-Agent HTTP header when the value matches GoogleStackdriverMonitoring-UptimeChecks (https://cloud.google.com/monitoring).

D.

Configure their legacy web servers to allow requests that contain a User-Agent HTTP header when the value matches GoogleStackdriverMonitoring-UptimeChecks (https://cloud.google.com/monitoring).
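The firewall-rule approach in option B can be sketched with gcloud; the source ranges shown are examples only, and the current uptime-check IP list must be downloaded from the console or the Monitoring API:

```shell
# Allow uptime-check probes from Google's published IP ranges to reach
# the legacy web servers (example ranges -- substitute the current list).
gcloud compute firewall-rules create allow-uptime-checks \
    --direction=INGRESS \
    --action=ALLOW \
    --rules=tcp:80,tcp:443 \
    --source-ranges=35.187.144.0/20,35.189.0.0/18 \
    --target-tags=legacy-web
```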

Question # 27

The current Dress4Win system architecture has high latency for some customers because it is located in one data center.

As part of a future evaluation, and to optimize for performance in the cloud, Dress4Win wants to distribute its system architecture across multiple locations on Google Cloud Platform.

Which approach should they use?

A.

Use regional managed instance groups and a global load balancer to increase performance because the regional managed instance group can grow instances in each region separately based on traffic.

B.

Use a global load balancer with a set of virtual machines that forward the requests to a closer group of virtual machines managed by your operations team.

C.

Use regional managed instance groups and a global load balancer to increase reliability by providing automatic failover between zones in different regions.

D.

Use a global load balancer with a set of virtual machines that forward the requests to a closer group of virtual machines as part of separate managed instance groups.

Question # 28

For this question, refer to the Dress4Win case study.

You want to ensure Dress4Win's sales and tax records remain available for infrequent viewing by auditors for at least 10 years. Cost optimization is your top priority. Which cloud services should you choose?

A.

Google Cloud Storage Coldline to store the data, and gsutil to access the data.

B.

Google Cloud Storage Nearline to store the data, and gsutil to access the data.

C.

Google Bigtable with US or EU as the location to store the data, and gcloud to access the data.

D.

BigQuery to store the data, and a web server cluster in a managed instance group to access the data.

E.

Google Cloud SQL mirrored across two distinct regions to store the data, and a Redis cluster in a managed instance group to access the data.

Question # 29

For this question, refer to the Helicopter Racing League (HRL) case study. A recent finance audit of cloud infrastructure noted an exceptionally high number of Compute Engine instances are allocated to do video encoding and transcoding. You suspect that these Virtual Machines are zombie machines that were not deleted after their workloads completed. You need to quickly get a list of which VM instances are idle. What should you do?

A.

Log into each Compute Engine instance and collect disk, CPU, memory, and network usage statistics for analysis.

B.

Use the gcloud compute instances list command to list the virtual machine instances that have the idle: true label set.

C.

Use the gcloud recommender command to list the idle virtual machine instances.

D.

From the Google Console, identify which Compute Engine instances in the managed instance groups are no longer responding to health check probes.
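The recommender approach in option C can be sketched as follows; the project and zone are placeholders:

```shell
# List idle-VM recommendations from the Compute Engine idle resource
# recommender for a single zone.
gcloud recommender recommendations list \
    --project=example-project \
    --location=us-west1-b \
    --recommender=google.compute.instance.IdleResourceRecommender
```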

Question # 30

For this question, refer to the Helicopter Racing League (HRL) case study. HRL is looking for a cost-effective

approach for storing their race data such as telemetry. They want to keep all historical records, train models

using only the previous season's data, and plan for data growth in terms of volume and information collected.

You need to propose a data solution. Considering HRL business requirements and the goals expressed by

CEO S. Hawke, what should you do?

A.

Use Firestore for its scalable and flexible document-based database. Use collections to aggregate race data by season and event.

B.

Use Cloud Spanner for its scalability and ability to version schemas with zero downtime. Split race data using season as a primary key.

C.

Use BigQuery for its scalability and ability to add columns to a schema. Partition race data based on season.

D.

Use Cloud SQL for its ability to automatically manage storage increases and compatibility with MySQL. Use separate database instances for each season.
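The season-partitioned BigQuery table in option C could be created with the bq CLI; the dataset, table, and column names are assumptions:

```shell
# Create a race-telemetry table partitioned by race date, so training
# jobs that read only the previous season scan a bounded set of partitions.
bq mk --table \
    --time_partitioning_field=race_date \
    --time_partitioning_type=DAY \
    hrl_analytics.race_telemetry \
    race_date:DATE,vehicle_id:STRING,telemetry:STRING
```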

Question # 31

For this question, refer to the JencoMart case study.

JencoMart has decided to migrate user profile storage to Google Cloud Datastore and the application servers to Google Compute Engine (GCE). During the migration, the existing infrastructure will need access to Datastore to upload the data. What service account key-management strategy should you recommend?

A.

Provision service account keys for the on-premises infrastructure and for the GCE virtual machines (VMs).

B.

Authenticate the on-premises infrastructure with a user account and provision service account keys for the VMs.

C.

Provision service account keys for the on-premises infrastructure and use Google Cloud Platform (GCP) managed keys for the VMs

D.

Deploy a custom authentication service on GCE/Google Container Engine (GKE) for the on-premises infrastructure and use GCP managed keys for the VMs.
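The split in option C (exported keys for on-premises servers, Google-managed credentials on GCE) can be sketched as follows; the service account and instance names are hypothetical:

```shell
# On-premises servers need an exported key file, since they run outside GCP.
gcloud iam service-accounts keys create /secure/migration-key.json \
    --iam-account=datastore-migrator@example-project.iam.gserviceaccount.com

# GCE VMs need no key file: attach a service account at creation time and
# Google manages (and rotates) the credentials automatically.
gcloud compute instances create app-server-1 \
    --service-account=app-runtime@example-project.iam.gserviceaccount.com \
    --scopes=cloud-platform
```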

Question # 32

For this question, refer to the JencoMart case study

A few days after JencoMart migrates the user credentials database to Google Cloud Platform and shuts down the old server, the new database server stops responding to SSH connections. It is still serving database requests to the application servers correctly. What three steps should you take to diagnose the problem? (Choose 3 answers.)

A.

Delete the virtual machine (VM) and disks and create a new one.

B.

Delete the instance, attach the disk to a new VM, and investigate.

C.

Take a snapshot of the disk and connect to a new machine to investigate.

D.

Check inbound firewall rules for the network the machine is connected to.

E.

Connect the machine to another network with very simple firewall rules and investigate.

F.

Print the Serial Console output for the instance for troubleshooting, activate the interactive console, and investigate.

Question # 33

For this question, refer to the JencoMart case study.

The migration of JencoMart’s application to Google Cloud Platform (GCP) is progressing too slowly. The infrastructure is shown in the diagram. You want to maximize throughput. What are three potential bottlenecks? (Choose 3 answers.)

A.

A single VPN tunnel, which limits throughput

B.

A tier of Google Cloud Storage that is not suited for this task

C.

A copy command that is not suited to operate over long distances

D.

Fewer virtual machines (VMs) in GCP than on-premises machines

E.

A separate storage layer outside the VMs, which is not suited for this task

F.

Complicated internet connectivity between the on-premises infrastructure and GCP

Question # 34

For this question, refer to the JencoMart case study.

The JencoMart security team requires that all Google Cloud Platform infrastructure is deployed using a least privilege model with separation of duties for administration between production and development resources. What Google domain and project structure should you recommend?

A.

Create two G Suite accounts to manage users: one for development/test/staging and one for production. Each account should contain one project for every application.

B.

Create two G Suite accounts to manage users: one with a single project for all development applications and one with a single project for all production applications.

C.

Create a single G Suite account to manage users with each stage of each application in its own project.

D.

Create a single G Suite account to manage users with one project for the development/test/staging environment and one project for the production environment.

Question # 35

For this question, refer to the EHR Healthcare case study. You are responsible for ensuring that EHR's use of Google Cloud will pass an upcoming privacy compliance audit. What should you do? (Choose two.)

A.

Verify EHR's product usage against the list of compliant products on the Google Cloud compliance page.

B.

Advise EHR to execute a Business Associate Agreement (BAA) with Google Cloud.

C.

Use Firebase Authentication for EHR's user facing applications.

D.

Implement Prometheus to detect and prevent security breaches on EHR's web-based applications.

E.

Use GKE private clusters for all Kubernetes workloads.

Question # 36

For this question, refer to the EHR Healthcare case study. EHR has a single Dedicated Interconnect connection between their primary data center and Google's network. This connection satisfies EHR's network and security policies:

• On-premises servers without public IP addresses need to connect to cloud resources without public IP addresses.

• Traffic flows from production network management servers to Compute Engine virtual machines should never traverse the public internet.

You need to upgrade the EHR connection to comply with their requirements. The new connection design must support business-critical needs and meet the same network and security policy requirements. What should you do?

A.

Add a new Dedicated Interconnect connection

B.

Upgrade the bandwidth on the Dedicated Interconnect connection to 100 G

C.

Add three new Cloud VPN connections

D.

Add a new Carrier Peering connection

Question # 38

For this question, refer to the EHR Healthcare case study. In the past, configuration errors put public IP addresses on backend servers that should not have been accessible from the Internet. You need to ensure that no one can put external IP addresses on backend Compute Engine instances and that external IP addresses can only be configured on frontend Compute Engine instances. What should you do?

A.

Create an Organizational Policy with a constraint to allow external IP addresses only on the frontend Compute Engine instances.

B.

Revoke the compute.networkAdmin role from all users in the project with frontend instances.

C.

Create an Identity and Access Management (IAM) policy that maps the IT staff to the compute.networkAdmin role for the organization.

D.

Create a custom Identity and Access Management (IAM) role named GCE_FRONTEND with the compute.addresses.create permission.
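The constraint in option A is the compute.vmExternalIpAccess list constraint; a sketch of applying it at the project level (the instance paths are placeholders):

```shell
# policy.yaml: permit external IPs only on the named frontend instances;
# every other instance in the project is denied an external IP.
cat > policy.yaml <<'EOF'
constraint: constraints/compute.vmExternalIpAccess
listPolicy:
  allowedValues:
    - projects/example-project/zones/us-east1-b/instances/frontend-1
    - projects/example-project/zones/us-east1-b/instances/frontend-2
EOF

gcloud resource-manager org-policies set-policy policy.yaml \
    --project=example-project
```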

Question # 39

Mountkirk Games wants you to secure the connectivity from the new gaming application platform to Google Cloud. You want to streamline the process and follow Google-recommended practices. What should you do?

A.

Configure Workload Identity and service accounts to be used by the application platform.

B.

Use Kubernetes Secrets, which are obfuscated by default. Configure these Secrets to be used by the application platform.

C.

Configure Kubernetes Secrets to store the secret, enable Application-Layer Secrets Encryption, and use Cloud Key Management Service (Cloud KMS) to manage the encryption keys. Configure these Secrets to be used by the application platform.

D.

Configure HashiCorp Vault on Compute Engine, and use customer-managed encryption keys and Cloud Key Management Service (Cloud KMS) to manage the encryption keys. Configure these Secrets to be used by the application platform.

Question # 40

You need to optimize batch file transfers into Cloud Storage for Mountkirk Games' new Google Cloud solution. The batch files contain game statistics that need to be staged in Cloud Storage and be processed by an extract, transform, load (ETL) tool. What should you do?

A.

Use gsutil to batch move files in sequence.

B.

Use gsutil to batch copy the files in parallel.

C.

Use gsutil to extract the files as the first part of ETL.

D.

Use gsutil to load the files as the last part of ETL.
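The parallel batch copy in option B is gsutil's -m flag; the local path and bucket name are placeholders:

```shell
# Copy the batch of statistics files into the staging bucket in parallel
# (-m runs multiple transfers at once; -n skips files already uploaded,
# so the job is safe to re-run after a failure).
gsutil -m cp -n /batches/*.csv gs://example-game-stats-staging/
```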
