
Professional-Cloud-DevOps-Engineer Exam Dumps - Google Cloud Certified - Professional Cloud DevOps Engineer Exam

Question # 17

You support a trading application written in Python and hosted on App Engine flexible environment. You want to customize the error information being sent to Stackdriver Error Reporting. What should you do?

A.

Install the Stackdriver Error Reporting library for Python, and then run your code on a Compute Engine VM.

B.

Install the Stackdriver Error Reporting library for Python, and then run your code on Google Kubernetes Engine.

C.

Install the Stackdriver Error Reporting library for Python, and then run your code on App Engine flexible environment.

D.

Use the Stackdriver Error Reporting API to write errors from your application to ReportedErrorEvent, and then generate log entries with properly formatted error messages in Stackdriver Logging.
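
For context on option D: the Error Reporting API accepts ReportedErrorEvent payloads, and the Python client library wraps that API. A minimal sketch, assuming the google-cloud-error-reporting package is installed and authenticated; the function name and order ID below are made up for illustration:

    from google.cloud import error_reporting

    # The client wraps the Error Reporting API (projects.events.report),
    # which accepts ReportedErrorEvent payloads.
    client = error_reporting.Client()

    def generate_report(order_id):
        try:
            raise ValueError(f"no trade data for order {order_id}")  # simulated failure
        except Exception:
            # Sends the current exception and stack trace to Error Reporting,
            # where it is grouped with similar errors automatically.
            client.report_exception()

    generate_report("demo-123")

Alternatively, log entries whose payload follows the ReportedErrorEvent format and are written to Stackdriver Logging are picked up by Error Reporting as well, which is the second half of option D.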

Question # 18

You work for a company that manages highly sensitive user data. You are designing the Google Kubernetes Engine (GKE) infrastructure for your company, including several applications that will be deployed in development and production environments. Your design must protect data from unauthorized access from other applications while minimizing the amount of management overhead required. What should you do?

A.

Create one cluster for the organization with separate namespaces for each application and environment combination.

B.

Create one cluster for each environment (development and production) with each application in its own namespace within each cluster.

C.

Create one cluster for the organization with separate namespaces for each application.

D.

Create one cluster for each application with separate namespaces for production and development environments.
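
Whichever cluster/namespace split you choose, the namespaces themselves are cheap to create. A minimal sketch using the official kubernetes Python client; the cluster credentials and namespace names are illustrative only:

    from kubernetes import client, config

    # Loads credentials from the local kubeconfig, e.g. after running
    # `gcloud container clusters get-credentials ...` for one environment's cluster.
    config.load_kube_config()
    core_v1 = client.CoreV1Api()

    # Illustrative only: one namespace per application inside this cluster.
    for app in ["payments", "reporting"]:
        ns = client.V1Namespace(metadata=client.V1ObjectMeta(name=app))
        core_v1.create_namespace(ns)

Note that namespaces alone do not block traffic or data access between applications; NetworkPolicies and RBAC still have to enforce the isolation the question asks for.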

Question # 19

You are responsible for the reliability of a high-volume enterprise application. A large number of users report that an important subset of the application’s functionality, a data-intensive reporting feature, is consistently failing with an HTTP 500 error. When you investigate your application’s dashboards, you notice a strong correlation between the failures and a metric that represents the size of an internal queue used for generating reports. You trace the failures to a reporting backend that is experiencing high I/O wait times. You quickly fix the issue by resizing the backend’s persistent disk (PD). Now you need to create an availability Service Level Indicator (SLI) for the report generation feature. How would you define it?

A.

As the I/O wait times aggregated across all report generation backends

B.

As the proportion of report generation requests that result in a successful response

C.

As the application’s report generation queue size compared to a known-good threshold

D.

As the reporting backend PD throughput capacity compared to a known-good threshold
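
For the "proportion of successful responses" shape described in option B, an availability SLI is typically computed as good events divided by valid events. A tiny sketch with made-up numbers:

    # Availability SLI in the "good events / valid events" form, computed from
    # request counts; the figures below are invented for illustration.
    successful_report_requests = 98_230   # non-5xx responses to the reporting feature
    total_report_requests = 100_000       # all valid requests to the reporting feature

    sli = successful_report_requests / total_report_requests
    print(f"Report generation availability SLI: {sli:.4%}")  # 98.2300%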

Question # 20

You use Spinnaker to deploy your application and have created a canary deployment stage in the pipeline. Your application has an in-memory cache that loads objects at start time. You want to automate the comparison of the canary version against the production version. How should you configure the canary analysis?

A.

Compare the canary with a new deployment of the current production version.

B.

Compare the canary with a new deployment of the previous production version.

C.

Compare the canary with the existing deployment of the current production version.

D.

Compare the canary with the average performance of a sliding window of previous production versions.
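
Whichever baseline is chosen, canary analysis boils down to comparing the same metrics from the canary and from a baseline deployment, so that start-up effects such as an empty in-memory cache influence both sides in the same way. A toy sketch of that comparison with made-up numbers; this is a heavy simplification of what Spinnaker's automated canary analysis actually does:

    # Simplified stand-in for a canary judge: compare one metric from the
    # canary against the same metric from the chosen baseline deployment.
    from statistics import mean

    baseline_latency_ms = [120, 118, 125, 130, 119]  # made-up samples
    canary_latency_ms = [121, 150, 140, 138, 145]    # made-up samples

    relative_increase = mean(canary_latency_ms) / mean(baseline_latency_ms) - 1
    print(f"Canary latency vs. baseline: {relative_increase:+.1%}")
    if relative_increase > 0.10:  # illustrative 10% tolerance
        print("Fail the canary stage")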

Question # 21

You are developing a strategy for monitoring your Google Cloud Platform (GCP) projects in production using Stackdriver Workspaces. One of the requirements is to be able to quickly identify and react to production environment issues without false alerts from development and staging projects. You want to ensure that you adhere to the principle of least privilege when providing relevant team members with access to Stackdriver Workspaces. What should you do?

A.

Grant relevant team members read access to all GCP production projects. Create Stackdriver Workspaces inside each project.

B.

Grant relevant team members the Project Viewer IAM role on all GCP production projects. Create Stackdriver Workspaces inside each project.

C.

Choose an existing GCP production project to host the monitoring workspace. Attach the production projects to this workspace. Grant relevant team members read access to the Stackdriver Workspace.

D.

Create a new GCP monitoring project, and create a Stackdriver Workspace inside it. Attach the production projects to this workspace. Grant relevant team members read access to the Stackdriver Workspace.
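
As an illustration of the least-privilege grant described in options C and D, read access to monitoring data can be limited to the Monitoring Viewer role (roles/monitoring.viewer) on the single project that hosts the Workspace. A sketch using the Cloud Resource Manager API via google-api-python-client; the project ID and group address are hypothetical:

    from googleapiclient import discovery

    # Grant roles/monitoring.viewer on the monitoring project only, rather than
    # broad access to every production project.
    crm = discovery.build("cloudresourcemanager", "v1")
    project_id = "central-monitoring-project"  # hypothetical

    policy = crm.projects().getIamPolicy(resource=project_id, body={}).execute()
    policy.setdefault("bindings", []).append({
        "role": "roles/monitoring.viewer",
        "members": ["group:oncall-team@example.com"],  # hypothetical group
    })
    crm.projects().setIamPolicy(
        resource=project_id, body={"policy": policy}
    ).execute()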

Question # 22

You are leading a DevOps project for your organization. The DevOps team is responsible for managing the service infrastructure and being on-call for incidents. The Software Development team is responsible for writing, submitting, and reviewing code. Neither team has any published SLOs. You want to design a new joint-ownership model for a service between the DevOps team and the Software Development team. Which responsibilities should be assigned to each team in the new joint-ownership model?

A.

Option A

B.

Option B

C.

Option C

D.

Option D

Question # 23

Your team is designing a new application for deployment both inside and outside Google Cloud Platform (GCP). You need to collect detailed metrics such as system resource utilization. You want to use centralized GCP services while minimizing the amount of work required to set up this collection system. What should you do?

A.

Import the Stackdriver Profiler package, and configure it to relay function timing data to Stackdriver for further analysis.

B.

Import the Stackdriver Debugger package, and configure the application to emit debug messages with timing information.

C.

Instrument the code using a timing library, and publish the metrics via a health check endpoint that is scraped by Stackdriver.

D.

Install an Application Performance Monitoring (APM) tool in both locations, and configure an export to a central data storage location for analysis.
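
On the "centralized GCP services" part of the question: custom metrics can be written to Stackdriver Monitoring from workloads inside or outside GCP with the monitoring client library, as long as credentials are available. A minimal sketch assuming the google-cloud-monitoring package; the project ID, metric type, and value are illustrative:

    import time
    from google.cloud import monitoring_v3

    # Push one data point of a custom "system resource" style metric to
    # Cloud Monitoring in a central project.
    client = monitoring_v3.MetricServiceClient()
    project_id = "my-central-monitoring-project"  # illustrative
    project_name = f"projects/{project_id}"

    series = monitoring_v3.TimeSeries()
    series.metric.type = "custom.googleapis.com/app/cpu_utilization"  # illustrative
    series.resource.type = "global"
    series.resource.labels["project_id"] = project_id

    now = time.time()
    seconds = int(now)
    nanos = int((now - seconds) * 10**9)
    interval = monitoring_v3.TimeInterval(
        {"end_time": {"seconds": seconds, "nanos": nanos}}
    )
    point = monitoring_v3.Point({"interval": interval, "value": {"double_value": 0.37}})
    series.points = [point]

    client.create_time_series(name=project_name, time_series=[series])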

Question # 24

Your organization is running multiple Google Kubernetes Engine (GKE) clusters in a project. You need to design a highly available solution to collect and query both domain-specific workload metrics and GKE default metrics across all clusters, while minimizing operational overhead. What should you do?

A.

Use Prometheus Operator to install Prometheus in every cluster and scrape the metrics. Ensure that a Thanos sidecar is enabled on every Prometheus instance. Configure Thanos in the central cluster. Query the central Thanos instance.

B.

Use Prometheus Operator to install Prometheus in every cluster and scrape the metrics. Configure remote-write to one central Prometheus. Query the central Prometheus instance.

C.

Enable managed collection on every GKE cluster. Query the metrics in Cloud Monitoring.

D.

Enable managed collection on every GKE cluster. Query the metrics in BigQuery.
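
With managed collection (options C and D), Prometheus metrics from every cluster are stored in Cloud Monitoring under the prometheus.googleapis.com prefix and can be queried centrally. A minimal sketch that reads them back through the Cloud Monitoring API; the project ID and metric type are illustrative:

    import time
    from google.cloud import monitoring_v3

    client = monitoring_v3.MetricServiceClient()
    project_name = "projects/my-gke-project"  # illustrative

    now = int(time.time())
    interval = monitoring_v3.TimeInterval(
        {"end_time": {"seconds": now}, "start_time": {"seconds": now - 3600}}
    )
    results = client.list_time_series(
        request={
            "name": project_name,
            # Illustrative metric type; managed collection ingests Prometheus
            # metrics under the prometheus.googleapis.com prefix.
            "filter": 'metric.type = "prometheus.googleapis.com/up/gauge"',
            "interval": interval,
            "view": monitoring_v3.ListTimeSeriesRequest.TimeSeriesView.FULL,
        }
    )
    for ts in results:
        if ts.points:
            print(ts.resource.labels.get("cluster"), ts.points[0].value.double_value)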
