
Note: The DBS-C01 exam has been retired. Please select an alternative exam for your certification.

DBS-C01 Exam Dumps - AWS Certified Database - Specialty

Question # 73

An advertising company is developing a backend for a bidding platform. The company needs a cost-effective datastore solution that will accommodate a sudden increase in the volume of write transactions. The database also needs to make data changes available in a near real-time data stream.

Which solution will meet these requirements?

A.

Amazon Aurora MySQL Multi-AZ DB cluster

B.

Amazon Keyspaces (for Apache Cassandra)

C.

Amazon DynamoDB table with DynamoDB auto scaling

D.

Amazon DocumentDB (with MongoDB compatibility) cluster with a replica instance in a second Availability Zone
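
For context on the concepts this question tests, below is a minimal boto3 sketch of a DynamoDB table that publishes data changes to a stream and scales its write capacity automatically. The table name, key schema, and capacity values are hypothetical and are not taken from the question.

import boto3

dynamodb = boto3.client("dynamodb")
autoscaling = boto3.client("application-autoscaling")

# Table with a stream enabled so every data change is available in near real time
dynamodb.create_table(
    TableName="bids",  # hypothetical table name
    AttributeDefinitions=[{"AttributeName": "bid_id", "AttributeType": "S"}],
    KeySchema=[{"AttributeName": "bid_id", "KeyType": "HASH"}],
    ProvisionedThroughput={"ReadCapacityUnits": 5, "WriteCapacityUnits": 5},
    StreamSpecification={
        "StreamEnabled": True,
        "StreamViewType": "NEW_AND_OLD_IMAGES",
    },
)

# Auto scaling on write capacity absorbs sudden spikes in write traffic
autoscaling.register_scalable_target(
    ServiceNamespace="dynamodb",
    ResourceId="table/bids",
    ScalableDimension="dynamodb:table:WriteCapacityUnits",
    MinCapacity=5,
    MaxCapacity=10000,
)
autoscaling.put_scaling_policy(
    PolicyName="bids-write-scaling",
    ServiceNamespace="dynamodb",
    ResourceId="table/bids",
    ScalableDimension="dynamodb:table:WriteCapacityUnits",
    PolicyType="TargetTrackingScaling",
    TargetTrackingScalingPolicyConfiguration={
        "TargetValue": 70.0,
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "DynamoDBWriteCapacityUtilization"
        },
    },
)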

Question # 74

A company with 500,000 employees needs to supply its employee list to an application used by human resources. Every 30 minutes, the data is exported using the LDAP service to load into a new Amazon DynamoDB table. The data model has a base table with Employee ID for the partition key and a global secondary index with Organization ID as the partition key.

While importing the data, a database specialist receives ProvisionedThroughputExceededException errors. After increasing the provisioned write capacity units (WCUs) to 50,000, the specialist receives the same errors. Amazon CloudWatch metrics show a consumption of 1,500 WCUs.

What should the database specialist do to address the issue?

A.

Change the data model to avoid hot partitions in the global secondary index.

B.

Enable auto scaling for the table to automatically increase write capacity during bulk imports.

C.

Modify the table to use on-demand capacity instead of provisioned capacity.

D.

Increase the number of retries on the bulk loading application.
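
As background on hot partitions, here is a hedged sketch of the write-sharding pattern sometimes used to spread a skewed global secondary index partition key across several suffixed values. The table name, attribute names, and shard count are illustrative assumptions, not part of the question.

import random
import boto3

table = boto3.resource("dynamodb").Table("employees")  # hypothetical table name
SHARDS = 10  # assumed shard count

def put_employee(employee_id, organization_id, attrs):
    item = {
        "employee_id": employee_id,
        # Salt the GSI partition key so bulk writes for a single organization
        # spread across multiple partitions instead of one hot partition.
        "organization_shard": f"{organization_id}#{random.randint(0, SHARDS - 1)}",
        **attrs,
    }
    table.put_item(Item=item)

Reads against the index would then fan out across the shard suffixes and merge the results in the application.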

Question # 75

A marketing company is developing an application to track responses to email message campaigns. The company needs a database storage solution that is optimized to work with highly connected data. The database needs to limit connections and programmatic access to the data by using IAM policies.

Which solution will meet these requirements?

A.

Amazon ElastiCache for Redis cluster

B.

Amazon Aurora MySQL DB cluster

C.

Amazon DynamoDB table

D.

Amazon Neptune DB cluster
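
To illustrate IAM-controlled programmatic access to a graph database, the following is a rough boto3 sketch of creating an IAM policy that allows connections through Neptune's IAM database authentication. The policy name, Region, account ID, and cluster resource ID in the ARN are placeholders and would need to match a real cluster.

import json
import boto3

iam = boto3.client("iam")

policy_document = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": "neptune-db:connect",
        # Placeholder ARN; substitute the cluster's actual resource ID
        "Resource": "arn:aws:neptune-db:us-east-1:123456789012:cluster-ABC123/*",
    }],
}

iam.create_policy(
    PolicyName="campaign-graph-connect",  # hypothetical policy name
    PolicyDocument=json.dumps(policy_document),
)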

Question # 76

A company is running a two-tier ecommerce application in one AWS account. The application's backend database is deployed on an Amazon RDS for MySQL Multi-AZ DB instance. A Developer mistakenly deleted the database in the production environment. The database has been restored, but this resulted in hours of downtime and lost revenue.

Which combination of changes in existing IAM policies should a Database Specialist make to prevent an error like this from happening in the future? (Choose three.)

A.

Grant least privilege to groups, users, and roles

B.

Allow all users to restore a database from a backup that will reduce the overall downtime to restore the database

C.

Enable multi-factor authentication for sensitive operations to access sensitive resources and API operations

D.

Use policy conditions to restrict access to selective IP addresses

E.

Use AccessList Controls policy type to restrict users for database instance deletion

F.

Enable AWS CloudTrail logging and Enhanced Monitoring
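
As an illustration of policy conditions and MFA guards, below is a hypothetical IAM policy document that denies RDS deletion actions unless MFA is present and the request originates from an approved IP range. All names and values are placeholders, not taken from the question.

import json
import boto3

iam = boto3.client("iam")

policy_document = {
    "Version": "2012-10-17",
    "Statement": [
        {
            # Deny deletion unless the caller authenticated with MFA
            "Effect": "Deny",
            "Action": ["rds:DeleteDBInstance", "rds:DeleteDBCluster"],
            "Resource": "*",
            "Condition": {"BoolIfExists": {"aws:MultiFactorAuthPresent": "false"}},
        },
        {
            # Deny deletion from outside an approved IP range (placeholder CIDR)
            "Effect": "Deny",
            "Action": ["rds:DeleteDBInstance", "rds:DeleteDBCluster"],
            "Resource": "*",
            "Condition": {"NotIpAddress": {"aws:SourceIp": ["203.0.113.0/24"]}},
        },
    ],
}

iam.create_policy(
    PolicyName="deny-unguarded-rds-deletion",  # hypothetical policy name
    PolicyDocument=json.dumps(policy_document),
)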

Question # 77

A company plans to use an Amazon Aurora PostgreSQL DB cluster for an application backend. The DB cluster contains some tables with sensitive data. A Database Specialist needs to control access privileges at the table level.

How can the Database Specialist meet these requirements?

A.

Use AWS IAM database authentication and restrict access to the tables using an IAM policy.

B.

Configure the rules in a NACL to restrict outbound traffic from the Aurora DB cluster.

C.

Execute GRANT and REVOKE commands that restrict access to the tables containing sensitive data.

D.

Define access privileges to the tables containing sensitive data in the pg_hba.conf file.
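
For reference, a minimal sketch of issuing table-level GRANT and REVOKE statements against a PostgreSQL endpoint with psycopg2. The endpoint, credentials, role, and table names are invented for illustration only.

import psycopg2

conn = psycopg2.connect(
    host="example-cluster.cluster-xxxx.us-east-1.rds.amazonaws.com",  # placeholder endpoint
    dbname="appdb",
    user="admin_user",
    password="example-only",  # placeholder credential
)

# The connection context manager commits the transaction on success
with conn, conn.cursor() as cur:
    cur.execute("REVOKE ALL ON payroll FROM PUBLIC;")
    cur.execute("GRANT SELECT ON payroll TO hr_readonly;")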

Question # 78

An ecommerce company has tasked a Database Specialist with creating a reporting dashboard that visualizes critical business metrics that will be pulled from the core production database running on Amazon Aurora. Data that is read by the dashboard should be available within 100 milliseconds of an update.

The Database Specialist needs to review the current configuration of the Aurora DB cluster and develop a cost-effective solution. The solution needs to accommodate the unpredictable read workload from the reporting dashboard without any impact on the write availability and performance of the DB cluster.

Which solution meets these requirements?

A.

Turn on the serverless option in the DB cluster so it can automatically scale based on demand.

B.

Provision a clone of the existing DB cluster for the new Application team.

C.

Create a separate DB cluster for the new workload, refresh from the source DB cluster, and set up ongoing replication using AWS DMS change data capture (CDC).

D.

Add an automatic scaling policy to the DB cluster to add Aurora Replicas to the cluster based on CPU consumption.
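
As background on Aurora Replica auto scaling, here is a hedged boto3 sketch that registers a cluster's read replica count with Application Auto Scaling and attaches a target-tracking policy. The cluster identifier and numeric targets are assumptions, not values from the question.

import boto3

autoscaling = boto3.client("application-autoscaling")

autoscaling.register_scalable_target(
    ServiceNamespace="rds",
    ResourceId="cluster:reporting-aurora-cluster",  # hypothetical cluster name
    ScalableDimension="rds:cluster:ReadReplicaCount",
    MinCapacity=1,
    MaxCapacity=5,
)

autoscaling.put_scaling_policy(
    PolicyName="reporting-read-scaling",
    ServiceNamespace="rds",
    ResourceId="cluster:reporting-aurora-cluster",
    ScalableDimension="rds:cluster:ReadReplicaCount",
    PolicyType="TargetTrackingScaling",
    TargetTrackingScalingPolicyConfiguration={
        "TargetValue": 60.0,
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "RDSReaderAverageCPUUtilization"
        },
    },
)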

Question # 79

A database specialist at a large multi-national financial company is in charge of designing the disaster recovery strategy for a highly available application that is in development. The application uses an Amazon DynamoDB table as its data store. The application requires a recovery time objective (RTO) of 1 minute and a recovery point objective (RPO) of 2 minutes.

Which operationally efficient disaster recovery strategy should the database specialist recommend for the DynamoDB table?

A.

Create a DynamoDB stream that is processed by an AWS Lambda function that copies the data to a DynamoDB table in another Region.

B.

Use a DynamoDB global table replica in another Region. Enable point-in-time recovery for both tables.

C.

Use a DynamoDB Accelerator table in another Region. Enable point-in-time recovery for the table.

D.

Create an AWS Backup plan and assign the DynamoDB table as a resource.
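
To show what a global table replica and point-in-time recovery look like in practice, below is a minimal boto3 sketch. The table name and Regions are placeholders.

import boto3

dynamodb = boto3.client("dynamodb", region_name="us-east-1")

# Add a replica in a second Region (global tables version 2019.11.21)
dynamodb.update_table(
    TableName="orders",  # hypothetical table name
    ReplicaUpdates=[{"Create": {"RegionName": "us-west-2"}}],
)

# Enable point-in-time recovery on the source table
dynamodb.update_continuous_backups(
    TableName="orders",
    PointInTimeRecoverySpecification={"PointInTimeRecoveryEnabled": True},
)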

Question # 80

A company is running a website on Amazon EC2 instances deployed in multiple Availability Zones (AZs). The site performs a high number of repetitive reads and writes each second on an Amazon RDS for MySQL Multi-AZ DB instance with General Purpose SSD (gp2) storage. After comprehensive testing and analysis, a database specialist discovers that there is high read latency and high CPU utilization on the DB instance.

Which approach should the database specialist take to resolve this issue without changing the application?

A.

Implement sharding to distribute the load across multiple RDS for MySQL databases.

B.

Use the same RDS for MySQL instance class with Provisioned IOPS (PIOPS) storage.

C.

Add an RDS for MySQL read replica.

D.

Modify the RDS for MySQL database class to a bigger size and implement Provisioned IOPS (PIOPS).
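
For reference, a short boto3 sketch of creating an RDS for MySQL read replica to offload read traffic from the primary. The instance identifiers and instance class are hypothetical.

import boto3

rds = boto3.client("rds")

# Create a read replica of the existing primary DB instance
rds.create_db_instance_read_replica(
    DBInstanceIdentifier="webdb-read-replica-1",   # hypothetical replica name
    SourceDBInstanceIdentifier="webdb-primary",    # hypothetical primary name
    DBInstanceClass="db.r5.large",                 # assumed instance class
)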
