
Note: The DBS-C01 exam has been retired. Please select an alternative exam for your certification.

DBS-C01 Exam Dumps - AWS Certified Database - Specialty

Question # 4

A company developed an AWS CloudFormation template used to create all new Amazon DynamoDB tables in its AWS account. The template configures provisioned throughput capacity using hard-coded values. The company wants to change the template so that the tables it creates in the future have independently configurable read and write capacity units assigned.

Which solution will enable this change?

A.

Add values for the rcuCount and wcuCount parameters to the Mappings section of the template. Configure DynamoDB to provision throughput capacity using the stack’s mappings.

B.

Add values for two Number parameters, rcuCount and wcuCount, to the template. Replace the hard-coded values with calls to the Ref intrinsic function, referencing the new parameters.

C.

Add values for the rcuCount and wcuCount parameters as outputs of the template. Configure DynamoDB to provision throughput capacity using the stack outputs.

D.

Add values for the rcuCount and wcuCount parameters to the Mappings section of the template. Replace the hard-coded values with calls to the Ref intrinsic function, referencing the new parameters.
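
For reference, option B is the standard CloudFormation pattern: declare the capacity values in the Parameters section and reference them with the Ref intrinsic function. A minimal sketch, deployed here with boto3 (the stack, table, and attribute names are illustrative, not from the question):

import json

import boto3

# Template with two Number parameters referenced by Ref in place of
# hard-coded throughput values.
template = {
    "Parameters": {
        "rcuCount": {"Type": "Number", "Default": 5},
        "wcuCount": {"Type": "Number", "Default": 5},
    },
    "Resources": {
        "MyTable": {
            "Type": "AWS::DynamoDB::Table",
            "Properties": {
                "AttributeDefinitions": [{"AttributeName": "pk", "AttributeType": "S"}],
                "KeySchema": [{"AttributeName": "pk", "KeyType": "HASH"}],
                "ProvisionedThroughput": {
                    "ReadCapacityUnits": {"Ref": "rcuCount"},
                    "WriteCapacityUnits": {"Ref": "wcuCount"},
                },
            },
        }
    },
}

# Each stack can now set read and write capacity independently.
boto3.client("cloudformation").create_stack(
    StackName="dynamodb-table-example",
    TemplateBody=json.dumps(template),
    Parameters=[
        {"ParameterKey": "rcuCount", "ParameterValue": "10"},
        {"ParameterKey": "wcuCount", "ParameterValue": "20"},
    ],
)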

Question # 5

A company has more than 100 AWS accounts that need Amazon RDS instances. The company wants to build an automated solution to deploy the RDS instances with specific compliance parameters. The data does not need to be replicated. The company needs to create the databases within 1 day.

Which solution will meet these requirements in the MOST operationally efficient way?

A.

Create RDS resources by using AWS CloudFormation. Share the CloudFormation template with each account.

B.

Create an RDS snapshot. Share the snapshot with each account. Deploy the snapshot into each account.

C.

Use AWS CloudFormation to create RDS instances in each account. Run AWS Database Migration Service (AWS DMS) replication to each of the created instances.

D.

Create a script by using the AWS CLI to copy the RDS instance into the other accounts from a template account.

Question # 6

A Database Specialist is migrating a 2 TB Amazon RDS for Oracle DB instance to an RDS for PostgreSQL DB instance using AWS DMS. The source RDS for Oracle DB instance is in a VPC in the us-east-1 Region. The target RDS for PostgreSQL DB instance is in a VPC in the us-west-2 Region.

Where should the AWS DMS replication instance be placed for the MOST optimal performance?

A.

In the same Region and VPC of the source DB instance

B.

In the same Region and VPC as the target DB instance

C.

In the same VPC and Availability Zone as the target DB instance

D.

In the same VPC and Availability Zone as the source DB instance

Question # 7

A company's database specialist implements an AWS Database Migration Service (AWS DMS) task for change data capture (CDC) to replicate data from an on-premises Oracle database to Amazon S3. When usage of the company's application increases, the database specialist notices multiple hours of latency with the CDC.

Which solutions will reduce this latency? (Choose two.)

A.

Configure the DMS task to run in full large binary object (LOB) mode.

B.

Configure the DMS task to run in limited large binary object (LOB) mode.

C.

Create a Multi-AZ replication instance.

D.

Load tables in parallel by creating multiple replication instances for sets of tables that participate in common transactions.

E.

Replicate tables in parallel by creating multiple DMS tasks for sets of tables that do not participate in common transactions.

Question # 8

A company has a heterogeneous six-node production Amazon Aurora DB cluster that handles online transaction processing (OLTP) for the core business and OLAP reports for the human resources department. To match compute resources to the use case, the company has decided to have the reporting workload for the human resources department be directed to two small nodes in the Aurora DB cluster, while every other workload goes to four large nodes in the same DB cluster.

Which option would ensure that the correct nodes are always available for the appropriate workload while meeting these requirements?

A.

Use the writer endpoint for OLTP and the reader endpoint for the OLAP reporting workload.

B.

Use automatic scaling for the Aurora Replica to have the appropriate number of replicas for the desired workload.

C.

Create additional readers to cater to the different scenarios.

D.

Use custom endpoints to satisfy the different workloads.
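
For reference, custom endpoints (option D) pin a workload to a fixed subset of cluster instances. A minimal boto3 sketch, assuming illustrative cluster and instance identifiers:

import boto3

rds = boto3.client("rds")

# Reader endpoint that routes the HR reporting workload to the two
# small instances only; other traffic keeps using the default endpoints.
rds.create_db_cluster_endpoint(
    DBClusterIdentifier="prod-aurora-cluster",
    DBClusterEndpointIdentifier="hr-reporting",
    EndpointType="READER",
    StaticMembers=["small-node-1", "small-node-2"],
)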

Question # 9

A company uses a large, growing, and high-performance on-premises Microsoft SQL Server instance with an Always On availability group and a cluster size of 120 TB. The company uses a third-party backup product that requires system-level access to the databases. The company will continue to use this third-party backup product in the future.

The company wants to move the DB cluster to AWS with the least possible downtime and data loss. The company needs a 2 Gbps connection to sustain Always On asynchronous data replication between the company's data center and AWS.

Which combination of actions should a database specialist take to meet these requirements? (Select THREE.)

A.

Establish an AWS Direct Connect hosted connection between the company's data center and AWS.

B.

Create an AWS Site-to-Site VPN connection between the company's data center and AWS over the internet.

C.

Use AWS Database Migration Service (AWS DMS) to migrate the on-premises SQL Server databases to Amazon RDS for SQL Server. Configure Always On availability groups for SQL Server.

D.

Deploy a new SQL Server Always On availability group DB cluster on Amazon EC2. Configure Always On distributed availability groups between the on-premises DB cluster and the AWS DB cluster. Fail over to the AWS DB cluster when it is time to migrate.

E.

Grant system-level access to the third-party backup product to perform backups of the Amazon RDS for SQL Server DB instance.

F.

Configure the third-party backup product to perform backups of the DB cluster on Amazon EC2.

Question # 10

A company conducted a security audit of its AWS infrastructure. The audit identified that data was not encrypted in transit between application servers and a MySQL database that is hosted in Amazon RDS.

After the audit, the company updated the application to use an encrypted connection. To prevent this problem from occurring again, the company's database team needs to configure the database to require in-transit encryption for all connections.

Which solution will meet this requirement?

A.

Update the parameter group in use by the DB instance, and set the require_secure_transport parameter to ON.

B.

Connect to the database, and use ALTER USER to enable the REQUIRE SSL option on the database user.

C.

Update the security group in use by the DB instance, and remove port 80 to prevent unencrypted connections from being established.

D.

Update the DB instance, and enable the Require Transport Layer Security option.
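
For reference, the require_secure_transport parameter (option A) is dynamic on RDS for MySQL, so it can be applied immediately through a custom parameter group. A minimal boto3 sketch with an illustrative group name:

import boto3

rds = boto3.client("rds")

# Reject any client connection that does not use TLS.
rds.modify_db_parameter_group(
    DBParameterGroupName="mysql-custom-params",
    Parameters=[
        {
            "ParameterName": "require_secure_transport",
            "ParameterValue": "1",
            "ApplyMethod": "immediate",
        }
    ],
)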

Question # 11

A web-based application uses Amazon DocumentDB (with MongoDB compatibility) as its underlying data store. Sufficient access control is in place, but a database specialist wants to be able to review logs if the primary DocumentDB database is deleted.

Which combination of steps should the database specialist take to meet this requirement? (Select TWO.)

A.

Set the audit_logs cluster parameter to enabled.

B.

Enable DocumentDB log export to Amazon CloudWatch Logs.

C.

Enable Enhanced Monitoring for DocumentDB.

D.

Enable AWS CloudTrail for DocumentDB.

E.

Use AWS Config to monitor the state of DocumentDB.

Question # 12

A company is using 5 TB Amazon RDS DB instances and needs to maintain 5 years of monthly database backups for compliance purposes. A Database Administrator must provide Auditors with data within 24 hours. Which solution will meet these requirements and is the MOST operationally efficient?

A.

Create an AWS Lambda function to run on the first day of every month to take a manual RDS snapshot. Move the snapshot to the company’s Amazon S3 bucket.

B.

Create an AWS Lambda function to run on the first day of every month to take a manual RDS snapshot.

C.

Create an RDS snapshot schedule from the AWS Management Console to take a snapshot every 30 days.

D.

Create an AWS Lambda function to run on the first day of every month to create an automated RDS snapshot.
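
For reference, option B can be implemented as a small AWS Lambda function on a monthly schedule; manual snapshots are retained until explicitly deleted, so no copy to Amazon S3 is needed. A sketch with illustrative identifiers:

from datetime import datetime, timezone

import boto3

rds = boto3.client("rds")

def handler(event, context):
    # One manual snapshot per month, named by year and month.
    stamp = datetime.now(timezone.utc).strftime("%Y-%m")
    rds.create_db_snapshot(
        DBInstanceIdentifier="prod-db",
        DBSnapshotIdentifier=f"monthly-{stamp}",
    )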

Question # 13

A business is developing a web application on AWS. The application requires the database to support concurrent read and write activity in several AWS Regions. Additionally, the database must propagate data changes across Regions as they occur. The application must be highly available and have a latency of less than a few hundred milliseconds.

Which solution satisfies these criteria?

A.

Amazon DynamoDB global tables

B.

Amazon DynamoDB streams with AWS Lambda to replicate the data

C.

An Amazon ElastiCache for Redis cluster with cluster mode enabled and multiple shards

D.

An Amazon Aurora global database

Question # 14

A company uses Amazon DynamoDB as the data store for its ecommerce website. The website receives little to no traffic at night, and the majority of the traffic occurs during the day. The traffic growth during peak hours is gradual and predictable on a daily basis, but it can be orders of magnitude higher than during off-peak hours.

The company initially provisioned capacity based on its average volume during the day without accounting for the variability in traffic patterns. However, the website is experiencing a significant amount of throttling during peak hours. The company wants to reduce the amount of throttling while minimizing costs.

What should a database specialist do to meet these requirements?

A.

Use reserved capacity. Set it to the capacity levels required for peak daytime throughput.

B.

Use provisioned capacity. Set it to the capacity levels required for peak daytime throughput.

C.

Use provisioned capacity. Create an AWS Application Auto Scaling policy to update capacity based on consumption.

D.

Use on-demand capacity.
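
For reference, option C combines provisioned capacity with an AWS Application Auto Scaling target-tracking policy. A sketch for the read dimension only (table name, limits, and target value are illustrative); the write dimension is configured the same way:

import boto3

autoscaling = boto3.client("application-autoscaling")

# Register the table's read capacity as a scalable target.
autoscaling.register_scalable_target(
    ServiceNamespace="dynamodb",
    ResourceId="table/orders",
    ScalableDimension="dynamodb:table:ReadCapacityUnits",
    MinCapacity=5,
    MaxCapacity=5000,
)

# Scale to hold consumed read capacity near 70% of provisioned.
autoscaling.put_scaling_policy(
    PolicyName="orders-read-tracking",
    ServiceNamespace="dynamodb",
    ResourceId="table/orders",
    ScalableDimension="dynamodb:table:ReadCapacityUnits",
    PolicyType="TargetTrackingScaling",
    TargetTrackingScalingPolicyConfiguration={
        "TargetValue": 70.0,
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "DynamoDBReadCapacityUtilization"
        },
    },
)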

Question # 15

An AWS CloudFormation stack that included an Amazon RDS DB instance was mistakenly deleted, resulting in the loss of recent data. A Database Specialist must add RDS settings to the CloudFormation template to minimize the possibility of future inadvertent instance data loss.

Which settings will satisfy this criterion? (Select three.)

A.

Set DeletionProtection to True

B.

Set MultiAZ to True

C.

Set TerminationProtection to True

D.

Set DeleteAutomatedBackups to False

E.

Set DeletionPolicy to Delete

F.

Set DeletionPolicy to Retain
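
For reference, the protective settings in options A, D, and F sit at two levels: DeletionProtection and DeleteAutomatedBackups are properties of the RDS resource, while DeletionPolicy is a CloudFormation resource attribute. A minimal template fragment, expressed as a Python dict (everything except the documented keys is illustrative):

# AWS::RDS::DBInstance fragment showing the three safeguards.
db_resource = {
    "MyDatabase": {
        "Type": "AWS::RDS::DBInstance",
        "DeletionPolicy": "Retain",  # keep the instance if the stack is deleted
        "Properties": {
            "DeletionProtection": True,       # block DeleteDBInstance calls
            "DeleteAutomatedBackups": False,  # keep automated backups after deletion
            # engine, storage, and credential properties omitted
        },
    }
}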

Question # 16

A company runs a customer relationship management (CRM) system that is hosted on-premises with a MySQL database as the backend. A custom stored procedure is used to send email notifications to another system when data is inserted into a table. The company has noticed that the performance of the CRM system has decreased due to database reporting applications used by various teams. The company requires an AWS solution that would reduce maintenance, improve performance, and accommodate the email notification feature.

Which AWS solution meets these requirements?

A.

Use MySQL running on an Amazon EC2 instance with Auto Scaling to accommodate the reporting applications. Configure a stored procedure and an AWS Lambda function that uses Amazon SES to send email notifications to the other system.

B.

Use Amazon Aurora MySQL in a multi-master cluster to accommodate the reporting applications. Configure Amazon RDS event subscriptions to publish a message to an Amazon SNS topic and subscribe the other system's email address to the topic.

C.

Use MySQL running on an Amazon EC2 instance with a read replica to accommodate the reporting applications. Configure Amazon SES integration to send email notifications to the other system.

D.

Use Amazon Aurora MySQL with a read replica for the reporting applications. Configure a stored procedure and an AWS Lambda function to publish a message to an Amazon SNS topic. Subscribe the other system's email address to the topic.

Question # 17

A company has an application environment that deploys Amazon Aurora PostgreSQL databases as part of its CI/CD process that uses AWS CloudFormation. The company's database administrator has received reports of performance issues from the resulting database but has no way to investigate the issues.

Which combination of changes must the database administrator make to the database deployment to automate the collection of performance data? (Select TWO.)

A.

Turn on Amazon DevOps Guru for the Aurora database resources in the CloudFormation template.

B.

Turn on AWS CloudTrail in each AWS account.

C.

Turn on and configure AWS Config for all Aurora PostgreSQL databases.

D.

Update the CloudFormation template to enable Amazon CloudWatch monitoring on the Aurora PostgreSQL DB instances.

E.

Update the CloudFormation template to turn on Performance Insights for Aurora PostgreSQL.

Question # 18

A team of Database Specialists is currently investigating performance issues on an Amazon RDS for MySQL DB instance and is reviewing related metrics. The team wants to narrow the possibilities down to specific database wait events to better understand the situation.

How can the Database Specialists accomplish this?

A.

Enable the option to push all database logs to Amazon CloudWatch for advanced analysis

B.

Create appropriate Amazon CloudWatch dashboards to contain specific periods of time

C.

Enable Amazon RDS Performance Insights and review the appropriate dashboard

D.

Enable Enhanced Monitoring with the appropriate settings

Question # 19

A database administrator needs to save a particular automated database snapshot from an Amazon RDS for Microsoft SQL Server DB instance for longer than the maximum number of days.

Which solution will meet these requirements in the MOST operationally efficient way?

A.

Create a manual copy of the snapshot.

B.

Export the contents of the snapshot to an Amazon S3 bucket.

C.

Change the retention period of the snapshot to 45 days.

D.

Create a native SQL Server backup. Save the backup to an Amazon S3 bucket.

Question # 20

A company is closing one of its remote data centers. This site runs a 100 TB on-premises data warehouse solution. The company plans to use the AWS Schema Conversion Tool (AWS SCT) and AWS DMS for the migration to AWS. The site network bandwidth is 500 Mbps. A Database Specialist wants to migrate the on-premises data using Amazon S3 as the data lake and Amazon Redshift as the data warehouse. This move must take place during a 2-week period when source systems are shut down for maintenance. The data should stay encrypted at rest and in transit.

Which approach has the least risk and the highest likelihood of a successful data transfer?

A.

Set up a VPN tunnel for encrypting data over the network from the data center to AWS. Leverage AWS SCT and apply the converted schema to Amazon Redshift. Once complete, start an AWS DMS task to move the data from the source to Amazon S3. Use AWS Glue to load the data from Amazon S3 to Amazon Redshift.

B.

Leverage AWS SCT and apply the converted schema to Amazon Redshift. Start an AWS DMS task with two AWS Snowball Edge devices to copy data from on-premises to Amazon S3 with AWS KMS encryption. Use AWS DMS to finish copying data to Amazon Redshift.

C.

Leverage AWS SCT and apply the converted schema to Amazon Redshift. Once complete, use a fleet of 10 TB dedicated encrypted drives using the AWS Import/Export feature to copy data from on-premises to Amazon S3 with AWS KMS encryption. Use AWS Glue to load the data to Amazon redshift.

D.

Set up a VPN tunnel for encrypting data over the network from the data center to AWS. Leverage a native database export feature to export the data and compress the files. Use the aws s3 cp command with multipart upload to upload these files to Amazon S3 with AWS KMS encryption. Once complete, load the data to Amazon Redshift using AWS Glue.

Question # 21

A financial institution uses AWS to host its online application. Amazon RDS for MySQL is used to host the application's database, which includes automatic backups.

The program has corrupted the database logically, resulting in the application being unresponsive. The exact moment the corruption occurred has been determined, and it occurred within the backup retention period.

How should a database professional restore a database to its previous state prior to corruption?

A.

Use the point-in-time restore capability to restore the DB instance to the specified time. No changes to the application connection string are required.

B.

Use the point-in-time restore capability to restore the DB instance to the specified time. Change the application connection string to the new, restored DB instance.

C.

Restore using the latest automated backup. Change the application connection string to the new, restored DB instance.

D.

Restore using the appropriate automated backup. No changes to the application connection string are required.

Question # 22

A company uses an Amazon RDS for PostgreSQL database in the us-east-2 Region. The company wants to have a copy of the database available in the us-west-2 Region as part of a new disaster recovery strategy.

A database architect needs to create the new database. There can be little to no downtime to the source database. The database architect has decided to use AWS Database Migration Service (AWS DMS) to replicate the database across Regions. The database architect will use full load mode and then will switch to change data capture (CDC) mode.

Which parameters must the database architect configure to support CDC mode for the RDS for PostgreSQL database? (Choose three.)

A.

Set wal_level = logical.

B.

Set wal_level = replica.

C.

Set max_replication_slots to 1 or more, depending on the number of DMS tasks.

D.

Set max_replication_slots to 0 to support dynamic allocation of slots.

E.

Set wal_sender_timeout to 20,000 milliseconds.

F.

Set wal_sender_timeout to 5,000 milliseconds.
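
For reference, on Amazon RDS for PostgreSQL the wal_level setting is not edited directly; enabling the rds.logical_replication parameter sets wal_level to logical. A hedged boto3 sketch of applying the CDC-related parameters (group name and values are illustrative):

import boto3

rds = boto3.client("rds")

rds.modify_db_parameter_group(
    DBParameterGroupName="postgres-cdc-params",
    Parameters=[
        # Static parameters take effect after a reboot.
        {"ParameterName": "rds.logical_replication",
         "ParameterValue": "1", "ApplyMethod": "pending-reboot"},
        {"ParameterName": "max_replication_slots",
         "ParameterValue": "5", "ApplyMethod": "pending-reboot"},
        # Dynamic parameter; value in milliseconds.
        {"ParameterName": "wal_sender_timeout",
         "ParameterValue": "20000", "ApplyMethod": "immediate"},
    ],
)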

Question # 23

A database specialist has been entrusted by an ecommerce firm with designing a reporting dashboard that visualizes crucial business KPIs derived from the company's primary production database running on Amazon Aurora. The dashboard should be able to read data within 100 milliseconds after an update.

The Database Specialist must conduct an audit of the Aurora DB cluster's present setup and provide a cost-effective alternative. The solution must support the unexpected read demand generated by the reporting dashboard without impairing the DB cluster's write availability and performance.

Which solution satisfies these criteria?

A.

Turn on the serverless option in the DB cluster so it can automatically scale based on demand.

B.

Provision a clone of the existing DB cluster for the new Application team.

C.

Create a separate DB cluster for the new workload, refresh from the source DB cluster, and set up ongoing replication using AWS DMS change data capture (CDC).

D.

Add an automatic scaling policy to the DB cluster to add Aurora Replicas to the cluster based on CPU consumption.

Question # 24

A ride-hailing application stores bookings in a persistent Amazon RDS for MySQL DB instance. This program is very popular, and the corporation anticipates a tenfold rise in the application's user base over the next several months. The application receives a higher volume of traffic in the morning and evening.

This application is divided into two sections:

✑ An internal booking component that takes online reservations in response to concurrent user queries.

✑ A component of a third-party customer relationship management (CRM) system that customer service professionals utilize. Booking data is accessed using queries in the CRM.

To manage this workload effectively, a database professional must create a cost-effective database system.

Which solution satisfies these criteria?

A.

Use Amazon ElastiCache for Redis to accept the bookings. Associate an AWS Lambda function to capture changes and push the booking data to the RDS for MySQL DB instance used by the CRM.

B.

Use Amazon DynamoDB to accept the bookings. Enable DynamoDB Streams and associate an AWS Lambda function to capture changes and push the booking data to an Amazon SQS queue. This triggers another Lambda function that pulls data from Amazon SQS and writes it to the RDS for MySQL DB instance used by the CRM.

C.

Use Amazon ElastiCache for Redis to accept the bookings. Associate an AWS Lambda function to capture changes and push the booking data to an Amazon Redshift database used by the CRM.

D.

Use Amazon DynamoDB to accept the bookings. Enable DynamoDB Streams and associate an AWS Lambda function to capture changes and push the booking data to Amazon Athena, which is used by the CRM.

Question # 25

A database specialist needs to review and optimize an Amazon DynamoDB table that is experiencing performance issues. A thorough investigation by the database specialist reveals that the partition key is causing hot partitions, so a new partition key is created. The database specialist must effectively apply this new partition key to all existing and new data.

How can this solution be implemented?

A.

Use Amazon EMR to export the data from the current DynamoDB table to Amazon S3. Then use Amazon EMR again to import the data from Amazon S3 into a new DynamoDB table with the new partition key.

B.

Use AWS DMS to copy the data from the current DynamoDB table to Amazon S3. Then import the DynamoDB table to create a new DynamoDB table with the new partition key.

C.

Use the AWS CLI to update the DynamoDB table and modify the partition key.

D.

Use the AWS CLI to back up the DynamoDB table. Then use the restore-table-from-backup command and modify the partition key.

Question # 26

An online gaming company is planning to launch a new game with Amazon DynamoDB as its data store. The database should be designed to support the following use cases:

✑ Update scores in real time whenever a player is playing the game.
✑ Retrieve a player’s score details for a specific game session.

A Database Specialist decides to implement a DynamoDB table. Each player has a unique user_id and each game has a unique game_id.

Which choice of keys is recommended for the DynamoDB table?

A.

Create a global secondary index with game_id as the partition key

B.

Create a global secondary index with user_id as the partition key

C.

Create a composite primary key with game_id as the partition key and user_id as the sort key

D.

Create a composite primary key with user_id as the partition key and game_id as the sort key
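
For reference, option D keys each item collection by player, so score updates and per-session lookups both touch a single partition. A minimal table definition in boto3 (the table name and billing mode are illustrative):

import boto3

boto3.client("dynamodb").create_table(
    TableName="GameScores",
    AttributeDefinitions=[
        {"AttributeName": "user_id", "AttributeType": "S"},
        {"AttributeName": "game_id", "AttributeType": "S"},
    ],
    KeySchema=[
        {"AttributeName": "user_id", "KeyType": "HASH"},   # partition key
        {"AttributeName": "game_id", "KeyType": "RANGE"},  # sort key
    ],
    BillingMode="PAY_PER_REQUEST",
)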

Question # 27

A gaming company is evaluating Amazon ElastiCache as a solution to manage player leaderboards. Millions of players around the world will compete in annual tournaments. The company wants to implement an architecture that is highly available. The company also wants to ensure that maintenance activities have minimal impact on the availability of the gaming platform.

Which combination of steps should the company take to meet these requirements? (Choose two.)

A.

Deploy an ElastiCache for Redis cluster with read replicas and Multi-AZ enabled.

B.

Deploy an ElastiCache for Memcached global datastore.

C.

Deploy a single-node ElastiCache for Redis cluster with automatic backups enabled. In the event of a failure, create a new cluster and restore data from the most recent backup.

D.

Use the default maintenance window to apply any required system changes and mandatory updates as soon as they are available.

E.

Choose a preferred maintenance window at the time of lowest usage to apply any required changes and mandatory updates.

Question # 28

A company is concerned about the cost of a large-scale, transactional application using Amazon DynamoDB that only needs to store data for 2 days before it is deleted. In looking at the tables, a Database Specialist notices that much of the data is months old, and goes back to when the application was first deployed.

What can the Database Specialist do to reduce the overall cost?

A.

Create a new attribute in each table to track the expiration time and create an AWS Glue transformation to delete entries more than 2 days old.

B.

Create a new attribute in each table to track the expiration time and enable DynamoDB Streams on each table.

C.

Create a new attribute in each table to track the expiration time and enable time to live (TTL) on each table.

D.

Create an Amazon CloudWatch Events event to export the data to Amazon S3 daily using AWS Data Pipeline and then truncate the Amazon DynamoDB table.
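
For reference, TTL (option C) deletes expired items in the background at no extra cost. A sketch that enables TTL and writes an item that expires after 2 days (table and attribute names are illustrative):

import time

import boto3

dynamodb = boto3.client("dynamodb")

# Tell DynamoDB which attribute holds the expiration epoch time.
dynamodb.update_time_to_live(
    TableName="transactions",
    TimeToLiveSpecification={"Enabled": True, "AttributeName": "expires_at"},
)

# New items carry an epoch timestamp 2 days in the future.
dynamodb.put_item(
    TableName="transactions",
    Item={
        "txn_id": {"S": "abc-123"},
        "expires_at": {"N": str(int(time.time()) + 2 * 24 * 3600)},
    },
)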

Question # 29

A retail company uses Amazon Redshift for its 1 PB data warehouse. Several analytical workloads run on a Redshift cluster. The tables within the cluster have grown rapidly. End users are reporting poor performance of daily reports that run on the transaction fact tables.

A database specialist must change the design of the tables to improve the reporting performance. All the changes must be applied dynamically. The changes must have the least possible impact on users and must optimize the overall table size.

Which solution will meet these requirements?

A.

Use the STL_SCAN view to understand how the tables are getting scanned. Identify the columns that are used in filter and group by conditions. Create a temporary table with the identified columns as sort keys and compression as Zstandard (ZSTD) by copying the data from the original table. Drop the original table. Give the temporary table the same name that the original table had.

B.

Run an explain plan to analyze the queries on the tables. Consider recommendations from Amazon Redshift Advisor. Identify the columns that are used in filter and group by conditions. Convert the recommended columns from Redshift Advisor into sort keys with compression encoding set to RAW. Set the rest of the column compression encoding to AZ64.

C.

Run an explain plan to analyze the queries on the tables. Consider recommendations from Amazon Redshift Advisor. Identify the columns that are used in filter and group by conditions. Convert the recommended columns from Redshift Advisor into sort keys with compression encoding set to LZO. Set the rest of the column compression encoding to Zstandard (ZSTD).

D.

Run an explain plan to analyze the queries on the tables. Consider recommendations from Amazon Redshift Advisor. Identify the columns that are used in filter and group by conditions. Create a deep copy of the table with the identified columns as sort keys and compression for all columns as Zstandard (ZSTD) by using a bulk insert. Drop the original table. Give the copy table the same name that the original table had.

Question # 30

A company runs an ecommerce application on premises on Microsoft SQL Server. The company is planning to migrate the application to the AWS Cloud. The application code contains complex T-SQL queries and stored procedures.

The company wants to minimize database server maintenance and operating costs after the migration is completed. The company also wants to minimize the need to rewrite code as part of the migration effort.

Which solution will meet these requirements?

A.

Migrate the database to Amazon Aurora PostgreSQL. Turn on Babelfish.

B.

Migrate the database to Amazon S3. Use Amazon Redshift Spectrum for query processing.

C.

Migrate the database to Amazon RDS for SQL Server. Turn on Kerberos authentication.

D.

Migrate the database to an Amazon EMR cluster that includes multiple primary nodes.

Question # 31

A company uses an Amazon RDS for PostgreSQL DB instance for its customer relationship management (CRM) system. New compliance requirements specify that the database must be encrypted at rest.

Which action will meet these requirements?

A.

Create an encrypted copy of a manual snapshot of the DB instance. Restore a new DB instance from the encrypted snapshot.

B.

Modify the DB instance and enable encryption.

C.

Restore a DB instance from the most recent automated snapshot and enable encryption.

D.

Create an encrypted read replica of the DB instance. Promote the read replica to a standalone instance.

Question # 32

A banking company recently launched an Amazon RDS for MySQL DB instance as part of a proof-of-concept project. A database specialist has configured automated database snapshots. As a part of routine testing, the database specialist noticed one day that the automated database snapshot was not created.

Which of the following are possible reasons why the snapshot was not created? (Choose two.)

A.

A copy of the RDS automated snapshot for this DB instance is in progress within the same AWS Region.

B.

A copy of the RDS automated snapshot for this DB instance is in progress in a different AWS Region.

C.

The RDS maintenance window is not configured.

D.

The RDS DB instance is in the STORAGE_FULL state.

E.

RDS event notifications have not been enabled.

Question # 33

A database specialist must create nightly backups of an Amazon DynamoDB table in a mission-critical workload as part of a disaster recovery strategy.

Which backup methodology should the database specialist use to MINIMIZE management overhead?

A.

Install the AWS CLI on an Amazon EC2 instance. Write a CLI command that creates a backup of the DynamoDB table. Create a scheduled job or task that executes the command on a nightly basis.

B.

Create an AWS Lambda function that creates a backup of the DynamoDB table. Create an Amazon CloudWatch Events rule that executes the Lambda function on a nightly basis.

C.

Create a backup plan using AWS Backup, specify a backup frequency of every 24 hours, and give the plan a nightly backup window.

D.

Configure DynamoDB backup and restore for an on-demand backup frequency of every 24 hours.
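
For reference, AWS Backup (option C) manages the schedule, retention, and backup windows without custom code. A hedged boto3 sketch of a nightly plan (vault name, role ARN, table ARN, and cron expression are illustrative):

import boto3

backup = boto3.client("backup")

plan = backup.create_backup_plan(
    BackupPlan={
        "BackupPlanName": "dynamodb-nightly",
        "Rules": [
            {
                "RuleName": "nightly",
                "TargetBackupVaultName": "Default",
                "ScheduleExpression": "cron(0 5 * * ? *)",  # 05:00 UTC daily
                "StartWindowMinutes": 60,
            }
        ],
    }
)

# Assign the DynamoDB table to the plan.
backup.create_backup_selection(
    BackupPlanId=plan["BackupPlanId"],
    BackupSelection={
        "SelectionName": "critical-table",
        "IamRoleArn": "arn:aws:iam::111122223333:role/AWSBackupRole",
        "Resources": ["arn:aws:dynamodb:us-east-1:111122223333:table/critical"],
    },
)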

Question # 34

A large IT hardware manufacturing company wants to deploy a MySQL database solution in the AWS Cloud. The solution should quickly create copies of the company's production databases for test purposes. The solution must deploy the test databases in minutes, and the test data should match the latest production data as closely as possible. Developers must also be able to make changes in the test database and delete the instances afterward.

Which solution meets these requirements?

A.

Leverage Amazon RDS for MySQL with write-enabled replicas running on Amazon EC2. Create the test copies using a mysqldump backup from the RDS for MySQL DB instances and importing them into the new EC2 instances.

B.

Leverage Amazon Aurora MySQL. Use database cloning to create multiple test copies of the production DB clusters.

C.

Leverage Amazon Aurora MySQL. Restore previous production DB instance snapshots into new test copies of Aurora MySQL DB clusters to allow them to make changes.

D.

Leverage Amazon RDS for MySQL. Use database cloning to create multiple developer copies of the production DB instance.

Question # 35

A social media company recently launched a new feature that gives users the ability to share live feeds of their daily activities with their followers. The company has an Amazon RDS for MySQL DB instance that stores data about follower engagement.

After the new feature launched, the company noticed high CPU utilization and high database latency during reads and writes. The company wants to implement a solution that will identify the source of the high CPU utilization.

Which solution will meet these requirements with the LEAST administrative oversight?

A.

Use Amazon DevOps Guru insights

B.

Use AWS CloudTrail

C.

Use Amazon CloudWatch Logs

D.

Use Amazon Aurora Database Activity Streams

Question # 36

An online retailer uses Amazon DynamoDB for its product catalog and order data. Some popular items have led to frequently accessed keys in the data, and the company is using DynamoDB Accelerator (DAX) as the caching solution to cater to the frequently accessed keys. As the number of popular products is growing, the company realizes that more items need to be cached. The company observes a high cache miss rate and needs a solution to address this issue.

What should a database specialist do to accommodate the changing requirements for DAX?

A.

Increase the number of nodes in the existing DAX cluster.

B.

Create a new DAX cluster with more nodes. Change the DAX endpoint in the application to point to the new cluster.

C.

Create a new DAX cluster using a larger node type. Change the DAX endpoint in the application to point to the new cluster.

D.

Modify the node type in the existing DAX cluster.

Question # 37

A financial services company is using AWS Database Migration Service (AWS DMS) to migrate its databases from on-premises to AWS. A database administrator is working on replicating a database to AWS from on-premises using full load and change data capture (CDC). During the CDC replication, the database administrator observed that the target latency was high and slowly increasing.

What could be the root causes for this high target latency? (Select TWO.)

A.

There was ongoing maintenance on the replication instance.

B.

The source endpoint was changed by modifying the task.

C.

Loopback changes had affected the source and target instances.

D.

There was no primary key or index in the target database.

E.

There were resource bottlenecks in the replication instance.

Question # 38

A global company is developing an application across multiple AWS Regions. The company needs a database solution with low latency in each Region and automatic disaster recovery. The database must be deployed in an active-active configuration with automatic data synchronization between Regions.

Which solution will meet these requirements with the LOWEST latency?

A.

Amazon RDS with cross-Region read replicas

B.

Amazon DynamoDB global tables

C.

Amazon Aurora global database

D.

Amazon Athena and Amazon S3 with S3 Cross Region Replication

Question # 39

A Database Specialist is creating Amazon DynamoDB tables, Amazon CloudWatch alarms, and associated infrastructure for an Application team using a development AWS account. The team wants a deployment method that will standardize the core solution components while managing environment-specific settings separately, and wants to minimize rework due to configuration errors.

Which process should the Database Specialist recommend to meet these requirements?

A.

Organize common and environmental-specific parameters hierarchically in the AWS Systems Manager Parameter Store, then reference the parameters dynamically from an AWS CloudFormation template. Deploy the CloudFormation stack using the environment name as a parameter.

B.

Create a parameterized AWS CloudFormation template that builds the required objects. Keep separate environment parameter files in separate Amazon S3 buckets. Provide an AWS CLI command that deploys the CloudFormation stack directly referencing the appropriate parameter bucket.

C.

Create a parameterized AWS CloudFormation template that builds the required objects. Import the template into the CloudFormation interface in the AWS Management Console. Make the required changes to the parameters and deploy the CloudFormation stack.

D.

Create an AWS Lambda function that builds the required objects using an AWS SDK. Set the required parameter values in a test event in the Lambda console for each environment that the Application team can modify, as needed. Deploy the infrastructure by triggering the test event in the console.

Question # 40

A media company hosts a highly available news website on AWS but needs to improve its page load time, especially during very popular news releases. Once a news page is published, it is very unlikely to change unless an error is identified. The company has decided to use Amazon ElastiCache.

What is the recommended strategy for this use case?

A.

Use ElastiCache for Memcached with write-through and long time to live (TTL)

B.

Use ElastiCache for Redis with lazy loading and short time to live (TTL)

C.

Use ElastiCache for Memcached with lazy loading and short time to live (TTL)

D.

Use ElastiCache for Redis with write-through and long time to live (TTL)

Question # 41

A large financial services company uses Amazon ElastiCache for Redis for its new application that has a global user base. A database administrator must develop a caching solution that will be available across AWS Regions and include low-latency replication and failover capabilities for disaster recovery (DR). The company's security team requires the encryption of cross-Region data transfers.

Which solution meets these requirements with the LEAST amount of operational effort?

A.

Enable cluster mode in ElastiCache for Redis. Then create multiple clusters across Regions and replicate the cache data by using AWS Database Migration Service (AWS DMS). Promote a cluster in the failover Region to handle production traffic when DR is required.

B.

Create a global datastore in ElastiCache for Redis. Then create replica clusters in two other Regions. Promote one of the replica clusters as primary when DR is required.

C.

Disable cluster mode in ElastiCache for Redis. Then create multiple replication groups across Regions and replicate the cache data by using AWS Database Migration Service (AWS DMS). Promote a replication group in the failover Region to primary when DR is required.

D.

Create a snapshot of ElastiCache for Redis in the primary Region and copy it to the failover Region. Use the snapshot to restore the cluster from the failover Region when DR is required.

Question # 42

Recently, a financial institution created a portfolio management service. The application's backend is powered by Amazon Aurora, which is MySQL-compatible.

The firm requires a recovery time objective (RTO) of five minutes and a recovery point objective (RPO) of five minutes. A database professional must create a disaster recovery system that is both efficient and has a low replication latency.

How should the database professional tackle these requirements?

A.

Configure AWS Database Migration Service (AWS DMS) and create a replica in a different AWS Region.

B.

Configure an Amazon Aurora global database and add a different AWS Region.

C.

Configure a binlog and create a replica in a different AWS Region.

D.

Configure a cross-Region read replica.

Question # 43

An online bookstore uses Amazon Aurora MySQL as its backend database. After the online bookstore added a popular book to the online catalog, customers began reporting intermittent timeouts on the checkout page. A database specialist determined that increased load was causing locking contention on the database. The database specialist wants to automatically detect and diagnose database performance issues and to resolve bottlenecks faster.

Which solution will meet these requirements?

A.

Turn on Performance Insights for the Aurora MySQL database. Configure and turn on Amazon DevOps Guru for RDS.

B.

Create a CPU usage alarm. Select the CPU utilization metric for the DB instance. Create an Amazon Simple Notification Service (Amazon SNS) topic to notify the database specialist when CPU utilization is over 75%.

C.

Use the Amazon RDS query editor to get the process ID of the query that is causing the database to lock. Run a command to end the process.

D.

Use the SELECT INTO OUTFILE S3 statement to query data from the database. Save the data directly to an Amazon S3 bucket. Use Amazon Athena to analyze the files for long-running queries.

Question # 44

A business is transferring a database from one AWS Region to another using an Amazon RDS for SQL Server DB instance. The organization wishes to keep database downtime to a minimum throughout the transfer.

Which migration strategy should the organization use for this cross-regional move?

A.

Back up the source database using native backup to an Amazon S3 bucket in the same Region. Then restore the backup in the target Region.

B.

Back up the source database using native backup to an Amazon S3 bucket in the same Region. Use Amazon S3 Cross-Region Replication to copy the backup to an S3 bucket in the target Region. Then restore the backup in the target Region.

C.

Configure AWS Database Migration Service (AWS DMS) to replicate data between the source and the target databases. Once the replication is in sync, terminate the DMS task.

D.

Add an RDS for SQL Server cross-Region read replica in the target Region. Once the replication is in sync, promote the read replica to master.

Question # 45

A pharmaceutical company uses Amazon Quantum Ledger Database (Amazon QLDB) to store its clinical trial data records. The company has an application that runs as AWS Lambda functions. The application is hosted in the private subnet in a VPC.

The application does not have internet access and needs to read some of the clinical data records. The company is concerned that traffic between the QLDB ledger and the VPC could leave the AWS network. The company needs to secure access to the QLDB ledger and allow the VPC traffic to have read-only access.

Which security strategy should a database specialist implement to meet these requirements?

A.

Move the QLDB ledger into a private database subnet inside the VPC. Run the Lambda functions inside the same VPC in an application private subnet. Ensure that the VPC route table allows read-only flow from the application subnet to the database subnet.

B.

Create an AWS PrivateLink VPC endpoint for the QLDB ledger. Attach a VPC policy to the VPC endpoint to allow read-only traffic for the Lambda functions that run inside the VPC.

C.

Add a security group to the QLDB ledger to allow access from the private subnets inside the VPC where the Lambda functions that access the QLDB ledger are running.

D.

Create a VPN connection to ensure pairing of the private subnet where the Lambda functions are running with the private subnet where the QLDB ledger is deployed.

Question # 46

A Database Specialist migrated an existing production MySQL database from on-premises to an Amazon RDS for MySQL DB instance. However, after the migration, the database needed to be encrypted at rest using AWS KMS. Due to the size of the database, reloading the data into an encrypted database would be too time-consuming, so it is not an option.

How should the Database Specialist satisfy this new requirement?

A.

Create a snapshot of the unencrypted RDS DB instance. Create an encrypted copy of the unencrypted snapshot. Restore the encrypted snapshot copy.

B.

Modify the RDS DB instance. Enable the AWS KMS encryption option that leverages the AWS CLI.

C.

Restore an unencrypted snapshot into a MySQL RDS DB instance that is encrypted.

D.

Create an encrypted read replica of the RDS DB instance. Promote it to be the master.
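
For reference, option A is the documented snapshot-copy path for encrypting an existing unencrypted instance. A boto3 sketch of the three steps, waiting for each snapshot to become available (identifiers and the KMS key alias are illustrative):

import boto3

rds = boto3.client("rds")
waiter = rds.get_waiter("db_snapshot_available")

# 1. Snapshot the unencrypted instance.
rds.create_db_snapshot(
    DBInstanceIdentifier="prod-mysql",
    DBSnapshotIdentifier="prod-mysql-plain",
)
waiter.wait(DBSnapshotIdentifier="prod-mysql-plain")

# 2. Copy the snapshot with a KMS key to produce an encrypted copy.
rds.copy_db_snapshot(
    SourceDBSnapshotIdentifier="prod-mysql-plain",
    TargetDBSnapshotIdentifier="prod-mysql-encrypted",
    KmsKeyId="alias/rds-key",
)
waiter.wait(DBSnapshotIdentifier="prod-mysql-encrypted")

# 3. Restore a new, encrypted instance from the encrypted copy.
rds.restore_db_instance_from_db_snapshot(
    DBInstanceIdentifier="prod-mysql-enc",
    DBSnapshotIdentifier="prod-mysql-encrypted",
)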

Question # 47

An online retail company is planning a multi-day flash sale that must support processing of up to 5,000 orders per second. The number of orders and exact schedule for the sale will vary each day. During the sale, approximately 10,000 concurrent users will look at the deals before buying items. Outside of the sale, the traffic volume is very low. The acceptable performance for read/write queries should be under 25 ms. Order items are about 2 KB in size and have a unique identifier. The company requires the most cost-effective solution that will automatically scale and is highly available.

Which solution meets these requirements?

A.

Amazon DynamoDB with on-demand capacity mode

B.

Amazon Aurora with one writer node and an Aurora Replica with the parallel query feature enabled

C.

Amazon DynamoDB with provisioned capacity mode with 5,000 write capacity units (WCUs) and 10,000 read capacity units (RCUs)

D.

Amazon Aurora with one writer node and two cross-Region Aurora Replicas

Question # 48

A company is launching a new Amazon RDS for MySQL Multi-AZ DB instance to be used as a data store for a custom-built application. After a series of tests with point-in-time recovery disabled, the company decides that it must have point-in-time recovery reenabled before using the DB instance to store production data.

What should a database specialist do so that point-in-time recovery can be successful?

A.

Enable binary logging in the DB parameter group used by the DB instance.

B.

Modify the DB instance and enable audit logs to be pushed to Amazon CloudWatch Logs.

C.

Modify the DB instance and configure a backup retention period.

D.

Set up a scheduled job to create manual DB instance snapshots.
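
For reference, point-in-time recovery depends on automated backups, and a backup retention period of 0 disables them; option C turns them back on. A one-call boto3 sketch (identifier and retention value are illustrative):

import boto3

boto3.client("rds").modify_db_instance(
    DBInstanceIdentifier="custom-app-db",
    BackupRetentionPeriod=7,   # days of automated backups, i.e. the PITR window
    ApplyImmediately=True,
)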

Question # 49

A financial company wants to store sensitive user data in an Amazon Aurora PostgreSQL DB cluster. The database will be accessed by multiple applications across the company. The company has mandated that all communications to the database be encrypted and the server identity must be validated. Any non-SSL-based connections should be disallowed access to the database.

Which solution addresses these requirements?

A.

Set the rds.force_ssl=0 parameter in DB parameter groups. Download and use the Amazon RDS certificate bundle and configure the PostgreSQL connection string with sslmode=allow.

B.

Set the rds.force_ssl=1 parameter in DB parameter groups. Download and use the Amazon RDS certificate bundle and configure the PostgreSQL connection string with sslmode=disable.

C.

Set the rds.force_ssl=0 parameter in DB parameter groups. Download and use the Amazon RDS certificate bundle and configure the PostgreSQL connection string with sslmode=verify-ca.

D.

Set the rds.force_ssl=1 parameter in DB parameter groups. Download and use the Amazon RDS certificate bundle and configure the PostgreSQL connection string with sslmode=verify-full.
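
For reference, with rds.force_ssl=1 set on the server side, sslmode=verify-full makes the client validate both the certificate chain and the host name. A client-side sketch using psycopg2 (endpoint, database, credentials, and bundle path are illustrative):

import psycopg2

# verify-full checks the server certificate against the RDS CA bundle
# and confirms the host name matches, which validates server identity.
conn = psycopg2.connect(
    host="mydb.cluster-abc123.us-east-1.rds.amazonaws.com",
    dbname="appdb",
    user="app_user",
    password="...",  # fetch from a secret store in practice
    sslmode="verify-full",
    sslrootcert="/opt/certs/global-bundle.pem",  # downloaded RDS CA bundle
)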

Question # 50

A Database Specialist is performing a proof of concept with Amazon Aurora using a small instance to confirm a simple database behavior. When loading a large dataset and creating the index, the Database Specialist encounters the following error message from Aurora:

ERROR: could not write block 7507718 of temporary file: No space left on device

What is the cause of this error and what should the Database Specialist do to resolve this issue?

A.

The scaling of Aurora storage cannot catch up with the data loading. The Database Specialist needs to modify the workload to load the data slowly.

B.

The scaling of Aurora storage cannot catch up with the data loading. The Database Specialist needs to enable Aurora storage scaling.

C.

The local storage used to store temporary tables is full. The Database Specialist needs to scale up the instance.

D.

The local storage used to store temporary tables is full. The Database Specialist needs to enable local storage scaling.

Question # 51

A company is running its line of business application on AWS, which uses Amazon RDS for MySQL at the persistent data store. The company wants to minimize downtime when it migrates the database to Amazon Aurora.

Which migration method should a Database Specialist use?

A.

Take a snapshot of the RDS for MySQL DB instance and create a new Aurora DB cluster with the option to migrate snapshots.

B.

Make a backup of the RDS for MySQL DB instance using the mysqldump utility, create a new Aurora DB cluster, and restore the backup.

C.

Create an Aurora Replica from the RDS for MySQL DB instance and promote the Aurora DB cluster.

D.

Create a clone of the RDS for MySQL DB instance and promote the Aurora DB cluster.

Question # 52

A Database Specialist is creating a new Amazon Neptune DB cluster, and is attempting to load data from Amazon S3 into the Neptune DB cluster using the Neptune bulk loader API. The Database Specialist receives the following error:

“Unable to connect to s3 endpoint. Provided source = s3://mybucket/graphdata/ and region = us-east-1. Please verify your S3 configuration.”

Which combination of actions should the Database Specialist take to troubleshoot the problem? (Choose two.)

A.

Check that Amazon S3 has an IAM role granting read access to Neptune

B.

Check that an Amazon S3 VPC endpoint exists

C.

Check that a Neptune VPC endpoint exists

D.

Check that Amazon EC2 has an IAM role granting read access to Amazon S3

E.

Check that Neptune has an IAM role granting read access to Amazon S3

Question # 53

A Database Specialist has migrated an on-premises Oracle database to Amazon Aurora PostgreSQL. The schema and the data have been migrated successfully. The on-premises database server was also being used to run database maintenance cron jobs written in Python to perform tasks including data purging and generating data exports. The logs for these jobs show that, most of the time, the jobs completed within 5 minutes, but a few jobs took up to 10 minutes to complete. These maintenance jobs need to be set up for Aurora PostgreSQL.

How can the Database Specialist schedule these jobs so the setup requires minimal maintenance and provides high availability?

A.

Create cron jobs on an Amazon EC2 instance to run the maintenance jobs following the required schedule.

B.

Connect to the Aurora host and create cron jobs to run the maintenance jobs following the required schedule.

C.

Create AWS Lambda functions to run the maintenance jobs and schedule them with Amazon CloudWatch Events.

D.

Create the maintenance job using the Amazon CloudWatch job scheduling plugin.

Question # 54

A Database Specialist is designing a new database infrastructure for a ride hailing application. The application data includes a ride tracking system that stores GPS coordinates for all rides. Real-time statistics and metadata lookups must be performed with high throughput and microsecond latency. The database should be fault tolerant with minimal operational overhead and development effort.

Which solution meets these requirements in the MOST efficient way?

A.

Use Amazon RDS for MySQL as the database and use Amazon ElastiCache

B.

Use Amazon DynamoDB as the database and use DynamoDB Accelerator

C.

Use Amazon Aurora MySQL as the database and use Aurora’s buffer cache

D.

Use Amazon DynamoDB as the database and use Amazon API Gateway

Question # 55

A large financial services company requires that all data be encrypted in transit. A Developer is attempting to connect to an Amazon RDS DB instance using the company VPC for the first time with credentials provided by a Database Specialist. Other members of the Development team can connect, but this user is consistently receiving an error indicating a communications link failure. The Developer asked the Database Specialist to reset the password a number of times, but the error persists.

Which step should be taken to troubleshoot this issue?

A.

Ensure that the database option group for the RDS DB instance allows ingress from the Developer machine’s IP address

B.

Ensure that the RDS DB instance’s subnet group includes a public subnet to allow the Developer to connect

C.

Ensure that the RDS DB instance has not reached its maximum connections limit

D.

Ensure that the connection is using SSL and is addressing the port where the RDS DB instance is listening for encrypted connections

Question # 56

After restoring an Amazon RDS snapshot from 3 days ago, a company’s Development team cannot connect to the restored RDS DB instance. What is the likely cause of this problem?

A.

The restored DB instance does not have Enhanced Monitoring enabled

B.

The production DB instance is using a custom parameter group

C.

The restored DB instance is using the default security group

D.

The production DB instance is using a custom option group

Question # 57

A company is running an on-premises application comprised of a web tier, an application tier, and a MySQL database tier. The database is used primarily during business hours with random activity peaks throughout the day. A database specialist needs to improve the availability and reduce the cost of the MySQL database tier as part of the company’s migration to AWS.

Which MySQL database option would meet these requirements?

A.

Amazon RDS for MySQL with Multi-AZ

B.

Amazon Aurora Serverless MySQL cluster

C.

Amazon Aurora MySQL cluster

D.

Amazon RDS for MySQL with read replica

Question # 58

Application developers have reported that an application is running slower as more users are added. The application database is running on an Amazon Aurora DB cluster with an Aurora Replica. The application is written to take advantage of read scaling through reader endpoints. A database specialist looks at the performance metrics of the database and determines that, as new users were added to the database, the primary instance CPU utilization steadily increased while the Aurora Replica CPU utilization remained steady.

How can the database specialist improve database performance while ensuring minimal downtime?

A.

Modify the Aurora DB cluster to add more replicas until the overall load stabilizes. Then, reduce the number of replicas once the application meets service level objectives.

B.

Modify the primary instance to a larger instance size that offers more CPU capacity.

C.

Modify a replica to a larger instance size that has more CPU capacity. Then, promote the modified replica.

D.

Restore the Aurora DB cluster to one that has an instance size with more CPU capacity. Then, swap the names of the old and new DB clusters.

Question # 59

A company is using Amazon RDS for MySQL to redesign its business application. A Database Specialist has noticed that the Development team is restoring their MySQL database multiple times a day when Developers make mistakes in their schema updates. The Developers sometimes need to wait hours for the restores to complete.

Multiple team members are working on the project, making it difficult to find the correct restore point for each mistake.

Which approach should the Database Specialist take to reduce downtime?

A.

Deploy multiple read replicas and have the team members make changes to separate replica instances

B.

Migrate to Amazon RDS for SQL Server, take a snapshot, and restore from the snapshot

C.

Migrate to Amazon Aurora MySQL and enable the Aurora Backtrack feature

D.

Enable the Amazon RDS for MySQL Backtrack feature

Question # 60

A business's production database is hosted on a single-node Amazon RDS for MySQL DB instance. The database instance is hosted in a United States AWS Region.

A week before a significant sales event, a fresh database maintenance update is released. The maintenance update has been designated as necessary. The firm wants to minimize the database instance's downtime and requests that a database expert make the database instance highly available until the sales event concludes.

Which solution will satisfy these criteria?

A.

Defer the maintenance update until the sales event is over.

B.

Create a read replica with the latest update. Initiate a failover before the sales event.

C.

Create a read replica with the latest update. Transfer all read-only traffic to the read replica during the sales event.

D.

Convert the DB instance into a Multi-AZ deployment. Apply the maintenance update.

Question # 61

A company has an application that uses an Amazon DynamoDB table to store user data. Every morning, a single-threaded process calls the DynamoDB API Scan operation to scan the entire table and generate a critical start-of-day report for management. A successful marketing campaign recently doubled the number of items in the table, and now the process takes too long to run and the report is not generated in time.

A database specialist needs to improve the performance of the process. The database specialist notes that, when the process is running, 15% of the table’s provisioned read capacity units (RCUs) are being used.

What should the database specialist do?

A.

Enable auto scaling for the DynamoDB table.

B.

Use four threads and parallel DynamoDB API Scan operations.

C.

Double the table’s provisioned RCUs.

D.

Set the Limit and Offset parameters before every call to the API.

Full Access
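
A parallel Scan (option B) splits the table into segments that workers read concurrently, using otherwise idle provisioned capacity. A minimal boto3 sketch with a hypothetical table name and four threads:

    import boto3
    from concurrent.futures import ThreadPoolExecutor

    TOTAL_SEGMENTS = 4
    table = boto3.resource("dynamodb").Table("start-of-day-report")  # hypothetical

    def scan_segment(segment):
        # Each worker scans only its own slice of the table's key space.
        items, kwargs = [], {"Segment": segment, "TotalSegments": TOTAL_SEGMENTS}
        while True:
            page = table.scan(**kwargs)
            items.extend(page["Items"])
            if "LastEvaluatedKey" not in page:
                return items
            kwargs["ExclusiveStartKey"] = page["LastEvaluatedKey"]

    with ThreadPoolExecutor(max_workers=TOTAL_SEGMENTS) as pool:
        all_items = [i for chunk in pool.map(scan_segment, range(TOTAL_SEGMENTS))
                     for i in chunk]
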
Question # 62

A financial company is running an Amazon Redshift cluster for one of its data warehouse solutions. The company needs to generate connection logs, user logs, and user activity logs. The company also must make these logs available for future analysis.

Which combination of steps should a database specialist take to meet these requirements? (Choose two.)

A.

Edit the database configuration of the cluster by enabling audit logging. Direct the logging to a specified log group in Amazon CloudWatch Logs.

B.

Edit the database configuration of the cluster by enabling audit logging. Direct the logging to a specified Amazon S3 bucket

C.

Modify the cluster by enabling continuous delivery of AWS CloudTrail logs to Amazon S3.

D.

Create a new parameter group with the enable_user_activity_logging parameter set to true. Configure the cluster to use the new parameter group.

E.

Modify the system table to enable logging for each user.

Full Access
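
Redshift audit logging to Amazon S3 plus the enable_user_activity_logging parameter (options B and D) could be wired up as follows; a minimal boto3 sketch with hypothetical cluster, bucket, and parameter group names:

    import boto3

    redshift = boto3.client("redshift")

    # Option B: direct connection, user, and user activity logs to an S3 bucket.
    redshift.enable_logging(
        ClusterIdentifier="dw-cluster",         # hypothetical cluster name
        BucketName="dw-audit-logs",             # hypothetical bucket
        S3KeyPrefix="audit/",
    )

    # Option D: user activity logging also requires this parameter to be on.
    redshift.create_cluster_parameter_group(
        ParameterGroupName="dw-audit-params",   # hypothetical group name
        ParameterGroupFamily="redshift-1.0",
        Description="Audit logging parameters",
    )
    redshift.modify_cluster_parameter_group(
        ParameterGroupName="dw-audit-params",
        Parameters=[{"ParameterName": "enable_user_activity_logging",
                     "ParameterValue": "true"}],
    )
    redshift.modify_cluster(
        ClusterIdentifier="dw-cluster",
        ClusterParameterGroupName="dw-audit-params",
    )
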
Question # 63

A database professional is developing an application that will respond to single-instance requests. The program will query large amounts of client data and present the results to end users.

These reports may include a variety of fields. The database specialist wants to enable users to query the database using any of the fields offered.

During peak periods, the database's traffic volume will be significant yet variable. However, the database will see little activity for the rest of the day.

Which approach will be the most cost-effective in meeting these requirements?

A.

Amazon DynamoDB with provisioned capacity mode and auto scaling

B.

Amazon DynamoDB with on-demand capacity mode

C.

Amazon Aurora with auto scaling enabled

D.

Amazon Aurora in a serverless mode

Full Access
Question # 64

A company is running a mobile app that has a backend database in Amazon DynamoDB. The app experiences sudden increases and decreases in activity throughout the day. The company's operations team notices that DynamoDB read and write requests are being throttled at different times, resulting in a negative customer experience.

Which solution will solve the throttling issue without requiring changes to the app?

A.

Add a DynamoDB table in a secondary AWS Region. Populate the additional table by using DynamoDB Streams.

B.

Deploy an Amazon ElastiCache cluster in front of the DynamoDB table.

C.

Use on-demand capacity mode for the DynamoDB table.

D.

Use DynamoDB Accelerator (DAX).

Full Access
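
Switching a table to on-demand capacity mode (option C) is a one-line table update; a minimal boto3 sketch with a hypothetical table name:

    import boto3

    dynamodb = boto3.client("dynamodb")

    # Switch from provisioned to on-demand capacity; DynamoDB then absorbs
    # sudden spikes without throttling and without any application change.
    dynamodb.update_table(
        TableName="mobile-app-backend",   # hypothetical table name
        BillingMode="PAY_PER_REQUEST",
    )
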
Question # 65

An AWS CloudFormation stack that included an Amazon RDS DB instance was accidentally deleted and recent data was lost. A Database Specialist needs to add RDS settings to the CloudFormation template to reduce the chance of accidental instance data loss in the future.

Which settings will meet this requirement? (Choose three.)

A.

Set DeletionProtection to True

B.

Set MultiAZ to True

C.

Set TerminationProtection to True

D.

Set DeleteAutomatedBackups to False

E.

Set DeletionPolicy to Delete

F.

Set DeletionPolicy to Retain

Full Access
Question # 66

A company's application development team wants to share an automated snapshot of its Amazon RDS database with another team. The database is encrypted with a custom AWS Key Management Service (AWS KMS) key under the "WeShare" AWS account. The application development team needs to share the DB snapshot under the "WeReceive" AWS account.

Which combination of actions must the application development team take to meet these requirements? (Choose two.)

A.

Add access from the "WeReceive" account to the custom AWS KMS key policy of the sharing team.

B.

Make a copy of the DB snapshot, and set the encryption option to disable.

C.

Share the DB snapshot by setting the DB snapshot visibility option to public.

D.

Make a copy of the DB snapshot, and set the encryption option to enable.

E.

Share the DB snapshot by using the default AWS KMS encryption key.

Full Access
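
Automated snapshots cannot be shared directly, so in practice the snapshot is first copied to a manual snapshot and the receiving account is then granted restore access; the custom KMS key's policy must separately allow the "WeReceive" account (option A). A minimal boto3 sketch with hypothetical snapshot names and placeholder account and key IDs:

    import boto3

    rds = boto3.client("rds")

    # Copy the automated snapshot to a manual one; the copy stays encrypted
    # under the custom KMS key.
    rds.copy_db_snapshot(
        SourceDBSnapshotIdentifier="rds:weshare-db-2024-01-01-00-00",  # hypothetical
        TargetDBSnapshotIdentifier="weshare-db-manual-copy",
        KmsKeyId="arn:aws:kms:us-east-1:111111111111:key/EXAMPLE",     # placeholder
    )

    # Grant the receiving account restore access to the manual snapshot.
    rds.modify_db_snapshot_attribute(
        DBSnapshotIdentifier="weshare-db-manual-copy",
        AttributeName="restore",
        ValuesToAdd=["222222222222"],   # placeholder "WeReceive" account ID
    )
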
Question # 67

A Database Specialist modified an existing parameter group currently associated with a production Amazon RDS for SQL Server Multi-AZ DB instance. The change is associated with a static parameter type, which controls the number of user connections allowed on the most critical RDS SQL Server DB instance for the company. This change has been approved for a specific maintenance window to help minimize the impact on users.

How should the Database Specialist apply the parameter group change for the DB instance?

A.

Select the option to apply the change immediately

B.

Allow the preconfigured RDS maintenance window for the given DB instance to control when the change is applied

C.

Apply the change manually by rebooting the DB instance during the approved maintenance window

D.

Reboot the secondary Multi-AZ DB instance

Full Access
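
Applying a static parameter change (option C) means staging the parameter with the pending-reboot apply method and then rebooting during the approved window; a minimal boto3 sketch with hypothetical names (the parameter name here is assumed):

    import boto3

    rds = boto3.client("rds")

    # Static parameters only take effect after a reboot, so stage the change...
    rds.modify_db_parameter_group(
        DBParameterGroupName="sqlserver-custom-params",   # hypothetical group
        Parameters=[{
            "ParameterName": "user connections",          # assumed parameter name
            "ParameterValue": "500",
            "ApplyMethod": "pending-reboot",
        }],
    )

    # ...then reboot manually during the approved maintenance window.
    rds.reboot_db_instance(DBInstanceIdentifier="prod-sqlserver")   # hypothetical
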
Question # 68

A company wants to build a new invoicing service for its cloud-native application on AWS. The company has a small development team and wants to focus on service feature development and minimize operations and maintenance as much as possible. The company expects the service to handle billions of requests and millions of new records every day. The service feature requirements, including data access patterns, are well-defined. The service has an availability target of 99.99% with a millisecond-level latency requirement. The database for the service will be the system of record for invoicing data.

Which database solution meets these requirements at the LOWEST cost?

A.

Amazon Neptune

B.

Amazon Aurora PostgreSQL Serverless

C.

Amazon RDS for PostgreSQL

D.

Amazon DynamoDB

Full Access
Question # 69

A company is running a business-critical application on premises by using Microsoft SQL Server. A database specialist is planning to migrate the instance with several databases to the AWS Cloud. The database specialist will use SQL Server Standard edition hosted on Amazon EC2 Windows instances. The solution must provide high availability and must avoid a single point of failure in the SQL Server deployment architecture.

Which solution will meet these requirements?

A.

Create Amazon RDS for SQL Server Multi-AZ DB instances. Use Amazon S3 as a shared storage option to host the databases.

B.

Set up Always On Failover Cluster Instances as a single SQL Server instance. Use Multi-AZ Amazon FSx for Windows File Server as a shared storage option to host the databases.

C.

Set up Always On availability groups to group one or more user databases that fail over together across multiple SQL Server instances. Use Multi-AZ Amazon FSx for Windows File Server as a shared storage option to host the databases.

D.

Create an Application Load Balancer to distribute database traffic across multiple EC2 instances in multiple Availability Zones. Use Amazon S3 as a shared storage option to host the databases.

Full Access
Question # 70

A company is developing an application that performs intensive in-memory operations on advanced data structures such as sorted sets. The application requires sub-millisecond latency for reads and writes. The application occasionally must run a group of commands as an ACID-compliant operation. A database specialist is setting up the database for this application. The database specialist needs the ability to create a new database cluster from the latest backup of the production cluster.

Which type of cluster should the database specialist create to meet these requirements?

A.

Amazon ElastiCache for Memcached

B.

Amazon Neptune

C.

Amazon ElastiCache for Redis

D.

Amazon DynamoDB Accelerator (DAX)

Full Access
Question # 71

A company hosts an on-premises Microsoft SQL Server Enterprise edition database with Transparent Data Encryption (TDE) enabled. The database is 20 TB in size and includes sparse tables. The company needs to migrate the database to Amazon RDS for SQL Server during a maintenance window that is scheduled for an upcoming weekend. Data-at-rest encryption must be enabled for the target DB instance.

Which combination of steps should the company take to migrate the database to AWS in the MOST operationally efficient manner? (Choose two.)

A.

Use AWS Database Migration Service (AWS DMS) to migrate from the on-premises source database to the RDS for SQL Server target database.

B.

Disable TDE. Create a database backup without encryption. Copy the backup to Amazon S3.

C.

Restore the backup to the RDS for SQL Server DB instance. Enable TDE for the RDS for SQL Server DB instance.

D.

Set up an AWS Snowball Edge device. Copy the database backup to the device. Send the device to AWS. Restore the database from Amazon S3.

E.

Encrypt the data with client-side encryption before transferring the data to Amazon RDS.

Full Access
Question # 72

A financial services company is developing a shared data service that supports different applications from throughout the company. A Database Specialist designed a solution to leverage Amazon ElastiCache for Redis with cluster mode enabled to enhance performance and scalability. The cluster is configured to listen on port 6379.

Which combination of steps should the Database Specialist take to secure the cache data and protect it from unauthorized access? (Choose three.)

A.

Enable in-transit and at-rest encryption on the ElastiCache cluster.

B.

Ensure that Amazon CloudWatch metrics are configured in the ElastiCache cluster.

C.

Ensure the security group for the ElastiCache cluster allows all inbound traffic from itself and inbound traffic on TCP port 6379 from trusted clients only.

D.

Create an IAM policy to allow the application service roles to access all ElastiCache API actions.

E.

Ensure the security group for the ElastiCache clients authorizes inbound TCP port 6379 and port 22 traffic from the trusted ElastiCache cluster’s security group.

F.

Ensure the cluster is created with the auth-token parameter and that the parameter is used in all subsequent commands.

Full Access
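
Creating the cluster with in-transit and at-rest encryption plus a Redis AUTH token (options A and F) can be expressed in one call; a minimal boto3 sketch with hypothetical identifiers and a placeholder token:

    import boto3

    elasticache = boto3.client("elasticache")

    # Cluster mode enabled, encrypted in transit and at rest, with Redis AUTH.
    elasticache.create_replication_group(
        ReplicationGroupId="shared-data-cache",          # hypothetical name
        ReplicationGroupDescription="Shared data service cache",
        Engine="redis",
        CacheNodeType="cache.r6g.large",                 # assumed node type
        NumNodeGroups=2,
        ReplicasPerNodeGroup=1,
        TransitEncryptionEnabled=True,                   # option A
        AtRestEncryptionEnabled=True,                    # option A
        AuthToken="replace-with-a-long-random-token",    # option F, placeholder
        SecurityGroupIds=["sg-0123456789abcdef0"],       # option C, placeholder
    )
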
Question # 73

An advertising company is developing a backend for a bidding platform. The company needs a cost-effective datastore solution that will accommodate a sudden increase in the volume of write transactions. The database also needs to make data changes available in a near real-time data stream.

Which solution will meet these requirements?

A.

Amazon Aurora MySQL Multi-AZ DB cluster

B.

Amazon Keyspaces (for Apache Cassandra)

C.

Amazon DynamoDB table with DynamoDB auto scaling

D.

Amazon DocumentDB (with MongoDB compatibility) cluster with a replica instance in a second Availability Zone

Full Access
Question # 74

A company with 500,000 employees needs to supply its employee list to an application used by human resources. Every 30 minutes, the data is exported using the LDAP service to load into a new Amazon DynamoDB table. The data model has a base table with Employee ID for the partition key and a global secondary index with Organization ID as the partition key.

While importing the data, a database specialist receives ProvisionedThroughputExceededException errors. After increasing the provisioned write capacity units (WCUs) to 50,000, the specialist receives the same errors. Amazon CloudWatch metrics show a consumption of 1,500 WCUs.

What should the database specialist do to address the issue?

A.

Change the data model to avoid hot partitions in the global secondary index.

B.

Enable auto scaling for the table to automatically increase write capacity during bulk imports.

C.

Modify the table to use on-demand capacity instead of provisioned capacity.

D.

Increase the number of retries on the bulk loading application.

Full Access
Question # 75

A marketing company is developing an application to track responses to email message campaigns. The company needs a database storage solution that is optimized to work with highly connected data. The database needs to limit connections and programmatic access to the data by using IAM policies.

Which solution will meet these requirements?

A.

Amazon ElastiCache for Redis cluster

B.

Amazon Aurora MySQL DB cluster

C.

Amazon DynamoDB table

D.

Amazon Neptune DB cluster

Full Access
Question # 76

A company is running a two-tier ecommerce application in one AWS account. The database tier runs on an Amazon RDS for MySQL Multi-AZ DB instance. A Developer mistakenly deleted the database in the production environment. The database has been restored, but this resulted in hours of downtime and lost revenue.

Which combination of changes in existing IAM policies should a Database Specialist make to prevent an error like this from happening in the future? (Choose three.)

A.

Grant least privilege to groups, users, and roles

B.

Allow all users to restore a database from a backup that will reduce the overall downtime to restore the database

C.

Enable multi-factor authentication for sensitive operations to access sensitive resources and API operations

D.

Use policy conditions to restrict access to selective IP addresses

E.

Use AccessList Controls policy type to restrict users for database instance deletion

F.

Enable AWS CloudTrail logging and Enhanced Monitoring

Full Access
Question # 77

A company is going to use an Amazon Aurora PostgreSQL DB cluster for an application backend. The DB cluster contains some tables with sensitive data. A Database Specialist needs to control the access privileges at the table level.

How can the Database Specialist meet these requirements?

A.

Use AWS IAM database authentication and restrict access to the tables using an IAM policy.

B.

Configure the rules in a NACL to restrict outbound traffic from the Aurora DB cluster.

C.

Execute GRANT and REVOKE commands that restrict access to the tables containing sensitive data.

D.

Define access privileges to the tables containing sensitive data in the pg_hba.conf file.

Full Access
Question # 78

An ecommerce company has tasked a Database Specialist with creating a reporting dashboard that visualizes critical business metrics that will be pulled from the core production database running on Amazon Aurora. Data that is read by the dashboard should be available within 100 milliseconds of an update.

The Database Specialist needs to review the current configuration of the Aurora DB cluster and develop a cost-effective solution. The solution needs to accommodate the unpredictable read workload from the reporting dashboard without any impact on the write availability and performance of the DB cluster.

Which solution meets these requirements?

A.

Turn on the serverless option in the DB cluster so it can automatically scale based on demand.

B.

Provision a clone of the existing DB cluster for the new Application team.

C.

Create a separate DB cluster for the new workload, refresh from the source DB cluster, and set up ongoing replication using AWS DMS change data capture (CDC).

D.

Add an automatic scaling policy to the DB cluster to add Aurora Replicas to the cluster based on CPU consumption.

Full Access
Question # 79

A database specialist at a large multi-national financial company is in charge of designing the disaster recovery strategy for a highly available application that is in development. The application uses an Amazon DynamoDB table as its data store. The application requires a recovery time objective (RTO) of 1 minute and a recovery point objective (RPO) of 2 minutes.

Which operationally efficient disaster recovery strategy should the database specialist recommend for the DynamoDB table?

A.

Create a DynamoDB stream that is processed by an AWS Lambda function that copies the data to a DynamoDB table in another Region.

B.

Use a DynamoDB global table replica in another Region. Enable point-in-time recovery for both tables.

C.

Use a DynamoDB Accelerator table in another Region. Enable point-in-time recovery for the table.

D.

Create an AWS Backup plan and assign the DynamoDB table as a resource.

Full Access
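
A global table replica with point-in-time recovery (option B) can be configured with two calls; a minimal boto3 sketch with a hypothetical table name (global tables version 2019.11.21 also requires DynamoDB Streams with new-and-old images on the table):

    import boto3

    dynamodb = boto3.client("dynamodb")

    # Add a replica of the table in a second Region.
    dynamodb.update_table(
        TableName="app-data",   # hypothetical table name
        ReplicaUpdates=[{"Create": {"RegionName": "us-west-2"}}],
    )

    # Enable point-in-time recovery; repeat against the replica Region as well.
    dynamodb.update_continuous_backups(
        TableName="app-data",
        PointInTimeRecoverySpecification={"PointInTimeRecoveryEnabled": True},
    )
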
Question # 80

A company is running a website on Amazon EC2 instances deployed in multiple Availability Zones (AZs). The site performs a high number of repetitive reads and writes each second on an Amazon RDS for MySQL Multi- AZ DB instance with General Purpose SSD (gp2) storage. After comprehensive testing and analysis, a database specialist discovers that there is high read latency and high CPU utilization on the DB instance.

Which approach should the database specialist take to resolve this issue without changing the application?

A.

Implement sharding to distribute the load across multiple RDS for MySQL databases.

B.

Use the same RDS for MySQL instance class with Provisioned IOPS (PIOPS) storage.

C.

Add an RDS for MySQL read replica.

D.

Modify the RDS for MySQL database class to a bigger size and implement Provisioned IOPS (PIOPS).

Full Access
Question # 81

An ecommerce company uses a backend application that stores data in an Amazon DynamoDB table. The backend application runs in a private subnet in a VPC and must connect to this table.

The company must minimize any network latency that results from network connectivity issues, even during periods of heavy application usage. A database administrator also needs the ability to use a private connection to connect to the DynamoDB table from the application.

Which solution will meet these requirements?

A.

Use network ACLs to ensure that any outgoing or incoming connections to any port except DynamoDB are deactivated. Encrypt API calls by using TLS.

B.

Create a VPC endpoint for DynamoDB in the application's VPC. Use the VPC endpoint to access the table.

C.

Create an AWS Lambda function that has access to DynamoDB. Restrict outgoing access only to this Lambda function from the application.

D.

Use a VPN to route all communication to DynamoDB through the company's own corporate network infrastructure.

Full Access
Question # 82

A company is using an Amazon Aurora PostgreSQL DB cluster for the backend of its mobile application. The application is running continuously and a database specialist is satisfied with high availability and fast failover, but is concerned about performance degradation after failover.

How can the database specialist minimize the performance degradation after failover?

A.

Enable cluster cache management for the Aurora DB cluster and set the promotion priority for the writer DB instance and replica to tier-0

B.

Enable cluster cache management for the Aurora DB cluster and set the promotion priority for the writer DB instance and replica to tier-1

C.

Enable Query Plan Management for the Aurora DB cluster and perform a manual plan capture

D.

Enable Query Plan Management for the Aurora DB cluster and force the query optimizer to use the desired plan

Full Access
Question # 83

A company has applications running on Amazon EC2 instances in a private subnet with no internet connectivity. The company deployed a new application that uses Amazon DynamoDB, but the application cannot connect to the DynamoDB tables. A developer already checked that all permissions are set correctly.

What should a database specialist do to resolve this issue while minimizing access to external resources?

A.

Add a route to an internet gateway in the subnet’s route table.

B.

Add a route to a NAT gateway in the subnet’s route table.

C.

Assign a new security group to the EC2 instances with an outbound rule to ports 80 and 443.

D.

Create a VPC endpoint for DynamoDB and add a route to the endpoint in the subnet’s route table.

Full Access
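
A gateway VPC endpoint for DynamoDB (option D) adds a route so traffic never leaves the AWS network; a minimal boto3 sketch with placeholder VPC and route table IDs:

    import boto3

    ec2 = boto3.client("ec2")

    # A gateway endpoint routes DynamoDB traffic over the AWS network,
    # so the private subnet needs no internet or NAT gateway.
    ec2.create_vpc_endpoint(
        VpcEndpointType="Gateway",
        VpcId="vpc-0123456789abcdef0",                   # placeholder VPC ID
        ServiceName="com.amazonaws.us-east-1.dynamodb",  # match the VPC's Region
        RouteTableIds=["rtb-0123456789abcdef0"],         # placeholder route table
    )
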
Question # 84

A database specialist is responsible for an Amazon RDS for MySQL DB instance with one read replica. The DB instance and the read replica are assigned to the default parameter group. The database team currently runs test queries against a read replica. The database team wants to create additional tables in the read replica that will only be accessible from the read replica to benefit the tests.

Which should the database specialist do to allow the database team to create the test tables?

A.

Contact AWS Support to disable read-only mode on the read replica. Reboot the read replica. Connect to the read replica and create the tables.

B.

Change the read_only parameter to false (read_only=0) in the default parameter group of the read replica. Perform a reboot without failover. Connect to the read replica and create the tables using the local_only MySQL option.

C.

Change the read_only parameter to false (read_only=0) in the default parameter group. Reboot the read replica. Connect to the read replica and create the tables.

D.

Create a new DB parameter group. Change the read_only parameter to false (read_only=0). Associate the read replica with the new group. Reboot the read replica. Connect to the read replica and create the tables.

Full Access
Question # 85

A gaming company uses Amazon Aurora Serverless for one of its internal applications. The company's developers use Amazon RDS Data API to work with the Aurora Serverless DB cluster. After a recent security review, the company is mandating security enhancements. A database specialist must ensure that access to RDS Data API is private and never passes through the public internet.

What should the database specialist do to meet this requirement?

A.

Modify the Aurora Serverless cluster by selecting a VPC with private subnets.

B.

Modify the Aurora Serverless cluster by unchecking the publicly accessible option.

C.

Create an interface VPC endpoint that uses AWS PrivateLink for RDS Data API.

D.

Create a gateway VPC endpoint for RDS Data API.

Full Access
Question # 86

A Database Specialist is working with a company to launch a new website built on Amazon Aurora with several Aurora Replicas. This new website will replace an on-premises website connected to a legacy relational database. Due to stability issues in the legacy database, the company would like to test the resiliency of Aurora.

Which action can the Database Specialist take to test the resiliency of the Aurora DB cluster?

A.

Stop the DB cluster and analyze how the website responds

B.

Use Aurora fault injection to crash the master DB instance

C.

Remove the DB cluster endpoint to simulate a master DB instance failure

D.

Use Aurora Backtrack to crash the DB cluster

Full Access
Question # 87

A significant automotive manufacturer is switching a mission-critical finance application's database to Amazon DynamoDB. According to the company's risk and compliance policy, any update to the database must be documented as a log entry for auditing purposes. The system is expected to generate about 500,000 log entries each minute. Log entries should be kept in Apache Parquet files in batches of at least 100,000 records per file.

How can a database professional meet these needs while using DynamoDB?

A.

Enable Amazon DynamoDB Streams on the table. Create an AWS Lambda function triggered by the stream. Write the log entries to an Amazon S3 object.

B.

Create a backup plan in AWS Backup to back up the DynamoDB table once a day. Create an AWS Lambda function that restores the backup in another table and compares both tables for changes. Generate the log entries and write them to an Amazon S3 object.

C.

Enable AWS CloudTrail logs on the table. Create an AWS Lambda function that reads the log files once an hour and filters DynamoDB API actions. Write the filtered log files to Amazon S3.

D.

Enable Amazon DynamoDB Streams on the table. Create an AWS Lambda function triggered by the stream. Write the log entries to an Amazon Kinesis Data Firehose delivery stream with buffering and Amazon S3 as the destination.

Full Access
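
The Streams-to-Lambda-to-Firehose pipeline (option D) could use a handler like the following; a minimal sketch, assuming a hypothetical delivery stream named ddb-audit-logs that is configured with Parquet record-format conversion and large buffering hints:

    import json
    import boto3

    firehose = boto3.client("firehose")

    def handler(event, context):
        # Each stream record carries the item images of one table update.
        records = [
            {"Data": (json.dumps(r["dynamodb"]) + "\n").encode()}
            for r in event["Records"]
        ]
        # Firehose buffers the entries and, with record-format conversion
        # configured, writes them to S3 as large Parquet batches.
        # Note: put_record_batch accepts at most 500 records per call.
        firehose.put_record_batch(
            DeliveryStreamName="ddb-audit-logs",   # hypothetical stream name
            Records=records,
        )
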
Question # 88

An ecommerce company is running Amazon RDS for Microsoft SQL Server. The company is planning to perform testing in a development environment with production data. The development environment and the production environment are in separate AWS accounts. Both environments use AWS Key Management Service (AWS KMS) encrypted databases with both manual and automated snapshots. A database specialist needs to share a KMS encrypted production RDS snapshot with the development account.

Which combination of steps should the database specialist take to meet these requirements? (Select THREE.)

A.

Create an automated snapshot. Share the snapshot from the production account to the development account.

B.

Create a manual snapshot. Share the snapshot from the production account to the development account.

C.

Share the snapshot that is encrypted by using the development account default KMS encryption key.

D.

Share the snapshot that is encrypted by using the production account custom KMS encryption key.

E.

Allow the development account to access the production account KMS encryption key.

F.

Allow the production account to access the development account KMS encryption key.

Full Access
Question # 89

A database specialist is designing the database for a software-as-a-service (SaaS) version of an employee information application. In the current architecture, the change history of employee records is stored in a single table in an Amazon RDS for Oracle database. Triggers on the employee table populate the history table with historical records.

This architecture has two major challenges. First, there is no way to guarantee that the records have not been changed in the history table. Second, queries on the history table are slow because of the large size of the table and the need to run the queries against a large subset of data in the table.

The database specialist must design a solution that prevents modification of the historical records. The solution also must maximize the speed of the queries.

Which solution will meet these requirements?

A.

Migrate the current solution to an Amazon DynamoDB table. Use DynamoDB Streams to keep track of changes. Use DynamoDB Accelerator (DAX) to improve query performance.

B.

Write employee record history to Amazon Quantum Ledger Database (Amazon QLDB) for historical records and to an Amazon OpenSearch Service domain for queries.

C.

Use Amazon Aurora PostgreSQL to store employee record history in a single table. Use Aurora Auto Scaling to provision more capacity.

D.

Build a solution that uses an Amazon Redshift cluster for historical records. Query the Redshift cluster directly as needed.

Full Access
Question # 90

The Security team for a finance company was notified of an internal security breach that happened 3 weeks ago. A Database Specialist must start producing audit logs out of the production Amazon Aurora PostgreSQL cluster for the Security team to use for monitoring and alerting. The Security team is required to perform real-time alerting and monitoring outside the Aurora DB cluster and wants to have the cluster push encrypted files to the chosen solution.

Which approach will meet these requirements?

A.

Use pg_audit to generate audit logs and send the logs to the Security team.

B.

Use AWS CloudTrail to audit the DB cluster and the Security team will get data from Amazon S3.

C.

Set up database activity streams and connect the data stream from Amazon Kinesis to consumer applications.

D.

Turn on verbose logging and set up a schedule for the logs to be dumped out for the Security team.

Full Access
Question # 91

A company uses Amazon Aurora MySQL as the primary database engine for many of its applications. A database specialist must create a dashboard to provide the company with information about user connections to databases. According to compliance requirements, the company must retain all connection logs for at least 7 years.

Which solution will meet these requirements MOST cost-effectively?

A.

Enable advanced auditing on the Aurora cluster to log CONNECT events. Export audit logs from Amazon CloudWatch to Amazon S3 by using an AWS Lambda function that is invoked by an Amazon EventBridge (Amazon CloudWatch Events) scheduled event. Build a dashboard by using Amazon QuickSight.

B.

Capture connection attempts to the Aurora cluster with AWS CloudTrail by using the DescribeEvents API operation. Create a CloudTrail trail to export connection logs to Amazon S3. Build a dashboard by using Amazon QuickSight.

C.

Start a database activity stream for the Aurora cluster. Push the activity records to an Amazon Kinesis data stream. Build a dynamic dashboard by using AWS Lambda.

D.

Publish the DatabaseConnections metric for the Aurora DB instances to Amazon CloudWatch. Build a dashboard by using CloudWatch dashboards.

Full Access
Question # 92

A company wants to improve its ecommerce website on AWS. A database specialist decides to add Amazon ElastiCache for Redis in the implementation stack to ease the workload off the database and shorten the website response times. The database specialist must also ensure the ecommerce website is highly available within the company's AWS Region.

How should the database specialist deploy ElastiCache to meet this requirement?

A.

Launch an ElastiCache for Redis cluster using the AWS CLI with the --cluster-enabled switch.

B.

Launch an ElastiCache for Redis cluster and select read replicas in different Availability Zones.

C.

Launch two ElastiCache for Redis clusters in two different Availability Zones. Configure Redis streams to replicate the cache from the primary cluster to another.

D.

Launch an ElastiCache cluster in the primary Availability Zone and restore the cluster's snapshot to a different Availability Zone during disaster recovery.

Full Access
Question # 93

A company is running an Amazon RDS for MySQL Multi-AZ DB instance for a business-critical workload. RDS encryption for the DB instance is disabled. A recent security audit concluded that all business-critical applications must encrypt data at rest. The company has asked its database specialist to formulate a plan to accomplish this for the DB instance.

Which process should the database specialist recommend?

A.

Create an encrypted snapshot of the unencrypted DB instance. Copy the encrypted snapshot to Amazon S3. Restore the DB instance from the encrypted snapshot using Amazon S3.

B.

Create a new RDS for MySQL DB instance with encryption enabled. Restore the unencrypted snapshot to this DB instance.

C.

Create a snapshot of the unencrypted DB instance. Create an encrypted copy of the snapshot. Restore the DB instance from the encrypted snapshot.

D.

Temporarily shut down the unencrypted DB instance. Enable AWS KMS encryption in the AWS Management Console using an AWS managed CMK. Restart the DB instance in an encrypted state.

Full Access
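
The snapshot-copy path (option C) is snapshot, encrypted copy, restore; a minimal boto3 sketch with hypothetical identifiers (KmsKeyId here uses the AWS managed RDS key as an example):

    import boto3

    rds = boto3.client("rds")

    # Snapshot the unencrypted instance.
    rds.create_db_snapshot(
        DBInstanceIdentifier="prod-mysql",          # hypothetical instance
        DBSnapshotIdentifier="prod-mysql-plain",
    )
    rds.get_waiter("db_snapshot_available").wait(
        DBSnapshotIdentifier="prod-mysql-plain")

    # Copying with a KMS key produces an encrypted snapshot.
    rds.copy_db_snapshot(
        SourceDBSnapshotIdentifier="prod-mysql-plain",
        TargetDBSnapshotIdentifier="prod-mysql-encrypted",
        KmsKeyId="alias/aws/rds",                   # example key
    )
    rds.get_waiter("db_snapshot_available").wait(
        DBSnapshotIdentifier="prod-mysql-encrypted")

    # Restore a new, encrypted instance from the encrypted snapshot.
    rds.restore_db_instance_from_db_snapshot(
        DBInstanceIdentifier="prod-mysql-enc",
        DBSnapshotIdentifier="prod-mysql-encrypted",
    )
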
Question # 94

A company’s Security department established new requirements that state internal users must connect to an existing Amazon RDS for SQL Server DB instance using their corporate Active Directory (AD) credentials. A Database Specialist must make the modifications needed to fulfill this requirement.

Which combination of actions should the Database Specialist take? (Choose three.)

A.

Disable Transparent Data Encryption (TDE) on the RDS SQL Server DB instance.

B.

Modify the RDS SQL Server DB instance to use the directory for Windows authentication. Create appropriate new logins.

C.

Use the AWS Management Console to create an AWS Managed Microsoft AD. Create a trust relationship with the corporate AD.

D.

Stop the RDS SQL Server DB instance, modify it to use the directory for Windows authentication, and start it again. Create appropriate new logins.

E.

Use the AWS Management Console to create an AD Connector. Create a trust relationship with the corporate AD.

F.

Configure the AWS Managed Microsoft AD domain controller Security Group.

Full Access
Question # 95

A social media company is using Amazon DynamoDB to store user profile data and user activity data. Developers are reading and writing the data, causing the size of the tables to grow significantly. Developers have started to face performance bottlenecks with the tables.

Which solution should a database specialist recommend to read items the FASTEST without consuming all the provisioned throughput for the tables?

A.

Use the Scan API operation in parallel with many workers to read all the items. Use the Query API operation to read multiple items that have a specific partition key and sort key. Use the GetItem API operation to read a single item.

B.

Use the Scan API operation with a filter expression that allows multiple items to be read. Use the Query API operation to read multiple items that have a specific partition key and sort key. Use the GetItem API operation to read a single item.

C.

Use the Scan API operation with a filter expression that allows multiple items to be read. Use the Query API operation to read a single item that has a specific primary key. Use the BatchGetItem API operation to read multiple items.

D.

Use the Scan API operation in parallel with many workers to read all the items. Use the Query API operation to read a single item that has a specific primary key. Use the BatchGetItem API operation to read multiple items.

Full Access
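
For comparison, the three read APIs named in the options look like this in boto3; a minimal sketch assuming two hypothetical tables, one keyed on user_id alone and one on user_id plus a ts sort key:

    import boto3
    from boto3.dynamodb.conditions import Key

    dynamodb = boto3.resource("dynamodb")
    profiles = dynamodb.Table("user-profile")    # hypothetical, keyed on user_id
    activity = dynamodb.Table("user-activity")   # hypothetical, user_id + ts sort key

    # GetItem: one item by its full primary key.
    profile = profiles.get_item(Key={"user_id": "u-100"}).get("Item")

    # Query: items sharing a partition key, optionally narrowed by sort key.
    recent = activity.query(
        KeyConditionExpression=Key("user_id").eq("u-100") & Key("ts").gt("2023-01-01")
    )["Items"]

    # BatchGetItem: up to 100 specific items in one round trip.
    batch = dynamodb.batch_get_item(
        RequestItems={"user-profile": {"Keys": [{"user_id": "u-100"},
                                                {"user_id": "u-101"}]}}
    )["Responses"]["user-profile"]
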