New Year Special Sale Limited Time 70% Discount Offer - Ends in 0d 00h 00m 00s - Coupon code: scxmas70

AIF-C01 Exam Dumps - AWS Certified AI Practitioner Exam(AI1-C01)

Go to page:
Question # 17

A company is using a pre-trained large language model (LLM) to build a chatbot for product recommendations. The company needs the LLM outputs to be short and written in a specific language.

Which solution will align the LLM response quality with the company's expectations?

A.

Adjust the prompt.

B.

Choose an LLM of a different size.

C.

Increase the temperature.

D.

Increase the Top K value.

Full Access
Question # 18

A company wants to use a large language model (LLM) to develop a conversational agent. The company needs to prevent the LLM from being manipulated with common prompt engineering techniques to perform undesirable actions or expose sensitive information.

Which action will reduce these risks?

A.

Create a prompt template that teaches the LLM to detect attack patterns.

B.

Increase the temperature parameter on invocation requests to the LLM.

C.

Avoid using LLMs that are not listed in Amazon SageMaker.

D.

Decrease the number of input tokens on invocations of the LLM.

Full Access
Question # 19

A company wants to display the total sales for its top-selling products across various retail locations in the past 12 months.

Which AWS solution should the company use to automate the generation of graphs?

A.

Amazon Q in Amazon EC2

B.

Amazon Q Developer

C.

Amazon Q in Amazon QuickSight

D.

Amazon Q in AWS Chatbot

Full Access
Question # 20

An AI practitioner is using an Amazon Bedrock base model to summarize session chats from the customer service department. The AI practitioner wants to store invocation logs to monitor model input and output data.

Which strategy should the AI practitioner use?

A.

Configure AWS CloudTrail as the logs destination for the model.

B.

Enable invocation logging in Amazon Bedrock.

C.

Configure AWS Audit Manager as the logs destination for the model.

D.

Configure model invocation logging in Amazon EventBridge.

Full Access
Question # 21

How can companies use large language models (LLMs) securely on Amazon Bedrock?

A.

Design clear and specific prompts. Configure AWS Identity and Access Management (IAM) roles and policies by using least privilege access.

B.

Enable AWS Audit Manager for automatic model evaluation jobs.

C.

Enable Amazon Bedrock automatic model evaluation jobs.

D.

Use Amazon CloudWatch Logs to make models explainable and to monitor for bias.

Full Access
Question # 22

A company wants to use a pre-trained generative AI model to generate content for its marketing campaigns. The company needs to ensure that the generated content aligns with the company's brand voice and messaging requirements.

Which solution meets these requirements?

A.

Optimize the model's architecture and hyperparameters to improve the model's overall performance.

B.

Increase the model's complexity by adding more layers to the model's architecture.

C.

Create effective prompts that provide clear instructions and context to guide the model's generation.

D.

Select a large, diverse dataset to pre-train a new generative model.

Full Access
Question # 23

A medical company is customizing a foundation model (FM) for diagnostic purposes. The company needs the model to be transparent and explainable to meet regulatory requirements.

Which solution will meet these requirements?

A.

Configure the security and compliance by using Amazon Inspector.

B.

Generate simple metrics, reports, and examples by using Amazon SageMaker Clarify.

C.

Encrypt and secure training data by using Amazon Macie.

D.

Gather more data. Use Amazon Rekognition to add custom labels to the data.

Full Access
Question # 24

A company wants to assess the costs that are associated with using a large language model (LLM) to generate inferences. The company wants to use Amazon Bedrock to build generative AI applications.

Which factor will drive the inference costs?

A.

Number of tokens consumed

B.

Temperature value

C.

Amount of data used to train the LLM

D.

Total training time

Full Access
Go to page: