
H13-311_V3.5 Exam Dumps - HCIA-AI V3.5 Exam

Question # 4

Single-layer perceptrons and logistic regression are linear classifiers that can only process linearly separable data.

A. TRUE
B. FALSE

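For context, a minimal NumPy sketch (editorial, not part of the exam; the weight settings are arbitrary assumptions) showing why the XOR pattern defeats any linear classifier such as a single-layer perceptron or logistic regression:

import numpy as np

# XOR inputs and labels: no single straight line separates the classes.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 1, 1, 0])

# A linear classifier scores points as w . x + b; try a few weight settings.
for w, b in [(np.array([1.0, 1.0]), -0.5),
             (np.array([1.0, -1.0]), 0.0),
             (np.array([-1.0, 1.0]), 0.0)]:
    preds = (X @ w + b > 0).astype(int)
    print(w, b, preds, "accuracy:", (preds == y).mean())
# No linear boundary reaches 100% on XOR, which is why models with a single
# linear decision boundary handle only linearly separable data.
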
Question # 5

AI chips, also called AI accelerators, optimize matrix multiplication.

A. TRUE
B. FALSE

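As background, a small NumPy sketch (illustrative; all sizes are assumptions) showing that a dense layer's forward pass boils down to a matrix multiplication, the workload AI accelerators are built to speed up:

import numpy as np

batch, in_dim, out_dim = 32, 512, 256
x = np.random.randn(batch, in_dim).astype(np.float32)   # input activations
W = np.random.randn(in_dim, out_dim).astype(np.float32) # layer weights
b = np.zeros(out_dim, dtype=np.float32)

# One dense-layer forward pass = one matrix multiplication plus a bias add.
y = x @ W + b
print(y.shape)  # (32, 256)
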
Question # 6

Which of the following is the cornerstone of Huawei's full-stack, all-scenario AI solution, providing modules, boards, and servers powered by the Ascend AI processor to meet customer demand for computing power in all scenarios?

A. Atlas
B. CANN
C. MindSpore
D. ModelArts

Question # 7

Which of the following statements is false about feedforward neural networks?

A. A unidirectional multi-layer structure is adopted. Each layer includes several neurons, and those in the same layer are not connected to each other. Only unidirectional inter-layer information transmission is supported.
B. Nodes at each hidden layer represent neurons that provide the computing function.
C. Input nodes do not provide the computing function and are used to represent only the element values of an input vector.
D. Each neuron is connected to all neurons at the previous layer.

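For reference, a minimal MindSpore sketch (layer sizes are assumptions) of the feedforward structure described above; note that the full connectivity in option D is a property of fully connected networks rather than of every feedforward network:

import mindspore
import mindspore.nn as nn
from mindspore import ops

# Information flows one way, layer by layer; each nn.Dense neuron connects
# to every neuron in the previous layer, and there are no intra-layer links.
net = nn.SequentialCell(
    nn.Dense(4, 8),   # input layer -> hidden layer
    nn.ReLU(),
    nn.Dense(8, 3),   # hidden layer -> output layer
)
x = ops.ones((2, 4), mindspore.float32)  # batch of 2 input vectors
print(net(x).shape)  # (2, 3)
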
Question # 8

Which of the following is the activation function used in the hidden layers of the standard recurrent neural network (RNN) structure?

A. ReLU
B. Softmax
C. Tanh
D. Sigmoid

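As a worked illustration (dimensions and weights are arbitrary assumptions), one step of a vanilla RNN cell, showing where the hidden-layer activation is applied:

import numpy as np

# One step of a vanilla (Elman) RNN cell:
#   h_t = tanh(W_x @ x_t + W_h @ h_prev + b)
in_dim, hid_dim = 3, 5
rng = np.random.default_rng(0)
W_x = rng.standard_normal((hid_dim, in_dim))
W_h = rng.standard_normal((hid_dim, hid_dim))
b = np.zeros(hid_dim)

x_t = rng.standard_normal(in_dim)
h_prev = np.zeros(hid_dim)
h_t = np.tanh(W_x @ x_t + W_h @ h_prev + b)  # tanh keeps h in (-1, 1)
print(h_t)
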
Question # 9

In MindSpore, mindspore.nn.Conv2d() is used to create a convolutional layer. Which of the following values can be passed to this API's "pad_mode" parameter?

A. pad
B. same
C. valid
D. nopadding

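For reference, a short sketch (input sizes are assumptions) exercising the three pad_mode values that mindspore.nn.Conv2d documents, "same", "valid", and "pad":

import mindspore
import mindspore.nn as nn
from mindspore import ops

x = ops.ones((1, 3, 32, 32), mindspore.float32)  # NCHW input (sizes assumed)

# The three accepted pad_mode values and their effect on output size:
same  = nn.Conv2d(3, 8, 3, pad_mode="same")            # 32x32 -> 32x32
valid = nn.Conv2d(3, 8, 3, pad_mode="valid")           # 32x32 -> 30x30
pad   = nn.Conv2d(3, 8, 3, pad_mode="pad", padding=1)  # explicit padding
print(same(x).shape, valid(x).shape, pad(x).shape)
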
Question # 10

All kernels in the same convolutional layer of a convolutional neural network share the same weights.

A. TRUE
B. FALSE

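A quick way to see the weight layout (an editorial sketch; sizes are assumptions): each output channel of a convolutional layer owns its own kernel weights, and sharing happens across spatial positions within a kernel, not between kernels:

import mindspore.nn as nn

# Each kernel (output channel) owns a separate weight slice; weights are
# shared across spatial positions within a kernel, not between kernels.
conv = nn.Conv2d(in_channels=3, out_channels=8, kernel_size=3)
print(conv.weight.shape)  # (8, 3, 3, 3): 8 distinct kernels of shape 3x3x3
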
Question # 11

Which of the following is NOT a key feature that enables all-scenario deployment and collaboration for MindSpore?

A. Data and computing graphs are transmitted to Ascend AI Processors.
B. Federated meta-learning enables real-time, coordinated model updates between different devices, and across the device and cloud.
C. Unified model IR delivers a consistent deployment experience.
D. Graph optimization based on software-hardware synergy shields the differences between scenarios.

Question # 12

In a hyperparameter-based search, the hyperparameters of a model are searched based on the data and the model's performance metrics.

A. TRUE
B. FALSE

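As an illustration, a bare-bones grid search in plain Python; train_and_evaluate is a hypothetical placeholder standing in for training on the data and returning a validation metric:

# A minimal grid-search sketch; train_and_evaluate is a hypothetical
# placeholder that trains on the data and returns a validation score.
def train_and_evaluate(learning_rate, batch_size):
    # ... train a model, return validation accuracy (stubbed here) ...
    return 1.0 - abs(learning_rate - 0.01) - batch_size / 10000

best = None
for lr in [0.1, 0.01, 0.001]:   # candidate hyperparameter values
    for bs in [16, 32, 64]:
        score = train_and_evaluate(lr, bs)
        if best is None or score > best[0]:
            best = (score, lr, bs)
print("best (score, lr, batch_size):", best)
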
Question # 13

When you use MindSpore to execute the following code, which of the following is the output?

from mindspore import ops
import mindspore

shape = (2, 2)
ones = ops.Ones()
output = ones(shape, mindspore.float32)
print(output)

A. [[1 1]
   [1 1]]
B. [[1. 1.]
   [1. 1.]]
C. 1
D. [[1. 1.
   1. 1.]]

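As a side note, recent MindSpore releases also expose a functional form that produces the same tensor (version availability is an assumption):

import mindspore
from mindspore import ops

# Functional equivalent of the Ones primitive above.
print(ops.ones((2, 2), mindspore.float32))
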
Question # 14

Which of the following are feedforward neural networks?

A. Fully-connected neural networks
B. Recurrent neural networks
C. Boltzmann machines
D. Convolutional neural networks

Question # 15

When using MindSpore to construct a neural network, you can inherit the Cell class and override the __init__ and construct methods.

A. TRUE
B. FALSE

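For reference, a minimal sketch (layer sizes are assumptions) of the pattern the question describes: subclass nn.Cell, declare layers in __init__, and define the forward computation in construct:

import mindspore
import mindspore.nn as nn
from mindspore import ops

class Net(nn.Cell):
    def __init__(self):
        super().__init__()          # initialize the Cell base class
        self.fc1 = nn.Dense(4, 8)   # layers are declared in __init__
        self.relu = nn.ReLU()
        self.fc2 = nn.Dense(8, 3)

    def construct(self, x):         # forward computation goes here
        return self.fc2(self.relu(self.fc1(x)))

net = Net()
print(net(ops.ones((2, 4), mindspore.float32)).shape)  # (2, 3)
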
Question # 16

Convolutional neural networks (CNNs) cannot be used to process text data.

A. TRUE
B. FALSE

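As background, a TextCNN-style sketch (all sizes are assumptions) showing a 1-D convolution sliding over a sequence of token embeddings, which is how CNNs are commonly applied to text:

import mindspore
import mindspore.nn as nn
from mindspore import ops

# A 1-D convolution slides over a sequence of token embeddings,
# so convolutional layers can process text as well as images.
batch, embed_dim, seq_len = 2, 16, 10
x = ops.ones((batch, embed_dim, seq_len), mindspore.float32)  # (N, C, L)
conv = nn.Conv1d(embed_dim, 32, kernel_size=3, pad_mode="same")
print(conv(x).shape)  # (2, 32, 10)
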
Question # 17

"Today's speech processing technology can achieve a recognition accuracy of over 90% in any case." Which of the following is true about this statement?

A. This statement is incorrect. The accuracy of speech recognition is high, but not extremely high.
B. This statement is incorrect. In many situations, noise and background sound have a huge impact on speech recognition accuracy.
C. This statement is correct. Speech processing can achieve a high level of accuracy.
D. This statement is correct. Speech processing has a long history and the technology is very mature.

Question # 18

Which of the following statements is false about gradient descent algorithms?

A. Each time global gradient descent updates the weights, all training samples need to be computed.
B. When GPUs are used for parallel computing, mini-batch gradient descent (MBGD) takes less time than stochastic gradient descent (SGD) to complete an epoch.
C. Global gradient descent is relatively stable, which helps the model converge to the global extremum.
D. When there are too many samples and GPUs are not used for parallel computing, the convergence process of the global gradient algorithm is time-consuming.

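As a worked illustration (data and learning rate are arbitrary assumptions), a toy least-squares problem contrasting the three update styles the question refers to:

import numpy as np

# Toy least-squares problem: minimize ||X w - y||^2 (data is illustrative).
rng = np.random.default_rng(0)
X = rng.standard_normal((100, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + 0.1 * rng.standard_normal(100)

def grad(w, Xb, yb):
    return 2 * Xb.T @ (Xb @ w - yb) / len(yb)

w = np.zeros(3)
lr = 0.1
for epoch in range(50):
    # Global (batch) gradient descent: every sample per update -> stable.
    #   w -= lr * grad(w, X, y)
    # SGD: one sample per update -> cheap per step but noisy.
    #   i = rng.integers(len(y)); w -= lr * grad(w, X[i:i+1], y[i:i+1])
    # Mini-batch GD: a small batch per update -> parallelizes well on GPUs.
    idx = rng.choice(len(y), size=16, replace=False)
    w -= lr * grad(w, X[idx], y[idx])
print("estimated weights:", w)
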