"Today's speech processing technology can achieve a recognition accuracy of over 90% in any case." Which of the following is true about this statement?
A.
This statement is incorrect. The accuracy of speech recognition is high, but not extremely high.
B.
This statement is incorrect. In many situations, noise and background sound have a huge impact on speech recognition accuracy.
C.
This statement is correct. Speech processing can achieve a high level of accuracy.
D.
This statement is correct. Speech processing has a long history and the technology is very mature.
While speech recognition technology has improved significantly, its accuracy is still affected by external factors such as noise, background sound, accents, and speech clarity. Systems can exceed 90% accuracy under controlled conditions, but accuracy drops in noisy or complex real-world environments, so the claim that today's speech processing technology can achieve over 90% recognition accuracy "in any case" is incorrect.
In particular, speech recognition systems still struggle to interpret speech accurately in environments with heavy noise, which is why option B is the correct choice.
[Reference: Huawei HCIA-AI Certification, AI Applications in Speech Processing]
Question # 18
Which of the following statements is false about gradient descent algorithms?
A.
Each time the global gradient updates its weight, all training samples need to be calculated.
B.
When GPUs are used for parallel computing, the mini-batch gradient descent (MBGD) takes less time than the stochastic gradient descent (SGD) to complete an epoch.
C.
The global gradient descent is relatively stable, which helps the model converge to the global extremum.
D.
When there are too many samples and GPUs are not used for parallel computing, the convergence process of the global gradient algorithm is time-consuming.
The statement in option B — that mini-batch gradient descent (MBGD) takes less time than stochastic gradient descent (SGD) to complete an epoch when GPUs are used for parallel computing — is the false one. Here is the reasoning:
Stochastic gradient descent (SGD) updates the weights after each individual training sample. This yields frequent but noisy updates, and an epoch is complete only once every sample has been processed one by one.
Mini-batch gradient descent (MBGD) updates the weights after each small batch of samples. Although MBGD maps well onto GPU parallelism, the claim in option B is not about per-update computation speed but about the time needed to complete an epoch.
Because both methods must pass over all training samples in an epoch, MBGD does not necessarily complete an epoch faster than SGD; it simply performs fewer, larger updates per epoch, whereas SGD updates the weights after every single sample.
Therefore, the false statement is B: MBGD does not always take less time than SGD to complete an epoch, even when GPUs are used for parallel computing.
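To make the per-epoch update schedules concrete, the sketch below is a minimal NumPy illustration (with made-up linear-regression data and hyperparameters, not taken from the HCIA-AI materials). It shows that within one epoch SGD performs one weight update per sample, while MBGD performs one update per batch, with each batch gradient computed as a single vectorized operation.

```python
import numpy as np

# Hypothetical data used only to illustrate the update schedules.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))                       # 1000 samples, 5 features
true_w = np.array([1.0, -2.0, 0.5, 3.0, -1.0])
y = X @ true_w + rng.normal(scale=0.1, size=1000)    # noisy linear targets

def sgd_epoch(w, lr=0.01):
    """One epoch of SGD: one weight update per individual sample (1000 updates)."""
    for i in range(X.shape[0]):
        grad = (X[i] @ w - y[i]) * X[i]              # gradient from a single sample
        w = w - lr * grad
    return w

def mbgd_epoch(w, batch_size=100, lr=0.01):
    """One epoch of MBGD: one weight update per batch (10 updates here).
    Each batch gradient is a vectorized computation a GPU could parallelize."""
    for start in range(0, X.shape[0], batch_size):
        xb, yb = X[start:start + batch_size], y[start:start + batch_size]
        grad = xb.T @ (xb @ w - yb) / batch_size     # averaged gradient over the batch
        w = w - lr * grad
    return w

w = np.zeros(5)
w = sgd_epoch(w)     # many small, noisy updates per epoch
w = mbgd_epoch(w)    # fewer, smoother updates per epoch
```

Both routines pass over all 1000 samples once, which is the sense in which each constitutes an epoch; they differ only in how many updates are made and how much work each update batches together.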
HCIA AI References:
AI Development Framework: Discussion of gradient descent algorithms and their efficiency on different hardware architectures like GPUs.