
Databricks Certified Associate Developer for Apache Spark 3.0 - Exam Questions

Question # 9

Which of the following is not a feature of Adaptive Query Execution?

A.

Replace a sort merge join with a broadcast join, where appropriate.

B.

Coalesce partitions to accelerate data processing.

C.

Split skewed partitions into smaller partitions to avoid differences in partition processing time.

D.

Reroute a query in case of an executor failure.

E.

Collect runtime statistics during query execution.

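For reference, the AQE capabilities named in options A, B, C, and E can be toggled through Spark configuration; recovery from executor failures, by contrast, is handled by the scheduler's task retry mechanism, not by AQE. A minimal sketch, assuming Spark 3.x property names:

from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .appName("aqe-demo")  # hypothetical application name
    # Master switch for Adaptive Query Execution
    .config("spark.sql.adaptive.enabled", "true")
    # Coalesce small shuffle partitions at runtime
    .config("spark.sql.adaptive.coalescePartitions.enabled", "true")
    # Split skewed partitions so tasks finish in similar time
    .config("spark.sql.adaptive.skewJoin.enabled", "true")
    .getOrCreate()
)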
Question # 10

The code block shown below should return a copy of DataFrame transactionsDf with an added column cos. This column should contain the cosine of the values in column value after they have been converted to degrees, rounded to two decimals. Choose the answer that correctly fills the blanks in the code block to accomplish this.

Code block:

transactionsDf.__1__(__2__, round(__3__(__4__(__5__)),2))

A.

1. withColumn

2. col("cos")

3. cos

4. degrees

5. transactionsDf.value

B.

1. withColumnRenamed

2. "cos"

3. cos

4. degrees

5. "transactionsDf.value"

C.

1. withColumn

2. "cos"

3. cos

4. degrees

5. transactionsDf.value

D.

1. withColumn

2. col("cos")

3. cos

4. degrees

5. col("value")

E.

1. withColumn

2. "cos"

3. degrees

4. cos

5. col("value")

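As a sanity check, here is a sketch of the completed code block, assuming transactionsDf has a numeric column value; cos, degrees, and round all come from pyspark.sql.functions:

from pyspark.sql.functions import cos, degrees, round

# Add column "cos": convert the values in "value" to degrees,
# take the cosine, and round the result to two decimals.
transactionsDf = transactionsDf.withColumn(
    "cos", round(cos(degrees(transactionsDf.value)), 2)
)

Note that withColumn expects the new column's name as a plain string for its first argument, and withColumnRenamed only renames an existing column without computing new values.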
Question # 11

Which of the following describes Spark's Adaptive Query Execution?

A.

Adaptive Query Execution features include dynamically coalescing shuffle partitions, dynamically injecting scan filters, and dynamically optimizing skew joins.

B.

Adaptive Query Execution is enabled in Spark by default.

C.

Adaptive Query Execution reoptimizes queries at execution points.

D.

Adaptive Query Execution features are dynamically switching join strategies and dynamically optimizing skew joins.

E.

Adaptive Query Execution applies to all kinds of queries.

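One way to check the AQE default on your own cluster is to read the configuration at runtime; spark.sql.adaptive.enabled defaults to false in Spark 3.0 and 3.1 and only became true by default in Spark 3.2:

# Returns "false" on a stock Spark 3.0 session
print(spark.conf.get("spark.sql.adaptive.enabled"))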
Question # 12

Which of the following code blocks silently writes DataFrame itemsDf in avro format to location fileLocation if a file does not yet exist at that location?

A.

itemsDf.write.avro(fileLocation)

B.

itemsDf.write.format("avro").mode("ignore").save(fileLocation)

C.

itemsDf.write.format("avro").mode("errorifexists").save(fileLocation)

D.

itemsDf.save.format("avro").mode("ignore").write(fileLocation)

E.

spark.DataFrameWriter(itemsDf).format("avro").write(fileLocation)

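For context, DataFrameWriter accepts the save mode as a string: "ignore" silently skips the write when data already exists at the target, while the default "errorifexists" raises an error. A minimal sketch (note that the Avro source ships as the external spark-avro package, which must be on the classpath for format("avro") to resolve):

# Writes only if nothing exists at fileLocation yet; otherwise a silent no-op.
itemsDf.write.format("avro").mode("ignore").save(fileLocation)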
Question # 13

The code block displayed below contains an error. The code block should return DataFrame transactionsDf, but with the column storeId renamed to storeNumber. Find the error.

Code block:

transactionsDf.withColumn("storeNumber", "storeId")

A.

Instead of withColumn, the withColumnRenamed method should be used.

B.

Arguments "storeNumber" and "storeId" each need to be wrapped in a col() operator.

C.

Argument "storeId" should be the first and argument "storeNumber" should be the second argument to the withColumn method.

D.

The withColumn operator should be replaced with the copyDataFrame operator.

E.

Instead of withColumn, the withColumnRenamed method should be used and argument "storeId" should be the first and argument "storeNumber" should be the second argument to that method.

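For reference, withColumnRenamed takes the existing column name first and the new name second; neither argument is evaluated as a column expression, so no col() wrapping is needed. A sketch of the corrected block:

# Rename storeId to storeNumber without touching any values.
transactionsDf = transactionsDf.withColumnRenamed("storeId", "storeNumber")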
Question # 14

The code block displayed below contains an error. The code block should return a DataFrame where all entries in column supplier contain the letter combination et in this order. Find the error.

Code block:

itemsDf.filter(Column('supplier').isin('et'))

A.

The Column operator should be replaced by the col operator and instead of isin, contains should be used.

B.

The expression inside the filter parenthesis is malformed and should be replaced by isin('et', 'supplier').

C.

Instead of isin, it should be checked whether column supplier contains the letters et, so isin should be replaced with contains. In addition, the column should be accessed using col['supplier'].

D.

The expression only returns a single column and filter should be replaced by select.

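For reference, substring matching on a column is done with Column.contains, and columns are normally referenced via the col function rather than the Column constructor; isin tests whole-value membership in a list, not substrings. A sketch of the corrected block:

from pyspark.sql.functions import col

# Keep rows whose supplier string contains the substring "et".
itemsDf.filter(col("supplier").contains("et"))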
Question # 15

Which of the following code blocks shows the structure of a DataFrame in a tree-like way, containing both column names and types?

A.

print(itemsDf.columns)

print(itemsDf.types)

B.

itemsDf.printSchema()

C.

spark.schema(itemsDf)

D.

itemsDf.rdd.printSchema()

E.

itemsDf.print.schema()

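For reference, printSchema renders the DataFrame's schema as an indented tree showing names, types, and nullability. A sketch with hypothetical columns itemId and supplier:

itemsDf.printSchema()
# root
#  |-- itemId: integer (nullable = true)
#  |-- supplier: string (nullable = true)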
Question # 16

Which of the following code blocks applies the Python function to_limit on column predError in table transactionsDf, returning a DataFrame with columns transactionId and result?

A.

spark.udf.register("LIMIT_FCN", to_limit)
spark.sql("SELECT transactionId, LIMIT_FCN(predError) AS result FROM transactionsDf")

(Correct)

B.

spark.udf.register("LIMIT_FCN", to_limit)
spark.sql("SELECT transactionId, LIMIT_FCN(predError) FROM transactionsDf AS result")

C.

spark.udf.register("LIMIT_FCN", to_limit)
spark.sql("SELECT transactionId, to_limit(predError) AS result FROM transactionsDf")

D.

spark.sql("SELECT transactionId, udf(to_limit(predError)) AS result FROM transactionsDf")

E.

spark.udf.register(to_limit, "LIMIT_FCN")
spark.sql("SELECT transactionId, LIMIT_FCN(predError) AS result FROM transactionsDf")

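For reference, a runnable sketch of the registration pattern: spark.udf.register takes the SQL-visible name first, then the Python callable, optionally followed by a return type. The to_limit body below is hypothetical, and transactionsDf is assumed to already be available as a SQL table or temp view:

from pyspark.sql.types import IntegerType

# Hypothetical implementation of to_limit, for illustration only.
def to_limit(pred_error):
    return min(pred_error, 10) if pred_error is not None else None

# Register the Python function for use in SQL, then call it by its SQL name.
spark.udf.register("LIMIT_FCN", to_limit, IntegerType())
result = spark.sql(
    "SELECT transactionId, LIMIT_FCN(predError) AS result FROM transactionsDf"
)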