
I am trying to use an Aggregator with Apache Spark 2.0 Datasets:

import org.apache.spark.sql.expressions.Aggregator
import org.apache.spark.sql.Encoder
import spark.implicits._

case class C1(f1: String, f2: String, f3: String, f4: String, f5: Double)

val teams = Seq(
  C1("hash1", "NLC", "Cubs", "2016-01-23", 3253.21),
  C1("hash1", "NLC", "Cubs", "2014-01-23", 353.88),
  C1("hash3", "NLW", "Dodgers", "2013-08-15", 4322.12),
  C1("hash4", "NLE", "Red Sox", "2010-03-14", 10283.72)
).toDS()

val c1Agg = new Aggregator[C1, Seq[C1], Seq[C1]] with Serializable {
  def zero: Seq[C1] = Seq.empty[C1] // Nil
  def reduce(b: Seq[C1], a: C1): Seq[C1] = b :+ a
  def merge(b1: Seq[C1], b2: Seq[C1]): Seq[C1] = b1 ++ b2
  def finish(r: Seq[C1]): Seq[C1] = r

  override def bufferEncoder: Encoder[Seq[C1]] = newProductSeqEncoder[C1]
  override def outputEncoder: Encoder[Seq[C1]] = newProductSeqEncoder[C1]
}.toColumn

val g_c1 = teams.groupByKey(_.f1).agg(c1Agg).collect

But when I run it, I get the following error message:

scala.reflect.internal.MissingRequirementError: class lineb4c2bb72bf6e417e9975d1a65602aec912.$read in JavaMirror with sun.misc.Launcher$AppClassLoader@14dad5dc of type class sun.misc.Launcher$AppClassLoader with class path [OMITTED] not found

I assume the configuration is correct, since I am running this on Databricks Community Cloud.
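For context, the contract the Aggregator is expected to satisfy can be walked through with plain Scala collections (a sketch only; `foldLeft` here stands in for what Spark does per partition, and the two-partition split is hypothetical):

```scala
// The same C1 case class as above.
case class C1(f1: String, f2: String, f3: String, f4: String, f5: Double)

object AggregatorContractSketch {
  // The three pieces of the Aggregator, written as plain functions.
  val zero: Seq[C1] = Seq.empty[C1]
  def reduce(b: Seq[C1], a: C1): Seq[C1] = b :+ a
  def merge(b1: Seq[C1], b2: Seq[C1]): Seq[C1] = b1 ++ b2
  def finish(r: Seq[C1]): Seq[C1] = r

  def main(args: Array[String]): Unit = {
    val rows = Seq(
      C1("hash1", "NLC", "Cubs", "2016-01-23", 3253.21),
      C1("hash1", "NLC", "Cubs", "2014-01-23", 353.88)
    )

    // Simulate two partitions: each is reduced into its own buffer,
    // then the partial buffers are merged and finished.
    val partial1 = rows.take(1).foldLeft(zero)(reduce)
    val partial2 = rows.drop(1).foldLeft(zero)(reduce)
    val result = finish(merge(partial1, partial2))

    // All rows for the key end up collected in one sequence.
    assert(result == rows)
  }
}
```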


1 Answer


I was finally able to get this working by using ExpressionEncoder() instead of newProductSeqEncoder[C1] in the bufferEncoder and outputEncoder definitions.

(I don't know why the previous code did not work.)
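For reference, a sketch of the aggregator with only the two encoder lines changed (this assumes the same C1 case class and a Spark 2.0 session; ExpressionEncoder lives in org.apache.spark.sql.catalyst.encoders):

```scala
import org.apache.spark.sql.Encoder
import org.apache.spark.sql.catalyst.encoders.ExpressionEncoder
import org.apache.spark.sql.expressions.Aggregator

// Same aggregator as in the question; only bufferEncoder and
// outputEncoder are swapped from newProductSeqEncoder[C1] to
// ExpressionEncoder[Seq[C1]](), which resolved the
// MissingRequirementError in this case.
val c1Agg = new Aggregator[C1, Seq[C1], Seq[C1]] with Serializable {
  def zero: Seq[C1] = Seq.empty[C1]
  def reduce(b: Seq[C1], a: C1): Seq[C1] = b :+ a
  def merge(b1: Seq[C1], b2: Seq[C1]): Seq[C1] = b1 ++ b2
  def finish(r: Seq[C1]): Seq[C1] = r

  override def bufferEncoder: Encoder[Seq[C1]] = ExpressionEncoder[Seq[C1]]()
  override def outputEncoder: Encoder[Seq[C1]] = ExpressionEncoder[Seq[C1]]()
}.toColumn

val g_c1 = teams.groupByKey(_.f1).agg(c1Agg).collect
```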

Answered 2016-07-16T07:34:11.080