72 commits
6a514f1
add word2vec transformer
yinxusen Apr 19, 2015
02767fb
add shared params
yinxusen Apr 20, 2015
fe3afe9
add test suite and pass it
yinxusen Apr 20, 2015
e29680a
fix errors
yinxusen Apr 20, 2015
618abd0
refine comments
yinxusen Apr 20, 2015
7cde18f
rm sharedParams
yinxusen Apr 22, 2015
6b97ec8
fix code style and refine the transform function of word2vec
yinxusen Apr 22, 2015
1211e86
ensure the functionality
yinxusen Apr 22, 2015
b54399f
fix comments and code style
yinxusen Apr 22, 2015
566ec20
fix scala style
yinxusen Apr 23, 2015
66e7bd3
change foldLeft to for loop and use blas
yinxusen Apr 24, 2015
e8cfaf7
fix conflict with master
yinxusen Apr 26, 2015
23d77fa
merge with #5626
yinxusen Apr 26, 2015
ca55dc9
[SPARK-7152][SQL] Add a Column expression for partition ID.
rxin Apr 26, 2015
d188b8b
[SQL][Minor] rename DataTypeParser.apply to DataTypeParser.parse
scwf Apr 27, 2015
82bb7fd
[SPARK-6505] [SQL] Remove the reflection call in HiveFunctionWrapper
baishuo Apr 27, 2015
998aac2
[SPARK-4925] Publish Spark SQL hive-thriftserver maven artifact
chernetsov Apr 27, 2015
7078f60
[SPARK-6856] [R] Make RDD information more useful in SparkR
Jeffrharr Apr 27, 2015
ef82bdd
SPARK-7107 Add parameter for zookeeper.znode.parent to hbase_inputfor…
tedyu Apr 27, 2015
ca9f4eb
[SPARK-6991] [SPARKR] Adds support for zipPartitions.
hlin09 Apr 27, 2015
b9de9e0
[SPARK-7103] Fix crash with SparkContext.union when RDD has no partit…
stshe Apr 27, 2015
8e1c00d
[SPARK-6738] [CORE] Improve estimate the size of a large array
shenh062326 Apr 27, 2015
5d45e1f
[SPARK-3090] [CORE] Stop SparkContext if user forgets to.
Apr 27, 2015
ab5adb7
[SPARK-7145] [CORE] commons-lang (2.x) classes used instead of common…
srowen Apr 27, 2015
62888a4
[SPARK-7162] [YARN] Launcher error in yarn-client
witgo Apr 27, 2015
4d9e560
[SPARK-7090] [MLLIB] Introduce LDAOptimizer to LDA to further improve…
hhbyyh Apr 28, 2015
874a2ca
[SPARK-7174][Core] Move calling `TaskScheduler.executorHeartbeatRecei…
zsxwing Apr 28, 2015
29576e7
[SPARK-6829] Added math functions for DataFrames
brkyvz Apr 28, 2015
9e4e82b
[SPARK-5946] [STREAMING] Add Python API for direct Kafka stream
jerryshao Apr 28, 2015
bf35edd
[SPARK-7187] SerializationDebugger should not crash user code
Apr 28, 2015
d94cd1a
[SPARK-7135][SQL] DataFrame expression for monotonically increasing IDs.
rxin Apr 28, 2015
e13cd86
[SPARK-6352] [SQL] Custom parquet output committer
Apr 28, 2015
7f3b3b7
[SPARK-7168] [BUILD] Update plugin versions in Maven build and centra…
srowen Apr 28, 2015
75905c5
[SPARK-7100] [MLLIB] Fix persisted RDD leak in GradientBoostTrees
Apr 28, 2015
268c419
[SPARK-6435] spark-shell --jars option does not add all jars to class…
tsudukim Apr 28, 2015
6a827d5
[SPARK-5253] [ML] LinearRegression with L1/L2 (ElasticNet) using OWLQN
Apr 28, 2015
b14cd23
[SPARK-7140] [MLLIB] only scan the first 16 entries in Vector.hashCode
mengxr Apr 28, 2015
52ccf1d
[Core][test][minor] replace try finally block with tryWithSafeFinally
liyezhang556520 Apr 28, 2015
8aab94d
[SPARK-4286] Add an external shuffle service that can be run as a dae…
dragos Apr 28, 2015
2d222fb
[SPARK-5932] [CORE] Use consistent naming for size properties
Apr 28, 2015
8009810
[SPARK-6314] [CORE] handle JsonParseException for history server
liyezhang556520 Apr 28, 2015
53befac
[SPARK-5338] [MESOS] Add cluster mode support for Mesos
tnachen Apr 28, 2015
28b1af7
[MINOR] [CORE] Warn users who try to cache RDDs with dynamic allocati…
Apr 28, 2015
f0a1f90
[SPARK-7201] [MLLIB] move Identifiable to ml.util
mengxr Apr 28, 2015
555213e
Closes #4807
mengxr Apr 28, 2015
d36e673
[SPARK-6965] [MLLIB] StringIndexer handles numeric input.
mengxr Apr 29, 2015
5c8f4bd
[SPARK-7138] [STREAMING] Add method to BlockGenerator to add multiple…
tdas Apr 29, 2015
a8aeadb
[SPARK-7208] [ML] [PYTHON] Added Matrix, SparseMatrix to __all__ list…
jkbradley Apr 29, 2015
5ef006f
[SPARK-6756] [MLLIB] add toSparse, toDense, numActives, numNonzeros, …
mengxr Apr 29, 2015
271c4c6
[SPARK-7215] made coalesce and repartition a part of the query plan
brkyvz Apr 29, 2015
f98773a
[SPARK-7205] Support `.ivy2/local` and `.m2/repositories/` in --packages
brkyvz Apr 29, 2015
8dee274
MAINTENANCE: Automated closing of pull requests.
pwendell Apr 29, 2015
fe917f5
[SPARK-7188] added python support for math DataFrame functions
brkyvz Apr 29, 2015
1fd6ed9
[SPARK-7204] [SQL] Fix callSite for Dataframe and SQL operations
pwendell Apr 29, 2015
f49284b
[SPARK-7076][SPARK-7077][SPARK-7080][SQL] Use managed memory for aggr…
JoshRosen Apr 29, 2015
baed3f2
[SPARK-6918] [YARN] Secure HBase support.
deanchen Apr 29, 2015
687273d
[SPARK-7223] Rename RPC askWithReply -> askWithRetry, sendWithReply -…
rxin Apr 29, 2015
3df9c5d
Better error message on access to non-existing attribute
ksonj Apr 29, 2015
81ea42b
[SQL][Minor] fix java doc for DataFrame.agg
cloud-fan Apr 29, 2015
c0c0ba6
Fix a typo of "threshold"
yinxusen Apr 29, 2015
c594095
add word2vec transformer
yinxusen Apr 19, 2015
04dde06
add shared params
yinxusen Apr 20, 2015
109d124
add test suite and pass it
yinxusen Apr 20, 2015
34a55c0
fix errors
yinxusen Apr 20, 2015
02848fa
refine comments
yinxusen Apr 20, 2015
a190f2c
fix code style and refine the transform function of word2vec
yinxusen Apr 22, 2015
04c48e9
ensure the functionality
yinxusen Apr 22, 2015
743e0d5
fix comments and code style
yinxusen Apr 22, 2015
5dd4ee7
fix scala style
yinxusen Apr 23, 2015
3bc2cbd
change foldLeft to for loop and use blas
yinxusen Apr 24, 2015
4945462
merge with #5626
yinxusen Apr 26, 2015
ee2b37a
merge with former HEAD
yinxusen Apr 29, 2015
1 change: 1 addition & 0 deletions R/pkg/NAMESPACE
@@ -71,6 +71,7 @@ exportMethods(
"unpersist",
"value",
"values",
"zipPartitions",
"zipRDD",
"zipWithIndex",
"zipWithUniqueId"
51 changes: 51 additions & 0 deletions R/pkg/R/RDD.R
@@ -66,6 +66,11 @@ setMethod("initialize", "RDD", function(.Object, jrdd, serializedMode,
.Object
})

setMethod("show", "RDD",
function(.Object) {
cat(paste(callJMethod(.Object@jrdd, "toString"), "\n", sep=""))
})

setMethod("initialize", "PipelinedRDD", function(.Object, prev, func, jrdd_val) {
.Object@env <- new.env()
.Object@env$isCached <- FALSE
@@ -1590,3 +1595,49 @@ setMethod("intersection",

keys(filterRDD(cogroup(rdd1, rdd2, numPartitions = numPartitions), filterFunction))
})

#' Zips an RDD's partitions with one (or more) RDD(s).
#' Same as zipPartitions in Spark.
#'
#' @param ... RDDs to be zipped.
#' @param func A function to transform zipped partitions.
#' @return A new RDD by applying a function to the zipped partitions.
#' Assumes that all the RDDs have the *same number of partitions*, but
#' does *not* require them to have the same number of elements in each partition.
#' @examples
#'\dontrun{
#' sc <- sparkR.init()
#' rdd1 <- parallelize(sc, 1:2, 2L) # 1, 2
#' rdd2 <- parallelize(sc, 1:4, 2L) # 1:2, 3:4
#' rdd3 <- parallelize(sc, 1:6, 2L) # 1:3, 4:6
#' collect(zipPartitions(rdd1, rdd2, rdd3,
#' func = function(x, y, z) { list(list(x, y, z))} ))
#' # list(list(1, c(1,2), c(1,2,3)), list(2, c(3,4), c(4,5,6)))
#'}
#' @rdname zipRDD
#' @aliases zipPartitions,RDD
setMethod("zipPartitions",
"RDD",
function(..., func) {
rrdds <- list(...)
if (length(rrdds) == 1) {
return(rrdds[[1]])
}
nPart <- sapply(rrdds, numPartitions)
if (length(unique(nPart)) != 1) {
stop("Can only zipPartitions RDDs which have the same number of partitions.")
}

rrdds <- lapply(rrdds, function(rdd) {
mapPartitionsWithIndex(rdd, function(partIndex, part) {
print(length(part))
list(list(partIndex, part))
})
})
union.rdd <- Reduce(unionRDD, rrdds)
zipped.rdd <- values(groupByKey(union.rdd, numPartitions = nPart[1]))
res <- mapPartitions(zipped.rdd, function(plist) {
do.call(func, plist[[1]])
})
res
})
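
For comparison, here is a minimal Scala sketch of the core RDD.zipPartitions API that the SparkR method above mirrors; the app name and the local[2] master are illustrative only and not part of this change:

    import org.apache.spark.{SparkConf, SparkContext}

    object ZipPartitionsSketch {
      def main(args: Array[String]): Unit = {
        val sc = new SparkContext(
          new SparkConf().setAppName("zipPartitions-sketch").setMaster("local[2]"))

        // Same shape as the R example above: three RDDs with two partitions each,
        // but a different number of elements per partition.
        val rdd1 = sc.parallelize(1 to 2, 2)
        val rdd2 = sc.parallelize(1 to 4, 2)
        val rdd3 = sc.parallelize(1 to 6, 2)

        // The zip function receives one iterator per RDD for the same partition index.
        val zipped = rdd1.zipPartitions(rdd2, rdd3) { (it1, it2, it3) =>
          Iterator((it1.toList, it2.toList, it3.toList))
        }

        // Expected: (List(1),List(1, 2),List(1, 2, 3)) and (List(2),List(3, 4),List(4, 5, 6))
        zipped.collect().foreach(println)

        sc.stop()
      }
    }

As in the R version, all inputs must have the same number of partitions, while the number of elements per partition may differ.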
5 changes: 5 additions & 0 deletions R/pkg/R/generics.R
@@ -217,6 +217,11 @@ setGeneric("unpersist", function(x, ...) { standardGeneric("unpersist") })
#' @export
setGeneric("zipRDD", function(x, other) { standardGeneric("zipRDD") })

#' @rdname zipRDD
#' @export
setGeneric("zipPartitions", function(..., func) { standardGeneric("zipPartitions") },
signature = "...")

#' @rdname zipWithIndex
#' @seealso zipWithUniqueId
#' @export
33 changes: 33 additions & 0 deletions R/pkg/inst/tests/test_binary_function.R
@@ -66,3 +66,36 @@ test_that("cogroup on two RDDs", {
expect_equal(sortKeyValueList(actual),
sortKeyValueList(expected))
})

test_that("zipPartitions() on RDDs", {
rdd1 <- parallelize(sc, 1:2, 2L) # 1, 2
rdd2 <- parallelize(sc, 1:4, 2L) # 1:2, 3:4
rdd3 <- parallelize(sc, 1:6, 2L) # 1:3, 4:6
actual <- collect(zipPartitions(rdd1, rdd2, rdd3,
func = function(x, y, z) { list(list(x, y, z))} ))
expect_equal(actual,
list(list(1, c(1,2), c(1,2,3)), list(2, c(3,4), c(4,5,6))))

mockFile = c("Spark is pretty.", "Spark is awesome.")
fileName <- tempfile(pattern="spark-test", fileext=".tmp")
writeLines(mockFile, fileName)

rdd <- textFile(sc, fileName, 1)
actual <- collect(zipPartitions(rdd, rdd,
func = function(x, y) { list(paste(x, y, sep = "\n")) }))
expected <- list(paste(mockFile, mockFile, sep = "\n"))
expect_equal(actual, expected)

rdd1 <- parallelize(sc, 0:1, 1)
actual <- collect(zipPartitions(rdd1, rdd,
func = function(x, y) { list(x + nchar(y)) }))
expected <- list(0:1 + nchar(mockFile))
expect_equal(actual, expected)

rdd <- map(rdd, function(x) { x })
actual <- collect(zipPartitions(rdd, rdd1,
func = function(x, y) { list(y + nchar(x)) }))
expect_equal(actual, expected)

unlink(fileName)
})
5 changes: 5 additions & 0 deletions R/pkg/inst/tests/test_rdd.R
@@ -759,6 +759,11 @@ test_that("collectAsMap() on a pairwise RDD", {
expect_equal(vals, list(`1` = "a", `2` = "b"))
})

test_that("show()", {
rdd <- parallelize(sc, list(1:10))
expect_output(show(rdd), "ParallelCollectionRDD\\[\\d+\\] at parallelize at RRDD\\.scala:\\d+")
})

test_that("sampleByKey() on pairwise RDDs", {
rdd <- parallelize(sc, 1:2000)
pairsRDD <- lapply(rdd, function(x) { if (x %% 2 == 0) list("a", x) else list("b", x) })
1 change: 0 additions & 1 deletion assembly/pom.xml
@@ -194,7 +194,6 @@
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-assembly-plugin</artifactId>
<version>2.4</version>
<executions>
<execution>
<id>dist</id>
5 changes: 4 additions & 1 deletion bin/spark-class2.cmd
@@ -61,7 +61,10 @@ if not "x%JAVA_HOME%"=="x" set RUNNER=%JAVA_HOME%\bin\java

rem The launcher library prints the command to be executed in a single line suitable for being
rem executed by the batch interpreter. So read all the output of the launcher into a variable.
for /f "tokens=*" %%i in ('cmd /C ""%RUNNER%" -cp %LAUNCH_CLASSPATH% org.apache.spark.launcher.Main %*"') do (
set LAUNCHER_OUTPUT=%temp%\spark-class-launcher-output-%RANDOM%.txt
"%RUNNER%" -cp %LAUNCH_CLASSPATH% org.apache.spark.launcher.Main %* > %LAUNCHER_OUTPUT%
for /f "tokens=*" %%i in (%LAUNCHER_OUTPUT%) do (
set SPARK_CMD=%%i
)
del %LAUNCHER_OUTPUT%
%SPARK_CMD%
3 changes: 2 additions & 1 deletion conf/spark-env.sh.template
@@ -3,7 +3,7 @@
# This file is sourced when running various Spark programs.
# Copy it as spark-env.sh and edit that to configure Spark for your site.

# Options read when launching programs locally with
# Options read when launching programs locally with
# ./bin/run-example or ./bin/spark-submit
# - HADOOP_CONF_DIR, to point Spark towards Hadoop configuration files
# - SPARK_LOCAL_IP, to set the IP address Spark binds to on this node
@@ -39,6 +39,7 @@
# - SPARK_WORKER_DIR, to set the working directory of worker processes
# - SPARK_WORKER_OPTS, to set config properties only for the worker (e.g. "-Dx=y")
# - SPARK_HISTORY_OPTS, to set config properties only for the history server (e.g. "-Dx=y")
# - SPARK_SHUFFLE_OPTS, to set config properties only for the external shuffle service (e.g. "-Dx=y")
# - SPARK_DAEMON_JAVA_OPTS, to set config properties for all daemons (e.g. "-Dx=y")
# - SPARK_PUBLIC_DNS, to set the public dns name of the master or workers

6 changes: 5 additions & 1 deletion core/pom.xml
@@ -95,6 +95,11 @@
<artifactId>spark-network-shuffle_${scala.binary.version}</artifactId>
<version>${project.version}</version>
</dependency>
<dependency>
<groupId>org.apache.spark</groupId>
<artifactId>spark-unsafe_${scala.binary.version}</artifactId>
<version>${project.version}</version>
</dependency>
<dependency>
<groupId>net.java.dev.jets3t</groupId>
<artifactId>jets3t</artifactId>
@@ -478,7 +483,6 @@
<plugin>
<groupId>org.codehaus.mojo</groupId>
<artifactId>exec-maven-plugin</artifactId>
<version>1.3.2</version>
<executions>
<execution>
<id>sparkr-pkg</id>
26 changes: 17 additions & 9 deletions core/src/main/scala/org/apache/spark/HeartbeatReceiver.scala
@@ -76,13 +76,15 @@ private[spark] class HeartbeatReceiver(sc: SparkContext)

private var timeoutCheckingTask: ScheduledFuture[_] = null

private val timeoutCheckingThread =
ThreadUtils.newDaemonSingleThreadScheduledExecutor("heartbeat-timeout-checking-thread")
// "eventLoopThread" is used to run some pretty fast actions. The actions running in it should not
// block the thread for a long time.
private val eventLoopThread =
ThreadUtils.newDaemonSingleThreadScheduledExecutor("heartbeat-receiver-event-loop-thread")

private val killExecutorThread = ThreadUtils.newDaemonSingleThreadExecutor("kill-executor-thread")

override def onStart(): Unit = {
timeoutCheckingTask = timeoutCheckingThread.scheduleAtFixedRate(new Runnable {
timeoutCheckingTask = eventLoopThread.scheduleAtFixedRate(new Runnable {
override def run(): Unit = Utils.tryLogNonFatalError {
Option(self).foreach(_.send(ExpireDeadHosts))
}
@@ -99,11 +101,15 @@
override def receiveAndReply(context: RpcCallContext): PartialFunction[Any, Unit] = {
case heartbeat @ Heartbeat(executorId, taskMetrics, blockManagerId) =>
if (scheduler != null) {
val unknownExecutor = !scheduler.executorHeartbeatReceived(
executorId, taskMetrics, blockManagerId)
val response = HeartbeatResponse(reregisterBlockManager = unknownExecutor)
executorLastSeen(executorId) = System.currentTimeMillis()
context.reply(response)
eventLoopThread.submit(new Runnable {
override def run(): Unit = Utils.tryLogNonFatalError {
val unknownExecutor = !scheduler.executorHeartbeatReceived(
executorId, taskMetrics, blockManagerId)
val response = HeartbeatResponse(reregisterBlockManager = unknownExecutor)
context.reply(response)
}
})
} else {
// Because Executor will sleep several seconds before sending the first "Heartbeat", this
// case rarely happens. However, if it really happens, log it and ask the executor to
@@ -125,7 +131,9 @@
if (sc.supportDynamicAllocation) {
// Asynchronously kill the executor to avoid blocking the current thread
killExecutorThread.submit(new Runnable {
override def run(): Unit = sc.killExecutor(executorId)
override def run(): Unit = Utils.tryLogNonFatalError {
sc.killExecutor(executorId)
}
})
}
executorLastSeen.remove(executorId)
@@ -137,7 +145,7 @@
if (timeoutCheckingTask != null) {
timeoutCheckingTask.cancel(true)
}
timeoutCheckingThread.shutdownNow()
eventLoopThread.shutdownNow()
killExecutorThread.shutdownNow()
}
}
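
The HeartbeatReceiver change above hands the potentially slow executorHeartbeatReceived call to a dedicated single-thread executor and replies asynchronously, so the RPC event loop is not blocked. Below is a minimal sketch of that general pattern; the object and thread names are made up for illustration, and this is not Spark's ThreadUtils code:

    import java.util.concurrent.{Executors, ThreadFactory}

    // Sketch only: offload potentially blocking work to a dedicated daemon thread so the
    // caller's event loop can keep handling other messages, then reply asynchronously.
    object AsyncReplySketch {
      private val worker = Executors.newSingleThreadScheduledExecutor(new ThreadFactory {
        override def newThread(r: Runnable): Thread = {
          val t = new Thread(r, "heartbeat-sketch-thread")
          t.setDaemon(true) // daemon thread, as ThreadUtils.newDaemonSingleThreadScheduledExecutor does
          t
        }
      })

      def handleHeartbeat(executorId: String, reply: Boolean => Unit): Unit = {
        worker.submit(new Runnable {
          override def run(): Unit = {
            // ...slow bookkeeping for executorId would happen here...
            reply(true) // reply once the work is done, off the event loop
          }
        })
      }

      def shutdown(): Unit = worker.shutdownNow()
    }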
core/src/main/scala/org/apache/spark/MapOutputTracker.scala
@@ -106,7 +106,7 @@ private[spark] abstract class MapOutputTracker(conf: SparkConf) extends Logging
*/
protected def askTracker[T: ClassTag](message: Any): T = {
try {
trackerEndpoint.askWithReply[T](message)
trackerEndpoint.askWithRetry[T](message)
} catch {
case e: Exception =>
logError("Error communicating with MapOutputTracker", e)
90 changes: 89 additions & 1 deletion core/src/main/scala/org/apache/spark/SparkConf.scala
@@ -211,7 +211,74 @@ class SparkConf(loadDefaults: Boolean) extends Cloneable with Logging {
Utils.timeStringAsMs(get(key, defaultValue))
}

/**
* Get a size parameter as bytes; throws a NoSuchElementException if it's not set. If no
* suffix is provided then bytes are assumed.
* @throws NoSuchElementException
*/
def getSizeAsBytes(key: String): Long = {
Utils.byteStringAsBytes(get(key))
}

/**
* Get a size parameter as bytes, falling back to a default if not set. If no
* suffix is provided then bytes are assumed.
*/
def getSizeAsBytes(key: String, defaultValue: String): Long = {
Utils.byteStringAsBytes(get(key, defaultValue))
}

/**
* Get a size parameter as Kibibytes; throws a NoSuchElementException if it's not set. If no
* suffix is provided then Kibibytes are assumed.
* @throws NoSuchElementException
*/
def getSizeAsKb(key: String): Long = {
Utils.byteStringAsKb(get(key))
}

/**
* Get a size parameter as Kibibytes, falling back to a default if not set. If no
* suffix is provided then Kibibytes are assumed.
*/
def getSizeAsKb(key: String, defaultValue: String): Long = {
Utils.byteStringAsKb(get(key, defaultValue))
}

/**
* Get a size parameter as Mebibytes; throws a NoSuchElementException if it's not set. If no
* suffix is provided then Mebibytes are assumed.
* @throws NoSuchElementException
*/
def getSizeAsMb(key: String): Long = {
Utils.byteStringAsMb(get(key))
}

/**
* Get a size parameter as Mebibytes, falling back to a default if not set. If no
* suffix is provided then Mebibytes are assumed.
*/
def getSizeAsMb(key: String, defaultValue: String): Long = {
Utils.byteStringAsMb(get(key, defaultValue))
}

/**
* Get a size parameter as Gibibytes; throws a NoSuchElementException if it's not set. If no
* suffix is provided then Gibibytes are assumed.
* @throws NoSuchElementException
*/
def getSizeAsGb(key: String): Long = {
Utils.byteStringAsGb(get(key))
}

/**
* Get a size parameter as Gibibytes, falling back to a default if not set. If no
* suffix is provided then Gibibytes are assumed.
*/
def getSizeAsGb(key: String, defaultValue: String): Long = {
Utils.byteStringAsGb(get(key, defaultValue))
}

/** Get a parameter as an Option */
def getOption(key: String): Option[String] = {
Option(settings.get(key)).orElse(getDeprecatedConfig(key, this))
@@ -407,7 +474,13 @@
"The spark.cache.class property is no longer being used! Specify storage levels using " +
"the RDD.persist() method instead."),
DeprecatedConfig("spark.yarn.user.classpath.first", "1.3",
"Please use spark.{driver,executor}.userClassPathFirst instead."))
"Please use spark.{driver,executor}.userClassPathFirst instead."),
DeprecatedConfig("spark.kryoserializer.buffer.mb", "1.4",
"Please use spark.kryoserializer.buffer instead. The default value for " +
"spark.kryoserializer.buffer.mb was previously specified as '0.064'. Fractional values " +
"are no longer accepted. To specify the equivalent now, one may use '64k'.")
)

Map(configs.map { cfg => (cfg.key -> cfg) }:_*)
}

@@ -432,6 +505,21 @@
AlternateConfig("spark.yarn.applicationMaster.waitTries", "1.3",
// Translate old value to a duration, with 10s wait time per try.
translation = s => s"${s.toLong * 10}s")),
"spark.reducer.maxSizeInFlight" -> Seq(
AlternateConfig("spark.reducer.maxMbInFlight", "1.4")),
"spark.kryoserializer.buffer" ->
Seq(AlternateConfig("spark.kryoserializer.buffer.mb", "1.4",
translation = s => s"${s.toDouble * 1000}k")),
"spark.kryoserializer.buffer.max" -> Seq(
AlternateConfig("spark.kryoserializer.buffer.max.mb", "1.4")),
"spark.shuffle.file.buffer" -> Seq(
AlternateConfig("spark.shuffle.file.buffer.kb", "1.4")),
"spark.executor.logs.rolling.maxSize" -> Seq(
AlternateConfig("spark.executor.logs.rolling.size.maxBytes", "1.4")),
"spark.io.compression.snappy.blockSize" -> Seq(
AlternateConfig("spark.io.compression.snappy.block.size", "1.4")),
"spark.io.compression.lz4.blockSize" -> Seq(
AlternateConfig("spark.io.compression.lz4.block.size", "1.4")),
"spark.rpc.numRetries" -> Seq(
AlternateConfig("spark.akka.num.retries", "1.4")),
"spark.rpc.retry.wait" -> Seq(
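
To illustrate the size-string getters added to SparkConf earlier in this diff, here is a small usage sketch. It assumes a Spark build that includes these changes; spark.example.bufferSize is a made-up key used only to show the default-value overload:

    import org.apache.spark.SparkConf

    object SizeConfSketch {
      def main(args: Array[String]): Unit = {
        // loadDefaults = false keeps the example independent of system properties.
        val conf = new SparkConf(false)
          .set("spark.shuffle.file.buffer", "32k")
          .set("spark.reducer.maxSizeInFlight", "48m")

        println(conf.getSizeAsKb("spark.shuffle.file.buffer"))        // 32
        println(conf.getSizeAsMb("spark.reducer.maxSizeInFlight"))    // 48
        println(conf.getSizeAsBytes("spark.reducer.maxSizeInFlight")) // 50331648 (48 * 1024 * 1024)

        // With no suffix, the value is interpreted in the unit implied by the getter.
        println(conf.getSizeAsMb("spark.example.bufferSize", "384"))  // 384
      }
    }

Suffixed values are parsed as binary units (k, m, g for kibibytes, mebibytes, gibibytes), matching the Scaladoc added above.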