
Commit 21e1fc7

Stavros Kontopoulos authored and foxish committed
[SPARK-24232][K8S] Add support for secret env vars
## What changes were proposed in this pull request?

* Allows referring to a secret as an env var.
* Introduces new config properties of the form `spark.kubernetes.{driver,executor}.secretKeyRef.ENV_NAME=name:key` (ENV_NAME is case sensitive).
* Updates docs.
* Adds the required unit tests.

## How was this patch tested?

Manually tested and confirmed that the secrets exist in the driver's and executor's container env; the job also finished successfully.

First created a secret with the following yaml:

```
apiVersion: v1
kind: Secret
metadata:
  name: test-secret
data:
  username: c3RhdnJvcw==
  password: Mzk1MjgkdmRnN0pi
```

```
$ echo -n 'stavros' | base64
c3RhdnJvcw==
$ echo -n '39528$vdg7Jb' | base64
Mzk1MjgkdmRnN0pi
```

Ran a job as follows:

```
./bin/spark-submit \
  --master k8s://http://localhost:9000 \
  --deploy-mode cluster \
  --name spark-pi \
  --class org.apache.spark.examples.SparkPi \
  --conf spark.executor.instances=1 \
  --conf spark.kubernetes.container.image=skonto/spark:k8envs3 \
  --conf spark.kubernetes.driver.secretKeyRef.MY_USERNAME=test-secret:username \
  --conf spark.kubernetes.driver.secretKeyRef.My_password=test-secret:password \
  --conf spark.kubernetes.executor.secretKeyRef.MY_USERNAME=test-secret:username \
  --conf spark.kubernetes.executor.secretKeyRef.My_password=test-secret:password \
  local:///opt/spark/examples/jars/spark-examples_2.11-2.4.0-SNAPSHOT.jar 10000
```

The secret loaded correctly in the driver container:

![image](https://user-images.githubusercontent.com/7945591/40174346-7fee70c8-59dd-11e8-8705-995a5472716f.png)

Logging into the executor container also shows the variables:

```
$ kubectl exec -it spark-pi-1526555613156-exec-1 bash
bash-4.4# env
SPARK_EXECUTOR_MEMORY=1g
SPARK_EXECUTOR_CORES=1
LANG=C.UTF-8
HOSTNAME=spark-pi-1526555613156-exec-1
SPARK_APPLICATION_ID=spark-application-1526555618626
MY_USERNAME=stavros
JAVA_HOME=/usr/lib/jvm/java-1.8-openjdk
KUBERNETES_PORT_443_TCP_PROTO=tcp
KUBERNETES_PORT_443_TCP_ADDR=10.100.0.1
JAVA_VERSION=8u151
KUBERNETES_PORT=tcp://10.100.0.1:443
PWD=/opt/spark/work-dir
HOME=/root
SPARK_LOCAL_DIRS=/var/data/spark-b569b0ae-b7ef-4f91-bcd5-0f55535d3564
KUBERNETES_SERVICE_PORT_HTTPS=443
KUBERNETES_PORT_443_TCP_PORT=443
SPARK_HOME=/opt/spark
SPARK_DRIVER_URL=spark://CoarseGrainedScheduler@spark-pi-1526555613156-driver-svc.default.svc:7078
KUBERNETES_PORT_443_TCP=tcp://10.100.0.1:443
SPARK_EXECUTOR_POD_IP=9.0.9.77
TERM=xterm
SPARK_EXECUTOR_ID=1
SHLVL=1
KUBERNETES_SERVICE_PORT=443
SPARK_CONF_DIR=/opt/spark/conf
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/lib/jvm/java-1.8-openjdk/jre/bin:/usr/lib/jvm/java-1.8-openjdk/bin
JAVA_ALPINE_VERSION=8.151.12-r0
KUBERNETES_SERVICE_HOST=10.100.0.1
My_password=39528$vdg7Jb
_=/usr/bin/env
```

Author: Stavros Kontopoulos <[email protected]>

Closes #21317 from skonto/k8s-fix-env-secrets.
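The base64 strings above can also be checked programmatically; a minimal sketch using the JDK's `Base64` codec, reproducing what `echo -n ... | base64` prints for the two secret values:

```scala
import java.nio.charset.StandardCharsets
import java.util.Base64

// Encode the two secret values exactly as `echo -n ... | base64` would.
val username = Base64.getEncoder.encodeToString("stavros".getBytes(StandardCharsets.UTF_8))
val password = Base64.getEncoder.encodeToString("39528$vdg7Jb".getBytes(StandardCharsets.UTF_8))

println(username) // c3RhdnJvcw==
println(password) // Mzk1MjgkdmRnN0pi
```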
1 parent cc976f6 commit 21e1fc7

18 files changed (+222, −12 lines)

docs/running-on-kubernetes.md

Lines changed: 22 additions & 0 deletions
````diff
@@ -140,6 +140,12 @@ namespace as that of the driver and executor pods. For example, to mount a secre
 --conf spark.kubernetes.executor.secrets.spark-secret=/etc/secrets
 ```
 
+To use a secret through an environment variable, use the following options to the `spark-submit` command:
+```
+--conf spark.kubernetes.driver.secretKeyRef.ENV_NAME=name:key
+--conf spark.kubernetes.executor.secretKeyRef.ENV_NAME=name:key
+```
+
 ## Introspection and Debugging
 
 These are the different ways in which you can investigate a running/completed Spark application, monitor progress, and
@@ -602,4 +608,20 @@ specific to Spark on Kubernetes.
   <code>spark.kubernetes.executor.secrets.spark-secret=/etc/secrets</code>.
   </td>
 </tr>
+<tr>
+  <td><code>spark.kubernetes.driver.secretKeyRef.[EnvName]</code></td>
+  <td>(none)</td>
+  <td>
+   Add as an environment variable to the driver container, with name EnvName (case sensitive), the value referenced by key <code>key</code> in the data of the referenced <a href="https://kubernetes.io/docs/concepts/configuration/secret/#using-secrets-as-environment-variables">Kubernetes Secret</a>. For example,
+   <code>spark.kubernetes.driver.secretKeyRef.ENV_VAR=spark-secret:key</code>.
+  </td>
+</tr>
+<tr>
+  <td><code>spark.kubernetes.executor.secretKeyRef.[EnvName]</code></td>
+  <td>(none)</td>
+  <td>
+   Add as an environment variable to the executor container, with name EnvName (case sensitive), the value referenced by key <code>key</code> in the data of the referenced <a href="https://kubernetes.io/docs/concepts/configuration/secret/#using-secrets-as-environment-variables">Kubernetes Secret</a>. For example,
+   <code>spark.kubernetes.executor.secretKeyRef.ENV_VAR=spark-secret:key</code>.
+  </td>
+</tr>
 </table>
````
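For reference, each such `secretKeyRef` entry translates into Kubernetes' standard `env`/`valueFrom` form on the generated container. A sketch of the resulting pod spec fragment (field names follow the Kubernetes API; the secret and key names are taken from the example above):

```yaml
env:
  - name: MY_USERNAME
    valueFrom:
      secretKeyRef:
        name: test-secret
        key: username
```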

resource-managers/kubernetes/core/src/main/scala/org/apache/spark/deploy/k8s/Config.scala

Lines changed: 2 additions & 0 deletions
```diff
@@ -162,10 +162,12 @@ private[spark] object Config extends Logging {
   val KUBERNETES_DRIVER_LABEL_PREFIX = "spark.kubernetes.driver.label."
   val KUBERNETES_DRIVER_ANNOTATION_PREFIX = "spark.kubernetes.driver.annotation."
   val KUBERNETES_DRIVER_SECRETS_PREFIX = "spark.kubernetes.driver.secrets."
+  val KUBERNETES_DRIVER_SECRET_KEY_REF_PREFIX = "spark.kubernetes.driver.secretKeyRef."
 
   val KUBERNETES_EXECUTOR_LABEL_PREFIX = "spark.kubernetes.executor.label."
   val KUBERNETES_EXECUTOR_ANNOTATION_PREFIX = "spark.kubernetes.executor.annotation."
   val KUBERNETES_EXECUTOR_SECRETS_PREFIX = "spark.kubernetes.executor.secrets."
+  val KUBERNETES_EXECUTOR_SECRET_KEY_REF_PREFIX = "spark.kubernetes.executor.secretKeyRef."
 
   val KUBERNETES_DRIVER_ENV_PREFIX = "spark.kubernetes.driverEnv."
 }
```
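These prefixes are consumed by stripping them off matching Spark conf keys. A minimal stand-alone sketch of that behavior (assumption: `KubernetesUtils.parsePrefixedKeyValuePairs` is modeled here over a plain `Map` rather than a `SparkConf`):

```scala
// Simplified model of prefix-based key/value extraction, as used for
// spark.kubernetes.{driver,executor}.secretKeyRef.* properties.
def parsePrefixedKeyValuePairs(
    conf: Map[String, String],
    prefix: String): Map[String, String] =
  conf.collect {
    case (key, value) if key.startsWith(prefix) =>
      key.stripPrefix(prefix) -> value
  }

val conf = Map(
  "spark.kubernetes.driver.secretKeyRef.MY_USERNAME" -> "test-secret:username",
  "spark.kubernetes.driver.secrets.spark-secret" -> "/etc/secrets",
  "spark.app.name" -> "spark-pi")

// Only the secretKeyRef.* entries survive, keyed by the env var name.
val envKeyRefs = parsePrefixedKeyValuePairs(conf, "spark.kubernetes.driver.secretKeyRef.")
```

Note that the env var name is whatever follows the prefix, which is why ENV_NAME is case sensitive.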

resource-managers/kubernetes/core/src/main/scala/org/apache/spark/deploy/k8s/KubernetesConf.scala

Lines changed: 9 additions & 2 deletions
```diff
@@ -54,6 +54,7 @@ private[spark] case class KubernetesConf[T <: KubernetesRoleSpecificConf](
     roleLabels: Map[String, String],
     roleAnnotations: Map[String, String],
     roleSecretNamesToMountPaths: Map[String, String],
+    roleSecretEnvNamesToKeyRefs: Map[String, String],
     roleEnvs: Map[String, String]) {
 
   def namespace(): String = sparkConf.get(KUBERNETES_NAMESPACE)
@@ -129,6 +130,8 @@ private[spark] object KubernetesConf {
       sparkConf, KUBERNETES_DRIVER_ANNOTATION_PREFIX)
     val driverSecretNamesToMountPaths = KubernetesUtils.parsePrefixedKeyValuePairs(
       sparkConf, KUBERNETES_DRIVER_SECRETS_PREFIX)
+    val driverSecretEnvNamesToKeyRefs = KubernetesUtils.parsePrefixedKeyValuePairs(
+      sparkConf, KUBERNETES_DRIVER_SECRET_KEY_REF_PREFIX)
     val driverEnvs = KubernetesUtils.parsePrefixedKeyValuePairs(
       sparkConf, KUBERNETES_DRIVER_ENV_PREFIX)
 
@@ -140,6 +143,7 @@ private[spark] object KubernetesConf {
       driverLabels,
       driverAnnotations,
       driverSecretNamesToMountPaths,
+      driverSecretEnvNamesToKeyRefs,
       driverEnvs)
   }
 
@@ -167,8 +171,10 @@ private[spark] object KubernetesConf {
       executorCustomLabels
     val executorAnnotations = KubernetesUtils.parsePrefixedKeyValuePairs(
       sparkConf, KUBERNETES_EXECUTOR_ANNOTATION_PREFIX)
-    val executorSecrets = KubernetesUtils.parsePrefixedKeyValuePairs(
+    val executorMountSecrets = KubernetesUtils.parsePrefixedKeyValuePairs(
       sparkConf, KUBERNETES_EXECUTOR_SECRETS_PREFIX)
+    val executorEnvSecrets = KubernetesUtils.parsePrefixedKeyValuePairs(
+      sparkConf, KUBERNETES_EXECUTOR_SECRET_KEY_REF_PREFIX)
     val executorEnv = sparkConf.getExecutorEnv.toMap
 
     KubernetesConf(
@@ -178,7 +184,8 @@ private[spark] object KubernetesConf {
       appId,
       executorLabels,
       executorAnnotations,
-      executorSecrets,
+      executorMountSecrets,
+      executorEnvSecrets,
       executorEnv)
   }
 }
```
resource-managers/kubernetes/core/src/main/scala/org/apache/spark/deploy/k8s/features/EnvSecretsFeatureStep.scala

Lines changed: 57 additions & 0 deletions

```diff
@@ -0,0 +1,57 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License. You may obtain a copy of the License at
+ *
+ *    http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.spark.deploy.k8s.features
+
+import scala.collection.JavaConverters._
+
+import io.fabric8.kubernetes.api.model.{ContainerBuilder, EnvVarBuilder, HasMetadata}
+
+import org.apache.spark.deploy.k8s.{KubernetesConf, KubernetesRoleSpecificConf, SparkPod}
+
+private[spark] class EnvSecretsFeatureStep(
+    kubernetesConf: KubernetesConf[_ <: KubernetesRoleSpecificConf])
+  extends KubernetesFeatureConfigStep {
+  override def configurePod(pod: SparkPod): SparkPod = {
+    val addedEnvSecrets = kubernetesConf
+      .roleSecretEnvNamesToKeyRefs
+      .map { case (envName, keyRef) =>
+        // Split the "name:key" reference into secret name and data key.
+        val keyRefParts = keyRef.split(":")
+        require(keyRefParts.size == 2, "SecretKeyRef must be in the form name:key.")
+        val name = keyRefParts(0)
+        val key = keyRefParts(1)
+        new EnvVarBuilder()
+          .withName(envName)
+          .withNewValueFrom()
+            .withNewSecretKeyRef()
+              .withKey(key)
+              .withName(name)
+            .endSecretKeyRef()
+          .endValueFrom()
+          .build()
+      }
+
+    val containerWithEnvVars = new ContainerBuilder(pod.container)
+      .addAllToEnv(addedEnvSecrets.toSeq.asJava)
+      .build()
+    SparkPod(pod.pod, containerWithEnvVars)
+  }
+
+  override def getAdditionalPodSystemProperties(): Map[String, String] = Map.empty
+
+  override def getAdditionalKubernetesResources(): Seq[HasMetadata] = Seq.empty
+}
```
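The `name:key` parsing in the step above can be exercised in isolation; a sketch with a hypothetical helper (not part of the patch) that mirrors the step's `require` check:

```scala
// Split a "name:key" secret reference, mirroring the require() check
// in EnvSecretsFeatureStep.configurePod.
def parseSecretKeyRef(keyRef: String): (String, String) = {
  val parts = keyRef.split(":")
  require(parts.size == 2, "SecretKeyRef must be in the form name:key.")
  (parts(0), parts(1))
}

val (secretName, dataKey) = parseSecretKeyRef("test-secret:username")
// secretName: "test-secret", dataKey: "username"
```

A reference without a colon (e.g. `"test-secret"`) fails the `require` with the same message the step raises.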

resource-managers/kubernetes/core/src/main/scala/org/apache/spark/deploy/k8s/submit/KubernetesDriverBuilder.scala

Lines changed: 9 additions & 2 deletions
```diff
@@ -17,7 +17,7 @@
 package org.apache.spark.deploy.k8s.submit
 
 import org.apache.spark.deploy.k8s.{KubernetesConf, KubernetesDriverSpec, KubernetesDriverSpecificConf, KubernetesRoleSpecificConf}
-import org.apache.spark.deploy.k8s.features.{BasicDriverFeatureStep, DriverKubernetesCredentialsFeatureStep, DriverServiceFeatureStep, LocalDirsFeatureStep, MountSecretsFeatureStep}
+import org.apache.spark.deploy.k8s.features._
 
 private[spark] class KubernetesDriverBuilder(
     provideBasicStep: (KubernetesConf[KubernetesDriverSpecificConf]) => BasicDriverFeatureStep =
@@ -30,6 +30,9 @@ private[spark] class KubernetesDriverBuilder(
     provideSecretsStep: (KubernetesConf[_ <: KubernetesRoleSpecificConf]
       => MountSecretsFeatureStep) =
       new MountSecretsFeatureStep(_),
+    provideEnvSecretsStep: (KubernetesConf[_ <: KubernetesRoleSpecificConf]
+      => EnvSecretsFeatureStep) =
+      new EnvSecretsFeatureStep(_),
     provideLocalDirsStep: (KubernetesConf[_ <: KubernetesRoleSpecificConf])
       => LocalDirsFeatureStep =
       new LocalDirsFeatureStep(_)) {
@@ -41,10 +44,14 @@ private[spark] class KubernetesDriverBuilder(
       provideCredentialsStep(kubernetesConf),
       provideServiceStep(kubernetesConf),
       provideLocalDirsStep(kubernetesConf))
-    val allFeatures = if (kubernetesConf.roleSecretNamesToMountPaths.nonEmpty) {
+    var allFeatures = if (kubernetesConf.roleSecretNamesToMountPaths.nonEmpty) {
       baseFeatures ++ Seq(provideSecretsStep(kubernetesConf))
     } else baseFeatures
 
+    allFeatures = if (kubernetesConf.roleSecretEnvNamesToKeyRefs.nonEmpty) {
+      allFeatures ++ Seq(provideEnvSecretsStep(kubernetesConf))
+    } else allFeatures
+
     var spec = KubernetesDriverSpec.initialSpec(kubernetesConf.sparkConf.getAll.toMap)
     for (feature <- allFeatures) {
       val configuredPod = feature.configurePod(spec.pod)
```
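The builder conditionally appends feature steps only when the corresponding conf map is non-empty. A toy model of that composition pattern (hypothetical types, not Spark's):

```scala
// Toy model: each feature step transforms a pod description.
trait FeatureStep { def configure(pod: List[String]): List[String] }

case class MountSecretsStep() extends FeatureStep {
  def configure(pod: List[String]): List[String] = pod :+ "volume-mounted-secrets"
}
case class EnvSecretsStep() extends FeatureStep {
  def configure(pod: List[String]): List[String] = pod :+ "env-secrets"
}

// Append each optional step only when its configuration map is non-empty,
// mirroring the checks on roleSecretNamesToMountPaths / roleSecretEnvNamesToKeyRefs.
def buildSteps(
    mountSecrets: Map[String, String],
    envSecrets: Map[String, String]): Seq[FeatureStep] = {
  val base = Seq.empty[FeatureStep]
  var all = if (mountSecrets.nonEmpty) base :+ MountSecretsStep() else base
  all = if (envSecrets.nonEmpty) all :+ EnvSecretsStep() else all
  all
}

val steps = buildSteps(Map.empty, Map("MY_USERNAME" -> "test-secret:username"))
val pod = steps.foldLeft(List.empty[String])((p, s) => s.configure(p))
// pod: List("env-secrets")
```

Folding the steps over an initial pod is the same shape as the builder's `for (feature <- allFeatures)` loop.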

resource-managers/kubernetes/core/src/main/scala/org/apache/spark/scheduler/cluster/k8s/KubernetesExecutorBuilder.scala

Lines changed: 10 additions & 2 deletions
```diff
@@ -17,24 +17,32 @@
 package org.apache.spark.scheduler.cluster.k8s
 
 import org.apache.spark.deploy.k8s.{KubernetesConf, KubernetesExecutorSpecificConf, KubernetesRoleSpecificConf, SparkPod}
-import org.apache.spark.deploy.k8s.features.{BasicExecutorFeatureStep, LocalDirsFeatureStep, MountSecretsFeatureStep}
+import org.apache.spark.deploy.k8s.features.{BasicExecutorFeatureStep, EnvSecretsFeatureStep, LocalDirsFeatureStep, MountSecretsFeatureStep}
 
 private[spark] class KubernetesExecutorBuilder(
     provideBasicStep: (KubernetesConf[KubernetesExecutorSpecificConf]) => BasicExecutorFeatureStep =
       new BasicExecutorFeatureStep(_),
     provideSecretsStep:
       (KubernetesConf[_ <: KubernetesRoleSpecificConf]) => MountSecretsFeatureStep =
       new MountSecretsFeatureStep(_),
+    provideEnvSecretsStep:
+      (KubernetesConf[_ <: KubernetesRoleSpecificConf] => EnvSecretsFeatureStep) =
+      new EnvSecretsFeatureStep(_),
     provideLocalDirsStep: (KubernetesConf[_ <: KubernetesRoleSpecificConf])
       => LocalDirsFeatureStep =
       new LocalDirsFeatureStep(_)) {
 
   def buildFromFeatures(
     kubernetesConf: KubernetesConf[KubernetesExecutorSpecificConf]): SparkPod = {
     val baseFeatures = Seq(provideBasicStep(kubernetesConf), provideLocalDirsStep(kubernetesConf))
-    val allFeatures = if (kubernetesConf.roleSecretNamesToMountPaths.nonEmpty) {
+    var allFeatures = if (kubernetesConf.roleSecretNamesToMountPaths.nonEmpty) {
       baseFeatures ++ Seq(provideSecretsStep(kubernetesConf))
     } else baseFeatures
+
+    allFeatures = if (kubernetesConf.roleSecretEnvNamesToKeyRefs.nonEmpty) {
+      allFeatures ++ Seq(provideEnvSecretsStep(kubernetesConf))
+    } else allFeatures
+
     var executorPod = SparkPod.initialPod()
     for (feature <- allFeatures) {
       executorPod = feature.configurePod(executorPod)
```

resource-managers/kubernetes/core/src/test/scala/org/apache/spark/deploy/k8s/KubernetesConfSuite.scala

Lines changed: 11 additions & 1 deletion
```diff
@@ -40,6 +40,9 @@ class KubernetesConfSuite extends SparkFunSuite {
   private val SECRET_NAMES_TO_MOUNT_PATHS = Map(
     "secret1" -> "/mnt/secrets/secret1",
     "secret2" -> "/mnt/secrets/secret2")
+  private val SECRET_ENV_VARS = Map(
+    "envName1" -> "name1:key1",
+    "envName2" -> "name2:key2")
   private val CUSTOM_ENVS = Map(
     "customEnvKey1" -> "customEnvValue1",
     "customEnvKey2" -> "customEnvValue2")
@@ -103,6 +106,9 @@ class KubernetesConfSuite extends SparkFunSuite {
     SECRET_NAMES_TO_MOUNT_PATHS.foreach { case (key, value) =>
       sparkConf.set(s"$KUBERNETES_DRIVER_SECRETS_PREFIX$key", value)
     }
+    SECRET_ENV_VARS.foreach { case (key, value) =>
+      sparkConf.set(s"$KUBERNETES_DRIVER_SECRET_KEY_REF_PREFIX$key", value)
+    }
     CUSTOM_ENVS.foreach { case (key, value) =>
       sparkConf.set(s"$KUBERNETES_DRIVER_ENV_PREFIX$key", value)
     }
@@ -121,6 +127,7 @@ class KubernetesConfSuite extends SparkFunSuite {
       CUSTOM_LABELS)
     assert(conf.roleAnnotations === CUSTOM_ANNOTATIONS)
     assert(conf.roleSecretNamesToMountPaths === SECRET_NAMES_TO_MOUNT_PATHS)
+    assert(conf.roleSecretEnvNamesToKeyRefs === SECRET_ENV_VARS)
     assert(conf.roleEnvs === CUSTOM_ENVS)
   }
 
@@ -155,6 +162,9 @@ class KubernetesConfSuite extends SparkFunSuite {
     CUSTOM_ANNOTATIONS.foreach { case (key, value) =>
       sparkConf.set(s"$KUBERNETES_EXECUTOR_ANNOTATION_PREFIX$key", value)
     }
+    SECRET_ENV_VARS.foreach { case (key, value) =>
+      sparkConf.set(s"$KUBERNETES_EXECUTOR_SECRET_KEY_REF_PREFIX$key", value)
+    }
     SECRET_NAMES_TO_MOUNT_PATHS.foreach { case (key, value) =>
       sparkConf.set(s"$KUBERNETES_EXECUTOR_SECRETS_PREFIX$key", value)
     }
@@ -170,6 +180,6 @@ class KubernetesConfSuite extends SparkFunSuite {
       SPARK_ROLE_LABEL -> SPARK_POD_EXECUTOR_ROLE) ++ CUSTOM_LABELS)
     assert(conf.roleAnnotations === CUSTOM_ANNOTATIONS)
     assert(conf.roleSecretNamesToMountPaths === SECRET_NAMES_TO_MOUNT_PATHS)
+    assert(conf.roleSecretEnvNamesToKeyRefs === SECRET_ENV_VARS)
   }
-
 }
```

resource-managers/kubernetes/core/src/test/scala/org/apache/spark/deploy/k8s/features/BasicDriverFeatureStepSuite.scala

Lines changed: 2 additions & 0 deletions
```diff
@@ -69,6 +69,7 @@ class BasicDriverFeatureStepSuite extends SparkFunSuite {
       DRIVER_LABELS,
       DRIVER_ANNOTATIONS,
       Map.empty,
+      Map.empty,
       DRIVER_ENVS)
 
     val featureStep = new BasicDriverFeatureStep(kubernetesConf)
@@ -138,6 +139,7 @@ class BasicDriverFeatureStepSuite extends SparkFunSuite {
       DRIVER_LABELS,
       DRIVER_ANNOTATIONS,
       Map.empty,
+      Map.empty,
       Map.empty)
     val step = new BasicDriverFeatureStep(kubernetesConf)
     val additionalProperties = step.getAdditionalPodSystemProperties()
```

resource-managers/kubernetes/core/src/test/scala/org/apache/spark/deploy/k8s/features/BasicExecutorFeatureStepSuite.scala

Lines changed: 3 additions & 0 deletions
```diff
@@ -87,6 +87,7 @@ class BasicExecutorFeatureStepSuite
       LABELS,
       ANNOTATIONS,
       Map.empty,
+      Map.empty,
       Map.empty))
     val executor = step.configurePod(SparkPod.initialPod())
 
@@ -124,6 +125,7 @@ class BasicExecutorFeatureStepSuite
       LABELS,
       ANNOTATIONS,
       Map.empty,
+      Map.empty,
       Map.empty))
     assert(step.configurePod(SparkPod.initialPod()).pod.getSpec.getHostname.length === 63)
   }
@@ -142,6 +144,7 @@ class BasicExecutorFeatureStepSuite
       LABELS,
       ANNOTATIONS,
       Map.empty,
+      Map.empty,
       Map("qux" -> "quux")))
     val executor = step.configurePod(SparkPod.initialPod())
```

resource-managers/kubernetes/core/src/test/scala/org/apache/spark/deploy/k8s/features/DriverKubernetesCredentialsFeatureStepSuite.scala

Lines changed: 3 additions & 0 deletions
```diff
@@ -59,6 +59,7 @@ class DriverKubernetesCredentialsFeatureStepSuite extends SparkFunSuite with Bef
       Map.empty,
       Map.empty,
       Map.empty,
+      Map.empty,
       Map.empty)
     val kubernetesCredentialsStep = new DriverKubernetesCredentialsFeatureStep(kubernetesConf)
     assert(kubernetesCredentialsStep.configurePod(BASE_DRIVER_POD) === BASE_DRIVER_POD)
@@ -88,6 +89,7 @@ class DriverKubernetesCredentialsFeatureStepSuite extends SparkFunSuite with Bef
       Map.empty,
       Map.empty,
       Map.empty,
+      Map.empty,
       Map.empty)
 
     val kubernetesCredentialsStep = new DriverKubernetesCredentialsFeatureStep(kubernetesConf)
@@ -124,6 +126,7 @@ class DriverKubernetesCredentialsFeatureStepSuite extends SparkFunSuite with Bef
       Map.empty,
       Map.empty,
       Map.empty,
+      Map.empty,
       Map.empty)
     val kubernetesCredentialsStep = new DriverKubernetesCredentialsFeatureStep(kubernetesConf)
     val resolvedProperties = kubernetesCredentialsStep.getAdditionalPodSystemProperties()
```
