This repository was archived by the owner on Jan 9, 2020. It is now read-only.

Conversation

@mccheah commented Apr 26, 2017

Implements #192 for submission v2.

@mccheah (Author) commented Apr 27, 2017

rerun unit tests please

@mccheah (Author) commented Apr 27, 2017

@aash @tnachen @foxish for review.

" authenticating against Kubernetes. Typically this is configured by spark-submit from" +
" mounting a secret from the submitting machine into the pod, and hence this" +
" configuration is marked as internal, but this can also be set manually to" +
" use a certificate that is mounted into the driver pod via other means.")

nit: certificate -> key file


bump

mccheah (Author): Done

" authenticating against Kubernetes. Typically this is configured by spark-submit from" +
" mounting a secret from the submitting machine into the pod, and hence this" +
" configuration is marked as internal, but this can also be set manually to" +
" use a certificate that is mounted into the driver pod via other means.")

certificate -> token


bump

mccheah (Author): Done
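Both nits above are the same kind of fix: the copied doc string said "certificate" where the config actually describes a client key file (first thread) or a token (second thread). A minimal sketch of the corrected wording — the opening phrase and helper name here are assumptions, since the quoted diff starts mid-string and the real code builds these messages inline:

```scala
// Hedged sketch: the corrected doc strings after both review threads.
// The shared tail of the message stays the same; only the noun changes.
// credentialDoc and the leading "File containing..." phrase are hypothetical.
def credentialDoc(what: String): String =
  s" File containing the $what for" +
  " authenticating against Kubernetes. Typically this is configured by spark-submit from" +
  " mounting a secret from the submitting machine into the pod, and hence this" +
  " configuration is marked as internal, but this can also be set manually to" +
  s" use a $what that is mounted into the driver pod via other means."

val clientKeyDoc = credentialDoc("client key file") // was "certificate"
val oauthTokenDoc = credentialDoc("token")          // was "certificate"
```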

* This client is primarily designed in a piecewise manner, where different aspects of the driver's
* execution are managed by separate components. This allows each component to be tested
* individually. The component providers are dependency-injected into the client via the
* constructor, thus allowing the providers to be stubbed out in testing the client class.

💯
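The class-level comment above can be made concrete. A minimal sketch of the shape it describes — names and types here are hypothetical simplifications, the real client wires several provider components — showing why constructor injection lets each piece be stubbed out in tests:

```scala
// Hedged sketch of constructor-based dependency injection for testability.
// A provider component is a small trait the client depends on.
trait CredentialsMounterProvider {
  def provideCredentialsMounter(): String // simplified return type for illustration
}

// The client receives its providers through the constructor rather than
// constructing them itself, so tests can substitute stubs.
class Client(credentialsMounterProvider: CredentialsMounterProvider) {
  def run(): String = credentialsMounterProvider.provideCredentialsMounter()
}

// In a unit test the provider is stubbed, exercising Client in isolation:
object StubProvider extends CredentialsMounterProvider {
  override def provideCredentialsMounter(): String = "stubbed-mounter"
}

val result = new Client(StubProvider).run()
```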

}

override def createCredentialsSecret(): Option[Secret] = {
val allSecretData = resolveSecretData(

nit: drop resolveSecretData onto the next line so it lines up with the other resolveSecretDatas


bump


/**
* Mount any Kubernetes credentials from the submitting machine's disk into the
* driver pod.

note that the credentialsSecret passed in here is the output of createCredentialsSecret(), allowing implementations of this trait to not have to save state between method invocations


bump
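The design point in the comment above — the secret created by one method is threaded back into the mounting method as an argument, so implementations keep no state between calls — can be sketched with simplified, hypothetical types (the real code uses fabric8 Secret and Pod models):

```scala
// Hedged sketch of the stateless two-method shape described above.
final case class Secret(name: String)
final case class Pod(volumes: List[String])

trait DriverPodKubernetesCredentialsMounter {
  // Creates the credentials secret, if any credentials were provided.
  def createCredentialsSecret(): Option[Secret]

  // Receives the output of createCredentialsSecret() explicitly, so no
  // field has to carry it between the two invocations.
  def mountDriverKubernetesCredentials(pod: Pod, credentialsSecret: Option[Secret]): Pod =
    credentialsSecret.fold(pod)(s => pod.copy(volumes = s.name :: pod.volumes))
}

object Mounter extends DriverPodKubernetesCredentialsMounter {
  override def createCredentialsSecret(): Option[Secret] = Some(Secret("driver-credentials"))
}

val secret = Mounter.createCredentialsSecret()
val mountedPod = Mounter.mountDriverKubernetesCredentials(Pod(Nil), secret)
```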

// Tests with local dependencies with the mounted dependency manager.
test("Uploading local dependencies should create Kubernetes secrets and config map") {
val initContainerConfigMap = getInitContainerConfigMap()
val initContainerSecret = getInitContainerSecret()

after this change is initContainerSecret's type ConfigMap or a () -> ConfigMap function that creates it?

What's Spark style? cc @tnachen

}

-  private def getInitContainerSecret(): Secret = {
+  private def getInitContainerSecret: Secret = {

I think Spark style is to not use getX for getters.

http://docs.scala-lang.org/style/naming-conventions.html#methods
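Per the style guide linked above, Scala accessors drop the Java-style `get` prefix and, when side-effect-free, the parentheses too. A small sketch of the rename being suggested (the `Secret` type and value here are simplified stand-ins):

```scala
// Hedged sketch of the naming-convention fix suggested above.
final case class Secret(name: String)

// Before: Java-style accessor name with empty parameter list.
def getInitContainerSecret(): Secret = Secret("init-secret")

// After: noun-named, no parentheses for a side-effect-free accessor.
def initContainerSecret: Secret = Secret("init-secret")
```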

APP_ID, KubernetesCredentials(None, None, None, None), None, None, None, None)

// Test matrices
private val TEST_MATRIX_EXPECTED_SPARK_CONFS = Table(

fancy
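The `Table` the reviewer is reacting to is ScalaTest's table-driven property checks. The same idea expressed in plain, dependency-free Scala — rows of (conf key, expected value) iterated over so one test body covers the whole matrix; the keys and paths below are hypothetical, not the PR's actual conf names:

```scala
// Hedged sketch of a table-driven test matrix in plain Scala
// (the PR uses ScalaTest's TableDrivenPropertyChecks Table for this).
val testMatrixExpectedSparkConfs: Seq[(String, String)] = Seq(
  ("spark.kubernetes.driver.mounted.clientKeyFile", "/mnt/secrets/client-key.pem"),
  ("spark.kubernetes.driver.mounted.clientCertFile", "/mnt/secrets/client-cert.pem"),
  ("spark.kubernetes.driver.mounted.caCertFile", "/mnt/secrets/ca-cert.pem"))

// Stand-in for whatever resolution step the test exercises; here it just
// looks the key up, so the asserts below illustrate the pattern only.
def resolve(conf: Map[String, String], key: String): Option[String] = conf.get(key)

val resolvedConf = testMatrixExpectedSparkConfs.toMap
testMatrixExpectedSparkConfs.foreach { case (key, expected) =>
  assert(resolve(resolvedConf, key).contains(expected), s"mismatch for $key")
}
```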

kubernetesTestComponents.clientConfig.getClientCertFile)
sparkConf.set(KUBERNETES_DRIVER_CA_CERT_FILE,
kubernetesTestComponents.clientConfig.getCaCertFile)
runSparkAppAndVerifyCompletion(SparkLauncher.NO_RESOURCE)

this verifies that the spark app finishes when these confs are set, but I'm not sure it verifies that the key/cert files are used when requesting (e.g. this conf could just be ignored and the test would still pass). How can we test those?

basePod, driverContainer.getName, driverPodKubernetesCredentialsSecret)
nonDriverPodKubernetesResources ++= driverPodKubernetesCredentialsSecret.toSeq
val sparkConfWithDriverPodKubernetesCredentialLocations =
driverPodKubernetesCredentialsMounter.setDriverPodKubernetesCredentialLocations(sparkConf)

I wonder if it's better to have this be all one method that returns a tuple of values, some new and some updated, and then apply those appropriately. E.g. (driverPodWithMountedCreds, sparkConfWithMountedCredLocations)

Would that save any complexity here?
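The refactor being floated — one call that returns both updated values as a tuple, instead of mutating the pod and the conf in two separate steps — can be sketched with simplified, hypothetical types and conf keys:

```scala
// Hedged sketch of the single-method-returning-a-tuple shape suggested above.
final case class Pod(volumes: List[String])
final case class SparkConf(entries: Map[String, String])

// One call produces both the pod with mounted credentials and the conf
// pointing at the mounted locations, keeping the two updates together.
def mountCredentials(pod: Pod, conf: SparkConf): (Pod, SparkConf) = {
  val podWithMountedCreds = pod.copy(volumes = "driver-credentials" :: pod.volumes)
  val confWithMountedCredLocations = conf.copy(entries =
    conf.entries + ("spark.kubernetes.driver.mounted.caCertFile" -> "/mnt/secrets/ca.pem"))
  (podWithMountedCreds, confWithMountedCredLocations)
}

val (driverPod, mountedConf) = mountCredentials(Pod(Nil), SparkConf(Map.empty))
```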

@mccheah mccheah force-pushed the submission-v2-kubernetes-credentials branch from 95fa714 to 940bc9e Compare April 28, 2017 20:14
@mccheah mccheah changed the base branch from init-container-downloads-remote-files to init-containers-on-executors-with-remote-files April 28, 2017 20:14
@mccheah (Author) commented May 1, 2017

rerun unit tests please

@ash211 commented May 10, 2017

rerun integration test please

@mccheah mccheah force-pushed the submission-v2-kubernetes-credentials branch from dc19d14 to c2bbe6f Compare May 17, 2017 19:17
@mccheah mccheah changed the base branch from init-containers-on-executors-with-remote-files to branch-2.1-kubernetes May 17, 2017 19:19
@ash211 ash211 changed the title Re-implement driver pod kubernetes credentials mounting for V2. Support driver pod kubernetes credentials mounting in V2 May 17, 2017
@ash211 left a comment:

Looks good to me (did most of the review a few weeks ago). Main question is about the coverage of the integration test.

runSparkPiAndVerifyCompletion(SparkLauncher.NO_RESOURCE)
}

test("Use client key and client cert file when requesting executors") {

is minikube configured at this point to require auth via the key/cert and CA cert files? Theoretically this test could pass because nothing happens to the config, rather than that the config is set and used in the request to the apiserver which requires the auth config

mccheah (Author): I was under the impression it either needs the service account token or the client key-cert pair. It's unfortunately difficult to test just this in isolation, since we fall back to the service account token when the user does not provide other credentials.


Oh yeah, we'd need to configure minikube specially for this, or inspect minikube output to verify this. Without a reasonably available way to verify this, I think the test is good as is.

@ash211 left a comment:

LGTM, merging

@ash211 ash211 merged commit 9d6665c into branch-2.1-kubernetes May 18, 2017
@mccheah mccheah mentioned this pull request May 23, 2017
ifilonenko pushed a commit to ifilonenko/spark that referenced this pull request Feb 26, 2019
[NOSQUASH] Resync from apache-spark-on-k8s upstream
