1 change: 1 addition & 0 deletions .prow.sh
@@ -8,6 +8,7 @@ CSI_PROW_GINKO_PARALLEL="-p -nodes 40" # default was 7
#CSI_PROW_BUILD_JOB=false
#CSI_PROW_KUBERNETES_VERSION=latest
#CSI_PROW_HOSTPATH_CANARY=canary
CSI_PROW_HOSTPATH_DRIVER_NAME="hostpath.csi.k8s.io"

CSI_PROW_TESTS_SANITY="sanity"

15 changes: 8 additions & 7 deletions README.md
@@ -82,7 +82,7 @@ csi-hostpathplugin-0 2/2 Running 0 5m45s
From the root directory, deploy the application pods including a storage class, a PVC, and a pod which mounts a volume using the Hostpath driver found in directory `./examples`:

```shell
$ kubectl create -f ./examples
$ for i in ./examples/csi-storageclass.yaml ./examples/csi-pvc.yaml ./examples/csi-app.yaml; do kubectl apply -f $i; done
Collaborator: should we explicitly call out each file as a separate step?

Contributor Author: We could. It would be more work to copy-and-paste the whole sequence. I don't have a strong opinion here.

pod/my-csi-app created
persistentvolumeclaim/csi-pvc created
storageclass.storage.k8s.io/csi-hostpath-sc created
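
For comparison, the reviewer's suggestion of calling out each file as a separate step would presumably look something like this (a sketch only, reusing the three example manifests and the output shown above):

```shell
$ kubectl apply -f ./examples/csi-storageclass.yaml
storageclass.storage.k8s.io/csi-hostpath-sc created

$ kubectl apply -f ./examples/csi-pvc.yaml
persistentvolumeclaim/csi-pvc created

$ kubectl apply -f ./examples/csi-app.yaml
pod/my-csi-app created
```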
@@ -230,13 +230,14 @@ Since volume snapshot is an alpha feature starting in Kubernetes v1.12, you need
> Resource Version: 2418
> Self Link: /apis/snapshot.storage.k8s.io/v1alpha1/volumesnapshotclasses/csi-hostpath-snapclass
> UID: c8f5bc47-c716-11e8-8911-000c2967769a
> Snapshotter: csi-hostpath
> Snapshotter: hostpath.csi.k8s.io
> Events: <none>
> ```

Use the volume snapshot class to dynamically create a volume snapshot:
After having created the `csi-pvc` as described in the example above,
use the volume snapshot class to dynamically create a volume snapshot:

> $ kubectl create -f examples/csi-snapshot.yaml
> $ kubectl apply -f examples/csi-snapshot.yaml
> ```
> volumesnapshot.snapshot.storage.k8s.io/new-snapshot-demo created
> ```
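
`examples/csi-snapshot.yaml` itself is not part of this diff; judging from the v1alpha1 snapshot API and the names in the surrounding output, it presumably looks roughly like the following (field values are assumptions, not taken from the repo):

```yaml
apiVersion: snapshot.storage.k8s.io/v1alpha1
kind: VolumeSnapshot
metadata:
  name: new-snapshot-demo
spec:
  snapshotClassName: csi-hostpath-snapclass  # the class whose snapshotter is renamed in this PR
  source:
    name: csi-pvc                            # the PVC created in the earlier example
    kind: PersistentVolumeClaim
```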
@@ -299,7 +300,7 @@ Use the volume snapshot class to dynamically create a volume snapshot:
> Spec:
> Csi Volume Snapshot Source:
> Creation Time: 1538576205471577525
> Driver: csi-hostpath
> Driver: hostpath.csi.k8s.io
> Restore Size: 1073741824
> Snapshot Handle: f55ff979-c716-11e8-bb16-000c2967769a
> Deletion Policy: Delete
@@ -324,7 +325,7 @@ Use the volume snapshot class to dynamically create a volume snapshot:

Follow the following example to create a volume from a volume snapshot:

> $ kubectl create -f examples/csi-restore.yaml
> $ kubectl apply -f examples/csi-restore.yaml
> `persistentvolumeclaim/hpvc-restore created`
>
> $ kubectl get pvc
@@ -387,7 +388,7 @@ spec:
volumes:
- name: my-csi-volume
csi:
driver: csi-hostpath
driver: hostpath.csi.k8s.io
```

> See sample YAML file [here](./examples/csi-app-inline.yaml).
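
Pieced together from the `examples/csi-app-inline.yaml` hunks further down, the full inline-volume pod with the renamed driver would read roughly as follows (the image, command, and mount path are assumptions that the diff does not show):

```yaml
kind: Pod
apiVersion: v1
metadata:
  name: my-csi-app-inline
spec:
  containers:
    - name: my-frontend
      image: busybox                  # assumed
      command: [ "sleep", "1000000" ] # assumed
      volumeMounts:
      - mountPath: "/data"            # assumed
        name: my-csi-volume
  volumes:
    - name: my-csi-volume
      csi:
        driver: hostpath.csi.k8s.io   # the new driver name
```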
2 changes: 1 addition & 1 deletion cmd/hostpathplugin/main.go
@@ -31,7 +31,7 @@ func init() {

var (
endpoint = flag.String("endpoint", "unix://tmp/csi.sock", "CSI endpoint")
driverName = flag.String("drivername", "csi-hostpath", "name of the driver")
driverName = flag.String("drivername", "hostpath.csi.k8s.io", "name of the driver")
nodeID = flag.String("nodeid", "", "node id")
ephemeral = flag.Bool("ephemeral", false, "deploy in ephemeral mode")
showVersion = flag.Bool("version", false, "Show version.")
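Combining the flags visible in this hunk, a manual invocation that spells out the new default driver name would look something like this (the endpoint and node ID are placeholders):

```shell
hostpathplugin \
    --drivername=hostpath.csi.k8s.io \
    --endpoint=unix:///csi/csi.sock \
    --nodeid=node-1 \
    --v=5
```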
1 change: 1 addition & 0 deletions deploy/kubernetes-1.13/hostpath/csi-hostpath-plugin.yaml
@@ -65,6 +65,7 @@ spec:
- name: hostpath
image: quay.io/k8scsi/hostpathplugin:v1.1.0
args:
- "--drivername=hostpath.csi.k8s.io"
- "--v=5"
- "--endpoint=$(CSI_ENDPOINT)"
- "--nodeid=$(KUBE_NODE_NAME)"
2 changes: 1 addition & 1 deletion (VolumeSnapshotClass manifest under deploy/kubernetes-1.13; exact file path not shown in this view)
@@ -2,4 +2,4 @@ apiVersion: snapshot.storage.k8s.io/v1alpha1
kind: VolumeSnapshotClass
metadata:
name: csi-hostpath-snapclass
snapshotter: csi-hostpath
snapshotter: hostpath.csi.k8s.io
1 change: 1 addition & 0 deletions deploy/kubernetes-1.14/hostpath/csi-hostpath-plugin.yaml
@@ -65,6 +65,7 @@ spec:
- name: hostpath
image: quay.io/k8scsi/hostpathplugin:v1.1.0
args:
- "--drivername=hostpath.csi.k8s.io"
- "--v=5"
- "--endpoint=$(CSI_ENDPOINT)"
- "--nodeid=$(KUBE_NODE_NAME)"
2 changes: 1 addition & 1 deletion (VolumeSnapshotClass manifest under deploy/kubernetes-1.14; exact file path not shown in this view)
@@ -2,4 +2,4 @@ apiVersion: snapshot.storage.k8s.io/v1alpha1
kind: VolumeSnapshotClass
metadata:
name: csi-hostpath-snapclass
snapshotter: csi-hostpath
snapshotter: hostpath.csi.k8s.io
4 changes: 2 additions & 2 deletions examples/csi-app-inline.yaml
@@ -1,7 +1,7 @@
kind: Pod
apiVersion: v1
metadata:
name: my-csi-app
name: my-csi-app-inline
spec:
containers:
- name: my-frontend
@@ -13,4 +13,4 @@ spec:
volumes:
- name: my-csi-volume
csi:
driver: csi-hostpath
driver: hostpath.csi.k8s.io
2 changes: 1 addition & 1 deletion examples/csi-storageclass.yaml
@@ -2,6 +2,6 @@ apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
name: csi-hostpath-sc
provisioner: csi-hostpath
provisioner: hostpath.csi.k8s.io
reclaimPolicy: Delete
volumeBindingMode: Immediate
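
The PVC applied in the README example (`examples/csi-pvc.yaml`) is not shown in this diff; it presumably binds to this storage class along these lines (size and access mode are assumptions):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: csi-pvc
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: csi-hostpath-sc
```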
33 changes: 29 additions & 4 deletions release-tools/build.make
@@ -118,14 +118,39 @@ test-fmt:
fi

# This test only runs when dep >= 0.5 is installed, which is the case for the CI setup.
# When using 'go mod', we allow the test to be skipped in the Prow CI under some special
# circumstances, because it depends on accessing all remote repos and thus
# running it all the time would defeat the purpose of vendoring:
# - not handling a PR or
# - the fabricated merge commit leaves go.mod, go.sum and vendor dir unchanged
# - release-tools also didn't change (changing rules or Go version might lead to
# a different result and thus must be tested)
.PHONY: test-vendor
test: test-vendor
test-vendor:
@ echo; echo "### $@:"
@ case "$$(dep version 2>/dev/null | grep 'version *:')" in \
*v0.[56789]*) dep check && echo "vendor up-to-date" || false;; \
*) echo "skipping check, dep >= 0.5 required";; \
esac
@ if [ -f Gopkg.toml ]; then \
echo "Repo uses 'dep' for vendoring."; \
case "$$(dep version 2>/dev/null | grep 'version *:')" in \
*v0.[56789]*) dep check && echo "vendor up-to-date" || false;; \
*) echo "skipping check, dep >= 0.5 required";; \
esac; \
else \
echo "Repo uses 'go mod' for vendoring."; \
if [ "$${JOB_NAME}" ] && \
( [ "$${JOB_TYPE}" != "presubmit" ] || \
[ $$(git diff "${PULL_BASE_SHA}..HEAD" -- go.mod go.sum vendor release-tools | wc -l) -eq 0 ] ); then \
echo "Skipping vendor check because the Prow pre-submit job does not change vendoring."; \
elif ! GO111MODULE=on go mod vendor; then \
echo "ERROR: vendor check failed."; \
false; \
elif [ $$(git status --porcelain -- vendor | wc -l) -gt 0 ]; then \
echo "ERROR: vendor directory *not* up-to-date, it did get modified by 'GO111MODULE=on go mod vendor':"; \
git status -- vendor; \
git diff -- vendor; \
false; \
fi; \
fi;

.PHONY: test-subtree
test: test-subtree
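For reference, the `go mod` branch of the new `test-vendor` rule boils down to roughly this sequence when run by hand outside of Prow (a sketch; it assumes a repo that vendors with `go mod`):

```shell
# Re-vendor exactly as the rule does.
GO111MODULE=on go mod vendor

# Any output here means vendor/ is out of date and the rule would fail.
git status --porcelain -- vendor
```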
44 changes: 26 additions & 18 deletions release-tools/prow.sh
@@ -154,6 +154,7 @@ configvar CSI_PROW_WORK "$(mkdir -p "$GOPATH/pkg" && mktemp -d "$GOPATH/pkg/csip
configvar CSI_PROW_HOSTPATH_VERSION fc52d13ba07922c80555a24616a5b16480350c3f "hostpath driver" # pre-1.1.0
configvar CSI_PROW_HOSTPATH_REPO https://github.com/kubernetes-csi/csi-driver-host-path "hostpath repo"
configvar CSI_PROW_DEPLOYMENT "" "deployment"
configvar CSI_PROW_HOSTPATH_DRIVER_NAME "csi-hostpath" "the driver (aka provisioner) name of the chosen hostpath driver"

# If CSI_PROW_HOSTPATH_CANARY is set (typically to "canary", but also
# "1.0-canary"), then all image versions are replaced with that
@@ -673,6 +674,29 @@ hostpath_supports_block () {
echo "${result:-true}"
}

# The default implementation of this function generates an external
# driver test configuration for the hostpath driver.
#
# The content depends on both what the E2E suite expects and what the
# installed hostpath driver supports. Generating it here seems prone
# to breakage, but it is uncertain where a better place might be.
generate_test_driver () {
cat <<EOF
ShortName: csiprow
StorageClass:
FromName: true
SnapshotClass:
FromName: true
DriverInfo:
Name: ${CSI_PROW_HOSTPATH_DRIVER_NAME}
Capabilities:
block: $(hostpath_supports_block)
persistence: true
dataSource: true
multipods: true
EOF
}

# Captures pod output while running some other command.
run_with_loggers () (
loggers=$(start_loggers -f)
@@ -698,23 +722,7 @@ run_e2e () (
# When running on a multi-node cluster, we need to figure out where the
# hostpath driver was deployed and set ClientNodeName accordingly.

# The content of this file depends on both what the E2E suite expects and
# what the installed hostpath driver supports. Generating it here seems
# prone to breakage, but it is uncertain where a better place might be.
cat >"${CSI_PROW_WORK}/hostpath-test-driver.yaml" <<EOF
ShortName: csiprow
StorageClass:
FromName: true
SnapshotClass:
FromName: true
DriverInfo:
Name: csi-hostpath
Capabilities:
block: $(hostpath_supports_block)
persistence: true
dataSource: true
multipods: true
EOF
generate_test_driver >"${CSI_PROW_WORK}/test-driver.yaml" || die "generating test-driver.yaml failed"

# Rename, merge and filter JUnit files. Necessary in case that we run the E2E suite again
# and to avoid the large number of "skipped" tests that we get from using
@@ -727,7 +735,7 @@
trap move_junit EXIT

cd "${GOPATH}/src/${CSI_PROW_E2E_IMPORT_PATH}" &&
run_with_loggers ginkgo -v "$@" "${CSI_PROW_WORK}/e2e.test" -- -report-dir "${ARTIFACTS}" -storage.testdriver="${CSI_PROW_WORK}/hostpath-test-driver.yaml"
run_with_loggers ginkgo -v "$@" "${CSI_PROW_WORK}/e2e.test" -- -report-dir "${ARTIFACTS}" -storage.testdriver="${CSI_PROW_WORK}/test-driver.yaml"
)

# Run csi-sanity against installed CSI driver.
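With the override added to `.prow.sh` at the top of this PR (`CSI_PROW_HOSTPATH_DRIVER_NAME="hostpath.csi.k8s.io"`), `generate_test_driver` would emit roughly the following test driver configuration (the `block` value is an assumption; it is determined at runtime by `hostpath_supports_block`):

```yaml
ShortName: csiprow
StorageClass:
  FromName: true
SnapshotClass:
  FromName: true
DriverInfo:
  Name: hostpath.csi.k8s.io
  Capabilities:
    block: true        # assumed; actual value comes from hostpath_supports_block
    persistence: true
    dataSource: true
    multipods: true
```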
2 changes: 1 addition & 1 deletion release-tools/travis.yml
@@ -4,7 +4,7 @@ services:
- docker
matrix:
include:
- go: 1.11.1
- go: 1.12.4
before_script:
- mkdir -p bin
- wget https://github.com/golang/dep/releases/download/v0.5.1/dep-linux-amd64 -O bin/dep