5 changes: 5 additions & 0 deletions docs/index.md
@@ -10,6 +10,11 @@ It provides high-level APIs in Java, Scala, Python and R,
and an optimized engine that supports general execution graphs.
It also supports a rich set of higher-level tools including [Spark SQL](sql-programming-guide.html) for SQL and structured data processing, [MLlib](ml-guide.html) for machine learning, [GraphX](graphx-programming-guide.html) for graph processing, and [Spark Streaming](streaming-programming-guide.html).

# Security

Security in Spark is OFF by default, which could leave your installation vulnerable to attack.
Please see [Spark Security](security.html) before downloading and running Spark.

# Downloading

Get Spark from the [downloads page](https://spark.apache.org/downloads.html) of the project website. This documentation is for Spark version {{site.SPARK_VERSION}}. Spark uses Hadoop's client libraries for HDFS and YARN. Downloads are pre-packaged for a handful of popular Hadoop versions.
5 changes: 5 additions & 0 deletions docs/quick-start.md
@@ -17,6 +17,11 @@ you can download a package for any version of Hadoop.

Note that, before Spark 2.0, the main programming interface of Spark was the Resilient Distributed Dataset (RDD). After Spark 2.0, RDDs were superseded by Dataset, which is strongly typed like an RDD but with richer optimizations under the hood. The RDD interface is still supported, and you can find a more detailed reference in the [RDD programming guide](rdd-programming-guide.html). However, we highly recommend that you switch to Dataset, which has better performance than RDD. See the [SQL programming guide](sql-programming-guide.html) for more information about Dataset.
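
As a quick illustration of the Dataset API, here is a minimal sketch, assuming you are running `./bin/spark-shell` from the Spark directory and that `README.md` is any local text file:

```scala
// Read a text file into a strongly-typed Dataset[String].
val textFile = spark.read.textFile("README.md")

// Filter with a typed lambda and count the matching lines.
val linesWithSpark = textFile.filter(line => line.contains("Spark"))
println(s"${linesWithSpark.count()} of ${textFile.count()} lines mention Spark")
```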

# Security

Security in Spark is OFF by default, which could leave your installation vulnerable to attack.
Please see [Spark Security](security.html) before running Spark.

# Interactive Analysis with the Spark Shell

## Basics
5 changes: 5 additions & 0 deletions docs/running-on-kubernetes.md
@@ -12,6 +12,11 @@ Kubernetes scheduler that has been added to Spark.
In future versions, there may be behavioral changes around configuration,
container images and entrypoints.**

# Security

Security in Spark is OFF by default, which could leave your installation vulnerable to attack.
Please see [Spark Security](security.html) and the specific security sections in this doc before running Spark.

# Prerequisites

* A runnable distribution of Spark 2.3 or above.
5 changes: 5 additions & 0 deletions docs/running-on-mesos.md
@@ -13,6 +13,11 @@ The advantages of deploying Spark with Mesos include:
[frameworks](https://mesos.apache.org/documentation/latest/frameworks/)
- scalable partitioning between multiple instances of Spark

# Security

Security in Spark is OFF by default, which could leave your installation vulnerable to attack.
Please see [Spark Security](security.html) and the specific security sections in this doc before running Spark.

# How it Works

In a standalone cluster deployment, the cluster manager in the below diagram is a Spark master
5 changes: 5 additions & 0 deletions docs/running-on-yarn.md
@@ -9,6 +9,11 @@ Support for running on [YARN (Hadoop
NextGen)](http://hadoop.apache.org/docs/stable/hadoop-yarn/hadoop-yarn-site/YARN.html)
was added to Spark in version 0.6.0, and improved in subsequent releases.

# Security

Security in Spark is OFF by default, which could leave your installation vulnerable to attack.
Please see [Spark Security](security.html) and the specific security sections in this doc before running Spark.

# Launching Spark on YARN

Ensure that `HADOOP_CONF_DIR` or `YARN_CONF_DIR` points to the directory which contains the (client side) configuration files for the Hadoop cluster.
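
For example, here is a minimal sketch of submitting the bundled SparkPi example in cluster mode (the jar path is illustrative and depends on your distribution layout):

```bash
$ ./bin/spark-submit \
    --class org.apache.spark.examples.SparkPi \
    --master yarn \
    --deploy-mode cluster \
    examples/jars/spark-examples*.jar \
    10
```
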
17 changes: 15 additions & 2 deletions docs/security.md
@@ -6,7 +6,20 @@ title: Security
* This will become a table of contents (this text will be scraped).
{:toc}

# Spark RPC
# Spark Security: Things You Need To Know

Security in Spark is OFF by default, which could leave your installation vulnerable to attack.
Spark supports multiple deployment types, and each one supports different levels of security. Not
all deployment types are secure in all environments, and none are secure by default. Be sure to
evaluate your environment and what Spark supports, and take the appropriate measures to secure
your Spark deployment.

There are many different types of security concerns, and Spark does not necessarily protect against
all of them. Listed below are some of the things Spark supports. Also check the deployment
documentation for the type of deployment you are using for deployment-specific settings. Anything
not documented here is not supported by Spark.

# Spark RPC (Communication protocol between Spark processes)

## Authentication
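
As a minimal sketch, and ahead of the details below, RPC authentication with a pre-shared secret can be enabled in `spark-defaults.conf` (the secret value is a placeholder):

```
spark.authenticate        true
spark.authenticate.secret <your-shared-secret>
```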

@@ -123,7 +136,7 @@ The following table describes the different options available for configuring th
Spark supports encrypting temporary data written to local disks. This covers shuffle files, shuffle
spills and data blocks stored on disk (for both caching and broadcast variables). It does not cover
encrypting output data generated by applications with APIs such as `saveAsHadoopFile` or
`saveAsTable`.
`saveAsTable`. It also may not cover temporary files created explicitly by the user.

The following settings cover enabling encryption for data written to disk:
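
For example, a minimal sketch in `spark-defaults.conf` (the key size and algorithm shown here are illustrative; see the table below for the authoritative option names and defaults):

```
spark.io.encryption.enabled          true
spark.io.encryption.keySizeBits      128
spark.io.encryption.keygen.algorithm HmacSHA1
```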

5 changes: 5 additions & 0 deletions docs/spark-standalone.md
@@ -8,6 +8,11 @@ title: Spark Standalone Mode

In addition to running on the Mesos or YARN cluster managers, Spark also provides a simple standalone deploy mode. You can launch a standalone cluster either manually, by starting a master and workers by hand, or by using our provided [launch scripts](#cluster-launch-scripts). It is also possible to run these daemons on a single machine for testing.
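
As a minimal sketch of the manual route, assuming the scripts shipped with a Spark 2.x distribution and `<master-host>` as a placeholder:

```bash
# On the master node:
$ ./sbin/start-master.sh

# On each worker node, pointing at the master's spark:// URL:
$ ./sbin/start-slave.sh spark://<master-host>:7077
```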

# Security

Security in Spark is OFF by default, which could leave your installation vulnerable to attack.
Please see [Spark Security](security.html) and the specific security sections in this doc before running Spark.

# Installing Spark Standalone to a Cluster

To install Spark Standalone mode, you simply place a compiled version of Spark on each node in the cluster. You can obtain pre-built versions of Spark with each release or [build it yourself](building-spark.html).