Conversation

@marmbrus
Contributor

A common problem that users encounter with Spark 1.6.0 is that writing to a partitioned parquet table OOMs. The root cause is that parquet allocates a significant amount of memory that is not accounted for by our own mechanisms. As a workaround, we can ensure that only a single file is open per task unless the user explicitly asks for more.
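The workaround described above can be sketched in miniature. The following is an illustrative Python model (not Spark's actual Scala implementation; all names here are invented for the example): keep at most `max_open_files` per-partition writers open, buffer rows for any other partition, then sort the overflow by partition key so a single sequential writer can finish the job.

```python
def write_partitioned(rows, partition_key, max_open_files=1):
    """Illustrative capped-writer strategy: at most `max_open_files`
    partitions get a dedicated open writer; rows for other partitions
    overflow, are sorted by partition key, and are written in a second
    pass where only one writer needs to be open at a time."""
    files = {}            # partition key -> buffered rows (stand-in for a file)
    open_keys = set()     # partitions that currently hold an open writer
    overflow = []         # (key, row) pairs that exceeded the cap
    for row in rows:
        key = partition_key(row)
        if key in open_keys:
            files[key].append(row)
        elif len(open_keys) < max_open_files:
            open_keys.add(key)
            files.setdefault(key, []).append(row)
        else:
            overflow.append((key, row))
    # Sorted fallback pass: rows arrive grouped by partition, so one
    # sequential writer suffices for all overflow partitions.
    for key, row in sorted(overflow, key=lambda kr: kr[0]):
        files.setdefault(key, []).append(row)
    return files
```

The point of the sort is that memory no longer grows with the number of partitions touched by a task, only with the configured cap.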

@marmbrus
Contributor Author

/cc @nongli @rxin

@rxin
Contributor

rxin commented Feb 22, 2016

Do we have any off-by-one error? (I hope we don't.)

@rxin
Contributor

rxin commented Feb 22, 2016

LGTM

@SparkQA

SparkQA commented Feb 22, 2016

Test build #51661 has finished for PR 11308 at commit b4da054.

  • This patch passes all tests.
  • This patch merges cleanly.
  • This patch adds no public classes.

@marmbrus
Contributor Author

Merging to master and 1.6

asfgit pushed a commit that referenced this pull request Feb 22, 2016
A common problem that users encounter with Spark 1.6.0 is that writing to a partitioned parquet table OOMs.  The root cause is that parquet allocates a significant amount of memory that is not accounted for by our own mechanisms.  As a workaround, we can ensure that only a single file is open per task unless the user explicitly asks for more.

Author: Michael Armbrust <[email protected]>

Closes #11308 from marmbrus/parquetWriteOOM.

(cherry picked from commit 173aa94)
Signed-off-by: Michael Armbrust <[email protected]>
@asfgit closed this in 173aa94 on Feb 22, 2016
  val PARTITION_MAX_FILES =
    intConf("spark.sql.sources.maxConcurrentWrites",
-     defaultValue = Some(5),
+     defaultValue = Some(1),
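With the default lowered to 1, a user whose tasks write many partitions and who has memory headroom can still raise the cap explicitly. A hedged example of a session-level override in Spark SQL (the key comes from the diff above; the value 5 restores the previous default):

```
-- Raise the concurrent-writer cap for this session only:
SET spark.sql.sources.maxConcurrentWrites=5;
```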
Contributor

We will have 1+1 writers actually
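One plausible reading of the "1+1 writers" remark, sketched below with invented names (this is not Spark's code): the capped hash-map writers stay accounted for while the sorted overflow pass opens one more sequential writer, so the peak is the cap plus one whenever any partition overflows.

```python
def peak_open_writers(partition_sequence, max_open_files=1):
    """Illustrative accounting: up to `max_open_files` partitions hold a
    dedicated writer; if anything overflows, the sorted fallback pass
    adds one sequential writer, giving a peak of cap + 1."""
    open_keys, overflowed = set(), set()
    for key in partition_sequence:
        if key not in open_keys and len(open_keys) < max_open_files:
            open_keys.add(key)
        elif key not in open_keys:
            overflowed.add(key)
    extra = 1 if overflowed else 0  # the single sequential fallback writer
    return len(open_keys) + extra
```

Under this model the default of 1 still yields a peak of two writers per task, which matches the comment.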
