
Conversation

@nongli
Contributor

@nongli nongli commented Feb 9, 2016

Some parts of the engine rely on UnsafeRow, which the vectorized parquet scanner does not want
to produce. This adds a conversion in PhysicalRDD. In the case where codegen is used (and the
scan is the start of the pipeline), there is no requirement to use UnsafeRow. This patch updates
PhysicalRDD to support codegen, which eliminates the need for the UnsafeRow conversion
in all cases.

These changes reduce the TPCDS-Q19 query time at the 10 GB scale factor from 9.5 seconds
to 6.5 seconds.
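The shape of that change can be sketched roughly as follows (illustrative Scala only: `InternalRow`, `UnsafeRow`, and `scanOutput` here are stand-ins, not the real PhysicalRDD or UnsafeProjection code). When the scan drives a codegen pipeline it can hand rows to the generated consumer directly; the UnsafeRow conversion only runs on the non-codegen fallback path.

```scala
// Illustrative stand-ins for the real Spark SQL row types.
type InternalRow = Seq[Any]
case class UnsafeRow(fields: Seq[Any])

// Stand-in for an UnsafeProjection: serialize a row into the unsafe format.
def toUnsafe(row: InternalRow): UnsafeRow = UnsafeRow(row)

// Codegen never requires UnsafeRow as input, so the conversion is only
// applied when codegen is disabled.
def scanOutput(rows: Iterator[InternalRow], codegenEnabled: Boolean): Iterator[Any] =
  if (codegenEnabled) rows else rows.map(toUnsafe)
```

With `codegenEnabled = true`, the vectorized scanner's rows flow through untouched, which is where the per-row conversion cost disappears.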

@nongli
Contributor Author

nongli commented Feb 9, 2016

@davies I think this is a general-purpose way we can remove 'copy' from InternalRow. We're essentially delaying the serialization until as late in the plan tree as possible.

@davies
Contributor

davies commented Feb 9, 2016

In the past, we had a planner rule that inserted a ConvertToUnsafe or ConvertToSafe for operators that expect UnsafeRow or SafeRow; it was removed to reduce complexity. Maybe we need to bring it back for the parquet reader? See #10511.
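For readers unfamiliar with the rule being referenced, it worked roughly like this (a simplified model with illustrative node classes, not the actual Spark planner code): walk the plan and wrap any child whose row format does not match what its parent requires.

```scala
// Simplified model of a ConvertToUnsafe-inserting planner rule; these node
// classes are illustrative stand-ins, not Spark's real physical operators.
sealed trait Plan { def outputsUnsafeRows: Boolean }
case class ParquetScan() extends Plan {
  val outputsUnsafeRows = false // the vectorized reader emits plain InternalRows
}
case class ConvertToUnsafe(child: Plan) extends Plan {
  val outputsUnsafeRows = true
}
case class SortMergeJoin(left: Plan, right: Plan) extends Plan {
  val outputsUnsafeRows = true // the join both requires and produces UnsafeRow
}

// Recursively wrap any join input that does not already produce UnsafeRow.
def ensureRowFormats(plan: Plan): Plan = plan match {
  case SortMergeJoin(l, r) =>
    def fix(c: Plan): Plan = {
      val rewritten = ensureRowFormats(c)
      if (rewritten.outputsUnsafeRows) rewritten else ConvertToUnsafe(rewritten)
    }
    SortMergeJoin(fix(l), fix(r))
  case other => other
}
```

The objection that follows in the thread is that a conversion node inserted this way forces a full UnsafeRow materialization, where an operator might prefer its own in-memory layout.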

@nongli
Contributor Author

nongli commented Feb 9, 2016

@davies I don't like that approach. Converting to UnsafeRow is not really the thing we want to be doing long term. In this example, for instance, I think it would be nice if the value in the hashtable were just the UnsafeRow of the non-join-key columns. That is hard to do if we inserted a plan node.

@SparkQA

SparkQA commented Feb 9, 2016

Test build #50996 has finished for PR 11141 at commit 662559c.

  • This patch fails Spark unit tests.
  • This patch merges cleanly.
  • This patch adds no public classes.

@davies
Contributor

davies commented Feb 9, 2016

@nongli Are you suggesting that we should do this kind of workaround for every operator that requires UnsafeRow (in-memory cache, SortMergeJoin, CartesianProduct, Exchange, etc.)?

@nongli
Contributor Author

nongli commented Feb 10, 2016

@davies Yes I am. I don't think we're going to add a ton more operators, and for all the ones you mentioned, we should think hard about serializing into the in-memory structure the operator wants rather than just copying. For example, CartesianProduct should probably serialize all the rows on one side contiguously; similarly, sort should serialize into Tungsten pages. In-memory cache should use a columnar format and not need to first go through UnsafeRow.

I think we can take what I've done here and make it more componentized so there is less code duplication when using it elsewhere. I'm okay with not doing this in general in the planner. The operators that need to accumulate memory should think hard about how to do it.

@SparkQA

SparkQA commented Feb 10, 2016

Test build #51010 has finished for PR 11141 at commit 7b54584.

  • This patch passes all tests.
  • This patch merges cleanly.
  • This patch adds no public classes.

@nongli
Contributor Author

nongli commented Feb 10, 2016

I updated the approach to do this a different way.

@SparkQA

SparkQA commented Feb 11, 2016

Test build #51070 has finished for PR 11141 at commit 4b6093a.

  • This patch fails Scala style tests.
  • This patch merges cleanly.
  • This patch adds no public classes.


```scala
// Support codegen so that we can avoid the UnsafeRow conversion in all cases. Codegen
// never requires UnsafeRow as input.
override def supportCodegen: Boolean = true
```

It's true by default in CodegenSupport

@davies
Contributor

davies commented Feb 11, 2016

Could you also update the title and description of the PR?

@nongli nongli changed the title [SPARK-13250][SQL] Make broadcast join handle InternalRow input. [SPARK-13250][SQL] Update PhysicallRDD to convert to UnsafeRow if using the vectorized scanner. Feb 11, 2016
@SparkQA

SparkQA commented Feb 11, 2016

Test build #51071 has finished for PR 11141 at commit a5448a3.

  • This patch fails Scala style tests.
  • This patch merges cleanly.
  • This patch adds no public classes.

@SparkQA

SparkQA commented Feb 11, 2016

Test build #51073 has finished for PR 11141 at commit b18ab65.

  • This patch fails Spark unit tests.
  • This patch merges cleanly.
  • This patch adds no public classes.

@SparkQA

SparkQA commented Feb 11, 2016

Test build #51081 has finished for PR 11141 at commit bad9f33.

  • This patch fails Spark unit tests.
  • This patch merges cleanly.
  • This patch adds no public classes.

@SparkQA

SparkQA commented Feb 12, 2016

Test build #51203 has finished for PR 11141 at commit 3dbfc52.

  • This patch fails PySpark unit tests.
  • This patch does not merge cleanly.
  • This patch adds no public classes.

Could you add SQL metric for "numOutputRows"?
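A "numOutputRows" metric would sit in the scan's output iterator, sketched here with a plain counter standing in for Spark's SQLMetrics (the names are illustrative, not the real API):

```scala
// Illustrative stand-in for a Spark SQLMetric: a mutable long counter.
final class LongMetric { var value = 0L; def add(n: Long): Unit = value += n }

// Bump the metric as each row is handed to the parent operator, without
// otherwise changing the iterator's contents.
def withRowCount[T](rows: Iterator[T], numOutputRows: LongMetric): Iterator[T] =
  rows.map { r => numOutputRows.add(1L); r }
```

Because the counting is folded into `map`, the metric only advances as rows are actually consumed downstream.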

@SparkQA

SparkQA commented Feb 23, 2016

Test build #2573 has started for PR 11141 at commit 534c472.

@SparkQA

SparkQA commented Feb 23, 2016

Test build #51788 has finished for PR 11141 at commit 534c472.

  • This patch fails Spark unit tests.
  • This patch merges cleanly.
  • This patch adds no public classes.

@SparkQA

SparkQA commented Feb 24, 2016

Test build #51823 has finished for PR 11141 at commit 19ebdbb.

  • This patch passes all tests.
  • This patch does not merge cleanly.
  • This patch adds no public classes.

@davies
Contributor

davies commented Feb 24, 2016

LGTM. @nongli, could you fix the conflicts?

@SparkQA

SparkQA commented Feb 24, 2016

Test build #51887 has finished for PR 11141 at commit 955e9e4.

  • This patch fails Spark unit tests.
  • This patch merges cleanly.
  • This patch adds no public classes.

@yhuai
Contributor

yhuai commented Feb 24, 2016

test this please

@SparkQA

SparkQA commented Feb 24, 2016

Test build #51893 has finished for PR 11141 at commit 731c30e.

  • This patch fails Spark unit tests.
  • This patch merges cleanly.
  • This patch adds no public classes.

@yhuai
Contributor

yhuai commented Feb 24, 2016

test this please

@SparkQA

SparkQA commented Feb 25, 2016

Test build #51908 has finished for PR 11141 at commit 731c30e.

  • This patch passes all tests.
  • This patch merges cleanly.
  • This patch adds no public classes.

@davies
Contributor

davies commented Feb 25, 2016

Merging this into master, thanks!

@asfgit asfgit closed this in 5a7af9e Feb 25, 2016
@nongli nongli deleted the spark-13250 branch February 26, 2016 22:43