
Conversation

@scwf
Contributor

@scwf scwf commented Dec 2, 2014

Using executeCollect to collect the result, because executeCollect is a custom implementation of collect in Spark SQL that performs better than the RDD's collect.
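As a rough illustration of the idea, here is a minimal, hypothetical sketch (the trait and class names are simplified assumptions, not the actual Spark SQL internals): a physical plan node exposes a specialized executeCollect that individual plans can override to return results to the driver more cheaply than a plain RDD collect.

```scala
import org.apache.spark.rdd.RDD
import org.apache.spark.sql.Row

// Hypothetical, simplified sketch -- not the real SparkPlan API.
trait PhysicalPlan {
  // Generic execution path: produce the result rows as an RDD.
  def execute(): RDD[Row]

  // Specialized collect: by default, copy each (possibly reused) row
  // buffer and collect; subclasses can override with something cheaper.
  def executeCollect(): Array[Row] = execute().map(_.copy()).collect()
}

// Example override: a LIMIT-like node can stop early with take(n)
// instead of collecting the whole RDD on the driver.
case class Limit(n: Int, child: PhysicalPlan) extends PhysicalPlan {
  // Simplified generic path: per-partition take; a real implementation
  // would still need a global trim to exactly n rows.
  override def execute(): RDD[Row] = child.execute().mapPartitions(_.take(n))

  override def executeCollect(): Array[Row] =
    child.execute().map(_.copy()).take(n)
}
```

Callers that want the full result on the driver would then use `plan.executeCollect()` rather than `plan.execute().collect()`, letting each node pick its own most efficient collection strategy.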

@SparkQA

SparkQA commented Dec 2, 2014

Test build #24023 has started for PR 3547 at commit 184c594.

  • This patch merges cleanly.

@SparkQA

SparkQA commented Dec 2, 2014

Test build #24023 has finished for PR 3547 at commit 184c594.

  • This patch fails Spark unit tests.
  • This patch merges cleanly.
  • This patch adds no public classes.

@AmplabJenkins

Test FAILed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/24023/
Test FAILed.

@rxin
Contributor

rxin commented Dec 2, 2014

@liancheng can you take a look at this (the test failure)? Thanks.

@scwf
Contributor Author

scwf commented Dec 2, 2014

I am looking into this. There are some cases we do not cover in HiveContext's toHiveString method.
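For context, here is a hypothetical, much-simplified sketch of the kind of case analysis such a formatter performs (the branches below are illustrative assumptions, not the actual HiveContext code). A value type missing from the match falls through to a generic toString, which is the sort of gap that can make result strings differ from what the Hive comparison tests expect:

```scala
import java.math.{BigDecimal => JBigDecimal}

// Hypothetical, simplified formatter -- not the real toHiveString.
def toHiveString(value: Any): String = value match {
  case null               => "NULL"
  case bytes: Array[Byte] => new String(bytes, "UTF-8")
  case d: JBigDecimal     => d.toPlainString                        // avoid scientific notation
  case seq: Seq[_]        => seq.map(toHiveString).mkString("[", ",", "]")
  case other              => other.toString                         // fallthrough for unhandled types
}
```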

@scwf
Contributor Author

scwf commented Dec 2, 2014

I will fix this

@SparkQA

SparkQA commented Dec 2, 2014

Test build #24043 has started for PR 3547 at commit a5ab68e.

  • This patch merges cleanly.

@SparkQA

SparkQA commented Dec 2, 2014

Test build #24043 has finished for PR 3547 at commit a5ab68e.

  • This patch passes all tests.
  • This patch merges cleanly.
  • This patch adds no public classes.

@AmplabJenkins

Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/24043/
Test PASSed.

@scwf scwf changed the title [SQL] Get result using executeCollect [SPARK-4695][SQL] Get result using executeCollect Dec 2, 2014
Contributor

I believe this case can be deleted now. Internal decimal types should never be returned to users.

@marmbrus
Contributor

marmbrus commented Dec 2, 2014

Thanks for tracking this down! I'm going to go ahead and merge now so we can include this in 1.2. Can you please open a follow up PR to remove the unnecessary condition?

asfgit pushed a commit that referenced this pull request Dec 2, 2014
Using ```executeCollect``` to collect the result, because executeCollect is a custom implementation of collect in Spark SQL that performs better than the RDD's collect.

Author: wangfei <[email protected]>

Closes #3547 from scwf/executeCollect and squashes the following commits:

a5ab68e [wangfei] Revert "adding debug info"
a60d680 [wangfei] fix test failure
0db7ce8 [wangfei] adding debug info
184c594 [wangfei] using executeCollect instead collect

(cherry picked from commit 3ae0cda)
Signed-off-by: Michael Armbrust <[email protected]>
@asfgit asfgit closed this in 3ae0cda Dec 2, 2014
@scwf
Contributor Author

scwf commented Dec 2, 2014

Sure, opened #3563 to delete it.

@scwf scwf deleted the executeCollect branch December 2, 2014 23:18
asfgit pushed a commit that referenced this pull request Dec 12, 2014
a follow up of #3547
/cc marmbrus

Author: scwf <[email protected]>

Closes #3563 from scwf/rnc and squashes the following commits:

9395661 [scwf] remove unnecessary condition