484 commits
5e4afbf
[SPARK-18617][CORE][STREAMING] Close "kryo auto pick" feature for Spa…
uncleGen Nov 30, 2016
7043c6b
[SPARK-18366][PYSPARK][ML] Add handleInvalid to Pyspark for QuantileD…
techaddict Nov 30, 2016
05ba5ee
[SPARK-18612][MLLIB] Delete broadcasted variable in LBFGS CostFun
Nov 30, 2016
6e044ab
[SPARK-17897][SQL] Fixed IsNotNull Constraint Inference Rule
gatorsmile Nov 30, 2016
3de93fb
[SPARK-18220][SQL] read Hive orc table with varchar column should not…
cloud-fan Nov 30, 2016
eae85da
[SPARK][EXAMPLE] Added missing semicolon in quick-start-guide example
Nov 30, 2016
7c0e296
[SPARK-18640] Add synchronization to TaskScheduler.runningTasksByExec…
JoshRosen Nov 30, 2016
f542df3
[SPARK-18318][ML] ML, Graph 2.1 QA: API: New Scala APIs, docs
yanboliang Nov 30, 2016
9e96ac5
[SPARK-18251][SQL] the type of Dataset can't be Option of non-flat type
cloud-fan Nov 30, 2016
c2c2fdc
[SPARK-18546][CORE] Fix merging shuffle spills when using encryption.
Nov 30, 2016
6e2e987
[SPARK-18655][SS] Ignore Structured Streaming 2.0.2 logs in history s…
zsxwing Dec 1, 2016
7d45967
[SPARK-18617][SPARK-18560][TEST] Fix flaky test: StreamingContextSuit…
zsxwing Dec 1, 2016
e8d8e35
[SPARK-18476][SPARKR][ML] SparkR Logistic Regression should should su…
wangmiao1981 Dec 1, 2016
9dc3ef6
[SPARK-18635][SQL] Partition name/values not escaped correctly in som…
ericl Dec 1, 2016
8579ab5
[SPARK-18666][WEB UI] Remove the codes checking deprecated config spa…
viirya Dec 1, 2016
cbbe217
[SPARK-18645][DEPLOY] Fix spark-daemon.sh arguments error lead to thr…
wangyum Dec 1, 2016
6916ddc
[SPARK-18674][SQL] improve the error message of using join
cloud-fan Dec 1, 2016
4c673c6
[SPARK-18274][ML][PYSPARK] Memory leak in PySpark JavaWrapper
techaddict Dec 1, 2016
4746674
[SPARK-18617][SPARK-18560][TESTS] Fix flaky test: StreamingContextSui…
zsxwing Dec 1, 2016
2d2e801
[SPARK-18639] Build only a single pip package
rxin Dec 2, 2016
2f91b01
[SPARK-18141][SQL] Fix to quote column names in the predicate clause …
sureshthalamati Dec 2, 2016
b9eb100
[SPARK-18538][SQL][BACKPORT-2.1] Fix Concurrent Table Fetching Using …
gatorsmile Dec 2, 2016
fce1be6
[SPARK-18284][SQL] Make ExpressionEncoder.serializer.nullable precise
kiszk Dec 2, 2016
0f0903d
[SPARK-18647][SQL] do not put provider in table properties for Hive s…
cloud-fan Dec 2, 2016
a7f8ebb
[SPARK-17213][SQL] Disable Parquet filter push-down for string and bi…
liancheng Dec 2, 2016
65e896a
[SPARK-18679][SQL] Fix regression in file listing performance for non…
ericl Dec 2, 2016
415730e
[SPARK-18419][SQL] `JDBCRelation.insert` should not remove Spark options
dongjoon-hyun Dec 2, 2016
e374b24
[SPARK-18659][SQL] Incorrect behaviors in overwrite table for datasou…
ericl Dec 2, 2016
32c8538
[SPARK-18674][SQL][FOLLOW-UP] improve the error message of using join
gatorsmile Dec 2, 2016
c69825a
[SPARK-18677] Fix parsing ['key'] in JSON path expressions.
rdblue Dec 2, 2016
f915f81
[SPARK-18291][SPARKR][ML] Revert "[SPARK-18291][SPARKR][ML] SparkR gl…
yanboliang Dec 2, 2016
f537632
[SPARK-18670][SS] Limit the number of StreamingQueryListener.StreamPr…
zsxwing Dec 2, 2016
839d4e9
[SPARK-18324][ML][DOC] Update ML programming and migration guide for …
yanboliang Dec 3, 2016
cf3dbec
[SPARK-18690][PYTHON][SQL] Backward compatibility of unbounded frames
zero323 Dec 3, 2016
28ea432
[SPARK-18685][TESTS] Fix URI and release resources after opening in t…
HyukjinKwon Dec 3, 2016
b098b48
[SPARK-18582][SQL] Whitelist LogicalPlan operators allowed in correla…
nsyca Dec 3, 2016
28f698b
[SPARK-18081][ML][DOCS] Add user guide for Locality Sensitive Hashing…
Yunni Dec 4, 2016
8145c82
[SPARK-18091][SQL] Deep if expressions cause Generated SpecificUnsafe…
Dec 4, 2016
41d698e
[SPARK-18661][SQL] Creating a partitioned datasource table should not…
ericl Dec 4, 2016
c13c293
[SPARK-18643][SPARKR] SparkR hangs at session start when installed as…
felixcheung Dec 5, 2016
88e07ef
[SPARK-18625][ML] OneVsRestModel should support setFeaturesCol and se…
zhengruifeng Dec 5, 2016
1821cbe
[SPARK-18279][DOC][ML][SPARKR] Add R examples to ML programming guide.
yanboliang Dec 5, 2016
afd2321
[MINOR][DOC] Use SparkR `TRUE` value and add default values for `Stru…
dongjoon-hyun Dec 5, 2016
30c0743
Revert "[SPARK-18284][SQL] Make ExpressionEncoder.serializer.nullable…
rxin Dec 5, 2016
e23c8cf
[SPARK-18711][SQL] should disable subexpression elimination for Lambd…
cloud-fan Dec 5, 2016
39759ff
[DOCS][MINOR] Update location of Spark YARN shuffle jar
nchammas Dec 5, 2016
c6a4e3d
[SPARK-18694][SS] Add StreamingQuery.explain and exception to Python …
zsxwing Dec 5, 2016
fecd23d
[SPARK-18634][PYSPARK][SQL] Corruption and Correctness issues with ex…
viirya Dec 6, 2016
6c4c336
[SPARK-18729][SS] Move DataFrame.collect out of synchronized block in…
zsxwing Dec 6, 2016
1946854
[SPARK-18657][SPARK-18668] Make StreamingQuery.id persists across res…
tdas Dec 6, 2016
d458816
[SPARK-18722][SS] Move no data rate limit from StreamExecution to Pro…
zsxwing Dec 6, 2016
8ca6a82
[SPARK-18572][SQL] Add a method `listPartitionNames` to `ExternalCata…
Dec 6, 2016
655297b
[SPARK-18721][SS] Fix ForeachSink with watermark + append
zsxwing Dec 6, 2016
e362d99
[SPARK-18634][SQL][TRIVIAL] Touch-up Generate
hvanhovell Dec 6, 2016
ace4079
[SPARK-18714][SQL] Add a simple time function to SparkSession
rxin Dec 6, 2016
d20e0d6
[SPARK-18671][SS][TEST] Added tests to ensure stability of that all S…
tdas Dec 6, 2016
65f5331
[SPARK-18652][PYTHON] Include the example data and third-party licens…
lins05 Dec 6, 2016
9b5bc2a
[SPARK-18734][SS] Represent timestamp in StreamingQueryProgress as fo…
tdas Dec 7, 2016
3750c6e
[SPARK-18671][SS][TEST-MAVEN] Follow up PR to fix test for Maven
tdas Dec 7, 2016
340e9ae
[SPARK-18686][SPARKR][ML] Several cleanup and improvements for spark.…
yanboliang Dec 7, 2016
99c293e
[SPARK-18701][ML] Fix Poisson GLM failure due to wrong initialization
actuaryzhang Dec 7, 2016
51754d6
[SPARK-18678][ML] Skewed reservoir sampling in SamplingUtils
srowen Dec 7, 2016
4432a2a
[SPARK-18208][SHUFFLE] Executor OOM due to a growing LongArray in Byt…
Dec 7, 2016
5dbcd4f
[SPARK-17760][SQL] AnalysisException with dataframe pivot when groupB…
aray Dec 7, 2016
acb6ac5
[SPARK-18764][CORE] Add a warning log when skipping a corrupted file
zsxwing Dec 7, 2016
76e1f16
[SPARK-18762][WEBUI] Web UI should be http:4040 instead of https:4040
sarutak Dec 7, 2016
e9b3afa
[SPARK-18588][TESTS] Fix flaky test: KafkaSourceStressForDontFailOnDa…
zsxwing Dec 7, 2016
1c64197
[SPARK-18754][SS] Rename recentProgresses to recentProgress
marmbrus Dec 7, 2016
839c2eb
[SPARK-18633][ML][EXAMPLE] Add multiclass logistic regression summary…
wangmiao1981 Dec 8, 2016
617ce3b
[SPARK-18758][SS] StreamingQueryListener events from a StreamingQuery…
tdas Dec 8, 2016
ab865cf
[SPARK-18705][ML][DOC] Update user guide to reflect one pass solver f…
sethah Dec 8, 2016
1c3f1da
[SPARK-18326][SPARKR][ML] Review SparkR ML wrappers API for 2.1
yanboliang Dec 8, 2016
0807174
Preparing Spark release v2.1.0-rc2
pwendell Dec 8, 2016
48aa677
Preparing development version 2.1.1-SNAPSHOT
pwendell Dec 8, 2016
9095c15
[SPARK-18325][SPARKR][ML] SparkR ML wrappers example code and user guide
yanboliang Dec 8, 2016
726217e
[SPARK-18667][PYSPARK][SQL] Change the way to group row in BatchEvalP…
viirya Dec 8, 2016
e0173f1
[SPARK-16589] [PYTHON] Chained cartesian produces incorrect number of…
aray Dec 8, 2016
d69df90
[SPARK-18590][SPARKR] build R source package when making distribution
felixcheung Dec 8, 2016
a035644
[SPARK-18751][CORE] Fix deadlock when SparkContext.stop is called in …
zsxwing Dec 8, 2016
9483242
[SPARK-18760][SQL] Consistent format specification for FileFormats
rxin Dec 8, 2016
e43209f
[SPARK-18590][SPARKR] Change the R source build to Hadoop 2.6
shivaram Dec 8, 2016
fcd22e5
[SPARK-18776][SS] Make Offset for FileStreamSource corrected formatte…
tdas Dec 9, 2016
1cafc76
[SPARK-18774][CORE][SQL] Ignore non-existing files when ignoreCorrupt…
zsxwing Dec 9, 2016
ef5646b
[SPARKR][PYSPARK] Fix R source package name to match Spark version. R…
shivaram Dec 9, 2016
4ceed95
[SPARK-18349][SPARKR] Update R API documentation on ml model summary
wangmiao1981 Dec 9, 2016
e8f351f
Copy the SparkR source package with LFTP
shivaram Dec 9, 2016
2c88e1d
Copy pyspark and SparkR packages to latest release dir too
felixcheung Dec 9, 2016
72bf519
[SPARK-18637][SQL] Stateful UDF should be considered as nondeterministic
Dec 9, 2016
b226f10
[MINOR][CORE][SQL][DOCS] Typo fixes
jaceklaskowski Dec 9, 2016
0c6415a
[SPARK-17822][R] Make JVMObjectTracker a member variable of RBackend
mengxr Dec 9, 2016
eb2d9bf
[MINOR][SPARKR] Fix SparkR regex in copy command
shivaram Dec 9, 2016
562507e
[SPARK-18745][SQL] Fix signed integer overflow due to toInt cast
kiszk Dec 9, 2016
e45345d
[SPARK-18812][MLLIB] explain "Spark ML"
mengxr Dec 10, 2016
8bf56cc
[SPARK-18807][SPARKR] Should suppress output print for calls to JVM m…
felixcheung Dec 10, 2016
b020ce4
[SPARK-18811] StreamSource resolution should happen in stream executi…
brkyvz Dec 10, 2016
2b36f49
[SPARK-17460][SQL] Make sure sizeInBytes in Statistics will not overflow
huaxingao Dec 10, 2016
83822df
[MINOR][DOCS] Remove Apache Spark Wiki address
dongjoon-hyun Dec 10, 2016
5151daf
[SPARK-3359][DOCS] Fix greater-than symbols in Javadoc to allow build…
michalsenkyr Dec 10, 2016
de21ca4
[SPARK-18815][SQL] Fix NPE when collecting column stats for string/bi…
Dec 11, 2016
d4c03f8
[SQL][MINOR] simplify a test to fix the maven tests
cloud-fan Dec 11, 2016
d5f1416
[SPARK-18628][ML] Update Scala param and Python param to have quotes
krishnakalyan3 Dec 11, 2016
63693c1
[SPARK-18790][SS] Keep a general offset history of stream batches
Dec 12, 2016
3501160
[DOCS][MINOR] Clarify Where AccumulatorV2s are Displayed
Dec 12, 2016
523071f
[SPARK-18681][SQL] Fix filtering to compatible with partition keys of…
wangyum Dec 12, 2016
1aeb7f4
[SPARK-18810][SPARKR] SparkR install.spark does not work for RCs, sna…
felixcheung Dec 12, 2016
9dc5fa5
[SPARK-18796][SS] StreamingQueryManager should not block when startin…
zsxwing Dec 13, 2016
9f0e3be
[SPARK-18797][SPARKR] Update spark.logit in sparkr-vignettes
wangmiao1981 Dec 13, 2016
207107b
[SPARK-18835][SQL] Don't expose Guava types in the JavaTypeInference …
Dec 13, 2016
d5c4a5d
[SPARK-18840][YARN] Avoid throw exception when getting token renewal …
jerryshao Dec 13, 2016
292a37f
[SPARK-18816][WEB UI] Executors Logs column only ran visibility check…
ajbozarth Dec 13, 2016
f672bfd
[SPARK-18843][CORE] Fix timeout in awaitResultInForkJoinSafely (branc…
zsxwing Dec 13, 2016
25b9758
[SPARK-18834][SS] Expose event time stats through StreamingQueryProgress
tdas Dec 13, 2016
5693ac8
[SPARK-18793][SPARK-18794][R] add spark.randomForest/spark.gbt to vig…
mengxr Dec 14, 2016
019d1fa
[SPARK-18588][TESTS] Ignore KafkaSourceStressForDontFailOnDataLossSuite
zsxwing Dec 14, 2016
8ef0059
[MINOR][SPARKR] fix kstest example error and add unit test
wangmiao1981 Dec 14, 2016
f999312
[SPARK-18814][SQL] CheckAnalysis rejects TPCDS query 32
nsyca Dec 14, 2016
16d4bd4
[SPARK-18730] Post Jenkins test report page instead of the full conso…
liancheng Dec 14, 2016
af12a21
[SPARK-18753][SQL] Keep pushed-down null literal as a filter in Spark…
HyukjinKwon Dec 14, 2016
e8866f9
[SPARK-18853][SQL] Project (UnaryNode) is way too aggressive in estim…
rxin Dec 14, 2016
c4de90f
[SPARK-18852][SS] StreamingQuery.lastProgress should be null when rec…
zsxwing Dec 14, 2016
d0d9c57
[SPARK-18795][ML][SPARKR][DOC] Added KSTest section to SparkR vignettes
jkbradley Dec 14, 2016
280c35a
[SPARK-18854][SQL] numberedTreeString and apply(i) inconsistent for s…
rxin Dec 15, 2016
0d94201
[SPARK-18865][SPARKR] SparkR vignettes MLP and LDA updates
wangmiao1981 Dec 15, 2016
cb2c842
[SPARK-18856][SQL] non-empty partitioned table should not report zero…
cloud-fan Dec 15, 2016
b14fc39
[SPARK-18869][SQL] Add TreeNode.p that returns BaseType
rxin Dec 15, 2016
d399a29
[SPARK-18875][SPARKR][DOCS] Fix R API doc generation by adding `DESCR…
dongjoon-hyun Dec 15, 2016
2a8de2e
[SPARK-18849][ML][SPARKR][DOC] vignettes final check update
felixcheung Dec 15, 2016
e430915
[SPARK-18870] Disallowed Distinct Aggregations on Streaming Datasets
tdas Dec 15, 2016
900ce55
[SPARK-18826][SS] Add 'latestFirst' option to FileStreamSource
zsxwing Dec 15, 2016
b6a81f4
[SPARK-18888] partitionBy in DataStreamWriter in Python throws _to_se…
brkyvz Dec 15, 2016
ef2ccf9
Preparing Spark release v2.1.0-rc3
pwendell Dec 15, 2016
a7364a8
Preparing development version 2.1.1-SNAPSHOT
pwendell Dec 15, 2016
08e4272
[SPARK-18868][FLAKY-TEST] Deflake StreamingQueryListenerSuite: single…
brkyvz Dec 15, 2016
ae853e8
[MINOR] Only rename SparkR tar.gz if names mismatch
shivaram Dec 16, 2016
ec31726
Preparing Spark release v2.1.0-rc4
pwendell Dec 16, 2016
62a6577
Preparing development version 2.1.1-SNAPSHOT
pwendell Dec 16, 2016
b23220f
[MINOR] Handle fact that mv is different on linux, mac
shivaram Dec 16, 2016
cd0a083
Preparing Spark release v2.1.0-rc5
pwendell Dec 16, 2016
483624c
Preparing development version 2.1.1-SNAPSHOT
pwendell Dec 16, 2016
d8548c8
[SPARK-18892][SQL] Alias percentile_approx approx_percentile
rxin Dec 16, 2016
a73201d
[SPARK-18850][SS] Make StreamExecution and progress classes serializable
zsxwing Dec 16, 2016
d8ef0be
[SPARK-18108][SQL] Fix a schema inconsistent bug that makes a parquet…
maropu Dec 16, 2016
df589be
[SPARK-18897][SPARKR] Fix SparkR SQL Test to drop test table
dongjoon-hyun Dec 16, 2016
d2a131a
[SPARK-18904][SS][TESTS] Merge two FileStreamSourceSuite files
zsxwing Dec 16, 2016
001f49b
[SPARK-18849][ML][SPARKR][DOC] vignettes final check reorg
felixcheung Dec 17, 2016
4b8a643
[SPARK-18918][DOC] Missing </td> in Configuration page
gatorsmile Dec 18, 2016
a5da8db
[SPARK-18827][CORE] Fix cannot read broadcast on disk
wangyum Dec 18, 2016
3080f99
[SPARK-18703][SPARK-18675][SQL][BACKPORT-2.1] CTAS for hive serde tab…
gatorsmile Dec 19, 2016
fc1b256
[SPARK-18700][SQL] Add StripedLock for each table's relation in cache
xuanyuanking Dec 19, 2016
c1a26b4
[SPARK-18921][SQL] check database existence with Hive.databaseExists …
cloud-fan Dec 19, 2016
f07e989
[SPARK-18928] Check TaskContext.isInterrupted() in FileScanRDD, JDBCR…
JoshRosen Dec 20, 2016
2971ae5
[SPARK-18761][CORE] Introduce "task reaper" to oversee task killing i…
JoshRosen Dec 20, 2016
cd297c3
[SPARK-18281] [SQL] [PYSPARK] Remove timeout for reading data through…
viirya Dec 20, 2016
3857d5b
[SPARK-18927][SS] MemorySink for StructuredStreaming can't recover fr…
brkyvz Dec 20, 2016
063a98e
[SPARK-18900][FLAKY-TEST] StateStoreSuite.maintenance
brkyvz Dec 21, 2016
bc54a14
[SPARK-18947][SQL] SQLContext.tableNames should not call Catalog.list…
cloud-fan Dec 21, 2016
3c8861d
[SPARK-18894][SS] Fix event time watermark delay threshold specified …
tdas Dec 21, 2016
162bdb9
[SPARK-18031][TESTS] Fix flaky test ExecutorAllocationManagerSuite.ba…
zsxwing Dec 21, 2016
3184834
[SPARK-18954][TESTS] Fix flaky test: o.a.s.streaming.BasicOperationsS…
zsxwing Dec 21, 2016
0e51bb0
[SPARK-18949][SQL][BACKPORT-2.1] Add recoverPartitions API to Catalog
gatorsmile Dec 21, 2016
17ef57f
[SPARK-18588][SS][KAFKA] Create a new KafkaConsumer when error happen…
zsxwing Dec 21, 2016
60e02a1
[SPARK-18234][SS] Made update mode public
tdas Dec 22, 2016
021952d
[SPARK-18528][SQL] Fix a bug to initialise an iterator of aggregation…
maropu Dec 22, 2016
9a3c5bd
[FLAKY-TEST] InputStreamsSuite.socket input stream
brkyvz Dec 22, 2016
07e2a17
[SPARK-18908][SS] Creating StreamingQueryException should check if lo…
zsxwing Dec 22, 2016
def3690
[SQL] Minor readability improvement for partition handling code
rxin Dec 22, 2016
ec0d6e2
[DOC] bucketing is applicable to all file-based data sources
rxin Dec 22, 2016
f6853b3
[SPARK-18973][SQL] Remove SortPartitions and RedistributeData
rxin Dec 22, 2016
132f229
[SPARK-17807][CORE] split test-tags into test-JAR
ryan-williams Dec 22, 2016
5e80103
[SPARK-18985][SS] Add missing @InterfaceStability.Evolving for Struct…
zsxwing Dec 23, 2016
1857acc
[SPARK-18972][CORE] Fix the netty thread names for RPC
zsxwing Dec 23, 2016
5bafdc4
[SPARK-18991][CORE] Change ContextCleaner.referenceBuffer to use Conc…
zsxwing Dec 23, 2016
ca25b1e
[SPARK-18837][WEBUI] Very long stage descriptions do not wrap in the UI
sarutak Dec 24, 2016
ac7107f
[MINOR][DOC] Fix doc of ForeachWriter to use writeStream
carsonwang Dec 28, 2016
7197a7b
[SPARK-18993][BUILD] Unable to build/compile Spark in IntelliJ due to…
srowen Dec 28, 2016
80d583b
[SPARK-18669][SS][DOCS] Update Apache docs for Structured Streaming r…
tdas Dec 28, 2016
47ab4af
[SPARK-19003][DOCS] Add Java example in Spark Streaming Guide, sectio…
adesharatushar Dec 29, 2016
20ae117
[SPARK-19016][SQL][DOC] Document scalable partition handling
liancheng Dec 30, 2016
3483def
[SPARK-19050][SS][TESTS] Fix EventTimeWatermarkSuite 'delay in months…
zsxwing Jan 1, 2017
63857c8
[MINOR][DOC] Minor doc change for YARN credential providers
viirya Jan 2, 2017
517f398
[SPARK-18379][SQL] Make the parallelism of parallelPartitionDiscovery…
Nov 15, 2016
d489e1d
[SPARK-19041][SS] Fix code snippet compilation issues in Structured S…
lw-lin Jan 2, 2017
94272a9
[SPARK-19028][SQL] Fixed non-thread-safe functions used in SessionCat…
gatorsmile Dec 31, 2016
7762550
[SPARK-19048][SQL] Delete Partition Location when Dropping Managed Pa…
gatorsmile Jan 3, 2017
1ecf1a9
[SPARK-18877][SQL][BACKPORT-2.1] CSVInferSchema.inferField` on Decima…
dongjoon-hyun Jan 4, 2017
4ca1788
[SPARK-19033][CORE] Add admin acls for history server
jerryshao Jan 6, 2017
ce9bfe6
[SPARK-19083] sbin/start-history-server.sh script use of $@ without q…
Jan 6, 2017
ee735a8
[SPARK-19074][SS][DOCS] Updated Structured Streaming Programming Guid…
tdas Jan 6, 2017
86b6621
[SPARK-19110][ML][MLLIB] DistributedLDAModel returns different logPri…
wangmiao1981 Jan 7, 2017
c95b585
[SPARK-19106][DOCS] Styling for the configuration docs is broken
srowen Jan 7, 2017
ecc1622
[SPARK-18941][SQL][DOC] Add a new behavior document on `CREATE/DROP T…
dongjoon-hyun Jan 8, 2017
8690d4b
[SPARK-19127][DOCS] Update Rank Function Documentation
bllchmbrs Jan 9, 2017
8779e6a
[SPARK-19126][DOCS] Update Join Documentation Across Languages
bllchmbrs Jan 9, 2017
80a3e13
[SPARK-18903][SPARKR][BACKPORT-2.1] Add API to get SparkUI URL
felixcheung Jan 9, 2017
3b6ac32
[SPARK-18952][BACKPORT] Regex strings not properly escaped in codegen…
brkyvz Jan 9, 2017
65c866e
[SPARK-16845][SQL] `GeneratedClass$SpecificOrdering` grows beyond 64 KB
lw-lin Jan 10, 2017
69d1c4c
[SPARK-19137][SQL] Fix `withSQLConf` to reset `OptionalConfigEntry` c…
dongjoon-hyun Jan 10, 2017
e0af4b7
[SPARK-19113][SS][TESTS] Set UncaughtExceptionHandler in onQueryStart…
zsxwing Jan 10, 2017
81c9430
[SPARK-18997][CORE] Recommended upgrade libthrift to 0.9.3
srowen Jan 10, 2017
230607d
[SPARK-19140][SS] Allow update mode for non-aggregation streaming que…
zsxwing Jan 11, 2017
1022049
[SPARK-19133][SPARKR][ML][BACKPORT-2.1] fix glm for Gamma, clarify gl…
felixcheung Jan 11, 2017
82fcc13
[SPARK-19130][SPARKR] Support setting literal value as column implicitly
felixcheung Jan 11, 2017
0b07634
[SPARK-19158][SPARKR][EXAMPLES] Fix ml.R example fails due to lack of…
yanboliang Jan 12, 2017
9b9867e
[SPARK-18857][SQL] Don't use `Iterator.duplicate` for `incrementalCol…
dongjoon-hyun Jan 10, 2017
616a78a
[SPARK-18969][SQL] Support grouping by nondeterministic expressions
cloud-fan Jan 12, 2017
042e32d
[SPARK-19055][SQL][PYSPARK] Fix SparkSession initialization when Spar…
viirya Jan 12, 2017
23944d0
[SPARK-17237][SQL] Remove backticks in a pivot result schema
maropu Jan 12, 2017
0668e06
Fix missing close-parens for In filter's toString
ash211 Jan 13, 2017
b2c9a2c
[SPARK-18687][PYSPARK][SQL] Backward compatibility - creating a Dataf…
vijoshi Jan 13, 2017
2c2ca89
[SPARK-19178][SQL] convert string of large numbers to int should retu…
cloud-fan Jan 13, 2017
ee3642f
[SPARK-18335][SPARKR] createDataFrame to support numPartitions parameter
felixcheung Jan 13, 2017
5e9be1e
[SPARK-19180] [SQL] the offset of short should be 2 in OffHeapColumn
Jan 13, 2017
db37049
[SPARK-19120] Refresh Metadata Cache After Loading Hive Tables
gatorsmile Jan 15, 2017
bf2f233
[SPARK-19092][SQL][BACKPORT-2.1] Save() API of DataFrameWriter should…
gatorsmile Jan 16, 2017
4f3ce06
[SPARK-19082][SQL] Make ignoreCorruptFiles work for Parquet
viirya Jan 16, 2017
9758905
[SPARK-19232][SPARKR] Update Spark distribution download cache locati…
felixcheung Jan 16, 2017
f4317be
[SPARK-18905][STREAMING] Fix the issue of removing a failed jobset fr…
CodingCat Jan 17, 2017
2ff3669
[SPARK-19019] [PYTHON] Fix hijacked `collections.namedtuple` and port…
HyukjinKwon Jan 17, 2017
13986a7
[SPARK-19065][SQL] Don't inherit expression id in dropDuplicates
zsxwing Jan 17, 2017
3ec3e3f
[SPARK-19129][SQL] SessionCatalog: Disallow empty part col values in …
gatorsmile Jan 17, 2017
29b954b
[SPARK-19066][SPARKR][BACKPORT-2.1] LDA doesn't set optimizer correctly
wangmiao1981 Jan 18, 2017
77202a6
[SPARK-19231][SPARKR] add error handling for download and untar for S…
felixcheung Jan 18, 2017
047506b
[SPARK-19113][SS][TESTS] Ignore StreamingQueryException thrown from a…
zsxwing Jan 18, 2017
4cff0b5
[SPARK-19168][STRUCTURED STREAMING] StateStore should be aborted upon…
lw-lin Jan 18, 2017
7bc3e9b
[SPARK-18899][SPARK-18912][SPARK-18913][SQL] refactor the error check…
cloud-fan Dec 20, 2016
482d361
[SPARK-19314][SS][CATALYST] Do not allow sort before aggregation in S…
tdas Jan 20, 2017
4d286c9
[SPARK-18589][SQL] Fix Python UDF accessing attributes from both side…
Jan 21, 2017
6f0ad57
[SPARK-19267][SS] Fix a race condition when stopping StateStore
zsxwing Jan 21, 2017
8daf10e
[SPARK-19155][ML] MLlib GeneralizedLinearRegression family and link s…
yanboliang Jan 22, 2017
1e07a71
[SPARK-19155][ML] Make family case insensitive in GLM
actuaryzhang Jan 23, 2017
ed5d1e7
[SPARK-19306][CORE] Fix inconsistent state in DiskBlockObject when ex…
jerryshao Jan 23, 2017
4a2be09
[SPARK-9435][SQL] Reuse function in Java UDF to correctly support exp…
HyukjinKwon Jan 24, 2017
570e5e1
[SPARK-19268][SS] Disallow adaptive query execution for streaming que…
zsxwing Jan 24, 2017
9c04e42
[SPARK-18823][SPARKR] add support for assigning to column
felixcheung Jan 24, 2017
d128b6a
[SPARK-16473][MLLIB] Fix BisectingKMeans Algorithm failing in edge case
imatiach-msft Jan 23, 2017
b94fb28
[SPARK-19017][SQL] NOT IN subquery with more than one column may retu…
nsyca Jan 24, 2017
c133787
[SPARK-19330][DSTREAMS] Also show tooltip for successful batches
lw-lin Jan 25, 2017
e2f7739
[SPARK-16046][DOCS] Aggregations in the Spark SQL programming guide
Jan 25, 2017
f391ad2
[SPARK-18750][YARN] Avoid using "mapValues" when allocating containers.
Jan 25, 2017
af95455
[SPARK-18863][SQL] Output non-aggregate expressions without GROUP BY …
nsyca Jan 25, 2017
c9f075a
[SPARK-19307][PYSPARK] Make sure user conf is propagated to SparkCont…
Jan 25, 2017
97d3353
[SPARK-18750][YARN] Follow up: move test to correct directory in 2.1 …
Jan 25, 2017
a5c10ff
[SPARK-19064][PYSPARK] Fix pip installing of sub components
holdenk Jan 25, 2017
0d7e385
[SPARK-14804][SPARK][GRAPHX] Fix checkpointing of VertexRDD/EdgeRDD
tdas Jan 26, 2017
b12a76a
[SPARK-19338][SQL] Add UDF names in explain
maropu Jan 26, 2017
59502bb
[SPARK-19220][UI] Make redirection to HTTPS apply to all URIs. (branc…
Jan 27, 2017
ba2a5ad
[SPARK-18788][SPARKR] Add API for getNumPartitions
felixcheung Jan 27, 2017
4002ee9
[SPARK-19333][SPARKR] Add Apache License headers to R files
felixcheung Jan 27, 2017
9a49f9a
[SPARK-19324][SPARKR] Spark VJM stdout output is getting dropped in S…
felixcheung Jan 27, 2017
f421a1c
doc fix
felixcheung Jan 31, 2017
2 changes: 1 addition & 1 deletion .github/PULL_REQUEST_TEMPLATE
@@ -7,4 +7,4 @@
(Please explain how this patch was tested. E.g. unit tests, integration tests, manual tests)
(If this patch involves UI changes, please attach a screenshot; otherwise, remove this)

Please review https://cwiki.apache.org/confluence/display/SPARK/Contributing+to+Spark before opening a pull request.
Please review http://spark.apache.org/contributing.html before opening a pull request.
2 changes: 2 additions & 0 deletions .gitignore
@@ -57,6 +57,8 @@ project/plugins/project/build.properties
project/plugins/src_managed/
project/plugins/target/
python/lib/pyspark.zip
python/deps
python/pyspark/python
reports/
scalastyle-on-compile.generated.xml
scalastyle-output.xml
4 changes: 2 additions & 2 deletions CONTRIBUTING.md
@@ -1,12 +1,12 @@
## Contributing to Spark

*Before opening a pull request*, review the
[Contributing to Spark wiki](https://cwiki.apache.org/confluence/display/SPARK/Contributing+to+Spark).
[Contributing to Spark guide](http://spark.apache.org/contributing.html).
It lists steps that are required before creating a PR. In particular, consider:

- Is the change important and ready enough to ask the community to spend time reviewing?
- Have you searched for existing, related JIRAs and pull requests?
- Is this a new feature that can stand alone as a [third party project](https://cwiki.apache.org/confluence/display/SPARK/Third+Party+Projects) ?
- Is this a new feature that can stand alone as a [third party project](http://spark.apache.org/third-party-projects.html) ?
- Is the change being proposed clearly explained and motivated?

When you contribute code, you affirm that the contribution is your original work and that you
3 changes: 0 additions & 3 deletions NOTICE
@@ -421,9 +421,6 @@ Copyright (c) 2011, Terrence Parr.
This product includes/uses ASM (http://asm.ow2.org/),
Copyright (c) 2000-2007 INRIA, France Telecom.

This product includes/uses org.json (http://www.json.org/java/index.html),
Copyright (c) 2002 JSON.org

This product includes/uses JLine (http://jline.sourceforge.net/),
Copyright (c) 2002-2006, Marc Prud'hommeaux <[email protected]>.

91 changes: 91 additions & 0 deletions R/CRAN_RELEASE.md
@@ -0,0 +1,91 @@
# SparkR CRAN Release

To release SparkR as a package to CRAN, we use the `devtools` package. Please work with the `[email protected]` community and the R package maintainer on this.

### Release

First, check that the `Version:` field in the `pkg/DESCRIPTION` file is updated. Also, check for stale files not under source control.
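
For instance, both checks can be done from the `SPARK_HOME/R` directory (a sketch; the exact commands are one reasonable choice, not part of the documented process):

```sh
# Confirm the declared package version
grep -E '^Version:' pkg/DESCRIPTION
# Dry run: list untracked (potentially stale) files without deleting anything
git clean -n -d pkg
```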

Note that while `run-tests.sh` runs `check-cran.sh` (which runs `R CMD check`), it does so with `--no-manual --no-vignettes`, which skips the manual and vignette/PDF checks. It is therefore preferable to run `R CMD check` on a manually built source package before uploading a release. Also note that for the CRAN vignette PDF checks to succeed, the `qpdf` tool must be installed (to install it, e.g. `yum -q -y install qpdf`).
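
Concretely, a pre-release check might look like this (a sketch; the tarball name assumes the 2.1.0 version used elsewhere in this guide):

```sh
# From SPARK_HOME/R: build the source package, then run the full CRAN check,
# including the manual and vignette checks that run-tests.sh skips
R CMD build pkg
R CMD check --as-cran SparkR_2.1.0.tar.gz
```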

To upload a release, we need to update `cran-comments.md`. This should generally contain the results from running the `check-cran.sh` script, along with comments on the status of any `WARNING` (there should be none) or `NOTE` entries. As part of `check-cran.sh` and the release process, the vignettes are built - make sure `SPARK_HOME` is set and the Spark jars are accessible.

Once everything is in place, run the following in R from the `SPARK_HOME/R` directory:

```R
paths <- .libPaths(); .libPaths(c("lib", paths)); Sys.setenv(SPARK_HOME=tools::file_path_as_absolute("..")); devtools::release(); .libPaths(paths)
```

For more information please refer to http://r-pkgs.had.co.nz/release.html#release-check

### Testing: build package manually

To build the package manually, for example to inspect the contents of the resulting `.tar.gz` file, we also use the `devtools` package.

The source package is what gets released to CRAN; CRAN then builds platform-specific binary packages from it.

#### Build source package

To build the source package locally without releasing to CRAN, run the following in R from the `SPARK_HOME/R` directory:

```R
paths <- .libPaths(); .libPaths(c("lib", paths)); Sys.setenv(SPARK_HOME=tools::file_path_as_absolute("..")); devtools::build("pkg"); .libPaths(paths)
```

(http://r-pkgs.had.co.nz/vignettes.html#vignette-workflow-2)

Similarly, the source package is also created by `check-cran.sh` with `R CMD build pkg`.

For example, this should be the content of the source package:

```sh
DESCRIPTION R inst tests
NAMESPACE build man vignettes

inst/doc/
sparkr-vignettes.html
sparkr-vignettes.Rmd
sparkr-vignettes.Rman

build/
vignette.rds

man/
*.Rd files...

vignettes/
sparkr-vignettes.Rmd
```

#### Test source package

To install, run this:

```sh
R CMD INSTALL SparkR_2.1.0.tar.gz
```

With "2.1.0" replaced with the version of SparkR.

This command installs SparkR to the default libPaths. Once that is done, you should be able to start R and run:

```R
library(SparkR)
vignette("sparkr-vignettes", package="SparkR")
```
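
As an extra sanity check (a sketch, not part of the documented process), the installed version can be confirmed from the shell:

```sh
Rscript -e 'library(SparkR); packageVersion("SparkR")'
```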

#### Build binary package

To build the binary package locally, run the following in R from the `SPARK_HOME/R` directory:

```R
paths <- .libPaths(); .libPaths(c("lib", paths)); Sys.setenv(SPARK_HOME=tools::file_path_as_absolute("..")); devtools::build("pkg", binary = TRUE); .libPaths(paths)
```

For example, this should be the content of the binary package:

```sh
DESCRIPTION Meta R html tests
INDEX NAMESPACE help profile worker
```
10 changes: 5 additions & 5 deletions R/README.md
@@ -6,7 +6,7 @@ SparkR is an R package that provides a light-weight frontend to use Spark from R

Libraries of sparkR need to be created in `$SPARK_HOME/R/lib`. This can be done by running the script `$SPARK_HOME/R/install-dev.sh`.
By default the above script uses the system-wide installation of R. However, this can be changed to any user-installed location of R by setting the environment variable `R_HOME` to the full path of the base directory where R is installed, before running the install-dev.sh script.
Example:
```bash
# where /home/username/R is where R is installed and /home/username/R/bin contains the files R and RScript
export R_HOME=/home/username/R
@@ -46,19 +46,19 @@ Sys.setenv(SPARK_HOME="/Users/username/spark")
# This line loads SparkR from the installed directory
.libPaths(c(file.path(Sys.getenv("SPARK_HOME"), "R", "lib"), .libPaths()))
library(SparkR)
sc <- sparkR.init(master="local")
sparkR.session()
```

#### Making changes to SparkR

The [instructions](https://cwiki.apache.org/confluence/display/SPARK/Contributing+to+Spark) for making contributions to Spark also apply to SparkR.
The [instructions](http://spark.apache.org/contributing.html) for making contributions to Spark also apply to SparkR.
If you only make R file changes (i.e. no Scala changes) then you can just re-install the R package using `R/install-dev.sh` and test your changes.
Once you have made your changes, please include unit tests for them and run existing unit tests using the `R/run-tests.sh` script as described below.

#### Generating documentation

The SparkR documentation (Rd files and HTML files) is not part of the source repository. To generate it you can run the script `R/create-docs.sh`. This script uses `devtools` and `knitr` to generate the docs, and these packages need to be installed on the machine before using the script. Also, you may need to install these [prerequisites](https://github.com/apache/spark/tree/master/docs#prerequisites). See also `R/DOCUMENTATION.md`.

### Examples, Unit tests

SparkR comes with several sample programs in the `examples/src/main/r` directory.
50 changes: 44 additions & 6 deletions R/check-cran.sh
@@ -34,13 +34,30 @@ if [ ! -z "$R_HOME" ]
fi
R_SCRIPT_PATH="$(dirname $(which R))"
fi
echo "USING R_HOME = $R_HOME"
echo "Using R_SCRIPT_PATH = ${R_SCRIPT_PATH}"

# Build the latest docs
# Install the package (this is required for code in vignettes to run when building it later)
# Build the latest docs, but not vignettes, which is built with the package next
$FWDIR/create-docs.sh

# Build a zip file containing the source package
"$R_SCRIPT_PATH/"R CMD build $FWDIR/pkg
# Build source package with vignettes
SPARK_HOME="$(cd "${FWDIR}"/..; pwd)"
. "${SPARK_HOME}"/bin/load-spark-env.sh
if [ -f "${SPARK_HOME}/RELEASE" ]; then
SPARK_JARS_DIR="${SPARK_HOME}/jars"
else
SPARK_JARS_DIR="${SPARK_HOME}/assembly/target/scala-$SPARK_SCALA_VERSION/jars"
fi

if [ -d "$SPARK_JARS_DIR" ]; then
# Build a zip file containing the source package with vignettes
SPARK_HOME="${SPARK_HOME}" "$R_SCRIPT_PATH/"R CMD build $FWDIR/pkg

find pkg/vignettes/. -not -name '.' -not -name '*.Rmd' -not -name '*.md' -not -name '*.pdf' -not -name '*.html' -delete
else
echo "Error Spark JARs not found in $SPARK_HOME"
exit 1
fi

# Run check as-cran.
VERSION=`grep Version $FWDIR/pkg/DESCRIPTION | awk '{print $NF}'`
@@ -54,11 +71,32 @@ fi

if [ -n "$NO_MANUAL" ]
then
CRAN_CHECK_OPTIONS=$CRAN_CHECK_OPTIONS" --no-manual"
CRAN_CHECK_OPTIONS=$CRAN_CHECK_OPTIONS" --no-manual --no-vignettes"
fi

echo "Running CRAN check with $CRAN_CHECK_OPTIONS options"

"$R_SCRIPT_PATH/"R CMD check $CRAN_CHECK_OPTIONS SparkR_"$VERSION".tar.gz
if [ -n "$NO_TESTS" ] && [ -n "$NO_MANUAL" ]
then
"$R_SCRIPT_PATH/"R CMD check $CRAN_CHECK_OPTIONS SparkR_"$VERSION".tar.gz
else
# This will run tests and/or build vignettes, and require SPARK_HOME
SPARK_HOME="${SPARK_HOME}" "$R_SCRIPT_PATH/"R CMD check $CRAN_CHECK_OPTIONS SparkR_"$VERSION".tar.gz
fi

# Install source package to get it to generate vignettes rds files, etc.
if [ -n "$CLEAN_INSTALL" ]
then
echo "Removing lib path and installing from source package"
LIB_DIR="$FWDIR/lib"
rm -rf $LIB_DIR
mkdir -p $LIB_DIR
"$R_SCRIPT_PATH/"R CMD INSTALL SparkR_"$VERSION".tar.gz --library=$LIB_DIR

# Zip the SparkR package so that it can be distributed to worker nodes on YARN
pushd $LIB_DIR > /dev/null
jar cfM "$LIB_DIR/sparkr.zip" SparkR
popd > /dev/null
fi

popd > /dev/null
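
For reference, the `NO_TESTS`, `NO_MANUAL`, and `CLEAN_INSTALL` flags tested above are plain environment variables, so a typical fast invocation of the script might look like this (a sketch inferred from the checks visible in the snippet):

```sh
# Skip tests and the manual/vignette checks, then reinstall the built package into R/lib
NO_TESTS=1 NO_MANUAL=1 CLEAN_INSTALL=1 ./R/check-cran.sh
```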
19 changes: 1 addition & 18 deletions R/create-docs.sh
@@ -20,7 +20,7 @@
# Script to create API docs and vignettes for SparkR
# This requires `devtools`, `knitr` and `rmarkdown` to be installed on the machine.

# After running this script the html docs can be found in
# $SPARK_HOME/R/pkg/html
# The vignettes can be found in
# $SPARK_HOME/R/pkg/vignettes/sparkr_vignettes.html
@@ -52,21 +52,4 @@ Rscript -e 'libDir <- "../../lib"; library(SparkR, lib.loc=libDir); library(knit

popd

# Find Spark jars.
if [ -f "${SPARK_HOME}/RELEASE" ]; then
SPARK_JARS_DIR="${SPARK_HOME}/jars"
else
SPARK_JARS_DIR="${SPARK_HOME}/assembly/target/scala-$SPARK_SCALA_VERSION/jars"
fi

# Only create vignettes if Spark JARs exist
if [ -d "$SPARK_JARS_DIR" ]; then
# render creates SparkR vignettes
Rscript -e 'library(rmarkdown); paths <- .libPaths(); .libPaths(c("lib", paths)); Sys.setenv(SPARK_HOME=tools::file_path_as_absolute("..")); render("pkg/vignettes/sparkr-vignettes.Rmd"); .libPaths(paths)'

find pkg/vignettes/. -not -name '.' -not -name '*.Rmd' -not -name '*.md' -not -name '*.pdf' -not -name '*.html' -delete
else
echo "Skipping R vignettes as Spark JARs not found in $SPARK_HOME"
fi

popd
2 changes: 1 addition & 1 deletion R/install-dev.sh
@@ -46,7 +46,7 @@ if [ ! -z "$R_HOME" ]
fi
R_SCRIPT_PATH="$(dirname $(which R))"
fi
echo "USING R_HOME = $R_HOME"
echo "Using R_SCRIPT_PATH = ${R_SCRIPT_PATH}"

# Generate Rd files if devtools is installed
"$R_SCRIPT_PATH/"Rscript -e ' if("devtools" %in% rownames(installed.packages())) { library(devtools); devtools::document(pkg="./pkg", roclets=c("rd")) }'
3 changes: 3 additions & 0 deletions R/pkg/.Rbuildignore
@@ -1,5 +1,8 @@
^.*\.Rproj$
^\.Rproj\.user$
^\.lintr$
^cran-comments\.md$
^NEWS\.md$
^README\.Rmd$
^src-native$
^html$
12 changes: 7 additions & 5 deletions R/pkg/DESCRIPTION
@@ -1,26 +1,27 @@
Package: SparkR
Type: Package
Version: 2.1.1
Title: R Frontend for Apache Spark
Version: 2.0.0
Date: 2016-08-27
Description: The SparkR package provides an R Frontend for Apache Spark.
Authors@R: c(person("Shivaram", "Venkataraman", role = c("aut", "cre"),
email = "[email protected]"),
person("Xiangrui", "Meng", role = "aut",
email = "[email protected]"),
person("Felix", "Cheung", role = "aut",
email = "[email protected]"),
person(family = "The Apache Software Foundation", role = c("aut", "cph")))
License: Apache License (== 2.0)
URL: http://www.apache.org/ http://spark.apache.org/
BugReports: https://cwiki.apache.org/confluence/display/SPARK/Contributing+to+Spark#ContributingtoSpark-ContributingBugReports
BugReports: http://spark.apache.org/contributing.html
Depends:
R (>= 3.0),
methods
Suggests:
knitr,
rmarkdown,
testthat,
e1071,
survival
Description: The SparkR package provides an R frontend for Apache Spark.
License: Apache License (== 2.0)
Collate:
'schema.R'
'generics.R'
@@ -48,3 +49,4 @@ Collate:
'utils.R'
'window.R'
RoxygenNote: 5.0.1
VignetteBuilder: knitr
30 changes: 27 additions & 3 deletions R/pkg/NAMESPACE
@@ -1,9 +1,26 @@
#
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#

# Imports from base R
# Do not include stats:: "rpois", "runif" - causes error at runtime
importFrom("methods", "setGeneric", "setMethod", "setOldClass")
importFrom("methods", "is", "new", "signature", "show")
importFrom("stats", "gaussian", "setNames")
importFrom("utils", "download.file", "object.size", "packageVersion", "untar")
importFrom("utils", "download.file", "object.size", "packageVersion", "tail", "untar")

# Disable native libraries till we figure out how to package it
# See SPARKR-7839
@@ -16,6 +33,7 @@ export("sparkR.stop")
export("sparkR.session.stop")
export("sparkR.conf")
export("sparkR.version")
export("sparkR.uiWebUrl")
export("print.jobj")

export("sparkR.newJObject")
@@ -45,7 +63,8 @@ exportMethods("glm",
"spark.als",
"spark.kstest",
"spark.logit",
"spark.randomForest")
"spark.randomForest",
"spark.gbt")

# Job group lifecycle management methods
export("setJobGroup",
@@ -92,6 +111,7 @@ exportMethods("arrange",
"freqItems",
"gapply",
"gapplyCollect",
"getNumPartitions",
"group_by",
"groupBy",
"head",
@@ -353,7 +373,9 @@ export("as.DataFrame",
"read.ml",
"print.summary.KSTest",
"print.summary.RandomForestRegressionModel",
"print.summary.RandomForestClassificationModel")
"print.summary.RandomForestClassificationModel",
"print.summary.GBTRegressionModel",
"print.summary.GBTClassificationModel")

export("structField",
"structField.jobj",
@@ -380,6 +402,8 @@ S3method(print, summary.GeneralizedLinearRegressionModel)
S3method(print, summary.KSTest)
S3method(print, summary.RandomForestRegressionModel)
S3method(print, summary.RandomForestClassificationModel)
S3method(print, summary.GBTRegressionModel)
S3method(print, summary.GBTClassificationModel)
S3method(structField, character)
S3method(structField, jobj)
S3method(structType, jobj)