Commit b2e5b4a

Fixes to docs / examples
1 parent 078617c commit b2e5b4a

2 files changed: +4 -4 lines changed

docs/mllib-clustering.md

Lines changed: 1 addition & 1 deletion
@@ -220,7 +220,7 @@ val numClusters = 2
 val model = new StreamingKMeans()
   .setK(numClusters)
   .setDecayFactor(1.0)
-  .setRandomWeights(numDimensions)
+  .setRandomCenters(numDimensions, 0.0)
 
 {% endhighlight %}
 

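The change above updates the example to call setRandomCenters(numDimensions, 0.0), which initializes random centers of the given dimension with an initial weight of 0.0, instead of the older setRandomWeights(numDimensions). For context, here is a minimal sketch of how the corrected call could sit in a complete streaming k-means job; the input paths, application name, batch interval, and the numDimensions/numClusters values are illustrative assumptions, not taken from this commit:

import org.apache.spark.SparkConf
import org.apache.spark.mllib.clustering.StreamingKMeans
import org.apache.spark.mllib.linalg.Vectors
import org.apache.spark.mllib.regression.LabeledPoint
import org.apache.spark.streaming.{Seconds, StreamingContext}

val conf = new SparkConf().setAppName("StreamingKMeansSketch")
val ssc = new StreamingContext(conf, Seconds(1))

// Hypothetical input directories; each batch of text files is parsed into vectors.
val trainingData = ssc.textFileStream("/path/to/training").map(Vectors.parse)
val testData = ssc.textFileStream("/path/to/testing").map(LabeledPoint.parse)

val numDimensions = 3
val numClusters = 2
val model = new StreamingKMeans()
  .setK(numClusters)
  .setDecayFactor(1.0)
  .setRandomCenters(numDimensions, 0.0)  // the corrected call from this commit

model.trainOn(trainingData)
model.predictOnValues(testData.map(lp => (lp.label, lp.features))).print()

ssc.start()
ssc.awaitTermination()

Here setDecayFactor(1.0) keeps the full history of data weighted equally; values below 1.0 progressively down-weight older batches.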
mllib/src/main/scala/org/apache/spark/mllib/clustering/StreamingKMeans.scala

Lines changed: 3 additions & 3 deletions
@@ -32,9 +32,9 @@ import org.apache.spark.util.random.XORShiftRandom
 /**
  * :: DeveloperApi ::
  * StreamingKMeansModel extends MLlib's KMeansModel for streaming
- * algorithms, so it can keep track of the number of points assigned
- * to each cluster, and also update the model by doing a single iteration
- * of the standard k-means algorithm.
+ * algorithms, so it can keep track of a continuously updated weight
+ * associated with each cluster, and also update the model by
+ * doing a single iteration of the standard k-means algorithm.
  *
  * The update algorithm uses the "mini-batch" KMeans rule,
  * generalized to incorporate forgetfullness (i.e. decay).

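The reworded scaladoc describes each cluster as carrying a continuously updated weight, with the model refreshed by one mini-batch k-means step that incorporates decay. As a rough illustration of that idea only, not the class's actual implementation, a decayed update for a single cluster could look like the following sketch; the function and parameter names are made up for exposition:

// Illustrative sketch: a decayed mini-batch update for one cluster.
// `oldCenter`/`oldWeight` are the cluster's current state; `batchSum`/`batchCount`
// summarize the points assigned to this cluster in the latest mini-batch;
// `decayFactor` discounts the old weight so stale data is gradually forgotten.
def updatedCluster(
    oldCenter: Array[Double],
    oldWeight: Double,
    batchSum: Array[Double],
    batchCount: Long,
    decayFactor: Double): (Array[Double], Double) = {
  val discounted = oldWeight * decayFactor
  val newWeight = discounted + batchCount
  val newCenter = oldCenter.zip(batchSum).map { case (c, s) =>
    (c * discounted + s) / math.max(newWeight, 1e-16)
  }
  (newCenter, newWeight)
}

This is why the docstring now speaks of a per-cluster weight rather than a raw point count: once old contributions are discounted, the tracked quantity is a decayed, fractional weight rather than an integer number of assigned points.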