2 files changed, +4 −4 lines: the streaming k-means documentation example and a scaladoc in
mllib/src/main/scala/org/apache/spark/mllib/clustering.

Documentation example:

@@ -220,7 +220,7 @@ val numClusters = 2
 val model = new StreamingKMeans()
   .setK(numClusters)
   .setDecayFactor(1.0)
-  .setRandomWeights(numDimensions)
+  .setRandomCenters(numDimensions, 0.0)

 {% endhighlight %}
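For context, the renamed initializer slots into the usual streaming workflow as follows. This is a minimal sketch, not part of the change: the StreamingContext setup, the input paths, and the stream names (trainingData, testData) are illustrative assumptions, while setRandomCenters(numDimensions, 0.0) is the call introduced above.

import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}
import org.apache.spark.mllib.linalg.Vectors
import org.apache.spark.mllib.clustering.StreamingKMeans

// Sketch: initialize random centers with zero initial weight, then train and predict on streams.
val conf = new SparkConf().setAppName("StreamingKMeansSketch")
val ssc = new StreamingContext(conf, Seconds(1))

val numDimensions = 3
val numClusters = 2

// Hypothetical text streams; each line is parsed into an MLlib vector.
val trainingData = ssc.textFileStream("/path/to/training").map(Vectors.parse)
val testData = ssc.textFileStream("/path/to/test").map(Vectors.parse)

val model = new StreamingKMeans()
  .setK(numClusters)
  .setDecayFactor(1.0)
  .setRandomCenters(numDimensions, 0.0)   // renamed from setRandomWeights

model.trainOn(trainingData)        // update the cluster centers on every batch
model.predictOn(testData).print()  // emit a cluster index per test point

ssc.start()
ssc.awaitTermination()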
mllib/src/main/scala/org/apache/spark/mllib/clustering (StreamingKMeansModel scaladoc):

@@ -32,9 +32,9 @@ import org.apache.spark.util.random.XORShiftRandom
 /**
  * :: DeveloperApi ::
  * StreamingKMeansModel extends MLlib's KMeansModel for streaming
- * algorithms, so it can keep track of the number of points assigned
- * to each cluster, and also update the model by doing a single iteration
- * of the standard k-means algorithm.
+ * algorithms, so it can keep track of a continuously updated weight
+ * associated with each cluster, and also update the model by
+ * doing a single iteration of the standard k-means algorithm.
  *
  * The update algorithm uses the "mini-batch" KMeans rule,
  * generalized to incorporate forgetfulness (i.e. decay).
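To make the decayed mini-batch rule in that scaladoc concrete, here is a small self-contained sketch of one update for a single cluster. It is not Spark's implementation: the Cluster case class, the decayedUpdate helper, and the exact bookkeeping are illustrative assumptions. The intent is the commonly cited rule c' = (c * n * a + x * m) / (n * a + m), where c and n are the current center and weight, a is the decay factor, and x and m are the mean and count of the batch points assigned to the cluster.

// Illustrative sketch (not Spark code): one decayed mini-batch update for a single cluster.
final case class Cluster(center: Array[Double], weight: Double)

def decayedUpdate(cluster: Cluster,
                  batchPoints: Seq[Array[Double]],
                  decayFactor: Double): Cluster = {
  val m = batchPoints.size.toDouble
  if (m == 0) {
    // No new points in this batch: only the carried-over weight decays.
    cluster.copy(weight = cluster.weight * decayFactor)
  } else {
    val dim = cluster.center.length
    // x: mean of the batch points assigned to this cluster.
    val batchMean = Array.tabulate(dim)(i => batchPoints.map(_(i)).sum / m)
    val discounted = cluster.weight * decayFactor   // n * a
    val newWeight = discounted + m                  // decayed weight plus batch count
    // c' = (c * n * a + x * m) / (n * a + m): a single k-means style step,
    // with old information down-weighted by the decay factor.
    val newCenter = Array.tabulate(dim) { i =>
      (cluster.center(i) * discounted + batchMean(i) * m) / newWeight
    }
    Cluster(newCenter, newWeight)
  }
}

With decayFactor set to 1.0, as in the documentation example above, past batches keep their full weight; values below 1.0 make the model gradually forget older data.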