Machine learning algorithms such as k-means clustering typically require several passes over a dataset of learning examples (e.g., signals, images, data volumes). All of these examples must therefore be acquired, stored in memory, and read multiple times, which becomes prohibitive when their number grows very large. On the other hand, the model learned from this data (e.g., the centroids in k-means clustering) is usually simple and carries relatively little information compared to the size of the dataset.
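To make the contrast concrete, here is a minimal NumPy sketch of plain Lloyd-style k-means on toy data (the dataset, dimensions, and iteration count are illustrative assumptions, not taken from the text): every iteration reads the entire dataset once, yet the learned model is only k*d numbers.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dataset standing in for signals/images: n examples in d dimensions,
# drawn from three well-separated Gaussian clusters.
n_per, d, k = 3333, 8, 3
X = np.concatenate([rng.normal(loc=c, size=(n_per, d)) for c in (-5.0, 0.0, 5.0)])

# One deterministic seed point per cluster (illustrative initialization).
centroids = X[[0, n_per, 2 * n_per]]

# Plain Lloyd iterations: each loop body below is one FULL pass over X,
# so 10 iterations means reading the whole dataset 10 times.
for _ in range(10):
    dists = ((X[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=-1)
    labels = dists.argmin(axis=1)
    centroids = np.stack([X[labels == j].mean(axis=0) for j in range(k)])

# The learned model is tiny compared to the data it was learned from:
# k * d = 24 numbers, versus 3 * 3333 * 8 = 79992 dataset entries.
print(X.size, centroids.size)
```

The loop makes the multiple-pass cost explicit: the data array `X` must stay accessible across all iterations, while `centroids` (the entire output of learning) would fit in a few cache lines.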