Compressive Learning (CL) is a framework in which a target learning task (e.g., clustering or density fitting) is performed not on the whole dataset of signals but on a heavily compressed representation of it, called a sketch, enabling training with reduced time and memory resources. Because the sketch only records general tendencies of the dataset (i.e., generalized moments) while discarding individual data records, previous work argued that CL should protect the privacy of the users who contributed to the dataset, but did not provide formal arguments to back up this claim. This work aims to formalize this observation.
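For intuition, sketches in CL are typically empirical averages of generalized moments of the data; a common instantiation from the CL literature, assumed here purely for illustration, is the random Fourier feature sketch

$$ z(X) \;=\; \frac{1}{n} \sum_{i=1}^{n} \Phi(x_i), \qquad \Phi(x) \;=\; \exp\!\big(\mathrm{i}\,\Omega^{\top} x\big) \in \mathbb{C}^{m}, $$

where $x_1, \dots, x_n \in \mathbb{R}^d$ are the individual data records, $\Omega \in \mathbb{R}^{d \times m}$ stacks $m$ randomly drawn frequency vectors, and the sketch size $m$ is fixed independently of $n$. Each record enters the sketch only through the average, which is the informal basis of the privacy claim discussed above.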