Abstract

With the advent of the digital age, data storage continues to grow rapidly, especially with the development of internet data centers. The environmental impact of this technological revolution has become a concern. As the cost of digital recording decreases, the amount of unnecessary data stored increases. This paper presents a new algorithm for compressing digital data series that uses a local measure of relevance based on statistical characteristics. This compression produces a non-uniform sampling whose density depends on the relevance of the data, hence the adaptive nature of the algorithm. It works without any additional input and makes it possible to build a data tree with progressive compression. Such a structure can feed multiscale analysis tools as well as selective memory release solutions for efficient archive management. Tests were carried out on two ideal noise-free signals and on two real-world applications, namely the compression of electrocardiograms retrieved from the PhysioNet database and the compression of remote measurements provided by the constellation of ESA's Swarm satellites. Non-sparse signals were chosen in order to investigate compression performance under unfavorable conditions. Even so, the number of samples was reduced by more than half while the relevant characteristics of the signals were preserved. By reconstructing uniform samplings of the ideal noise-free signals, a measure of the compression error is obtained. By comparing the Fourier transforms of the original and reconstructed signals, we also enable future comparative analyses that take into account the ratio between the bandwidth and the sampling frequency of the original signal.
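To illustrate the general idea of relevance-driven, non-uniform sampling described above, the following is a minimal sketch only. It assumes a local standard deviation in a sliding window as the relevance measure and a fixed keep_ratio parameter; the paper's actual statistical criterion, its parameter-free operation, and the progressive data-tree construction are not reproduced here.

```python
import numpy as np

def local_relevance(x, window=16):
    # Hypothetical relevance measure: local standard deviation in a
    # sliding window (the paper's statistical criterion may differ).
    pad = window // 2
    xp = np.pad(x, pad, mode="edge")
    rel = np.array([xp[i:i + window].std() for i in range(len(x))])
    return rel + 1e-12  # strictly positive so it can act as a density

def adaptive_sample(t, x, keep_ratio=0.4, window=16):
    # Non-uniform sampling: retain samples where the cumulative relevance
    # crosses evenly spaced thresholds, so the sampling density follows
    # the local relevance of the data.
    rel = local_relevance(x, window)
    cum = np.cumsum(rel)
    n_keep = max(2, int(keep_ratio * len(x)))
    thresholds = np.linspace(cum[0], cum[-1], n_keep)
    idx = np.unique(np.searchsorted(cum, thresholds))
    return t[idx], x[idx], idx

if __name__ == "__main__":
    # Toy non-sparse test signal: a slow tone plus a chirp-like component.
    t = np.linspace(0.0, 1.0, 2000)
    x = np.sin(2 * np.pi * 5 * t) + 0.3 * np.sin(2 * np.pi * 40 * t**2)
    ts, xs, idx = adaptive_sample(t, x, keep_ratio=0.4)
    print(f"kept {len(idx)} of {len(x)} samples ({len(idx) / len(x):.0%})")
```

Applying such a stage repeatedly with decreasing keep ratios would yield the kind of progressively compressed hierarchy the abstract refers to, with each level a coarser, relevance-weighted view of the one below.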
