
Heppe/etal/2020a: Resource-Constrained On-Device Learning by Dynamic Averaging

Bibtype Inproceedings
Bibkey Heppe/etal/2020a
Author Heppe, Lukas and Kamp, Michael and Adilova, Linara and Piatkowski, Nico and Heinrich, Danny and Morik, Katharina
Editor Koprinska, Irena and Kamp, Michael and Appice, Annalisa and Loglisci, Corrado and Antonie, Luiza and Zimmermann, Albrecht and Guidotti, Riccardo and {\"O}zg{\"o}bek, {\"O}zlem and Ribeiro, Rita P. and Gavald{\`a}, Ricard and Gama, Jo{\~a}o and Adilova, Linara and Krishnamurthy, Yamuna and Ferreira, Pedro M. and Malerba, Donato and Medeiros, Ib{\'e}ria and Ceci, Michelangelo and Manco, Giuseppe and Masciari, Elio and Ras, Zbigniew W. and Christen, Peter and Ntoutsi, Eirini and Schubert, Erich and Zimek, Arthur and Monreale, Anna and Biecek, Przemyslaw and Rinzivillo, Salvatore and Kille, Benjamin and Lommatzsch, Andreas and Gulla, Jon Atle
Title Resource-Constrained On-Device Learning by Dynamic Averaging
Booktitle ECML PKDD 2020 Workshops
Pages 129--144
Address Cham
Publisher Springer International Publishing
Abstract The communication between data-generating devices is partially responsible for a growing portion of the world’s power consumption. Thus reducing communication is vital, both from an economic and an ecological perspective. For machine learning, on-device learning avoids sending raw data, which can reduce communication substantially. Furthermore, not centralizing the data protects privacy-sensitive data. However, most learning algorithms require hardware with high computation power and thus high energy consumption. In contrast, ultra-low-power processors, like FPGAs or micro-controllers, allow for energy-efficient learning of local models. Combined with communication-efficient distributed learning strategies, this reduces the overall energy consumption and enables applications that were previously impossible due to limited energy on local devices. The major challenge, then, is that low-power processors typically only have integer processing capabilities. This paper investigates an approach to communication-efficient on-device learning of integer exponential families that can be executed on low-power processors, is privacy-preserving, and effectively minimizes communication. The empirical evaluation shows that the approach can reach a model quality comparable to a centrally learned regular model with an order of magnitude less communication. Comparing the overall energy consumption, this reduces the required energy for solving the machine learning task by a significant amount.
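The "dynamic averaging" in the title refers to a communication-efficient synchronization scheme: devices train locally and transmit their model only when it has drifted far enough from the last shared reference. The sketch below is an illustrative simplification, not the paper's exact protocol (the function name, the squared-distance criterion, and averaging only over the transmitting devices are assumptions for the example); the actual method additionally handles integer-valued exponential-family models.

```python
import numpy as np

def dynamic_averaging_round(local_models, reference, threshold):
    """One synchronization round of a dynamic-averaging protocol (illustrative).

    Each device checks how far its local model has drifted from the shared
    reference model; only devices whose squared distance exceeds the
    threshold transmit their parameters. The coordinator averages the
    received models and sends the average back to those devices, so
    communication happens only when local models actually diverge.
    """
    # Devices that violate the local divergence condition
    violators = [i for i, w in enumerate(local_models)
                 if np.sum((w - reference) ** 2) > threshold]
    if not violators:
        # No device drifted far enough: zero communication this round
        return local_models, reference, 0

    # Coordinator averages only the transmitted models
    avg = np.mean([local_models[i] for i in violators], axis=0)
    new_models = [avg if i in violators else w
                  for i, w in enumerate(local_models)]
    return new_models, avg, len(violators)
```

With a large threshold, rounds where all devices stay close to the reference cost no communication at all, which is where the order-of-magnitude savings reported in the abstract come from.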
Year 2020
Projekt SFB876-A1
Bibtex Here you can get this literature entry in BibTeX format:

@inproceedings{Heppe/etal/2020a,
  author    = {Heppe, Lukas and Kamp, Michael and Adilova, Linara and Piatkowski, Nico and Heinrich, Danny and Morik, Katharina},
  title     = {Resource-Constrained On-Device Learning by Dynamic Averaging},
  booktitle = {ECML PKDD 2020 Workshops},
  pages     = {129--144},
  address   = {Cham},
  publisher = {Springer International Publishing},
  isbn      = {978-3-030-65965-3},
  year      = {2020}
}