Towards attack-resistant Aggregate Computing using trust mechanisms

Aldini, Alessandro; Viroli, Mirko
2018

Abstract

Recent trends such as the Internet of Things and pervasive computing demand novel engineering approaches able to support the specification and scalable runtime execution of adaptive behaviour for large collections of interacting devices. Aggregate Computing is one such approach, formally founded on the field calculus, which enables the programming of device aggregates from a global stance, through a functional composition of self-organisation patterns that is automatically turned into repetitive local computations and gossip-like interactions. However, the logically decentralised and open nature of such algorithms and systems presumes fundamental cooperation among the devices involved: an error in a device, or a focused attack, may significantly compromise the computation outcome and hence the algorithms built on top of it. For this reason, in this paper we take the first steps towards attack-resistant aggregate computations. We propose trust as a framework to detect, weigh, or isolate voluntary and involuntary misbehaviours, with the goal of mitigating their influence on the overall computation. On top of this, we consider recommendations, which provide greater reactivity and stability through the sharing of individual perceptions. To better understand the fragility of aggregate systems in the face of attacks, and to investigate the extent of the mitigation afforded by the adoption of trust mechanisms, we consider the paradigmatic case of the gradient algorithm. Experiments are carried out to analyse the sensitivity of the adopted trust framework to malevolent actions, and to study the impact of different factors on the error committed by trust-based gradients under attack. Finally, in a case study of the spatial channel algorithm, we show how the protection afforded by attack-resistant gradients can be effectively propagated to higher-level building blocks.
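To make the trust-based mitigation idea concrete, the following is a minimal, hypothetical sketch, not the paper's field-calculus implementation: a synchronous-round simulation of the classic distance gradient in which each device ignores values reported by neighbours whose trust score falls below a threshold, so that a compromised neighbour cannot drag estimates down. All names (gradient_round, trust_threshold, the toy topology) are illustrative assumptions.

    # Minimal illustrative sketch of a trust-filtered gradient (assumed names).
    import math

    def gradient_round(estimates, neighbours, link_dist, sources, trust,
                       trust_threshold=0.5):
        """Compute one synchronous round of a trust-filtered gradient.

        estimates : dict node -> current distance estimate
        neighbours: dict node -> list of neighbouring nodes
        link_dist : dict (node, neighbour) -> estimated link distance
        sources   : set of source nodes (gradient value 0)
        trust     : dict (node, neighbour) -> trust score in [0, 1]
        """
        updated = {}
        for node, nbrs in neighbours.items():
            if node in sources:
                updated[node] = 0.0
                continue
            # Only neighbours trusted above the threshold contribute; an
            # untrusted (possibly compromised) neighbour is simply ignored.
            candidates = [estimates[nbr] + link_dist[(node, nbr)]
                          for nbr in nbrs
                          if trust[(node, nbr)] >= trust_threshold]
            updated[node] = min(candidates, default=math.inf)
        return updated

    # Toy example: line topology s - a - b, where device a distrusts b's reports.
    neighbours = {"s": ["a"], "a": ["s", "b"], "b": ["a"]}
    link_dist = {("s", "a"): 1.0, ("a", "s"): 1.0,
                 ("a", "b"): 1.0, ("b", "a"): 1.0}
    trust = {edge: 1.0 for edge in link_dist}
    trust[("a", "b")] = 0.1          # b's reports are ignored by a
    est = {n: math.inf for n in neighbours}
    for _ in range(5):
        est = gradient_round(est, neighbours, link_dist, {"s"}, trust)
    print(est)                        # {'s': 0.0, 'a': 1.0, 'b': 2.0}

In this sketch the filtering is a hard cut-off; the paper instead considers trust as a means to detect, weigh, or isolate misbehaviours, so a weighted combination of neighbour contributions would be an equally plausible variant.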
Files in this record:

1-s2.0-S0167642318303046-main.pdf
  Type: Editorial (published) version
  Licence: Public, with copyright
  Size: 5.81 MB
  Format: Adobe PDF
  Access: not available (copy available on request)

main.pdf
  Description: post-print version
  Type: Refereed/accepted version
  Licence: Creative Commons
  Size: 3.3 MB
  Format: Adobe PDF
  Access: open access

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11576/2661132
Citations
  • PMC: not available
  • Scopus: 21
  • Web of Science (ISI): 19