
A Logical Language for Computational Trust

Tagliaferri, Mirko
2019

Abstract

Trust is an ambiguous concept. Most of this ambiguity stems from the fact that trust (like, e.g., love) is a naïve concept: every human being possesses a rough, pre-theoretical notion of trust and, in applying it, often relies on gut feelings rather than objective evaluations. For instance, even though it is not advisable, there seems to be no particular problem with the sentence "I don't know why, but I feel I can trust him", uttered by someone placing his trust in a complete stranger. This produces the unpleasant effect that there is no general meaning of trust on which everyone agrees, since each individual attaches a specific meaning to the concept based on personal experiences and, more importantly, on moral and ethical upbringing. A further source of ambiguity is the fact that trust is a wide concept, applicable in different contexts with meanings that can vary greatly. When someone utters "I trust you are enjoying your summer", she has quite a different meaning in mind than when she utters "I trust you will complete the task in time": the former sentence expresses hope, while the latter expresses a form of belief about the abilities of the other agent. Putting these two facts together (trust's naïve and multi-purpose nature), it comes as no surprise that in everyday life there is a plethora of different interpretations surrounding the concept of trust. Per se, this does not produce any specific problem: natural languages contain many concepts that lack a precise meaning, and, after all, the word 'trust' is employed daily by many agents without particular difficulty. This is because, in ordinary interactions, context identification and physical cues help disambiguate the various meanings of the concepts employed and, when they do not, further enquiries and repeated interactions allow groups of agents to agree on common meanings. However, the multifaceted nature of trust becomes problematic when this plethora of interpretations is carried over to formal settings. If two political economists discuss a national policy that involves trust, but do not realize that they are not sharing a common meaning for the notion, then it is very unlikely that they will settle on the correct way of implementing the policy. This is indeed what happens with the concept of trust: different disciplines conceptualize trust differently and, often, within the same discipline it is possible to find highly different conceptualizations. The most straightforward example is economics, where "[T]rust is defined by some as a characteristic of a particular class of action, while others identify it as a precondition for any such action to take place. At the same time, some discuss trust with reference to governments and organisations, while others examine trust between individuals or people in particular roles." (Furlong, The Conceptualization of 'Trust' in Economic Thought) Another example is (the security fragment of) computer science: with the gradual transition of social interactions from face-to-face to digital environments, the importance of having a digital counterpart of trust led to numerous theoretical analyses of the notion which, instead of fostering a unified account, produced a potpourri of definitions with different domains of application and different levels of abstraction.
This is a pressing issue for computer scientists who wish to build general frameworks for digital systems that include a trust component: different definitions of trust might have drastically different effects on those frameworks and might thus produce incompatible digital systems. This calls for a unified formal account of the notion of trust and for a formal framework that embodies it. With such structures in hand, a computer scientist could construct general frameworks that reproduce and imitate social environments more faithfully and thus promote higher-quality interactions on the web. The focus of this thesis is, therefore, to identify a set of core features of socio-economic notions of trust. From there, the aim is to build a computational counterpart of the notion that does justice to previous attempts at formalising trust in computer science. This computational counterpart is then employed to build increasingly powerful formal languages that allow reasoning about trust. The main properties of those formal languages are analysed and, finally, comparisons are made with other models employed to model trust in computer science.
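To give a concrete flavour of what a formal language for reasoning about trust can look like, the following is a minimal illustrative sketch in LaTeX of a propositional language extended with a trust modality. The operator T_{ij} and its graded variant are standard devices from the modal-logic literature, used here purely as an assumption for exposition; they are not the specific language defined in the thesis.

% Illustrative sketch only: a propositional language enriched with a
% trust modality. The operator T_{ij} and its graded refinement are
% assumptions for exposition, not the language developed in the thesis.
\[
  \varphi \;::=\; p \;\mid\; \neg\varphi \;\mid\; \varphi \wedge \varphi \;\mid\; T_{ij}\,\varphi
\]
% Reading: $T_{ij}\varphi$ = "agent $i$ trusts agent $j$ with respect
% to $\varphi$". A graded refinement replaces $T_{ij}$ with
% $T_{ij}^{\geq k}$, read "$i$ trusts $j$ about $\varphi$ to degree at
% least $k$", for a threshold $k \in [0,1]$.

In a language of this kind, increasingly powerful variants are obtained by enriching the modality, for instance with degrees, time, or topics of trust, which mirrors the progression from a core computational notion to more expressive formal languages described above.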
Files in this record:
File: phd_uniurb_275351.pdf (open access)
Type: DT
Licence: Creative Commons
Size: 3.94 MB
Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11576/2666335