Scientific research on trust and reputation mechanisms for virtual societies is a recent discipline aimed at increasing the reliability and performance of electronic communities by introducing these well-known human social control mechanisms into such communities. Computer science has moved from the paradigm of an isolated machine to the paradigm of networked systems and distributed computing. Likewise, artificial intelligence is quickly moving from the paradigm of an isolated, non-situated intelligence to the paradigm of situated, social and collective intelligence. The new paradigm of so-called intelligent or adaptive agents and Multi-Agent Systems (MAS), together with the spectacular emergence of information society technologies (especially reflected in the popularisation of electronic commerce), is responsible for the increasing interest in trust and reputation mechanisms applied to electronic societies.

An agent is a computer system capable of flexible autonomous action in a dynamic, unpredictable and open environment, endowed with the capacity to interact with other systems (artificial or natural). Agents are often deployed in environments in which they interact, and perhaps cooperate, with other agents that have possibly conflicting aims. Such environments are known as multi-agent systems, and they are called to become a key element of the information society. In this context, trust and reputation play a role similar to the one they play in human societies.

The objectives of this project are:
1. Improve the reliability and security of virtual societies by advancing the state of the art of computational trust and reputation models.
2. Provide common metrics to compare computational trust and reputation models.
3. Increase people's confidence in multi-agent systems technology.