Responsible sharing: Paving the path for transparent trust

Periodic Reporting for period 2 - TRUSTPATH (Responsible sharing: Paving the path for transparent trust)

Reporting period: 2022-03-01 to 2023-08-31

The collaborative (sharing, gig, or “on-demand”) economy is the peer-to-peer activity of obtaining, giving, or sharing access to goods and services, and is estimated to add €160–€572 billion to the EU economy. As such, the sharing economy is increasing in popularity and affects the lives of many. Many sharing economy platforms emphasize the positive aspects of partaking in this new economy, presenting trust as a key component linking users and providers. Indeed, Airbnb encourages hosts to “connect with others”, Uber states it allows drivers to “make money on your schedule”, and Deliveroo highlights that people can “get paid to ride around your city”. Partaking in the sharing economy, however, also carries ethical risks and challenges. For instance, third parties, such as neighbors, can be negatively affected when Airbnb apartments are rented out excessively; providers can be harmed by their ambiguous status between employee and self-employed on such platforms; and consumers risk receiving subpar service because the line between professional and non-professional provision of services is blurred. Given the sharing economy’s clear benefits but also its ethical risks, it is important to ensure that people partake in the sharing economy responsibly, considering the consequences for the parties directly involved in the transaction as well as for uninvolved third parties.

The overarching objective of the project is to examine the ways in which people can be encouraged to share responsibly, and the mechanisms underlying responsible sharing. The project will develop and test a novel psychological theory for the collaborative economy: Transparency Based Sharing. The theory focuses on the interplay between trust and transparency as the psychological mechanisms that facilitate responsible sharing. Specifically, the current work examines (1) whether users are aware of, or motivated to learn about, the side effects of using collaborative platforms; (2) whether the platforms’ promotion of trust increases users’ information neglect and willful ignorance; and (3) whether transparency reduces information neglect and increases responsible sharing.
The project has so far made several significant contributions by summarizing existing knowledge through literature reviews, a meta-analysis, and opinion pieces. From the insights gathered through these efforts, new projects have emerged and are now in various stages of development.

First, we conducted a comprehensive systematic literature review of the consequences of partaking in the trust-based sharing economy (Köbis, Soraperra, & Shalvi, 2021; Journal of Management). The review covers 93 empirical papers, providing an authoritative overview of the economic, social, and psychological consequences of trust-based commercial sharing for service providers, users, and third parties. It concludes that whereas a well-functioning infrastructure of payment, insurance, and communication enables the positive consequences of sharing, ambiguity about rules, roles, and regulations causes non-negligible negative consequences. To overcome these negative consequences and promote responsible forms of sharing, the review proposes a transparency-based sharing framework and outlines an agenda for future research. In a follow-up project (Shalvi, Mol, Molho, Vu, Leib, & Soraperra, 2021; Current Opinion in Psychology) we draw on insights from psychological science and review three main conditions under which evoking trust in sharing platforms may lead to irresponsible sharing: ethical blind spots, willful ignorance, and misinformation. We further propose that transparent information is key to enabling and encouraging responsible sharing.

Second, we focused on one of the key conditions that can lead people to disregard how their actions affect others and to engage in irresponsible sharing: willful ignorance. Providing an overview of the empirical work on the topic, we conducted the first meta-analysis on willful ignorance (Vu, Soraperra, Leib, van der Weele, & Shalvi, working paper), analyzing 33,603 decisions made by 6,531 participants across 56 different treatments. Meta-analytic results reveal that 40% of participants avoid easily obtainable information about the consequences of their actions on others, leading to a 15.6-percentage-point decrease in altruistic behavior compared to when information is provided. We discuss the motives behind willful ignorance and provide evidence consistent with excuse-seeking behavior aimed at maintaining a positive self-image. In an additional project, we examine how people search for information about the behavior of others in ethically tempting situations (Leib, 2022; Journal of Behavioral Decision Making). We find that people use the information they obtain as an excuse to engage in unethical behavior, but do not seek out this excuse intentionally. When people learned that others lied, they were more likely to lie themselves; however, they did not intentionally search for information about others’ lies ahead of time.

Beyond examining the search for and avoidance of information, we further test the supply of inconvenient information, that is, information about the consequences one’s actions have on others (Soraperra, van der Weele, Villeval, & Shalvi, 2023; Games and Economic Behavior). We find that senders who can supply information prefer to suppress a substantial amount of “inconvenient” information. They do so because they believe that the decision maker, just like them, prefers not to be informed. Indeed, around one-third of decision makers avoid senders who transmit inconvenient information. Together, when people can choose their senders, a matching process emerges: selfish decision makers remain ignorant about the consequences of their actions by choosing non-informative senders and in turn behave more selfishly, whereas altruistic decision makers seek out senders who provide information and in turn behave more prosocially.

Finally, in an online study we examine how different revenue models of sharing economy platforms (pure-sharing vs. for-profit), combined with the information the platform highlights (the trust vs. the profitability dimension), shape trusting behavior (Cicognani, Romagnoli, & Soraperra, in preparation). Results reveal that trust decreases when there is a mismatch between the revenue model and the message being highlighted: when a for-profit platform highlights feelings of trust, people are less likely to engage in trusting behavior. These results suggest that leveraging a rhetoric of trust can backfire for for-profit-oriented platforms in terms of trust generation.
The results so far take us beyond the state of the art by (1) quantifying how likely people are to engage in willful ignorance, through the first meta-analysis on the topic, and (2) mapping the field of responsible sharing by (i) introducing the transparency-based sharing theoretical framework and (ii) providing an extensive systematic literature review of the topic. We have further made several empirical discoveries about the way people share and search for information about the negative consequences their actions may have on others, as described above. We expect to continue adding discoveries along these lines, including through the cross-cultural study on willful ignorance (planned for later in the project) and the large-scale experiment.
Transparency-based sharing framework