Periodic Reporting for period 2 - TRUSTPATH (Responsible sharing: Paving the path for transparent trust)
Reporting period: 2022-03-01 to 2023-08-31
First, we conducted a systematic literature review (Köbis, Soraperra, & Shalvi, 2021, Journal of Management) of the consequences of partaking in the trust-based sharing economy. The review covers 93 empirical papers, providing an authoritative overview of the economic, social, and psychological consequences of trust-based commercial sharing for service providers, users, and third parties. The review concludes that whereas a well-functioning infrastructure of payment, insurance, and communication enables the positive consequences of sharing, ambiguity about rules, roles, and regulations causes non-negligible negative consequences. To overcome these negative consequences and promote responsible forms of sharing, the review proposes a transparency-based sharing framework, outlining an agenda for future research. In a follow-up project (Shalvi, Mol, Molho, Vu, Leib, & Soraperra, 2021, Current Opinion in Psychology), we utilize insights from psychological science and review three main conditions under which evoking trust in sharing platforms may lead to irresponsible sharing: ethical blind spots, willful ignorance, and misinformation. We further propose that transparent information is key to enabling and encouraging responsible sharing.
Second, we focused on one of the key conditions that can lead people to disregard how their actions affect others and engage in irresponsible sharing: willful ignorance. We conducted the first meta-analysis on willful ignorance (Vu, Soraperra, Leib, van der Weele, & Shalvi, working paper), providing an overview of the empirical work on the topic and analyzing 33,603 decisions made by 6,531 participants across 56 different treatments. Meta-analytic results reveal that 40% of participants avoid easily obtainable information about the consequences of their actions on others, leading to a 15.6-percentage-point decrease in altruistic behavior compared to when information is provided. We discuss the motives behind willful ignorance and provide evidence consistent with excuse-seeking behavior aimed at maintaining a positive self-image. In an additional project, we examine how people search for information about the behavior of others in ethically tempting situations (Leib, 2022, Journal of Behavioral Decision Making). We find that people use information they have obtained as an excuse to engage in unethical behavior, but do not seek this excuse intentionally. When people learned that others lied, they were more likely to lie themselves. However, people did not intentionally search for information about others’ lies ahead of time.
Beyond examining the search for and avoidance of information, we further test the supply of inconvenient information – information about the consequences one’s actions have on others (Soraperra, van der Weele, Villeval, & Shalvi, 2023, Games and Economic Behavior). We find that senders who can supply information prefer to suppress a substantial amount of “inconvenient” information. They do so because they believe the decision maker, just like them, prefers not to be informed. Indeed, around one-third of decision makers avoid senders who transmit inconvenient information. Together, when people can choose their senders, a matching process emerges: selfish decision makers remain ignorant about the consequences of their actions by choosing non-informative senders, in turn behaving more selfishly themselves. Altruistic decision makers, however, seek out senders who send information, in turn behaving more prosocially themselves.
Finally, in an online study we examine how different revenue models of sharing economy platforms (pure-sharing vs. for-profit), combined with the information the platform highlights (the trust vs. the profitability dimension), shape trust behavior (Cicognani, Romagnoli, & Soraperra, in preparation). Results reveal that trust decreases when there is a mismatch between the revenue model and the message being highlighted. Namely, when a for-profit platform highlights feelings of trust, people are less likely to engage in trusting behavior. These results suggest that leveraging a rhetoric of trust can actually backfire for for-profit platforms in terms of trust generation.