(Priors) Does evidence change beliefs? -- (Emotion) How we were persuaded to reach for the moon -- (Incentives) Should you scare people into action? -- (Agency) How you obtain power by letting go -- (Curiosity) What do people really want to know? -- (State) What happens to minds under threat? -- (Others, part I) Why do babies love iPhones? -- (Others, part II) Is "unanimous" as reassuring as it sounds? -- The future of influence?
People's risk estimates often do not align with the evidence available to them. In particular, people tend to discount bad news (such as evidence suggesting their risk of being involved in a car accident is higher than they thought) as compared to good news (evidence suggesting it is lower) – this is known as the belief update bias. It has been assumed that individuals use motivated reasoning to rationalise away unwanted evidence (e.g., "I am a safe driver, thus these statistics do not apply to me"). However, whether reasoning is required to discount bad news has not been tested directly. Here, we restrict cognitive resources using a cognitive load manipulation (Experiment 1) and a time restriction manipulation (Experiment 3) and find that while these manipulations diminish learning in general, they do not diminish the bias. Furthermore, we show that the relative neglect of bad news happens the moment new evidence is presented, not when participants are subsequently prompted to state their belief (Experiment 2). Our findings suggest that reasoning is not required for bad news to be discounted as compared to good news.
When faced with a global threat, people's perception of risk guides their response. When danger is to the self as well as to others, two risk estimates are generated: one for the self and one for others. Here, we set out to examine how people's perceptions of health risk to the self and to others are related to their psychological well-being and behavioral response. To that end, we surveyed a large representative sample of Americans facing the COVID-19 pandemic at two time points (N1 = 1145, N2 = 683). We found that people perceived their own risk to be relatively low, while estimating the risk to others as relatively high. These risk estimates were differentially associated with psychological well-being and behavior. In particular, perceived personal risk, but not public risk, was associated with people's happiness, while both were predictive of anxiety. In contrast, the tendency to engage in protective behaviors was predicted by people's estimated risk to the population, but not to themselves. This raises the possibility that people were predominantly engaging in protective behaviors for the benefit of others. The findings can inform public policy aimed at protecting people's psychological well-being and physical health during global threats.
Unrealistic optimism is a pervasive human trait influencing domains ranging from personal relationships to politics and finance. How people maintain unrealistic optimism, despite frequently encountering information that challenges those biased beliefs, is unknown. Here, we provide an explanation. Specifically, we show a striking asymmetry, whereby people updated their beliefs more in response to information that was better than expected than to information that was worse. This selectivity was mediated by a relative failure to code for errors that should reduce optimism. Distinct regions of the prefrontal cortex tracked estimation errors when those called for positive update, in both highly optimistic and less optimistic individuals. However, highly optimistic individuals exhibited reduced tracking of estimation errors that called for negative update within the right inferior prefrontal gyrus. These findings show that optimism is tied to a selective update failure and diminished neural coding of undesirable information regarding the future.
Dishonesty is an integral part of our social world, influencing domains ranging from finance and politics to personal relationships. Anecdotally, digressions from a moral code are often described as a series of small breaches that grow over time. Here, we provide empirical evidence for a gradual escalation of self-serving dishonesty and reveal a neural mechanism supporting it. Behaviorally, we show that the extent to which participants engage in self-serving dishonesty increases with repetition. Using fMRI, we show that signal reduction in the amygdala is sensitive to the history of dishonest behavior, consistent with adaptation. Critically, the extent of amygdala BOLD reduction to dishonesty on a present decision relative to the last predicts the magnitude of escalation of self-serving dishonesty on the next decision. The findings uncover a biological mechanism that supports a "slippery slope": what begins as small acts of dishonesty can escalate into larger instances.
On political questions, many people prefer to consult and learn from those whose political views are similar to their own, thus creating a risk of echo chambers or information cocoons. We test whether the tendency to prefer knowledge from the politically like-minded generalizes to domains that have nothing to do with politics, even when evidence indicates that politically like-minded people are less skilled in those domains than people with dissimilar political views. Participants had multiple opportunities to learn about others' (1) political opinions and (2) ability to categorize geometric shapes. They then decided whom to turn to for advice when solving an incentivized shape categorization task. We found that participants falsely concluded that politically like-minded others were better at categorizing shapes and thus chose to hear from them. Participants were also more influenced by politically like-minded others, even when they had good reason not to be. These results replicate in two independent samples. The findings demonstrate that knowing about others' political views interferes with the ability to learn about their competency in unrelated tasks, leading to suboptimal information-seeking decisions and errors in judgment. Our findings have implications for political polarization and social learning in the midst of political divisions.