Journal of Experimental Social Psychology
journal homepage: www.elsevier.com/locate/jesp
The strategic moral self: Self-presentation shapes moral dilemma judgments
Sarah C. Rom a,⁎, Paul Conway b
a Department of Psychology, University of Cologne, Germany
b Department of Psychology, Florida State University, United States
ARTICLE INFO
Keywords: Moral dilemmas, Social judgment, Social perception, Self-perception, Meta-perception
ABSTRACT
Research has focused on the cognitive and affective processes underpinning dilemma judgments where causing harm maximizes outcomes. Yet, recent work indicates that lay perceivers infer the processes behind others’ judgments, raising two new questions: whether decision-makers accurately anticipate the inferences perceivers draw from their judgments (i.e., meta-insight), and whether decision-makers strategically modify judgments to present themselves favorably. Across seven studies, a) people correctly anticipated how their dilemma judgments would influence perceivers’ ratings of their warmth and competence, though self-ratings differed (Studies 1–3), b) people strategically shifted public (but not private) dilemma judgments to present themselves as warm or competent depending on which traits the situation favored (Studies 4–6), and c) self-presentation strategies augmented perceptions of the weaker trait implied by their judgment (Study 7). These results suggest that moral dilemma judgments arise out of more than just basic cognitive and affective processes; complex social considerations causally contribute to dilemma decision-making.
During the Second World War, Alan Turing and his team cracked the Enigma Code encrypting German war communications. Soon, British High Command discovered an impending attack on Coventry—but taking countermeasures would reveal the decryption (Winterbotham, 1974). Thus, they faced a moral dilemma: allow the deadly raid to proceed and continue intercepting German communications, or deploy lifesaving countermeasures and blind themselves to future attacks. Ultimately, the Allies allowed the attack to proceed. Lives were lost, but some analysts suggest this decision expedited the war’s conclusion (Copeland, 2014). The moral judgment literature suggests that such decisions reflect a tension between basic affective processes rejecting harm and cognitive evaluations of outcomes allowing harm (Greene, Nystrom, Engell, Darley, & Cohen, 2004). But is it possible that self-presentation also factored in? The British High Command may have considered how their allies would react upon learning they threw away a tool for victory to prevent one deadly, but relatively modest, raid.
Moral dilemmas typically entail considering whether to accept harm to prevent even greater catastrophe. Philosophers originally developed such dilemmas to illustrate a distinction between killing someone as the means of saving others versus as a side effect of doing so (Foot, 1967), but subsequent theorists have largely described them as illustrating a
conflict between deontological and utilitarian philosophy (e.g., Greene, Sommerville, Nystrom, Darley, & Cohen, 2001). The dual process model suggests that affective reactions to harm underlie decisions to reject harm, whereas cognitive evaluations of outcomes underlie decisions to accept harm to maximize outcomes (Greene et al., 2004). Other theorists have described these processes in terms of basic cognitive architecture for decision-making (Crockett, 2013; Cushman, 2013), or heuristic adherence to moral rules (Sunstein, 2005). Notably, all such existing models focus on relatively basic, non-social processing.
Yet, Haidt (2001) argued that moral judgments are intrinsically social, and communicate important information about the speaker. Indeed, recent work indicates that lay perceivers view decision-makers who reject harm (upholding deontology) as warmer, more moral, more trustworthy, more empathic, and more emotional than decision-makers who accept harm (upholding utilitarianism), whom perceivers view as more competent and logical, with consequences for hiring decisions (Everett, Pizarro, & Crockett, 2016; Kreps & Monin, 2014; Rom, Weiss, & Conway, 2016; Uhlmann, Zhu, & Tannenbaum, 2013).1
Moreover, social pressure can influence dilemma judgments (Bostyn & Roets, 2016; Kundu & Cummins, 2012; Lucas & Livingston, 2014). Such findings raise the question of whether people have meta-insight
http://dx.doi.org/10.1016/j.jesp.2017.08.003 Received 4 April 2017; Received in revised form 8 August 2017; Accepted 17 August 2017
⁎ Corresponding author at: Department of Psychology, University of Cologne, Richard-Strauss-Str. 2, 50931, Cologne, Germany. E-mail addresses: sarah.rom@uni-koeln.de (S.C. Rom), conway@psy.fsu.edu (P. Conway).
1 Deontological dilemma judgments appear to convey both warmth and morality (Rom et al., 2016). Although these constructs can be disentangled (e.g., Brambilla et al., 2011), in the present context they happen to covary substantially. It may be that different aspects of deontological decisions influence these perceptions (e.g., whether they accord with moral rules; whether they suggest emotional processing), but these aspects overlap in the current paradigm. We focus primarily on perceptions of warmth, which roughly corresponds to the affective processing postulated by the dual process model, and relegate findings regarding morality to the supplement. Future work should disentangle warmth trait perceptions from moral character evaluations.
into how their dilemma judgments make them appear in the eyes of others, and whether decision-makers strategically adjust dilemma judgments to create desired social impressions. If so, this would provide the first evidence to our knowledge that higher-order processes causally influence judgments, suggesting dilemma decisions do not merely reflect the operation of basic affective and cognitive processes.
1. Moral dilemma judgments: basic vs. social processes
Moral dilemmas originated as philosophical thought experiments, including the famous trolley dilemma where decision-makers could redirect a runaway trolley so it kills one person instead of five (Foot, 1967). According to Greene et al. (2001), refusing to cause harm to save others qualifies as a ‘characteristically deontological’ decision, because in deontological ethics the morality of an action primarily hinges on its intrinsic nature (Kant, 1785/1959). Conversely, causing harm by redirecting the trolley saves five people, thereby qualifying as a ‘characteristically utilitarian’ decision, because in utilitarian ethics the morality of an action primarily hinges on its outcomes (Mill, 1861/1998).2 Note that utilitarian philosophy technically entails impartial maximization of the greater good, which represents a subset of the broader concept of consequentialism, which advocates for outcome-focused decision-making more generally. We do not wish to imply that making a judgment consistent with utilitarianism renders one a utilitarian—it need not (e.g., Kahane, 2015)—but rather we use the term ‘utilitarian’ in the simpler senses that such judgments a) objectively maximize overall outcomes, b) appear to often entail ordinary cost-benefit reasoning, and c) utilitarian/consequentialist philosophers generally approve of such judgments (see Amit & Greene, 2012).
Although dilemmas originated in philosophy, research in psychology, neuroscience, and experimental philosophy has aimed to clarify the psychological mechanisms driving dilemma judgments. Most prominent among these is the dual process model, which postulates that basic affective and cognitive processes drive dilemma judgments (Greene et al., 2001). Other theorists have argued judgments reflect decision-making systems focused on immediate action versus long-range goals (Crockett, 2013; Cushman, 2013), heuristic adherence to moral rules (Sunstein, 2005), or the application of innate moral grammar (Mikhail, 2007a, 2007b). We do not aim to adjudicate between these various claims, nor do we dispute the contribution of such processes. Rather, we simply note that these models focus on basic, nonsocial processes.
Research has largely ignored the possibility that higher-order sophisticated social processes might causally contribute to dilemma judgments. Yet, morality appears intrinsically social (Haidt, 2001), and most real-world moral judgments involve publicly communicating with others (e.g., Hofmann, Wisneski, Brandt, & Skitka, 2014). We expect the same is true of dilemma judgments. Although the best-known dilemmas are hypothetical (such as the trolley dilemma), many real-world decisions entail causing harm to improve overall outcomes (e.g., launching airstrikes in Syria to prevent ISIS from gaining momentum, punishing naughty children to improve future behavior, imposing fines to prevent speeding). As decisions in such cases align with either deontological or utilitarian ethical positions, they correspond to real-world moral dilemmas. Moreover, lay decision-makers employ verbal arguments that align with deontological and utilitarian ethical positions (Kreps & Monin, 2014). Hence, social consideration of dilemma judgments is not restricted to responses to hypothetical scenarios, but forms an ordinary part of communication about common moral situations.
Kreps and Monin (2014) examined deontological and utilitarian arguments in speeches by Presidents Clinton…
