Opinion Science

#32: Moralizing and Attention with Ana Gantman

March 01, 2021 Andy Luttrell Season 2 Episode 12

Dr. Ana Gantman studies how people process moral stuff. She’s an assistant professor at Brooklyn College, and she finds that our attention is often drawn more quickly to morally relevant stimuli in our environment. More recently, she’s been looking into how our moral judgments collide with bureaucracy and how we can use moral psychology to address issues surrounding consent and sexual assault. 

 

Things we mention in this episode:

  • The “moral pop-out” effect where moral stuff grabs our attention (Gantman & Van Bavel, 2014; Brady, Gantman, & Van Bavel, 2020)
  • Moral pop-out seems to work like a motivational state because it goes away when needs for justice are satisfied (Gantman & Van Bavel, 2016)
  • Using EEG to study the time course of moral perception (Gantman et al., 2020)
  • The books The Utopia of Rules and Bullshit Jobs by David Graeber
  • How “phantom rules” can be selectively enforced when someone’s violated other social norms. 
  • Taking “consent pledges” before a party can get college students to moralize consent (The Daily Princetonian)


Check out my new audio course on Knowable: "The Science of Persuasion."

For a transcript of this episode, visit: http://opinionsciencepodcast.com/episode/moralizing-and-attention-with-ana-gantman/

Learn more about Opinion Science at http://opinionsciencepodcast.com/ and follow @OpinionSciPod on Twitter.

 

Andy Luttrell:

There’s no shortage of moralized stuff in our social media feeds. From one outrage to the next, it’s not hard to find people talking about what they find morally right and wrong, and you don’t even have to go looking for it. This kind of moralized language in our environment jumps out and grabs our attention for us. I’ll admit it, I am too often lured in by online comment sections. Whenever NPR posts an article, I go right to the comments to see what people think about it. And my eyes are quickly drawn to the people who highlight the injustice of the event and other people accusing the first person of virtue signaling, and still more people clamoring for fairness. This moral stuff is everywhere, but what makes it moral and why does it catch our attention? 

 

You’re listening to Opinion Science, the show about our opinions, where they come from, and how they change. I’m Andy Luttrell, and this week I talk with Ana Gantman. She’s an Assistant Professor of Psychology at Brooklyn College, and she finds that our attention is often drawn more quickly to morally relevant stimuli in our environment. More recently, she’s been looking into how our moral judgments collide with bureaucracy and how we can use moral psychology to address issues surrounding consent and sexual assault. We talk about all of that and more in our conversation, so let’s jump into it. 

 

It was weird, so I was revisiting some of your stuff this morning, and I have this bizarrely vivid memory of reading one of your papers in a Chinese restaurant. So-

 

Ana Gantman:  

Oh, I love it. 

 

Andy Luttrell: 

So, I don’t remember reading all the papers I’ve ever read, but for whatever reason, that one stands out. 

 

Ana Gantman:  

I love it. I hope it was the good Chinese food. 

 

Andy Luttrell:

No, no. The food was awful, but the reading was good. 

 

Ana Gantman:  

Oh. Well, I’ll take it. 

 

Andy Luttrell: 

So, I guess just to kick things off, you seem to kind of come out the gates with this moral pop-out idea in grad school, so I’m kind of curious, even if we start from square one and just be like, “What is that?” What does it mean when you say that moral information or moral stuff captures people’s attention? And where did that notion even come from to begin with? 

 

Ana Gantman:  

Yes. Okay, so I will tell you sort of how I think about it now, and then also how I was thinking about it at the time. 

 

Andy Luttrell: 

Perfect. 

 

Ana Gantman:  

So, now I think about my work as fitting under this broad umbrella of having a moral lens, or that morality is kind of a way that we can see the world. Sometimes I treat that kind of figuratively and think about how when we moralize something it affects our thoughts, opinions, and behaviors, and I’ve also taken it very literally to think about how morality affects what we literally see. And so, it’s in this kind of broader moral lens idea that I think about it now. 

 

At the time, the way that this idea came to be is that in graduate school I was in both Peter Gollwitzer and Gabriele Oettingen’s motivation lab and Jay Van Bavel’s lab, and I was learning about the effects that motivation or active goals have on your thoughts, and also how they affect attention. So, when you’re hungry, food smells really good. You notice it. You notice everybody has it. But then when you’re not hungry anymore, because you’ve eaten something, you can notice other stuff. And so, active goals tune our attention, and when a goal is satiated, that tuning goes away. 

 

So, with that kind of guiding principle in mind, I was learning all these things about morality, and also learning about approaches to the mind other than the dual process model, so thinking about iterative reprocessing, and dynamic systems, and how maybe some of these higher-level things can be affecting really low-level processes. So, for example, morality and vision. And so we thought, “Well, if your active goals tune what you see, and morality fulfills multiple goals, maybe there’s a link.” So, the idea is that moral information, and in particular right now, I’ll always be talking about moral words, is salient to us. 

 

And so, the way that we study that is that we created a list of moral words and matched non-moral words, and this was one of the most fun things that I did in grad school, which was to try to come up with a bunch of words I thought were really moral, like kill, and should, and just, and then mentally subtract the morality out of them, so I created them in pairs, even though we don’t analyze them that way. So, we have kill versus die, just versus even, should versus could, delightful brain exercise to do, and then-

 

Andy Luttrell: 

So, just to kind of zero in on that, what is it that makes those moral words different? Like what’s the thing that separates list number one from list number two? 

 

Ana Gantman:  

Yeah. That’s a wonderful question. One answer to that question is that when you ask undergraduates to tell you how related to morality those words are, the moral ones are more related to morality. Of course, the heart of it is a little bit hard to say, isn’t it? So, they are in some sense more intense, so we also ask them in addition to how related to morality they are, how positive or negative are they? They tend to be slightly more negative and also tend to be a little bit more extreme. If you adjust for those differences in the statistical analysis, the moral dimension holds. But it is all of those things, right? The moral domain is intense. 

 

Andy Luttrell: 

But you’re saying, so even if the word kill is more intense, and evocative, and extreme than the word die, it doesn’t really matter that it’s those things. It matters that one is moral and one, people say, “I don’t see morality at play here.” 

 

Ana Gantman:  

Yes. Or at least to a lesser extent. Yes. And so, trying to understand exactly what that difference is, the moral domain versus not, or a comparison domain like the pragmatic domain or something like that, is a big question that I think about all the time, and so we can come back to how to actually think about what the difference between those two things is. But for now, in thinking about the moral pop-out effect, we basically had two word lists, one where the words are all related to morality in some recognizable way, even if we can’t totally put our finger on exactly what that means. 

 

So, in addition to my trying to just think about making moral and non-moral words, we matched them, so like I said, we piloted them on these dimensions. We also matched them on dimensions that make it easier to see a word or not. So, words that are shorter are easier to see, and words that are more frequent are easier to see, so you’re much better at spotting the word dog, which people see all the time, than… I don’t know. I have to think of a word nobody says out of the air, like-

 

Andy Luttrell:

I’m on the edge of my seat. 

 

Ana Gantman:  

I don’t know. 

 

Andy Luttrell: 

Some unusual word. Yeah. Yeah. 

 

Ana Gantman: 

But you get the point. I’m actually illustrating the point even better than if I had been able to pick a random word out of the ether, because they’re not very accessible. And so, what we did is we put them into a modified lexical decision task, so a lexical decision task asks you to… You see a flash of letters on the screen pretty quickly and you have to decide if it’s a word or not a word, so we don’t mention morality at all, and the words are displayed for an extremely short time. So, at about, and this was also fun to do in the lab, is showing myself these letter strings at really short durations, like 40, 50 milliseconds, so you know something is there, but you barely see them, and then playing around. What does it look like if I show myself the word moral versus the word merit or something like that? Or we had hero and pilot in there.

 

And one funny thing that happened in the lab is that in the word list, there’s the word god, and as a non-religious person, I could not get that word to not be visible to myself. It just pops all the way out. And so, it’s short, and frequent, and also very emotionally laden, and potentially very moral, and so that was an encouraging moment for me. I thought, “This is going to work.” 

 

Andy Luttrell: 

So, you were the first participant, and you were just analyzing your own data as you went along. 

 

Ana Gantman:  

Yes. That’s a delightful thing about vision, actually, is that often with visual illusions and effects, you yourself just can’t help it, right? And part of a nice thing also about studying the intersection of morality and vision is that morality is full of shoulds. It’s full of what you’re supposed to say and what it is desirable to say, and possibly because we are highly attuned to what other people think is moral, we tend to have a very good sense of these things and are able to pick up on them. And so, it’s nice to use vision, where it’s really hard to control your responses. 

 

So, we showed participants these moral and non-moral words in this modified lexical decision task, where the words are just right at your threshold for visual awareness, so you know something is there and then the idea is that the moral quality of the words, or the words being morally relevant, just puts them right over the threshold so that you see them, and you can identify them. So, people are better at identifying morally-relevant words than non-morally-relevant words, and that is the moral pop-out effect. 

 

Andy Luttrell: 

So, if I can just sort of backpedal and make sure that that makes sense, because I think that whenever I teach lexical decision type stuff, I’m like, “This is this kind of weird, indirect…” It’s like not quite getting at the question blatantly, but it sort of reveals a state of how people view things. So, just to make sure I understand, and that people understand, people are just watching this stream of stuff on a screen, basically, right? So, the experience of being in the study is you’re just looking at a screen and stuff is happening on the screen. And sometimes the stuff that is displayed is a word, like a real word, in English, that you would say, “That’s a word in the dictionary.” Or it’s a bunch of letters that maybe looks like it could be a word, but isn’t actually a word, right? And my job, as far as I’m concerned, is just to go, “I decide whether what I’m seeing is a word in the dictionary or letters that don’t create a word in the dictionary.” 

 

And if the real word is one of these moral, kill, should words, is it that I’m… You said you didn’t look at reaction time? It’s just accuracy?

 

Ana Gantman:  

You can look at reaction time, as well. So, when we do this, you can either show all of the words right at that threshold, so this perceptual ambiguity is important, right? So, if I show you the words at 20 milliseconds and you can’t see anything, morality can’t matter, right? You can’t see anything at all. 

 

Andy Luttrell: 

Nothing can matter. 

 

Ana Gantman:  

Nothing can matter. And then if I show you the words for a long time, so that’s like 80, 90, 100 milliseconds, we find ceiling, because the word… The phenomenal experience that you have is the word is just hanging there, right? And so, everybody sees everything. When the words are presented right at that threshold for visual awareness, we see an advantage for moral words in terms of accuracy. So, on average, when people get it right, it’s a moral word versus a non-moral word. 

 

But you’re right that the experience when you do it is just half of the words that you’re seeing are words and half of them are just non-word anagrams of those same letters. And so, one thing that gives you a flavor for what it’s like to be in it is that at those really fast durations, where no one can see anything, people are actually biased towards saying non-word, because there’s something weird about saying that thing that I saw that was nothing, it was a word though. So, that also gives you a feeling, right? When you don’t know what it is, usually you get a sense that something flashed on the screen, or I didn’t see anything at all. I missed it. That can happen, too. 

 

Andy Luttrell: 

And so, that ambiguous, it’s just sort of like it’s right at the edge, like it’s right at the sweet spot where my brain can pull off the task, but it’s pretty hard to do, and the idea is that if the real word that shows up is a moral word, then people are better at saying, “Oh, that was a word.” They’re not better at saying, “That was a moral word.” They just… It’s somehow getting into their brain more easily than words that are not moral, right? That’s kind of the idea, right? So, we sort of reverse engineer that and go like, “Oh, all these moral words are getting through when we’re sort of hanging right there at the edge, but these non-moral words are falling off more often.” 

 

And so, what does that mean? So, in some ways someone listening could go like, “Okay, so I guess if I’m ever in a situation where someone’s presenting letters on a screen, I’ve learned something.” But what’s the bigger picture? What does this actually reveal about what morality means to people? 

 

Ana Gantman:  

Yeah. So, I think there are a couple of ways to think about what this means. In terms of how our minds think about morality, we think morality is important. It’s getting priority. And so, in some way the threshold for it reaching your awareness is lowered. And so, that’s something that we see, for example for threatening stimuli, right? So, things that we want to make sure we don’t miss, so sort of classic example of you’re hiking in the woods, there’s a brown stick-like thing on the ground. If you make a mistake and you think it’s a snake, but it was just a stick, no problem. If you make the mistake in the other direction, now you have stepped on a snake, right? 

 

And so, there’s a bias for detecting threatening things, and you can imagine by analogy something similar with morality, which is that if you’re in a new environment, for example, like think about your first year of college, for example. When you get to college, it’s really important that you learn now, regardless of what you were taught before, that plagiarism is a no. Colleges take plagiarism really seriously. You have to learn this new value. And it is really beneficial to you if you can pick up on that right away. 

 

And so, being attuned to these kind of things helps us navigate our social situations quickly, and if we screw them up, they have big consequences. And so, we want to get them right on the first try, so being attuned to them is important. So, it gives us a window into how and why things in the moral domain, they feel salient, and that salience is to help us navigate our social environments, because it’s really consequential, so there’s that element of it.

 

And there is another element, which is that even though it seems we’re not in situations very often where words are flashing on a screen, actually we are, right? So, lots of people, the first thing that they do in the morning is look at their phone, and open their email, or open Twitter, and those are strings of letters flashing by quickly on a screen. And this seems especially relevant now that we have this idea of the attention economy, right? Which is that people care about capturing your attention, whether they are marketers, or politicians, or whoever, and using moralized language seems to be something that catches people’s attention. And we have evidence that words that catch your attention to a greater extent in the lab are retweeted more often on Twitter when they have this sort of moral component to them. 

 

So, the moral pop-out effect both tells us something about how we prioritize moral information, and I think with the implication that it’s because it’s important for navigating the social world, maybe uniquely important, and that it has these implications for the kind of information that catches our eye and that potentially we share with others, as well. 

 

Andy Luttrell: 

I don’t know that you’ve unpacked it like this, in quite this way, but if you’re saying that what this is telling us is like we’re vigilant for moral information, like it really matters to us, it’s important in the same way that we’re like on guard for other stuff, but the vigilant part of it makes me wonder if this is more about a certain variety of moral word in that… Is it like the bad stuff, right? Are we less attentive to moral virtue, where we go like, “Oh, good. You donated to charity.” Yeah, that’s ethical, but it doesn’t grab my attention in the same way that like all of these serial killer documentaries on Netflix do. 

 

Ana Gantman:  

Yes, yes, yes, yes, yes. 

 

Andy Luttrell: 

So, maybe I’m expanding the scope out a little bit, but have you dissected sort of the words themselves?

 

Ana Gantman: 

A little bit. So, we don’t get a difference by valence, so there doesn’t seem to be a negativity bias where people are recognizing just the negative moral words, for example. And we also find that… So, one thing that’s interesting about the moral domain, I think, is that often when we think about bad moral things, we think about really extremely bad moral things, like serial killers. And when we think about good things, we think about like, “Oh, it was nice that you helped that elderly woman cross the road,” right? And they’re just not matched on how-

 

Andy Luttrell:

Extreme or intense. Yeah. 

 

Ana Gantman:  

Extreme or… Yeah, I would call it diagnostic. This is borrowing from Peter Mende-Siedlecki, that it’s the idea of how many people out of 100 would do this. And so, we tend to choose non-diagnostic positive things, but very diagnostic negative ones. And so, we don’t see a valence asymmetry in our pop-out data, and the way that I think about it is more through the lens of motivation. So, well, it brings us to this really interesting question of how special morality is, because I could also tell you that… Not tell you. I could make you very hungry by telling you not to eat, and if you follow, if you comply with that request, and I show you words related to food, you will be better at spotting words related to food. 

 

And so, it’s a general principle of motivation, and in that way, it’s not necessarily unique to morality in terms of the process, necessarily, although we do get it when you don’t manipulate a particular motivation at all, which does make it a little bit different from the hunger case. But it also has these interesting consequences, right? Like what you’re gonna share on Twitter. And I think it’s uniquely interesting in the morality case, but that could be my own personal bias. 

 

Andy Luttrell:

Yeah, so I’m glad you brought the specialness of morality up, because there are often these claims that morality is central and has all the important things, and I kind of see the pop-out effect as being some evidence in support of that, that like under natural conditions, this kind of information is just grabbing people’s attention. And so, I think you’re right that it doesn’t have to mean that the process is special, that there’s like a morality center in the brain that just loves seeing these words. But the morality goal sits at the front of the line and sort of camps out at the front of the line most of the time. And sure, like you say, if you have a study where you say, “Justice has been served,” people can go, “Okay, great. I don’t have to worry about that for now.” Because that’s what you find, right? That if I can satisfy the need, then this is no longer a priority. 

 

Ana Gantman:  

That’s right. 

 

Andy Luttrell: 

Because one could have thought of it as like just a regular priming thing of just like, “Oh, I’m just gonna remind you that morality’s a thing in the world.” And you go, “Oh yeah.” I always give the example in class for priming of like you just buy a Toyota Camry and all of a sudden all the cars on the road that you notice are Toyota Camrys. And that’s just because your brain is like, “Oh yeah, remember that kind of car? And now I see it everywhere.” Whereas the motivation thing would say like, “Well, that would only be true until you buy your car.” Then the motivation’s gone, which is more like this morality thing, where you go it’s like a hunger, like you say. And once you satisfy it, you can go, “Oh yeah, I know morality’s a thing. But I don’t need… I’m not yearning for it, so it doesn’t capture my attention in the same way.” 

 

Ana Gantman:  

That’s right. And it’s also really important to talk about the idea of priming in this case, because we got some valuable pushback on this finding right when it came out suggesting that we were actually only showing a semantic priming effect, and the idea is that the moral words are all more related to each other than the non-moral words are. And so, as you see them unfolding on the screen we’re activating the construct of morality for the moral words, but not the non-moral words. So, this was a really important critique that we wrestled with really seriously, and so two things came as a result of that. 

 

One is that we did some interesting analyses, so showing, for example, that you actually see moral pop-out on trial one. So, if the very first trial is a moral word, you’re more likely to see it than a non-moral one. And also, that the effect doesn’t get bigger as the experiment goes on, so there’s no time effect, so you would expect that if you’re going through the experiment and morality is being more activated in your mind, that you might see the effect getting bigger as you go, for example. So, the idea of priming is really important here, and I have revised my views on this to say that I think there is some priming in our experiment, but that it doesn’t explain our whole effect. 

 

So, one way that we have sought to kind of disentangle this question of how early in perceptual processing you’re seeing this advantage of morally-relevant words over non-moral ones is to use electroencephalography, so trying to get really fine-grained time course information, which EEG lets you do. And we’re finding that basically what happens is that you see differences in brain activity first distinguishing words from non-words, so you do that first, which makes sense. Although, you can imagine a really strong hypothesis for moral pop-out, that any hint of moral relevance at all registers right out of the gate. It’s not that. 

 

So, we’re constrained, right? It’s really not like a wild, early, early, early vision effect. 

 

Andy Luttrell:

So, you’re saying your brain kind of first has to recognize it as a word, like-

 

Ana Gantman:  

Yes. 

 

Andy Luttrell:

Step one, this is a word. 

 

Ana Gantman:  

This is a word.

 

Andy Luttrell: 

Got it. Okay. 

 

Ana Gantman:  

Then, after that, we see differences, greater P3 activity for moral words versus non-moral words. In trying to understand what that means, other studies have shown that sometimes the P3 is related to motivational relevance, which is quite fitting with what we’ve been talking about so far. It also suggests the idea that this is where you’re sort of converting the perceptual information into what you need to do, getting at this kind of transition phase from this is what I see to this is what I should do now. And it’s associated with this kind of cascade of activity that brings the information to conscious awareness. So, we’re getting support not for a super strong, very-early-in-vision hypothesis, but also not for just a memory effect or just a response bias effect. 

 

Instead, evidence that after you recognize words versus non-words, you get this sense that moral things are just… They’re like you said, going in the front of the line in terms of being aware of them. 

 

Andy Luttrell: 

I’m not an EEG guy, so the P3 part is… What does that tell you? Is that a band, or a time? What does that refer to? 

 

Ana Gantman:  

That’s time. 

 

Andy Luttrell: 

Okay. 

 

Ana Gantman:  

About 300 milliseconds after the stimulus appears. 

 

Andy Luttrell:

Got it. Okay. So, we’re learning. Right, so basically what we get from that is to say like we can see very, very quickly your brain is responding to the wordiness of the word, and then not too long thereafter it’s responding to the morality of the word, right? 

 

Ana Gantman: 

Yes. Yeah. 

 

Andy Luttrell:

So, it’s still pretty early. I mean, that’s not ages. 

 

Ana Gantman:  

That’s right. 300 milliseconds is pretty fast. 

 

Andy Luttrell:

So, let’s pivot a little bit to the stuff that you’re doing these days, which it kind of feels like it could be an outgrowth of the early morality stuff that you’ve done, but in terms of looking at like… I guess you’d characterize it as interventions related to sexual assault and how people moralize issues in that area. So, could you kind of give a flavor of like what is now sort of at the front lines of your research program these days? 

 

Ana Gantman:  

Yeah, so I think a lot about how to understand the difference between the moral domain and some other domain, or what it means for something to be moralized, and that is a big driving force in what I’m interested in, and so we are… Well, really broadly I read two books by David Graeber, The Utopia of Rules and Bullshit Jobs, and I thought they were both just incredible and very inspiring, and so right now in the lab we’re really interested in this intersection of morality and bureaucracy, so our everyday lives are just full of bureaucratic systems, right? 

 

So, a bureaucracy is a hierarchical organization where people’s jobs are compartmentalized, and they’re meant to be meritocratic, that is, with a clear path for moving up the ladder. So of course, colloquially, people hate bureaucracy, right? Nobody is like, “Oh, yay. The bureaucracy.” 

 

Andy Luttrell:

That sounds right to me. 

 

Ana Gantman:  

Yeah. People don’t like them, right? But their history is interesting, because they come out of these Enlightenment ideals that we want to improve fairness and equality, and not give people special favors. But Max Weber, a sociologist sort of famous for writing about this, coined the term “the iron cage,” and so what bureaucracy asks of you is that in exchange for equality and fairness, you have to give up some of your kind of unique humanness. It doesn’t matter if you’re the one in a rush. You have to wait in line at the DMV. And so, that just is, I think, so interesting to think about, how it intersects with our moral values, because that’s not really how we think about morality at all. 

 

And so, a lot of the really new stuff in the lab is thinking about how our moral values interact with rules, how it affects our perception of policies, and also getting-

 

Andy Luttrell:

So, is the idea with the bureaucracy thing kind of like it’s hard for people to believe you can do the right thing when there is a bunch of bureaucracy? Is that sort of what you’re getting at?

 

Ana Gantman:  

I wouldn’t put as fine a point on it as that. I would just say that our bureaucratic structures and our moral systems are at a mismatch. So, here, let me give you one example. One project that we have going on in the lab right now we’re calling phantom rules, and the idea, this came from my graduate student, Jordan Wylie, and she’s a big tennis fan, and she was watching the U.S. Open in 2018, and I don’t know if you remember this, but Serena Williams got hit with this coaching violation and it cost her like $17,000, and the title, and all this stuff. 

 

And what’s interesting about it is that people get coached all the time, so it’s a rule that everybody breaks and people do not care about. But Serena Williams, in addition to being one of the greatest athletes of all time, is often in the news for what she’s wearing or what she said, and so there’s a lot of pushback about the way that she violates norms in tennis. And so we think that’s one thing that phantom rules do. A social norm is a representation of how most people do or should behave, but norms are informal, so thinking about social norms specifically, not as codified rules, right? But as implicit rules that everybody knows. Those are harder to punish. Phantom rules, though, are codified rules, and so if you want to punish somebody, what you can do is pull a phantom rule out of the air. That’s why we call them phantom rules. They’re kind of invisible unless you have unfinished business, so these are Casper the Friendly Ghost rules, or phantoms. If you have unfinished business, then all of a sudden the rule appears to you as a way to enact the punishment. 

 

And so, we have been identifying phantom rules. These are things like jaywalking, so in the U.S. legal system, for example, but you can imagine any codified set of rules could have some like this. Jaywalking, loitering, downloading music, like pirating music, smoking marijuana in most states, these things that everybody kind of has the sense that everyone does. If you just ask people, they can recognize that these things are frequently broken, these rules. They are technically illegal and also they’re relatively morally inconsequential, right? So, this is a place where we can see the bureaucracy and the morality bumping up against each other. You can’t do it, but nobody cares, right? 

 

And so, what we find is that people think that phantom rules are more legitimate, more punishable, more blameworthy to violate if that person has also violated a social norm. So, you learn about a person, either they’re just hanging out or they violated a norm, like they were talking loudly in a movie, or they were trying to kind of provoke a stranger doing something rude, and then you also find out that they jaywalked. People think that jaywalking is worse. So, these rules increase in our estimation of how legitimate they are and how legitimate it is to punish someone for breaking them in a motivated way, in a way where we want to then use the rule to punish them. 

 

And this effect goes away if you learn that that person was punished for the norm violation. So, if you learn, “Hey, that guy just tried to provoke a stranger into a fight, but somebody told him off for it,” then you don’t get the phantom rule effect. You don’t find that it was bad that they jaywalked after that. So, we really think it’s, again, a motivational effect. 

 

Andy Luttrell: 

It sounds just-world-y, where it’s like, “I just need to turn to something legitimate-seeming to make a right out of a wrong.” Right? Like, someone did something bad, and I understand that I can’t turn to illegitimate things. Is that sort of it? There’s like enough rationality where people are like, “Well, okay, I can’t just do anything to punish this person, but if it’s already on the books…” I wonder if some of it’s like a responsibility thing, too. People would be like, “I didn’t make the rules. They already exist, so what do I do with them?” 

 

Ana Gantman:  

This is gonna hurt you a lot more than it’s gonna hurt me. Yeah. Right. And so, it could both be a sense of what ways are appropriate for me to enact punishment, and also they’re just less painful, right? It’s much more intense to go tell somebody not to do what they’ve done than it is to be like, “Oh, but they’re jaywalking. Could somebody handle that?” And so, it’s also just less… It’s less difficult.

 

So, that’s one exciting avenue that we’re interested in, about how our morality kind of scratches up against some of the rules that we have in place. A fundamental tenet of bureaucracy is that the rules are meant to apply equally to everybody, but phantom rules show us that we’re not… Even when we have rules that are meant to work like that, we don’t use them that way. We find another way to use these rules. So that’s one kind of intersection, viewing the actor through a moral lens. 

 

Andy Luttrell: 

To wrap up, can we talk about college parties? 

 

Ana Gantman:  

Oh. Yes, we can. Right. We didn’t talk about any of my work on … yet. Yes. Okay. 

 

Andy Luttrell:

I mostly am very intrigued by what it was like to do this. And so, I’ll ask you to describe it, but really in my heart what I want to know is how involved were you specifically versus sending minions to operate a study like this? 

 

Ana Gantman:  

Very. I was there. So, let’s zoom out a little bit. We’re talking now about the work that I have done with Betsy Levy Paluck at Princeton, thinking about the intersection of norms, and morality, and sexual assault on college campuses. And we started doing this work actually before MeToo was gaining a lot of traction, so before, for example, I had heard of it. But we were following this conversation, which really started on college campuses, about trying to push and put more behaviors into the domain of assault and harassment than we had previously thought, or that we had previously realized were going on so much, right? 

 

And so, while we were thinking about those issues, I learned at Princeton… So, Princeton has eating clubs. These are like fraternities or sororities, except they’re all coed. Students don’t live in them, but they do have parties in them, and we learned that one eating club had instituted a consent pledge, and so the idea is that before you go into the party, you read a definition of consent and if you don’t read the pledge, you can’t go to the party. So, it’s a huge signal for this club that they care about consent and they want you to care too, and you can’t come party with them if you don’t. 

 

And so, we thought this was fascinating and a wonderful way that… It’s important for these signals to be authentic signals. It’s really different if your dean or your resident tutor or whatever tells you that consent is important versus your friend. And so, we reached out to that club and asked them if they would be interested in kind of measuring its effectiveness, and in particular, we started talking about changing the wording a bit. So, the original wording of the consent pledge was fairly neutral. It was about not engaging with someone else or their belongings without their consent. And so we wrote what I sometimes call the psychological sledgehammer pledge, which had a reference to the university itself, to social norms, to moral values. We used really moralized language, like “non-consensual sex is violence,” and we just randomized, as people entered the club on a party night, which version of the pledge they got. 

 

So, they got to the door, somebody from our team was standing there randomizing original pledge or psychological sledgehammer pledge, or moral pledge, and we would stamp their hands so we could figure out who got what, and then on their way out, we had pizza and surveys. And so, we asked them questions trying to get at how much they moralized consent, and so this is now getting us also into interesting territory about figuring out how you measure whether somebody is moralizing something. 

 

And we really did not want to just ask, “Hey, is consent a moral value to you?” Because we just didn’t think that was gonna give us very much information. And so, we tried to get at it in a different way. We asked about norms: what percent of your friends appreciate the pledge? We asked about universalizing: should everybody pledge? We asked a question about… So, one thing that I think distinguishes morality is that it’s very black and white. It’s right or it’s wrong. And so, we ask, is consent confusing? So, getting at this kind of black-and-whiteness. And we found that at this first club, where they initiated the pledge, when students received the more moral or sledgehammer pledge, they were more likely to agree to these kinds of moralizing things, or to say no, consent is not confusing, or to show us these kinds of signatures that people are thinking in a moralized way. 

 

Then we did the exact same experiment at a different club, where they had not initiated the pledge, and our arriving, implementing this experiment, and randomizing who got what was the first time they had done the pledge at all. And in this case, you find that people who get the moralized pledge are less likely to agree to those items. This fits nicely with some of your work about moral matching. If you already have the value, then the moralized language works. It makes people agree more strongly. But if they don’t already, it may not be the right way to go in terms of how to talk about something like consent. 

 

Andy Luttrell: 

That’s so funny that you mentioned that. So, when I was reading, because this isn’t out-out, right? Like I’ve read, I saw a couple articles, and it was sort of like a, “It worked the one time and then not the other.” And I was like, “Well, I wonder if something about the disposition that the people entered into the agreement with would change whether one version of that message…” I mean, that’s sort of my whole thing, right? One version of a message versus another, depends on whether the person is open to that way of framing it, so it’s funny that we came to the same thought on that. 

 

So, I didn’t realize that there was sort of like an identified difference in terms of it being fresh for one group than for the other, right? That was kind of the main difference between the two trials? 

 

Ana Gantman:  

The main difference? It is a difference. Another difference is that they’re two different social spaces, where your expectations and norms about what the party will be like are different. 

 

Andy Luttrell:

Sorry, the second time around was not at-

 

Ana Gantman:  

At another place. At a different eating club. So, there are 12 of them and it was at a different one. 

 

Andy Luttrell: 

Got it. 

 

Ana Gantman: 

And so, there’s just a whole different set of… Yeah, of norms, and values, and expectations at the different clubs. They have their own identities. And so, this different context is not a match for that moralized language, and it doesn’t have to be, right? I hope it’s not coming off like, “You should react to the moralized version of the pledge.” I don’t care which version of the pledge is effective for making people care about consent, right? And I should also say that it’s not that we think that students necessarily are kind of remembering the pledge, but it’s really just about a signal, a real signal that it’s normative to care about this and that it’s a shared value in your group. 

 

So, it has to be a believable signal, and we think that’s part of what is going on there.

 

Andy Luttrell: 

Was it clear that this was endorsed by the eating club, rather than like, “Here are these scientists trying to get us to do this”? 

 

Ana Gantman:  

Oh yeah. 

 

Andy Luttrell: 

Because like you said, the normative part makes me think like, “Oh, if I understand that my group is rallying around this message, sounds fine to me.” Whereas if I feel like, “Oh, these sort of consent police are out here trying to get me to think about how important this is,” then that doesn’t carry the same normative weight. 

 

Ana Gantman: 

Yes. So, this was absolutely done in conjunction with the club. They actually approached us to do it. We didn’t approach them. So, there was movement within the club to try to do this, and officers in the club were the ones at the door, holding up the sign, helping us implement the randomization, giving out the surveys. We had our team of research assistants, but we also had help… We were working together with the club to implement this and measure it. 

 

Andy Luttrell:

I was also curious in terms of the amount of time that goes by between signing this or reciting this pledge and then doing the survey at the end. What kind of just ballpark, like what… Over the course of a night? Or right away? 

 

Ana Gantman:  

One night. One night. So, you go in, you take the pledge, you do your thing at the party, dance it out, and then on your way out, we are there with water, and pizza, and a survey. So, it varies depending on how long you stay at the party, but not more than four hours. 

 

Andy Luttrell:

It’s pretty remarkable that you get movement in general, like that this message, after an experience like that, that still there’s this kind of lingering like, “Oh yeah, you know, I sort of… I get that this is a moralized issue, or a clear issue, and an issue that we all should rally behind.” But the end goal there was those perceptions of moralization. Is there any movement beyond that, like in terms of what the consequences of moralizing are? 

 

Ana Gantman:  

Yeah. Great question. So, right now we’re actually looking into this exact question. We are looking at data for when students interact with the punishment system of the university. For some reason I’m blanking on a better word for that. So, you know, when they get in trouble, so we have the… Princeton graciously and incredibly is wonderful about this, and so they are letting us look at this administrative data. So, what we did after we saw this initial effect, where the wording of the pledge has a different effect depending on different contexts and values: there is one night at Princeton that we know of where every club is open and has its party, and so we encouraged every club on that night to try to do something with the pledge, or something of their own that they thought would make sense in place of the pledge. Something to signal that they’re thinking about consent and MeToo. And one thing that helped us at this moment is that MeToo was really kind of increasing in its visibility, and so everyone was sort of willing to talk to us about this, which was awesome. 

 

And we asked everyone to do something, and this way, because we know that this date exists at the end of every semester, it’s like right after or right before finals, we could look over time. So, right now we’re in the process of cleaning the data to try to look at whether… Is this trajectory the same, or did some discrete thing happen that’s changing the direction here? And so, we’re looking at instances, of course, of sexual misconduct, right? Reports of sexual misconduct, and maybe also looking a little bit into other areas of misconduct where students are getting in trouble, and just seeing if that night made a difference, where everyone is expressing that they care about consent at the same time. 

 

And so, we’re thinking that that could be a large enough signal that we might be able to detect it in the administrative data. But I should say that whenever you’re looking at incidence of sexual misconduct or assault, reporting, using reporting as the behavioral measure is tricky, right? Because we know that this is something that goes really underreported, and so actually more reporting can be a sign of improvement because people are thinking they will be believed and treated well when they report these incidents. And so, it’s not really like a true signal, exactly, of the number of incidents. 

 

Andy Luttrell: 

Well, nevertheless, I’ll be curious to see what you guys find. 

 

Ana Gantman:  

Yeah. Yeah. Yeah. This is all part of a larger behavioral science framework for thinking about sexual assault, because typically we see the discussion of this issue as either very clinical, like people who commit sexual assault are very comfortably othered, right? It’s like, “Oh, this isn’t anybody that we know. This is someone else.” And so, it would make sense, for example, to bring a character witness to defend them, to say like, “Oh, this is a good person. They would never do that.” So, either a very clinical perspective, and specifically in that clinical perspective the idea that people who commit acts of sexual violence are repeat, inveterate offenders, who are different on these kinds of traits, low in empathy, high in hostility towards women. And actually, the data for this idea of the repeat offender is not very good. And instead, it looks more like kind of a cultural type of problem, so not bad apples, but a bad climate, and so this is more of a sociological approach, right? Thinking about the factors at the societal level that make rape permissible. This is the idea of rape culture. 

 

But we want to come in in between, right? At this sort of immediate situation, activating mental states in people. And so that’s why we think that the pledge case, or thinking about the moralization of consent at the eating clubs, is a nice instantiation of this, right? It’s a situational moment, it’s a signal from other people, and you care about what they think, and it’s activating this idea of consent in the moment. So, getting at this more kind of person-by-situation approach and bringing it to this problem. 

 

We really want to encourage people who are psychologists to think about this issue, especially when we return to campus, because it’s happening in our places of work. And so, in addition to being a really important issue, it was an opportunity for me to learn how to do field work kind of “at home.” Because we… It’s happening on campuses, and so I definitely… Almost anything that social psychologists study I think is worth considering through this lens, thinking about the effects of norms, values, motivations, all kinds of ways that the things that we study are relevant to this. And so, we’re hoping to put out this behavioral science model to encourage social psychologists to think about this problem. 

 

Andy Luttrell:

Yeah. I was gonna say, spoken like a real social psychologist, like that’s like bread and butter, like, “Hey, this intersection is a good one.” A useful one.

 

Ana Gantman:  

Think here. Because what’s nice about these behavioral interventions is that we can evaluate them, which is great. We have the tools and the knowledge to do that, and also they tend to be relatively inexpensive, and so they can be scalable, potentially. 

 

Andy Luttrell: 

Well, I’ve taken enough of your time, so I want to say thank you for taking the time to talk about your work and I’ll be interested to see what those new analyses yield. 

 

Ana Gantman:  

Yes. Me too. Thank you so much for having me. This was delightful. 

 

Andy Luttrell: 

All right, that’ll do it for another episode of Opinion Science. Big thank you to Dr. Ana Gantman for coming on to talk about moral psychology and the work that she does. I had a great time talking with her. Check out the show notes for a link to her website and links to the research that we talked about. To learn more about this show, you can type OpinionSciencePodcast.com in your web browser, but be sure to hit go or whatever to actually get to the site. Subscribe to the podcast, take a second or two to write a nice review, and follow on social media @OpinionSciPod. Okay, enough of all of that. Thank you so much for listening. I’ll see you back here in a couple weeks for more Opinion Science. Bye-bye!