
Hunger: The “Proverbial” Emotion?

“The soul that is full loathes honey, but to a hungry soul, any bitter thing is sweet.”

We don’t know who authored this little proverb (which comes from the Bible’s Book of Proverbs), but you’ve got to hand it to whoever it was: he was a keen observer of human psychology. Hunger, he noticed, doesn’t simply motivate our souls to get off the couch and go find something to eat; instead, it changes our psychological relationship to food. When we’re hungry, he noted, we lower our standards: even the nasty bits are magically transformed into delicacies.

Our quality standards will sink so low when we’re hungry, in fact, that we’re even willing to eat foods that are bitter. Ordinarily, humans tend to dislike things that taste bitter because bitterness often indicates that the food we’re eating (it’s typically a plant) contains toxins whose very function is to prevent us from eating it: The plants don’t want to be eaten, and the bitter taste deters us from doing exactly that.* Hunger says, “Look. You’re starving. What difference is a little bit of plant toxin going to make? Eat that bitter thing today. You can do your detox diet tomorrow.”

What exactly is hunger? It goes without saying that hunger is a psychological state that deals with a discrete problem of survival and reproduction—your body is low on the energy it needs to keep itself running, and you won’t stay alive if your body stops running, and you can’t reproduce if you aren’t alive. Fine, but what kind of thing is it? Although many emotion researchers have been coy about saying it out loud, hunger is starting to look an awful lot like an emotion—at least on the view of emotions that evolutionary psychologists tend to favor.

Articulating a widely accepted evolutionary-psychology view of emotions, the psychologists Laith al-Shawaf and David Lewis defined emotions as “coordinating mechanisms whose evolved function is to coordinate a variety of programs in the mind and body in the service of solving a specific adaptive problem.” As examples of prototypical emotions, al-Shawaf and Lewis point to fear (which directs a variety of physiological, behavioral, and cognitive responses that help to keep us away from dangerous things), disgust (which directs a variety of physiological, behavioral, and cognitive responses that help to keep us away from infectious things), and sexual arousal (which directs a variety of physiological, behavioral, and cognitive responses that help to direct us, as they put it, toward “advantageous mating opportunit[ies]”).

So why not count hunger among them? Well, why not indeed? In another paper, Laith posited as much when he defined hunger as “a mechanism that coordinates the activity of psychological processes in the service of solving the adaptive problem of acquiring food.” And here, I think he was right on the money: If emotions are “coordinating mechanisms whose evolved function is to coordinate a variety of programs in the mind and body in the service of solving a specific adaptive problem,” and hunger is “a mechanism that coordinates the activity of psychological processes in the service of solving the adaptive problem of acquiring food,” then hunger is an emotion, is it not?

Although evolutionary psychologists’ view that hunger is an emotion puts them (I think) in the minority of emotion researchers, there are other scientists who think of hunger in a very similar manner. The neuroscientist E.T. Rolls has conceptualized hunger as what neuroscientists call a gate. When the hunger gate senses that the body is in a depleted nutritional state, it turns the bare sensory information that we pick up from the tastes, sights, smells, and textures of foods into behavioral mandates. When the gate is closed, “the soul loathes honey.” When it’s open, “any bitter thing is sweet.” Indeed, researchers have now found the gene regulator that opens and closes the hunger gate in the roundworm C. elegans (which is probably the most studied organism in the history of biology). It’s plausible that a similar molecular signaling system functions as a hunger gate for humans as well.
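To make the gate metaphor concrete, here is a minimal sketch of a gating function. The code, its names, and its constants are my invention for illustration; this is not Rolls’s actual model.

```python
# A toy model of the hunger "gate" described above: the same sensory
# input yields different behavioral output depending on the body's
# energy state. All names and constants are illustrative.

def hunger_gate(palatability: float, energy_reserves: float) -> float:
    """Motivational value of a food cue (positive = approach, negative = avoid).

    palatability:    -1.0 (bitter) .. +1.0 (honey)
    energy_reserves:  0.0 (starving) .. 1.0 (sated)
    """
    hunger = 1.0 - energy_reserves      # depletion "opens" the gate
    return palatability + 3.0 * hunger - 1.5

# "The soul that is full loathes honey":
print(hunger_gate(palatability=+1.0, energy_reserves=1.0))  # -0.5 -> avoid
# "...but to a hungry soul, any bitter thing is sweet":
print(hunger_gate(palatability=-1.0, energy_reserves=0.0))  # +0.5 -> approach
```

The point of the sketch is the interaction: taste alone does not determine behavior; the body’s energy state scales a bare sensation into a mandate.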

What psychological and physiological “programs” does hunger coordinate? Probably quite a few. Laith provides a long list of hunger’s miraculous powers (including some that are supported by existing research and others that are more speculative, requiring further research). Hunger, he ventures, influences perception (the bare sensory properties of foods, such as their sights and smells, lead us, gate-style, to craving or pleasure rather than indifference), attention (we notice food-related stimuli that we otherwise would ignore), problem-solving (we automatically start sorting through our options for finding food), categorization (we begin to automatically categorize things in the world as either “food” or “not food”), and memory (we find it easier to recall the locations of food items).

Support for these hypotheses is rolling in. In research that came out recently, scientists found that hunger makes food odors more attractive. Another team of researchers found that hunger makes Dutch undergraduate students more willing to eat novel foods such as African cucumber, fried snake, crocodile meat, kangaroo thigh, and the notoriously tastes-delicious-but-smells-like-sewage fruit known as the durian.

Effects like these bear witness to hunger’s long reach into our bodies, our thoughts, our feelings, and our behaviors. If the aphorist who wrote Proverbs 27:7 were alive today, and he could spare a day from scrawling down universal human wisdom in order to read up on the evolutionary psychology of hunger, perhaps he would write a follow-up proverb:

“Hunger gates incoming food-related sensory information into physiological responses, psychological states, and behavioral propensities that evolved for the function of re-uniting our bodies with nutrients.”

Or not. Either way, it’s fun to imagine that guy, probably wearing a flowing robe or something, hunched over his computer while he scrolls through Google Scholar with one hand and grips a massive, stinky durian with the other.

~

*Yes, as a matter of fact, I am aware that coffee, collards, and cocoa are bitter, along with many other things that are nice to eat and drink. There may be an adaptive reason why we’re attracted to bitterness in some cases—and it turns out those cases are the exceptions that prove the rule. But that’s a story for another day.

Helping Others with Purpose in the Age of COVID-19

The world is confronting a level of suffering unlike any we have experienced for many decades. The World Bank estimates that the COVID-19 pandemic will push as many as 150 million people into extreme poverty worldwide. Closer to home, nearly 6% of Americans are currently out of work, and one out of every seven households with children didn’t get enough to eat in the past seven days. Tragically, our non-profit organizations are suffering as well. Due to a lack of cash, ten percent of them may go out of business by the end of the year—and with them, the services they provide to their communities will disappear as well.

Most of us would like to do something meaningful to help others during these dismal times–especially during the holidays. In the face of so much suffering, however, it’s easy to experience choice paralysis: How do we choose how to help, and whom to help, and how much to help? In this world of finite resources, we can’t help everybody. And with so much need all around us, it is tempting to conclude that there’s no point in trying to help anybody. How can we overcome this despondency about the practical value of our own altruistic impulses?

Over the past several years, I have been studying the history of human generosity in order to figure out how our ancestors made these choices for themselves. In doing so, I have identified five principles that have, time and time again, helped people—individuals and societies alike—to overcome their own philanthropic choice paralysis. These five lessons are no less relevant today than they were when they were discovered decades, centuries, or even millennia ago.

  • Give cost-effectively. We are inclined to focus too much on how much time, money, and energy we put into our helping, and not nearly enough on how much of a difference that help will make in others’ lives. Many well-intentioned efforts simply relieve less suffering than others do. We should seek out information on how to get the most bang for our charitable buck.
  • Give globally and locally. The situation in the US is dire, and we are right to want to help people who are suffering here at home, but if the coronavirus takes hold in the world’s poorest nations, the consequences will be catastrophic. As you think about how to invest your time and treasure in charitable activity, keep an eye not only on the US, but also on how the coronavirus is affecting the poorest countries in Africa, Latin America, the Middle East, and Southeast Asia.
  • Give collectively. For as long as we have been human, people have pooled their resources to help each other through tough times. Two and a half millennia ago, we began to extend this same logic to the welfare of strangers: When we pool our resources by making contributions to highly respected, cost-effective charities, we can achieve economies of scale that allow us to solve problems decisively instead of merely putting Band-Aids on them.
  • Give cash. We can help people by serving food, offering emotional support, or performing basic acts of kindness, of course, but one of the most effective things we can give to people with acute needs is cash. You can’t know exactly what a stranger needs most urgently, but he or she does. You can’t pay the rent in diapers, and you can’t pay for child care with food or medicine. With money, though, you can pay for any and all of them.
  • Set it and forget it. Once you have set your priorities for how to give and how much to give, set your commitments in stone by putting them on autopay. By allowing the charities you select to draw monthly from your bank account, or to charge to your credit card, you won’t have to face choice paralysis again.

By observing these principles, we can not only avoid choice paralysis as we try to act on our most beneficent impulses; we can also imbue our volunteering and our giving with the meaning and purpose that comes from knowing that we are, as the ethicist Peter Singer put it, “doing the most good we can do.”

Originally published in The Social Scientist, the official magazine of the Division of Social Sciences at The University of California, San Diego.

Economic Pain is Coming for the College Graduates of 2020. But Who Will Suffer, and How Will They Suffer, and For How Long?

It’s official: America’s economy is in recession. Economists with the National Bureau of Economic Research announced yesterday that the United States economy, thanks to COVID-19, entered a recession in February, ending the longest economic expansion in the 166 years for which records have been kept.

Although very few Americans will emerge from this crisis with their financial situations unscathed, I am a university professor, so I think in particular about the economic futures of the students who come in and out of our institutions of higher learning. In light of the economic contraction we’ll likely face over the next couple of years, many of the nation’s newly minted college grads are concerned that they are entering a hostile job market that will leave them unemployed for months, underemployed for years, and, in the long run, less competitive due to “skills obsolescence”: In 2021, after all, the class of 2021 will have more up-to-date skills than the class of 2020, and in 2022, the class of 2022 will have more up-to-date skills than the classes of 2020 and 2021.

If the past is prologue, then they’re right to worry.

The Great Recession of 2007-2009 gives us some idea of what the future might hold for the college graduates of the next couple of years. A recent paper by the economist Jesse Rothstein reveals that the young people who graduated just after the Great Recession experienced significant economic setbacks from which they still have not completely recovered. These setbacks were not due merely to the immediate shocks that came from the depressed job market of 2010: Those graduates continue to face disadvantages in the job market to this day, a phenomenon that labor economists call scarring.

If history is a reliable guidepost, the Class of 2020 (along with graduates over the next couple of years) should also expect significant struggles: Although 10% of recent graduates may face unemployment in the short term, their more likely fate is underemployment. According to Jaison Abel and Richard Dietz’s research on employment patterns following the Great Recession, our new college grads are likely to face a job market in which fewer of the jobs that are available (compared to the pre-COVID era) will require a college degree. As many as 50% of recent college grads may confront this reality. And 10% of them might find that their first jobs out of college are so-called “low-skilled service jobs” that tend to pay minimum wage. Although we were wrong to catastrophize that the college students who graduated after the Great Recession would all be forced to take jobs as baristas in cool coffee shops, some did, in fact, become baristas.

If there is a bright side to be found in any of this, it’s in the fact that underemployed college graduates will still probably make more money than non-college graduates who work in the same job categories. Even among the underemployed, a college degree will fetch higher wages.

Nevertheless, the economic disadvantages for the class of 2020 and beyond are likely to be substantial and long-lasting. Indeed, a 2018 analysis from Boston College’s Center for Retirement Research, based on data from the aftermath of the Great Recession, suggests that the Class of 2020 should brace itself for significant student debt and for jobs that bring lower wages and fewer fringe benefits. These economic disadvantages may persist long enough to discourage them in their late 20s and early 30s from marrying and from buying their first homes. And they’re really going to need to pay attention to their retirement savings.

Not every group of college graduates will suffer this fate, of course. In fact, how well our recent grads endure the COVID-19 recession is likely to depend greatly on the fields in which they majored. Math, physics, engineering, education, and nursing majors will need to worry less than most about underemployment or employment in low-skilled service jobs. Those who majored in criminal justice, performing arts, leisure and hospitality, anthropology, or art history, on the other hand, may be about to encounter some stiff economic headwinds.

What is Classic Style? A Primer for Social Scientists

This quarter, I’m teaching a course called Writing About Thinking. The course got a soft launch a couple of years back, when I taught it as an undergraduate seminar at The University of Miami. Now that I’m at UCSD, I am teaching a more advanced version of the course to a very nice group of our PhD students. The course is based on a simple premise: Writing about thinking, which every psychologist must do, is hard, but it’s possible to get better at it by first thinking about thinking. The course, therefore, involves excursions into psychological research on communication, cooperation, memory, syntax, argumentation, and, of course, style.

One of the books we’re reading is Francis-Noël Thomas and Mark Turner’s little book Clear and Simple as the Truth, in which they explicate a style of writing they call Classic Style. It’s an intentionally coy, playful little book that teaches as much about Classic Style by showing what Classic Style is as by telling what Classic Style is.

The Classic Style, as Thomas and Turner lay it out, involves several guiding principles. Here are eight that I think are among the most important.

(1) It is based on the conceit that it is possible to say things about the world that are true, and that it is the writer’s job to point to these things. 

(2) It assumes a writer who “takes the pose of full knowledge” and is competent to explain everything the reader needs to know to understand the subject.

(3) It rests on the gambit that the reader is no less intelligent than the writer. The only difference between the writer and the reader is that the writer happens to know something that the reader doesn’t. The reader is perfectly competent to acquire this truth.

(4) It relies on a writer who is confident in her own abilities. She resists the temptation to argue for the importance of her subject matter, she abstains from complaining about how hard writing is or how hard-won her insights are, and she avoids self-reflection and rumination. The classic-style writer hides her effort, but because she exerted herself so mightily in advance, the end product of her effort appears effortless, as if it could have been written in no other way.

Here, Thomas and Turner convey this idea in what I regard as a triumph of Classic Style:

The classic writer is not like a television cook showing you how to mix mustard and balsamic vinegar. He is like a chef whose work is presented to you at table but whose labor you are never allowed to see, a labor the chef certainly does not expect you to share. There are no salt and pepper shakers on your table.

(5) Because the writer and the reader are intellectual equals, and because the writer is pointing at true things in the world, the two of them can have a conversation. Classic-style writing, when read aloud, sounds like one person talking to another, like a really good tour guide when you’re visiting a museum or a foreign city.

(6) Sentences and paragraphs go somewhere. Each unit of meaning, Thomas and Turner write, “has a clear direction and goal.” The payoff comes at the end of the sentence or passage, but to get to that payoff, the reader must follow a path, made of several steps, along which the writer is leading him.

(7) With all of its reality and pointing and seeing and touring, the Classic Style relies on the same image schemas we use to interact with the physical world. Ideas have weight; they develop. Arguments go somewhere. We follow lines of reasoning. By relying on physical imagery, Classic Style is able to depend on some of the cognitive processes that we use so successfully to navigate the real world.

(8) No topic is so complex that it cannot be explained.

The first part of Clear and Simple as the Truth is the exposition. The second part is “The Museum,” consisting of a variety of classic-style passages, along with Thomas and Turner’s analyses of them. The Museum is well worth a visit, but its examples are not as helpful for social scientists as examples from actual social science might be. I was therefore very pleased to discover yesterday that one of my favorite articles in psychology–Denny Borsboom, Gideon Mellenbergh, and Jaap van Heerden’s The Concept of Validity (which I am currently re-reading for a paper I’m working on, and which I blogged about earlier here)–is an exemplar of Classic Style.

At the opening of the paper, you find this marvel:

Please take a slip of paper and write down your definition of the term construct validity. Now, take the classic article of Cronbach and Meehl (1955), who invented the concept, and a more recent authoritative article on validity, for instance that of Messick (1989), and check whether you recognize your definition in these works. You are likely to fail. The odds are that you have written down something like “construct validity is about the question of whether a test measures what it should measure.” If you have read the articles in question carefully, you have realized that they do not conceptualize validity like you do. They are not about a property of tests but about a property of test score interpretations. They are not about the simple, factual question of whether a test measures an attribute but about the complex question of whether test score interpretations are consistent with a nomological network involving theoretical and observational terms (Cronbach & Meehl, 1955) or with an even more complicated system of theoretical rationales, empirical data, and social consequences of testing (Messick, 1989).

Who in psychology opens a paper like that? Too few of us.

A little further along, there’s this:

The argument to be presented is exceedingly simple; so simple, in fact, that it articulates an account of validity that may seem almost trivial. It is as follows. If something does not exist, then one cannot measure it. If it exists but does not causally produce variations in the outcomes of the measurement procedure, then one is either measuring nothing at all or something different altogether. Thus, a test is valid for measuring an attribute if and only if (a) the attribute exists and (b) variations in the attribute causally produce variations in the outcomes of the measurement procedure. The general idea is based on the causal theory of measurement (e.g., Trout, 1999).

And then this: 

That the position taken here is so at variance with the existing conception in the literature is largely because in defining validity, we have reversed the order of reasoning. Instead of focusing on accepted epistemological processes and trying to fit in existing test practices, we start with the ontological claim and derive the adequacy of epistemological practices only in virtue of its truth. This means that the central point in validity is one of reference: The attribute to which the psychologist refers must exist in reality; otherwise, the test cannot possibly be valid for measuring that attribute. This does not imply that the attribute cannot change over time or that psychological attributes are unchanging essences (cf. Kagan, 1988). It does imply that to construe theoretical terms as referential requires a realist position about the phenomena to which such terms refer. Thus, measurement is considered to involve realism about the measured attribute. This is because we cannot see how the sentences “Test X measures the attitude toward nuclear energy” and “Attitudes do not exist” can both be true. If you agree with us in this, then you are in disagreement with some very powerful philosophical movements that have shaped validity theory to a large extent.

In spite of their scholarly apparatus (the parenthetical citations, maybe slightly too much meta-discourse), these passages bear all of the marks of Classic Style. No hedging, no apologizing, no showing off, plenty of grounding in spatial imagery (with its taking of positions, reversings of causal orderings, and so on), and a confidence that even a very complicated idea can be expressed in plain English to any reader who is willing to take some time out to “talk” with an expert about it.

As a bonus, the paper itself pushes what I regard as a classic-style view of science, measurement, and validity. On Borsboom and colleagues’ view of measurement, things either exist or they don’t, and it’s only the things that exist that can be measured. And a measure has validity as a measure of an invisible entity (intelligence, self-esteem, reading comprehension, or whatever) only if that invisible entity is real and if that entity is involved in the chain of causal processes that lead to the representations that we take to be “measurements.” Reality is out there, validity is much simpler than you think, and when we do measurement, we take a sounding of real things. I love the fit here between the writers’ medium and their message: Borsboom and colleagues help their case along through clear, confident, conversational writing that asks the reader to do no more than look where the writer is pointing.
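If it helps to see the structure of their claim laid bare, the definition can be compressed into a single biconditional (the notation is mine, not theirs):

```latex
% Borsboom et al.'s definition of validity, restated in my own notation:
% test T is valid for attribute A if and only if (a) A exists and
% (b) variation in A causally produces variation in T's outcomes.
\[
  \mathrm{Valid}(T,\,A) \iff
    \underbrace{\exists\,A}_{\text{(a) the attribute is real}}
    \;\wedge\;
    \underbrace{\Delta A \;\leadsto\; \Delta\,\mathrm{Outcomes}(T)}_{\text{(b) causal production}}
\]
```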

The UK Publication of My Upcoming Book, The Kindness of Strangers, Has Been Delayed Until September 2020

I just received word from Oneworld Publications, which is publishing The Kindness of Strangers in the UK, that they are delaying publication until September.

By then, one hopes, the world will be in good enough shape that people will have the bandwidth to turn their attention to non-Covid matters.

Until then, please enjoy the UK cover for the book, which I think is just dandy.

Trust in the Time of Coronavirus: Low Trusters are Particularly Skeptical of Local Officials and Their Own Neighbors

A few days ago, I saw the results of a new Pew poll on Americans’ trust in the wake of the coronavirus outbreak. The poll, based on a random sample of 11,537 U.S. adults, addressed two questions: Which groups of people and societal institutions do Americans trust right now? And how do their background levels of generalized trust influence their trust in those specific groups of people and institutions?

The takeaway is troubling: High trusters and low trusters have comparable amounts of trust in our federal agencies and national institutions, but they have vastly different amounts of trust in the responses and judgments of their local officials and neighbors.

To examine these issues, the Pew researchers first divided the sample into three groups based on their responses to three standard questions for measuring generalized trust. Helpfully, they called these three subgroups Low Trusters, Medium Trusters, and High Trusters.
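Pew doesn’t publish its scoring pipeline in the report, but the kind of split they describe is easy to approximate. Here’s a sketch; the item names, the toy responses, and the cut points are my assumptions, not Pew’s actual method:

```python
import pandas as pd

# Sketch of a three-item generalized-trust scale split into terciles.
# Item names, responses, and cut points are invented for illustration.
df = pd.DataFrame({
    "trust_people": [1, 3, 2, 3, 1, 2],  # "Most people can be trusted" (1-3)
    "fair":         [2, 3, 1, 3, 1, 2],  # "Most people try to be fair" (1-3)
    "helpful":      [1, 3, 2, 2, 1, 3],  # "Most people try to help"   (1-3)
})

df["trust_score"] = df[["trust_people", "fair", "helpful"]].sum(axis=1)

# Divide the sample into Low/Medium/High trusters by total score.
df["truster_group"] = pd.cut(
    df["trust_score"],
    bins=[2, 5, 7, 9],  # hypothetical cut points over the 3-9 range
    labels=["Low", "Medium", "High"],
)
print(df[["trust_score", "truster_group"]])
```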

As many other researchers have found, generalized trust was associated with ethnicity (white Americans have higher levels of generalized trust than Black and Hispanic Americans do), age (the more you have of one, the more you have of the other), education (ditto), and income (ditto). These results are hardly surprising–ethnicity, age, education, and income are among the most robust predictors of trust in survey after survey–but they do nevertheless provide an interpretive backdrop for the study’s more important findings.

What really struck me were the associations between people’s levels of generalized trust and their sentiments toward public institutions and groups of other people. Low, medium, and high trusters had fairly similar evaluations of how the CDC, the news media, and even Donald Trump were responding: On average, people at all three levels of generalized trust had favorable evaluations of the CDC; on average, people at all three levels of generalized trust had lukewarm evaluations of Trump’s response.

Where the three groups of trusters differed more conspicuously was in their evaluations of their state officials, their local officials, and–most strikingly–ordinary people in their communities. About 80% of high trusters thought their local and state officials were doing an excellent or good job of responding to the outbreak. Only 57% of low trusters said the same.

But the biggest gulf in the sentiments of high trusters and low trusters was in their evaluations of ordinary people in their communities. Eighty percent of high trusters said that ordinary people in their community were doing an excellent or good job in responding to the outbreak. Only 44% of low trusters approved.


High trusters, medium trusters, and low trusters also had widely divergent opinions about the responses of ordinary people–both across the country and in their local communities.

Most people, regardless of how much generalized trust they had, thought their state governments, local governments, and local school systems were responding with the right amount of urgency to the outbreak. However, high trusters and low trusters differed greatly in their attitudes toward the responses of their neighbors. Whereas 16% of high trusters thought ordinary people in their local communities were overreacting, 35% of the low trusters–more than twice as many–thought ordinary people in their local communities were overreacting.

What I find troubling about these statistics is that all epidemics, like all politics, are local. The people who should be best equipped to tell you what’s going on in your community are the people who are paid to know what’s going on in your community and the people who actually live in your community. We’re entitled to clear and accurate information from local officials, and we should be ashamed that local people cannot always trust their judgment. But local officials are not the only source of information that people should be able to trust. An ordinary person in your community could, in principle, be able to tell you whether a teacher at your kid’s school or a cashier at your local grocery store tested positive. How much unnecessary risk do we expose ourselves to when some of us inhabit communities or worldviews that cause us to perceive our local officials and neighbors as liars, incompetents, or Chicken Littles?

Social Distancing By the Numbers: Who’s Staying Home?

The New York Times has been doing some excellent reporting about the spread of COVID-19. I particularly admire their graphics, which put the message into a visual form that anyone with the eyes to see can comprehend and appreciate.

One hopes that most Americans now know that COVID-19 spreads through person-to-person contact, and that the best way to avoid contracting or spreading the virus is to avoid interacting with others in close proximity–or better still, to simply stay home. Has this message sunk in? The visualizations published in today’s NYT (which are not only informative, but also beautiful), based on analyses of the cell phone movements of 15 million anonymous Americans, show just how much (or how little) people in each U.S. county have been curtailing their travel over the past few weeks.

The three lessons these data teach are striking and troubling.

First, there is tremendous county-by-county variation in how much people have reined in their travel. In some counties (in the light pastels and greys below), travel has ground to a near standstill, with average travel declining from five miles a day to around a mile or so:

Clearly, people in those light-pastel and grey counties have stopped driving their cars and have turned instead to walking their dogs:

Second, the declines in travel are not uniformly distributed across the nation. It is particularly noteworthy that counties with stay-at-home orders in place have had much steeper reductions in travel than those without such orders in place. People in counties with stay-at-home orders have curtailed their travel by 80% or so; those in counties without stay-at-home orders have curtailed their travel by maybe 65%. That difference of 15 percentage points might not sound like much, but it’s actually a huge effect, so readily comprehensible to the naked eye that you don’t even have to do any statistics on the data to appreciate the difference:

Third, the counties with stay-at-home orders are mostly concentrated in the Northeast, the West Coast, and the Midwest. Unsurprisingly, given how few stay-at-home orders are in place there, the counties in which people have reduced their travel the least are concentrated in the South. In Duval County, where I grew up, people were still driving about 3.4 miles per day this past Friday, making it the third least stay-at-home large county in the nation. (My family members in Duval County, to my great relief, have been locked down in their homes for two weeks.)

These figures say all we really need to know about staying at home during this crisis: Whether or not you like the idea of state or county officials ordering Americans to stay at home during outbreaks of communicable diseases (for what it’s worth, the federal government arrogated that power long ago, and has exercised it with impunity, as the need has arisen, for centuries), stay-at-home orders seem to be working (bearing in mind the standard caveats about correlation vs. causation). The apparent effectiveness of stay-at-home orders at getting people to stay at home is so striking that it’s almost as if people possess a tendency to heed the directives of people in positions of legitimate authority–particularly when those people have the ability to impose sanctions.

The other, equally clear lesson is that the Southern states, along with Texas, Oklahoma, Kansas, Wyoming, and a few others, are still in for a great deal of pain.

Empathy: Does “Putting Yourself in the Other Person’s Shoes” Make any Difference?

Are humans hardwired to care about strangers? Glancing over my bookshelves, titles such as Born to Be Good, The Compassionate Instinct, and The Altruistic Brain remind me that many of my scientific colleagues answer this question with a resounding yes. Each of these books, in its own way, teaches that the animal designated Homo sapiens has evolved for compassion. Caring about strangers is just part of who we are. If it doesn’t come effortlessly, all it takes is some patience and some practice. Attend a workshop. Volunteer at a homeless shelter. Read some fiction. Meditate. Compassion is inside of you. You just need to coax it out.

One of the ways we have been taught to coax empathy out is by deliberately trying to take the perspective of a suffering person. “Try to see things from his point of view.” “Imagine how it would feel to walk a mile in her shoes.” “How would you feel if the shoe were on the other foot?” (A surprising number of shoes make an appearance in these aphorisms.) We encourage our kids to take the perspective of the people who might be negatively affected by their nasty or self-centered behavior, hoping that our admonitions are doing something to turn them into better people. But does encouraging people to take the perspective of others actually work?

For a half-century (give or take a few months), experimental psychologists have been working under the assumption that perspective-taking does, in fact, encourage empathy. The social psychologist Ezra Stotland was the first person to try to encourage empathy with what have come to be called “perspective-taking instructions.” According to Stotland’s research, it worked.

By the way, here’s a fun photo of Professor Stotland with Ted Bundy. That’s Ted on the left; Ezra’s on the right. (This really deserves a blog entry of its own.)


Following Stotland’s 1969 lead, researchers have been using perspective-taking instructions in attempts to manipulate empathy experimentally for five decades. In the typical experiment, subjects encounter a stranger in the lab who is going through something difficult in his or her personal life; then, the experimenter asks subjects to do one of several things. To encourage perspective-taking, researchers might instruct subjects to

try to imagine how the person feels about what has happened and how it has affected his or her life. Try not to concern yourself with attending to all of the information presented. Just concentrate on trying to imagine how the person feels.

In a variant of these standard perspective-taking instructions, researchers instruct participants to imagine how they (rather than the suffering person) might feel in a similar predicament:

try to imagine how you yourself would feel if you were experiencing what has happened to the person and how this experience would affect your life. Try not to concern yourself with attending to all of the information presented. Just concentrate on trying to imagine how you yourself would feel.

To encourage still other subjects to remain objective (under the premise that doing so will squelch empathy), researchers instruct subjects to

try to be as objective as possible about what has happened to the person and how it has affected his or her life. To remain objective, do not let yourself get caught up in imagining what this person has been through and how he or she feels as a result. Just try to remain objective and detached.

In the ideal experiment, researchers also assign some subjects to an experimental condition in which they receive no instructions at all. They just learn about a person in need without any prompting to do anything in particular in response. These subjects serve as a control group that enables experimenters to find out both (a) whether perspective-taking increases empathy, and (b) whether remaining objective reduces empathy. Without such a control group, any differences in empathy that arise between people who engage in perspective taking and people who remain objective cannot be attributed to either condition: As a result, we can’t know whether perspective-taking raised empathy above its typical levels, remaining objective lowered empathy below its typical levels, or a little of both. As we’ll see below, this turns out to be really important.
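A toy numerical example (the numbers are invented) makes this identification problem concrete:

```python
# Why the no-instructions control group matters: two invented scenarios
# that produce the *same* perspective-taking vs. remain-objective gap.

# Scenario A: perspective-taking raises empathy above baseline.
baseline_A, perspective_A, objective_A = 3.0, 5.0, 3.0
# Scenario B: remaining objective lowers empathy below baseline.
baseline_B, perspective_B, objective_B = 5.0, 5.0, 3.0

# Without the baseline, the two scenarios look identical:
print(perspective_A - objective_A)  # 2.0
print(perspective_B - objective_B)  # 2.0

# With the baseline, they come apart:
print(perspective_A - baseline_A)   # 2.0 -> perspective-taking helps
print(perspective_B - baseline_B)   # 0.0 -> objectivity hurts instead
```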

My colleagues and I, with the psychologist William McAuliffe in charge, just published a statistical review (called a meta-analysis) of the results of every experimental investigation that we could get our hands on that compared the effects of these instructional sets on self-reported empathic emotion toward a needy stranger.

The paper was published here, but you can download a pre-publication copy of it here.

We found 85 papers in all. From these 85 papers, we extracted 177 comparisons between pairs of the four experimental conditions (imagine-other, imagine-self, remain-objective, no-instructions).
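For readers who want a feel for the mechanics: each two-group comparison yields an effect size and a sampling variance, and those get pooled across comparisons. Here is a minimal sketch of standard DerSimonian-Laird random-effects pooling; the numbers are invented, and this is not the paper’s actual analysis code:

```python
import numpy as np

# Minimal DerSimonian-Laird random-effects pooling. The effect sizes
# and variances below are invented for illustration only.
y = np.array([0.30, 0.10, -0.05, 0.22, 0.15])  # per-comparison effect sizes (d)
v = np.array([0.04, 0.02, 0.05, 0.03, 0.02])   # their sampling variances

w = 1.0 / v                                    # fixed-effect weights
mean_fe = np.sum(w * y) / np.sum(w)
Q = np.sum(w * (y - mean_fe) ** 2)             # heterogeneity statistic
k = len(y)
tau2 = max(0.0, (Q - (k - 1)) / (np.sum(w) - np.sum(w**2) / np.sum(w)))

w_re = 1.0 / (v + tau2)                        # random-effects weights
pooled = np.sum(w_re * y) / np.sum(w_re)
se = np.sqrt(1.0 / np.sum(w_re))
print(f"pooled d = {pooled:.3f}, 95% CI ± {1.96 * se:.3f}, tau^2 = {tau2:.4f}")
```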

Here’s a very quick summary of what we found when we meta-analyzed those 177 two-group comparisons. There are some surprises.

1. Imagining how the needy person feels does not generate any more empathy than imagining how you yourself might feel in the same situation.

In other words, “Imagining how he/she might feel” = “imagining how you might feel.”

2. Imagine-other and imagine-self instructions do not generate any more empathy than receiving no instructions at all.

In other words, Perspective-Taking = No Instructions.

3. People instructed to remain objective experience less empathy than people who are not given any instructions at all.

In other words, No Instructions > Remain Objective.

4. People who get perspective-taking instructions experience more empathy than people instructed to remain objective.

In other words, Perspective Taking > Remain Objective. (See: the transitive property.) This contrast is the only reason why perspective-taking instructions appear to boost empathy. They don’t. Instead, remaining objective reduces empathy.

For people who like to stare at the results of meta-analyses, here is a figure that summarizes those results reasonably well.


Take a moment to let these findings sink in. What they show is that perspective-taking instructions do not, as a matter of fact, increase empathy; they’re no better than being given no instructions at all. Instead, it appears that “remain objective” instructions lower empathy.

By the way, we also examined whether perspective-taking instructions affect men’s and women’s empathy differently, or whether they alter our empathy levels differently when we are trying to empathize with people who belong to our own social groups than when we are trying to empathize with people who belong to other social groups. These two factors made no difference.

So, as I see it, there are three big take-aways.

  • While it may be true that trying to take the perspective of needy others encourages empathy for their plights, it’s not true to say that there’s a great deal of experimental evidence for it. Things can be true without being supported by experimental evidence, of course, but the lack of support must surely be some kind of wake-up call to re-think our assumptions.
  • Perspective-taking instructions previously appeared to increase empathy only because they were being compared against “remain objective” instructions, which do, in fact, reduce empathy.
  • It does appear that we know how to restrain people from feeling empathy: just tell them to remain objective and ignore how the needy person might be feeling.

What should we make of these results? I see a glass-half-empty interpretation and a glass-half-full interpretation.

Glass-half-empty: The major tool that social psychologists have counted on for half a century for increasing people’s empathy doesn’t work. We have a nice tool for reducing empathy, though: Just tell people to ignore the suffering person’s feelings.

Glass-half-full: People could be walking around in the world with the same amount of empathy for needy others as they would experience if they were explicitly instructed to take the perspective of needy others. Maybe we take needy people’s perspectives so intuitively that we can’t get any additional bang for the buck by trying to do it deliberately. Maybe, by default, we’re more empathic than callous.

Reference: McAuliffe, W. H. B., Carter, E. C., Berhane, J., Snihur, A. C., & McCullough, M. E. (2019). Is Empathy the Default Response to Suffering? A Meta-Analytic Evaluation of Perspective Taking’s Effect on Empathic Concern. Personality and Social Psychology Review. https://doi.org/10.1177/1088868319887599

The Golden Rule: Gold or Fool’s Gold?

The Golden Rule—do unto others as you would have them do unto you—didn’t get its start with a 1961 Norman Rockwell painting. It’s the ethical bedrock for the major world religions, including Hinduism, Confucianism, Judaism, Christianity, and Islam, so it has been swimming around in people’s consciences for at least two millennia. In his Analects, for example, Confucius wrote, “What you do not want done to yourself, do not do to others.”[1] The Mahabharata of Hinduism gives similar guidance: “Knowing how painful it is to himself, a person should never do that to others which he dislikes when done to him by others.”[2] The book of Leviticus from the Hebrew Bible features a Yahweh who commands his followers, “Thou shalt love thy neighbor as thyself.” Five centuries later, Jesus took the “love thy neighbor” idea even further by using the Parable of the Good Samaritan to assert that people had ethical obligations to help strangers in their times of need—even strangers from other ethnic groups.

Confucius, Yahweh, and Jesus didn’t teach the Golden Rule because they thought it was cute: They taught the Golden Rule because they believed it makes for more ethical people. Not everyone agrees that it does, however—or even that it could. In fact, many modern philosophers think the Golden Rule is philosophical claptrap.

In his defense of the Golden Rule, the philosopher W. T. Blackstone first listed the charges: It’s a flawed ethical principle because it implies that we can figure out how to treat others morally simply by consulting our own wants and needs. It’s flawed because it leads us to treat others immorally if immoral treatment is what we want for ourselves. It’s flawed because it insists that we can look inward to discover what is right, even though this habit of thought breeds ethnocentrism and motivates us to perpetuate society’s moral status quo, no matter how ethically flawed the status quo might be.[3] Or imagine a judge who uses the Golden Rule to justify why she decides to let a convicted mass murderer go free: If the shoe were on the other foot, she would want to avoid prison time, so shouldn’t she extend the same consideration to the killer? Because the Golden Rule seems to have these sorts of limitations, the ethicist Kwame Anthony Appiah has called it “fool’s gold.”[4]

I wonder if the Golden Rule really deserves so much cynicism. It seems to me that most of the philosophers’ worries are quite silly unless you assume that the person attempting to live by the Golden Rule has the intellect and reasoning powers of a five-year-old. A masochist who gets sexual pleasure from abuse at the hands of others, yet seeks to live by the Golden Rule, doesn’t follow it so slavishly as to assume that it obligates him to abuse other people in the same way. Instead, he knows that others might have tastes and preferences that differ from his own. Likewise, the judge who seeks to follow the Golden Rule in her professional decisions doesn’t need to vacate the sentences of mass murderers. Instead, she also considers her obligations to the law-abiding people who would not want convicted murderers running around free.

The philosopher Harry Gensler is the world’s leading exponent of the idea that the Golden Rule can, when read properly, withstand close ethical scrutiny. As he explains in his book Ethics and the Golden Rule, many philosophical objections to the Golden Rule vanish once we have a better grasp on how to implement the rule intelligently and reasonably. Gensler recommends a series of four steps, which he summarizes with the acronym KITA (Know-Imagine-Test-Act). In the Know step, we take time to learn what will help a specific person and what will harm him. A conscientious golden-rule follower does his homework. After having learned about the other person’s basic needs and desires, a conscientious Golden Rule follower will then implement the Imagine step by trying to imagine how his possible courses of action will affect others. Gensler isn’t talking about idle, half-second flashes of intuition. He’s talking about deliberate effort to work through the possible consequences for everyone who might be affected. The judge has to consider not only how her sentence will affect the convicted criminal, but also how it will affect the citizens who don’t want a convicted murderer released back into their community.

After obtaining the relevant facts and running all of the simulations to figure out which courses of action will harm and which will help, a conscientious Golden Rule follower can proceed to the third step in KITA, which is a Test for consistency: We must ask whether the action we have in mind is what we would want for ourselves. If the behavior we have in mind passes this consistency test—if we conclude that the behavior we intend to impose on someone else is consistent with how we think we would desire to be treated in exactly the same circumstances, including sharing that person’s beliefs and desires—then we are ready to execute KITA’s fourth step: we can Act.[5]
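Gensler presents KITA in prose, but it is essentially a decision procedure, so it can be caricatured in code. In this toy sketch, every table and test is a hypothetical stand-in for the human judgment Gensler actually has in mind; only the control flow of the four steps is the point:

```python
# A toy rendering of Gensler's KITA (Know-Imagine-Test-Act) procedure,
# applied to the judge's dilemma above. All data and rules are invented
# stand-ins for human moral judgment.

# 1. KNOW: do the homework on each affected party's needs and desires.
needs = {
    "convict":   "freedom",
    "community": "safety",
}

# 2. IMAGINE: deliberately work through how each action lands on each party.
consequences = {
    ("release",  "convict"):   "gets freedom",
    ("release",  "community"): "endangered",
    ("sentence", "convict"):   "fairly burdened",
    ("sentence", "community"): "protected",
}

def passes_test(action):
    # 3. TEST: in exactly these circumstances (sharing each party's beliefs
    # and desires), would I consent to this treatment? Toy rule: a fair
    # burden is consentable; being endangered is not.
    return all(consequences[(action, party)] != "endangered"
               for party in needs)

# 4. ACT: take an action only if it passes the consistency test.
for action in ["release", "sentence"]:
    if passes_test(action):
        print("Act:", action)   # -> Act: sentence
        break
```

Even this caricature makes one of Gensler’s points vivid: the Test step quantifies over everyone affected, not just the person standing in front of you.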

I appreciate Gensler’s efforts to defend the Golden Rule’s honor. I am not totally satisfied that it does everything we might want from an overarching guide to a moral life, but it does seem to make others’ welfare a primary moral consideration, which sits well with my own Utilitarian leanings. In addition, Gensler’s KITA routine does seem to help us avoid most of the pitfalls associated with a five-year-old’s application of the Golden Rule—even though I am skeptical that most of us would take the time to go through all of those steps in real life. Who has the time to do all that homework?

Even so, the fact that living by the Golden Rule is cognitively difficult doesn’t mean it’s dumb to try.

 

[1] Confucius (trans. 1861), Book 15, Chapter 23.

[2] Krishna-Dwaipayana Vyasa (trans. 1896), Book XII, Section 259, p. 620.

[3] Blackstone (1965).

[4] Appiah (2006), p. 60.

[5] Gensler (2013).

Appiah, K. A. (2006). Cosmopolitanism: Ethics in a world of strangers. New York: Norton.

Blackstone, W. T. (1965). The Golden Rule: A defense. The Southern Journal of Philosophy, 3, 172-177.

Confucius. (trans. 1861). The analects of Confucius (J. Legge, Trans.). Pantianos Classics.

Gensler, H. J. (2013). Ethics and the golden rule. New York: Routledge.

Krishna-Dwaipayana Vyasa. (trans. 1896). The Mahabharata (K. M. Ganguli, Trans.). (n.p.): Author. (Reprinted 2018).