Migration and Reform in Early America: An Interview with J.T. Jamieson

JT Jamieson

What role did American social and moral reformers play in managing human migrations? J.T. Jamieson, a PhD candidate in UC Berkeley’s History Department, examines how social reformers in the first half of the 19th century sought to control migration and insert their own understandings of morality, social benevolence, and humanitarianism into the lives and experiences of migrants. In so doing, he argues, their reforms frequently perpetuated racial supremacy, religious supremacy, and Christian expansionism. In other words, they sought to determine who belongs in America — and who doesn’t.

Jamieson’s dissertation, “A Mere Change of Location: Migration and Reform in America, 1787-1857,” integrates the histories of religion, immigration, slavery, Indigenous dispossession, and Western expansion to argue that 19th-century social and moral reformers attempted to control the mass migrations of various peoples: African Americans, Indigenous peoples, European immigrants, and American settlers. A forthcoming journal article, “Home Work: Religious Nationalism and the American Home Missionary Society,” will appear in Early American Studies: An Interdisciplinary Journal in 2023.

Matrix Content Curator Julia Sizek spoke with Jamieson about his research. Listen to the interview below, or on Google Podcasts or Apple Podcasts. (A transcript of the conversation is included below, edited for length and clarity.)


Q: We’re familiar with migration as being a very hotly contested moral and political debate, but your research investigates this topic during a time period that is less familiar. What were the big debates about migration during the 18th century?

You’re right that it is a major hot-button topic in political discourse and cultural politics today. And it has been for most of the United States’ history, including in the late 18th century and throughout the first half of the 19th century. There were many different political debates about citizenship, about migration, and about different ways to control migration, usually on a local, municipal, or state level. But the debates I’m most interested in for my research have less to do with policy and law, and more to do with cultural and social debates about morality and migration, and about the consequences that different kinds of people moving from one place to another — either within the United States or beyond its borders — had for the moral character of American society in general.

I look at different kinds of moral and social reformers from the late 18th century up to the Civil War, the way they debated whether or not they should control the mobility of different kinds of people in different ways, and how they thought that one’s moral character could be influenced by a change of location. I look at reformers who were interested in European immigration, reformers interested in expelling, removing, and deporting Black and Indigenous peoples, and reformers who were interested in the movements and the fate of European and Anglo-Americans who were settling in the American West. I look at how, in all these different contexts, different moral and social reformers developed ideas about moral character, what they thought it meant for one big group of people to move from one place to another, and how that became a kind of mechanism for them to manage inclusion and exclusion in the body politic.

Q: Who were the migrants coming to the United States during this time? 

That’s a good question. A large number of enslaved people were brought to North America during the 18th century and the beginning of the 19th century; the international slave trade was then technically abolished, but it continued illegally. And then a lot of the people coming later were Europeans. During the Gold Rush era, you saw transpacific migrations, as well as migrations from other parts of North America, Central America, and South America. 

But I’m interested not only in people moving into the United States, but also in people moving within it and out of it. To take the example of slavery, one of the main case studies in my dissertation is the American Colonization Society (ACS). The ACS was a national voluntary organization that advocated for the deportation of enslaved and free Black Americans to Africa. It was really a wide-ranging movement with lots of different splinter organizations, and it attracted lots of different people with various degrees of racism and various motivations for wanting to expel free and enslaved Black people. 

I look at that organization from the perspective of moral and social reform and the people who thought of themselves as humanitarians, and who argued that the moral thing to do was to expel Black people and keep the United States as basically a White republic. They thought this was moral for a couple of reasons. They thought it would help Black people achieve a sense of moral uplift or improvement in their moral character if they were removed from White people. They thought that they could utilize colonies of Black Americans in Africa to serve as missionaries and evangelists to Indigenous Africans. There were different ways that they said, “If we deport and expel and uphold a racially exclusionary population, we’re actually doing something good and humane and humanitarian.”

There’s this logic that social benevolence is good, that they were doing something good for these people. But the effect was to claim that they [Black Americans] did not belong in the United States. There was this weird logic in the social benevolence that ultimately upheld racism and racial exclusion. But in their minds, they were drawing on arguments about moral uplift, evangelical regeneration, and exporting Christianity throughout the world.

Q: One of the things that’s interesting about this group of White people proposing that Black Americans should move back to Africa is that it contained the idea that segregation is good, and that they had learned something in the US that they could take back with them as missionaries. How did they portray that aspect of this supposedly moral work?

It came down to a lot of propaganda. Often it was a fiction, especially when other White missionaries saw these colonies in action: how mismanaged they were, how much the Black colonists were suffering, and the conflicts they endured with Indigenous Africans.

It fell on colonizationists, and especially colonists, to think of themselves as reformers and humanitarians, and to create this image in the United States of Black people who were willing to do things they may not actually have been willing to do, saying they were ready to embrace their new life as voluntary emigrants in some other land. They did a lot of propagandistic work to create this fiction of the excited, willing emigrant who was, in reality, often being forcefully removed from the United States. 

There’s an image on a membership card for the Pennsylvania Colonization Society that depicts a scene of White men delivering Black colonists to Liberia. And it’s a scene of total jubilation, as if they have found their true proper home, and White people can take some kind of pleasure in having effected this transatlantic migration back to Africa.

So in many cases it was a total fiction that depended on the propagandistic work of colonizationists, in their publications and in sermonizing to their congregations, about how good and right this actually was, without recognizing the inherent racism and violence in this form of what they thought of as social benevolence.

Q: As a historian, how did you approach the archives of printed materials and separate propaganda from fact?

It’s definitely hard to distinguish the two; often it is a challenge to figure out when people were speaking genuinely, and when their words were intended to cover up their true motives. For historians, it often comes down to judgment calls and trying to fill out the worldviews of the people you are examining: to figure out whether their public and private writing seem to align, whether they seem to be genuinely thinking, I’m doing some kind of humanitarian work here, or whether they’re really just using this idea of charity, benevolence, and philanthropy as a cover for more sinister motives. It depends on taking lots of different sources into account — public sources that were published and meant for public consumption, as well as private writings — and trying to reconstruct a worldview that in many cases seems alien and weird to us today. It’s about trying to situate that within the logic of the historical moment it existed in.

Q: What were some of the archives you used for this research?

I used some archival collections from here in Berkeley, from the Graduate Theological Union, particularly for a chapter about domestic missionaries who were concerned with the migration of settlers to the American West. Some missionaries or missionary organizations feared the depopulation of their congregations as people moved west, and they complained and tried to do something to stop and control it. Meanwhile, others were celebrating the migration of people to the American West as a means to expand Christianity. The papers of this organization, the American Home Missionary Society, are at the Graduate Theological Union, along with lots of other great material. I also looked at materials in Kansas related to Kansas settlers, and to people who were involved in trying to remove Indigenous people from east of the Mississippi. There were also archives in the Northeast, where a lot of these organizations and this culture of social reform and benevolence were mainly rooted — in New York, Boston, Philadelphia, and places like that.

Q: One of the points you raised is that a lot of these organizations or groups were associated with religious groups. What was the structure of these societies at this time, and where did they fall in relation to the major religious movements in the US?

The sort of reform culture that I’m speaking about came out of a very broad culture of philanthropy, charity, and social benevolence that emerged in the first half of the 19th century, as more wealthy people and an emerging middle class developed a kind of humanitarian attitude. It was very wide-ranging, and there were lots of conflicts within it. People were interested in prisons and education as well as slavery, but a lot of people were also interested in missions. Many people who subscribed to this culture of benevolence and reform in the 19th century tended to be Protestants and to have a cultural Protestant ethic that inspired their views on social benevolence. But the really overtly religious work came in the form of missions or movements to found and organize Sunday schools.

Some of the major missionary organizations that I looked at include the American Home Missionary Society, which was often concerned with White American and European settlers moving to the American West. There was also a foreign counterpart to that, the American Board of Commissioners for Foreign Missions (ABCFM), which was a large national society working beyond the borders of the United States, and also with Indigenous people in some cases. Some of the missionaries associated with this organization, or with organizations of other religious denominations, either supported or tried to argue against the deportation of Indigenous peoples, and they made various moral arguments, similar to how people made arguments about the removal of Black Americans to Africa or elsewhere. But religion played a big role as an undercurrent for a lot of these movements and these attitudes about benevolence and reform. There were also missionary organizations that turned into very big national missionary organizations, which tended to be either Presbyterian or Congregationalist, though the Baptists and the Episcopalians had some as well. These missionary organizations were very visible in this culture of benevolence and reform.

Q: You noted that the missionaries were working domestically within the United States and also outside of the United States. Where were they operating primarily?

The ABCFM was working among Indigenous peoples in what we would now consider the boundaries of the United States, as well as in places like Hawaii, Burma, or Africa — places all over the world. But the domestic missionaries were much more concerned with ideas about migration, more so than the foreign missionaries, who were actually moving farther from the United States. The domestic missionaries were concerned with the migration of European and Anglo-Americans throughout the West, and with the depopulation of churches in the U.S. They were also concerned with immigration. 

They often viewed the world as collapsing in upon the United States. In part, they had ideas that were very rooted in nativism, specifically in anti-Catholic nativism, where there was a fear of mainly European immigrants coming into the United States. There were lots of conspiracy theories about Catholic European immigrants coming under the thumb of insidious papal forces that would then destroy American democracy. There was a lot of nativistic fear of immigrants, and especially of Catholics. 

They had come to think that God had purposefully designed the world and had himself inspired migrations from abroad into the United States, whether from China or from Europe. They saw an almost millennialist promise in the migration of other people into the United States, because God was bringing them to us to convert them. So there was this kind of pessimism, this fear and anxiety about immigrants coming into the United States, from the perspective of domestic missionaries. 

There’s also a great hope and optimism about it, because they think that God is making the world collapse into the United States, and it’s in the religious theater of the United States that they will be converted. It’s almost like they were doing foreign missionary work, converting the world, but within the territorial boundaries of the United States. Interestingly, it was really these domestic missionaries, working within the bounds of the United States, who were much more concerned with migratory flows of various kinds, whether within the United States or coming into it from other countries. It’s the domestic missionaries that seem to really be thinking about the meaning of migration, more so than the foreign missionaries.

Q: One of the cases that you look at is the famous case of Bleeding Kansas. What was Bleeding Kansas, and how was it significant for the migration debate?

Bleeding Kansas, as probably many Americans know, refers to a series of violent episodes in Kansas Territory in the mid-1850s. It’s one of the things that precipitated the sectionalism of the Civil War. The violence erupted over debates about whether Kansas would be established as a free state or a slave state. It was up to the residents to vote on whether they wanted their state to be a free state or not. The effect was that a lot of pro- and anti-slavery people started talking about sending or supporting migrants who were going to Kansas, with the presumption that populations sympathetic to either slavery or anti-slavery would then vote to make Kansas a slave state or a free state. That would have major political repercussions at this time, and so there was lots of violence and political and cultural debate about the destiny of Kansas.

I look at it through the lens of emigrant aid societies. The big one is something called the New England Emigrant Aid Company, or NEEAC, which was formed by a group of people in Massachusetts, both religious reformers and wealthy philanthropists, who supported this cause. Their idea was that they would help “Free-Soil” (or anti-slavery) settlers on their journey to Kansas by pumping capital into settlements in Kansas. 

This is important to my story in two ways. One is that these organizations — the NEEAC and other organizations that wanted to support Free-Soil and anti-slavery Kansas settlers — viewed migration as a kind of tool to solve the problem of slavery in the United States. They thought they could support the migration of a large number of people, but they turned out to actually not support that many, which was the case with a lot of my case studies. They had very grandiose ideas that didn’t always work out. 

But anyway, the idea of the NEEAC was that, by moving a certain population with particular religious or economic or political affiliations from one place to another, they could influence the destiny of slavery in the United States. They hoped to help end slavery in the United States, though not in any kind of radical abolitionist way. They were all pretty conservative, but they were anti-slavery. So they thought of migration as a tool to solve a major social problem. 

They also thought about how migration to the West had been, in their view, a humanitarian problem for White American and European settlers. They said that when people migrate to the West as settlers, they often face innumerable challenges and tend to experience destitution and suffering of various kinds. But, they said, if we organize this migration and support their settlements economically, if we help transplant communities of similar people, that will go a very long way toward easing the suffering or abuse of Western migrants. 

So they were thinking about supporting and helping migrants both to address this larger social problem, the problem of slavery, and they were also thinking about migration itself as a problem that requires some kind of benevolent intervention. They said, “The settlers, in our view, have been suffering. But if we support and organize their migration, they won’t suffer anymore.” They saw emigration as a tool to solve a problem, and they thought of migration as a problem in and of itself that they needed to intervene in, and in some way regulate or control. Those two views of migration are really what the bulk of my argument throughout my dissertation hinges on.

Q: One of the big figures in the history of the American West was the “land speculator,” who was often accused of promoting disorganized settlement by sending someone off with their little parcel of land in a somewhat amoral, or perhaps entirely immoral way. How did they view themselves relative to these land speculators?

Land speculation is a big part of my story, from the beginning, when I talk about European immigrants coming in the late 18th century, to the end, when I talk about emigrants going to the West and to Kansas in the 1850s. I don’t really look so much at speculation activities, but more at the idea of the land speculator, as you say, as a sort of amoral person intent on deceiving poor people into coming to settle their lands, when they don’t actually care about them at all. These people end up suffering in all kinds of ways. And the speculator just wants to make their dollar.

At the beginning of my story, when I look at European immigration, there was a transatlantic fear about American land speculators in the late 18th century. You had people in England and in France vocally demonizing American land speculators, because, they said, we will lose our population, and those who leave will end up suffering. It’s our job to warn people about these kinds of speculative projects. You had land speculators saying, “Oh, great things await you, if you come to America.” Europeans thought, “It’s our job to tell people the truth.” And in telling them the truth about how they would suffer at the hands of these land speculators, people would not migrate. It was a kind of informal tool to regulate migration. 

The same thing happened in the West. In Kansas, there was a long history of this throughout the 19th century. There was the same kind of fear of the figure of the speculator who was only interested in deceiving people. And these reformers, people interested in social benevolence, thought it was their job to tell people the truth, and to say, “Don’t trust these land speculators – believe the information that we give you.” In doing that, they would in some way have a hand in controlling or regulating the movements of migrants and settlers. With the NEEAC, the New England Emigrant Aid Company, in Kansas, they were very aware of this. And so they went to great lengths to say, “Oh, we’re really only interested in giving out correct information.” There was a big emphasis on trustworthy, correct advice being given to prospective settlers and migrants. And then they also came under attack by pro-slavery enemies of the NEEAC, who said, “Actually, you are deceptive land speculators only interested in yourselves.” 

This kind of debate reached into Congress, as Congressmen from the North and the South were debating emigrant aid companies to Kansas, saying, “You are guilty of this inhumane abuse of settlers.” The land speculator figures in my story in all these kinds of ways. It’s more about the idea of the land speculator, as someone who is intent on abusing or creating suffering for migrants and settlers. Lots of people took it upon themselves to accuse someone else of being a speculator, with the interest of wanting to influence whether or not a person would move to a particular place. Insofar as it figures into my story about reform and benevolence, calling out land speculators as a tool to regulate, limit, or control whether people make the decision to migrate plays an important part.

Q: That’s interesting because it suggests a 19th-century version of what people today might call the spreading of misinformation. So what ended up happening around Bleeding Kansas and the debates about emigration?

Ultimately, Kansas did become a free state as the Civil War was about to happen. But the people involved in emigrant aid, the charitable organizations that I talked about, did not have the best track record. There were lots of settlers in Kansas who said, “Actually, this organization ended up not doing very much for us.” Practically, they figured into the political debate about migration to the West in the 1850s. But after the end of the 1850s, a lot of these guys — and they were mostly guys — started to look beyond Kansas, and they tried to take the ideology of what they called “organized immigration” and apply it to other places. Some people associated with it were now looking at developing migration projects to Texas, to Oregon, to Florida, or even to Nicaragua. And none of these really worked out in any kind of real way; they all failed in different ways and for different reasons. 

But what it shows, right up to the end of the pre-Civil War period, is the belief that it was possible to colonize and organize the migration of people through these avenues of philanthropy and moral reform, and that the outcome would be making the world a better place, whether by spreading what they thought of as Anglo-American civilization, or by supporting migrants who might otherwise suffer if they settled somewhere on their own without some kind of charitable support. It shows that, despite their failures, [migration supporters] believed this was still possible: in theory, it was within the realm of possibility to regulate, control, and support large movements of people from one place to another. 

In many of my case studies, not that many people actually ended up moving under the auspices of these different organizations, but what they did was cultivate this middle-class social politics of humanitarianism, in which supporting all kinds of different people moving became significant, and people started to make a connection between large groups of people moving from one place to another and a sort of social transformation, on both an individual level and a larger national or community level. Even though in many cases these projects didn’t work out, they demonstrate the way people were thinking about migration, trying to embed a kind of humanitarian language into debates about migration, and trying to use philanthropy and social benevolence as tools to control or regulate where different people were moving.

Ultimately, in their minds, that determines who belongs in the United States and who doesn’t; which religious, national, ethnonational, racial, economic, or political groups belong here, and which don’t. Relying on philanthropy and social benevolence as a means to determine that is something that emerged in this period, and it has generally been understudied by historians, who have otherwise been interested in ideas about citizenship and policy and the work of the state in controlling and regulating migration.

I’m arguing that there were other ways that Americans thought they could influence and regulate migratory flows of people. And one of those ways was through participating in philanthropy and charity and social benevolence.


Reconsidering the Achievement Gap: An Interview with Monica Ellwood-Lowe

Monica Ellwood-Lowe

Monica Ellwood-Lowe is a PhD Candidate in the UC Berkeley Department of Psychology whose research focuses on differences between outcomes for students of different socioeconomic status, as well as the societal barriers that might hinder student success. Ellwood-Lowe tries to answer such questions as, what skills do children develop when they come from socioeconomically disadvantaged homes, even in the face of societal barriers to success? Do children’s brains simply adapt to their respective environments?

Ellwood-Lowe is co-mentored by Professors Mahesh Srinivasan and Silvia Bunge. She earned her bachelor’s degree from Stanford University. Monica’s work is supported by the NSF Graduate Research Fellowship Program, the UC Berkeley Chancellor’s Fellowship, and the Greater Good Science Center.

For this episode of the Matrix podcast, Matrix Content Curator Julia Sizek spoke with Ellwood-Lowe about her recent research on the topic of children’s cognitive performance, and how we might think about removing barriers to children’s success. 

Listen to the podcast below, or on Google Podcasts or Apple Podcasts. More episodes of the Matrix Podcast can be found on this page.

A transcript of the interview is included below (edited for length and clarity).

What is the achievement gap, and how has it typically been studied in psychology?

The idea of an “achievement gap” is the idea that kids who grow up in higher socioeconomic status homes, where their parents are more highly educated or have higher incomes, tend to do better in school than kids who grow up in lower socioeconomic status homes. I look at the achievement gap in terms of socioeconomic status (SES), but people also study it in terms of racial and ethnic differences. But it’s the idea that kids’ test scores, even by the time they enter kindergarten, are higher when they come from higher SES backgrounds. 

What kinds of tests do researchers use to evaluate children’s cognitive performance?

Even before kindergarten, lots of researchers measure children’s vocabulary. They look at how many words kids understand and produce; starting as early as 18 months, you can get some indices of the number of words kids know. But one thing I really want to emphasize is that vocabulary doesn’t have to mean the same thing as achievement. One of the things that I think psychology does not do well, compared to other disciplines, is that when we think about these big issues like school performance or outcomes, we’re really focused on individual-level metrics, like vocabulary. When you think about how significant and longstanding these issues are, it’s really important to zoom out and think about the structural factors that are playing into all of this.

Beyond vocabulary, what are some of the other ways researchers can measure children’s cognitive function or school performance?

There are lots of different tests that measure children’s “executive function,” which is supposed to be an unbiased measure of children’s cognitive abilities. It’s gotten a lot of flak for not being unbiased, for all sorts of different reasons. Typically, the people who have created these tests are white, upper-middle-class researchers, who have a certain idea of what cognitive performance looks like. 

But normally, it ends up looking like kids playing games designed to tell us something about their cognition. They might be doing some kind of matching game, or there’s a measure where they see different sets of patterns, and they’re asked to fill in the missing set, as in, what completes this pattern? That’s called the matrix reasoning task. Those are the kinds of tests that we usually use.

What are the methods that psychologists use to explain the differences in cognitive function among the kids as they take these tests?

That’s been something psychologists have been really interested in. We administer these tests, we find differences between kids from different backgrounds, and then psychologists come in and want to know: why do we see these differences? When the differences appear even before school starts, it seems like something that might be happening in the home. 

One of the things many researchers have looked at is the amount of language that parents are directing toward their children. They’ll look at the very specific form of speech where parents are talking directly to their kid. This doesn’t include speech like parents talking to other siblings, or parents talking to other adults. That’s one of the things we have found really correlates with how many words kids end up knowing. But one of the things that’s really limiting about this is that children are perfectly capable of learning from those other forms of speech. We just think that kind of learning might be happening later, rather than earlier.

This is what people popularly call the “word gap.” You worked on a study about this concept and what it does for how we think about children’s performance. What did you learn in that study?

The concept of the “word gap” was popularized in 1995 by Hart and Risley, and they did a large study that led them to conclude that, by the time they are three years old, kids from higher SES homes have heard 30 million more words than kids from lower SES homes. 

There are a lot of issues with this metric, and it has definitely come under fire. For one thing, we know that the gap can’t possibly be that big from more recent measures. For another thing, these were just numbers that were extrapolated from hours-long recordings in the home, and we now have better ways of quantifying kids’ all-day language environments. Third, this was again only looking at that very specific type of child-directed speech. When you zoom out and look at the entire language environment, that gap totally disappears. 

That said, the general idea that higher SES parents talk more to their kids than lower SES parents has been replicated a lot. A lot of different researchers, even all around the world, have found this general phenomenon. That really led us to wonder why this is such a stable phenomenon. Lots of researchers have looked at individual-level mechanisms that might be promoting this. For example, maybe higher SES parents have more parenting knowledge, whatever that is, and that [knowledge] leads them to talk to their kids more. So maybe the solution is: let’s go into the home and train lower SES parents to talk more to their kids. But when you think about just how broad this problem is — it’s been documented since the 1950s in the US, it’s been documented all over the world, and in rural areas and urban areas — it doesn’t seem like these individual-level explanations can carry that much weight. 

We were interested in zooming out to think about, structurally, what does it mean to be lower SES? When you think about socioeconomic status, it’s not a characteristic of an individual, but rather it has to do with their access to societal resources. So this was a first pass at looking at how structural barriers that lower SES parents are facing actually influence the amount that they can talk to their kid. For this study, we focused specifically on financial strain: the idea that just having to think about their finances is taxing enough to lead parents to talk to their child less.

How did you go about measuring parents’ financial stress and how that might play into how they talk to their kids?

This was kind of a sneaky study on our part. What we were interested in is whether just the experience of being reminded of recent financial strain, or not having enough resources, would lead parents to talk less to their kids, regardless of their SES. We actually brought higher SES families into the lab, because these are the families that researchers in the past have said have the “parenting knowledge” to talk more to their kids, or they have whatever individual-level characteristics might lead them to talk more to their kid. We assigned half of these parents to fill out a survey about times when they didn’t have enough resources during the last week, or when resources were scarce. Some of them did talk about finances, but they talked about a range of things. And then we assigned the other half to fill out a control survey where they just reported on things they did in the last week.

After they filled out this survey, we left them in a room alone with their kid for 10 minutes under the guise of getting a second survey for them to fill out. We would say, we just realized the survey isn’t loaded, so we’re going to have to go to the other room and load it. We gave them a fun puzzle box toy for the kid to play with, so the parents had the opportunity to engage in speech with their kid. They could narrate what was happening with the puzzle box toy, they could explain certain pieces of it. Or they could just sit quietly on their phone and let the child play. We were interested in whether parents who had been thinking about their own experiences of scarcity would talk less during those 10 minutes than parents who just thought about things they had done over the last week.

How many people did you bring into the lab to ask these questions, and what did you find?

We brought in about 70 people to the lab. It’s a small sample, and it was our first pass at running the study, so I would call these very preliminary results. But what we found in general is that parents who thought about financial scarcity in particular talked less to their kid than parents who thought about all other forms of scarcity. And these parents didn’t differ in their income or in their education. They were all the same on these kinds of individual characteristics. But something about reflecting on financial scarcity might have led them to talk less to their kids.

How might one measure this outside of the lab setting?

There are a lot of different tools researchers have used to measure this. One is called a LENA recording device. It’s a tiny recording device that sits in the kid’s front pocket; you turn it on at the start of the day, and then it records the entire day for 16 hours. For that full 16-hour recording, it quantifies the number of adult words spoken near the child, the number of child vocalizations, and the amount of back and forth between the adult and the child — what we call “conversational turns,” where maybe the kid says something and the adult responds.
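As an illustration, the three metrics just described (adult word count, child vocalizations, and conversational turns) can be sketched as a small computation over a day’s utterance log. This is a hypothetical sketch, not LENA’s actual algorithm; the utterance format and the five-second turn window are assumptions.

```python
# Hypothetical sketch of LENA-style daily metrics, computed from a list of
# (start_time_sec, speaker, word_count) utterances. Format and window are
# assumptions for illustration only.

def summarize_day(utterances, turn_window=5.0):
    adult_words = sum(w for _, spk, w in utterances if spk == "adult")
    child_vocalizations = sum(1 for _, spk, _ in utterances if spk == "child")
    # Count a conversational turn whenever adult and child utterances
    # alternate within the time window.
    turns = 0
    for (t1, s1, _), (t2, s2, _) in zip(utterances, utterances[1:]):
        if s1 != s2 and (t2 - t1) <= turn_window:
            turns += 1
    return {"adult_words": adult_words,
            "child_vocalizations": child_vocalizations,
            "conversational_turns": turns}

day = [(0.0, "adult", 6), (2.5, "child", 0), (4.0, "adult", 4),
       (30.0, "adult", 8), (120.0, "child", 0)]
print(summarize_day(day))
```

Counting a turn whenever speakers alternate within a short window is a simplification of what the real device does, but it captures the back-and-forth structure the interview describes.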

Using LENA, how would you measure whether financial stress might be affecting the amount of language a child hears?

That’s what we were really interested in doing to follow up on this lab study. You can imagine that just bringing families into the lab and saying, “Okay, think about scarcity,” isn’t the most externally valid, meaning it doesn’t necessarily hold up in the real world. 

What we really wanted to do next was make use of already available data and see if we could find any evidence for this phenomenon in the wild, so to speak. We used data from these LENA recording devices that other researchers around the country had already collected, and we used a few datasets where families had completed these LENA recordings multiple times over a period of time. The recordings ended up varying randomly in where in the month they fell. Some families recorded a couple times at the beginning of the month, and a couple times at the end of the month, in random order.

The reason we cared about that is because there’s a fair amount of research in economics showing that families feel more financial strain at the end of the month compared to the rest of the month. We thought that, if this was a real phenomenon, we should see dips in parents’ speech to their children at the end of the month, when they’re likely to be experiencing the most financial strain. What was really cool about this is that, because we had these multiple recordings for a single family, rather than comparing families to one another, we could really look within a family and see, do families talk less at times of the month that they’re experiencing more financial strain?
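The within-family design described above can be sketched roughly as follows. The data, the day-24 cutoff for “end of month,” and the function are invented for illustration, not the study’s actual analysis.

```python
# Hypothetical sketch of a within-family comparison: for each family,
# compare speech measures from end-of-month recordings against that same
# family's recordings earlier in the month. Values and the cutoff are
# invented assumptions.

from statistics import mean

def within_family_gap(recordings):
    """recordings: {family_id: [(day_of_month, conversational_turns), ...]}
    Returns the average within-family difference (end of month minus rest)."""
    gaps = []
    for fam, recs in recordings.items():
        end = [t for day, t in recs if day >= 24]   # assumed end-of-month window
        rest = [t for day, t in recs if day < 24]
        if end and rest:  # need both periods to compare within the family
            gaps.append(mean(end) - mean(rest))
    return mean(gaps) if gaps else None

data = {"fam1": [(3, 420), (27, 350)],
        "fam2": [(10, 610), (25, 580), (29, 560)]}
print(within_family_gap(data))
```

Because each family is compared only with itself, stable between-family differences (income, education, overall talkativeness) cancel out of the estimate.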

That’s a really amazing tool to have at your disposal. What did you find?

Again, I would call this pretty preliminary evidence. But we found some possible evidence that parents do indeed talk less to their kids at the end of the month. It looked like what was really affected was this specific form of child-directed speech or conversational turns. There were fewer conversational turns back and forth, vocalizations between parent and child, at the end of the month for a lot of these families. But things like the overall number of words adults were saying didn’t change. It seemed like it might be specific to child-directed speech.

They might be having conversations with other members of the family, but they aren’t thinking about talking to their kid.

Exactly. And I should say that many of the kids in this study, and in all of the studies that we’ve done, are really young. Think about kids in the first couple years of their lives. They’re not the most fun conversational partners, right? They don’t have that much to say. So it can take a bit more cognitive effort and energy to engage kids at that age.

And if you’re stressed out, that’s exactly the sort of thing that you wouldn’t have the capacity to do. This connects to the broader research you’ve been doing on other aspects of socioeconomic status and how it might affect children’s cognition. You conducted a study that uses fMRI imaging to look at how kids’ brains are working when we ask them questions. Tell us a bit about the methods used in that work.

We took the finding that there are some structural reasons why we see differences in kids’ early environments. We wanted to know how kids in lower SES environments then thrive. Because you’ll hear in the media that kids need to hear a certain number of words, or kids need to be exposed to lots of child-directed speech in the first three years of life. But really, when it comes to language development, that’s not actually true. We’re capable of learning new words throughout our lifetimes. Anybody who’s ever started a new job can identify times that they’ve learned words in later life. So we don’t think these kids are messed up if they’re not hearing lots of speech, but we want to know, what are the ways that they’re then succeeding? Because it might not be through the same mechanisms as higher SES kids. 

So for the next study, we turned to the brain, using what’s called functional magnetic resonance imaging, or fMRI. We use resting state fMRI, which means kids sat in the scanner, and they didn’t do anything. They were instructed to look at a [neutral image] and do nothing else. And the brain never stops working. So during that time, the brain is activating; things are happening. We use what is happening in the brain during that time to make an inference about what their typical thought patterns are. What fMRI allows us to do is to look at what regions in the brain are activating in synchrony with one another. Where are neurons firing in the brain? And where are neurons typically firing at the same time as one another?
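The idea of regions “activating in synchrony” is commonly quantified as functional connectivity: the correlation between two regions’ or networks’ activation time series over the resting scan. A toy sketch with simulated signals rather than real fMRI data:

```python
# Toy illustration of functional connectivity as the Pearson correlation
# between two simulated activation time series. Signals are invented;
# real analyses work on preprocessed fMRI voxel/region data.

import math

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Two simulated network signals: the second roughly tracks the first,
# so their estimated "connectivity" is high.
frontoparietal = [0.1, 0.5, 0.3, 0.9, 0.2, 0.7, 0.4, 0.8]
default_mode = [0.2, 0.6, 0.2, 0.8, 0.3, 0.6, 0.5, 0.9]
print(round(pearson(frontoparietal, default_mode), 2))
```

A correlation near 1 means the two signals rise and fall together, which is what “firing at the same time as one another” amounts to in this kind of analysis.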

What are some of the ways that researchers typically have thought about how the neurons are firing in relation to having higher cognitive function?

One of the things we’ve learned about the brain is that there are a lot of different regions in the brain that perform really diverse tasks, but regions work together frequently. We have something called brain networks, which are made up of a whole bunch of regions that typically work together to carry out certain tasks. 

One example of that is something called the frontoparietal brain network. This is a set of brain regions in the frontal and parietal parts of the brain, as the name suggests; those regions are mostly along the forehead and the top of your head. These are regions that typically work together when we’re doing externally demanding cognitive tasks. If you were filling out some kind of reasoning test, you would typically see a lot of activation from these regions in the frontoparietal brain network. That’s one that we look to a lot.

Another one that we think about is a different set of brain regions, which we call the “default mode network.” This is a set of brain regions that really work together at rest, so people thought maybe this was a default brain pattern: when you’re not doing anything, these are the regions that are activating. But we now know they’re really involved in thinking about yourself, or thinking about things outside of the here and now, anything that’s really not external, but more internal. These are the brain regions that will typically work together to do those kinds of thinking patterns. These are the two brain networks, the frontoparietal network and the default mode network, that we investigated in our next study.

If we think about a child having more executive function, or cognitive ability, which parts of the brain do we think are doing that?

A pretty common finding in the literature is that as kids grow up, the connection between the frontoparietal network and the default mode network gets smaller. What this means is, say you are doing a really cognitively demanding task with intense reasoning, lots of researchers think you want the default mode network to shut down. You want thoughts about yourself to be really quiet, you want thoughts that have nothing to do with what’s going on to be as distant as possible. So you want less of a connection between the frontoparietal network and the default mode network. And researchers have indeed found that a lack of connection develops with age. When kids are younger, the two networks work together more, and as they get older, they tend to separate more. They have found that, at least among higher SES kids, the more separate those brain networks are, the better they do on cognitive tests. That holds all the way into adulthood.

Your research focused on the potential connection between these two networks that we wouldn’t expect for cognitively high performing kids. What was that connection?

What we were interested in is what’s going on for the kids in poverty who are doing really well on cognitive tests. When we think about things like the achievement gap, or kids’ test performance, we end up grouping kids off and saying “higher SES kids” and “lower SES kids.” But there are lots of lower SES kids who are living in poverty, and are still performing really highly on these cognitive tests. So we thought it would be interesting to see what’s going on for them. Are they achieving this high performance through the same mechanisms as higher SES kids? We went in looking at the connection between these two brain networks — the frontoparietal network and the default mode network. And we expected, based on all of the research that we had seen, that less of a connection between these two networks would be good for kids in poverty. We thought, maybe those kids in poverty who are doing really well have a lack of connection between these two networks. And what surprised us, and what we think is so cool, is that we actually found the opposite. We found this expected negative association for the higher SES kids, which all of the literature had shown before. But for the lower SES kids, we actually found that the kids whose two brain networks were more connected to each other were doing better.

Why might that be?

We don’t know yet. We’re still trying to figure out what the mechanism might be. But one of the things that we know is that those two brain networks, even in adults, do sometimes work together. They definitely work together for things like creative thinking. There are certain kinds of thinking where you want to be both engaging a lot of cognitive control, and thinking about things that are outside of the here and now.

You can think about designing something new. That’s the time when those two brain networks would be activated together. They would also be activated together if you were planning for the future. The future is not right in front of you, but you are planning it. We think that maybe the kids in poverty who are doing better on these cognitive tests are doing so because they’ve really had to adapt to a set of structural constraints that haven’t been set up for them to succeed. And maybe one of the ways that they’re doing that is by thinking outside of the box about how they can succeed, or planning for the future.

We actually found that this effect was strongest for kids who were living in more dangerous neighborhoods. It was also stronger for Black kids than for White kids. And we think that both of these things are evidence of structural barriers to success. It really points to kids having to adapt in creative ways to do well on these tests.

What do you think are the implications of this research, and what are the possibilities for future research in the same realm?

Whenever you read studies about brain development, it’s pretty likely that they were done with kids who are higher SES. If you have ever been in an MRI before, it’s a giant magnet. It’s not the most inviting machine. It requires a lot of time and patience. And it requires a lot of trust that the person who’s running the machine is not going to hurt you. 

It just happens that higher SES families know more about this kind of research, they’re more willing to participate, and they have more time to volunteer, whereas lower SES families often don’t, for good reasons: many don’t have a lot of trust in the research system to take care of them or to accurately report what’s going on for them. This tends to correlate with race and ethnicity as well. So a lot of our studies, and a lot of what we know about brain development, have come from this very specific set of kids whose parents are highly educated, wealthier, live near universities, and are excited about the idea of participating in research. And that has really limited the broader understanding of what healthy brain development really is.

Healthy brain development may not be one set of things. We can’t use universal measures to link what is happening in someone’s resting brain state to how they’re going to perform on a cognitive test.

One thing we know for sure about the brain is that it’s really plastic, and it changes a lot throughout childhood. But it also continues to change in adulthood. And it’s built to adapt. Humans have lived in all sorts of different contexts and cultures very successfully for a very long time. We really think that one of the things that allows us to do that is the flexibility of the brain.

What do you think are the implications for thinking about the brain’s development beyond children and into adolescence and adulthood?

This last study was with 10-year-olds, who are just entering adolescence. Adolescence is a really cool time. We think that some sensitive periods — times where the brain is very sensitive to certain kinds of environmental input — happen during adolescence. That might be a time when these kids are super receptive to new kinds of information. When you think about potential implications, one would be to redesign schools for adolescents in ways that use the skill sets kids already have. But it’s really just taking a broader view on what it means to be successful, how society can be restructured, and how it has been structured in the past.

One of the great potential applications for this research seems to be about that intersection of psychology and other disciplines. Have you made plans on how to work with other disciplines on this kind of research?

We have been working with economists right here at Berkeley. One of the things we’re doing, thinking back to the first study, is giving some families unconditional cash to see whether that affects their speech to their kids. One very direct application is just giving parents more money, enough to change their behavior. There’s a big national study going on in that realm called the “Baby’s First Years” project, which is an unconditional cash transfer study as well. That’s one future direction.

I think it would be really cool to pair up with educators and people who are in the schools to think about what kinds of skills kids are developing from all different contexts — and how we can best measure and support that.


Listen to more episodes on the Matrix Podcast page, or listen on Apple Podcasts or Google Podcasts.




The Rise of Mass Incarceration: An Interview with Chris Muller and Alex Roehrkasse

Alex Roehrkasse and Chris Muller

On this episode of the Matrix Podcast, Julia Sizek spoke with two UC Berkeley scholars whose work focuses on explaining how mass incarceration has changed over the last 30 years.

Alex Roehrkasse is an Assistant Professor of Sociology and Criminology at Butler University. He studies the production of racial, class, and gender inequality in the United States through violence and social control. He was previously a postdoctoral associate in the Department of Sociology at Duke University and at the National Data Archive on Child Abuse and Neglect at Cornell University.

Christopher Muller is Associate Professor of Sociology at the University of California, Berkeley. He studies the political economy of incarceration in the United States from Reconstruction to the present. He is particularly interested in how agricultural labor markets, migration, and struggles over land and labor have affected incarceration and racial and class inequality in incarceration. His work has been published in journals such as the American Journal of Sociology, Demography, Social Forces, and Science.

Listen to the podcast below, or on Google Podcasts or Apple Podcasts.

Excerpts from the interview are included below (edited for length and content).

Q: Let’s start by talking about the main topic at the center of your collaborative research, which is how mass incarceration has changed over the last 30 years. What motivated you to take on this topic?

Muller: It’s useful to step back and try to define mass incarceration. There isn’t complete agreement about how to define mass incarceration, but I think the most influential definition comes from the sociologist David Garland, who argues that mass incarceration is defined by two main features. The first is a scale of incarceration that’s unusual in both historical and comparative terms. This fits the US case because its incarceration rate is so extreme, both in comparison to similar countries and in comparison to its past. From 1970 to 2010, the US imprisonment rate rose from roughly 100 per 100,000 people to roughly 500 per 100,000 people. If you count people in jails, that number gets even higher, to about 700 per 100,000 people. That makes the US a vast outlier with respect to comparable countries.

The second feature of mass incarceration that Garland focuses on is what he calls the social concentration of incarceration. In the US, what he’s referring to is mainly the incarceration rate of young Black men. If you look at the most recent estimates, roughly a quarter of Black men can expect to be imprisoned at some point in their lives. When you zoom in to look at Black men who dropped out of high school, that number jumps to over two-thirds. These are really astonishing numbers, and are part of what has inspired people to try to understand how we got here over time. 

One of the main motivations of this project with Alex has been the emergence of a recent debate around this last point – about the relationship between racial inequality and incarceration on the one hand, and mass incarceration on the other. On the one side of the debate, we have a book like The New Jim Crow: Mass Incarceration in the Age of Colorblindness by Michelle Alexander. This is probably the most widely read book on mass incarceration, and it focuses mainly on its disproportionate impact on Black Americans, due in part to the War on Drugs, and due in part to the concentration of police in poor, predominantly Black neighborhoods. 

On the other side, you have scholars like James Forman, Jr. and Marie Gottschalk, who are sympathetic to Alexander’s account, but who argue that it’s incomplete. In particular, they focus on the fact that mass incarceration has negatively affected many groups beyond just Black Americans, and that it’s particularly concentrated among the poor.

My read of the debate is that it’s been quite civil and collegial. But as it has spun out into wider public arenas, it’s gotten more heated. As I’ve encountered this debate, I’ve had a sense that people have been talking past each other. And so one of the main goals for me in working on this project with Alex was to try to establish a more comprehensive and up-to-date empirical foundation for the debate. I had a hunch that this foundation would help us to see why both positions actually look quite reasonable depending on how you look at the question — depending on whether you’re looking at the direct experience of incarceration, or whether you’re looking at its indirect effects. 

What we tried to do in the project was two main things. The first thing was to update previous estimates of racial and class inequality in prison admissions. They hadn’t been calculated since 2002. You would think this would be a relatively straightforward thing to do, but as I’m sure we’ll discuss, there are all kinds of complicated issues related to how you actually estimate these quantities. One of the main reasons we wanted to do this was based on research that’s come out in recent years showing that there’s been a huge shift in the fortunes of people without a college degree. One of the most famous examples of this is the work of the economists Anne Case and Angus Deaton, who’ve shown that there’s been a marked rise in the mortality rates particularly of White people without a bachelor’s degree. We had a hunch that this shift might also be visible in prison admissions. 

The second thing we wanted to do in the paper was to look beyond the direct experience of incarceration and look at the indirect experience. This includes looking at people’s likelihood of having a family member imprisoned, and looking at people’s likelihood of living in a neighborhood with a high imprisonment rate. We wanted to do this because of a whole body of sociological research that has shown how, because of Black-White wealth gaps, for example, middle-class Black people are much more likely than middle-class White people to be offshoots of poor family trees. That means they’re much more likely to have family members who are poor than similar White people.

We were also inspired by a lot of research, much of it coming out of sociology, showing how segregation has meant that middle-class Black people are more likely than middle-class White people to live in poor neighborhoods. If you think incarceration and poverty are becoming increasingly associated over time, these dynamics are going to influence differences in the relative direct and indirect experiences of incarceration.

Together, we thought these facts suggested that it was possible that racial and class inequality in people’s risk of having a family member imprisoned — and racial and class inequality in their risk of living in a high imprisonment neighborhood — could seriously differ from racial and class inequality in their risk of being imprisoned themselves.

Q: That points to the two challenges of studying mass incarceration: the question of the class and race factors that make one more at risk of being in prison, and the question of the people who are in direct or indirect contact with the prison system. What were your findings when you put these two different parts of mass incarceration together?

Roehrkasse: Corresponding to these two parts that you’re describing, we really have two main sets of findings. The first is that we show that there have been really significant shifts in the contours of inequality in prison admissions in the 21st century. On the one hand, Black-White disparities have pretty meaningfully declined since the late 20th century. For example, at peak levels of racial inequality in the early 1990s, Black people were somewhere between six and eight times more likely to enter prison than similarly educated White people. That’s just an astonishing level of inequality.

To be frank, you don’t often see racial disparities that large in social science. This is not reducible to any underlying educational differences, because we’re comparing like to like here. By 2015, though, the Black-White ratio of prison admissions had fallen to something more like two or three. That’s a pretty significant decline, but it’s important to say that’s still a really big disparity. 

On the other hand, inequality between people who had attained different levels of education skyrocketed over the same period. So again, in the early 1990s, people who hadn’t attended college were roughly five to six times more likely to go to prison than people who had attended college. But by 2015, when our analysis ends, people without college were 20 to 25 times more likely to go to prison than people who had attended college.

Our second set of findings adds some nuance to this picture. In two separate analyses, we examined people’s likelihood of having a family member in prison, or of living in a neighborhood where a high proportion of residents in that neighborhood go to prison. In both of these cases, we find that Black people with the highest levels of education or income are actually more likely to experience indirect contact with the prison system than White people with the lowest levels of education, or the lowest levels of income.

Ultimately, what we find is that while class inequality in prison admissions now appears to dominate racial inequality, it’s racial inequality that still predominates in other aspects of the lived experience of mass incarceration. Depending on whether we look at these direct or indirect experiences of the prison system, we’ll come to different conclusions about whether race or class matters more. Rather than trying to decide which is absolutely more important, we’ve become much more interested in trying to understand how racial and class inequality interact, and even how these interactions could create opportunities for new alliances to combat mass incarceration.

Q: Can you talk more about how you decided to use education as a proxy for socioeconomic class status?

Muller: The main reason is just data limitations. When people are admitted to prison, they’re not asked about their income, and so we’re forced to use their level of education. We use education as a proxy for class. This is clearly an imperfect measure, and there are all kinds of quibbles you could have with it. But on the other hand, the work of Case and Deaton shows that having a college education is an increasingly important determinant of people’s life chances in the United States. And there are even Marxist sociologists — who you’d expect would have the most issue with this proxy — who’ve come around to the importance of the college divide.

In the first analysis, we were looking at racial and class inequality in prison admission. Here, we only have measures of education; we don’t have measures of income. But in the second two analyses — of people’s likelihood of having a family member imprisoned and people’s likelihood of living in a high imprisonment neighborhood — we had both education and income. And the results were almost identical. And so in this particular case we’re not especially concerned about using education as a proxy for a class, even though we acknowledge that the two concepts are different.

Q: One of the problems you have in doing this research is not only trying to figure out what serves as a useful proxy, but how to extract the information from whatever data you’re getting from the prisons or other systems. How did you manage this giant data sample that you had?

Roehrkasse: There are three key quantities that we’re trying to measure in this study, and we use three different datasets to measure each of those. Each of those datasets has its own unique value, and some serious limitations.

The first quantity we’re interested in is the likelihood that people enter prison. You might think that’s a really straightforward thing to measure. But it turns out that there’s actually no national data that are publicly available that disaggregate rates of entrance into prison by people’s race and ethnicity or their educational attainment. And so for people who are interested in these kinds of inequalities, a really useful and common resource is what’s called the National Corrections Reporting Program. Unfortunately, this resource is restricted in access, because it involves individual-level records of imprisoned people, so the data are pretty sensitive. But for those people who are interested in these kinds of questions, this is really the most important resource available. These are administrative data, and, unfortunately, they represent the voluntary contributions of different state prison systems to this overall program. In any given year, the NCRP doesn’t actually include all state prison admissions. So an important assumption of our study is that the contributing states in the years we examine are more or less representative of the country more broadly. It’s also important to say that the NCRP no longer includes federal prison admissions. Federal prisons make up a small proportion of the total prison population in the United States, but it is by no means a trivial proportion.

A second quantity that we’re trying to understand is the likelihood that someone has had a family member go to prison. And people can use any number of different resources to do this. People have used the Fragile Families study before or the Panel Study of Income Dynamics. We use a new survey that’s designed specifically to measure this quantity. It’s called the Family History of Incarceration Survey, or FamHIS. 

The third quantity we’re interested in measuring is the likelihood that people live in a neighborhood with a high imprisonment rate. This is really challenging, because people aren’t usually imprisoned in the neighborhoods where they were living before they went to prison, and geo-locating prisoners back to the neighborhoods where they came from with any detail can actually be quite difficult. To do this, we use a resource that’s actually pretty underutilized, called the Justice Atlas of Sentencing and Corrections. This is another administrative dataset that compiles information from about 20 states, and it allows us to geolocate people in state prisons back to the specific census tract where they resided before they were imprisoned. We use census tracts, which on average have about 4000 residents, as a proxy for neighborhoods. And we use these data to calculate imprisonment rates for census tracts in these 20 states. Then we use census data to put people of different races and ethnicities and educational groups into neighborhoods to understand their likelihood of living in a high-imprisonment neighborhood. Then for all three of these experiences—prison admissions, family member incarceration, and neighborhood incarceration—we calculate the rates at which people of different ethnoracial groups and educational groups have these experiences. And then to measure inequality, we look at the ratio of these different rates across different groups.
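The last step described here (rates by group, then ratios across groups) reduces to simple arithmetic. A minimal sketch with invented counts, not the study’s data:

```python
# Minimal sketch of the rate-and-ratio approach: compute each group's rate
# of an experience (e.g., prison admission per 100,000 people), then
# measure inequality as the ratio between group rates. Counts are invented
# for illustration.

def rate_per_100k(events, population):
    return events / population * 100_000

groups = {  # hypothetical (events, population) by group
    "no_college": (2_500, 1_000_000),
    "college": (100, 1_000_000),
}
rates = {g: rate_per_100k(e, p) for g, (e, p) in groups.items()}
inequality_ratio = rates["no_college"] / rates["college"]
print(rates, inequality_ratio)
```

The same two steps apply to any of the three experiences in the study: only the numerator (admissions, family-member incarceration, or residence in a high-imprisonment neighborhood) changes.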

Q: Another aspect of the complicated nature of this research is the temporality problem you have. When you’re looking at prison admissions, these are people who are entering the system. This is not representative of the body of people who are currently imprisoned as a whole. But then you’re asking people about the experience over their lifetimes, whether they’ve known someone who is incarcerated. How do you disentangle these different temporal aspects in this research?

Roehrkasse: This is a really important point. Our study is focused on prison admissions, specifically the rate at which people in the population enter prison in any given year. And this is a pretty different quantity from the proportion of the population that’s imprisoned at any given point in time. Generally speaking, prison admissions are much more volatile than prison populations, because they’re going to be more responsive to economic, social, and political changes. For example, a policy that diverts people away from the criminal justice system would have a pretty immediate impact on prison admission rates, but only delayed effects on the prison population, because that population reflects not only that recent policy, but the cumulative history of decades of previous policies, rates of imprisonment, sentencing, corrections, etc. What that means is that if we were to redo our study examining prison populations, instead of prison admission rates, some of the changes in inequality that we document would probably be a bit more muted. But what that also means is that if the trends we document in our study continue, we should expect to see similar changes in the prison population over time. There are other aspects of our data—like the fact that the FamHIS survey captures whether a person’s family member has ever been imprisoned—that incorporate this whole cumulative history of incarceration over the last several decades, that we’re just limited in our ability to deal with.
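The flow-versus-stock distinction described here can be made concrete with a toy simulation. The numbers below are purely illustrative (they are not modeled on any real prison system): when a diversion policy halves admissions, the admission rate changes immediately, but the prison population declines only gradually, because the stock reflects years of prior admissions.

```python
# Toy stock-and-flow sketch: admissions (flow) respond immediately to a policy
# change, while the prison population (stock) adjusts only gradually.
# All numbers are invented for illustration.

def simulate(years=10, release_frac=0.2):
    population = 50_000   # people currently imprisoned (the "stock")
    admissions = 10_000   # annual prison admissions (the "flow")
    history = []
    for year in range(years):
        if year == 3:            # a diversion policy takes effect in year 3
            admissions = 5_000   # admissions fall immediately...
        # ...but the population declines slowly, as releases draw down
        # a stock built up by earlier admissions
        population = population + admissions - int(population * release_frac)
        history.append((year, admissions, population))
    return history

for year, adm, pop in simulate():
    print(year, adm, pop)
```

In this sketch the admission rate drops by half the year the policy takes effect, while the population takes many years to approach its new, lower equilibrium.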

Q: That points us back to one of the key topics people talk about with mass incarceration, which is the War on Drugs. How did the War on Drugs become so central to the conversation around mass incarceration, and how does your research complicate this story?

Muller: The paper itself is not directly about the War on Drugs, but the War on Drugs has become a key part of debate over mass incarceration. On the one hand, if you look at a point in time, the number of people who are in prison strictly for drug offenses is actually quite small. People often are critical of the argument that the War on Drugs was a key part of mass incarceration, given the small proportion of people who are in prison for drug offenses.

On the other hand, if you have people going into prison for relatively short sentences, that is going to mean that for people’s experience of having ever gone to prison, the relative importance of the War on Drugs is likely to be quite a bit larger. So, the temporal aspects we’re talking about have a particular relationship to the War on Drugs. 

Alex pointed to these extreme disparities in incarceration during the mid-1990s, even within educational groups. I haven’t seen a study that’s nailed this down, but I think it’s unlikely that that spike had nothing to do with the War on Drugs. Some of the spike in the racial disparity in the prison admission rate in the 90s almost certainly was related to the War on Drugs. And so the War on Drugs quite clearly is an important part of the story. How important it is really depends on which aspects of mass incarceration you’re trying to look at — whether you’re looking at the number of people in prison and the proportion of them who are in for drug offenses, whether you’re looking at people who’ve cycled through prison, and how many of them have been imprisoned for drug offenses, and whether you’re looking at racial disparity. I think you’re going to get a slightly different story, depending on which of those quantities you’re focused on.

Q: You’ve also done research on how factors like the labor market play a central role in how we explain rises in imprisonment and mass incarceration. Can you tell us more about this relationship?

Muller: First let me step back and talk about the previous state of the literature on the causes of mass incarceration, then I’ll talk about my own research. To be honest, I’ve been working on this topic for a while, and the longer I’ve worked on it, the more complex the answers have gotten about what the sources of mass incarceration are. 

The broad contours are set out in a book by a sociologist named Bruce Western called Punishment and Inequality in America, which came out in 2006. Those main causes are still pretty widely accepted, even though a lot of important work has appeared since that book was published. Western focuses mainly on economic and political causes, things like the collapse of urban labor markets, the related rise in crime, the urban uprisings of the 1960s, and then the politicization of crime that increased the chance that all of these changes would receive a punitive response. In the following years, we saw sentences increase, and we saw a greater willingness among prosecutors to pursue incarceration in cases where they might not have in the past. That’s an oversimplified summary, but it captures the main currents, and though people will disagree about the relative weight to place on any one of those causes, very few would say they’re wholly unimportant. 

To give broader context, one of the main motivations for my work on incarceration — and for my work in other areas — has been the idea that, in my view, too often in sociology we begin our studies of racial inequality in the 1960s, and that leaves out a lot of really important historical context. We forget, for example, that for much of US history, Black Americans worked primarily in agriculture, not just during slavery, but for almost a century after the Civil War. Once you recognize this fact, a lot of otherwise puzzling features about long-run patterns in the Black incarceration rate begin to make more sense. 

To take one example, there’s a popular argument that after the Civil War, incarceration became a kind of functional replacement for slavery. This is different from the argument that the form that incarceration took closely resembled slavery, which is an argument that has a lot of support, especially if you’re looking at the convict lease system, chain gangs, or things like that. But if you’re looking at the functional replacement argument, it’s hard to square with the fact that the Black incarceration rate in the years after Reconstruction was actually lowest in the counties that had depended most on enslaved labor before the Civil War. A lot of people are surprised when they hear this fact. But it becomes less surprising once you recognize that slavery and sharecropping were systems of economic exploitation, in addition to systems of racial domination. Both slaveholders before the Civil War and planters after the Civil War depended heavily on Black Americans’ labor. What that means is that, unless they could use the labor of people in prison, they had strong reasons to try to keep workers out of prison rather than in it. One of the key underappreciated ways that they did this is that planters often would go to courthouses, and they would offer to pay the fines of any people who had been convicted. The person then had to pay off the “debt” by working on their land. This system of peonage allowed planters to reestablish a coerced labor force after the Civil War. But it also had the side effect of lowering the Black incarceration rate in the Cotton Belt. So rather than seeing the relatively low Black incarceration rate after Reconstruction in those Cotton Belt counties where slavery had been most prevalent as a sign of the region’s mercy, we should instead see it as a sign of Black Americans’ continuing unfreedom outside of the prison in the years after the Civil War. 

There’s an additional puzzle that this way of looking at things helps to solve. Often, critics of the functional replacement argument — critics of the idea that incarceration was a replacement for slavery — will say, “Well, if slavery and mass incarceration are connected, why does mass incarceration take off a century after slavery ends?” For me, a key part of the answer to that question is that cotton harvesting was almost fully mechanized between 1950 and 1970 — the two decades that precede the start of the prison boom. A lot of work has focused on the effects of deindustrialization, but there’s been much less of an emphasis on the collapse of agricultural employment. This is particularly important because the effects of the collapse in agricultural employment on Black men’s labor force participation were much larger than the effects of deindustrialization.

Q: That’s fascinating because it points us to this question of the relationship between these different labor markets and ties it into other historical phenomena that we might be familiar with, like the Great Migrations. As we switch towards the 1970s, how was the labor market shift related to the rise of mass incarceration?

Muller: There are three main ways we could think about this. Here I’m more synthesizing previous work, rather than drawing on my own, but we had a massive collapse in the share of young Black men who were working in agriculture. In 1940, about a third of young Black men worked in agriculture. By 1970, it was lower than three percent. It was a dramatic shift. I don’t know of any research looking directly at the effects of this mechanization of cotton harvesting on both changes in crime and changes in imprisonment, but there’s a lot of work looking at other shocks to the labor market and showing quite clearly that those are related both to rates of crime and to rates of imprisonment. That’s actually something I’m working on right now. 

Secondly, one of the main responses to the mechanization of cotton harvesting was the second Great Migration. There was a huge political backlash to this migration. Ellora Derenoncourt, an economist who was at Berkeley until very recently, has shown how the second Great Migration led to increases in police spending, in homicide rates, and in the Black incarceration rate, and to reductions in spending on other types of public goods. Ellora’s work shows clearly how this second Great Migration was related to the onset of mass incarceration. 

Thirdly, there have been economic historians who have argued that the mechanization of cotton harvesting and the second Great Migration created a material foundation for the rise and the emergence of the Civil Rights Movement. Of course, a lot of the literature on mass incarceration discusses how there was a political backlash to this movement and focuses on this as a key component of the politicization of crime — one of the key ingredients in the rise of mass incarceration. 

So, it’s through a bunch of different paths, but I do think many of these causes that other scholars have focused on are related to this massive decline in agricultural employment that happened mid-century in the United States.

Q: What can scholars and policymakers learn from your research on the complicated relationship between race and class?

Roehrkasse: Part of our analysis is aimed at decomposing racial and class inequality: overall, racial inequality in mass incarceration appears in part to reflect some underlying disparities in educational attainment. That’s an important fact to understand. 

But one of the main goals of our study, and I think one of its main successes, is to show that racial and class inequality cannot be disentangled. And that’s because they’re mutually constitutive. That can sound kind of hand-wavy, but we make our best effort to measure this as concretely as we can. We show that, irrespective of one’s education or income, Black people are much more likely to have family members or neighbors imprisoned. This can seem somewhat at odds with the fact that we’re simultaneously documenting that there’s been this shift toward much greater educational inequality in prison admissions. 

We think, though, that a really important factor that can reconcile these two seemingly contradictory facts is that, as a result of racial segregation and racial discrimination, an important feature of being Black in America today is that, irrespective of your class position, you’re much more closely connected to poor people. What that means is that the scale of racial inequality really can’t be fully appreciated without reference to the ways that social networks and social environments translate these growing class disparities into racial disparities. 

Rather than being competing forms of inequality, race and class are really intersecting dimensions of domination. And for researchers, for activists, and for policymakers, the more we can do to understand that, the more successful we’ll be in our efforts to combat mass incarceration.


The Materiality of the Telegraph Revolution: A Visual Interview with Sophie FitzMaurice

Sophie FitzMaurice

How did the telegraph change the environment? While scholars have typically examined how the telegraph changed communication, Sophie FitzMaurice, a PhD candidate in the UC Berkeley Department of History, argues that the telegraph was both dependent upon and constrained by the material world during its heyday in the 19th and early 20th centuries. Her research reveals the flip side of US imperial expansion by showing how this novel technology reshaped the environment. 

In this visual interview (an interview accompanied by images related to a scholar’s research), we spoke with FitzMaurice about a specific aspect of the telegraph’s materiality: how poles were produced, and how woodpeckers responded to the concomitant disappearance of forests and the rise of telegraph lines. 

Q: This image provides us with a rare shot of a woodpecker in action — that is, pecking on a telephone pole. At the time, what did scientists think about woodpeckers and their impacts on human industries, and how did they track them? 

woodpecker on a telephone pole
Caption: California Woodpecker – on telephone pole – showing holes. Locality: Ash Mountain, Sequoia National Park. Negative #5864. Joseph S. Dixon/National Park Service, 1935.

Nineteenth- and early 20th-century scientists thought of birds as part of the balance of nature, which was conceptually divorced from the world of technology and human industry. The balance of nature paradigm held that every species had a unique, perhaps divinely ordained role that did not change over time. Although scientists believed that humans could throw nature out of harmony, nature itself (including birds and insects) had no capacity to affect human industry or technology.  

Scientists who studied birds therefore mostly focused on birds’ role in agriculture, and lacked a conceptual framework to analyze woodpeckers’ impact on technology. The core methodology at the Bureau of Biological Survey (today’s U.S. Fish and Wildlife Service) was stomach content analysis; this method was used to determine which birds were “injurious” to agriculture and which were “beneficial.” On the basis of these findings, scientists made recommendations to farmers and local authorities on which species to encourage and which to discourage or kill. Since woodpeckers ate pest insects, they were considered “beneficial” species, but the damage they caused to utility poles challenged the notion that stomach content analysis alone could be used to determine the economic impacts of birds. Wood does not show up in woodpeckers’ stomachs, since woodpeckers do not eat wood; they drill holes in wood to access insects or to build nests, but they do not eat the wood itself. 

This image is dated to 1935, so it was taken at the tail end of economic ornithology, when ecological theory was on the rise. Scientists were beginning to look beyond feeding habits to think about how birds fit into a broader organic community.

Q: How did these attempts to track woodpeckers fail, and how did you try to find these woodpeckers in the archives, where they may not leave a trace? 

Before the rise of ecological theory, scientists lacked a conceptual framework to understand how habitat change (including deforestation, to which telegraph construction contributed) might impact woodpecker behavior. At first, very few scientists recognized the correlation between habitat change and woodpeckers’ use of telegraph poles. Over time, they did begin to recognize that woodpecker damage to poles was a problem, and one that could not be understood through stomach analysis. By pecking into and structurally weakening poles, woodpeckers demonstrated conclusively that the nonhuman world could impact human technology and the built environment, and this realization helped drive changes in scientific methodology and government policy.  

Woodpeckers don’t tend to show up in the traditional sources of technology history, such as business records and legal contracts. In my research, woodpeckers show up in the archives via the observations of people who worked with utility poles — such as telegraph linemen, superintendents, and pole suppliers — as well as in the publications of scientists and government officials who were called on to address the problem of woodpecker damage.

By integrating non-traditional, vernacular, and scientific sources into the study of technology, I hope to bridge the boundaries between sub-disciplines and shine light on previously overlooked aspects of the telegraph’s materiality. We can also draw inferences about woodpeckers’ historic impact on telegraph poles by looking at how woodpeckers interact with utility poles in the present day. Here in California, you don’t have to look far to see an acorn woodpecker working away at a utility pole. By foregrounding woodpeckers in my research, I show that technology does not operate in a vacuum devoid of animals and insects, and that non-human animals have long been actors in the human world. 

Q: Let’s backtrack from woodpeckers’ impacts on finished telegraph poles to understand what the telegraph was and what it represented in the late 19th century. How did people think about the telegraph, and how did it revolutionize communications?

Before the invention of the electric telegraph, information could travel only as fast as people could move (though there were a few exceptions, such as smoke signals and semaphore). In 1860, before there was a telegraph line across the continent, the fastest a message could travel from Missouri to California was ten days, and it took over two weeks to send a message across the Atlantic. The telegraph changed all of that. It was now possible to send messages across thousands of miles in a fraction of a second. 

In addition to this, communication was no longer directly dependent on movement. Historians have described this as a “quantum leap” in communications. I accept that characterization to an extent, but concentrating on the speed of sending a message has caused historians to overlook the huge amount of labor, materials, and energy that went into making this apparently instantaneous and disembodied communication possible. Such broad characterizations of the telegraph also erase the fact that its expense made it inaccessible to most Americans. It was a “quantum leap,” but only for the wealthy. 

a team of horses
Horse team, date unknown (1900s-1920s). Weyerhaeuser Company Records.

Q: Let’s start at the beginning of pole production: logging. What did logging camps look like for workers in this era? 

Much of the labor of producing utility poles was performed onsite at the logging camps where wood was felled by seasonal laborers. Here, labor was divided between a superintendent, foremen, teamsters, loaders, swampers, sawyers, office staff, timekeepers, and a cook. These were overwhelmingly male spaces, which makes the presence of children in this image intriguing; perhaps this image captures a rare family visit to a logging camp. 

This image also shows that a lot of the human labor at logging camps was organized around the work of horses, who were used to haul cut logs to transportation points — either rivers, whose motive power was harnessed to float logs downstream at no cost, or train cars. Poles were then transported to pole yards, where much of the finishing work was done. 

Historians of telecommunications have tended to focus on the desk work or customer-facing service work of telegraph operators and messengers, but my research instead foregrounds the labor of constructing and maintaining telegraph infrastructure. Behind every telegram delivered lay a history of strenuous, and often dangerous, human and animal labor.   

loading logs
“Loading Western Red Cedar Poles at a Camp in Western Washington.” Joseph Burke Knapp and Alexander Grant Johnson, Western Red Cedar in the Pacific Northwest, USDA Forest Service (Washington: Government Printing Office, 1914).

Q: Your research reveals how telegraphs were intimately related to the material world. What did it take to make telegraph poles from start to finish?

For the most part, the very first telegraph lines were built with wood taken from near the site of construction along the right of way. For many telegraph companies, it was more important in the immediate term to establish a right of way than it was to build a durable network. So telegraph construction crews simply chopped down and used whatever wood was available, even if this wood was not particularly fit for the purpose. 

Over time, as networks expanded and as companies introduced pole specifications and standards, an entire industry emerged to supply utility companies with poles. The Western Union Telegraph Company, which was the largest telegraph corporation in the 19th century, operated its own pole yards in Michigan and Tennessee, but smaller-scale utility companies acquired their poles from independent suppliers. These suppliers purchased poles from logging companies and stockpiled them in pole yards. From there, they were transported to construction sites via railroads. But pole yards were more than just distribution points; they were also sites of labor, where the logs were finished by being stripped of their bark and cut to standard dimensions. Later, when utility companies began to demand their poles be preserved, chemical treatment facilities were installed at pole yards. 

Q: Where were the pole yards located, and how did they relate to railroad and other transportation networks?

Although historians have made much of the fact that the telegraph divorced communication from transportation, a focus on the pole supply shows that the telegraph relied on transportation networks. Some of the largest pole yards were located in Chicago and Michigan, at the meeting point of water and rail transportation routes. Poles were transported across the Great Lakes from logging camps to pole yards via rafts and steamships, and then from pole yards to construction sites via rail. In addition to this, Western Union had a large pole yard in Chattanooga, Tennessee, which was a distribution point for southern pine poles. 

By concentrating on the physical infrastructure of telegraphs, my research demonstrates that electricity relied on steam; long-distance electrical communication was inconceivable without colossal amounts of organic materials like wood and coal. 

Telegraph and telephone pole supply was big business. Many of the largest suppliers were in the Midwest (Michigan, Minnesota, Illinois). Source: Telephony vol. 19, no. 1 (1910).

Q: In these advertisements from Telephony, an industry trade magazine, we can see how pole producers promoted their poles. What kinds of trees were typically used for poles, and where did they come from? 

The preferred species of wood for telegraph and telephone poles was cedar, because cedar is both durable and light, meaning it is relatively cheap to transport and handle. (The same quality that makes cedar suitable for telegraph poles — its lightness — is what made it attractive to woodpeckers; the softer the wood, the less energy has to be expended in excavating nest holes). Other favored species of wood included pine and chestnut, although chestnut blight wiped out much of the chestnut supply at the turn of the 20th century. In the West, Douglas fir and western red cedar were widely used. Most of the poles used in the eastern United States came from Canada and the Upper Midwest — specifically, Michigan, Wisconsin, and Minnesota. By the turn of the century, there was a burgeoning international trade in poles; poles were imported from Canada to the US and exported from the US to Egypt, Greece, Chile, and many other locations across the globe. 

stacks of logs
One of the Three Alleys in the Big Chicago Pole Yard of the Naugle Company, Telephony vol. 64 (1913).

Q: In this image, we can see the massive scale of telegraph and telephone pole production. What were the ecological impacts of these practices? 

Telegraph and telephone line construction contributed to massive deforestation and habitat destruction, but this ecological impact was largely invisible to people who used the technology. After telegraph poles were stripped of bark, cut to standard dimensions, and spread out over great distances, they lost a lot of their apparent treeness. The location of logging camps and pole yards also meant that most of the environmental impacts of telegraph construction were geographically displaced; building a telegraph line in Arizona might mean cutting down thousands of trees in Wisconsin, for example. 

Importantly, some of the environmental impacts of telegraph construction could have been mitigated had utility companies paid for their poles to be treated with preservatives. While expensive and time consuming, chemical treatment could extend the lifespan of a pole by many years, thus lessening the need for replacement poles. In the 1900s and 1910s, the U.S. Forest Service, driven by the conservation ideal, urged telegraph and telephone companies to treat their poles. But these efforts met with little success; it was simply more cost-effective, at least in the short term, for companies to replace poles than to treat them proactively. This began to change in the 1920s, when the increased cost of replacement poles started to reflect the dwindling supply of wood. 

woodpecker holes
Northern White-headed Woodpecker — adult male at nest hole. Location: Auto Log, Sequoia National Park. Negative #3459. Joseph S. Dixon/National Park Service, 1933.

Q: This photograph brings us back to the woodpeckers and their impact on the growing telecommunications industry. How do woodpeckers help us tell the history of telecommunications in a new way, and how have historians told the story of the telegraph? 

In this image, we see animal traces (woodpecker holes) side-by-side with traces left by humans, who have carved their initials into the wood. This is a perfect metaphor for how the historical record contains both human and nonhuman traces, a concept beautifully explored by the historian Etienne Benson. For 19th-century Americans, telegraph poles may have represented the triumph of science and technology over nature, but for woodpeckers, they represented something far more prosaic: potential nesting sites. The image is also suggestive of how our physical surroundings, which both reflect and make possible commercial and economic life, are co-created by humans and nonhuman animals. Finally, woodpeckers remind us that the story of the telegraph is inescapably a story about wood. Rather than transcending the organic world, the telegraph was very much embedded in it.   



Economic Benefits of Higher Education: Zach Bleemer and Maximilian Müller

Zach Bleemer and Maximilian Müller

Why do people choose to go to college (or not)? What impact do race-based or financial aid policies have on higher education and the broader economy? In this episode of the Matrix Podcast, Julia Sizek spoke with two UC Berkeley-trained economists whose research focuses on the economic impacts of higher education.

Maximilian Müller completed his PhD in Economics at UC Berkeley this year and is now starting a position as Postdoctoral Fellow at the briq Institute on Behavior and Inequality in Bonn. In Fall 2023, he will join the Toulouse School of Economics as an Assistant Professor. Maximilian is a behavioral economist studying questions in fields such as education, development, and family economics. In his research, he examines social influences on individual behavior around big life decisions, such as career choices, and their potential consequences for society-wide outcomes, such as social mobility. Prior to his PhD, he obtained an M.Phil. in Economics from the University of Oxford and a B.Sc. in Economics from the Ludwig-Maximilians-Universität in Munich.

Zach Bleemer is an Assistant Professor of Economics at the Yale School of Management and a research associate at UC Berkeley’s Center for Studies in Higher Education. His current research uses natural experiments to examine the net efficiency and equity ramifications of educational meritocracy, with recent studies on race-based affirmative action, race-neutral alternatives to affirmative action, and university policies that restrict access to high-demand college majors. Zach holds a BA in philosophy, economics, and mathematics from Amherst College and a PhD in economics from UC Berkeley.

Listen to the podcast below or on Google Podcasts or Apple Podcasts. Excerpts from the interview are included below.

Q: How do you measure the benefits of higher education, particularly as economists?

Maximilian Müller: We start with the counterfactual — how a person would have done if they had not gone to college — and we compare that to how they did when they did go to college. As economists, we often focus on earnings, of course, but we also think about benefits in terms of health behavior, life satisfaction, and then some spillover benefits to broader society, such as democratic participation, or jobs you create — benefits that don’t just accrue to you, but to broader society. That’s how we think about it. But measurement is tricky, because we never observe the same person both going to college and not going to college. That’s what makes it hard. And we cannot just compare people who have gone to college with people who haven’t, because they might differ in several other respects. But thanks to researchers like Zach, we’ve come up with ways to make this counterfactual comparison possible, and compare the benefits of higher education.

Zach Bleemer: That’s right, there are personal economic returns of a college education that can be measured in terms of an individual’s wages, and then also public economic returns, things like innovation and entrepreneurship that don’t just benefit the person who generates them, but benefit people in communities at large. We try to measure the degree to which a college education shapes people’s decisions in a way that either leads them to higher wages themselves, or to generate these public economic returns. For example, I do a lot of work with tax records. You can look at individuals once they start filing taxes in their late teens or early 20s, and then follow them for years afterward. You can see in tax records people’s earnings, as well as information like business formation and entrepreneurship, which also have tax ramifications. And you can link individuals to patent records to get a measure of innovation.

The trick here is to try to figure out what these people would have done in their lives if they hadn’t gone to college. To give a couple of examples of ways to study this, you can look at changes in university admissions policies that pull a group of students who may not have otherwise gone to college at all, or who may have otherwise gone to a less selective set of universities, and ask what happens to these kids when the admissions policies change, providing them with access to this new higher education resource. You can look at policies inside of universities that change students’ likelihood of degree attainment, particularly kids on the margins who wouldn’t have earned a college degree if not for the change. We can try to leverage these natural experiments to learn the impact of college on these students’ lives.

Q: Zach, you conducted research on how changing admission policies at the University of California changed the student body and who could benefit from college. What was your study about?

Bleemer: As you may know, California public universities do not use race-based affirmative action in admissions, and they haven’t since 1998, when a ballot proposition (Proposition 209) prohibited the use of race-based affirmative action in the state. Before 1998, Black and Hispanic applicants to any University of California campus were provided with large admissions advantages. They were able to enroll at universities that they would have otherwise not had access to, absent this race-based affirmative action policy. Then after 1998, these admissions advantages disappeared, and we saw this cascade of Black and Hispanic students into less selective universities, and in some cases, out of university enrollment altogether.

So what’s the ramification of going to college? Well, one way of understanding that is to link all of these University of California applicants in the years prior to and after this policy change to a variety of outcomes, including where they went to college, what they studied in college, and how they did in college (in terms of their grades and whether they graduated), and then following these students into the labor market. You can measure how the kinds of colleges these students went to changed their earnings and their place in the California economy.

The striking thing that comes out of this study is that after 1998, Black and Hispanic young Californians lost a lot of economic power — which is to say, on average, Black and Hispanic applicants to the University of California earned about five percent less in their early 30s (10 or 15 years later) than they would have earned if they had continued receiving the admissions advantages provided prior to Prop 209. To give you a sense of magnitude, that means that by 2014, there were about three percentage points fewer high-earning Black and Hispanic young workers in California than there would have been if Prop 209 hadn’t been passed and the University of California had continued providing these admissions advantages. That gives you a sense of how important it is whether and where kids go to college — not just for themselves, but for the economy at large.

The key reason most economists think college provides personal economic returns to graduates is that kids learn a lot in college, and to some degree, they learn more or a different kind of skill at more selective universities or in one major over another. These skills are really valuable. Employers are willing to pay more to employees who have these skills because of the value those employees provide to the firm. When kids lose access to more selective universities or lucrative college majors, they’re losing access to a special kind of knowledge that’s extremely valuable to economic production and that can provide economic mobility to lower-income students.

Q: Max, your research deals with the question of how people decide to go to college outside the United States. One of the major factors in the US is the cost of college, but your research examines a place where college is free. Can you tell us more about your research?

Müller: Yes, I look at higher education in Germany and what makes students there go to college or not. As you said, college in Germany is actually free. Education from primary school all the way through to university is pretty much free and state-financed. But in Germany, we observe that college attendance very strongly depends on parental background. There is a 40 percentage point gap in college attendance between students with and without college-educated parents, even conditional on having finished high school. That’s a bit discouraging, right? We might think that in the US, we only have to get rid of tuition and these gaps will go away, but looking at Germany suggests that may not be enough on its own.

I was really interested in understanding why, despite college being free, there is still such a strong relationship between parental background and college attendance. I wanted to look at the family in more detail. How do families drive their kids to attend college (or not)? One thing I wanted to understand is whether students are willing to adjust their educational choices, such as whether to go to college or not, based on perceived pressure or expectations from their parents. And if they are willing to adjust, does this affect the socioeconomic gaps in college attendance? And could this be one explanation for why we see such pronounced gaps in educational choices conditional on parental background?

Q: As an economist, how do you study these questions about why someone decides whether or not to go to college?

Müller: Part of your question might be, why is this economics? Your educational choices determine what you learn and where you direct your attention. They really determine your life path and your economic and social success. It’s really important for individuals, but it also determines the societal allocation of talents to jobs and tasks. And the traditional definition of economics is the study of the allocation of scarce resources to their best use. So it is really an economic question. It’s one of the most important allocations we have in any society. I think that makes it really important for us economists to understand.

To go about it, I try to vary the perceived pressure from parents to some extent, then see how that changes students’ plans about whether to go to college or not, and which field to go into. I worked with students and asked them about their plans for after high school. Then I told students either that their plans would not be shared with anyone, including their parents, or that their plans would be shared with their parents. The only thing I varied was this perceived pressure from parents, and then I looked at what happened. And students do adjust their plans when you tell them their plans will be shared with their parents.

What I found is that, for students with college-educated parents, if you tell them you will share their plans with the parents, they become more likely to say they want to go to college. It goes from 68 percent of students saying they want to go to college when this is confidential to 78 percent when they think the parents will be involved. For those without college-educated parents, it goes down by five percentage points: in private, 56 percent say they want to go to college, and this decreases to roughly 50 percent. The students with one college-educated parent react the most, while those with two college-educated parents do not seem to react much to this variation in perceived pressure. Even when this is not shared with the parents, those with two college-educated parents almost always say they want to go to college in any case. So it doesn’t mean there is no pressure, but maybe all of it has been internalized at this point, so they know exactly that college is what they want to do.
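The core comparison here is a simple difference in stated intentions between the confidential and shared conditions. A toy sketch of that arithmetic, using only the rounded percentages quoted in the interview (the actual study is a proper randomized design, not this two-number comparison):

```python
# Toy illustration: the only thing varied in the design is whether a
# student believes their stated plan will be shared with their parents.
# The proportions below are the rounded figures quoted in the interview.
reported = {
    # group: (share saying "college" in private, share when parents will see it)
    "college-educated parents":    (0.68, 0.78),
    "no college-educated parents": (0.56, 0.51),
}

def treatment_effect(private, shared):
    """Effect of perceived parental pressure, in percentage points."""
    return round((shared - private) * 100)

effects = {g: treatment_effect(p, s) for g, (p, s) in reported.items()}
# A positive effect for students with college-educated parents and a
# negative one for students without them: the disclosure widens the gap.
```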

Q: This raises a question about the broader role of parents in higher education, which is also a question that Zach has looked at in his research on higher education. Zach, can you tell us a little bit about how parents think about the costs and benefits of college?

Bleemer: What Max has been talking about is one potential intervention that policymakers or some other group could provide to parents in hopes of encouraging college enrollment, which most economists believe should be higher than it is right now, and of closing equity gaps, which yawn widely in both Germany and the US. I’ve worked on a similar information experiment in a United States context focused on what parents believe about both the returns, i.e., the economic benefits, and the costs of college enrollment. This works by embedding an information experiment into a nationally representative sample of American parents, first asking them, what do you think kids who don’t go to college earn on average by the time they’re age 40? And what do you think people who do go to college earn on average when they’re 40? When we asked American parents this question, the average estimate was that college graduates earn about 63 percent more than non-college graduates. It’s a gigantic gap in their estimates, from a base of around $50,000 for non-graduates to $80,000 earned by college graduates.

But it turns out that the gap is even larger than that. For the last 20 or 25 years, the gap has been pretty consistent at about an 80 percent difference in average wages between college and non-college graduates, and a really gigantic change in the economic lives of these 40-year-olds. The question for us was, what if we just told parents this? About two-thirds of parents underestimate the average economic return of a college degree. What if we just correct that impression? We then tried to measure the degree to which this changes parents’ expectations of whether their own children should go to college, or the degree to which they’re going to encourage their kids (or their friends’ kids) to go to college. We find pretty meaningful effects: parents become about five percent more likely on average to expect that their kids will go to college, and they become more encouraging when talking about whether their friends’ kids should go to college.

We also saw a meaningful closing of equity gaps: on average, highly educated parents were more likely to encourage their kids to go to college than less educated parents. But when you provide both groups of parents with this information, that gap closed, because on average, highly educated parents already had better information about college’s economic returns. The impact of this information was bigger for the less educated parents, who learned more from it.



The Effects of Reparations: A Visual Interview with Arlen Guarin

Arlen Guarin

What are the impacts of reparations on the lives of victims of violence? Arlen Guarin, a PhD Candidate in Economics at UC Berkeley, studies the effects of policies that aim to reduce poverty and inequality, including reparations given to victims of human rights violations in Colombia.

His research draws upon tools in applied econometrics to identify the causal impacts of various policies by linking large administrative datasets that capture information on a broad range of outcomes, including labor market outcomes, consumption, health, and human capital formation. His research uses careful analysis of administrative data to demonstrate how unrestricted reparations improve the lives of recipients.

For this interview, Matrix content curator Julia Sizek asked him about a working paper that he developed with Juliana Londoño-Vélez (UCLA) and Christian Posso (Banco de la República). 


bar graph
This figure displays the frequency of human rights violations during Colombia’s internal conflict, based on the date when the victimization (or human rights violation) occurred. Source: Authors’ calculation using Unified Victims’ Registry (Registro Único de Víctimas, RUV) data from the National Information Network Subdirectorate (Subdirección Red Nacional de Información, SRNI).


The internal armed conflict of Colombia, which has included a prolonged conflict between the FARC-EP (Revolutionary Armed Forces of Colombia-People’s Army) and the government, has been a central part of Colombian politics since the 1960s. Can you describe the history of the conflict, and how the Colombian government decided to address the effects of this conflict through a reparations program?  

Colombia has had a very long internal armed conflict, the most prolonged in the Western Hemisphere. In the mid-1960s, some left-wing rebel groups like FARC-EP and ELN (National Liberation Army) emerged in remote regions of the country. In the 1980s, the violence escalated as right-wing paramilitary groups developed in order to contain the emergence of left-wing guerrillas and protect landowners and drug lords who were involved in the increasingly profitable cocaine trade. This intensified conflict caused an increased number of attacks against civilians. 

Between 1980 and 2010, the conflict claimed hundreds of thousands of lives, and almost nine million people were affected by the conflict. Attacks were widespread, with rural and poorer areas disproportionately affected by the violence. The majority of victims were forcibly displaced; the remainder had family members who were forcibly disappeared or murdered.

After a failed peace negotiation between the government and FARC-EP in 2002, violence peaked, as did victimizations of civilians. Following the peak, the number of victimizations decreased as Colombia attempted to transition toward peace and reconciliation. In 2005, Colombia demobilized paramilitary groups and reintegrated them into civilian life through the Peace and Justice Law. And in 2016, the government negotiated and signed a peace treaty with FARC–EP. 

As part of its attempts to transition into post-conflict reconciliation, the government passed the Victims’ Law in 2011. Considered one of the world’s largest and most ambitious peacebuilding and recovery programs, the law seeks to award reparations by 2031 to 7.4 million individuals victimized by guerrilla, paramilitary, or state forces. (While approximately 8.9 million people were victimized, today only 7.4 million people are eligible for reparations, as some people are deceased or unreachable.) In addition to providing reparations, the law aims to restitute dispossessed lands, award humanitarian aid to households in emergency conditions, and enhance access to micro-credit and subsidized housing.

The Victims’ Law has personal significance to me. I was born and raised in a rural Colombian town called Granada. When I was nine years old, the conflict dramatically intensified there. Bombings and massacres claimed the lives of many of my neighbors. Innocent people were routinely taken away for “questioning,” and we would later learn they had been murdered.

In 2014, I heard that some of my neighbors and relatives had received the reparation. Shortly after that, I arrived at UC Berkeley to start my PhD. I discussed the idea of studying the impacts of the Victims’ Law with one of my fellow students, Juliana Londoño-Vélez. We decided to work together on the project, and she encouraged me to start working on the necessary data.

This figure plots the number of victims of each type of victimization, as tracked by the Colombian government. Because the Colombian state tracked reparations by the harms suffered by individuals, a victim can count in both the forced displacement and the homicide or forced disappearance categories if (s)he was both forcibly displaced and has relatives who were victims of homicide or forced disappearance. The category “other” includes victims of torture, rape, or kidnapping. Source: Authors’ calculation using RUV data from SRNI.

Over seven million Colombians — more than ten percent of the population — suffered as a result of the conflict. Can you describe how the different types of victimization are being compensated through the 2011 Victims’ Law? 

Almost one in five Colombians is a victim of the conflict, or approximately 8.9 million people. During the last three decades, nearly eight million individuals were forcibly displaced, and 1.2 million people had their relatives murdered or forcibly disappeared. Thousands of others were raped, kidnapped, tortured, injured by landmines, or forcibly recruited as minors.

The Victims’ Law aimed to award reparations to the nearly 7.4 million who registered as victims. Victims included those who suffered forced displacement, homicide, forced disappearance or kidnapping, rape, injury from landmines, or other injustices. For those who died or disappeared during the conflict, their family members were awarded in their stead. The law also defined the size of the reparations and indexed them to the monthly national minimum wage (currently $250 USD), a figure that changes each year. Reparations to victims or their families are delivered at the household level. The size of the reparation depends only on the type of victimization, with victims whose relatives were murdered or forcibly disappeared receiving 40 times the minimum wage (approximately $10,000 USD) and victims of forced displacement receiving 27 times the minimum wage.
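The indexing arithmetic can be made concrete with a quick calculation using the figures quoted above. Note that the $250 minimum wage is the approximate figure cited in the interview and changes each year, so the dollar amounts below are illustrative:

```python
# Illustrative calculation of reparation sizes under the Victims' Law,
# which indexes awards to the monthly national minimum wage.
MIN_WAGE_USD = 250  # approximate monthly minimum wage cited in the interview

# Multiples of the minimum wage by victimization type, as described above.
MULTIPLES = {
    "homicide or forced disappearance": 40,
    "forced displacement": 27,
}

awards = {kind: m * MIN_WAGE_USD for kind, m in MULTIPLES.items()}
# awards: $10,000 for homicide/disappearance, $6,750 for displacement
```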

The amount of money is often sizable for the receiving victim, especially since many Colombians earn below the minimum wage.  For the population we study, the average reparation represents more than six years of income and thereby has the potential to improve victims’ wellbeing in the long run. The goal of our research project was to understand whether this money could help undo some of the socioeconomic gaps induced by victimization. For example, could it help victims find better jobs? Could it improve their health? Could it increase the educational opportunities available to their children?

This picture was taken during one of the meetings held by UARIV (Unidad Para La Atención Y Reparación Integral a Las Víctimas, or Unity for the Complete Attention and Reparation for Victims), a government entity responsible for dispensing reparations. The UARIV informs victims when they will receive a reparations check. Photo credit: Arlen Guarin.

This image shows one of the victim reparation meetings, in which victims are informed that they are going to receive reparation payments. How has the reparations process been run at the state level, and how are victims informed about when they will receive their payments?

The Victims’ Law created the Victims’ Unit, a government-run agency that has been in charge of the administration and delivery of reparations. Despite being logistically and operationally managed from Bogotá, the capital of Colombia, the Victims’ Unit has more than 30 regional centers and hundreds of contact centers around the country, where the final details of the delivery of the reparations are coordinated.

From the victims’ perspective, the process of receiving the reparation is as follows. First, they receive an unexpected phone call from the Victims’ Unit. The caller instructs them to attend an “important” meeting at a specified time and location but does not mention a reparation. At that time, some victims may suspect that they are going to be given the reparation since they may have learned from others’ experiences, but the timing of the call itself is unexpected.

A few days later, the victim arrives at said meeting, usually at one of the regional centers. During the meeting, the victim is informed that they will receive reparation and is given a letter. The letter formally acknowledges that the victimizations never should have happened and describes when the reparation check can be collected from Banco Agrario, Colombia’s state bank, which is usually 1–2 weeks later.

This figure plots when reparations were paid to victims. The figure shows the series by victimization type: homicide or forced disappearance, forced displacement, and all other types of victimizations.

Because of the large scale of the program, reparations were not paid all at once. How were you and your coauthors, Juliana Londoño-Vélez and Christian Posso, able to use the timing of the reparations to understand the causal effects of these payments? 

We used microdata from the universe of registered victims, a unified and centralized registry covering the more than eight million individuals who reported being victimized during the Colombian internal conflict by August 2019. We linked the victims registry to eight other national administrative data sets containing information on formal employment, entrepreneurship, access and use of formal loans, land and homeownership, health care system utilization, postsecondary attendance, and high school performance for all members of the victims’ households. 

Our final dataset has information on millions of victims eligible to receive the reparation payment and a comprehensive list of outcomes observed before and after the arrival of the payment. These types of data, in which the outcomes for the same individual can be observed over time, are called panel data sets.

Importantly for us, due to government budget and operational constraints – you can imagine the constraints associated with compensating one in seven Colombians – the rollout of the reparations program was staggered over time. This feature, together with the fact that the arrival times of the payments were unanticipated, has allowed us to identify the causal effect of the reparations using an empirical econometric approach called an event study.

An event study is a methodology used in contexts where the program being evaluated is rolled out over time rather than all at once (staggered adoption), and where we can observe individuals and their characteristics at different points in time (panel data), as in our case. Intuitively, this methodology compares outcomes for victims who have received the reparation to those who have not yet received it. By comparing outcomes between these groups, we are able to isolate the causal impacts of the program on victims’ long-term outcomes.
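The event-study logic described above can be illustrated with a toy simulation. This is not the authors’ estimator: the data, the +15 effect size, and the rollout years are invented, and person fixed effects are absorbed by simply demeaning each individual with their own pre-payment average rather than by running a full two-way fixed effects regression.

```python
import random
from collections import defaultdict

random.seed(0)

# Simulated panel: each person receives a payment in a different,
# as-if-random year (staggered rollout), and is observed every year.
YEARS = range(2012, 2020)
people = []
for _ in range(500):
    pay_year = random.choice(range(2013, 2019))  # staggered payment timing
    base = random.gauss(100, 10)                 # person "fixed effect"
    people.append((pay_year, base))

def outcome(base, year, pay_year):
    """Income: baseline plus a +15 jump once the payment has arrived."""
    return base + (15 if year >= pay_year else 0) + random.gauss(0, 2)

# Event-study estimates: average outcome by event time (year - pay_year),
# after demeaning each person by their own pre-payment average.
sums, counts = defaultdict(float), defaultdict(int)
for pay_year, base in people:
    pre = [outcome(base, y, pay_year) for y in YEARS if y < pay_year]
    pre_mean = sum(pre) / len(pre)
    for y in YEARS:
        k = y - pay_year
        sums[k] += outcome(base, y, pay_year) - pre_mean
        counts[k] += 1

coefs = {k: sums[k] / counts[k] for k in sorted(sums)}
# Pre-payment coefficients hover near zero (no anticipation);
# post-payment coefficients recover the simulated effect.
```

The key design choice is that already-paid victims are compared with not-yet-paid victims rather than with people outside the program, which is what lets the staggered, unanticipated rollout stand in for a randomized experiment.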

This picture was taken at one of the victim reparation meetings between UARIV and a group of beneficiaries in Medellin, Colombia. After receiving reparations, victims could voluntarily participate in investment workshops, where they would receive information on budgeting and investing, including getting help to obtain small business or student loans and pay off old debts. (This program was known as “Programa de acompañamiento de inversión adecuada de los recursos.”)

This image shows an educational fair in which the victims are taught about how to use their reparations. How does the Colombian government view reparations as a tool for development? 

The government presented reparations to victims as seed money to transform their lives; specifically, they suggested that the victims use the money to invest in productive activities, such as postsecondary education, business creation, or housing, which could improve their families’ long-term wellbeing. By presenting the reparations in this way, the government was treating reparations like “labeled” cash transfers, as they suggest that victims invest the money in specific activities. 

In line with this purpose, the government held fairs to connect victims with local public and private institutions providing investment opportunities in education, housing, land, and small businesses. Victims could also voluntarily participate in investment workshops, where they would receive training in budgeting and investing, including getting help to obtain small business or student loans and pay off old debts.

The government also used the reparations to recognize the harm suffered by victims. The letter received at the time of the reparation also includes a dignification message about what the reparation means that reads roughly as follows:

“As the Colombian State, we deeply regret that your rights have been violated by a conflict that never should have happened. We know that the war has differentially affected millions of people in the country, and we understand the serious consequences it has had — it is impossible to imagine how much pain this conflict has caused. However, from the Victims’ Unit, we have witnessed conflict survivors’ capacity for transformation over these years. We have witnessed their spirit to keep going, their strength to raise their voices against those who have wanted to silence them, their ability to rebuild their lives… For this reason, with your help, we are working so that you can live in a peaceful Colombia since it is the victims who actively contribute to the development of a new society and a better future.”

As you mention in the previous answer, the Colombian government treats these reparations not only as a recognition for harms suffered, but as a means to raise the standard of living. How do the insights from this case help us understand poverty reduction and basic income programs, and how does this differ from previous research on the topic? 

The literature on reparations has largely consisted of qualitative work by political scientists, lawyers, sociologists, and other experts on transitional justice. We differ from prior approaches by offering one of the first known quantitative studies of a large-scale reparation program, in which we exploit rich administrative data on millions of victims of the conflict in Colombia to provide evidence on the causal effects of the reparations.

We also contribute to the literature on the effectiveness of cash transfers for poverty alleviation. Despite sharing similar features, Colombia’s reparations differ from the traditional version of those programs in two ways. First, the average reparation is over three years’ worth of household income and, therefore, substantially larger than most unconditional cash transfers. Second, reparations target victims of human rights violations, a uniquely vulnerable population. Adverse shocks in conflict settings, like forced displacement, can have lifelong detrimental effects and trap victims in poverty. We show that by providing households with a large, lump-sum grant, reparation can serve as a “big push” policy for the victim to transform their lives and escape poverty traps.

This figure summarizes the relative effects of reparation on adult victims and their children, using data collected either three or four years after reparations were paid. Each row reports the change in the given variable, with a 95 percent confidence interval. The variable ED denotes emergency department visits.

In this chart, we see how the reparations changed the lives of conflict victims. What were the effects of these reparations, both economic and non-economic, and what does this mean for thinking about reparations and universal basic income programs more broadly? 

We divide our results into three sections: the impacts on work and living standards, health, and human capital accumulation. 

For impacts on work, we find that reparations have a positive but economically small effect, with the money allowing victims to improve their working conditions, earn more money, and create more businesses. We also find that reparations increase victims’ consumption and wealth, allowing them to buy a home or more land.

We also find that reparations cause an economically meaningful decrease in health care utilization. Victims are less likely to visit the emergency department, less likely to be hospitalized, and have fewer medical procedures after receiving the reparation. These findings are consistent with improved health due to better working and living conditions stemming from the reparation, findings that are novel in light of the inconclusive evidence for the impacts of money on the use of health services and physical health outcomes. 

Finally, we find that reparations close most of the intergenerational educational gap caused by the victimization. Victims frequently use reparations to enroll in and attend college for the first time. Reparations also improve the high school test scores of the younger members of the households, an effect that is not explained by changes in the high schools that they attend. In the study, we conduct a back-of-the-envelope cost-benefit analysis showing that the gains from reparation outweigh the monetary costs. This makes reparations both a progressive and an efficient policy tool to promote recovery and development.

Overall, our findings suggest that reparations programs improve long-term wellbeing along many dimensions. My hope is that this research can inform governments that are considering ways to heal the wounds induced by human rights violations. 



A Changing Landscape for Farmers in India: An Interview with Aarti Sethi and Tanya Matthan

Aarti Sethi and Tanya Matthan

In countries around the world, the “Green Revolution” has changed the scale and economy of growing crops, as pesticides, fertilizers, and new kinds of hybrid seeds have transformed the production process. In this episode of the Matrix Podcast, Julia Sizek spoke with two UC Berkeley scholars who study agrarian life in India, where farmers have been forced to adapt to changes in technology.

Aarti Sethi is Assistant Professor in the Department of Anthropology at UC Berkeley. She is a socio-cultural anthropologist with primary interests in agrarian anthropology, political economy, and the study of South Asia. Her book manuscript, Cotton Fever in Central India, examines cash-crop economies to understand how monetary debt undertaken for transgenic cotton cultivation transforms intimate, social, and productive relations in rural society.

Tanya Matthan is a S.V. Ciriacy-Wantrup Postdoctoral Fellow in UC Berkeley’s Department of Geography. An economic anthropologist and political ecologist, she finished her PhD in Anthropology at UCLA in 2021. Her current book project, tentatively titled The Monsoon and the Market: Economies of Risk in Rural India, examines experiences of and responses to agrarian uncertainty among farmers in central India.

Listen to the full podcast below or on Google Podcasts or Apple Podcasts. Visit the Matrix Podcast page for more episodes.

Excerpts from the interview are included below, edited for length and clarity.

Q: You both study agriculture in India, but India has many different agricultural and ecological zones. Can you help us understand your research sites and how they fit into agricultural production in India more broadly?

Tanya Matthan: The region in which I work is called Malwa, which is located in central India, in the state of Madhya Pradesh. The history of Malwa is interesting, because prior to Indian independence, it was ruled by a number of princely states. Ecologically, it’s a semi-arid region, and it’s known for its very fertile black soil. And it is also a region that has always been tied to global networks of trade and markets, through the cultivation of crops such as cotton and opium in the past, and now soybean and wheat, which are grown for national and global markets. Ecologically, it’s a very interesting region, and both different and similar to other parts of agrarian India.

Aarti Sethi: We work in regions that are both close by and also very far away. Subcontinental India is agriculturally very diverse and also very vast. I work in a region in east central India called Vidarbha. It’s about 500 kilometers inland from Bombay, in the state of Maharashtra. Vidarbha is part of the central Deccan Plateau, and it has black soils. Cotton is a very, very old crop in Vidarbha.

The reason I find Vidarbha to be a very interesting region to understand the long history of agrarian capitalism in India is that, in Vidarbha, local cotton production has been entangled with a global capitalist market — we could say a colonial capitalist market — for a very long time. We have evidence for cotton cultivation in this region for three millennia. But to take a more recent history, this is a region that was settled for the intensive cash cropping of cotton after it was taken over by the British colonial state in the mid-19th century. This happened in the wake of the fall in global cotton production and supply during the American Civil War. So there’s actually a very interesting historical relationship between Vidarbha and the American South.

This is the period when the British colonial state expanded what were called “settlement operations” and created new villages. A new peasantry came into being in what used to be an agro-pastoral region, one that was specifically cropping cotton for a colonial market. And so you can see in Vidarbha a peasantry that is entangled with international commodity markets in a very specific way. You can see this in the forms of land tenure that came into place at this time, for instance. It’s an early form and moment of agrarian capitalism, and these processes that we see beginning in the late 19th century have a bearing on the cotton crisis in Vidarbha today. It is also an arid agro-ecological region that is very prone to droughts. These are the kinds of agricultural and ecological constraints within which agriculture in Vidarbha happens.

Q: You alluded to the fact that agriculture is changing in India and that farmers are facing new challenges, which both of you study in different ways. Can you tell us more about what those challenges are today?

Sethi: The specific challenges that we see vis-à-vis cotton production in Vidarbha today have to do with the emergence of a sharp economy of indebtedness, which begins from the mid-1990s. Over the next two decades, this becomes a very widespread feature of agriculture in Vidarbha. And this expansion of monetary debt as a critical component of the agricultural process in Vidarbha has had several economic and social consequences. One of the most tragic of them has been that Vidarbha is at the center (and has been for the last two decades) of a suicide epidemic in which over a quarter of a million farmers have taken their lives across India. This is not a crisis focused only on Vidarbha, but Vidarbha is one of the earliest regions where the suicide epidemic began, so it has become emblematic of a broader crisis in agriculture. The introduction of a new transgenic crop, Bt cotton, has sharply exacerbated the general, prolonged agrarian crisis in which India finds itself.

Matthan: A place like Malwa also exhibits a lot of these same dimensions of this agrarian crisis. So you have, for instance, high levels of indebtedness, rising costs of production, extremely volatile prices of commodities. And ecologically we can see in Malwa the falling water tables. So many aspects of this crisis are evident in a place like Malwa.

One of the reasons I was interested in studying a region like Malwa, which is quite under-studied in Indian agrarian history, is because this region has been hailed as a recent agricultural growth story. It’s emerging as a horticultural hub for the production of high-value vegetables. But it’s also very recently been a site of protest. For instance, in 2018, six farmers were killed by the police as they were protesting crushingly low prices for their commodities.

One of the reasons why Malwa was interesting is because the state government has been at the forefront of implementing and promoting a lot of risk management policies, trying to address some of these challenges through things like crop insurance, price support schemes, and so on. I was interested in how the Indian state is responding to these agrarian challenges, and with what social and ecological effects. So I’m looking at the crisis, some responses to it, and the implications of that.

Q: This seems like a complicated story. On the one hand, farmers’ debts are accruing, but there are also emerging forms of crop insurance that are presumably replacing other forms of government support that existed previously for farmers. From the Green Revolution to today, how have the forms of support for farmers changed? And what are the reasons why farming has become so much more expensive to do?

Sethi: If you look at cotton production over a recent historical durée — say, from the mid-19th century onwards — then we can think of three phases of cotton production: a colonial economy of cotton, a postcolonial economy of cotton, and then a recent neoliberal economy of cotton.

The Green Revolution is very central now in the imaginations of the postcolonial economy, but the Green Revolution had a variegated uptake across the country. It was first introduced in the northern states of Punjab and Haryana, with wheat and rice as the primary Green Revolution crops. This turn to science and technology then had ancillary effects across the agrarian landscape.

The improvement of cotton has a very long history in India, beginning from the cotton improvement projects started by the colonial state. This is because cotton is such an important fiber crop in India. One thing to remember is that the Green Revolution produces a kind of economy of agricultural production that is entirely reliant on state support. Through the Green Revolution, the state undertakes different sorts of functions towards agriculture, such as introducing a minimum price support for farmers, encouraging the use of chemicals and pesticides, creating pesticide and fertilizer subsidies and electricity subsidies, and, very importantly, a state scientific establishment that is heavily involved in the development of new hybrid cotton varieties. It is a public commitment that the postcolonial state undertakes towards agriculture in India. This included the All India Coordinated Research Project on Cotton, the establishment of 21 agricultural research universities, and the Central Institute for Cotton Research.

What the state does, and what scientists working in the public scientific apparatus do at this time, is take a very central role in developing new forms of seeds, and, through state extension mechanisms, getting those seeds to cultivators. This is very important to the Bt cotton story, as it is through this moment of what we could call the Green Revolution that the first hybrid cotton seed is created in India. And these hybrid seeds have far greater yields than conventional cotton varieties. This is the moment at which farmers who have access to large land holdings begin to adopt these new technologies and increase cotton yields and cotton production.

Now, this also comes with its problems. But the point I want to make is that the Green Revolution has a complex history in India. On the one hand, it introduces a non-capitalized, but intensified form of agricultural production, which increases yields. On the other hand, it also produces an ecologically vulnerable form of production that is dependent on high outlays. And this sets the stage for what comes later.

Matthan: Much of that story is a story of Malwa, but Malwa wasn’t initially a Green Revolution region. The Green Revolution was very geographically variegated, and Malwa was not a region that was considered for the introduction of these technologies. So it has a different history, but with many similar effects over the last five decades or so.

What Malwa did see, which is analogous to and parallel with the Green Revolution, was what is called the Yellow Revolution in the 1970s, with “yellow” referring to the color of soybeans. As soybean cultivation was introduced and expanded, you see a huge number of transformations in agricultural production: the displacement of crops such as cotton, sugarcane, and sorghum that were grown in this region, and a shift to an industrialized model of agricultural production built on monocropping, a hugely capital-intensive form of cultivation. So even though Malwa wasn’t directly impacted in the initial Green Revolution years, you see many of the same technologies and logics at work.

Q: The Green Revolution helps to lay out how the government became intimately involved in the production of these crops. But today, a lot of farmers are protesting against the government. How have the conditions changed?

Sethi: What changed was the 1991 liberalization of the Indian economy and the reforms that came with it. Agriculture all over the country was impacted after the reforms phase. Many, many things change. One of the things that changes is that, prior to 1991, domestic agricultural markets are protected from market volatility. So, if you look at cotton, for instance, in Maharashtra, there was something called the Monopoly Procurement Scheme for Cotton, which was meant to support cultivators and increase the cultivation of cotton from the 1970s onwards, all the way till 2002. During this period the state was a monopoly procurer of cotton. All the cotton that cultivators produced could only be acquired by the state, and the state acquired all the cotton cultivators produced. And import duties on fiber imports from other countries were very high.

All of this changes in the post-reforms period. Agricultural products are brought under the General Agreement on Tariffs and Trade (GATT), and import duties on agriculture that used to be up to 100% for certain crops fall to 30% in the space of two or three years. The state raises rates on agricultural loans, and it withdraws from providing input support and infrastructure investment in irrigation and scientific research. There are upward revisions of the prices of diesel, of electricity, and of petrol. And all of this precipitously raises the cost of cultivation for farmers, without any change in the actual nature of production. There is no increase in irrigation. There’s no consolidation of land holdings. What you have is widespread adoption of hybrid seeds, which on the one hand, provide much more yield, but they’re also very vulnerable to pest depredation. So from the 1990s onwards, agriculture all over the country enters a huge crisis, and specifically cotton cultivation in Vidarbha.

Matthan: The Green Revolution was only a success, if it can be called a success at all, because of the state supports. So what happens when the state supports are withdrawn? You can see that in a range of arenas of agricultural production, whether it’s subsidies, agricultural extension service — so even the circuits of knowledge on which farmers depended now are increasingly privatized — and there’s less investment in agricultural infrastructures, whether that’s storage infrastructures, or irrigation, and so on. So since the 1990s, a lot of the state support for agriculture on which this model depended is taken away. And alongside that, not only is the cost of production increasing alongside the removal of these subsidies and support, but more broadly, the privatization of education, of health, and so on are also increasing the cost of social reproduction for agricultural households — where they send their children to school, what kinds of health services they access, and so on. So you have a situation in which costs of production are rising while state support and investment are declining.

Q: This obviously has tangible effects for the people who are trying to continue to farm. Both of you actually did research with individual farmers involved, sometimes being out there doing agricultural labor alongside them. Can you just give us an idea of what that looks like, especially since these aren’t big industrial farms that we might imagine here in the American Midwest?

Sethi: Let me answer that question in two parts. The first is to actually address what Bt cotton is. I think that’s important because of the extraordinary change that seed has produced economically, socially, and in terms of the labor regimes on the farm. Bt cotton is a seed that has been genetically modified to resist predation from a certain class of pests: lepidopteran pests, in this case the pink bollworm, the larva of a gray moth. Bt cotton contains a transgene inserted into the plant, which makes the plant toxic to this larva. When the larva eats Bt cotton, it dies. The justification for Bt cotton was that it offered a non-chemical alternative to pesticides. And the reason that was important was, as I said, because of the introduction of these hybrid seeds, which are highly vulnerable to pest attacks.

Bt cotton as a technology has a very interesting relationship to the legal regime: what Monsanto did was nest this technology into a hybrid seed, which cannot be resown. All cotton grown everywhere in the world comes in two forms: hybrid cotton and straight-line cotton. With straight-line cotton, you save your seed this year, you preserve it, you resow it the next year, and you plant it in density across a field. This is what I mean by a laboring regime: a farmer will plow that field and then dribble seed into furrows, with lots and lots of smaller plants produced in a field. Hybrid seeds, by contrast, can’t be resown the next year, and so you are forced to buy that seed from the market. And the reason Monsanto did this was to protect its patent.

Hybrid seeds transform labor in a very big way. Fewer hybrid seeds are planted in a field, as they need room to branch and boll. Secondly, they have to be fed large amounts of fertilizer and pesticide. This increases costs, and the large amounts of fertilizer and pesticide actually produce huge amounts of weeds. And so activities like weeding, which would once be done a few times a season, are now done continuously through a season. Weeding is an activity primarily conducted by women, so it has increased the labor days that women spend on a field. Pesticide has to be sprayed very, very often because hybrid plants produce a lot of foliage, which attracts all kinds of other pests, which means that men are also now involved in field labor in a different way. It means that women earn more income in their hands than they did earlier, because they have access to this kind of continuous wage labor. But it also means that their forms of domestic labor have vastly increased. So these are all the ways in which these new hybrid seeds and Bt cotton — besides the other social and economic costs — also transform laboring relations between farmers and their fields.

Matthan: I didn’t necessarily focus on one crop in the way that Aarti does with cotton. I found a slew of crops growing across the agricultural year: soybean, wheat, a range of vegetables. And the rhythms of agricultural production change according to the crop and according to the season.

But in the day-to-day, these are very small farms. The average landholding in India is about one hectare, which is about two and a half acres. These are extremely small farms, and a lot of the labor is done by people in the household alongside agricultural wage labor. It changes based on the crop and based on the season. Across the agricultural year, you have various kinds of activities going on in the field, from weeding, which happens a lot more in the wake of these new seeds and crops, to transplanting seedlings, in the case of onions, to long days of difficult harvesting, in the case of wheat. So you have very different kinds of work being done in the field, depending on the crop and the season. And even though a lot of my work involves going to fields and farms and walking and talking to people in these spaces, the nature of farming is such that it also entails a lot of work in the home. There are women cleaning seed in the home, or sorting produce in the home. There’s a lot of work that happens in the home, in the market, and so on.

Q: You mentioned that people are growing many of these different crops throughout the year, not just in one period of time, and that they’re highly dependent on rainfall and on different climatic conditions. Can you tell us a little bit about how this has changed and how it relates to the risk that farmers are taking when they participate in this market?

Matthan: As I mentioned, there’s a range of crops that are central to agrarian life in a place like Malwa. There’s soybean, which is the primary crop in the monsoon season, roughly between June and October. And then farmers move to growing a range of other crops, most predominantly wheat and gram (chickpea), but things like onions, potatoes, and garlic have also become increasingly important crops in this region. Each of these crops has a range of different qualities, ecologically, politically, economically, and so on.

Farmers are making a range of choices and decisions in deciding what to plant, how much to plant, and so on. For instance, things like, how long does this crop take to harvest? So one reason soybean is still popular is because it’s a short duration crop, and certain varieties of seeds have been introduced in Malwa that are extremely short duration. So within 80 days, you can harvest soybean, which allows you to then plant two or three more crop cycles on the same plot of land, which is really important to farmers who don’t have huge land parcels. They can get more and more out of the same plot. 

To go back to the question of how risk plays into this, farmers are making calculations based on engagements with risk and uncertainty. Wheat, for instance, is an extremely water-intensive crop. It requires irrigation, so you have to invest in irrigation. But it’s also considered a safe crop because it can be sold at government procurement centers for a fixed price. So you don’t have to deal with the volatility of the market, you can just take your wheat at the end of the season, and you can be assured of a price. So it’s considered less risky. 

Onions, for example, which are increasingly grown by farmers across class and caste in Malwa, are seen as a risky crop. They require a great deal of investment in inputs and in labor costs. But they’re also seen as very high-yielding. And they’re risky because onions are incredibly price-volatile. In India, there are huge price risks associated with growing onions. Onion prices can shift dramatically within the span of days, and you could potentially garner huge profits, but also face crushing losses if prices crash. There’s a range of risks and opportunities associated with different crops, and farmers are actually making a lot of careful calculations in deciding what to grow and how much to grow and when.

Sethi: One of the peculiar things about the way in which risk is absorbed into an agricultural milieu — and I see this with hybrid GM cotton in a very intense form — is that risk has acquired a new valence. On the one hand, cotton yields have vastly expanded: the potential of what you can reap from cotton has vastly expanded from the pre-hybrid economy of cotton. But so have the risks associated with cotton cultivation.

So the kind of calculation that farmers make is one where farmers both engage in this form of production, and it has produced a sense of an everyday, wearing stress. The English word “tension” has now become vernacularized into village speech. Beyond the economic risks, which are manifold and which a lot of scholars and the press have written about, is the fact that cotton cultivation is economically intense. It now costs 25,000 rupees, and the return on investment is very small, about three to five percent. 98% of farming is unirrigated, the monsoons are completely erratic, and every farmer has to make a calculation depending on how much debt you have, how long you can hold on to your cotton, and how you can play the market. If you can store your cotton, you will get a higher price later in the buying season. But if you are carrying a lot of debt for your seed costs, your fertilizer costs, and your pesticide costs, you have to pay back that debt, and so a lot of small farmers will offload their cotton as soon as the sowing season ends and the cotton procurement season begins.

Risk is both an operative emotion for farmers, because we are talking about a personal relationship to this no-longer-new economy of cotton, and also an economic fact of current agricultural production, which operates at every level of the socioeconomic agricultural order. It is operating at the level of financial risk. It is operating at the level of climatic risk. It is operating at the level of crop failure. It is operating also through family relationships in a really intense way, because everybody requires money to cultivate, and everyone is taking debt from everyone else. So people undertake debt within kinship networks, which means there is a social and familial risk in which social relations are placed at risk of fraying. Suppose you take a loan from your maternal uncle, and you can’t pay back that loan in time; then that’s a family relation that has been placed at great risk. So one way to think of risk is to look at it in this expanded sense.

Matthan: You put that beautifully, about how risk pervades, and elsewhere you’ve said that risk is the structuring condition of agrarian life. It permeates the economy, but also intimate relations within the family. And so while I was interested in using risk as an analytical lens into agrarian change, what I found was that, as with the use of the term “tension,” the term “risk” was used all the time in rural India.

So everything was understood in terms of, what is the risk of this? People were using this term all the time to describe a range of activities and practices, not just in relation to farming, but also beyond. There are highly differentiated engagements with risk, based on caste, class, and gender. Many other kinds of calculations go into how people are dealing with it.




How Climate Change Became a Security Emergency: An Interview with Brittany Meché

Brittany Meche

How has climate change become a security issue? Geographer Brittany Meché argues that contemporary anxieties about climate change refugees rearticulate colonial power through international security. Through interviews with security and development experts, her research reveals how the so-called “pragmatic solutions” to climate change migration exacerbate climate change injustice. 

For this interview, Julia Sizek, Matrix’s Content Curator, asked Meché about her forthcoming article in New Geographies from the Harvard Graduate School of Design, which considers how expert explanations of climate migration rework the afterlives of empire in the West African Sahel, an area bordering the southern edge of the Sahara, stretching from Senegal and Mauritania in the West to Chad in the East.

Meché is an Assistant Professor of Environmental Studies and Affiliated Faculty in Science and Technology Studies at Williams College. She earned her PhD in Geography from UC Berkeley. Her work has appeared in Antipode, Acme, Society and Space, and in the edited volume A Research Agenda for Military Geographies. Meché is currently completing a book manuscript, Sustainable Empire, about transnational security regimes, environmental knowledge, and the afterlives of empire in the West African Sahel.

Q: Climate change is happening everywhere, but the effects of climate change are highly variable. Your research examines how climate change has come to be seen as a security issue for organizations like the UN and governments like the EU and US. How do they understand the problem of climate change in the West African Sahel?

One of the things that I examine in my research are the interrelations between environmental knowledge and security regimes more broadly. In so many ways, environmental knowledge can’t be divorced from militarism, empire, and other forms of institutionalized power. One of the things that often surprises my students is when they learn that one of the reasons we even know climate change is happening is because the US military poured billions of dollars into environmental science after World War II. That historical context is important, but in the contemporary moment, the consequences of what some scholars have described as “everywhere war” mean that so many aspects of social, political, and economic life become infused with and tied to the logics and infrastructures of security. 

The West African Sahel, where I conduct my research, is a region that is already experiencing the impacts of climate change, from rising temperatures to erratic rainfall patterns. At the same time there have been increasing rates of different forms of armed revolt, which get lumped together as Islamic terrorism. It becomes easy for foreign militaries to say that worsening environmental strain is linked to social and political collapse. In response, foreign militaries propose fortifying the local security sector through security cooperation agreements, military training, and investments in border security. In that way, security solutions replace any careful consideration of the structural inequities of climate change. My research seeks to challenge these approaches through a detailed accounting of how these kinds of security imperatives further imperil already vulnerable communities. 

A map illustrating the Sahel region of Africa.

Q: In your forthcoming article on border security and climate change in the West African Sahel, you address how security actors like the UN respond to what they see as the threat of climate refugees. How are climate refugees understood as a security problem?   

The issue of climate refugees was one of the most vexing issues I encountered during my research. There are no formalized legal conventions about what constitutes a climate refugee or climate migrant, so the terms themselves are capacious and vague in ways that make it difficult to know what they actually describe. Is a climate migrant someone who is displaced during an acute event like a hurricane or earthquake? Someone who is no longer able to grow crops and chooses to relocate elsewhere? Someone who lives in a coastal area or on an island where sea level rise makes reliable habitation less feasible? Or all of the above? And, if so, how can we alter a global refugee system that many scholars — like Harsha Walia, Leslie Gross-Wyrtzen, and Gregory White — have noted is already at times violent, strained, and ineffectual, to accommodate these different categories? 

But more vexing than these conceptual and legal indeterminacies are the ways that present investments in border security and fortification make use of the figure of the climate refugee to whip up xenophobic fears. In my article, I note the ways that climate refugees, almost always depicted as people of color, become ways of making climate change knowable and actionable. Climate change becomes located on the body of migrants of color amid claims that “hordes” of climate migrants from the Global South will inundate the Global North. The embodiment aspect is key, as the literal bodies of these migrants come to signal and stand in for climate change as a security problem. This often leads to calls for “pragmatic solutions” like more border security and more heavily regulated immigration systems. 

Q: How do ideas about migration align (or not) with how migration actually works? 

I think popular framings of migration in and from the Sahel miss the ways that circular migratory patterns have been a staple of life for centuries. The Sahel has a number of pastoral communities that migrate with their herds. There are also cycles of migration between rural and urban areas, and education and religious pilgrimages that take place. This is not to say there have not been people forced to move because of violence, or because of economic or environmental stress. But many aspects of how and why migration happens get lost when migration is simply offered up as a problem to be solved. 

One central aspect of this issue is how my informants framed climate change migration as a South-North issue: that is, people from the Global South going to the Global North. In reality, most migration is South-South. Most of the migration happening in the West African Sahel and across the African continent more broadly is intra-regional migration. But this fact does not receive the same level of attention. I had informants at the International Organization for Migration admit that, while their figures show the predominance of intra-regional migration, for funding purposes, they had to frame their work as speaking to the “migration crisis” in Europe. The fear of Africans inundating Europe obscures the realities of this South-South migration.  

Q: Your research also considers the longer history of anxieties about migrants by showing how contemporary takes on climate change migration have supplanted and reinforced colonial anxieties about overpopulation that bring back Malthusian ideas about scarcity and overpopulation. How do these anxieties appear in security policy, and how do security experts think about these colonial legacies in their work?

In many aspects of my research, it seems that Malthus never really left. The West African Sahel has some of the highest birth rates in the world, and that fact lends itself to easy, though ultimately false, claims that overpopulation is at the root of the region’s problems. Still, for me, it was important to trace the ways that the different institutions I study absorb criticisms and attempt to re-orient their work. For instance, when informants at different UN agencies, such as the UN Development Program (UNDP), UN Office on Drugs and Crime (UNODC), and International Organization for Migration (IOM), would mention population in the Sahel, they would do so with the acknowledgment that it was a “third rail” issue. So even as the ghost of Malthus lives on, I think it’s important to account for different mutations.

Similarly, when interviewing US military officials working for US Africa Command (the Department of Defense’s command dedicated to African affairs), they were very mindful of accusations of colonialism and empire, and attempted to cultivate what I call in my work a “non-imperial” vision of US empire. That is to say, their disavowals, far from being just a PR move, were being used to strategize new kinds of circumscribed actions that would allow for a US presence in the region without inviting anti-imperial protest. 

Q: You’ve mentioned that many of your interlocutors are experts in the international development and security fields. How did you conduct your research on such a transnational project, and how did you get access to the experts you interviewed? 

I knew at the outset that this project had a number of different threads, including multiple actors, and therefore demanded a multi-sited approach. I started in Washington, DC, where I interviewed US government officials who put me in touch with informants in Stuttgart, Germany, the headquarters of US Africa Command. In turn, these informants put me in touch with other military, diplomatic, development, and humanitarian workers in Senegal, Burkina Faso, and Niger. I also previously worked at the US State Department and have family ties to the US military, which facilitated access. 

But still, you can never underestimate the usefulness of showing up. Many of my most memorable interviews and points of contact were serendipitous. Similar to most fieldwork, it’s all about cultivating relationships. I primarily used snowball interviewing, which involved seeking additional recommendations from existing contacts and using those suggestions to map out a network of informants. Given their positions in “elite” institutions, many of my informants were very much interested in preserving their anonymity, especially when they offered criticism of work they were doing within those institutions. 

Q: While this new article focuses on migration, your broader book project focuses more on the role that a network of experts plays in constructing a past and predicting a specter of future catastrophe in Sahel. In addition to climate migrants, what other climate issues appear in your book? 

The broader book project, currently titled Sustainable Empire: Nature, Knowledge, and Insecurity in the Sahel, makes the central claim that attending to what has happened historically — and what continues to happen in the West African Sahel — is crucial for understanding the possibilities of just global environmental futures. It supports this claim in a number of ways. First, it explores how environmental knowledge in and from the Sahel helped assemble a conceptual and institutional bedrock for global climate change knowledge. I do this through a critical genealogy of desertification, considered the first global climate change issue in the mid-20th century. I then trace the place of West Africa in predictions about the “coming climate change wars,” reflecting on how racial and gendered fears helped set the stage for what became the global war on terror. The book then concludes with a consideration of the kinds of climate solutions being workshopped in the region, ranging from ongoing security projects to large-scale green-tech projects.


Institutionalizing Child Welfare: An Interview with Matty Lichtenstein

Matty Lichtenstein

How do American child welfare and obstetric healthcare converge? Matty Lichtenstein, a recent PhD from UC Berkeley’s Department of Sociology, studies how state and professional organizations shape social and health inequalities in maternal and child welfare. Her current book project focuses on evolving conceptions of risk in social work and medicine, illustrated by a study of the intertwined development of American child and perinatal protective policies. She is working on several collaborations related to this theme, including studies of maltreatment-related fatality rates, the racialization of medical reporting of substance-exposed infants, and risk assessment in child welfare.

In another stream of research, she has written on social policy change, with a focus on educational regulation and political advocacy, and she has conducted research on culture, religion, and politics. Dr. Lichtenstein’s work has been published in American Journal of Sociology, Qualitative Methods, and Sociological Methods and Research. She is currently a postdoctoral research associate at the Watson Institute for International and Public Affairs at Brown University.

In this podcast episode, Matrix content curator Julia Sizek speaks with Lichtenstein about her research on the transformation of American child welfare — and the impact of that transformation on contemporary maternal and infant health practices.

Excerpts from the interview are included below (edited for length and clarity).

How has the child welfare system changed over the span of time that you study?

I focused my research starting after the passage of the Social Security Act, because that is the major dividing line for American child welfare. Prior to 1935, when the Social Security Act was passed, we had a fragmented patchwork of mostly private child welfare agencies throughout the United States. The passage of the Social Security Act enabled an expansion of funding for state and local public child welfare. The main shift had to do with thinking about what welfare meant, and what it still means today.

In general, when we think about welfare, we are referring to government support for individuals or groups. The main distinction, especially in the 1930s, was between financial support — giving people money when they needed it and couldn’t get it any other way — and providing services, such as funded medical services, educational services, or psychological counseling. Across social work, which was in a way the parent discipline of child welfare, there was a tension there: do we help people by giving them financial aid, or through social services?

The Social Security Act made that distinction quite clear for child welfare services, because the section that focused on child welfare services emphasized that this was about services in general, and financial aid was a separate part of the Social Security Act for families. One of the things that needed to be figured out was, what is child welfare, and how do you best serve children?

I’ve found in my research that there was an increased emphasis in the 1930s and 40s on the argument that child welfare should serve all the various needs children have. It was not just poverty-related needs. In fact, they veered away from poverty-related needs toward psychological needs, medical needs, health needs, etc. Child welfare advocates pushed for more funding and more resources for child welfare. What happened is that public child welfare grew exponentially in the 1950s and 1960s. The number of child welfare workers started rising dramatically. This led to a larger shift in child welfare and thinking about what child welfare meant in the 60s and 70s.

What was the focus of the child welfare system in the 1960s and 70s?

One of the major findings of my dissertation conflicts with the conventional narrative of child welfare history. The classic narrative is that the late 50s and 60s saw the discovery of child abuse as a social problem. Before then, scholars argue, nobody was talking about child abuse and neglect, and social workers and the public did not see it as a problem. And then by the 60s, it became a public and political issue, and you saw a number of laws being passed to mandate reporting of child abuse. This led to the creation of child welfare as we know it today, which is heavily focused on child abuse prevention and response.

The problem was that, as I dug through more archival resources, I found that that just wasn’t the case. The most damning piece of evidence I found was a publicly available report put out by the Children’s Bureau in 1959, which stated that 49% of public child welfare in-home services related to abuse and neglect. This was in 1959, when current scholars were saying nobody talked about abuse and neglect.

I spent a few months in a sort of existential crisis: what is the meaning of my dissertation if everything is wrong? Eventually, I figured out that not everything is wrong, and that a lot of what was written about the history of child welfare was correct. There was much more of an emphasis on child abuse. But what it missed was this larger moment of transformation in child welfare.

What I show is that it’s not so much that child welfare agencies rediscovered child abuse, as much as they relinquished (sometimes willingly and sometimes unwillingly) jurisdiction over most other child welfare issues, including poverty, health issues, and education, and retained jurisdiction only over child abuse and child neglect. I show that this happened largely due to larger trends in the American welfare state, specifically welfare state retrenchment and an increasing focus on efficiency in welfare governance in the late 60s and 1970s, which demanded that child welfare focus on issues that could be easily defined and services that you could put a price on.

The Children’s Bureau could no longer say they serve all of the needs of the population of children. Instead, there was an increasing shift toward, what is the problem you’re here to resolve? There were advocates that pushed for more focus, but it was all part of this larger shift in the American welfare state.

I also emphasize that the massive expansion of child welfare — that growth of staffing and funding — was also made possible by laws saying, you need to report child abuse. Where do you report it? To a child welfare agency. So now there were thousands of child welfare workers. It had unintended consequences. All the child welfare workers who were supposed to solve all of children’s problems were now there to solve one problem: responding to the increasing number of reports of child abuse and neglect.

How was the category of child abuse and neglect defined, and how did it transform over time?

Early research that tried to define what it meant to have abusive parents was primarily in medical journals. That was usually based on things like X-rays of children with broken bones and trying to figure out, was this an accident, or who caused this? There were also psychiatric evaluations of parents saying, what is wrong with parents who do this? It was a diagnostic model of approaching child abuse and neglect. The cases they were referring to were usually fairly severe cases of child abuse and neglect.

Originally, a lot of the laws addressed medical professionals, but they quickly expanded, in part because medical professionals pushed back and said, we can’t be the only ones mandated to report this. And so it quickly started to expand throughout the 1960s and 1970s to include professionals across the board who have any sort of interaction with children, including anyone in an educational setting, anyone in a medical setting, or people who work in funeral homes, for example. They became mandated reporters, which means they were supposed to be penalized if they did not report what were often very vaguely defined forms of abuse and neglect.

This varied greatly across states. Every state had different laws and different sets of mandated reporters, but child welfare agencies across the country started to receive a skyrocketing number of reports. This does not mean that everyone was reporting every suspicion, but there were enough reports pouring into child welfare that they had to figure out what to do with all these reports. In the 1970s, and increasingly in the 1980s, that forced a reckoning with the question of how to define child abuse — and how to figure out if what’s happening is child abuse and neglect.

Out of these millions of reports that started pouring in during this era, the majority were usually unsubstantiated. In the mid-1970s, usually around 60% of reports were unsubstantiated. The majority of reports that were substantiated were neglect reports that were highly correlated with poverty. The rate of substantiated reports of physical neglect was eight times higher among low socioeconomic-status children than among other children. So you had a broad category of neglect, which could include everything from passively allowing your child to starve to leaving your child home alone for a few hours when you go out to work. There was a huge range that varied by county and state.

The question then became, if you have this huge number of reports coming in, and the majority of them are not even abuse and neglect, or it’s not clear if it’s neglect or poverty, how do you create a system to prevent and treat a problem that we’re not even sure exists? And that’s really where you started to see this focus on risk. Child welfare and medical professionals affiliated with child welfare began to develop practical risk assessment tools to determine the risk that there’s an actual case of child abuse happening, or that it might happen in the future. These tools had all sorts of problems built into them.

What was wrong about the risk assessment tools that professionals were using?

In the 70s and 80s, the tools were often built on what was called a consensus approach to risk assessment. That was based on what social workers considered risk variables. This approach was deemed very problematic by the 1990s, but such tools were still widely used for the first 20 or so years. These tools tended to incorporate all kinds of variables having to do with the environment of the child. There may not have been any sign that the child was harmed directly, but you look at the environment and try to assess if there are risk variables there. That had to do with everything from the income status of the family to health issues of the parents to the marital status of the mother.

Childcare access could be a risk factor, as well as issues like the stability of the home. In the 1970s, there were risk assessment tools that had factors like, do the parents take this child to movies? Do they have a camera? Do they take the child fishing? Does the child have a mattress? You can see that it’s really hard to disentangle poverty from this.
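To illustrate why instruments like these were so hard to disentangle from poverty, a consensus-style tool can be sketched as a simple weighted checklist summed to a score. Every item name and weight below is invented for illustration (the mattress, camera, and movies items echo the 1970s-era factors mentioned above); this reproduces no real instrument:

```python
# Toy sketch of a "consensus" risk checklist: each item a caseworker
# checks contributes a fixed weight, and the weights are simply summed.
# Items and weights are hypothetical. Note how many are really proxies
# for material deprivation rather than evidence of harm.
CHECKLIST = {
    "no_mattress_for_child": 2,
    "no_camera_in_home": 1,
    "child_not_taken_to_movies": 1,
    "single_mother": 2,
    "parent_health_problems": 2,
    "overly_dependent_on_relatives": 1,
}

def consensus_risk_score(observed: set[str]) -> int:
    """Sum the weights of every checklist item observed in the home."""
    return sum(weight for item, weight in CHECKLIST.items() if item in observed)

# A poor but safe household trips four poverty-linked items:
poor_but_safe_home = {"no_mattress_for_child", "no_camera_in_home",
                      "child_not_taken_to_movies", "single_mother"}
print(consensus_risk_score(poor_but_safe_home))  # 6
```

The mechanical point is that a family can accumulate a high score purely from poverty proxies, without any item that directly indicates abuse.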

There were also sometimes cultural factors. There was an early tool that was approved by the predecessor to the Department of Health and Human Services that asked whether the parent had wider family support in child care, and whether they were overly dependent on their family. That gets at something that is cultural, not just economic: studies have found that in families of color, there’s more interdependence and less of an emphasis on nuclear family units, so this could be problematic.

Drug or alcohol use was assessed as a risk factor. When you look at earlier surveys about child welfare services before this transformation toward a focus on child abuse, they would talk about health and family issues as issues of child welfare, but they weren’t risk factors for abuse. Child welfare might intervene if there was some sort of health issue with a parent, but that was seen as distinct, whereas when you look at the studies in the 1970s and 1980s, those same factors were not just a health issue, but a risk factor for abuse or neglect. So you saw a trend of structural inequalities and health issues turning into risk factors.

So instead of trying to say, how do we help this family as a whole, it became, how do we assess whether the parent is harming the child? It’s an approach in which parent and child are seen as distinct units, and the question is, are they in some sort of conflict? What’s interesting is that this is a relatively rare problem, in which there’s an intentional effort by the parents to harm the child. It certainly happens, but it’s relatively rare.

How does what you’ve learned matter for people thinking about child welfare policy today?

First, child welfare is under-equipped for multi-dimensional problems. In some states, they might have access to more resources, and in other states, the only thing they can really do is child removal or interventions that are often quite disruptive to the family. Having child welfare in charge conflicts with the multidisciplinary approach that’s favored by most professionals.

Second, child welfare is associated with an enormous amount of trauma, especially for families that are low-income and for families of color in the United States. Fifty percent of African-American children in the United States today have experienced a child welfare investigation — one out of two. That’s just crazy. Huge numbers of children are experiencing these kinds of investigations. Perhaps some are very minimal, but some of them are not going to be so minimal.

What we have is potentially traumatic family surveillance and separation that’s intrinsically linked to child welfare, because no matter how helpful or well-meaning a child welfare worker might be, ultimately child welfare has the authority to take your child away, possibly forever. Even if they do that rarely, it can still be something that is laden with fear and anxiety for families.

Adding to that, lower standards of evidence are applied in child welfare proceedings, so that makes it particularly problematic to have child welfare involved in cases of substance-exposed infants, especially because (at least based on the limited data we have, for example, for California), a significant percentage of these infants are taken away from their mothers. Taking a newborn away from their mother is not necessarily an evidence-based approach to dealing with substance use issues. But the paradigm of child welfare is not necessarily to approach the best interests of the family as a whole. The paradigm of child welfare is to reduce and mitigate risk of future child abuse and neglect.

There have been significant shifts in child welfare over time. My research largely ends in about 2000. In the first couple of decades of the 21st century, there has been a concerted effort by child welfare agencies on every level to try to counter some of the intense racialization and income inequality that is reproduced by the child welfare system. We’ve seen a dramatic decline in child removals. For example, in New York City in 1995, there were 50,000 children in foster care. In 2018, there were 8,000 children in foster care. That is a dramatic decline. However, even though there were 8,000 children, there have been an enormous number of children investigated, and in New York City in 2019, 45,000 cases were in preventative services. So you still have a lot of child welfare involvement. What that means for families is not really clear yet.

The second major shift is that there’s been an intensification of the focus on risk assessment. We have seen the development of quite sophisticated risk assessment tools, not just the consensus tools, but actuarial tools and algorithmic tools that use computational methods to assess risk. And there have been a lot of critiques of some of these tools. The main issue is, do these tools funnel multiple problems, many of them poverty-related, into child welfare? And even if racial disproportionality in some states has declined, we still have a lot of racial disproportionality in child welfare, and income inequality continues. We don’t have enough data on that to fully assess it. And so we’ve continued to have significant issues with child welfare today, even as it has changed in this new century.

What are the approaches that different states take to the issue of infants who have been exposed to substance use during pregnancy?

In the 1980s, you have an increasing number of reports coming into child welfare of substance use during pregnancy, and a lot of this was highly racialized, in terms of how it was conceptualized. During the 1980s, this problem received a lot of media coverage. And what that means is that state legislators felt they had to do something; they had to respond in some way. And their options were basically to say, well, we can mandate medical intervention in such cases, we can criminalize these women for harming their children and mandate essentially law enforcement interventions, or we can mandate civil interventions through child welfare. The current scholarship on this period — and really on this issue — tends to focus a lot on criminalization, on how pregnant women are thrown into jail and how women are jailed or prosecuted for these kinds of uses. And then there’s also a lot of conflation of child welfare interventions and medical interventions, all part of this larger criminalization and policing of pregnant women. And there’s a lot to be said for that framework. But I think it’s actually really important to distinguish between those things, because criminalization is actually relatively rare compared to the thousands of women who are reported in each state to child welfare every year. By far the predominant response is child welfare reporting.

So how do we essentially manage and mitigate this risk of substance-exposed infants? Child welfare has this risk prevention framing, and also, it’s supposed to be dedicated to protecting children. So they are the perfect response. And what’s interesting about this is that child welfare increasingly across states becomes the primary authority for intervening in such cases, even as simultaneously, the professional consensus increasingly converges on the idea that we need a multidisciplinary response to the issue of substance-exposed infants. If you’ve read reports that are put out on this issue of substance-exposed infants, including from the federal government, the consensus is that we need doctors and social workers and financial aid, and perhaps even law enforcement. Everyone needs to work together to deal with this issue of substance-exposed infants. But in practice, the state laws overwhelmingly favor child welfare interventions, and child welfare is mandated to mitigate risk of child abuse and neglect. They’re not there to provide a multidisciplinary approach. They can and sometimes they do; it varies greatly by state. But that’s not their primary mandate. And there are very concrete consequences to having a child welfare response to this issue.

Listen to the full podcast above, or listen and subscribe on Google Podcasts or Apple Podcasts. For more Matrix Podcasts, including interviews and recordings of past events, visit this page.




How CRISPR Became Routine

A visual interview with Santiago Molina, a recent UC Berkeley PhD, on the normalization of CRISPR technologies and the new era of gene editing.

Santiago Molina

Santiago J. Molina (he/they) is a Postdoctoral Fellow at Northwestern University, with a joint appointment in the Department of Sociology and the Science in Human Culture program. They received a PhD in Sociology from the University of California, Berkeley in 2021 and a BA from the University of Chicago. Their work sits at the intersections of science and technology studies, political sociology, sociology of racial and ethnic relations, and bioethics. On a theoretical level, Santiago’s work concerns the deeply entangled relationship between the production of knowledge and the production of social order. Their research included fieldwork at conferences and in labs around the Bay Area.

In this visual interview, Julia Sizek, Matrix Content Curator and a recent PhD graduate in Anthropology from UC Berkeley, interviewed Molina about their research on CRISPR, the genetic engineering technology that has reshaped biological research by making gene editing easier. This new tool has excited biologists at the same time that it has worried ethicists, but Molina’s research shows how CRISPR has become institutionalized — that is, how CRISPR has become an everyday part of scientific practice.

This image depicts a model of the CRISPR-Cas9 system. How did you come to encounter this model of CRISPR, and how does CRISPR work? 

3D-printed interactive model of Cas9.

This model was passed around the audience at a bioethics conference in Davis, California back in 2014 when I started my fieldwork. I remember the speaker holding it high above his head and pronouncing, “This! This is what everyone is so excited about!” While he meant it as a way to demystify the new genome-editing technology, a 3D-printed model of a molecule doesn’t tell us a lot about the process behind the technology. 

What is a bit disorienting is that technically, this isn’t a model of CRISPR at all, but a model of Cas9 (CRISPR-associated protein 9, a kind of enzyme called a nuclease) in white, an orange guide RNA, and a blue DNA molecule. To put it really simply, CRISPR (clustered regularly interspaced short palindromic repeats) describes a region of DNA in bacteria where the molecular “signatures” of viruses are stored so that the bacteria can defend themselves. This bacterial immune system was repurposed by scientists into a biotechnology. At its core, CRISPR-Cas9 technology is just the white and orange parts. The Cas9 does the heavy lifting of cutting DNA, and the guide RNA, or gRNA, acts as the set of instructions that the Cas9 uses to find the specific sequence of DNA where it should cut.

While people use CRISPR as a shorthand for the entire CRISPR-Cas9 system, you won’t actually find a single Eppendorf tube in a lab marked “CRISPR.” As a process, the way scientists get this to work is by adding Cas9 and the “programmed” gRNA to cells via one of several delivery techniques, such as a plasmid or viral vector, so that the Cas9 will make a specific DNA cut. In the years since then, scientists have developed a whole toolbox of different Cas proteins, and each can make many different kinds of modifications. 
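The targeting logic described above, in which a guide RNA steers Cas9 to a matching DNA sequence sitting next to a short “NGG” motif (the PAM), where the enzyme then cuts, can be sketched in a few lines of Python. This is purely an illustrative toy under stated assumptions: the function and sequences are invented for this example, and it ignores the reverse strand, mismatch tolerance, and everything else a real guide-design tool handles:

```python
import re

def cas9_cut_sites(dna: str, guide: str) -> list[int]:
    """Return indices where Cas9 would cut `dna` for a given 20-nt guide.

    Toy model: Cas9 binds where the guide matches a protospacer that is
    immediately followed by an "NGG" PAM, then makes a blunt cut 3 bp
    upstream of the PAM.
    """
    pam = "[ACGT]GG"
    sites = []
    # Zero-width lookahead so overlapping candidate sites are all found.
    for m in re.finditer(f"(?={re.escape(guide)}{pam})", dna):
        sites.append(m.start() + len(guide) - 3)
    return sites

guide = "GGGTGGGGGGAGTTTGCTCC"    # an arbitrary 20-nt guide sequence
dna = "TTAA" + guide + "TGGCCAA"  # protospacer followed by a "TGG" PAM
print(cas9_cut_sites(dna, guide)) # [21]
```

The returned index marks the cut point 3 bp upstream of the PAM, which is where Cas9’s blunt double-strand break falls in the simplest textbook picture.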

What is interesting about this sociologically is that CRISPR has a wide scope of potential application, and early in its development, every possible use was on the table, from bringing back the wooly mammoth to ending world hunger. This meant that exactly what it would be, ontologically, was really open. Scientists would describe the technology as a pair of scissors, as a scalpel, as a find-and-replace function for DNA, a guided missile, a sledgehammer, etc. I became obsessed with these metaphors because they were traces of the active construction of CRISPR as a technology. 

My research takes this focus on the development of genome editing technology and reframes it as a problem of institutionalization, which sociologists generally understand as the process by which a practice acquires permanence and reproducibility in society. I look at how the ideas around what the technology is, how it should be used, and what it should be used for come to be settled, legitimized, and eventually taken for granted.

CRISPR has recently been in the news, not only because of Emmanuelle Charpentier and Jennifer A. Doudna’s 2020 Nobel Prize, but because of the 2018 announcement that a Chinese researcher had used CRISPR to gene-edit babies. How has the media covered CRISPR and the ethics of the technology? 

A crowd of photographers and reporters gearing up for He Jiankui’s presentation in Hong Kong.

Most media articles go something like this: “The idea that scientists can modify your DNA at will sounds like science fiction. But now it’s reality!”

This framing does important work to normalize futures that are in active construction. When newspapers and magazines cover CRISPR, they are bridging the social worlds of science and civil society and making concrete a very fluid social process of knowledge production and technological development. In doing so, some media coverage amplifies the hype around CRISPR and genome editing.

That said, it’s more complicated than saying they sensationalize it, because most coverage draws directly from interviews with actual genome-editing scientists, and they do their best to represent the science accurately. Instead, I think about media coverage as part of the cultural side of institutionalization. News articles offer interpretive scripts through framing that audiences can use to make sense of what CRISPR is, how it is used, and what the ethical issues are. This “making sense” is part of how genome editing is coming to be seen as a normal practice in biomedicine.

The distinction between investigative reporting and general media is important to keep in mind. Take, for example, the controversy surrounding the birth of genetically modified twins in Shenzhen, China in November 2018. If it wasn’t for keen investigative reporting by Antonio Regalado of the MIT Technology Review ahead of the Hong Kong Summit, it is likely that the controversy would have unfolded differently.

The image above is a photo of a group of reporters during the summit taking pictures of He Jiankui, the scientist behind the clinical trial in Shenzhen that aimed to use CRISPR-Cas9 to confer genetic immunity to HIV in embryos. Subsequent media coverage of the controversy drew from interviews with high-profile, U.S.-based scientists in the field. These scientists argued that He Jiankui was an outsider on the fringe of the field. The resulting articles framed him as a “rogue,” “a mad scientist,” and a “Chinese Frankenstein.” This “bad actor” framing tells us that on the whole, the field is responsible and CRISPR itself is good, essentially repairing the crisis.

However, in alignment with more recent investigative reporting, my ethnographic research found that a handful of U.S.-based scientists had helped He Jiankui with his project. He had earned his PhD at Rice and was a postdoctoral fellow at Stanford. Scientists at UC Berkeley had given him technical advice on the project, as well. To me, this suggested that the “bad actor” framing — and the Orientalism surrounding how he was talked about — obfuscated the broader moral order of genome editing.

CRISPR is a relatively contemporary invention, but the idea of genome editing has a much longer history. How does this history appear in your research, and what does Charles Davenport have to do with it?

Photograph of Charles Davenport hanging in the common area of one of the buildings at Cold Spring Harbor Laboratory.

It’s interesting how little history appeared in my research. There is a sort of presentism that comes with “cutting-edge science.” CRISPR technology is part of a lineage of genetic engineering tools, going back to the 1970s, when recombinant DNA (rDNA) was invented. This biotechnology, rDNA, allowed scientists to mix the DNA of different organisms. It gave rise to a whole industry of using engineered bacteria to produce biologics like insulin, as well as small molecules. The history of rDNA is important because the debates around its use in the 1970s came to be the dominant model of decision-making surrounding new technologies in the United States. Indeed, a handful of the top scientists from these debates have held top positions on committees that have been tasked with debating the ethics of genome editing over the past five years.

Charles Davenport predated these debates, and has been largely an invisible figure for modern genome-editing scientists. Davenport was a prominent scientist in the early 20th century. He was a eugenicist and racist scientist who served as the director of Cold Spring Harbor Laboratory, a private, non-profit research institution, from 1898 to 1924. While at CSHL, Davenport founded the Eugenics Record Office, which published research to support the eugenics movement. I found this photo of Davenport in Blackford Bar, the pub at Cold Spring Harbor Laboratory, where I went to the first meeting, titled “Genome Engineering: The CRISPR/Cas Revolution,” in 2015. While the scientific community eventually came to reject Davenport, and the eugenics movement fell out of fashion after World War II, this history is important to recognize as we usher in a new technology aimed at eliminating genetic diseases and improving human health. At the conference in 2015, I thought, if Davenport’s ghost had been hanging out at the pub, he would have been thrilled.

The scientists I worked with vehemently rejected the idea that what they were doing could be considered eugenics, or what one scientist called the “E-word.” But people often forget that the eugenics movement in the United States was both mainstream and progressive at the time. Eugenics laws were drafted and passed by Democratic legislators who aimed to address poverty by drawing on the most up-to-date science, medical knowledge, and expert opinion. When this history was brought up at modern conferences and meetings, it was either subtly discredited as fear-mongering or tucked into a panel at the end of the conference to entertain philosophical discussion.

Your research also contends with the way research is conducted between different laboratories, even when many of the plasmids (a kind of DNA molecule commonly used in CRISPR applications) and techniques that they use are proprietary. The shipping area in this image is where Addgene, which has been called “the Amazon of CRISPR,” sends reagents and plasmids used in scientific research to laboratories around the world and manages many intellectual property issues. What is Addgene’s role in the scientific process?

Hundreds of plasmids await daily FedEx pickup in Addgene’s shipping room.

While I was doing my research, there was a raging patent dispute between the University of California, Berkeley and the Broad Institute, where each claimed to have invented the technique for modifying mammalian cells with CRISPR. So the proprietary aspects of CRISPR were always in the background. But I think if it wasn’t for Addgene, these concerns would have really slowed down the spread of genome editing.

Addgene is a non-profit organization that mediates the exchange of practices and biological materials between labs. What they do is manage a plasmid repository, a sort of technique library, and fulfill requests for plasmids by sending them to those labs. Because plasmids are central to many biological experiments, and are key for CRISPR-based techniques, scientists rely on the availability of these circular pieces of DNA as a key reagent. Since receiving its first CRISPR plasmid in 2012, Addgene now has over 8,000 different CRISPR plasmids in the repository, and has shared them over 140,000 times with laboratories across 75 different countries. They essentially took over the logistics of CRISPR distribution, moving biological materials from place to place. By doing it at a really low cost, they effectively contributed to what scientists described as the “democratization” of genome editing.

They also keep patent lawyers at universities happy through detailed record-keeping and by electronically managing material transfer agreements (MTAs), which sort out the proprietary issues, via a Universal Biological Material Transfer Agreement (UBMTA). This UBMTA relaxes the institutional constraints on the transfer of biological materials. Scientists love this because it reduces a lot of paperwork.

Last but not least, Addgene contributes to the institutionalization of CRISPR-Cas9 by producing guidelines and protocols that support the use of some of the plasmids. For example, Addgene was the first to develop a textbook for CRISPR. Their CRISPR 101 eBook has been downloaded more than 30,000 times, and their informative CRISPR blog posts had been visited over 500,000 times as of 2019. In these materials, detailed definitions of new genome editing techniques and terms of art are spelled out for curious adopters. Additionally, the scientific team at Addgene works with the scientists who are depositing plasmids to coproduce useful documentation to accompany the plasmids. Addgene does not share plasmids with for-profit organizations, but acts as an up-to-date clearing house and tracker of CRISPR innovations in academic and non-profit laboratories.

As part of your research, you spent time at different labs around the Bay Area to understand how CRISPR research has become an ordinary part of scientific research. Can you walk us through some of these images of lab life and what they show us about how CRISPR has become institutionalized? 

Sculpture of a ribosome in an atrium.


Rows of lab benches.


The first image is of the atrium in one of the buildings I often found myself in for fieldwork. The huge sculpture of ribosomes on the side looks so abstract to me. A lot of these spaces required keycard entry, and for me, the emptiness of some of the spaces made them all the more isolating. I would have to get lost sometimes just to find the right room, where a small group of scientists were discussing the next big breakthrough or the next application of CRISPR-Cas9. The public-facing image of the field was really different from the behind-the-scenes shop-talk environments where I took notes. It was different because it wasn’t open to anybody, and you would need a lot of intellectual and cultural capital to enter those places.

The second picture, to me, represents the ordinary that is behind those barriers of access. Lab benches are workshops. They are shared spaces that are a lot like kitchens in a restaurant. Everything has its place, every tool is in its nook, you might find some remnants of an experiment in the fridge, or old reagents in the freezer. But you can tell there is some fun in the mix. The folks who are working at those benches are doing it because they love it. For these graduate students and postdocs, CRISPR-Cas9 was an exciting opportunity, something that would help them finish their PhD, or if they were an undergrad-volunteer, it was a key skill to move forward. Lab life a lot of times felt banal: scientists moving through their careers, with lots of failed experiments, meetings that could have been emails, day-to-day conflict with coworkers, late hours, etc. I wish people could see the contrast between the hype surrounding something like CRISPR-Cas9 and the on-the-ground struggles of scientists in the lab.

In these pictures below, you show a humorously decorated doorway that tells us a lot about how scientific work happens at a university. What does this tell us about who conducts science, and about equity issues within the lab?

Threshold of the lab as an angry doorway with a top-hat and mustache, hungry for the labor of postdoctoral fellows, undergraduate, and graduate students.

Threshold of the lab as an angry doorway with a top-hat and mustache, hungry for the labor of postdoctoral fellows, undergraduate, and graduate students.

This personification of the lab was interesting to me because it draws attention to those struggles I just mentioned. Of course the decoration is a lovely piece of satire, but scientific discoveries and breakthroughs are the products of years of labor. A lot of this work is done by unpaid undergraduate volunteers, graduate students who are often in precarious financial situations, and some paid research associates, and it is coordinated by postdoctoral fellows. Sometimes, because of the demands of experimental work, lab workers would have to come in in the middle of the night to feed cells, check on experiments, or manage instruments. In the lab I worked in, one research associate worked as a Lyft driver on the side because their salary wouldn’t cover their cost of living. While the hierarchies of labor are still very strong, some universities and labs, like the Innovative Genomics Institute at UC Berkeley, are now requiring that all undergraduate workers be paid. I think this is a step in the right direction, but there are still equity issues both between and within ranks of the lab. 

This disparity is even more extreme when you consider how senior scientists and universities benefit from scientific labor. Social capital in the form of reputation and financial capital both accumulate as a result of this work. Partnerships between university laboratories and the biotech and pharma industries in particular have become commonplace in 21st-century biomedicine. Research examining these partnerships describes this as academic capitalism or neoliberal science. My research adds to this line of social scientific research that has traced this institutional shift, where academic organizations are increasingly adopting the practices and bureaucratic frameworks of for-profit organizations in industry. Those patent disputes I mentioned previously are a good example of this. 

With CRISPR research, as with much other biological research, the institutionalization of scientific norms is essential to conducting scientific research. What does Michael Jackson have to do with that? 

DIY biohazard safety sign posted on the lab doors.

There are three proximate institutions of social control surrounding scientific work, in my view: biosafety, bioethics, and the ethics of research misconduct. This poster is an example of a biosafety rule being operationalized in the lab. It is posted on the doors so you would see it as you exit the lab space to the common area and kitchen. Biosafety essentially aims to contain the materials, reagents, and products of scientific experiments to the lab. Lab managers and principal investigators must fill out detailed forms describing the experiments being done and submit these to the biosafety office at their university. These are then reviewed and evaluated by biosafety experts, who then make recommendations about infrastructure requirements for the spaces where the experiments are conducted and prescribe mandatory training for any personnel conducting those experiments.

Biosafety is a really interesting social institution because it must constantly keep up with new techniques and develop risk frameworks for assessing them. For innovations like CRISPR-Cas9 that are revolutionary, this sometimes requires some finesse. When you consider the modifications being made to bacteria, plants, non-human animals, and human cells, you can bet there is considerable work going into making sure those biologics don’t end up where they aren’t supposed to. Consequently, scientists must follow strict protocols for waste disposal and use the appropriate personal protective equipment (PPE).

But then consider who is doing those experiments. There can sometimes be a disconnect between the official protocols and how they are enacted. This poster captures that disconnect and suggests that more immediate forms of social control might work better in some cases than extensive bureaucratic procedure. Plus, Michael is iconic.

As with any social process, there are bound to be accidents. In the lab I observed, for example, a graduate student accidentally cut himself through his gloves on some broken glass while conducting some genome-editing experiments with lentiviral packaged Cas9. This lentivirus could, in principle, infect any mammalian cell. While he was working under the fume hood, which creates negative pressure to suck up the air where the experiment is being done, there was still a risk that Cas9, which would edit the DNA, could enter his bloodstream. He then went to the post-doc he was working under and the lab manager, who advised him to report it to the Office of Environment, Health & Safety (EH&S). EH&S then told him to go to the student health center. Once at the health center, the grad student with his bandaged hand informed the nurse that his lab was categorized as BSL-3 (biosafety level 3), to which the nurse responded, “What is BSL-3?” He was ultimately fine, as far as we know, but the example shows a further disconnect between the different offices tasked with managing the risks of scientific work.

As genome editing continues to develop as a broader institution in biomedicine, there are going to be accidents, and there is going to be misuse. No number of guidelines or codified norms can prevent that. This is why it is crucial that we continue having debates about the norms governing the use of the CRISPR-Cas9 system, both as a promising clinical technique and as a sociocultural institution. My hope is that these debates will lead to concrete regulatory and legal changes that can more directly shape this technology’s use. 


The Terracene: An Interview with Salar Mameni

Salar Mameni

At the intersection of the War on Terror and the Anthropocene lies Salar Mameni’s concept of the Terracene, which describes the co-emergence of these two terms as a means to understand our contemporary social and ecological crises. Mameni, an Assistant Professor in the Department of Ethnic Studies at the University of California, Berkeley, is an art historian specializing in contemporary transnational art and visual culture in the Arab/Muslim world, with an interdisciplinary research focus on racial discourse, transnational gender politics, militarism, oil cultures, and extractive economies in West Asia. They have published articles in Signs, Women & Performance, Resilience, and Al-Raida Journal, among others.

In this visual interview, Julia Sizek, Matrix Content Curator and a PhD candidate in the UC Berkeley Department of Anthropology, talked with Professor Mameni about their research, working with select images of art discussed in their forthcoming book, Terracene: A Crude Aesthetics.

The concept that you propose in your book, the Terracene, foregrounds the War on Terror as necessary for understanding not only our contemporary political crises, but also our contemporary ecological crisis. Describe your concept, and what it adds to our understanding of the links between terrorism and environmental issues.

My book coins the term “Terracene” in order to bring attention to the role of militarism in enacting the ongoing ecological crises we currently face. I insist that contemporary forms of warfare – such as the infamous War on Terror – are concurrent with and continuations of settler colonial land grabs and habitat destructions that have created wastelands across the globe. In their initial timeline for the Anthropocene, scientists traced the origins of this new epoch to technological innovations in early 19th-century Europe that brought about industrialization. In my view, this is an inadequate historiography that does not take into account longer histories of European settler colonialisms, as well as the ongoing role of militarism in maintaining wastelands. The term “Terracene” is a way of highlighting the terror that is tied to the current geological timeline.

Terror, however, is not the only idea I intend to highlight with the notion of the Terracene. I also take advantage of the sonic resonance of “terr” (meaning earth/land) in the word “terror” in order to direct our attention to the significance of thinking with the materiality of the earth itself. In my work, I consider this through the toxicity of militarism and extractive economies, which turn the earth itself into a weapon that continues to poison even after the troops and the industries have receded. Scholars of environmental racism often highlight the dumping of toxic waste on lands inhabited by racialized, poor, and devalued communities. My book emphasizes the production of “terror” out of “terra,” which can mean the weaponization of the earth itself. Yet, I believe that the very shift of attention to the earth’s many potentialities can also allow for conceptualizing futures out of toxic wastelands. For me, new theories are only useful if they do not simply mount a critique of systems of oppression but also offer new imaginaries as foundations for future directions. Much of my book is attentive to materialities and thought systems that do not align with scientific conceptualizations of ecological thinking as a way of opening up new modes of thought.

Part of the reason you relate the Anthropocene and the War on Terror is their coeval histories. Aside from emerging during the same era, how are the histories of these two concepts — terrorism and the Anthropocene — related?

Yes, the so-called War on Terror, as well as the scientific notion of the Anthropocene, were both popularized in 2001, each proposing a new way of conceptualizing the globe. What is fascinating to me is how each of these ideas revolves around an antagonist: the terrorist in one case, and the Human (Anthropos) who caused climate change in the other.

The question I raise in the book is this: why is it that the term “terrorist” cannot be applied to the Human who has caused deforestations, temperature rise, and oil spills, making the globe uninhabitable for endangered species, as well as threatening the livelihood of multi-species communities globally? Why is the notion of the “terrorist” instead reserved for those who protest the building of oil pipelines on Indigenous lands, or those who resist settler colonialism in places such as Palestine? This tension brought me to see that the idea of the Human (Anthropos) continues to be limited to those engaged in settler colonial ventures, those who are protected against the “terrorist” through the security state.

What do you think the study of art history can bring to the Anthropocene, which is often described through science?

Great question! The book argues that “science” is a provincial worldview that has displaced a plethora of diverse thought systems that are in turn called “art” (or “myth” or “superstition” or “religion”). So my first approach in the book is to question the very art/science divide that disallows those deemed non-scientists to participate in knowledge production. Non-scientists have of course included very large groups, such as women, non-Western knowledge producers, and non-human intelligent beings. This vast array of intelligence left out of “science” says much about the limits and hubris of scientific thought. My book opens up space for artists who think beyond the reaches of scientific ecologies. A part of the book, for example, is dedicated to ecologies of ancient deities. For instance, I consider Huma, the Mesopotamian deity who has been conjured and resurrected by the contemporary Iranian artist Morehshin Allahyari (Fig. 1).

Figure 1: Morehshin Allahyari, “She Who Sees the Unknown: Huma” (2016). Image courtesy of the artist.


As the artist explains, this is the deity of temperatures. Huma’s body is multi-layered and mutative. It has three horned heads, a torso hung with large breasts, and two snake-like tails. Huma is multi-species and multi-gendered and is the deity that rules temperatures. In a time of temperature rise, wildfire, and fevers brought about by the COVID-19 pandemic, Huma is the deity to conjure. Indeed, Allahyari conjures her as a protector, but also builds her out of petrochemicals, the plastic used in 3D printers.

I also take seriously the intelligence of non-human phenomena such as oil. In the book, I consider images of explosions at a Southern Iranian oil field, as documented by the Iranian filmmaker Ebrahim Golestan in a film called A Fire! (1961) (Fig. 2).

Fig. 2: Still from “A Fire!” (dir. Ebrahim Golestan, 1961)

Rather than thinking about the human triumph of putting out the explosive fire, which took 70 days to extinguish, I consider the intelligence of petroleum that refuses to be extracted from bedrock. I call this human/oil relationality “petrorefusal” in order to call attention to the unidirectional master narrative of extraction. What would it mean, for instance, if we understood explosions as petroleum’s refusal to leave the ground? Would engaging such a refusal mean an end to extractive practices at the current industrial-capitalist scale?

Though you are an art historian, you are attentive to the limits of the visual as a mode of sensing the world. How do you bring other modes of sensing into your work, and how does this shape your approach to art history, which is often imagined as a visual discipline?

Yes, the dominance of the visual within traditions of art history cannot tackle the rich sensorial relations that ecological thinking needs. In the examples of the artworks I cite above, for instance, my theories do not arise from the visual aspects of the works alone. In the case of Huma, a visual reading would miss the spiritual and ethical significance of the deity’s conjuring. Instead, my reading of Huma engages with the object’s deep time, a time that dissolves its plastic materiality into the microbial temporality of oil’s production. In this sense, the sculpture is not simply and statically visual or coeval with our present moment. If we focus on the time of oil and plastic, the sculpture moves into a performative, mutative flux of multi-species organisms across temporalities that are beyond our own. The book as a whole treats the visual as embedded within (and inseparable from) multiple sensorial experiences.

How does art add to our understanding of the Terracene?

I coined the term Terracene as a critique of the notion of the Anthropocene. It is meant to question the centering of a destructive Human (Anthropos) at the core of a planetary story. In this sense, I probe the narrative structure of this scientific story of the Anthropocene — a story that is proposed to be a fact. Usually, storytelling is understood to belong to the domain of arts and humanities. By definition, stories are not checked for factual accuracy, but engaged with at the level of the creative imagination. This is precisely what gives stories their power. Stories can build alternate worlds and offer alternatives to how we perceive reality to be. So if the Anthropocene is a story, then surely other stories can be told. The Anthropocene story is a story of the destructive human, which is why I propose that it is better called the Terracene.

What if we began to tell creation stories at the moment of planetary destruction? Indigenous cultures across the world have creation stories that have been vehemently suppressed by destructive (settler) colonial knowledge productions and worldviews. In the book, I make a case for ethical engagements with subjugated forms of knowledge that offer alternatives to thought systems that have brought the Terracene into being. One such story I relate in the book comes from my own vernacular Islamic culture that imagines the world as a sacred mountain balancing on the horns of a bull, the bull standing on the back of a fish, and the fish, in turn, being held up by the wings of an angel.

Fig. 3: Salar Mameni, “Creation Story” (2022)

I argue that such a creation story emphasizes the inter-relatedness and inter-reliance of all things. The world hangs together in a fine balance, with every creature mattering to its overall existence. Art, in this sense, is not an alien other to science, but an equal participant in the creation of worlds we inhabit.



What Happened to the Week? An Interview with David Henkin

David Henkin

We take the seven-day week for granted, rarely asking what anchors it or what it does to us. Yet weeks are not dictated by the natural order. They are, in fact, an artificial construction of the modern world.

For this episode of the Matrix podcast, Julia Sizek interviewed David M. Henkin, the Margaret Byrne Professor of History, about his book, The Week: A History of the Unnatural Rhythms that Make Us Who We Are. With meticulous archival research that draws on a wide array of sources — including newspapers, restaurant menus, theater schedules, marriage records, school curricula, folklore, housekeeping guides, courtroom testimony, and diaries — Henkin reveals how our current devotion to weekly rhythms emerged in the United States during the first half of the 19th century.

Reconstructing how weekly patterns insinuated themselves into the social practices and mental habits of Americans, Henkin argues that the week is more than just a regimen of rest days or breaks from work, but a dominant organizational principle of modern society. Ultimately, the seven-day week shapes our understanding and experience of time.

Excerpts from the interview are included below (with questions and responses edited).

Listen to this interview as a podcast below, or listen and subscribe on Google Podcasts or Apple Podcasts.



What are the different ways people have thought about the week?

The seven-day week does many things for us in the modern world, but we tend to focus exclusively on one of them, and that’s the idea that we have a unit of time that divides weekdays and weekends, work from leisure, profane time from sacred time. The week creates two kinds of days. But by its very structure, the week also divides time into seven distinct, heterogeneous units. Every day is fundamentally different from the day that precedes or follows it. The names we use for the days of the week suggest no numerical relationship between days. The week also lumps time together for us in interesting ways. We talk about what we did this week, what we hope to get done next week. What the week does most conspicuously and powerfully for us in the modern world is coordinate our schedules. It sequesters or regulates the timing of certain activities, especially activities that we try to do in conjunction with strangers.

How did people begin to use the week for stranger sociality?

The best example might be a market day, where you want to only have a public market every so often, and you want to make sure everyone can be there. And everyone remembers when it is and it doesn’t conflict with other things. That’s one model for it. But I argue in the book that it was really only in the early 19th century that large numbers of people began to have schedules that were different from one day of the week to another.

The institutions that helped produce that are varied. They included things like mail schedules, newspaper schedules, school schedules, voluntary associations (like fraternal orders or lodges), and commercial entertainment, like theater or baseball games. The more people lived in large towns and cities, the more they were bound to patterns of mail delivery or periodical publication, and the more likely they were to have regular activities that took place every seven days, or on one day of the week or another. Once they had that, it was a self-perpetuating cycle, because people would then begin to schedule other activities so as not to conflict with them, or to be memorable and convenient. The weekly calendar began to be used to organize these regularly recurring activities, which typically involved strangers and were open to the public.

Today, we often think about having the work week, and then the weekend, if we are so lucky. What are the ways that historians think about this division of either week and weekend, based on work or leisure?

Historians haven’t really thought too much about the weekly calendar at all, but to the extent that they have, they have focused exclusively on this question of the work week. Most commonly, they’ve studied the ways in which organized labor or capital have sought to control or regulate the length, pace, and even the timing of the work week.

The Industrial Revolution brought about a hardening of the boundaries between work and leisure, rather than having leisure bleed into Monday, or having work bleed into Saturday or Sunday. But there is something industrial the week has done for centuries, even for millennia, going back to its biblical origins: the concept of a Sabbath is essentially an industrial one, which says there’s a time for work, and a time for rest or “not work.” That’s how historians have written about it.

Historians have not paid much attention to the role of leisure in organizing weekdays. They have paid attention to the role of leisure in giving special meaning to Sunday, and the great debates over how one should spend one’s Sunday — whether it should be in church, or going to the theater, or whether it must not involve alcohol, or whether it can involve sex, or whether the mail can be delivered. That all features prominently in the historiography of 19th-century America. But few have noticed that people’s lives have these other weekly rhythms, too.

What were the sources you drew upon to come to your conclusions about how the week is shifting and changing?

There were two kinds of sources. The first is a bit boring, but phenomenally important, which is that if you look at any newspaper or city directory, or anyone’s account of their lives, you suddenly realize how many activities they engage in that are pegged to the week, whether it’s going to musical societies or temperance lectures or anti-slavery organizations. You notice that they’re organizing by the week. It’s glaring at you and in plain view, but if you don’t ask the question, then you won’t actually see it. We know that newspapers typically came out once a week, but on which day of the week did they come out? Was it the same? Did it vary? Things like that don’t require a huge amount of digging. It just requires asking the question. You can basically ask that question to almost every public document from the first half of the 19th century in the United States, and those documents that register life in an urban or semi-urban society create a thick catalogue of weekly activities, obligations, and habits.

You also look at diaries. What are some of the insights you can get from diaries, and how did the practice of diary-making change during the period of time you’re looking at?

Diaries tell us whether people went to French class on a Wednesday or not, but the cool thing that they do, along with correspondence and other kinds of recollections, is allow people to narrate their own experiences. Those are fascinating because you can not only see what they did, but how they remembered — or sometimes failed to remember — what day of the week it was. One of the things I came to be especially impressed by during the course of my research for this book was the link between the week and memory. We can use diaries as the main example, because that’s probably the single source type that I immersed myself in most deeply. Diaries are not hard to find. They are everywhere. The challenge there was to spend years looking at as many of them as I could, then thinking about the various kinds of archival biases I needed to overcome to make sure I was looking at a broad range of diaries.

Diary-keeping is a very old activity. I would say it became a mass practice in the United States in the early 19th century. In New England, it was somewhat widely practiced even in the 18th century, but became much more so in the 19th century, and the 19th century also saw the rise of the pre-formatted diary book. It had been introduced as a consumer good in the United States in the 1770s, but totally bombed. No one really wanted such a thing. Instead, people used almanacs, with their standard calendar format, as a material artifact. Almanacs are organized around the month, and they tend to focus on naturally observable things, like the weather. People didn’t really see any need for a pocket diary that you could write stuff in. But by the 1820s, these were suddenly quite popular. The most common format was six days to a spread, sometimes seven. It conditioned people to think about their lives in chunks of time that were much smaller than a month, but bigger than a day.

You mentioned that a lot of historians of industrial capitalism have focused on the work day. How do your insights about the week bear on that focus on the hour?

The hour is by far the time unit that has been of greatest interest not only to historians of labor, but also to historians of time, who have been far more interested in the clock than the calendar, in part because the clock is a mechanical device, and we tend to look for technologies to explain fundamental changes in temporal consciousness, whereas calendars don’t seem to be that kind of technology. The week is not measured any more precisely today than it was 100 years ago, or even 500 or 1000 years ago. The hour is very much associated with punctuality, and with discipline. The 19th century is really also when large parts of the world began calculating hours the way we do today, which is to conceive of the hour as 60 minutes, and as 1/24 of a full daily cycle. That is not how most societies used to define it; they defined the hour as 1/12 of the variable amount of daylight.

When you read about the week, you realize that you’re looking at a unit of time that doesn’t fit into any of the big paradigms that have drawn our interest to the hour. We’re interested in the hour because we think that pre-modern time was natural and observable. Modern time is homogeneous. It’s arithmetically calculable, and fundamentally alienated from nature. But the week is equally artificial. It’s not actually rooted in natural rhythms, and it’s not confirmed or correctable by observable natural phenomena. It’s very rigid and artificial, but it’s also very, very old. So once you stop assuming that clock time is the way to look for the hallmarks of modernity, I think it opens up new ways of being interested in the week. The week wasn’t even a universal system of any kind in large parts of the world, including East Asia, which did just fine without thinking of the seven-day cycle as a timekeeping register of any kind. My research into the week makes me think of the hour as a less apt symbol for the difference between modern and pre-modern timekeeping. The week is a heterogeneous timekeeping system. The homogeneity of time is a powerful feature of modern timekeeping, but the seven-day week says that no two days are alike. We speak about daily life, everyday life, but the week resists that whole notion. It insists that no two consecutive days are substitutable. It would seem to correspond with pre-modern notions of time movement and heterogeneity that used to interest anthropologists about timekeeping in primitive societies, and yet it is fundamentally modern and has only in the last 100 years become a global timekeeping system.

The week is more about the calendar that you keep, and not about the town square, which doesn’t raise a different flag on Mondays or Tuesdays. It raises the question about the way that the week has been seen to be subpar, or irrational. There have been different projects to try to remake the week into something that is more like a clock tower. What have some of those projects been?

There have been three big ones. They’re all big, because they all represent an attack on the seven-day week from very powerful, and in many other respects, successful revolutionary movements.

The first was the French Revolution, which sought to rationalize and standardize measurements of all kinds, and succeeded. Many of the ways in which we measure things, especially outside the United States, are a product of the French Revolution and its belief in enlightened rationality. The French Revolution also had another gripe with the week, apart from the fact that it’s awkward and irrational, which is that it seemed to be the fundamental anchor of the power of the Catholic Church in old regime France. So the French revolutionaries created a new calendar. They not only renamed months and years, but they also more radically introduced a 10-day week, called a décade. It was fundamentally different from the seven-day week. And it was a failed experiment.

The next big one was the Soviet attack on the week. The Soviets were mostly interested in continuous production in factories, but they also wanted to undermine the power of the Russian Orthodox Church. They first went to a five-day week, then a six-day week, and for a time weeks were not coordinated. That was the part that had to do with continuous production, similar to a hospital or any other enterprise that runs around the clock: I have one day off, but my best friend or my wife might have another. That failed, in part because of resistance to having a non-coordinated week.

The third attack is less well known, but it represents American and European corporate capitalism and the rationalizing reforms favored by big business, which by World War One had largely succeeded in creating a universal system of timekeeping. That system gave us things like time zones, which divide the world into 24 zones, and a line marking where the day officially ends and begins, somewhere in the Pacific Ocean antipodal to Greenwich, England. Or daylight saving time, the idea that you can manipulate the clock for various social or economic benefits. All these things are products of what my colleague Vanessa Ogle calls the global transformation of time between 1880 and 1920.

The one thing that many of those same reformers wanted to do — and failed to do — was to tame the week by making it an even subdivision of months, and especially of years. And that’s not a very big change, right? They’re not making the week longer or shorter. They’re not making it non-coordinated. All they’re doing is saying that at the end of every year, there’ll be one day, or two if it’s a leap year, that are blank. Most proposals to tame the week, as I would call it, or to reform the week, simply asked for one or two blank days that would have no weekly value. The purpose was for the cycle of weeks to be 364 days, not 365, and therefore divisible by seven, so that every January 28 would be a Monday. The League of Nations took the idea up and considered it, but rejected it. Many people assumed that this was the wave of the future, but instead it suffered the fate of Esperanto, not the fate of time zones.
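The calendar arithmetic behind these blank-day proposals can be checked in a few lines. This is a hypothetical illustration, not a reconstruction of any one historical scheme; the function name `weekday_index` and the epoch convention are my own.

```python
# A "blank day" reform of the kind described above: the year contains 52 whole
# weeks (364 counted days), plus one or two extra days that carry no weekly
# value. Because 364 % 7 == 0, every date falls on the same weekday each year.

def weekday_index(day_of_year: int, years_elapsed: int, year_length: int) -> int:
    """Weekday (0-6) of a given date, `years_elapsed` years after an epoch.

    `year_length` is the number of days that count toward the weekly
    cycle: 364 under the reform, 365 in an ordinary (non-leap) year.
    """
    total_days = years_elapsed * year_length + (day_of_year - 1)
    return total_days % 7

# Under the reform, January 28 (day 28 of the year) lands on the same
# weekday forever:
assert all(weekday_index(28, y, 364) == weekday_index(28, 0, 364)
           for y in range(100))

# With an ordinary 365-day year, the weekday drifts by one each year,
# because 365 % 7 == 1:
assert weekday_index(28, 1, 365) == (weekday_index(28, 0, 365) + 1) % 7
```

The whole proposal reduces to making the counted year a multiple of seven; nothing about the week itself changes, which is why reformers saw it as such a modest ask.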

Meanwhile, the week was entering, without much resistance, all these societies that never had one. If I were a historian of Japan, I would really want to study the cognitive, cultural, and political processes by which a society that had never counted continuous seven-day cycles suddenly began organizing not only its work life, but life more generally, around this complete innovation. It’s not flashy like the internet. But it is a technology, and it was completely new in Japan. It’s a different story in the United States, where the technology was quite old, and was doing new things for people without anyone really commenting on it.