Article

The Effects of Reparations: A Visual Interview with Arlen Guarin

Arlen Guarin

What are the impacts of reparations on the lives of victims of violence? Arlen Guarin, a PhD Candidate in Economics at UC Berkeley, studies the effects of policies that aim to reduce poverty and inequality, including reparations given to victims of human rights violations in Colombia.

His research draws on tools from applied econometrics to identify the causal impacts of various policies, linking large administrative datasets that capture a broad range of outcomes, including labor market outcomes, consumption, health, and human capital formation. Through careful analysis of these data, he shows how unrestricted reparations improve the lives of recipients.

For this interview, Matrix content curator Julia Sizek asked him about a working paper that he developed with Juliana Londoño-Vélez (UCLA) and Christian Posso (Banco de la República). 

 

This figure displays the frequency of human rights violations during Colombia’s internal conflict, based on the date when the victimization (or human rights violation) occurred. Source: Authors’ calculation using Unified Victims’ Registry (Registro Único de Víctimas, RUV) data from the National Information Network Subdirectorate (Subdirección Red Nacional de Información, SRNI).

 

The internal armed conflict of Colombia, which has included a prolonged conflict between the FARC-EP (Revolutionary Armed Forces of Colombia-People’s Army) and the government, has been a central part of Colombian politics since the 1960s. Can you describe the history of the conflict, and how the Colombian government decided to address the effects of this conflict through a reparations program?  

Colombia has had a very long internal armed conflict, the most prolonged in the Western Hemisphere. In the mid-1960s, some left-wing rebel groups like FARC-EP and ELN (National Liberation Army) emerged in remote regions of the country. In the 1980s, the violence escalated as right-wing paramilitary groups developed in order to contain the emergence of left-wing guerrillas and protect landowners and drug lords who were involved in the increasingly profitable cocaine trade. This intensified conflict caused an increased number of attacks against civilians. 

Between 1980 and 2010, the conflict claimed hundreds of thousands of lives, and almost nine million people were affected. Attacks were widespread, with rural and poorer areas disproportionately affected by the violence. The majority of victims were forcibly displaced; most of the remainder had family members who were forcibly disappeared or murdered.

After a failed peace negotiation between the government and FARC-EP in 2002, violence peaked, as did victimizations of civilians. Following the peak, the number of victimizations decreased as Colombia attempted to transition toward peace and reconciliation. In 2005, Colombia demobilized paramilitary groups and reintegrated them into civilian life through the Justice and Peace Law. And in 2016, the government negotiated and signed a peace treaty with FARC-EP.

As part of its attempts to transition into post-conflict reconciliation, the government passed the Victims’ Law in 2011. Considered one of the world’s largest and most ambitious peacebuilding and recovery programs, the law seeks to award reparations by 2031 to 7.4 million individuals victimized by guerrilla, paramilitary, or state forces. (While approximately 8.9 million people were victimized, only 7.4 million are eligible for reparations today, as some are deceased or unreachable.) In addition to providing reparations, the law aims to restore dispossessed lands, award humanitarian aid to households in emergency conditions, and enhance access to micro-credit and subsidized housing.

The Victims’ Law has personal significance to me. I was born and raised in a rural Colombian town called Granada. When I was nine years old, the conflict dramatically intensified there. Bombings and massacres claimed the lives of many of my neighbors. Innocent people were routinely taken away for “questioning,” and we would later learn they had been murdered.

In 2014, I heard that some of my neighbors and relatives had received the reparation. Shortly after that, I arrived at UC Berkeley to start my PhD. I discussed the idea of studying the impacts of the Victims’ Law with one of my fellow students, Juliana Londoño-Vélez. We decided to work together on the project, and she encouraged me to start assembling the necessary data.

This figure plots the number of victims of each type of victimization, as tracked by the Colombian government. Because the Colombian state tracked reparations by the harms suffered by individuals, a victim can be counted under both forced displacement and homicide or forced disappearance if they were forcibly displaced and also had relatives who were victims of homicide or forced disappearance. The category “other” includes victims of torture, rape, or kidnapping. Source: Authors’ calculation using RUV data from SRNI.

Over seven million Colombians — more than ten percent of the population — suffered as a result of the conflict. Can you describe how the different types of victimization are being compensated through the 2011 Victims’ Law? 

Almost one in five Colombians is a victim of the conflict, or approximately 8.9 million people. During the last three decades, nearly eight million individuals were forcibly displaced, and 1.2 million people had their relatives murdered or forcibly disappeared. Thousands of others were raped, kidnapped, tortured, injured by landmines, or forcibly recruited as minors.

The Victims’ Law aimed to award reparations to the nearly 7.4 million people who registered as victims, including those who suffered forced displacement, homicide, forced disappearance or kidnapping, rape, injury from landmines, or other injustices. For those who died or disappeared during the conflict, reparations were awarded to their family members in their stead. The law also defined the size of the reparations and indexed them to the monthly national minimum wage (currently about $250 USD), a figure that changes each year. Reparations to victims or their families are delivered at the household level, and the size of the reparation depends only on the type of victimization: victims whose relatives were murdered or forcibly disappeared receive 40 times the minimum wage (approximately $10,000 USD), and victims of forced displacement receive 27 times the minimum wage.
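To make the scale concrete, a rough back-of-the-envelope calculation (using the roughly $250 USD monthly minimum wage cited above) puts the payment for homicide or forced disappearance at about 40 × $250 ≈ $10,000 USD, and the payment for forced displacement at about 27 × $250 ≈ $6,750 USD; the exact amounts shift each year as the minimum wage is adjusted.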

The amount of money is often sizable for the receiving victim, especially since many Colombians earn below the minimum wage.  For the population we study, the average reparation represents more than six years of income and thereby has the potential to improve victims’ wellbeing in the long run. The goal of our research project was to understand whether this money could help undo some of the socioeconomic gaps induced by victimization. For example, could it help victims find better jobs? Could it improve their health? Could it increase the educational opportunities available to their children?

This picture was taken during one of the meetings held by UARIV (Unidad para la Atención y Reparación Integral a las Víctimas, or the Unit for the Attention and Comprehensive Reparation of Victims), the government entity responsible for dispensing reparations. The UARIV informs victims when they will receive a reparations check. Photo credit: Arlen Guarin.

This image shows one of the victim reparation meetings, in which victims are informed that they are going to receive reparation payments. How has the reparations process been run at the state level, and how are victims informed about when they will receive their payments?

The Victims’ Law created the Victims’ Unit, a government-run agency that has been in charge of the administration and delivery of reparations. Despite being logistically and operationally managed from Bogotá, the capital of Colombia, the Victims’ Unit has more than 30 regional centers and hundreds of contact centers around the country, where the final details of the delivery of the reparations are coordinated.

From the victims’ perspective, the process of receiving the reparation is as follows. First, they receive an unexpected phone call from the Victims’ Unit. The caller instructs them to attend an “important” meeting at a specified time and location but does not mention a reparation. At that time, some victims may suspect that they are going to be given the reparation since they may have learned from others’ experiences, but the timing of the call itself is unexpected.

A few days later, the victim arrives at said meeting, usually at one of the regional centers. During the meeting, the victim is informed that they will receive the reparation and is given a letter. The letter formally acknowledges that the victimizations never should have happened and describes when the reparation check can be collected from Banco Agrario, Colombia’s state bank, usually one to two weeks later.

This figure plots when reparations were paid to victims. The figure shows the series by victimization type: homicide or forced disappearance, forced displacement, and all other types of victimizations.

Because of the large scale of the program, reparations were not paid all at once. How were you and your coauthors, Juliana Londoño-Vélez and Christian Posso, able to use the timing of the reparations to understand the causal effects of these payments? 

We used microdata from the universe of registered victims, a unified and centralized registry covering the more than eight million individuals who reported being victimized during the Colombian internal conflict by August 2019. We linked the victims registry to eight other national administrative data sets containing information on formal employment, entrepreneurship, access and use of formal loans, land and homeownership, health care system utilization, postsecondary attendance, and high school performance for all members of the victims’ households. 

Our final dataset has information on millions of victims eligible to receive the reparation payment and a comprehensive list of outcomes observed before and after the arrival of the payment. These types of data, in which the outcomes for the same individual can be observed over time, are called panel data sets.

Importantly for us, due to government budget and operational constraints – you can imagine the constraints associated with compensating one in seven Colombians – the rollout of the reparations program was staggered over time. This feature, together with the fact that the arrival times of the payments were unanticipated, has allowed us to identify the causal effect of the reparations using an empirical econometric approach called an event study.

An event study is a methodology used in contexts where the program being evaluated is rolled out over time rather than all at once (staggered adoption), and where we can observe individuals and their characteristics at different points in time (panel data), as in our case. Intuitively, this methodology compares outcomes for victims who have received the reparation to those who have not yet received it. By comparing outcomes between these groups, we are able to isolate the causal impacts of the program on victims’ long-term outcomes.
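For readers who want the mechanics, a minimal sketch of a standard event-study specification (illustrative notation only, not necessarily the exact model estimated in the working paper) looks like this:

$$Y_{it} = \alpha_i + \lambda_t + \sum_{k \neq -1} \beta_k \, \mathbf{1}\{t - E_i = k\} + \varepsilon_{it}$$

Here, $Y_{it}$ is an outcome for victim $i$ in year $t$, $\alpha_i$ and $\lambda_t$ are individual and calendar-year fixed effects, $E_i$ is the year in which victim $i$ received the payment, and the coefficients $\beta_k$ trace out the average effect $k$ years before or after the payment, with the year just before payment ($k = -1$) as the reference period.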

This picture was taken at one of the victim reparation meetings between UARIV and a group of beneficiaries in Medellín, Colombia. After receiving reparations, victims could voluntarily participate in investment workshops, where they would receive training in budgeting and investing, including help obtaining small business or student loans and paying off old debts. (This program was known as “Programa de acompañamiento de inversión adecuada de los recursos.”)

This image shows an educational fair in which the victims are taught about how to use their reparations. How does the Colombian government view reparations as a tool for development? 

The government presented reparations to victims as seed money to transform their lives; specifically, it suggested that victims use the money to invest in productive activities, such as postsecondary education, business creation, or housing, which could improve their families’ long-term wellbeing. By presenting the reparations in this way, the government treated them like “labeled” cash transfers, since it suggested that victims invest the money in specific activities.

In line with this purpose, the government held fairs to connect victims with local public and private institutions providing investment opportunities in education, housing, land, and small businesses. Victims could also voluntarily participate in investment workshops, where they would receive training in budgeting and investing, including help obtaining small business or student loans and paying off old debts.

The government also used the reparations to recognize the harm suffered by victims. The letter received at the time of the reparation also includes a dignification message about what the reparation means that reads roughly as follows:

“As the Colombian State, we deeply regret that your rights have been violated by a conflict that never should have happened. We know that the war has differentially affected millions of people in the country, and we understand the serious consequences it has had — it is impossible to imagine how much pain this conflict has caused. However, from the Victims’ Unit, we have witnessed conflict survivors’ capacity for transformation over these years. We have witnessed their spirit to keep going, their strength to raise their voices against those who have wanted to silence them, their ability to rebuild their lives… For this reason, with your help, we are working so that you can live in a peaceful Colombia since it is the victims who actively contribute to the development of a new society and a better future.”

As you mention in the previous answer, the Colombian government treats these reparations not only as a recognition for harms suffered, but as a means to raise the standard of living. How do the insights from this case help us understand poverty reduction and basic income programs, and how does this differ from previous research on the topic? 

The literature on reparations has largely consisted of qualitative work by political scientists, lawyers, sociologists, and other experts on transitional justice. We differ from prior approaches by offering one of the first known quantitative studies of a large-scale reparation program, exploiting rich administrative data on millions of victims of the conflict in Colombia to provide evidence on the causal effects of the reparations.

We also contribute to the literature on the effectiveness of cash transfers for poverty alleviation. Despite sharing similar features, Colombia’s reparations differ from the traditional version of those programs in two ways. First, the average reparation is over three years’ worth of household income and, therefore, substantially larger than most unconditional cash transfers. Second, reparations target victims of human rights violations, a uniquely vulnerable population. Adverse shocks in conflict settings, like forced displacement, can have lifelong detrimental effects and trap victims in poverty. We show that by providing households with a large, lump-sum grant, reparations can serve as a “big push” policy that helps victims transform their lives and escape poverty traps.

This figure summarizes the relative effects of reparations on adult victims and their children, using data collected three or four years after reparations were paid. Each row reports the change in the corresponding outcome, with a 95 percent confidence interval. “ED” stands for emergency department visit.

In this chart, we see how the reparations changed the lives of conflict victims. What were the effects of these reparations, both economic and non-economic, and what does this mean for thinking about reparations and universal basic income programs more broadly? 

We divide our results into three sections: the impacts on work and living standards, health, and human capital accumulation. 

For impacts on work, we find that reparations have an economically small but positive effect, with the money allowing victims to improve their working conditions, earn more, and create more businesses. We also find that reparations increase victims’ consumption and wealth, for example by allowing them to buy a home or more land.

We also find that reparations cause an economically meaningful decrease in health care utilization. Victims are less likely to visit the emergency department, less likely to be hospitalized, and have fewer medical procedures after receiving the reparation. These findings are consistent with improved health due to better working and living conditions stemming from the reparation, findings that are novel in light of the inconclusive evidence for the impacts of money on the use of health services and physical health outcomes. 

Finally, we find that reparations close most of the intergenerational educational gap caused by the victimization. Victims frequently use reparations to enroll in and attend college for the first time. Reparations also improve the high school test scores of younger household members, an effect that is not explained by changes in the high schools they attend. In the study, we conduct a back-of-the-envelope cost-benefit analysis showing that the gains from reparations outweigh the monetary costs, making them both a progressive and an efficient policy tool to promote recovery and development.

Overall, our findings suggest that reparations programs improve long-term wellbeing along many dimensions. My hope is that this research can inform governments that are considering ways to heal the wounds induced by human rights violations. 

 

Podcast

A Changing Landscape for Farmers in India: An Interview with Aarti Sethi and Tanya Matthan

Aarti Sethi and Tanya Matthan

In countries around the world, the “Green Revolution” has changed the scale and economy of growing crops, as pesticides, fertilizers, and new kinds of hybrid seeds have transformed the production process. In this episode of the Matrix Podcast, Julia Sizek spoke with two UC Berkeley scholars who study agrarian life in India, where farmers have been forced to adapt to changes in technology.

Aarti Sethi is Assistant Professor in the Department of Anthropology at UC Berkeley. She is a socio-cultural anthropologist with primary interests in agrarian anthropology, political economy, and the study of South Asia. Her book manuscript, Cotton Fever in Central India, examines cash-crop economies to understand how monetary debt undertaken for transgenic cotton cultivation transforms intimate, social, and productive relations in rural society.

Tanya Matthan is an S.V. Ciriacy-Wantrup Postdoctoral Fellow in UC Berkeley’s Department of Geography. An economic anthropologist and political ecologist, she finished her PhD in Anthropology at UCLA in 2021. Her current book project, tentatively titled The Monsoon and the Market: Economies of Risk in Rural India, examines experiences of and responses to agrarian uncertainty among farmers in central India.

Listen to the full podcast below or on Google Podcasts or Apple Podcasts.  Visit the Matrix Podcast page for more episodes.


Excerpts from the interview are included below, edited for length and clarity.


Q: You both study agriculture in India, but India has many different agricultural and ecological zones. Can you help us understand your research sites and how they fit into agricultural production in India more broadly?

Tanya Matthan: The region in which I work is called Malwa, which is located in central India, in the state of Madhya Pradesh. The history of Malwa is interesting, because prior to Indian independence, it was ruled by a number of princely states. Ecologically, it’s a semi-arid region, and it’s known for its very fertile black soil. And it is also a region that has always been tied to global networks of trade and markets, through the cultivation of crops such as cotton and opium  in the past, and now soybean and wheat, which are grown for national and global markets. Ecologically, it’s a very interesting region, and both different and similar to other parts of agrarian India.

Aarti Sethi: We work in regions that are both close by and also very far away. Subcontinental India is agriculturally very diverse and also very vast. I work in a region in east central India called Vidarbha. It’s about 500 kilometers inland from Bombay, in the state of Maharashtra. Vidarbha is part of the central Deccan Plateau, and it has black soils. Cotton is a very, very old crop in Vidarbha.

The reason I find Vidarbha to be a very interesting region to understand the long history of agrarian capitalism in India is because, in Vidarbha, local cotton production has been entangled with a global capitalist market — we could say a colonial capitalist market — for a very long time. We have evidence for cotton cultivation in this region for three millennia. But to take a more recent history, this is a region that became settled to the intensive cash cropping of cotton after it was taken over by the British colonial state in the mid-19th century. This happened in the wake of the fall in global cotton production and supply during the American Civil War. So there’s actually a very interesting historical relationship between Vidarbha and the American South.

This is the period when the British colonial state expanded what were called “settlement operations” and created new villages. A new peasantry came into being in what used to be an agro-pastoral region, cropping cotton specifically for a colonial market. And so you can see in Vidarbha a peasantry that is entangled with international commodity markets in a very specific way. You can see this in the forms of land tenure that came into place at this time, for instance. It’s an early form and moment of agrarian capitalism, and these processes that we see beginning in the late 19th century have a bearing on the cotton crisis in Vidarbha today. It is also an arid agro-ecological region that is very prone to droughts. These are the kinds of agricultural and ecological constraints within which agriculture in Vidarbha happens.

Q: You alluded to the fact that agriculture is changing in India and that farmers are facing new challenges, which both of you study in different ways. Can you tell us more about what those challenges are today?

Sethi: The specific challenges that we see vis-à-vis cotton production in Vidarbha today have to do with the emergence of a sharp economy of indebtedness, which begins from the mid-1990s. Over the next two decades, this becomes a very widespread mode of agriculture in Vidarbha. And this expansion of monetary debt as a critical component in the agricultural process in Vidarbha has had several economic and social consequences. One of the most tragic of them has been that Vidarbha is at the center (and has been for the last two decades) of a suicide epidemic where over a quarter of a million farmers have taken their lives across India. This is not a crisis only focused on Vidarbha, but Vidarbha is one of the earliest regions where the suicide epidemic began, so Vidarbha has become emblematic of a broader crisis in agriculture. The introduction of a new transgenic crop, Bt cotton, has sharply exacerbated the general prolonged agrarian crisis in which India finds itself.

Matthan: A place like Malwa also exhibits a lot of these same dimensions of this agrarian crisis. So you have, for instance, high levels of indebtedness, rising costs of production, extremely volatile prices of commodities. And ecologically we can see in Malwa the falling water tables. So many aspects of this crisis are evident in a place like Malwa.

One of the reasons I was interested in studying a region like Malwa, which is quite under-studied in Indian agrarian history, is because this region has been hailed as a sort of recent agricultural growth story. It’s emerging as a horticultural hub for the production of high-value vegetables. But it’s also very recently been a site of protest. For instance, in 2017, six farmers were killed by the police as they were protesting crushingly low prices for their commodities.

One of the reasons why Malwa was interesting is because the state government has been at the forefront of implementing and promoting a lot of risk management policies, trying to address some of these challenges through things like crop insurance, price support schemes, and so on. I was interested in how the Indian state is responding to these agrarian challenges and with what social and ecological effects. So, I’m looking at the crisis and some responses to it, and the implications of that.

Q: This seems like a complicated story. On the one hand, farmers’ debts are accruing, but there are also emerging forms of crop insurance that are presumably replacing other forms of government support that existed previously for farmers. From the Green Revolution to today, how have the forms of support for farmers changed? And what are the reasons why farming has become so much more expensive to do?

Sethi: If you look at cotton production over a recent historical durée — say, from the mid-19th century onwards — then we can think of three phases of cotton production: a precolonial economy of cotton, a postcolonial economy of cotton, and then a recent neoliberal economy of cotton.

The Green Revolution is very central now in the imaginations of the postcolonial economy, but the Green Revolution had a variegated uptake across the country. It was first introduced in the northern states of Punjab and Haryana, with wheat and rice as the primary Green Revolution crops. This turn to science and technology then had ancillary effects across the agrarian landscape.

The improvement of cotton has a very long history in India, beginning from the cotton improvement projects started by the colonial state. This is because cotton is such an important fiber crop in India. One thing to remember is that the Green Revolution produces a kind of economy of agricultural production that is entirely reliant on state support. Through the Green Revolution, the state undertakes different sorts of functions towards agriculture, such as introducing minimum price support for farmers, encouraging the use of chemicals and pesticides, creating pesticide, fertilizer, and electricity subsidies, and, very importantly, building a state scientific establishment that is heavily involved in the development of new hybrid cotton varieties. It is a public commitment that the postcolonial state undertakes towards agriculture in India. This included the All India Coordinated Research Project on Cotton, the establishment of 21 agricultural research universities, and the Central Institute for Cotton Research.

What the state does, and what scientists working in the public scientific apparatus do at this time, is take a very central role in developing new forms of seeds and, through state extension mechanisms, getting those seeds to cultivators. This is very important to the Bt cotton story, as it is through this moment of what we could call the Green Revolution that hybrid cotton seed is created for the first time in India. And these hybrid seeds have far greater yields than conventional cotton varieties. This is the moment at which farmers who have access to large land holdings begin to adopt these new technologies and increase cotton yields and cotton production.

Now, this also comes with its problems. But the point I want to make is that the Green Revolution has a complex history in India. On the one hand, it introduces a non-capitalized, but intensified form of agricultural production, which increases yields. On the other hand, it also produces an ecologically vulnerable form of production that is dependent on high outlays. And this sets the stage for what comes later.

Matthan: Much of that story is a story of Malwa, but Malwa wasn’t initially a Green Revolution state. This was very geographically variegated, and Malwa was not a region that was considered for the introduction of these technologies. So it has a different history, but with many similar effects over the last sort of five decades or so.

What Malwa did see, which is analogous to and parallel with the Green Revolution, was what is called the Yellow Revolution in the 1970s, with “yellow” referring to the color of soybeans. As soybean cultivation was introduced and expanded, you see a huge number of transformations in agricultural production: the displacement of crops such as cotton, sugarcane, and sorghum that were grown in this region, and a shift to an industrialized model of agricultural production built on monocropping, a hugely capital-intensive form of cultivation. So even though Malwa wasn’t directly impacted by the initial Green Revolution years, you see many of the same technologies and logics at work.

Q: The Green Revolution helps to lay out how the government became intimately involved in the production of these crops. But today, a lot of farmers are protesting against the government. How have the conditions changed?

Sethi: What changed was the 1991 liberalization of the Indian economy and the reforms that came with it. Agriculture all over the country was impacted after the reforms phase. Many, many things change. One of the things that changes is that, prior to 1991, domestic agricultural markets are protected from market volatility. So, if you look at cotton for instance, in Maharashtra, there was something called the Monopoly Procurement Scheme for Cotton, which was meant to support cultivators and increase the cultivation of cotton from the 1970s onwards, all the way till 2002. During this period, the state was a monopoly procurer of cotton: cultivators could sell their cotton only to the state, and the state bought all the cotton they produced. And import duties on fiber imports from other countries were very high.

All of this changes in the post-reforms period. Agricultural products are brought under the General Agreement on Tariffs and Trade (GATT), and import duties on agriculture that used to be up to 100% for certain crops fall to 30% in the space of two or three years. The state raises rates on agricultural loans, and it withdraws from providing input support and infrastructure investment in irrigation and scientific research. There are upward revisions of the prices of diesel, of electricity, and of petrol. And all of this precipitously raises the cost of cultivation for farmers, without any change in the actual nature of production. There is no increase in irrigation. There’s no consolidation of land holdings. What you have is widespread adoption of hybrid seeds, which on the one hand, provide much more yield, but they’re also very vulnerable to pest depredation. So from the 1990s onwards, agriculture all over the country enters a huge crisis, and specifically cotton cultivation in Vidarbha.

Matthan: The Green Revolution was only a success, if it can be called a success at all, because of the state supports. So what happens when the state supports are withdrawn? You can see that in a range of arenas of agricultural production, whether it’s subsidies, agricultural extension service — so even the circuits of knowledge on which farmers depended now are increasingly privatized — and there’s less investment in agricultural infrastructures, whether that’s storage infrastructures, or irrigation, and so on. So since the 1990s, a lot of the state support for agriculture on which this model depended is taken away. And alongside that, not only is the cost of production increasing alongside the removal of these subsidies and support, but more broadly, the privatization of education, of health, and so on are also increasing the cost of social reproduction for agricultural households — where they send their children to school, what kinds of health services they access, and so on. So you have a situation in which costs of production are rising while state support and investment are declining.

Q: This obviously has tangible effects for the people who are trying to continue to farm. Both of you actually did research with individual farmers involved, sometimes being out there doing agricultural labor alongside them. Can you just give us an idea of what that looks like, especially since these aren’t big industrial farms that we might imagine here in the American Midwest?

Sethi: Let me answer that question in two parts. One is to actually address what Bt cotton is. I think that’s important because of the extraordinary change that that seed has produced economically, socially, and in terms of the labor regimes on the farm. Bt cotton is a seed that has been genetically modified to resist predation from a certain class of pests: lepidopteran pests, in this case the larva of a gray moth, called the pink bollworm. Bt cotton carries a transgene inserted into the plant, which makes the plant toxic to this larva. When the larva eats GM cotton, it dies. The justification for Bt cotton was that it offered a non-chemical alternative to pesticides. And the reason that was important was, as I said, because of the introduction of these hybrid seeds, which are highly vulnerable to pest attacks.

Bt cotton as a technology has a very interesting relationship to the legal regime, which is that what Monsanto did was nest this technology into a hybrid seed, which cannot be resown. All cotton grown everywhere in the world comes in two forms: something called hybrid cotton, and something called straight-line cotton. With straight-line cotton, you save your seed this year, you preserve it, and you resow it the next year, and you plant it in density across a field. So this is what I mean by a laboring regime: a farmer will plow that field and then dribble seed into furrows in the field, with lots and lots of smaller plants produced in a field. What hybrid seeds do is, you can’t resow them the next year. And so you are forced to buy that seed from the market. And the reason Monsanto did this was to protect its patent.

Hybrid seeds transform labor in a very big way. Fewer hybrid seeds are planted in a field, as they need space to branch and boll. Secondly, they have to be fed large amounts of fertilizer and pesticide. This increases costs, and the large amounts of fertilizer and pesticide actually produce huge amounts of weeds. And so weeding, which would be done a few times a season, is now done continuously through a season. Weeding is an activity primarily conducted by women, so it has increased the labor days that women spend on a field. Pesticide has to be sprayed very, very often because hybrid seeds produce a lot of foliage, so all kinds of other pests get attracted, which means that men are also now involved in field labor in a different way. It means that women earn more income in their hands than they did earlier, because they have access to this kind of continuous wage labor. But it also means that their forms of domestic labor have vastly increased. So these are all the ways in which these new hybrid seeds and Bt cotton — besides the other social and economic costs — also transform laboring relations between farmers and their fields.

Matthan: I didn’t focus on one crop in the way that Aarti does with cotton; I found a slew of crops growing across the agricultural year: soybean, wheat, a range of vegetables. And the rhythms of agricultural production change according to the crop and according to the season.

But in the day-to-day, these are very small farms. The average landholding in a place like India is about one hectare, which is about two and a half acres. These are extremely small farms, and a lot of the labor is done by people in the household alongside agricultural wage labor. It changes based on the crop and based on the season. Across the agricultural year, you have various kinds of activities going on in the field, from weeding, which happens a lot more in the wake of these new seeds and crops, to transplanting seedlings in the case of onions, to long days of difficult harvesting in the case of wheat. So you have very different kinds of work being done in the field, depending on the crop and the season. And even though a lot of my work involves going to fields and farms and walking and talking to people in these spaces, the nature of farming is such that it also entails a lot of work in the home, for example. There are women who are cleaning seed in the home, or sorting produce in the home. There’s a lot of work that happens in the home, in the market, and so on.

Q: You mentioned that so many of these different crops that people are growing, they’re being grown throughout the year, it’s not just one period of time, and they’re also highly dependent on rainfall, and on different climatic conditions. Can you tell us a little bit about how this has changed and how it relates to the risk that farmers are taking when they’re participating in this market?

Matthan: As I mentioned, there’s a range of crops that are central to agrarian life in a place like Malwa. There’s soybean, which is the primary crop in the monsoon season, roughly between June and October. And then farmers move to growing a range of other crops, most predominantly wheat and gram (chickpea), but things like onions, potatoes, and garlic have also become increasingly important crops in this region. Each of these crops has a range of different qualities, ecologically, politically, economically, and so on.

Farmers are making a range of choices and decisions in deciding what to plant, how much to plant, and so on. For instance, things like, how long does this crop take to harvest? So one reason soybean is still popular is because it’s a short duration crop, and certain varieties of seeds have been introduced in Malwa that are extremely short duration. So within 80 days, you can harvest soybean, which allows you to then plant two or three more crop cycles on the same plot of land, which is really important to farmers who don’t have huge land parcels. They can get more and more out of the same plot. 

To go back to the question of how risk plays into this, farmers are making calculations based on engagements with risk and uncertainty. Wheat, for instance, is an extremely water-intensive crop. It requires irrigation, so you have to invest in irrigation. But it’s also considered a safe crop because it can be sold at government procurement centers for a fixed price. So you don’t have to deal with the volatility of the market, you can just take your wheat at the end of the season, and you can be assured of a price. So it’s considered less risky. 

Onions, for example, which are increasingly grown by farmers across class and caste in Malwa, are seen as a risky crop. They require a great deal of investment in inputs and in labor costs. But they are also seen as very high-yielding. And they’re risky, because onions are incredibly price-volatile. In India, there are huge price risks associated with growing onions. Onion prices can shift dramatically within the span of days, and you could potentially garner huge profits, but also face crushing losses if prices crash. There’s a range of risks and opportunities associated with different crops, and farmers are actually making a lot of careful calculations in deciding what to grow and how much to grow and when.

Sethi: One of the peculiar things about the way in which risk is absorbed into an agricultural milieu — and I see this with hybrid GM cotton in a very intense form — is that risk has acquired a new valence in the agricultural milieu where on the one hand, cotton yields have vastly expanded. The potential of what you can reap from cotton has vastly expanded from the pre-hybrid economy of cotton, but so have the risks associated with cotton cultivation.

So the calculations farmers make are ones in which they both engage in this form of production and live with a sense of everyday, wearing stress. The English word “tension” has now become vernacularized into village speech. Beyond the economic risks, which are manifold and which a lot of scholars and the press have written about, cotton cultivation is economically intense. It now costs 25,000 rupees, and the return on investment is very small, about three to five percent. 98% of farming is unirrigated, the monsoons are completely erratic, and every farmer has to make a calculation depending on how much debt you have, how long you can hold on to cotton, how you can play the market. If you can store your cotton, you will get a higher price later in the buying season. But if you are carrying a lot of debt for your seed costs, your fertilizer costs, your pesticide costs, you have to pay back that debt, and so a lot of small farmers will offload their cotton as soon as the sowing season ends and the cotton procurement season begins.

Risk is both an operative emotion for farmers, because we are talking about a personal relationship to this no-longer-new economy of cotton, and also an economic fact of current agricultural production, which operates at every level of the socioeconomic agricultural order. It is operating at the level of financial risk. It is operating at the level of climatic risk. It is operating at the level of crop failure. It is operating also through family relationships in a really intense way, because everybody requires money to cultivate, and everyone is taking debt from everyone else. So people undertake debt within kinship networks. Which means there is a social and familial risk in which social relations are also placed at risk of fraying. Supposing you take a loan from your maternal uncle, and you can’t pay back that loan in time, then that’s a family relation that has been placed at great risk. So one way to think of risk is to look at it in this expanded sense.

Matthan: You put it beautifully, about how risk sort of pervades, and elsewhere you’ve said that risk is the structuring condition of agrarian life. It permeates the economy, but also intimate relations within the family. And so while I was interested in using risk as an analytical lens into agrarian change, what I found was that, as with the use of the term “tension,” the term “risk” was used all the time in rural India.

So everything was understood in terms of, what is the risk of this? People were using this term all the time to describe a range of activities and practices, not just in relation to farming, but also beyond. There are highly differentiated engagements with risk, based on caste, class, and gender. Many other kinds of calculations go into how people are dealing with it.

 

 

Article

How Climate Change Became a Security Emergency: An Interview with Brittany Meché

Brittany Meche

How has climate change become a security issue? Geographer Brittany Meché argues that contemporary anxieties about climate change refugees rearticulate colonial power through international security. Through interviews with security and development experts, her research reveals how so-called “pragmatic solutions” to climate change migration exacerbate climate injustice.

For this interview, Julia Sizek, Matrix’s Content Curator, asked Meché about her forthcoming article in New Geographies from the Harvard Graduate School of Design, which considers how expert explanations of climate migration rework the afterlives of empire in the West African Sahel, an area bordering the southern edge of the Sahara, stretching from Senegal and Mauritania in the West to Chad in the East.

Meché is an Assistant Professor of Environmental Studies and Affiliated Faculty in Science and Technology Studies at Williams College. She earned her PhD in Geography from UC Berkeley. Her work has appeared in Antipode, Acme, Society and Space, and in the edited volume A Research Agenda for Military Geographies. Meché is currently completing a book manuscript, Sustainable Empire, about transnational security regimes, environmental knowledge, and the afterlives of empire in the West African Sahel.

Q: Climate change is happening everywhere, but the effects of climate change are highly variable. Your research examines how climate change has come to be seen as a security issue for organizations like the UN and governments like the EU and US. How do they understand the problem of climate change in the West African Sahel?

One of the things that I examine in my research is the interrelation between environmental knowledge and security regimes more broadly. In so many ways, environmental knowledge can’t be divorced from militarism, empire, and other forms of institutionalized power. One of the things that often surprises my students is learning that one of the reasons we even know climate change is happening is because the US military poured billions of dollars into environmental science after World War II. That historical context is important, but in the contemporary moment, the consequences of what some scholars have described as “everywhere war” mean that so many aspects of social, political, and economic life become infused with and tied to the logics and infrastructures of security.

The West African Sahel, where I conduct my research, is a region that is already experiencing the impacts of climate change, from rising temperatures to erratic rainfall patterns. At the same time there have been increasing rates of different forms of armed revolt, which get lumped together as Islamic terrorism. It becomes easy for foreign militaries to say that worsening environmental strain is linked to social and political collapse. In response, foreign militaries propose fortifying the local security sector through security cooperation agreements, military training, and investments in border security. In that way, security solutions replace any careful consideration of the structural inequities of climate change. My research seeks to challenge these approaches through a detailed accounting of how these kinds of security imperatives further imperil already vulnerable communities. 

A map illustrating the Sahel region of Africa.

Q: In your forthcoming article on border security and climate change in the West African Sahel, you address how security actors like the UN respond to what they see as the threat of climate refugees. How are climate refugees understood as a security problem?   

The issue of climate refugees was one of the most vexing issues I encountered during my research. There are no formalized legal conventions about what constitutes a climate refugee or climate migrant, so the terms themselves are capacious and vague in ways that make it difficult to know what they actually describe. Is a climate migrant someone who is displaced during an acute event like a hurricane or earthquake? Someone who is no longer able to grow crops and chooses to relocate elsewhere? Someone who lives in a coastal area or on an island where sea level rise makes reliable habitation less feasible? Or all of the above? And, if so, how can we alter a global refugee system that many scholars — like Harsha Walia, Leslie Gross-Wyrtzen, and Gregory White — have noted is already at times violent, strained, and ineffectual, to accommodate these different categories? 

But more vexing than these conceptual and legal indeterminacies are the ways that present investments in border security and fortification make use of the figure of the climate refugee to whip up xenophobic fears. In my article, I note the ways that climate refugees, almost always depicted as people of color, become ways of making climate change knowable and actionable. Climate change becomes located on the body of migrants of color amid claims that “hordes” of climate migrants from the Global South will inundate the Global North. The embodiment aspect is key, as the literal bodies of these migrants come to signal and stand in for climate change as a security problem. This often leads to calls for “pragmatic solutions” like more border security and more heavily regulated immigration systems. 

Q: How do ideas about migration align (or not) with how migration actually works?

I think popular framings of migration in and from the Sahel miss the ways that circular migratory patterns have been a staple of life for centuries. The Sahel has a number of pastoral communities that migrate with their herds. There are also cycles of migration between rural and urban areas, and education and religious pilgrimages that take place. This is not to say there have not been people forced to move because of violence, or because of economic or environmental stress. But many aspects of how and why migration happens get lost when migration is simply offered up as a problem to be solved. 

One central aspect of this issue is how my informants framed climate change migration as a South-North issue: that is, people from the Global South going to the Global North. In reality, most migration is South-South. Most of the migration happening in the West African Sahel and across the African continent more broadly is intra-regional migration. But this fact does not receive the same level of attention. I had informants at the International Organization for Migration admit that, while their figures show the predominance of intra-regional migration, for funding purposes, they had to frame their work as speaking to the “migration crisis” in Europe. The fear of Africans inundating Europe obscures the realities of this South-South migration.

Q: Your research also considers the longer history of anxieties about migrants by showing how contemporary takes on climate change migration have reworked and reinforced colonial anxieties about overpopulation, reviving Malthusian ideas about scarcity. How do these anxieties appear in security policy, and how do security experts think about these colonial legacies in their work?

In many aspects of my research, it seems that Malthus never really left. The West African Sahel has some of the highest birth rates in the world, and that fact lends itself to easy, though ultimately false, claims that overpopulation is at the root of the region’s problems. Still, for me, it was important to trace the ways that the different institutions I study absorb criticisms and attempt to re-orient their work. For instance, when informants at different UN agencies, such as the UN Development Program (UNDP), UN Office on Drugs and Crime (UNODC), and International Organization for Migration (IOM), would mention population in the Sahel, they would do so with the acknowledgment that it was a “third rail” issue. So even as the ghost of Malthus lives on, I think it’s important to account for different mutations.

Similarly, when interviewing US military officials working for US Africa Command (the Department of Defense’s command dedicated to African affairs), they were very mindful of accusations of colonialism and empire, and attempted to cultivate what I call in my work a “non-imperial” vision of US empire. That is to say, their disavowals, far from being just a PR move, were being used to strategize new kinds of circumscribed actions that would allow for a US presence in the region without inviting anti-imperial protest. 

Q: You’ve mentioned that many of your interlocutors are experts in the international development and security fields. How did you conduct your research on such a transnational project, and how did you get access to the experts you interviewed? 

I knew at the outset that this project had a number of different threads, including multiple actors, and therefore demanded a multi-sited approach. I started in Washington, DC, where I interviewed US government officials who put me in touch with informants in Stuttgart, Germany, the headquarters of US Africa Command. In turn, these informants put me in touch with other military, diplomatic, development, and humanitarian workers in Senegal, Burkina Faso, and Niger. I also previously worked at the US State Department and have family ties to the US military, which facilitated access. 

But still, you should never underestimate the usefulness of showing up. Many of my most memorable interviews and points of contact were serendipitous. As with most fieldwork, it’s all about cultivating relationships. I primarily used snowball interviewing, which involved seeking additional recommendations from existing contacts and using those suggestions to map out a network of informants. Given their positions in “elite” institutions, many of my informants were very much interested in preserving their anonymity, especially when they offered criticism of work they were doing within those institutions.

Q: While this new article focuses on migration, your broader book project focuses more on the role that a network of experts plays in constructing a past and predicting a specter of future catastrophe in the Sahel. In addition to climate migrants, what other climate issues appear in your book?

The broader book project, currently titled Sustainable Empire: Nature, Knowledge, and Insecurity in the Sahel, makes the central claim that attending to what has happened historically — and what continues to happen in the West African Sahel — is crucial for understanding the possibilities of just global environmental futures. It supports this claim in a number of ways. First, it explores how environmental knowledge in and from the Sahel helped assemble a conceptual and institutional bedrock for global climate change knowledge. I do this through a critical genealogy of desertification, considered the first global climate change issue in the mid-20th century. I then trace the place of West Africa in predictions about the “coming climate change wars,” reflecting on how racial and gendered fears helped set the stage for what became the global war on terror. The book then concludes with a consideration of the kinds of climate solutions being workshopped in the region, ranging from ongoing security projects to large-scale green-tech projects.

Podcast

Institutionalizing Child Welfare: An Interview with Matty Lichtenstein

Matty Lichtenstein

How do American child welfare and obstetric healthcare converge? Matty Lichtenstein, a recent PhD from UC Berkeley’s Department of Sociology, studies how state and professional organizations shape social and health inequalities in maternal and child welfare. Her current book project focuses on evolving conceptions of risk in social work and medicine, illustrated by a study of the intertwined development of American child and perinatal protective policies. She is working on several collaborations related to this theme, including studies of maltreatment-related fatality rates, the racialization of medical reporting of substance-exposed infants, and risk assessment in child welfare.

In another stream of research, she has written on social policy change, with a focus on educational regulation and political advocacy, and she has conducted research on culture, religion, and politics. Dr. Lichtenstein’s work has been published in American Journal of Sociology, Qualitative Methods, and Sociological Methods and Research. She is currently a postdoctoral research associate at the Watson Institute for International and Public Affairs at Brown University.

In this podcast episode, Matrix content curator Julia Sizek speaks with Lichtenstein about her research on the transformation of American child welfare — and the impact of that transformation on contemporary maternal and infant health practices.

Excerpts from the interview are included below (edited for length and clarity).

How has the child welfare system changed over the span of time that you study?

I focused my research starting after the passage of the Social Security Act, because that is the major dividing line for American child welfare. Prior to 1935, when the Social Security Act was passed, we had a fragmented patchwork of mostly private child welfare agencies throughout the United States. The passage of the Social Security Act enabled an expansion of funding for state and local public child welfare. The main shift had to do with thinking about what welfare meant, and what it still means today.

In general, when we think about welfare, we are referring to government support for individuals or groups. The main distinction, especially in the 1930s, was between financial support — giving people money when they needed it and couldn’t get it any other way — and providing services, such as funded medical services, educational services, or psychological counseling. Across social work, which was in a way the parent discipline of child welfare, there was a tension there: do we help people by giving them financial aid, or do we help them through social services?

The Social Security Act made that distinction quite clear for child welfare services, because the section that focused on child welfare services emphasized that this was about services in general, and financial aid was a separate part of the Social Security Act for families. One of the things that needed to be figured out was, what is child welfare, and how do you best serve children?

I’ve found in my research that there was an increased emphasis in the 1930s and 40s on the argument that child welfare should serve all the various needs children have. It was not just poverty-related needs. In fact, they veered away from poverty-related needs toward psychological needs, medical needs, health needs, etc. Child welfare advocates pushed for more funding and more resources for child welfare. What happened is that public child welfare grew exponentially in the 1950s and 1960s. The number of child welfare workers started rising dramatically. This led to a larger shift in child welfare and thinking about what child welfare meant in the 60s and 70s.

What was the focus of the child welfare system in the 1960s and 70s?

One of the major findings of my dissertation conflicts with the conventional narrative of child welfare history. The classic narrative is that the late 50s and 60s saw the discovery of child abuse as a social problem. Before then, scholars argue, nobody was talking about child abuse and neglect, and social workers and the public did not see it as a problem. And then by the 60s, it became a public and political issue, and you saw a number of laws being passed to mandate reporting of child abuse. This led to the creation of child welfare as we know it today, which is heavily focused on child abuse prevention and response.

The problem was that, as I dug through more archival resources, I found that that just wasn’t the case. The most damning piece of evidence I found was a publicly available report put out by the Children’s Bureau in 1959, which stated that 49% of public child welfare in-home services related to abuse and neglect. This was in 1959, when current scholars were saying nobody talked about abuse and neglect.

I spent a few months in a sort of existential crisis: what is the meaning of my dissertation if everything is wrong? Eventually, I figured out that not everything is wrong, and that a lot of what was written about the history of child welfare was correct. There was much more of an emphasis on child abuse. But what it missed was this larger moment of transformation in child welfare.

What I show is that it’s not so much that child welfare agencies rediscovered child abuse, as much as they relinquished (sometimes willingly and sometimes unwillingly) jurisdiction over most other child welfare issues, including poverty, health issues, and education, and they retained jurisdiction only over child abuse and child neglect. I show that this happened largely due to larger trends in the American welfare state, specifically welfare state retraction and an increasing focus on efficiency and welfare governance in the late 60s and 1970s, which demanded that child welfare focus on issues that could be easily defined and services that you could put a price on.

The Children’s Bureau could no longer say they serve all of the needs of the population of children. Instead, there was an increasing shift toward, what is the problem you’re here to resolve? There were advocates that pushed for more focus, but it was all part of this larger shift in the American welfare state.

I also emphasize that the massive expansion of child welfare — that growth of staffing and funding — was also made possible by laws saying, you need to report child abuse. Where do you report it? To a child welfare agency. So now there were thousands of child welfare workers. It had unintended consequences. All the child welfare workers who were supposed to solve all of children’s problems were now there to solve one problem, which was responding to the increasing number of reports of child abuse and neglect.

How was the category of child abuse and neglect defined, and how did it transform over time?

Early research that tried to define what it meant to have abusive parents was primarily in medical journals. That was usually based on things like X-rays of children with broken bones and trying to figure out, was this an accident, or who caused this? There were also psychiatric evaluations of parents saying, what is wrong with parents who do this? It was a diagnostic model of approaching child abuse and neglect. The cases they were referring to were usually fairly severe cases of child abuse and neglect.

Originally, a lot of the laws addressed medical professionals, but they quickly expanded, in part because medical professionals pushed back and said, we can’t be the only ones mandated to report this. And so it quickly started to expand throughout the 1960s and 1970s to include professionals across the board who have any sort of interaction with children, including anyone in an educational setting, anyone in a medical setting, or people who work in funeral homes, for example. They became mandated reporters, which means they were supposed to be penalized if they did not report what were often very vaguely defined forms of abuse and neglect.

This varied greatly across states. Every state had different laws and different sets of mandated reporters, but child welfare agencies across the country started to receive a skyrocketing number of reports. This does not mean that everyone was reporting every suspicion, but there were enough reports pouring into child welfare that they had to figure out what to do with all these reports. In the 1970s, and increasingly in the 1980s, that forced a reckoning of the question of how to define child abuse — and how to figure out if what’s happening is child abuse and neglect.

Out of these millions of reports that started pouring in during this era, the majority were unsubstantiated. In the mid-1970s, usually around 60% of reports were unsubstantiated. The majority of reports that were substantiated were neglect reports, which were highly correlated with poverty. The rate of substantiated reports of physical neglect was eight times higher among low socioeconomic-status children than among children of higher socioeconomic status. So you had a broad category of neglect, which could include everything from passively allowing your child to starve to leaving your child home alone for a few hours when you go out to work. There was a huge range that varied by county and state.

The question then became, if you have this huge number of reports coming in, and the majority of them are not even abuse and neglect, or it’s not clear if it’s neglect or poverty, how do you create a system to prevent and treat a problem that we’re not even sure exists? And that’s really where you started to see this focus on risk. Child welfare and medical professionals affiliated with child welfare began to develop practical risk assessment tools to determine the risk that there’s an actual case of child abuse happening, or that it might happen in the future. These tools had all sorts of problems built into them.

What was wrong about the risk assessment tools that professionals were using?

In the 70s and 80s, the tools were often built on what was called a consensus approach to risk assessment. That was based on what social workers considered risk variables. The approach was deemed very problematic by the 1990s, but these tools were still widely used for the first 20 or so years. They tended to incorporate all kinds of variables having to do with the environment of the child. There may not have been any sign that the child was harmed directly, but you look at the environment and try to assess if there are risk variables there. That had to do with everything from the income status of the family to health issues of the parents to the marital status of the mother.

Childcare access could be a risk factor, as well as issues like the stability of the home. In the 1970s, there were risk assessment tools that had factors like, do the parents take this child to movies? Do they have a camera? Do they take the child fishing? Does the child have a mattress? You can see that it’s really hard to disentangle poverty from this.

There were also sometimes cultural factors. There was an early tool that was approved by the predecessor to the Department of Health and Human Services that asked whether the parent had wider family support in child care, and whether they were overly dependent on their family. That gets at something that is cultural, not just economic: studies have found that in families of color, there’s more interdependence and less of an emphasis on nuclear family units, so this could be problematic.

Drug or alcohol use was assessed as a risk factor. When you look at earlier surveys about child welfare services before this transformation toward a focus on child abuse, they would talk about health and family issues as issues of child welfare, but they weren’t risk factors for abuse. Child welfare might intervene if there was some sort of health issue with a parent, but that was seen as distinct, whereas when you look at the studies in the 1970s and 1980s, those same factors were not just a health issue, but a risk factor for abuse or neglect. So you saw a trend of structural inequalities and health issues turning into risk factors.

So instead of trying to say, how do we help this family as a whole, it became, how do we assess the assumption that the parent is harming the child? It’s an approach in which parent and child are seen as distinct units, and the question is, are they in some sort of conflict? What’s interesting is that this is a relatively rare problem, in which there’s an intentional effort by the parents to harm the child. It certainly happens, but it’s relatively rare.

How does what you’ve learned matter for people thinking about child welfare policy today?

First, child welfare is under-equipped for multi-dimensional problems. In some states, they might have access to more resources, and in other states, the only thing they can really do is child removal or interventions that are often quite disruptive to the family. Having child welfare in charge conflicts with the multidisciplinary approach that’s favored by most professionals.

Second, child welfare is associated with an enormous amount of trauma, especially for families that are low-income and for families of color in the United States. Fifty percent of African-American children in the United States today have experienced a child welfare investigation — one out of two. That’s just crazy. Huge numbers of children are experiencing these kinds of investigations. Perhaps some are very minimal, but some of them are not going to be so minimal.

What we have is potentially traumatic family surveillance and separation that’s intrinsically linked to child welfare, because no matter how helpful or well-meaning a child welfare worker might be, ultimately child welfare has the authority to take your child away, possibly forever. Even if they do that rarely, it can still be something that is laden with fear and anxiety for families.

Adding to that, lower standards of evidence are applied in child welfare proceedings, so that makes it particularly problematic to have child welfare involved in cases of substance-exposed infants, especially because (at least based on the limited data we have, for example, for California), a significant percentage of these infants are taken away from their mothers. Taking a newborn away from their mother is not necessarily an evidence-based approach to dealing with substance use issues. But the paradigm of child welfare is not necessarily to approach the best interests of the family as a whole. The paradigm of child welfare is to reduce and mitigate risk of future child abuse and neglect.

There have been significant shifts in child welfare over time. My research largely ends in about 2000. In the first couple of decades of the 21st century, there has been a concerted effort by child welfare agencies on every level to try to counter some of the intense racialization and income inequality that is reproduced by the child welfare system. We’ve seen a dramatic decline in child removals. For example, in New York City in 1995, there were 50,000 children in foster care. In 2018, there were 8,000 children in foster care. That is a dramatic decline. However, even though there were 8,000 children, there have been an enormous number of children investigated, and in New York City in 2019, 45,000 cases were in preventative services. So you still have a lot of child welfare involvement. What that means for families is not really clear yet.

The second major shift is that there’s been an intensification of the focus on risk assessment. We have seen the development of quite sophisticated risk assessment tools, not just the consensus tools, but actuarial tools and algorithmic tools that use computational methods to assess risk. And there have been a lot of critiques of some of these tools. The main issue is, do these tools funnel multiple problems, many of them poverty-related, into child welfare? And even if racial disproportionality in some states has declined, we still have a lot of racial disproportionality in child welfare, and income inequality continues. We don’t have enough data on that to fully assess it. And so we’ve continued to have significant issues with child welfare today, even as it has changed in this new century.

What are the approaches that different states take to the issue of infants who have been exposed to substance use during pregnancy?

In the 1980s, you have an increasing number of reports coming into child welfare of substance use during pregnancy, and a lot of this was highly racialized, in terms of how it was conceptualized. During the 1980s, this problem received a lot of media coverage. And what that means is that state legislators felt they had to do something; they had to respond in some way. And their options were basically to say, well, we can mandate medical intervention in such cases, we can criminalize these women for harming their children and mandate essentially law enforcement interventions, or we can mandate civil interventions through child welfare. The current scholarship on this period — and really on this issue — tends to focus a lot on criminalization, on how pregnant women are jailed or prosecuted for this kind of substance use. And then there’s also a lot of conflation of child welfare interventions and medical interventions, all part of this larger criminalization and policing of pregnant women. And there’s a lot to be said for that framework. But I think it’s actually really important to distinguish between those things, because criminalization is actually relatively rare compared to the thousands of women who are reported in each state to child welfare every year. By far the predominant response is child welfare reporting.

So how do we essentially manage and mitigate this risk of substance-exposed infants? Child welfare has this risk prevention framing, and also, it’s supposed to be dedicated to protecting children. So they are the perfect response. And what’s interesting about this is that child welfare increasingly across states becomes the primary authority for intervening in such cases, even as simultaneously, the professional consensus increasingly converges on the idea that we need a multidisciplinary response to the issue of substance-exposed infants. If you’ve read reports that are put out on this issue of substance-exposed infants, including from the federal government, the consensus is that we need doctors and social workers and financial aid, and perhaps even law enforcement. Everyone needs to work together to deal with this issue of substance-exposed infants. But in practice, the state laws overwhelmingly favor child welfare interventions, and child welfare is mandated to mitigate risk of child abuse and neglect. They’re not there to provide a multidisciplinary approach. They can and sometimes they do; it varies greatly by state. But that’s not their primary mandate. And there are very concrete consequences to having a child welfare response to this issue.

Listen to the full podcast above, or listen and subscribe on Google Podcasts or Apple Podcasts. For more Matrix Podcasts, including interviews and recordings of past events, visit this page.

 

 

Article

How CRISPR Became Routine

A visual interview with Santiago Molina, a recent UC Berkeley PhD, on the normalization of CRISPR technologies and the new era of gene editing.

Santiago Molina

Santiago J. Molina (he/they) is a Postdoctoral Fellow at Northwestern University, with a joint appointment in the Department of Sociology and the Science in Human Culture program. They received a PhD in Sociology from the University of California, Berkeley in 2021 and a BA from the University of Chicago. Their work sits at the intersections of science and technology studies, political sociology, sociology of racial and ethnic relations, and bioethics. On a theoretical level, Santiago’s work concerns the deeply entangled relationship between the production of knowledge and the production of social order. Their research included fieldwork at conferences and in labs around the Bay Area.

In this visual interview, Julia Sizek, Matrix Content Curator and a recent PhD graduate in Anthropology from UC Berkeley, interviewed Molina about their research on CRISPR, the genetic engineering technology that has reshaped biological research by making gene editing easier. This new tool has excited biologists at the same time that it has worried ethicists, but Molina’s research shows how CRISPR has become institutionalized — that is, how CRISPR has become an everyday part of scientific practice.

This image depicts a model of the CRISPR-Cas9 system. How did you come to encounter this model of CRISPR, and how does CRISPR work? 

3D Printed interactive model of Cas9.

This model was passed around the audience at a bioethics conference in Davis, California back in 2014 when I started my fieldwork. I remember the speaker holding it high above his head and pronouncing, “This! This is what everyone is so excited about!” While he meant it as a way to demystify the new genome-editing technology, a 3D-printed model of a molecule doesn’t tell us a lot about the process behind the technology. 

What is a bit disorienting is that technically, this isn’t a model of CRISPR at all, but a model of Cas9 (CRISPR-associated protein 9, a kind of enzyme called a nuclease) in white, an orange guide RNA, and a blue DNA molecule. To put it really simply, CRISPR (clustered regularly interspaced short palindromic repeats) describes a region of DNA in bacteria where the molecular “signatures” of viruses are stored so that the bacterium can defend itself. This bacterial immune system was repurposed by scientists into a biotechnology. At its core, CRISPR-Cas9 technology is just the white and orange parts. The Cas9 does the heavy lifting of cutting DNA, and the guide RNA, or gRNA, acts as the set of instructions that the Cas9 uses to find the specific sequence of DNA where it should cut.

While people use CRISPR as a shorthand for the entire CRISPR-Cas9 system, you won’t actually find a single Eppendorf tube in a lab marked “CRISPR.” As a process, the way scientists get this to work is by adding Cas9 and the “programmed” gRNA to cells via one of several delivery techniques, such as a plasmid or viral vector, so that the Cas9 will make a specific DNA cut. In the years since then, scientists have developed a whole toolbox of different Cas proteins, and each can make many different kinds of modifications. 
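For readers who want the targeting logic above spelled out, here is a deliberately simplified sketch in Python. It treats DNA as a plain string and follows the textbook description: the 20-nucleotide guide matches the DNA immediately upstream of an “NGG” PAM motif, and Cas9 cuts roughly three base pairs from the PAM. The sequences, the function name, and the exact cut position are illustrative assumptions, not laboratory data or a protocol.

```python
# A toy model only (hypothetical sequences): DNA as a plain string, one strand,
# no mismatches or chromatin. The 20-nt guide, the "NGG" PAM rule, and the
# "cut ~3 bp upstream of the PAM" convention follow the simplified description above.

def find_cut_sites(dna: str, guide: str) -> list[int]:
    """Return 0-based positions on the given strand where Cas9 would cut."""
    dna, guide = dna.upper(), guide.upper()
    sites = []
    for i in range(len(dna) - len(guide) - 2):
        target = dna[i : i + len(guide)]
        pam_gg = dna[i + len(guide) + 1 : i + len(guide) + 3]  # the "GG" of "NGG"
        if target == guide and pam_gg == "GG":
            sites.append(i + len(guide) - 3)  # blunt cut ~3 bp upstream of the PAM
    return sites

if __name__ == "__main__":
    dna = "TTACGGACGTTAGCATCGATCGGAGTGGCTAA"  # hypothetical sequence
    guide = "GACGTTAGCATCGATCGGAG"            # hypothetical 20-nt guide (spacer)
    print(find_cut_sites(dna, guide))         # -> [22]
```

In practice, guide-design tools also search the opposite strand and score potential off-target matches; the point here is only that the gRNA supplies the address and Cas9 supplies the cut.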

What is interesting about this sociologically is that CRISPR has a wide scope of potential application, and early in its development, every possible use was on the table, from bringing back the wooly mammoth to ending world hunger. This meant that exactly what it would be, ontologically, was really open. Scientists would describe the technology as a pair of scissors, as a scalpel, as a find-and-replace function for DNA, a guided missile, a sledgehammer, etc. I became obsessed with these metaphors because they were traces of the active construction of CRISPR as a technology. 

My research takes this focus on the development of genome editing technology and reframes it as a problem of institutionalization, which sociologists generally understand as the process by which a practice acquires permanence and reproducibility in society. I look at how the ideas around what the technology is, how it should be used, and what it should be used for come to be settled, legitimized, and eventually taken for granted.

CRISPR has recently been in the news, not only because of Emmanuelle Charpentier and Jennifer A. Doudna’s 2020 Nobel Prize, but because of the 2018 announcement that a Chinese researcher had used CRISPR to gene-edit babies. How has the media covered CRISPR and the ethics of the technology? 

A crowd of photographers and reporters gearing up for He Jiankui’s presentation in Hong Kong.

Most media articles go something like this: “The idea that scientists can modify your DNA at will sounds like science fiction. But now it’s reality!”

This framing does important work to normalize futures that are in active construction. When newspapers and magazines cover CRISPR, they are bridging the social worlds of science and civil society and making concrete a very fluid social process of knowledge production and technological development. In doing so, some media coverage amplifies the hype around CRISPR and genome editing.

That said, it’s more complicated than saying they sensationalize it, because most coverage draws directly from interviews with actual genome-editing scientists, and they do their best to represent the science accurately. Instead, I think about media coverage as part of the cultural side of institutionalization. News articles offer interpretive scripts, through framing, that audiences can use to make sense of what CRISPR is, how it is used, and what the ethical issues are. This “making sense” is part of how genome editing is coming to be seen as a normal practice in biomedicine.

The distinction between investigative reporting and general media is important to keep in mind. Take, for example, the controversy surrounding the birth of genetically modified twins in Shenzhen, China in November 2018. Had it not been for keen investigative reporting by Antonio Regalado of the MIT Technology Review ahead of the Hong Kong Summit, the controversy would likely have unfolded differently.

The image above is a photo of a group of reporters during the summit taking pictures of He Jiankui, the scientist behind the clinical trial in Shenzhen that aimed to use CRISPR-Cas9 to confer genetic immunity to HIV in embryos. Subsequent media coverage of the controversy drew from interviews with high-profile, U.S.-based scientists in the field. These scientists argued that He Jiankui was an outsider on the fringe of the field. The resulting articles framed him as a “rogue,” “a mad scientist,” and a “Chinese Frankenstein.” This “bad actor” framing tells us that on the whole, the field is responsible and CRISPR itself is good, essentially repairing the crisis.

However, in alignment with more recent investigative reporting, my ethnographic research found that a handful of U.S.-based scientists had helped He Jiankui with his project. He had earned his PhD at Rice and was a postdoctoral fellow at Stanford. Scientists at UC Berkeley had given him technical advice on the project, as well. To me, this suggested that the “bad actor” framing — and the Orientalism surrounding how he was talked about – obfuscated the broader moral order of genome editing.

CRISPR is a relatively contemporary invention, but the idea of genome editing has a much longer history. How does this history appear in your research, and what does Charles Davenport have to do with it?

Photograph of Charles Davenport hanging in the common area of one of the buildings at Cold Spring Harbor Laboratory.

It’s interesting how little history appeared in my research. There is a sort of presentism that comes with “cutting-edge science.” CRISPR technology is part of a lineage of genetic engineering tools, going back to the 1970s, when recombinant DNA (rDNA) was invented. This biotechnology, rDNA, allowed scientists to mix the DNA of different organisms. It gave rise to a whole industry of using engineered bacteria to produce biologics and small molecules like insulin. The history of rDNA is important because the debates around its use in the 1970s came to be the dominant model of decision-making surrounding new technologies in the United States. Indeed, a handful of the top scientists from these debates have held top positions on committees that have been tasked with debating the ethics of genome editing over the past five years. 

Charles Davenport predated these debates, and has been largely an invisible figure for modern genome-editing scientists. Davenport was a prominent scientist in the early 20th century. He was a eugenicist and racist scientist who served as the director of Cold Spring Harbor Laboratory, a private, non-profit research institution, from 1898-1924. While at CSHL, Davenport founded the Eugenics Record Office, which published research to support the eugenics movement. I found this photo of Davenport in Blackford Bar, the pub at Cold Spring Harbor Laboratory, where I went to the first meeting, titled “Genome Engineering: The CRISPR/Cas Revolution,” in 2015. While the scientific community eventually came to reject Davenport, and the eugenics movement fell out of fashion after World War II, this history is important to recognize as we usher in a new technology aimed at eliminating genetic diseases and improving human health. At the conference in 2015, I thought, if Davenport’s ghost had been hanging out at the pub, he would have been thrilled.

The scientists I worked with vehemently rejected the idea that what they were doing could be considered eugenics, or what one scientist called the “E-word.” But people often forget that the eugenics movement in the United States was both mainstream and progressive at the time. Eugenics laws were drafted and passed by Democratic legislators who aimed to address poverty by drawing on the most up-to-date science, medical knowledge, and expert opinion. When this history was brought up at modern conferences and meetings, it was either subtly discredited as fear-mongering or tucked into a panel at the end of the conference to entertain philosophical discussion.

Your research also contends with the way research is conducted between different laboratories, even when many of the plasmids (a kind of DNA molecule commonly used in CRISPR applications) and techniques that they use are proprietary. The shipping area in this image is where Addgene, which has been called “the Amazon of CRISPR,” sends reagents and plasmids used in scientific research to laboratories around the world, and manages many intellectual property issues. What is Addgene’s role in the scientific process?

Hundreds of plasmids await daily FedEx pickup in Addgene’s shipping room.

While I was doing my research, there was a raging patent dispute between the University of California, Berkeley and the Broad Institute, where each institute claimed to have invented the technique for modifying mammalian cells with CRISPR. So the proprietary aspects of CRISPR were always in the background. But I think if it wasn’t for Addgene, these concerns would have really slowed down the spread of genome editing.

Addgene is a non-profit organization that mediates the exchange of practices and biological materials between labs. What they do is manage a plasmid repository, a sort of technique library, and fulfill requests for plasmids by sending them out to the labs that ask for them. Because plasmids are central to many biological experiments, and are key for CRISPR-based techniques, scientists rely on the availability of these circular pieces of DNA as a key reagent. Since receiving its first CRISPR plasmid in 2012, Addgene has accumulated over 8,000 different CRISPR plasmids in the repository, and has shared them over 140,000 times with laboratories across 75 different countries. They essentially took over the logistics of CRISPR distribution, moving biological materials from place to place. By doing this at a really low cost, they effectively contributed to what scientists described as the “democratization” of genome editing.

They also keep patent lawyers at universities happy with detailed record-keeping and by electronically managing material transfer agreements (MTAs), which sort out the proprietary issues, through a Universal Biological Material Transfer Agreement (UBMTA). This UBMTA relaxes the institutional constraints on the transfer of biological materials. Scientists love this because it reduces a lot of paperwork.

Last but not least, Addgene contributes to the institutionalization of CRISPR-Cas9 by producing guidelines and protocols that support the use of some of the plasmids. For example, Addgene was the first to develop a textbook for CRISPR. Their CRISPR 101 eBook has been downloaded more than 30,000 times, and their informative CRISPR blog posts had been visited over 500,000 times as of 2019. In these materials, detailed definitions of new genome editing techniques and terms of art are spelled out for curious adopters. Additionally, the scientific team at Addgene works with the scientists who are depositing plasmids to coproduce useful documentation to accompany the plasmids. Addgene does not share plasmids with for-profit organizations, but acts as an up-to-date clearing house and tracker of CRISPR innovations in academic and non-profit laboratories.

As part of your research, you spent time at different labs around the Bay Area to understand how CRISPR research has become an ordinary part of scientific research. Can you walk us through some of these images of lab life and what they show us about how CRISPR has become institutionalized? 

Sculpture of a ribosome in an atrium.

 

Rows of packed lab benches.

 

The first image is of the atrium in one of the buildings I often found myself in for fieldwork. The huge sculpture of ribosomes on the side looks so abstract to me. A lot of these spaces required keycard entry, and for me, the emptiness of some of the spaces made them all the more isolating. I would have to get lost sometimes just to find the right room, where a small group of scientists were discussing the next big breakthrough or the next application of CRISPR-Cas9. The public-facing image of the field was really different from the behind-the-scenes shop-talk environments where I took notes. It was different because it wasn’t open to anybody, and you would need a lot of intellectual and cultural capital to enter those places.

The second picture, to me, represents the ordinary that is behind those barriers of access. Lab benches are workshops. They are shared spaces that are a lot like kitchens in a restaurant. Everything has its place, every tool is in its nook, you might find some remnants of an experiment in the fridge, or old reagents in the freezer. But you can tell there is some fun in the mix. The folks who are working at those benches are doing it because they love it. For these graduate students and postdocs, CRISPR-Cas9 was an exciting opportunity, something that would help them finish their PhD, or, if they were undergrad volunteers, a key skill to move forward. A lot of the time, lab life felt banal: scientists moving through their careers, with lots of failed experiments, meetings that could have been emails, day-to-day conflict with coworkers, late hours, etc. I wish people could see the contrast between the hype surrounding something like CRISPR-Cas9 and the on-the-ground struggles of scientists in the lab.

In these pictures below, you show a humorously decorated doorway that tells us a lot about how scientific work happens at a university. What does this tell us about who conducts science, and about equity issues within the lab?

Threshold of the lab as an angry doorway with a top hat and mustache, hungry for the labor of postdoctoral fellows, undergraduates, and graduate students.

This personification of the lab was interesting to me because it draws attention to those struggles I just mentioned. Of course the decoration is a lovely piece of satire, but scientific discoveries and breakthroughs are the products of years of labor. A lot of this work is done by unpaid undergraduate volunteers, graduate students who are often in precarious financial situations, and some paid research associates, and it is coordinated by postdoctoral fellows. Sometimes, because of the demands of experimental work, lab workers would have to come in in the middle of the night to feed cells, check on experiments, or manage instruments. In the lab I worked in, one research associate worked as a Lyft driver on the side because their salary wouldn’t cover their cost of living. While the hierarchies of labor are still very strong, some universities and labs, like the Innovative Genomics Institute at UC Berkeley, are now requiring that all undergraduate workers be paid. I think this is a step in the right direction, but there are still equity issues both between and within ranks of the lab. 

This disparity is even more extreme when you consider how senior scientists and universities benefit from scientific labor. Social capital in the form of reputation and financial capital both accumulate as a result of this work. Partnerships between university laboratories and the biotech and pharma industries in particular have become commonplace in 21st-century biomedicine. Research examining these partnerships describes this as academic capitalism or neoliberal science. My research adds to this line of social scientific research that has traced this institutional shift, where academic organizations are increasingly adopting the practices and bureaucratic frameworks of for-profit organizations in industry. Those patent disputes I mentioned previously are a good example of this. 

With CRISPR research, as with much other biological research, the institutionalization of scientific norms is essential to conducting scientific research. What does Michael Jackson have to do with that? 

DIY biohazard safety sign posted on the lab doors.

There are three proximate institutions of social control surrounding scientific work, in my view: biosafety, bioethics, and the ethics of research misconduct. This poster is an example of a biosafety rule being operationalized in the lab. It is posted on the doors so you would see it as you exit the lab space to the common area and kitchen. Biosafety essentially aims to contain the materials, reagents, and products of scientific experiments to the lab. Lab managers and principal investigators must fill out detailed forms describing the experiments being done and submit these to the biosafety office at their university. These are then reviewed and evaluated by biosafety experts, who then make recommendations about infrastructure requirements for the spaces where the experiments are conducted and prescribe mandatory training for any personnel conducting those experiments.

Biosafety is a really interesting social institution because it must constantly keep up with new techniques and develop risk frameworks for assessing them. For innovations like CRISPR-Cas9 that are revolutionary, this sometimes requires some finesse. When you consider the modifications being made to bacteria, plants, non-human animals, and human cells, you can bet there is considerable work going into making sure those biologics don’t end up where they aren’t supposed to. Consequently, scientists must follow strict protocols for waste disposal and use the appropriate personal protective equipment (PPE).

But then consider who is doing those experiments. There can sometimes be a disconnect between the official protocols and how they are enacted. This poster captures that disconnect and suggests that more immediate forms of social control might work better in some cases than extensive bureaucratic procedure. Plus, Michael is iconic.

As with any social process, there are bound to be accidents. In the lab I observed, for example, a graduate student accidentally cut himself through his gloves on some broken glass while conducting some genome-editing experiments with lentiviral-packaged Cas9. This lentivirus could, in principle, infect any mammalian cell. While he was working under the fume hood, which creates negative pressure to suck up the air where the experiment is being done, there was still a risk that Cas9, which would edit the DNA, could enter his bloodstream. He then went to the post-doc he was working under and the lab manager, who advised him to report it to the Office of Environment, Health & Safety (EH&S). EH&S then told him to go to the student health center. Once at the health center, the grad student with his bandaged hand informed the nurse that his lab was categorized as BSL-3 (biosafety level 3), to which the nurse responded, “What is BSL-3?” He was ultimately fine, as far as we know, but the example shows a further disconnect between the different offices tasked with managing the risks of scientific work.

As genome editing continues to develop as a broader institution in biomedicine, there are going to be accidents, and there is going to be misuse. No number of guidelines or codified norms can prevent that. This is why it is crucial that we continue having debates about the norms governing the use of the CRISPR-Cas9 system, both as a promising clinical technique and as a sociocultural institution. My hope is that these debates will lead to concrete regulatory and legal changes that can more directly shape this technology’s use. 

Article

The Terracene: An Interview with Salar Mameni

Salar Mameni

At the intersection of the War on Terror and the Anthropocene lies Salar Mameni’s concept of the Terracene, which describes the co-emergence of these two terms as a means to understand our contemporary social and ecological crises. Mameni, an Assistant Professor in the department of Ethnic Studies at University of California, Berkeley, is an art historian specializing in contemporary transnational art and visual culture in the Arab/Muslim world, with an interdisciplinary research focus on racial discourse, transnational gender politics, militarism, oil cultures, and extractive economies in West Asia. They have published articles in Signs, Women & Performance, Resilience, and Al-Raida Journal, among others.

In this visual interview, Julia Sizek, Matrix Content Curator and a PhD candidate in the UC Berkeley Department of Anthropology, talked with Professor Mameni about their research, working with select images of art discussed in their forthcoming book, Terracene: A Crude Aesthetics.

The concept that you propose in your book, the Terracene, foregrounds the War on Terror as necessary for understanding not only our contemporary political crises, but also our contemporary ecological crisis. Describe your concept, and what it adds to our understanding of the links between terrorism and environmental issues.

My book coins the term “Terracene” in order to bring attention to the role of militarism in enacting the ongoing ecological crises we currently face. I insist that contemporary forms of warfare – such as the infamous War on Terror – are concurrent with and continuations of settler colonial land grabs and habitat destructions that have created wastelands across the globe. In their initial timeline for the Anthropocene,  scientists traced the origins of this new epoch to technological innovations in early 19th-century Europe that brought about industrialization. In my view, this is an inadequate historiography that does not take into account longer histories of European settler colonialisms, as well as the ongoing role of militarism in maintaining wastelands. The term “Terracene” is a way of highlighting the terror that is tied to the current geological timeline.

Terror, however, is not the only idea I intend to highlight with the notion of the Terracene. I also take advantage of the sonic resonance of “terr” (meaning earth/land) in the word “terror” in order to direct our attention to the significance of thinking with the materiality of the earth itself. In my work, I consider this through the toxicity of militarism and extractive economies, which turn the earth itself into a weapon that continues to poison even after the troops and the industries have receded. Scholars of environmental racism often highlight the dumping of toxic waste on lands inhabited by racialized, poor, and devalued communities. My book emphasizes the production of “terror” out of “terra,” which can mean the weaponization of the earth itself. Yet, I believe that the very shift of attention to the earth’s many potentialities can also allow for conceptualizing futures out of toxic wastelands. For me, new theories are only useful if they do not simply mount a critique of systems of oppression but also offer new imaginaries as foundations for future directions. Much of my book is attentive to materialities and thought systems that do not align with scientific conceptualizations of ecological thinking as a way of opening up new modes of thought.

Part of the reason you relate the Anthropocene and War on Terror is because of their coeval histories. Aside from emerging during the same era, how are the histories of these two concepts — terrorism and the Anthropocene —related?

Yes, the so-called War on Terror, as well as the scientific notion of the Anthropocene, were both popularized in 2001, each proposing a new way of conceptualizing the globe. What is fascinating to me is how each of these ideas revolves around an antagonist: the terrorist in one case, and the Human (Anthropos) who caused climate change in the other.

The question I raise in the book is this: why is it that the term “terrorist” cannot be applied to the Human who has caused deforestations, temperature rise, and oil spills, making the globe uninhabitable for endangered species, as well as threatening the livelihood of multi-species communities globally? Why is the notion of the “terrorist” instead reserved for those who protest the building of oil pipelines on Indigenous lands, or those who resist settler colonialism in places such as Palestine? This tension brought me to see that the idea of the Human (Anthropos) continues to be limited to those engaged in settler colonial ventures, those who are protected against the “terrorist” through the security state.

What do you think the study of art history can bring to the Anthropocene, which is often described through science?

Great question! The book argues that “science” is a provincial worldview that has displaced a plethora of diverse thought systems that are in turn called “art” (or “myth” or “superstition” or “religion”). So my first approach in the book is to question the very art/science divide that disallows those deemed non-scientists to participate in knowledge production. Non-scientists have of course included very large groups, such as women, non-Western knowledge producers, and non-human intelligent beings. This vast array of intelligence left out of “science” says much about the limits and hubris of scientific thought. My book opens up space for artists who think beyond the reaches of scientific ecologies. A part of the book, for example, is dedicated to ecologies of ancient deities. For instance, I consider Huma, the Mesopotamian deity who has been conjured and resurrected by the contemporary Iranian artist Morehshin Allahyari (Fig. 1).

Figure 1: Morehshin Allahyari, “She Who Sees the Unknown: Huma” (2016), Image courtesy of the artist.

 

As the artist explains, this is the deity of temperatures. Huma’s body is multi-layered and mutative. It has three horned heads, a torso hung with large breasts, and two snake-like tails. Huma is multi-species and multi-gendered and is the deity that rules temperatures. In a time of temperature rise, wildfire, and fevers brought about by the COVID-19 pandemic, Huma is the deity to conjure. Indeed, Allahyari conjures her as a protector, but also builds her out of petrochemicals, the plastic used in 3D printers.

I also take seriously the intelligence of non-human phenomena such as oil. In the book, I consider images of explosions at a Southern Iranian oil field, as documented by the Iranian filmmaker Ebrahim Golestan in a film called A Fire! (1961) (Fig. 2).

Fig. 2: Still from “A Fire!” (Dir. Ebrahim Golestan, 1961)

Rather than thinking about the human triumph of putting out the explosive fire, which took 70 days to extinguish, I consider the intelligence of petroleum that refuses to be extracted from bedrock. I call this human/oil relationality “petrorefusal” in order to call attention to the unidirectional master narrative of extraction. What would it mean, for instance, if we understood explosions as petroleum’s refusal to leave the ground? Would engaging such a refusal mean an end to extractive practices at the current industrial-capitalist scale?

Though you are an art historian, you are attentive to the limits of the visual as a mode of sensing the world. How do you bring other modes of sensing into your work, and how does this shape your approach to art history, which is often imagined as a visual discipline?

Yes, the dominance of the visual within traditions of art history cannot tackle the rich sensorial relations that ecological thinking needs. In the examples of the artworks I cite above, for instance, my theories do not arise from the visual aspects of the works alone. In the case of Huma, a visual reading would miss the spiritual and ethical significance of the deity’s conjuring. Instead, my reading of Huma engages with the object’s deep time, a time that dissolves its plastic materiality into the microbial temporality of oil’s production. In this sense, the sculpture is not simply and statically visual or coeval with our present moment. If we focus on the time of oil and plastic, the sculpture moves into a performative, mutative flux of multi-species organisms across temporalities that are beyond our own. The book as a whole treats the visual as embedded within (and inseparable from) multiple sensorial experiences.

How does art add to our understanding of the Terracene?

I coined the term Terracene as a critique of the notion of the Anthropocene. It is meant to question the centering of a destructive Human (Anthropos) at the core of a planetary story. In this sense, I probe the narrative structure of this scientific story of the Anthropocene — a story that is proposed to be a fact. Usually, storytelling is understood to belong to the domain of arts and humanities. By definition, stories are not checked for factual accuracy, but engaged with at the level of the creative imagination. This is precisely what gives stories their power. Stories can build alternate worlds and offer alternatives to how we perceive reality to be. So if the Anthropocene is a story, then surely other stories can be told. The Anthropocene story is a story of the destructive human, which is why I propose that it is better called the Terracene.

What if we began to tell creation stories at the moment of planetary destruction? Indigenous cultures across the world have creation stories that have been vehemently suppressed by destructive (settler) colonial knowledge productions and worldviews. In the book, I make a case for ethical engagements with subjugated forms of knowledge that offer alternatives to thought systems that have brought the Terracene into being. One such story I relate in the book comes from my own vernacular Islamic culture that imagines the world as a sacred mountain balancing on the horns of a bull, the bull standing on the back of a fish, and the fish, in turn, being held up by the wings of an angel.

Fig. 3: Salar Mameni, “Creation Story” (2022)

I argue that such a creation story emphasizes the inter-relatedness and inter-reliance of all things. The world hangs together in a fine balance, with every creature mattering to its overall existence. Art, in this sense, is not an alien other to science, but an equal participant in the creation of worlds we inhabit.

 

Podcast

What Happened to the Week? An Interview with David Henkin

David Henkin

We take the seven-day week for granted, rarely asking what anchors it or what it does to us. Yet weeks are not dictated by the natural order. They are, in fact, an artificial construction of the modern world.

For this episode of the Matrix podcast, Julia Sizek interviewed David M. Henkin, the Margaret Byrne Professor of History, about his book, The Week: A History of the Unnatural Rhythms that Make Us Who We Are. With meticulous archival research that draws on a wide array of sources — including newspapers, restaurant menus, theater schedules, marriage records, school curricula, folklore, housekeeping guides, courtroom testimony, and diaries — Henkin reveals how our current devotion to weekly rhythms emerged in the United States during the first half of the 19th century.

Reconstructing how weekly patterns insinuated themselves into the social practices and mental habits of Americans, Henkin argues that the week is not just a regimen of rest days or breaks from work, but a dominant organizational principle of modern society. Ultimately, the seven-day week shapes our understanding and experience of time.

Excerpts from the interview are included below (with questions and responses edited).

Listen to this interview as a podcast below, or listen and subscribe on Google Podcasts or Apple Podcasts.

 

 

What are the different ways people have thought about the week?

The seven-day week does many things for us in the modern world, but we tend to focus exclusively on one of them, and that’s the idea that we have a unit of time that divides weekdays and weekends, work from leisure, profane time from sacred time. The week creates two kinds of days. But by its very structure, the week also divides time into seven distinct, heterogeneous units. Every day is fundamentally different from the day that precedes or follows it. The names we use for the days of the week suggest no numerical relationship between days. The week also lumps time together for us in interesting ways. We talk about what we did this week, what we hope to get done next week. What the week does most conspicuously and powerfully for us in the modern world is coordinate our schedules. It sequesters or regulates the timing of certain activities, especially activities that we try to do in conjunction with strangers.

How did people begin to use the week for stranger sociality?

The best example might be a market day, where you want to only have a public market every so often, and you want to make sure everyone can be there. And everyone remembers when it is and it doesn’t conflict with other things. That’s one model for it. But I argue in the book that it was really only in the early 19th century that large numbers of people began to have schedules that were different from one day of the week to another.

The institutions that helped produce that are varied. They included things like mail schedules, newspaper schedules, school schedules, voluntary associations (like fraternal orders or lodges), and commercial entertainment, like theater or baseball games. The more people lived in large towns and cities, the more they were bound to patterns of mail delivery or periodical publication, and the more likely they were to have regular activities that took place every seven days, or on one day of the week or another. Once they had that, it was a self-perpetuating cycle, because then you’ll begin to schedule other activities so as not to conflict with them, or to be memorable and convenient. The weekly calendar began to be used to organize these regularly recurring activities, typically that involved strangers and were open to the public.

Today, we often think about having the work week, and then the weekend, if we are so lucky. What are the ways that historians think about this division of either week and weekend, based on work or leisure?

Historians haven’t really thought too much about the weekly calendar at all, but to the extent that they have, they have focused exclusively on this question of the work week. Most commonly, they’ve studied the ways in which organized labor or capital have sought to control or regulate the length, pace, and even the timing of the work week.

The Industrial Revolution brought about a hardening of the boundaries between work and leisure, rather than having leisure bleed into Monday, or having work bleed into Saturday or Sunday. But the week has been doing something industrial for centuries, even millennia, going back to its biblical origins. The concept of a Sabbath is essentially an industrial one, which says there’s a time for work, and a time for rest or “not work.” That’s how historians have written about it.

Historians have not paid much attention to the role of leisure in organizing weekdays. They have paid attention to the role of leisure in giving special meaning to Sunday, and the great debates over how one should spend one’s Sunday — whether it should be in church, or going to the theater, or whether it must not involve alcohol, or whether it can involve sex, or whether the mail can be delivered. That all features prominently in the historiography of 19th-century America. But few have noticed that people’s lives have these other weekly rhythms, too.

What were the sources you drew upon to come to your conclusions about how the week is shifting and changing?

There were two kinds of sources. The first is a bit boring, but phenomenally important, which is that if you look at any newspaper or city directory, or anyone’s account of their lives, you suddenly realize how many activities they engage in that are pegged to the week, whether it’s going to musical societies or temperance lectures or anti-slavery organizations. You notice that they’re organizing by the week. It’s glaring at you and in plain view, but if you don’t ask the question, then you won’t actually see it. We know that newspapers typically came out once a week, but on which day of the week did they come out? Was it the same? Did it vary? Things like that don’t require a huge amount of digging. It just requires asking the question. You can basically ask that question to almost every public document from the first half of the 19th century in the United States, and those documents that register life in an urban or semi-urban society create a thick catalogue of weekly activities, obligations, and habits.

You also look at diaries. What are some of the insights you can get from diaries, and how did the practice of diary-making change during the period of time you’re looking at?

Diaries tell us whether people went to French class on a Wednesday or not, but the cool thing that they do, along with correspondence and other kinds of recollections, is allow people to narrate their own experiences. Those are fascinating because you can not only see what they did, but how they remembered — or sometimes failed to remember — what day of the week it was. One of the things I came to be especially impressed by during the course of my research for this book was the link between the week and memory. We can use diaries as the main example, because that’s probably the single source type that I immersed myself in most deeply. Diaries are not hard to find. They are everywhere. The challenge there was to spend years looking at as many of them as I could, then thinking about the various kinds of archival biases I needed to overcome to make sure I was looking at a broad range of diaries.

Diary-keeping is a very old activity. I would say it became a mass practice in the United States in the early 19th century. In New England, it was somewhat widely practiced even in the 18th century, but became much more so in the 19th century, and the 19th century also saw the rise of the pre-formatted diary book. It had been introduced as a consumer good in the United States in the 1770s, but totally bombed. No one really wanted such a thing. Instead, people used almanacs, which were the standard calendar format as a material artifact. Almanacs are organized around the month, and they tend to focus on naturally observable things, like the weather. People didn’t really see any need for a pocket diary that you could write stuff in. But by the 1820s, these were suddenly quite popular. The most common format was six days to a spread, sometimes seven. It conditioned people to think about their lives in chunks of time that were much smaller than a month, but bigger than a day.

You mentioned that a lot of historians of industrial capitalism have focused on the work day. How do your insights about the week bear on this focus on the hour?

The hour is by far the time unit that has been of greatest interest not only to historians of labor, but also to historians of time, who have been far more interested in the clock than the calendar, in part because the clock is a mechanical device, and we tend to look for technologies to explain fundamental changes in temporal consciousness, whereas calendars don’t seem to be that kind of technology. The week is not measured any more precisely today than it was 100 years ago, or even 500 or 1000 years ago. The hour is very much associated with punctuality, and with discipline. The 19th century is really also when large parts of the world began calculating hours the way we do today, conceiving of the hour as 60 minutes, and as 1/24 of a full daily cycle, rather than as 1/12 of the variable amount of daylight, which is how most societies used to define it.

When you read about the week, you realize that you’re looking at a unit of time that doesn’t fit into any of the big paradigms that have drawn our interest to the hour. We’re interested in the hour because we think that pre-modern time was natural and observable. Modern time is homogeneous. It’s arithmetically calculable, and fundamentally alienated from nature. But the week is equally artificial. It’s not actually rooted in natural rhythms, and it’s not confirmed or correctable by observable natural phenomena. It’s very rigid and artificial, but it’s also very, very old. So once you stop assuming that clock time is the way to look for the hallmarks of modernity, I think it opens up new ways of being interested in the week. The week wasn’t even a universal system of any kind in large parts of the world, including East Asia, which did just fine without thinking of the seven-day cycle as a timekeeping register of any kind. My research into the week makes me think of the hour as a less apt symbol for the difference between modern and pre-modern timekeeping. The week is a heterogeneous timekeeping system. The homogeneity of time is a powerful feature of modern timekeeping, but the seven-day week says that no two days are alike. We speak about daily life, everyday life, but the week resists that whole notion. It insists that no two consecutive days are substitutable. It would seem to correspond with the pre-modern notions of temporal movement and heterogeneity that used to interest anthropologists studying timekeeping in primitive societies, and yet it is fundamentally modern and has only in the last 100 years become a global timekeeping system.

The week is more about the calendar that you keep than about the town square, which doesn’t raise a different flag on Mondays or Tuesdays. That raises the question of how the week has come to be seen as subpar, or irrational. There have been different projects to try to remake the week into something that is more like a clock tower. What have some of those projects been?

There have been three big ones. They’re all big, because they all represent an attack on the seven-day week from very powerful, and in many other respects, successful revolutionary movements.

The first was the French Revolution, which sought to rationalize and standardize measurements of all kinds, and succeeded. Many of the ways in which we measure things, especially outside the United States, are a product of the French Revolution and its belief in enlightened rationality. The French Revolution also had another gripe with the week, apart from the fact that it’s awkward and irrational, which is that it seemed to be the fundamental anchor of the power of the Catholic Church in old regime France. So the French revolutionaries created a new calendar. They not only renamed months and years, but also, more radically, introduced a 10-day week, called the décade, which was fundamentally different from the seven-day week. And it was a failed experiment.

The next big one was the Soviet attack on the week. The Soviets were mostly interested in continuous production in factories, but they also wanted to undermine the power of the Russian Orthodox Church. They first went to a five-day week, then a six-day week, and then weeks were not coordinated. That was the part that had to do with continuous production, similar to a hospital or any other enterprise that seeks continuous operation: I have one day off, but my best friend or wife might have another. That failed, in part because of resistance to having a non-coordinated week.

The third attack is less well known, but it came from American and European corporate capitalism and the rational reforms favored by big business, which were largely in place by World War One. These reforms created a universal system of timekeeping that gave us things like time zones, which divide the world into 24 zones, and a line that marks where the day officially ends and begins, somewhere in the Pacific Ocean antipodal to Greenwich, England. Or daylight saving time, the idea that you can manipulate the clock for various social or economic benefits. All these things are products of what my colleague Vanessa Ogle calls the global transformation of time between 1880 and 1920.

The one thing that many of those same reformers wanted to do — and failed to do — was to tame the week by making it an even subdivision of months, and especially of years. And that’s not a very big change, right? They’re not making the week longer or shorter. They’re not making it non-coordinated. All they’re doing is saying that at the end of every year, there’ll be one day, or two if it’s a leap year, that are blank. Most proposals to tame the week, as I would call it, or to reform the week, simply asked for one or two blank days that would have no weekly value. The purpose was so that the cycle of weeks would be 364 days, not 365, and therefore divisible by seven, and therefore every January 28 would be a Monday. The League of Nations took it up and considered it, but rejected it. Many people assumed that this was the wave of the future, but instead it suffered the fate of Esperanto, not the fate of time zones.

Meanwhile, the week was entering, without much resistance, all these societies that never had one. If I were a historian of Japan, I would really want to study the cognitive process, the cultural process, and the political process by which a society that had never counted continuous seven-day cycles suddenly began organizing not only its work life, but life more generally, around this complete innovation. It’s not flashy like the internet. But it is a technology, and it was completely new in Japan. It’s a different story in the United States, where the technology was quite old and was doing new things for people without anyone really commenting on it.

Article

The Labor Market and the Opioid Epidemic: A Visual Interview with Nathan Seltzer

Nathan Seltzer is a postdoctoral scholar in the UC Berkeley Department of Demography. He received his PhD in sociology from the University of Wisconsin-Madison, where he also trained in demography at the Center for Demography and Ecology. His research explores the relationship between economic change and population trends. In published and ongoing work, he investigates how the decline of the American manufacturing sector has impacted fertility rates, mortality rates, and economic mobility. 

Social Science Matrix content curator Julia Sizek interviewed Seltzer about his recent research, using images from his article, “The economic underpinnings of the drug epidemic,” which was published in Social Science and Medicine – Population Health in December 2020. (Please note that captions have been revised for this article.)

 

graph showing rise in opioid rates
Fig. 1: Annual number of total drug overdoses, specified opioid overdoses, and corrected estimates of opioid overdoses, which include specified opioid overdoses and predicted opioid overdoses for death records that had an unspecified contributing cause in the United States.

 

During the last 20 years, the number of opioid-related deaths has been dramatically increasing, as Figure 1 shows. How have scholars typically understood the causes of the opioid epidemic?  

There are a number of reasons why drug and opioid overdose deaths have increased over the past two decades. To begin with, pharmaceutical companies ramped up the manufacturing and distribution of prescription opioids in the 1990s. Purdue Pharma is best known for its role in pushing OxyContin, but the widespread adoption of prescription opioids for pain ailments extends to the broader pharmaceutical industry, which promoted the idea that opioids were non-addictive and safe to use with minimal risks.

The deliberate distribution of prescription opioids by pharmaceutical companies is a supply explanation for what propelled the opioid epidemic. At the same time, we know that supply cannot exist without demand. Recent academic literature, including my study, has found that the success of the pharmaceutical companies in distributing prescription opioids was driven in part by below-par social and economic conditions. In particular, economists Anne Case and Angus Deaton have emphasized in their research how deteriorating quality of life and economic “despair” have proliferated in recent decades. Indeed, there is a strong correlation between measures of economic precarity and opioid prescribing patterns.

While the drug epidemic was initially spurred by the over-prescription of opioid medications, two additional developments kept it going. First, the heroin supply rose at the beginning of the 2010s. Second, synthetic opioids, such as fentanyl, spread shortly after heroin did. Yet the drug epidemic is wider than just opioid use – there has also been an increase in deaths involving psychostimulants and cocaine. In my research, I focus on the broader drug epidemic, rather than just the opioid epidemic, to call attention to this broader development.

Fig. 2: Total number of workers employed in the manufacturing sector in the United States, 1980–2019. (Data Source: U.S. Bureau of Labor Statistics, All Employees, Manufacturing [MANEMP], retrieved from FRED, Federal Reserve Bank of St. Louis; January 28, 2020.)

Your research examines the link between the labor market and opioid overdose mortality. In the graph above (Figure 2), we can see the general decline in the number of workers employed in manufacturing. How do scholars normally explain the link between this decline in jobs in the manufacturing sector and opioid deaths, and what is important about manufacturing-sector jobs compared to other declining industries? 

The decline of U.S. manufacturing is one of the most important labor market events of the past fifty years. Between 1970 and today, manufacturing jobs went from representing a quarter to less than a tenth of all jobs in the U.S. The issue with this decline is that manufacturing jobs have traditionally functioned as a ladder for upward economic mobility, especially for those without a college degree. As manufacturing employment has decreased, no other industry has taken manufacturing’s place to provide a similar ladder for upward economic mobility. Instead, most employment growth has been in the “low-skill” service sector, which provides wages that are not comparable to those commanded by manufacturing workers.

Scholars have recently begun to examine how these sorts of labor market changes are impacting different facets of society, including trends in drug overdose mortality rates. My research builds on this new literature by examining how the loss of manufacturing jobs predicted the rise of the drug epidemic. The mechanism behind this association is that manufacturing decline heightens economic uncertainty for both workers who are directly laid off, as well as the broader community that experiences reduced employment opportunities. This economic uncertainty fosters a risk environment that increases the likelihood of substance use.

Fig. 3. Change in the share of employees in the manufacturing sector by state, 1998-2016. Data: U.S. Census Bureau, County Business Patterns Program.

 

As we can see in Figure 3 (above), there is significant variation across states in the extent to which manufacturing declined. What does examining the opioid epidemic at the state scale show us that’s less visible at other scales, and what did you find when you examined smaller scales at the county level?

I chose the state scale for the primary analysis because there is substantial variation in both drug overdose deaths and manufacturing employment across states. This state-level variation is not just random noise, but the result of different social, economic, and health policies that have been implemented by states over the course of decades. These policies range from labor deregulation to Naloxone access laws (Naloxone is a drug that immediately reverses an opioid overdose) and the creation of prescription drug monitoring programs. Accordingly, population health outcomes are now increasingly determined by state-level policies and regulations, and it is important to take into account these broader socio-political policy regimes when conducting a statistical analysis.

The results of the state-level analysis indicate that states with higher levels of manufacturing employment had lower rates of drug overdose deaths. Specifically, for every one percentage point increment in manufacturing employment, there is a 3.2% reduction in drug overdose rates for women and a 4.7% reduction in drug overdose mortality rates for men. Between 1999-2017 (the length of the study period), the overall decline in manufacturing employment experienced by all states accounted for approximately 92,000 overdose deaths for men and up to 44,000 overdose deaths for women.

In addition to the state scale, I examined whether the association between manufacturing employment and drug overdose deaths held at smaller geographic levels, including the commuting zone level (a level slightly larger than a metropolitan statistical area) and the county level. The results demonstrate that the statistically significant association remains, although the effect size attenuates slightly. This attenuation can be explained by the shift to a smaller unit of geography: studying commuting zones or counties in isolation ignores spillover effects, like work commuting patterns across counties.

 

Fig. 4. Percentage of drug deaths between 1999 and 2017 predicted by manufacturing decline.

 

 

These maps in Figure 4 show the percentage of drug deaths you were able to predict using your model that factored in manufacturing decline. How were you able to use the data from a decline in manufacturing jobs to predict opioid deaths? What were some of the challenges of trying to put together this predictive model, and what were you able to find in terms of the predictive power of manufacturing decline on opioid-related deaths?  

The findings of my study indicate that up to 92,000 overdose deaths for men and up to 44,000 overdose deaths for women are attributable to the decline in manufacturing jobs between 1999-2017. These total figures represent the percentage of all drug deaths that are predicted by manufacturing employment levels in each state. As you can see in the maps, the share of drug deaths that are predicted by manufacturing decline varies considerably across state contexts, as well as by sex. I derived these figures using data on the overall percentage point decline in manufacturing employment for each state and data from the estimated statistical models. 
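To make the logic behind these attributable-death figures concrete, here is a minimal back-of-the-envelope sketch in Python. It is not the study’s estimation code, and every number in it is hypothetical; it simply shows how a semi-elasticity from a log-linear model can be combined with a state’s observed percentage-point decline in manufacturing employment to back out the share of deaths attributable to that decline.

    import math

    # Hypothetical inputs for a single state (illustrative only, not the paper's data).
    semi_elasticity = 0.047   # ~4.7% fewer overdose deaths per +1 pp of manufacturing employment (men)
    pp_decline = 5.0          # observed decline in the manufacturing employment share, in percentage points
    observed_deaths = 10000   # overdose deaths observed over the study period

    # Under a log-linear model, deaths scale by exp(beta * change in manufacturing share),
    # so the counterfactual (no decline) death count is the observed count deflated accordingly.
    counterfactual_deaths = observed_deaths * math.exp(-semi_elasticity * pp_decline)
    attributable_deaths = observed_deaths - counterfactual_deaths
    attributable_share = attributable_deaths / observed_deaths

    print(f"Attributable deaths: {attributable_deaths:.0f} ({attributable_share:.1%} of observed)")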

The biggest challenge of this project was assembling a dataset that combined data on drug overdose mortality rates with data on manufacturing employment, as well as other social, economic, and policy variables. Assembling this unique dataset allowed me to statistically adjust the models for important alternate explanations other than manufacturing decline that might better explain the rise of drug overdose deaths. To generate mortality rates, I combined data on state-level populations with restricted-use death certificate records from the National Center for Health Statistics at the CDC. For manufacturing employment levels, I worked with data from the Census Bureau’s County Business Patterns program. I then accessed data from various other sources, including the Current Population Survey, the Census Bureau’s Local Area Unemployment Statistics program, and a database on prescription drug policies. Including variables in the model from all of these individual datasets improved the theoretical and methodological rigor of the research.
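For readers who want a sense of what assembling such a panel and fitting such a model might look like in practice, the sketch below uses pandas and statsmodels. The file names, column names, covariates, and specification are all placeholders chosen for illustration; they are assumptions, not the study’s actual data or code.

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    # Hypothetical state-year files (placeholder names and columns).
    deaths = pd.read_csv("overdose_deaths_by_state_year.csv")     # state, year, deaths, population
    manuf = pd.read_csv("manufacturing_share_by_state_year.csv")  # state, year, manuf_share (% of employment)
    controls = pd.read_csv("state_controls_by_year.csv")          # state, year, unemployment, etc.

    panel = deaths.merge(manuf, on=["state", "year"]).merge(controls, on=["state", "year"])
    panel["death_rate"] = panel["deaths"] / panel["population"] * 100000
    panel["log_rate"] = np.log(panel["death_rate"])  # zero-death cells would need special handling

    # Log-linear model with state and year fixed effects; the coefficient on manuf_share is a
    # semi-elasticity (approximate % change in the overdose rate per 1 pp of manufacturing employment).
    model = smf.ols(
        "log_rate ~ manuf_share + unemployment + C(state) + C(year)",
        data=panel,
    ).fit(cov_type="cluster", cov_kwds={"groups": panel["state"]})

    print(model.params["manuf_share"])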

What racial and gender differences did you find in your model?

Much of the previous literature on the opioid and drug epidemic has focused on middle-aged white males because they initially had the highest levels of drug and alcohol usage in the 2000s in comparison to other race and sex groups. In my research, I sought to examine whether the effect of manufacturing decline on drug overdose deaths was generalizable to other population subgroups. Generally, the effect remains the largest for middle-aged white males between the ages of 45-54, but the effect is also large for adult white males of other ages, as well as for adult white females of all ages. For Black males and females, the effect is generally not statistically significant, but there are important exceptions: manufacturing decline was associated with drug overdose deaths for Black females ages 45-54 and Black males ages 35-44 and 55-64. These findings go against the widespread, but unfounded notion that manufacturing decline has primarily impacted white male workers. In fact, as evidenced by William Julius Wilson’s research, Black workers experienced substantial losses in manufacturing employment over the course of the final two decades of the 20th century.

What are some of the implications of your research for policymakers and institutions? 

This paper speaks to a growing literature that finds a relationship between social conditions and the rise of the opioid and drug epidemic. The implications of the results – that higher manufacturing employment is associated with lower rates of drug overdose deaths – signal the importance of policy interventions that aim to reduce the persistent economic precarity experienced by individuals and communities, especially the economic strain placed upon the middle class. We live in a world where major growth in the U.S. manufacturing industry is unlikely, so improving jobs in the service sector should be the focus. Improvements in wages, benefits, and job stability in the low-wage service sector might decrease economic uncertainty and therefore provide a pathway toward reducing drug and opioid overdose mortality.

 

 

Article

A Visual Interview with Eric Stanley on “Atmospheres of Violence”

Atmospheres of Violence Book Cover
Professor Eric Stanley

How should we understand violence against trans/queer people in relation to the promise of modern democracies? In their new book, Atmospheres of Violence: Structuring Antagonisms and the Trans/Queer Ungovernable (Duke 2021), Eric A. Stanley, Associate Professor in the Department of Gender and Women’s Studies, argues that anti-trans/queer violence is foundational to, and not an aberration of, western modernity.

Their other projects have included the anthology Trap Door: Trans Cultural Production and the Politics of Visibility, co-edited with Tourmaline and Johanna Burton; and the films Criminal Queers (2019) and Homotopia (2008), in collaboration with Chris Vargas.

For this visual interview, Julia Sizek, Matrix Content Curator and a PhD candidate in the UC Berkeley Department of Anthropology, asked Professor Stanley about their research, drawing upon images and videos referenced in the book.

 

Your book begins at the site of the death of Marsha P. Johnson, a pioneering transgender activist, and trans/queer death is generally the subject of the book. In what ways has death become central to understanding both LGBT history and trans/queer people today? 

Marsha P. Johnson
Marsha P. Johnson pickets Bellevue Hospital to protest treatment of street people and gays, ca. 1968–75. Photo by Diana Davies, Manuscripts and Archives Division, New York Public Library

The book does dwell in the space of death, and the first pages include a note on “reading with care” so people will be aware of its content. However, my attention to the work of violence is not because I believe it to be the limit of trans/queerness but because, under the order of the settler colonial state, harm is any and everywhere. What this means is that we must work to understand the various ways violence delineates trans/queerness if we want to end it. To this end, I investigate how racialized anti-trans/queer violence is foundational to and not an aberration of the social world.

That said, rather than simply argue that we are “against violence,” I reposition the demand by way of a question: what constitutes the time of violence for those living in the crucible of total war? In other words, saying that we want to respond to specific instances of violence is not enough if we have not rendered unworkable the structures that do not simply allow it, but mandate its continuance.

This is one of the many lessons that I continue to learn from theorists like Marsha P. Johnson. She was a marginally housed radical organizer whose Black trans politics were fashioned from living in and against the anti-blackness of a transmisogynist world. Her death, which was deemed a suicide by the NYPD, remains under speculation by her friends, who believe it was perhaps a violent trick or even a police officer who murdered her. While the case of her death has become a focus for organizing, Marsha’s commitments — her life in struggle — instruct us to organize against the conditions that stole her from the world.

While direct attacks against trans/queer people are one focus of the book, I also theorize that the state perpetuates violence against trans/queer people through paradigmatic neglect. We can look at trans/queer houselessness, incarceration, and the ongoing HIV/AIDS pandemic to see the ways inaction is, perhaps counterintuitively, an active process. It is, I believe, in these spaces of seeming contradiction where power becomes most visible.

In this video, Sylvia Rivera, a contemporary of Marsha P. Johnson, is met with resistance by the crowd when she takes the stage at the 1973 Christopher Street Liberation Day celebration. Today, she is considered to be a trans icon. What does Rivera’s acceptance today reveal about how we consider LGBT history?

This video depicts transgender activist Sylvia Rivera’s monologue at a demonstration in 1973. 

The introduction of my book, River of Sorrow, attempts to think about this antagonism. The amazing documentation of Sylvia fearlessly climbing the stage at this celebration gathers up so much of what the book theorizes. Sylvia was a Puerto Rican trans organizer, sex worker, anti-imperialist, and one of Marsha’s closest friends. She was not given space to speak because cis lesbians and gays diagnosed her, and all trans women, as perpetrators of a misogynist culture by way of their identities. The transmisogyny of the event organizers who attempted to force her physically and ideologically off the stage tragically still lives in the ongoing harassment of trans people in general, and trans women specifically, by Trans Exclusionary Radical Feminists (TERFs). Not unlike anti-trans “feminists” of 1973, today we see trans people attacked much more than the patriarchal order they blame us for reproducing. Luckily, Sylvia was able to eventually take the microphone that day, and as you can see, she then delivered a devastatingly beautiful speech about the importance of not leaving behind those hidden by calls for “gay respectability,” namely trans/queer people of color in jails and shelters, and other “street queens” like her and Marsha.

The mainstream LGBT movement that Sylvia declared war against continues its legacy of assimilation in our current moment. Yet what is different, and perhaps even more dangerous, is that it now primarily terrorizes through incorporation. What this means is that, rather than working through exclusion and exile as it did in 1973, we now see the inclusion of those historically forced out not toward the end of reorganizing normative power, but to maintain it. The goal of inclusion is not to challenge the political order, as we are often told, but to extinguish radical critique and our dreams of freedom.

This dispossession through incorporation was again clarified after I finished the book and I noticed that the “all power to the people” photo of Marsha was being sold on a shirt at Target during their rainbow-washed June. The brutal irony is that they were selling the image of radical Black anti-capitalist action while underpaying their workers and racially profiling Black people in their stores. They want Marsha’s image, but they don’t want her. It’s this knot that I’m trying to apprehend in the book, so that we might find a way out.

This photograph was taken in 1992 at a political action by ACT UP, in which activists flung ashes of loved ones on George H.W. Bush’s White House lawn and transformed an act of grief into a political act. How does this act combat what you call “necrocapitalism”?

protestors throwing ashes on white house lawn
ACT UP Ashes Action, 1992. © Meg Handler

The ashes action leaves me undone. While political funerals were often organized by ACT UP and many other groups, this one harnessed the brutal eloquence of those forms of protest with the material act of “returning the dead” to the house of their executioner, specifically Bush’s White House. Here, friends, lovers, and families marched with boxes of ashes toward the White House under threat of the swinging clubs of mounted DC police, and then once they arrived at the gates, they tossed the remains onto the green of the lawn.

One of the practices developed by ACT UP was to name governmental inaction as a method of active killing. The disappearance of their loved ones was the unfolding of what Ruth Wilson Gilmore might call “organized abandonment,” instigated by a straight state that understood HIV/AIDS as the wish fulfillment of those already damned to hell. This idea that HIV is the materialization of God’s wrath might circulate less openly today, but the logical structure of this belief — that a virus is the punishment for wrongdoing — maintains the crushing stigma many still endure. 

The desperation in the videos and photos of the action overwhelms. Revenge and mourning meet in the act of exhuming bodies. While the open secret of mass deaths from AIDS-related illnesses was spoken in quiet whispers and hidden under homophobic silence, here ACT UP materialized their loss in the form of ground bones, the remains of trans/queer life, scattered to the winds so that their pain might become all of ours.

Through thinking with this action, along with the murders at the Pulse nightclub in Orlando, Florida, and the longer colonial history of HIV and current practices of blood banking, I develop a theory of necrocapital. Here I work with, and sometimes against, materialist feminists and others who have helped us understand the centrality of reproductive labor. With necrocapital, I’m paying attention to how speculation is not tied exclusively to the category of “life,” and indeed financialization has opened the entirety of the worker, even in death, to increased profits. One of the reasons ACT UP’s direct action is so powerful is that it materializes the symbolics of trans/queer blood — the feared yet valued substance that is, at least under the logic of a phobic social, a vector of death. Here it is returned as a bio-strike, a labor stoppage, and a refusal to privatize our grief.

In this short film produced by the Barnard Center for Research on Women, Miss Major Griffin-Gracy, a trans activist, discusses how her personal activism has taken a new form. She says that, “on a personal level, what I did was change all of my identification back to male” as a way to highlight her transgender identity and “strike back.” How do you read this “striking back,” and what does it show us about the relationship between trans people and the state? 

Major’s irreverence for a world that demands respect but delivers none shows us that what is offered is not all that is available. Through a reading of her words and Tourmaline’s film, I suggest that her ungovernability — her life in refusal — is a pedagogy of Black trans sociality, an escape hatch out of the dreadful pragmatism of the current order. Importantly, as with Major, Marsha, Sylvia, and many others who appear in the text, I’m emphatic that they are theorists of trans life and not simply examples of it. This is necessary if we are to build a trans study that at least attempts to disorganize the organization of cis knowledge production.

Among the ways Major offers us this gift is through the story of her IDs. At one point, she switched her IDs from “male” to “female,” as many trans women do in hopes of decreasing harassment by those who demand papers. But then the short film repositions the narrative of transition, as she “switched them all back” to “male” because she is a transgender woman and she wanted to be known as such. She is clear, and I also underscore this in the book, that she is not making a prescription, but this “personal act” was, as you noted, one of her ways of “striking back.” 

I’m dedicated to charting these otherwise minor acts, moments of rebellion and striking back that might slip past the telling of revolutionary social change. This is important not only because it connects to larger movement histories, but because, as Major makes clear, it’s where the force necessary to continue the struggle is often found. For her, community care and sedition fall into each other and build out an underground of laughter and beautiful negation.

Your book concerns questions of death and violence against trans/queer people and asks readers to confront scenes of death and violence. What were some of the challenges in representing anti-trans/queer violence in this book, and what alternatives do you imagine to trans/queer death today?

“ANOTHER END OF THE WORLD IS POSSIBLE” Notes on a Burning Kmart, Minneapolis uprising, 2020. Photo by Aren Aizura.

This is a central concern of the book and an excellent question. However, throughout the text, I am unable to reconcile the fact that representing violence and allowing it to disappear are both, in different but related ways, among the technologies that ensure harm continues. Instead of assuming I might know the answer, I hold this contradiction with as much love and precision as I can to move through it under the banner of collective liberation. Methodologically, I don’t represent, at least in image, the violence I theorize. I do, however, at times narrate the scenes, as I believe we must work to understand its world-shattering force if we are to stop it. The answer then cannot simply be to look away, although we all must do that at times to preserve enough of us. 

Yet what I believe the project must be, if we want to “end violence,” is the destruction of the racist anti-trans/queer social that has taken so many and continues to threaten the very possibility of anything else. If, rather than an aberration of settler modernity, these woven forms of terror constitute the world, then I ask, with Frantz Fanon, “is another end of the world possible?” I’m not sure. I do know that we must continue to think, which is also to continue to learn that, as Major reminds us, there is abundance here and now. Following the ungovernable, among our tasks is life’s radical redistribution and the abolition of the world as it is. Rather than defeat, we must also know that there is a long and unfolding tradition of trans/queer action that builds a world beyond this one, where we might all feel the safety and joy of ease.

Article

Innovation Matters: Competition Policy for the High-Tech Economy

An interview with Professor Richard Gilbert

What’s wrong with antitrust policy for regulating the tech sector? In his new book, Innovation Matters: Competition Policy for the High-Technology Economy, Richard Gilbert, Distinguished Professor Emeritus of Economics at UC Berkeley, argues that regulators should be considering the effects of mergers and monopolies on innovation, rather than price.

From 1993 to 1995, Gilbert served as Deputy Assistant Attorney General in the Antitrust Division of the U.S. Department of Justice. He also served as Chair of the Berkeley Economics Department from 2002 to 2005, as President of the Industrial Organization Society from 1994 to 1995, and as the non-lawyer representative to the Council of the Antitrust Section of the American Bar Association from 2011 to 2014.

Julia Sizek, Matrix Content Curator, interviewed Professor Gilbert about the arguments in his new book. (Please note that responses have been edited, and links were added for reference.)

Q: As large technology companies have increasingly come under fire for their monopoly-like powers, many have been asking about how antitrust policy needs to change to address this industry. What motivated you to investigate the changing landscape of antitrust policy?

Traditionally, antitrust policy has been about prices, and antitrust officials have focused on stopping mergers that would increase prices or limiting conduct that would cause prices to rise or prevent them from falling. But we know that innovation — new or improved products or production methods — is more important for the economy and consumer welfare than a reduction in prices. We need to change antitrust policy from price-centric to innovation-centric.

Antitrust authorities appreciate the importance of innovation, but until recently they have not had the tools to analyze how mergers or the conduct of dominant firms might suppress innovation. Many antitrust enforcers and academics endorsed views associated with the writings of Joseph Schumpeter in the 1940s. He wrote that progress proceeds through a process of creative destruction, with new technologies replacing old products and methods, and that large firms were often better suited than small firms to create these new technologies. This Schumpeterian perspective suggested a defense for mergers and monopolization, rather than a basis to challenge them. Indeed, the Merger Guidelines published by the Department of Justice and Federal Trade Commission barely mentioned innovation as a merger concern until they were revised in 2010.

More recent economic research challenges the Schumpeterian perspective and shows how the lack of competition can suppress innovation incentives. Having fewer firms engaged in research and development lowers the probability of discovery. A firm that has monopoly power has little incentive to invest in costly R&D if a successful discovery would merely replace the profits it earns from its existing products. It is no surprise that many major discoveries have been made by firms that do not have existing products that would be threatened by the discovery. Electric vehicles, the smartphone, digital photography, ride-hailing services, digital mapping, photolithography, and mRNA vaccines are some examples of innovations that emerged out of non-dominant firms.

So, the motivation for my book was to collect in one place what we now know about the relationship between competition and innovation. That includes the Schumpeterian perspective, but also more recent scholarship that shows how monopoly is a threat to innovation. My objective was to describe the central principles that support an innovation-centric antitrust policy.

Q: As you note in the book, current antitrust policy in the U.S. asks how consumers would suffer if a merger or acquisition were to be completed, and that this harm to consumers is measured through looking at prices of products. What are the limits of using prices to measure competition (or lack thereof)?

A merger that results in a small reduction in the pace of innovation is likely to cause greater consumer harm than if it causes a small increase in price. That is why we need an innovation-centric antitrust policy when mergers or conduct are likely to affect the pace of innovation.

Sometimes we can account for innovation effects by incorporating quality into product prices. That is, we can measure the consumer benefit from an improvement in the quality of a product by an equivalent reduction in its price, or the consumer cost from a reduction in quality by an increase in price. This is straightforward for some products. If Hershey sells a smaller candy bar at the same price, it is equivalent to an increase in the price of the bar. If a new car gets lower gas mileage, it is equivalent to an increase in the price of the car.
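A tiny numerical illustration (with made-up numbers) shows how shrinking a product at a constant sticker price translates into a quality-adjusted price increase:

    # Hypothetical example: a candy bar shrinks from 1.55 oz to 1.40 oz while its price stays at $1.00.
    price = 1.00
    old_size_oz, new_size_oz = 1.55, 1.40

    old_unit_price = price / old_size_oz   # dollars per ounce before the change
    new_unit_price = price / new_size_oz   # dollars per ounce after the change
    implied_increase = new_unit_price / old_unit_price - 1

    print(f"Quality-adjusted price increase: {implied_increase:.1%}")  # about 10.7%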

This quality-adjusted price approach has limitations. It is difficult to apply to complex changes in the dimensions of a product. Moreover, in today’s digital economy, many services are provided without a monetary price. It doesn’t make much sense to ask whether the price could be lower, but instead we should ask whether companies are creating new services that benefit consumers or interfering with the ability of other firms to compete with new services.

Digital platforms such as Facebook and Google complicate the analysis because they provide services to consumers (e.g., social networks and search) without a price while generating revenues from advertising. The services that the platforms offer at a zero price and the advertising services that the platform sells at positive prices are interdependent. However, they raise different issues for antitrust analysis. For example, the Federal Trade Commission has filed an antitrust complaint related to Facebook’s acquisitions, including Instagram and WhatsApp. The complaint alleges that Facebook maintained its personal social networking monopoly by systematically tracking potential rivals and acquiring companies that it viewed as serious competitive threats.

A price-centric analysis might be appropriate for the advertising service, but an innovation-centric analysis is more appropriate for the effects of such acquisitions on the quality of Facebook’s social networking services.

Q: Your book offers innovation as a metric to understand antitrust policy. What is innovation, and how does one measure it?

Innovation is a new or improved product or process that differs significantly from previous products or processes. Innovation is more than invention, which is the act of discovering a new product or process, because innovation requires that an invention be put into active use or be made available for use by others.

Innovations can be measured in different ways. This can include direct measures, such as a technical or economic assessment of the value of the innovation. For pharmaceuticals, a new drug application that is approved by the Food and Drug Administration is a measure of innovation, although drugs differ greatly in their therapeutic and economic value. Indirect measures of innovation include the number of patents that cover the innovation. Because patents can differ greatly in significance, economic studies often use citation counts to determine the significance of the patents. Patent counts are generally better indicators of the value of innovations when they are adjusted by citations to measure quality, but there is still a gap between citation-weighted patent counts and the value of innovations. The gap depends on the industry. For example, patent counts tend to be aligned with the values of pharmaceutical and chemical innovations. However, in other industries, patents provide a measure of protection from competition that is not necessarily related to the value of the innovation that is disclosed by the patent. This disconnect is particularly problematic for industries in which many patents cover the same product, such as electronics, software, and communications technologies. In that case, a patented technology can represent a small fraction of the value of a product, yet the patent owner might be able to demand a high royalty because the product cannot be produced without the right to use the patent.
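As a sketch of how such indirect measures are often constructed in empirical work (a common convention, not a description of any particular study’s code), citation-weighted patent counts can be built by aggregating patent-level forward citations to the firm-year level:

    import pandas as pd

    # Hypothetical patent-level data: one row per granted patent, with forward-citation counts.
    patents = pd.DataFrame({
        "firm":      ["A", "A", "A", "B", "B"],
        "year":      [2015, 2015, 2016, 2015, 2016],
        "citations": [12, 0, 3, 45, 1],
    })

    by_firm_year = patents.groupby(["firm", "year"]).agg(
        raw_count=("citations", "size"),
        # One common convention weights each patent by (1 + forward citations).
        citation_weighted=("citations", lambda c: (1 + c).sum()),
    )
    print(by_firm_year)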

Economic studies of competition and innovation often use research and development (R&D) expenditure to measure innovative effort. R&D, however, is an input to the activity of innovation. It does not measure the output of innovation. R&D expenditures can increase with no effect on the output of innovation, or R&D can become more efficient and decrease with the same or greater output of innovation. Nonetheless, because R&D expenditures are often more accessible than measures of actual innovation, many empirical studies have used R&D expenditure as an indirect measure of innovation.

Q: In the book, you note that the number of complaints about innovation loss increased from the 1990s through the 2010s. What do you think accounted for the new focus on innovation, rather than other kinds of complaints? (In other words, how did innovation emerge as a means of thinking about anti-trust law?)

Courts generally follow economic developments in their evaluations of antitrust law, but usually with a substantial lag. Economic analysis plays a central role in almost every merger case, but economic analysis was almost always absent in merger evaluations that took place before 1980. Economic analysis became important in merger analysis after courts recognized that economics has something to say about whether a merger is likely to result in a “substantial lessening of competition,” which is the standard for review under the antitrust laws.

Economics did not have much to say about the relationship between competition and innovation until the latter part of the 20th century. As I mentioned, the prevailing sentiment was a Schumpeterian view that some monopoly power is conducive to innovation. When innovation appeared in antitrust cases, it was mostly as a defense to otherwise anticompetitive conduct. Indeed, in the monopolization case against Microsoft brought by the Department of Justice and several states, the Federal Court of Appeals quoted Schumpeter in the introduction to its opinion.

Innovation became more of a concern for antitrust enforcement by the Department of Justice and the Federal Trade Commission in the 1990s. This coincided, perhaps incidentally, with the publication by the agencies of the Antitrust Guidelines for the Licensing of Intellectual Property. (I led the effort that resulted in these guidelines when I was Deputy Assistant Attorney General at the Department of Justice.) The Guidelines brought innovation to the forefront.

The DOJ and FTC publish and update guidelines that describe their enforcement intentions for mergers. The first edition was published in 1968. Neither the first edition nor many subsequent editions mentioned innovation as a competitive concern until the guidelines were revised in 2010. (UC Berkeley professors Carl Shapiro and Joe Farrell led the 2010 effort to revise the guidelines.) The 2010 guidelines describe several ways in which mergers might suppress innovation. This discussion paralleled economic developments beginning in the late 20th century that showed why mergers and monopoly power can harm incentives to innovate.

Q: Large technology companies like Alphabet (Google) and Meta (Facebook) are known for acquiring companies in their start-up phases, and this has become widely accepted for small companies in the technology sector. How do you think this model has shifted possibilities for innovation in technology, and how might regulators change their approach to regulating these acquisitions?

Google and Facebook (and Amazon, Apple, and Microsoft) have acquired hundreds of start-ups. Few of these acquisitions were even reviewed by the antitrust agencies, and none was blocked. The reasons for the lack of enforcement are complex. The companies operate in fast-moving technologies, so it is often difficult to know whether a start-up represented a competitive threat to the acquiring firm.

The US and European authorities reviewed, but did not challenge, Facebook’s acquisition of WhatsApp and Instagram. The European Commission noted that WhatsApp and Facebook were but two of many messaging services, and that WhatsApp did not compete with Facebook for online advertising. Both agencies should have paid greater attention to the possibility that WhatsApp could have become a rival social network, much as the multi-purpose messaging service WeChat has done in China (albeit censored by the authorities). Indeed, the $19 billion that Facebook paid for the app, despite little usage at the time in the US, should have been an indicator of its potential as an industry disruptor.

Some acquisitions escape review by the antitrust agencies because they fall below the required reporting thresholds. Many of these acquisitions are “acqui-hires”: purchases of groups of talented individuals that bear little resemblance to the corporate acquisitions that are the usual targets of antitrust enforcement.

In my opinion, the most significant reason why antitrust enforcers have not been able to restrain the growth of the dominant digital platforms through acquisition is their inability to deal with potential competition. The antitrust agencies are quick to challenge a merger of X and Y when both have large shares of a concentrated market. But what about a dominant company X that acquires a startup Y that has no product, but might develop a product that competes with X? Y is not an actual competitor of X; Y is a potential competitor. Antitrust legal precedents impose a high bar to challenge an acquisition that eliminates a potential competitor.

Congress is currently considering several proposed bills that would strengthen antitrust enforcement, particularly for dominant platforms. While some of these bills are not, in my opinion, a step in the right direction, those that make it easier to challenge acquisitions of potential competitors could, if properly crafted, be a positive change to antitrust enforcement.

Q: How do these approaches to innovation need to change in the context of platform markets, like Google Shopping or Amazon? How do platform economies change how we should think about antitrust issues?

Platforms are challenging for antitrust enforcement. First, for platforms such as Google or Facebook, one side is supplied without a monetary price, although consumers “pay” by supplying valuable data. Second, many platforms have powerful network effects and scale economies from the accumulation of data. Network effects imply that users of the platforms benefit from the participation of other users. Scale economies imply that rivals would have to incur large and irreversible costs to duplicate the value that the platforms obtain from their data. The presence of network effects and scale economies implies large barriers to entry for new platform competitors. For both of these reasons, new competitors can’t gain a toehold in the usual way by providing the same service at a lower price. They have to compete with a differentiated product. Third, there are competitive interactions between the “free” and paid sides of the platforms. Platforms have incentives to maintain service quality on the free side if it is useful to attract paying advertisers, although such incentives are limited. Fourth, innovation concerns are particularly relevant for many platforms because the pace of technology development is rapid and some platform services are provided without charge, which makes a price-centric analysis less useful.

Of course, antitrust is relevant for platforms, and the challenges they present are not entirely new. But enforcement has to be mindful of platforms’ unique characteristics. Designing workable remedies for antitrust abuses is challenging for platforms. Consumers do not benefit from breaking up a platform if network effects imply that only one firm will survive. And behavioral remedies can be difficult to enforce or may have little effect. The Google search remedy imposed by the European Commission is still being criticized as too weak, years after it was first implemented. The European Commission’s requirement to offer choice screens for default browsers and search engines (i.e., screens that allow users to choose their preferred search engine) has not had a significant effect on utilization. An alternative approach to remedies might involve a regulator that supervises conduct by the platforms or that can impose fines large enough to affect their behavior.

Q: What is the future of antitrust policy in the United States, especially now, when prominent antitrust lawyers Lina Khan and Jonathan Kanter have been confirmed as Chair of the Federal Trade Commission and Assistant Attorney General, respectively?

Interesting question. Lina Khan is a self-professed member of the New Brandeis (NB) movement. The NBs believe that monopoly is a corrosive force for the economy and an obstacle for social justice. They want to break up monopolies without having to demonstrate a pattern of abusive behavior. This will be a tough sell in the courts. Established precedent requires a finding of anticompetitive conduct for a finding of unlawful monopolization under the Sherman Act.

Nonetheless, as Chair of the FTC, Khan might be able to make some significant changes in antitrust enforcement. The FTC Act empowers the Commission to challenge “unfair” competition. Courts have ruled that the standard for unfair competition is the same as the standard for violation of the Sherman Act. But the Commission might have wiggle room to bring cases that would be difficult to prove under the Sherman Act. That would be an important development. Furthermore, the FTC has an administrative structure that gives it enforcement leverage that is absent at the Department of Justice. Specifically, the FTC can send cases to an administrative law judge (ALJ) before they go to a traditional court of law. The ALJ process takes time, and some defendants are willing to make concessions to avoid the extra delays.

I don’t expect to see the same movement of the antitrust needle at the DOJ, because the DOJ can’t avoid or delay judgments in the courts. Both agencies can be tougher on merger cases. There is some evidence that this is happening, and I expect it to continue. (But again, they have to deal with the courts if merging parties contest a challenge.) The DOJ also can have an impact through a process called the business review letter, where it can state an intention not to challenge a practice. For example, Democrats tend to be softer on enforcement of intellectual property rights, and the DOJ can signal this intent through a business review letter.

 

Podcast

Individual Trauma, Social Outcomes: A Matrix Podcast Interview with Biz Herman

Biz Herman

In this episode of the Matrix Podcast, Julia Sizek, PhD Candidate in Anthropology at UC Berkeley, interviews Biz Herman, a PhD candidate in the UC Berkeley Department of Political Science, a Visiting Scholar at The New School for Social Research’s Trauma and Global Mental Health Lab, and a Predoctoral Research Fellow with the Human Trafficking Vulnerability Lab. Herman’s dissertation, Individual Trauma, Collective Security: The Consequences of Conflict and Forced Migration on Social Stability, investigates the psychological effects of living through conflict and forced displacement, and how these individual traumas shape social life. 

Herman’s research has been supported by the Fulbright U.S. Student Program, the University of California Institute on Global Conflict & Cooperation (IGCC) Dissertation Fellowship, the Simpson Memorial Research Fellowship in International & Comparative Studies, the Malini Chowdhury Fellowship on Bangladesh Studies, and the Georg Eckert Institute Research Fellowship. Along with collaborators Justine M. Davis & Cecilia H. Mo, she received the IGCC Academic Conference Grant to convene the inaugural Human Security, Violence, and Trauma Conference in May 2021. This multidisciplinary meeting brought together over 170 policymakers, practitioners, and researchers from political science, behavioral economics, psychology, and public health for a two-day seminar on the implications of conflict and forced migration. She has served as an Innovation Fellow at Beyond Conflict’s Innovation Lab, which applies research findings from cognitive and behavioral science to the study of social conflict and belief formation.

In addition to her academic work, Biz is an Emmy-nominated photojournalist and a regular contributor to The New York Times. In 2019, she pitched and co-photographed The Women of the 116th Congress, which included portraits of 130 out of 131 women members of Congress, shot in the style of historical portrait paintings. The story ran as a special section featuring 27 different covers, and was subsequently published as a book, with a foreword by Roxane Gay.

The Matrix Podcast interview focuses primarily on Herman’s research on mental health and social stability at the Za’atari Refugee Camp in Jordan, as well as her broader research on the psychological implications of living through trauma and the impacts of individual trauma on community coherence.

The research in the Za’atri Refugee Camp, Herman explains, was part of a project developed by Mike Niconchuk, Program Director for Trauma & Violent Conflict at Beyond Conflict, who created a psycho-educational intervention called the Field Guide for Barefoot Psychology. “The goal of The Field Guide is to provide peer-to-peer mental health and psychosocial support and education,” Herman explains. “It’s a low-cost intervention, and it can be scaled. The idea was that in Za’atari Camp, where mental health care is very stigmatized, there are a lot of barriers to entry. And there are a lot of needs — physical security needs and community needs — and mental health is often de-prioritized. [The Field Guide provides] one way to address the lingering psychological implications of living through conflict and forced migration in a way that is accessible, and that can be provided without attracting attention or producing any kind of stigma, and that’s really connected to the context.”

The Field Guide uses narrative storytelling and scientific education, paired with self-care exercises, Herman explains. “Each chapter starts with a narrative of a brother and sister and their lives in Syria before conflict, during conflict, during migration, and in resettlement,” she says. “Through the story, different themes and ideas and issues come up, with different physiological and psychological responses. As these different responses come up, the next part of the chapter talks about the science behind that in a way that allows for some psychoeducation on what’s happening, but allows people to engage with it through someone else’s story.”

Listen to the interview below, or on Apple Podcasts or Google Podcasts.

 

 

 

Article

Online Extremism and Political Advertising: A Visual Interview With Laura Jakli

Laura Jakli

How can we track online extremism through political advertisements? Using data from online advertising, Laura Jakli, a 2020 PhD graduate from UC Berkeley’s Department of Political Science, studies political extremism, destigmatization, and radicalization, focusing on the role of popularity cues in online media. She is currently working on her book project, Engineering Extremism.

She is currently a Junior Fellow at the Harvard Society of Fellows. Starting in 2023, she will be an Assistant Professor at Harvard Business School’s Business, Government and the International Economy (BGIE) unit. From 2018 to 2020, she was a predoctoral research fellow at Stanford University’s Center on Democracy, Development and the Rule of Law, and at the Program on Democracy and the Internet.

Social Science Matrix content curator Julia Sizek interviewed Jakli about her work, with questions based on political advertisements and graphics from Jakli’s research.

Your research uses the Facebook Ad Library to understand far-right political parties. What insights do advertisements provide for understanding far-right parties? 

Since 2018, the Facebook Ad Library (also known as the Ad Archive) has publicly documented the political advertisements hosted on the platform, as well as some limited metadata for each ad (for example, the name of the ad buyer, the number of ad impressions, total ad expenditure, geographic target, and audience gender and age demographics). Initially, the Ad Library exclusively featured ads run in the United States, but it expanded to dozens of other countries within a year. Since I study European politics, this expansion of the Ad Library opened up a new way to explore party messaging at scale.
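
To make the data source concrete, the sketch below shows how ads and their metadata might be pulled from the Ad Library API. This is a minimal illustration rather than Jakli’s pipeline: the endpoint version, parameters, and field names are assumptions based on Meta’s public documentation and may have changed, and the access token and page ID are placeholders.

```python
# A minimal sketch, not the author's pipeline, of querying the Facebook Ad
# Library API with requests. The endpoint version, parameters, and field names
# are assumptions based on Meta's public documentation and may have changed;
# the access token and page ID are placeholders.
import requests

AD_ARCHIVE_URL = "https://graph.facebook.com/v18.0/ads_archive"  # assumed version

params = {
    "access_token": "YOUR_ACCESS_TOKEN",      # placeholder
    "ad_type": "POLITICAL_AND_ISSUE_ADS",
    "ad_reached_countries": '["DE"]',         # e.g., ads delivered in Germany
    "search_page_ids": "PARTY_PAGE_ID",       # placeholder party page ID
    "fields": ",".join([
        "ad_creative_bodies",        # ad text
        "page_name",                 # ad buyer's page
        "ad_delivery_start_time",
        "impressions",               # reported as a range
        "spend",                     # reported as a range
        "demographic_distribution",  # audience shares by age and gender
        "region_distribution",       # audience shares by region
    ]),
    "limit": 100,
}

response = requests.get(AD_ARCHIVE_URL, params=params, timeout=30)
response.raise_for_status()
ads = response.json().get("data", [])
print(f"Retrieved {len(ads)} ads")
```

The returned impressions and spend figures are ranges rather than exact counts, consistent with the limited metadata described above.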

Much of my research considers the gap between the publicly stated and privately held beliefs and preferences of far-right voters (and party elites themselves). In line with this, I was interested in examining party ads because the far right may be incentivized to present a more mainstream right-wing ideological profile in formal documents and in mass media campaigns to appeal to a broad audience. Meanwhile, when the far right is targeting a narrow, custom audience through online media, the party may use more extreme campaign content. This is because, with digital micro-campaigns, they do not have the same political incentive to appeal to the masses or signal ideological common ground with center-right parties.

With my current political ads research, the objective is to better understand far-right party strategy and political positions. The main advantage of ads in this regard is that most parties field hundreds of unique online ads in the months leading up to an election. The sheer volume of political ad text available means that it is quite feasible to construct reliable ideological profiles for small parties, and to draw valid inferences about party strategy. Moreover, since online ads are time-stamped and geographically targeted, they can be used to trace how positions change over time, both sub- and cross-nationally.

How do political ads work on Facebook? Who buys them, and how are political ad purchases split between groups? In other words, who is posting these ads, and how do they find their audiences? 

Many party ads are purchased by the national party itself, meaning that they are sponsored by the party’s main Facebook page, even if the ad content is focused on a specific regional or candidate campaign. But it can be a more decentralized process, and each political party can choose to run its political campaign through a combination of national and local advertising. In some European countries, I see party candidates and local party organizations paying for and running their own ads. 

Facebook allows advertisers to target not just by age, gender, and geographic location, but also by political interests and hobbies. Email lists gathered through rallies, fundraisers, and other events can be used to target customized political audiences. Moreover, these inputs can be used to find “Lookalike Audiences” that share interests, traits, and demographics with the established email list. These advertising parameters allow campaigns to target political ads quite narrowly and precisely.

One weakness of the Ad Archive is that it doesn’t actually reveal how the campaigns found their audiences. All you have available as metadata is basic demographic information, including a breakdown of the audience by gender, age, and geographic location. You can make some inferences about whom parties targeted based on this information, but the ad algorithm may also be impacting that audience.

For example, you can’t distinguish between cases where the party directed ads to be delivered to men aged 18-24 and cases where the ad algorithm picked up on the fact that men in that age range interacted with the ad at higher rates, and therefore “learned” to deliver more ads to this segment over time. In other words, the audience is curated both by what ad buyers specify as their parameters (e.g., targeting particular demographics) and by the algorithm independently determining who would be an efficient target for the ad.

This advertisement (Figure 1) from Vlaams Belang, a far-right party in Belgium, is fascinating because of the way that it is designed to track viewer reactions. How are advertisements on social media different from ordinary advertisements, and are you able to track how people interact with these advertisements?

sample facebook ad
Figure 1: Translation: “They have gone completely mad and want to actively participate in the return of ISIS terrorists! Vlaams Belang resolutely says NO. We must protect our people from these time bombs. We must take their nationality and try them in the countries where they committed their crimes. What do you think? Return possible for terrorists? [Indicate Yes (with a smiley) or No (with a like).]”

The ability to rapidly field and test the performance of different political ads is one aspect of online advertising that distinguishes it from older forms of campaigning. Parties don’t have to commit to one message or thematic policy focus through a campaign season. This flexible, feedback-based approach is precisely demonstrated in this ad from Vlaams Belang. It asks ad viewers to signal using the laugh emoji if they agree with the return of foreign terrorist fighters (known as “returnees”) and “like” if they disagree with the policy. Presumably, the idea is to quickly and cheaply test how salient this issue is for potential voters.

Researchers are not easily able to track how people interact with these advertisements unless the advertisement links to a post on a public Facebook page. But in the case of Vlaams Belang and most parties that do these quick polls through ads, the poll takes you to a party webpage so they can get more information about their audience (and possibly elicit donations). One other way to get a sense of how people interact is simply through the number of impressions the ad gets. Impressions count the total number of times the ad is displayed on viewers’ screens. This is broadly informative, but doesn’t mean that audiences are actually clicking on the ad or interacting with its content in any way, so the inferences researchers can draw are quite limited.

One of the benefits of online advertisements, in contrast to traditional advertising, is the ability to target certain groups. This example (Figure 2) shows an ad that targeted audiences specifically in Austria. How did you find that targeted advertising worked for far-right groups, and how did advertisements differ at the local and national scales?

Facebook ad
Figure 2: Translation: “There is a huge boiling point at Europe’s borders, because masses of illegal migrants want to return to certain European target countries, including our Austria. While patriotic politicians like Matteo Salvini are doing everything possible to stop illegal migration, completely different signals are coming from Berlin. Angela Merkel even wants to have the refugees picked up from Africa….”

Broadly speaking, the demographic metadata suggests that the far right has a much higher ratio of male ad audiences than do other parties, which makes sense, given the male skew of their voter base. But there is such limited metadata provided by the Facebook Ad Library that I have not been able to establish any other notable demographic trends. I am currently working on understanding the geospatial trends of far-right advertising but cannot say anything definitive yet.

I will say that the more localized advertisements, typically fielded by regional party organizations or local candidates, differ substantially in content from national ads. The more localized campaign material is crafted to resonate with local news events and community issues. Far-right political ads that target a narrow geography appeal to voters less on abstract political platforms or ideological principles and more on tangible and immediate localized concerns. In effect, this represents a shift to digital “home style” politics, by which the far right frames its platforms such that constituents of each district are led to believe party representatives are “one of them” and have their immediate interests in mind when crafting policy.

In my qualitative analysis, I found that regional far-right party branches often stylize themselves as accessible, populist, and anti-political, presenting their party as concerned with what is “happening on the ground” and what the “people” really want. Relatedly, these online campaigns are crafted and fielded rapidly, in a manner that is less professional, less polished, and more casual than offline campaigns. Knife crime is one example of a localized thematic focus common in far-right ads (see Figure 3).

Sample Facebook ad
Figure 3: Translation: “Migrants at the forefront of knife crimes. ‘Dangerous people have no place in the middle of our liberal society and therefore have to be deported.’ Those were the words of #CDU Interior Minister Roland Wöller after the horrific knife murder in Dresden in October 2020 by an Islamist Syrian. This should give the impression that the CDU-led government is finally taking action against serious criminal foreigners….”

Working from a large dataset of far-right political ads, you translated the advertisements into English, and then used the NRC Word-Emotion Association Lexicon to identify how the ads evoke emotions like fear, disgust, and anger. These images (Figure 4) show word clouds based on advertisements from the German AfD (Alternative for Germany) party. What do these word clouds show?

Disgust and anger word clouds
Figure 4: Disgust and anger word clouds for Alternative for Germany ads, using the NRC Word-Emotion Association Lexicon (aka EmoLex).

First, I want to note that the share of negative emotive ad content is typically much higher in far-right ads than in the ads of other parties. Their negative ad campaigns focus on — and often exaggerate — social and economic problems, while identifying other people, parties, and institutions as responsible for them. Consistent with much of the literature, I also found that the far right is associated with specific emotive appeals, most prominently with fear and disgust, but also with a higher share of anger emotion words, on average.

The disgust word cloud visualizes the terms in the AfD’s ads that the NRC Word-Emotion Association Lexicon tags as disgust-associated; the size of each term reflects its relative frequency across the ads. The anger word cloud visualizes the same, but for anger-associated terms. These figures show that illegality, criminality, and violence are some of the most prevalent disgust-associated themes found in German far-right ads. There is quite a bit of overlap here with the most frequently found anger-associated words. Themes of criminality, violence, and terror attacks are frequently discussed by the AfD, presumably with the intent of evoking anger toward the political status quo.
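
As a concrete illustration of the tagging-and-counting step behind these word clouds, here is a minimal Python sketch. It is not the author’s code: it assumes the standard tab-separated EmoLex file is available locally and that the ads have already been translated into English; the file name and ad texts below are placeholders.

```python
# A minimal sketch, not the author's code, of tagging ad text with NRC EmoLex
# and rendering an emotion-specific word cloud. Assumes the lexicon is the
# standard tab-separated file ("word<TAB>emotion<TAB>0/1") saved locally.
import re
from collections import Counter

from wordcloud import WordCloud  # pip install wordcloud

LEXICON_PATH = "NRC-Emotion-Lexicon-Wordlevel-v0.92.txt"  # assumed file name


def load_emolex(path, emotion):
    """Return the set of words that EmoLex flags for a given emotion."""
    words = set()
    with open(path, encoding="utf-8") as f:
        for line in f:
            parts = line.rstrip("\n").split("\t")
            if len(parts) == 3 and parts[1] == emotion and parts[2] == "1":
                words.add(parts[0])
    return words


def emotion_frequencies(ads, emotion_words):
    """Count how often emotion-associated terms appear across the ad texts."""
    counts = Counter()
    for ad in ads:
        for token in re.findall(r"[a-z']+", ad.lower()):
            if token in emotion_words:
                counts[token] += 1
    return counts


# Placeholder (invented) ad texts standing in for translated far-right ads.
ads = [
    "Illegal migration and violent crime threaten our neighborhoods",
    "Another terror attack while the government looks away",
]

disgust_words = load_emolex(LEXICON_PATH, "disgust")
freqs = emotion_frequencies(ads, disgust_words)

if freqs:  # WordCloud needs at least one tagged word
    cloud = WordCloud(background_color="white").generate_from_frequencies(freqs)
    cloud.to_file("disgust_wordcloud.png")
```

Swapping "disgust" for "anger" in the call to load_emolex produces the anger cloud from the same ads.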

One of your findings is that far-right groups in Europe tend to claim ownership over the topic of immigration, as is reflected in this advertisement (Figure 5). How did you measure the focus on immigration among far-right parties in comparison to their more moderate counterparts? 

sample facebook ad
Figure 5: Translation: “Swept under the rug: the huge refugee costs. The AfD has been talking about it for a long time, but the other parties and the associations and companies of the so-called ‘asylum industry’ that benefit from them consistently avoid talking about this topic….”

I use a method called structural topic modeling to determine whether the far right maintains issue ownership on immigration. In topic modeling, each document (in this case, a party’s ad corpus) is modeled as a mixture of multiple topics, and topical prevalence measures how much each topic contributes to a document. Put simply, I use metadata on which party fielded each ad text to examine differences in topical prevalence across the ad texts, and sort topical prevalence by party family. I then estimate the mean difference in topic proportions between far-right parties and all other parties to determine which topics are more prevalent in far-right ads.
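
For readers curious about the mechanics, the prevalence comparison can be approximated in a few lines of Python. This is a simplified stand-in rather than the author’s pipeline: structural topic modeling (commonly implemented in the R stm package) builds covariates such as party family directly into estimation, whereas the sketch below fits a plain LDA model with gensim and compares mean topic proportions post hoc; the toy ads and party labels are invented placeholders.

```python
# A simplified stand-in, not the author's pipeline: plain LDA via gensim plus a
# post hoc comparison of mean topic proportions by party family. Structural
# topic modeling would instead model the party covariate directly.
import numpy as np
from gensim import corpora, models

# (ad text, fielded by a far-right party?) -- placeholder data.
ads = [
    ("stop illegal migration protect our borders now", True),
    ("crime and terror threaten the safety of our women", True),
    ("invest in schools housing and affordable childcare", False),
    ("green jobs climate protection and fair wages", False),
]

docs = [text.split() for text, _ in ads]
dictionary = corpora.Dictionary(docs)
corpus = [dictionary.doc2bow(doc) for doc in docs]

lda = models.LdaModel(corpus, num_topics=2, id2word=dictionary,
                      passes=20, random_state=0)

# Document-topic proportions as a dense (n_docs x n_topics) matrix.
theta = np.array([
    [prob for _, prob in lda.get_document_topics(bow, minimum_probability=0.0)]
    for bow in corpus
])

far_right = np.array([flag for _, flag in ads])
# Mean topic proportion among far-right ads minus the mean among all other ads.
gap = theta[far_right].mean(axis=0) - theta[~far_right].mean(axis=0)
for k, d in enumerate(gap):
    print(f"topic {k}: far-right prevalence gap = {d:+.2f}")
```

A positive gap for a given topic indicates that it is more prevalent in far-right ads than in the ads of other parties.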

I use this to gauge whether there is disproportionate emphasis on immigration in far-right campaign ads, or whether immigration topics are prevalent across different types of parties. In a large majority of sampled EU countries, I found a disproportionate emphasis on immigration issues on the far right, which is consistent with issue ownership. There are three notable patterns in how the far right discusses the immigration issue across Europe. First, many parties specifically emphasize Muslim migration and frame Islam as a unique threat to national values and cultural identity. Second, immigration is often tied to criminality as well as to issues of women’s safety. Third, it is linked to general Euroscepticism and the EU’s multiculturalism.

While your analysis focused on the text of far-right political advertisements, the images would seem to be an essential part of ads’ effectiveness, as we can see in this image (Figure 6). What do you think are the limits of a text-based analysis, and what are avenues for investigating visual complements to your text-based research? 

facebook ad
Figure 6: Translation: “A sobering word for Sunday: The persecution of Christians in many countries around the world is increasing. But the Christian churches in Germany have paid too little attention to it for years. They prefer to curse the AfD, although the protection of Christians abroad is an important issue for this party….”

It is definitely an important limitation. Many ads also have videos embedded, not just images. By limiting the current study to text analysis, I may miss the fundamental features that lead viewers to interact with the ad, click on related content, or mobilize for the party.

More broadly, there seems to be a trend in recent years of decreasing emphasis on text and increasing emphasis on visuals and videos in political ads. These trends mirror other social media trends (e.g., the rise of TikTok and YouTube). I think the political parties that acknowledge this trend and craft their online ads accordingly have a leg up over those that do not.

Based on a small qualitative assessment of these ad visuals, what I can say is that the inflammatory, emotive content I try to capture through text comes through much more explicitly in images and video. My sense is that the visuals associated with far-right ads are quite striking and substantively different from the ad visuals of other parties, although I have not tried to quantify these differences systematically. As our tools for image and video analysis improve in social science, I hope to study these features more rigorously.