
Q&A with David Robinson, Visiting Scholar at Social Science Matrix


Social Science Matrix is honored to welcome David Robinson as a Visiting Scholar for the 2021-2022 academic year.

A distinguished researcher working at the intersection of law, policy, and technology, David studies the design and management of algorithmic decision-making, particularly in the public sector. He served as a managing director and cofounder of Upturn, a Washington DC-based public interest organization that promotes equity and justice in the design, governance, and use of digital technology. Upturn’s research and advocacy combine technical fluency and creative policy thinking to confront patterns of inequity, especially those rooted in race and poverty.

David previously served as the inaugural associate director of Princeton University’s Center for Information Technology Policy, a joint venture between the university’s School of Engineering and its Woodrow Wilson School of Public and International Affairs. He came to Matrix from Cornell University’s AI Policy and Practice Initiative, where he was a visiting scientist. He holds a JD from Yale Law School, and bachelor’s degrees in philosophy from Princeton and Oxford, where he was a Rhodes Scholar.

We interviewed David to learn more about his research interests and the projects he will be pursuing while at UC Berkeley, including an upcoming book on the development of the algorithm used to determine recipients of kidney transplants in the United States. Please note that this interview has been edited for length and clarity.

Q: How did you develop your interest in the study of algorithms?

I have always been interested in the social impacts of technology. When I was a kid, I had terrible handwriting; because of a mild case of cerebral palsy, I had some fine motor impairment. When writing meant penmanship, I was a bad writer. But then, eventually, I got a word processor in school, and discovered that I loved writing, and it was a really empowering change for me. Word processors had already been around for a number of years; the key change that made those benefits possible in my life was that the rules changed. The school said, let’s get one of these computers into this setting, where it can be beneficial. Ever since then, I’ve been interested in the social impacts of new digital technologies.

I came of age during the first wave of internet optimism in the 1990s and early 2000s, and I returned to Princeton to help start the Center for Information Technology Policy, a growing, thriving organization that brought together people from different disciplinary backgrounds. Part of the idea was that, if you’re navigating the policy and values choices that come up around new technologies, it’s a big help to have some real depth of technical expertise. My colleague from that center, Ed Felten, later became the Deputy Chief Technology Officer of the United States in the Obama administration. The style of work we had there was focused on understanding the factual pieces of new technology, and on making sure that a clear, shared map of the stakes of the debate was available to all participants.

While there, I got very involved in one issue in particular: open government data, making data transparent to the public, and publishing it in a reusable format, so that, for example, if you have public records about pollution or crime or education, you can put that on a map and track it over time, and not only rely on the government’s presentation of that information.

This was an idea that really took off in the Obama administration: they created something called Data.gov and, along with other countries, built a multilateral partnership called the Open Government Partnership. I came together with Harlan Yu, who was a PhD student at Princeton, and we ended up starting a public interest organization, Upturn, to continue this work of informing the public debate.

In the beginning, there was an optimistic view that there was an inherent valence to the technology, that it would make things more democratic and more open and accountable. Over time, we saw that wasn’t the case. Data.gov and similar sites had great data about things like the weather or the real-time location of buses, but if you were thinking this was going to help uncover financial malfeasance or otherwise disrupt the status quo, that didn’t transpire. We published a mea culpa on this, called “The New Ambiguity of ‘Open Government,’” where we said, if you’re making the data open, that doesn’t necessarily mean that you’re making the government open. There’s a whole politics to this. It’s not inherent in the technology that things are going to get more open.

Upturn started out as a consulting firm in DC and ended up as an NGO, working very closely with civil rights organizations to address inequities based on race, poverty, or the conjunction of the two. We evolved over time into having a much clearer political or normative mission. While at Upturn, I worked on understanding questions like, how do predictive policing systems work? If we have systems in courtrooms telling us who’s dangerous, what does that mean? What danger or risk is being measured, and what is the impact on real people and their families? Those sorts of questions became more important over time.

Three years ago, I was teaching at the law school at Georgetown, and I was focused on, how do we make algorithms accountable? We’re having software make high-stakes decisions that impact people’s lives. What can we do to make the moral innards of these systems visible, and to give people who are not the engineers a seat at the table, so they can help make some of these values choices? That’s a question that is very much alive today.

Q: What will you be working on during the coming year as a Matrix Visiting Scholar?

One of the projects I’ll be working on is a book with the working title, Voices in the Code. The idea is, I can give you lots of examples of where a system has been built and the values choices have not been made in an accountable way. In courtrooms, in the pre-trial context, where someone hasn’t been convicted of a crime, you’re balancing the liberty of a presumptively innocent person against the risk to the community that they might go out and commit more crimes or something like that. In many jurisdictions, there’s no visibility and no clear understanding of how those choices are made. The point of these courtroom systems is to predict who’s dangerous. We wrote a paper called “Danger Ahead” that said, we predict these systems are dangerous because they’re hiding the ball on what the moral trade-offs are.

Voices in the Code is about one place where people didn’t hide the ball: organ transplantation in the United States. When a kidney becomes available, there are roughly 100,000 people waiting for a transplant. A donated organ is a non-market resource: we’re not going to give it to the highest bidder, but we do have to decide collectively who’s going to get this vital resource and the opportunity to resume a normal life, rather than relying on dialysis.

There are all kinds of logistical factors that go into that: how far away is the person? There are also medical factors, like blood type. And there are moral factors: if we wanted to maximize the total benefit from our supply of organs, we might choose to give the organs to younger, healthier, and by-and-large richer and possibly whiter recipients, with fewer co-morbidities, other health problems, or adverse social determinants of health. Of course, this is dramatically unfair. If we were to do that in a completely utility-maximizing way, the result would be that people who are already disadvantaged would lose the chance to get transplants. Older recipients would also be greatly disadvantaged in that system.
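To make the shape of those trade-offs concrete, here is a deliberately toy Python sketch of how logistical, medical, and utility-versus-fairness factors might be folded into a single priority score. This is not the actual OPTN/UNOS allocation policy; every field name and weight below is invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class Candidate:
    years_on_waitlist: float       # fairness factor: time already waited
    distance_km: float             # logistical factor: transport distance
    blood_type_compatible: bool    # medical factor: a hard constraint
    expected_benefit_years: float  # utility factor: estimated life-years gained

def priority_score(c: Candidate, utility_weight: float = 0.5) -> float:
    """Toy composite score; all weights here are invented.
    A purely utility-maximizing policy would push utility_weight toward 1.0;
    the public pushback described below amounts to an argument for weighting
    time on the waitlist, and with it equitable access, more heavily."""
    if not c.blood_type_compatible:
        return float("-inf")  # medically ineligible for this particular organ
    utility = c.expected_benefit_years
    fairness = c.years_on_waitlist
    logistics = -0.01 * c.distance_km  # small penalty for transport distance
    return utility_weight * utility + (1 - utility_weight) * fairness + logistics
```

The point is not the particular numbers: the weights themselves are moral choices, and moving something like utility_weight is exactly the kind of decision the public process described next was arguing over.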

But what’s interesting about transplants is there’s a very public process of figuring out what that algorithm is going to be. And when they suggested this utility-maximizing idea, the public pushed back, and they switched to something that’s a lot more moderate and smarter than what they were originally going to do. They did that because there was a public comment process, and transparency about what the algorithm was. There was auditing and there were simulations of how it would work if we rolled out different versions of that algorithm.

Those are all things that people are arguing for in other contexts, whether in child welfare, courtrooms, or in the private-sector systems for hiring. We want transparency and accountability. And there are a lot of ideas on the whiteboard. But what does it look like in practice? How can it be done? From my point of view, the transplant example is a really valuable precedent for how to do the ethics inside an algorithm in an accountable way. My book is about this example and what we can learn from it. (Watch a video of a talk that Robinson gave about this work.)

The second half of the work is a book about how algorithms change the stories we tell about who people are. It looks at how selves are constructed, so it has more of a philosophical bent. When I was working in policy, I noticed that if you tag somebody as having a high productivity score, or a high dangerousness score, it’s not only used to make some narrow decision; it also changes how the person is perceived by others. If we think about the quantified self movement, with all these self-measurements, like a smart watch giving me health points, that’s going to change my view about how healthy I am. If we rate surgeons based on how successful their patients are after the operation, we think we’re finding out who’s a good surgeon, when it turns out we may really be finding out, in part, who cherry-picks the easy cases. The book aims to help the public develop a greater sense of confidence in taking apart what some of these scores really mean, to recover a sense of being able to construct our own identities rather than outsourcing that to some piece of software. [See this short essay that previews Robinson’s book on the social meaning of algorithms.]

Q: What other lessons does the kidney transplant example teach us about fairness in algorithms?

Sometimes you’ll hear people talk about going out to get public input through some process, and the input is treated like something we’re going to mine and collect. But one of the key insights from this transplant experience is that debate creates opinions. The opinions that people come to the table with tend to change and soften. I always visualize one of those machines for polishing rocks, where you have all of these sharp edges that go in at the beginning, and they tumble around and get polished. Eventually people see where others are coming from, and they are invested in hearing each other out.

The algorithm for transplants is perpetually being revised, which is part of what a real democratic process looks like. People arrived at something they may not have loved, but that they found tolerable. There was a kind of wearing down, a gradual acquiescence into something tolerable. Especially if we look at our politics today, it’s no small feat to find something that is mutually tolerable to people with very different points of view. At some level, that’s part of our ambition for the governance of algorithms.

Q: Based on what you’ve learned about algorithms and transparency, what do you think should be the norm in this area in five or ten years?

People sometimes say there ought to be one centralized regulatory body for algorithms, and I’m skeptical about that, because I think the contexts do differ, and context really matters. If you’re dealing with something medical, you want medical experts, and if you’re dealing with criminal law, then you want experts in the criminal legal system, as well as people and families who’ve encountered the system who can provide input into that.

But I do think there can be a shared layer that emerges, where people in one area talk to people in another and recognize that we have problems of the same shape. We’re doing data science, but we want to do it in an accountable, inclusive, and democratic way. There are places where we can learn how to do that, and we can take examples from one domain and share them with another.

So what does that mean? It means getting people involved in the design process as early as possible, to frame a shared understanding of the problem. It means publishing and auditing and simulating. (This is a step that I think hasn’t gotten a lot of attention so far: how can we forecast the consequences of our alternatives?) And then, once the thing is out there, continuing to pay attention to how it’s going and seeing whether it needs to be revised. That’s a set of practices that people are learning how to do in parallel, in lots of different places. So it’s about how to share ownership of the ethical choices inside high-stakes software. That’s what I’m working on, and that’s where I think a shared literacy needs to emerge.
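As a toy illustration of what forecasting the consequences of alternatives can mean in practice, the sketch below generates entirely synthetic candidates and compares who would receive a scarce resource under two hypothetical scoring rules. Nothing here reflects any real allocation policy; the data, rules, and weights are all invented.

```python
import random

random.seed(0)

# Entirely synthetic candidates: (age, years already on the waitlist).
candidates = [(random.randint(20, 80), random.uniform(0, 10)) for _ in range(1000)]

def benefit_first(age, waited):
    # Hypothetical utility-maximizing rule: younger implies more expected benefit.
    return 80 - age

def balanced(age, waited):
    # Hypothetical compromise rule: expected benefit plus credit for time waited.
    return (80 - age) + 5 * waited

for name, rule in [("benefit-first", benefit_first), ("balanced", balanced)]:
    ranked = sorted(candidates, key=lambda c: rule(*c), reverse=True)
    recipients = ranked[:100]  # suppose 100 organs are available this round
    avg_age = sum(age for age, _ in recipients) / len(recipients)
    print(f"{name}: average recipient age = {avg_age:.1f}")
```

Even a simulation this crude surfaces a distributional consequence (here, the average age of recipients) before a policy is ever rolled out, which is the kind of forecasting the transplant process did at far greater sophistication.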

Sometimes there’s a pattern of technical “shock and awe,” and people say, you have to be a genius or an expert to have any clue what this system is doing. And yet, at the end of the day, there’s a conference room and a whiteboard somewhere where human beings are sitting around and saying, how does this work, and what do we want to change? The doors to that room can always be opened, no matter how complicated the software is, no matter if it’s changing every second. Answering that question is a job that can be shared.

Q: Part of the mission of Social Science Matrix is to promote cross-disciplinary research. What academic disciplines does your work touch upon?

I’ve taken a deep dive into the legal and policy documents, because one of the things about this transparent process is that there are reams of documents and reports, which are not necessarily easy to understand. I added a qualitative component that draws on sociological and anthropological methods. I conducted semi-structured qualitative interviews with participants in this public deliberation process, including physicians who led committees and a transplant patient who argued that the original proposal was unfair. Although my original training was not in sociology, I learned a great deal from colleagues and have been able to adapt those methods.

Q: What brought you to UC Berkeley to continue this work?

Berkeley is just an extraordinary community. There’s a very strong public service mission, because it’s a public university, and one of the world’s great intellectual communities is here. It’s a tremendous place, and a tremendous opportunity to contribute to those conversations, to share work in progress, and to get feedback.

Having looked at the transplant example, part of what I’m trying to do is to make that experience available to other scholars and policymakers who are working on similar problems in other domains — maybe not in transplants, but in a courtroom or a human resources department, where they want to know, how can transparency be made to work? I really want the substance of what I’ve done to be available to people.

I’ve made an intentional choice to step away from the more immediate policy work and think longer term. It’s been a great opportunity to think big picture, but also to think concretely about how we can take insights from the academic field and apply them to the social problems we have that relate to new technologies. In order for all this toil and time to pay off, I’ve got to weave this work into the broader conversation around these issues. I am hoping Matrix and UC Berkeley will be a platform to bring these ideas into conversation with the wider world.
