What court decision on research to expose online discrimination means

April 3, 2020

A federal court has cleared the way for academic researchers, computer scientists and journalists to continue work that investigates online company practices for racial, gender or other discrimination.

The ruling means that those who research online companies no longer have to fear prosecution for the work they do to hold tech companies accountable for their practices, said Christian Sandvig, the H. Marshall McLuhan Collegiate Professor of Digital Media, professor of information and director of the Center for Ethics, Society, and Computing at the University of Michigan.

The lawsuit Sandvig v. Barr was filed against the Department of Justice by the American Civil Liberties Union in June 2016 on behalf of Sandvig and researchers at Northeastern University, the University of Illinois and First Look Media Works, publisher of The Intercept.

The suit challenged on First Amendment grounds a provision in the Computer Fraud and Abuse Act that made it a crime for researchers to set up false accounts in order to audit computer algorithms to check for hidden discriminatory practices. The ACLU argued this sort of audit procedure is not illegal in the offline world and should not be so in cyberspace.

The lawsuit was filed on Sandvig’s behalf as a private citizen, not as a representative of U-M, but the outcome has implications for researchers, computer scientists and journalists across the U.S. who have faced the threat of legal repercussions for violating a website’s terms of service.

An example noted by the ACLU involved investigative journalists who set up tester identities to expose advertisers that were using Facebook’s ad-targeting algorithm to exclude users from receiving job, housing or credit ads based on race, gender, age or other classes protected under federal and state civil rights laws.
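To make the method concrete, here is a minimal, hypothetical Python sketch of the paired-testing idea behind such audits: two simulated tester profiles that differ in only one protected attribute are run through a stand-in ad-targeting function, and any ad delivered to one profile but withheld from the other is flagged. The profile fields, the mock targeting rules and every name in the sketch are illustrative assumptions, not any real platform’s API or the plaintiffs’ actual code.

```python
"""
Minimal, hypothetical sketch of a paired-testing ("tester identity") audit.
All names and rules here are illustrative assumptions, not a real platform's API.
"""

from dataclasses import dataclass


@dataclass(frozen=True)
class TesterProfile:
    """A simulated account used only for auditing, never for real activity."""
    label: str
    age: int
    gender: str
    zip_code: str


def mock_ad_targeting(profile: TesterProfile) -> set:
    """Stand-in for a platform's ad-delivery decision (purely illustrative)."""
    ads = {"local_news", "streaming_service"}
    if profile.age < 40:           # age-based exclusion: the kind of rule an audit looks for
        ads.add("software_engineer_job")
    if profile.gender == "male":   # gender-based exclusion
        ads.add("truck_driver_job")
    return ads


def paired_audit(profile_a: TesterProfile, profile_b: TesterProfile) -> dict:
    """Return the ads each tester received that the other did not."""
    ads_a = mock_ad_targeting(profile_a)
    ads_b = mock_ad_targeting(profile_b)
    return {profile_a.label: ads_a - ads_b, profile_b.label: ads_b - ads_a}


if __name__ == "__main__":
    # Identical testers except for age, so any difference in ads is attributable to age.
    younger = TesterProfile(label="tester_25", age=25, gender="female", zip_code="48109")
    older = TesterProfile(label="tester_55", age=55, gender="female", zip_code="48109")

    for tester, exclusive_ads in paired_audit(younger, older).items():
        print(f"{tester} exclusively received: {sorted(exclusive_ads) or 'nothing'}")
```

In a real audit the targeting function would not be visible; researchers would instead observe which ads each tester identity actually receives, which is exactly the activity the terms-of-service provision of the CFAA had put at legal risk.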

Sandvig, who also holds an appointment in the Department of Communication and Media and is a member of the Center for Political Studies in the Institute for Social Research, talks more about the impact Monday’s decision has on future research.

Please elaborate on what this means to research and researchers.

Speaking frankly, researchers and journalists have been terrified by the government’s arguments about the CFAA, the primary anti-hacking law in the United States. For example, Julia Angwin is a Pulitzer Prize-winning reporter and editor-in-chief of The Markup. She was not involved in this lawsuit, but she wrote about our ruling this week, noting that “[t]his matters so much for our newsroom, and many others.”

The last few years have shown us that it is extremely important for the future of artificial intelligence and online platforms that a robust system of accountability is in place, one that allows outside parties to examine computer systems for potential problems. Journalists and researchers already perform this role in the offline world. But online, they will not launch studies or investigations of these systems in a high-risk environment where they fear prison. This ruling underlines the public importance of this outside accountability work.

What does it mean for the public?

Legal experts have called this the “worst law in technology,” and noted that it is so vague that “[online] chatting with friends, playing games, shopping or watching sports highlights” at work might be defined as federal crimes. For example, the government’s arguments in our case take the position that lying online is equivalent to hacking. This is absurd. Have you ever lied online? Or bought something personal online using your work computer? If so, reforming this law is important to you. Without a clearer law, virtually any computer user could be charged as a hacker at the complete discretion of the government.

The ACLU calls this the first ruling of its kind. Can you elaborate?

There have been a number of attempts to change, reform, or invalidate this law. Perhaps the most prominent was the “Aaron’s Law” movement, named after Reddit co-founder Aaron Swartz. Swartz, then a fellow at Harvard University’s Safra Center for Ethics, committed suicide while under indictment, after learning he faced up to 50 years in prison under this law for violating website terms of service. Aaron’s Law stalled in Congress, and for the most part efforts to change this law haven’t worked. Thanks to this case, we have an important ruling that clearly lays out well-argued reasons why the government’s interpretation of this law is not valid.

How did the lawsuit come about?

Investigating online systems from the outside is a normal part of my research. Although the problems with this law are widely known and discussed in my research community, in 2014 some collaborators and I wrote a paper specifically thinking through the example of online civil rights enforcement. We were noting the challenges that this law poses if we want to maintain our “offline” standards for civil liberties and civil rights after many processes move to online platforms. My involvement in the case came out of that paper. Indeed, all of the academic plaintiffs in the case are researchers who are performing this kind of accountability work and who have collaborated with each other in various combinations, even if they did not co-author that paper. When I wrote that paper, I noted that the law needed reform, but I had no idea then that I would personally get the chance to help change it. I’m grateful to the ACLU for the pro bono representation that made that possible.

What’s next?

I anticipate that many researchers and journalists who study online platforms will be emboldened to undertake more of this important work, both in the domain of civil rights and discrimination and in other domains. I know the plaintiffs are interested in pursuing research on “algorithm auditing.”

Court ruling
ACLU news release

Contact:

Laurel Thomas, 734-853-9130, [email protected]
