Kirk Hartley

Burgeoning AI Issues: Schwarcz, Daniel B. and Prince, Anya, “Proxy Discrimination in the Age of Artificial Intelligence and Big Data”

So many issues lie ahead for litigation involving AI. With that in mind, here’s the abstract from a new paper by the indefatigable Dan Schwarcz and Anya Prince. The paper is available at SSRN.


Big data and artificial intelligence are revolutionizing the ways in which financial firms, governments, and employers classify individuals. Surprisingly, however, one of the most important threats to anti-discrimination regimes posed by this revolution is largely unexplored or misunderstood in the extant literature. This is the risk that modern algorithms will result in “proxy discrimination.” Proxy discrimination is a specific type of practice producing a disparate impact. It occurs when two conditions are met. The first is widely recognized: a facially-neutral characteristic that is relevant to achieving a discriminator’s objectives must be correlated with membership in a protected class. By contrast, the second defining feature of proxy discrimination is generally overlooked: in addition to producing a disparate impact, proxy discrimination requires that the predictive power of a facially-neutral characteristic is at least partially attributable to its correlation with a suspect classifier. For this to happen, the suspect classifier must itself have some predictive power, making it ‘rational’ for an insurer, employer, or other actor to take it into consideration. As AIs become even smarter and big data becomes even bigger, proxy discrimination will represent an increasingly fundamental challenge to many anti-discrimination regimes. This is because AIs are inherently structured to engage in proxy discrimination whenever they are deprived of predictive data. Simply denying AIs access to the most intuitive proxies for predictive variables does nothing to alter this process; instead it simply causes AIs to locate less intuitive proxies. The proxy discrimination produced by AIs therefore has the potential to cause substantial social and economic harms by undermining many of the central goals of existing anti-discrimination regimes. For these reasons, anti-discrimination law must adapt to combat proxy discrimination in the age of AI and big data. 
This Article offers a menu of potential responses to the risk of proxy discrimination by AI. These include prohibiting the use of non-approved types of discrimination, requiring the collection and disclosure of data about impacted individuals’ membership in legally protected classes, and requiring firms to eliminate proxy discrimination by employing statistical models that isolate only the predictive power of non-suspect variables.
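The mechanism the abstract describes — a model that is denied a protected attribute simply rediscovers it through a correlated, facially-neutral feature — can be sketched numerically. The following is a hypothetical illustration with synthetic data, not an example from the paper: the outcome depends only on protected class membership `A`, the model is fit without `A`, and the correlated feature `X` ends up carrying the predictive weight while a genuinely neutral feature `Z` does not.

```python
# Hypothetical sketch of proxy discrimination: a model deprived of the
# protected attribute A shifts predictive weight onto a correlated proxy X.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

A = rng.integers(0, 2, n)              # protected class membership (0/1), withheld from the model
X = A + rng.normal(0, 0.5, n)          # facially-neutral feature correlated with A (the proxy)
Z = rng.normal(0, 1, n)                # genuinely neutral feature, uncorrelated with A

# The outcome is driven only by A, so the suspect classifier itself
# has predictive power -- the paper's second condition.
Y = (A + rng.normal(0, 0.5, n) > 0.5).astype(float)

# Fit least squares on [1, X, Z] only; A is never shown to the model.
F = np.column_stack([np.ones(n), X, Z])
coef, *_ = np.linalg.lstsq(F, Y, rcond=None)

print(f"weight on proxy X:   {coef[1]:.2f}")   # substantial: X stands in for A
print(f"weight on neutral Z: {coef[2]:.2f}")   # near zero
```

Dropping `X` from the model as well would not end the problem in a realistic dataset: as the abstract notes, the model would simply locate the next-best, less intuitive proxy for `A` among the remaining features.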

Keywords: Proxy Discrimination, Artificial Intelligence, Insurance, Big Data, GINA

Suggested Citation:

Schwarcz, Daniel B. and Prince, Anya, Proxy Discrimination in the Age of Artificial Intelligence and Big Data (March 6, 2019). Available at SSRN:

