
Artificial Intelligence (AI)

Massive, cheap computing power – often combined with AI – has facilitated much of the recent progress in understanding and working against cancer. Accordingly, it’s good to see that research against cancer will be one of the uses for the world’s most powerful supercomputer, which is headed for the federal government’s Argonne National Laboratory in Chicago’s western suburbs. Slated to be operational in 2021, the supercomputer will run at exaFLOP scale. What the heck does that mean? It means a quintillion calculations per second. What is a quintillion? It is a thousand raised to the power of six – a 1 followed by 18 zeros. In other words, it’s a really big number. See the chart for more. Amazing.
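For a rough sense of the scale, here is a back-of-the-envelope sketch in Python; the figure of roughly 100 gigaFLOPS for an ordinary laptop is an illustrative assumption, not a benchmark.

# Back-of-the-envelope comparison of exascale to an ordinary laptop.
# The 100 gigaFLOPS laptop figure is an assumed, illustrative number.
EXAFLOP = 10**18                  # one quintillion floating point operations per second
LAPTOP_FLOPS = 100e9              # assumed ~100 gigaFLOPS for a typical laptop

seconds = EXAFLOP / LAPTOP_FLOPS  # laptop time to match ONE second of exascale work
days = seconds / (60 * 60 * 24)

print(f"A quintillion, written out: {EXAFLOP:,}")
print(f"Laptop time to match one exascale-second: about {days:,.0f} days")

Under that assumption, an ordinary laptop would need roughly four months to do what Aurora is designed to do every second.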

Hopefully this incredible machine will help researchers against cancer fulfill Jeff Huber’s call to “find a better way” to take on cancer. Who is Jeff Huber? He is a University of Illinois computer science grad who, among other things, led the teams that developed Google Ads, Google Maps and Google Earth. He also is highly motivated because his wife died in her 40s of a colon cancer no one saw coming. He delivered the commencement address for the 2016 graduating class at the U of I, and drew a standing ovation after he aired the “find a better way” theme as a mantra for many problems facing society, including cancer.

Mr. Huber presently serves as a member of the Board of Directors for Grail. Created in 2016, the company seeks – true to its holy-grail name – to make it possible to find cancer early by sifting through cells in blood to find and identify cancer cells long before a tumor manifests itself. Grail is one of a small handful of companies working to bring this type of “liquid biopsy” to mass markets. Taking advantage of relatively cheap and massive computing power is part of the equation for getting things done against cancer. I’m looking forward to Chicago taking on a larger role in research against cancer, building on decades of marvelous computing work at the University of Illinois. See generally Transforming Science – “Petascale Day” – Celebrating “In Silico” Research and the Blue Waters Supercomputing Project at the National Center for Supercomputing Applications at the University of Illinois.

The March 18, 2019 press release from the Department of Energy is pasted below.

______________________________________________________________________________________________

CHICAGO, ILLINOIS – Intel Corporation and the U.S. Department of Energy (DOE) will build the first supercomputer with a performance of one exaFLOP in the United States. The system being developed at DOE’s Argonne National Laboratory in Chicago, named “Aurora”, will be used to dramatically advance scientific research and discovery. The contract is valued at over $500 million and will be delivered to Argonne National Laboratory by Intel and sub-contractor Cray Computing in 2021.

The Aurora system’s exaFLOP of performance – equal to a “quintillion” floating point computations per second – combined with an ability to handle both traditional high performance computing (HPC) and artificial intelligence (AI), will give researchers an unprecedented set of tools to address scientific problems at exascale. These breakthrough research projects range from developing extreme-scale cosmological simulations to discovering new approaches for drug response prediction and new materials for the creation of more efficient organic solar cells. The Aurora system will foster new scientific innovation and usher in new technological capabilities, furthering the United States’ scientific leadership position globally.

“Achieving Exascale is imperative not only to better the scientific community, but also to better the lives of everyday Americans,” said U.S. Secretary of Energy Rick Perry. “Aurora and the next-generation of Exascale supercomputers will apply HPC and AI technologies to areas such as cancer research, climate modeling, and veterans’ health treatments. The innovative advancements that will be made with Exascale will have an incredibly significant impact on our society.”

Aurora is expected to be completed by 2021. | Photo: Argonne National Laboratory

“Today is an important day not only for the team of technologists and scientists who have come together to build our first exascale computer – but also for all of us who are committed to American innovation and manufacturing,” said Bob Swan, Intel CEO.  “The convergence of AI and high-performance computing is an enormous opportunity to address some of the world’s biggest challenges and an important catalyst for economic opportunity.”

“There is tremendous scientific benefit to our nation that comes from collaborations like this one with the Department of Energy, Argonne National Laboratory, and industry partners Intel and Cray,” said Argonne National Laboratory Director, Paul Kearns.  “Argonne’s Aurora system is built for next-generation Artificial Intelligence and will accelerate scientific discovery by combining high-performance computing and artificial intelligence to address real world problems, such as improving extreme weather forecasting, accelerating medical treatments, mapping the human brain, developing new materials, and further understanding the universe – and that is just the beginning.”

The foundation of the Aurora supercomputer will be new Intel technologies designed specifically for the convergence of artificial intelligence and high performance computing at extreme computing scale. These include a future generation of Intel® Xeon® Scalable processor, a future generation of Intel® Optane™ DC Persistent Memory, Intel’s Xe compute architecture and Intel’s One API software. Aurora will use Cray’s next-generation Shasta family, which includes Cray’s high performance, scalable switch fabric codenamed “Slingshot”.

“Intel and Cray have a longstanding, successful partnership in building advanced supercomputers, and we are excited to partner with Intel to reach exascale with the Aurora system,” said Pete Ungaro, president and CEO, Cray. “Cray brings industry leading expertise in scalable designs with the new Shasta system and Slingshot interconnect. Combined with Intel’s technology innovations across compute, memory and storage, we are able to deliver to Argonne an unprecedented system for simulation, analytics, and AI.”

For more information about the work being done at DOE’s Argonne National Laboratory visit their website HERE.

So many issues lie ahead for litigation involving AI. With that in mind, here’s the abstract from a new paper by the indefatigable Dan Schwarcz and Anya Prince. This is the link to the paper at SSRN.

Abstract

Big data and artificial intelligence are revolutionizing the ways in which financial firms, governments, and employers classify individuals. Surprisingly, however, one of the most important threats to anti-discrimination regimes posed by this revolution is largely unexplored or misunderstood in the extant literature. This is the risk that modern algorithms will result in “proxy discrimination.” Proxy discrimination is a specific type of practice producing a disparate impact. It occurs when two conditions are met. The first is widely recognized: a facially-neutral characteristic that is relevant to achieving a discriminator’s objectives must be correlated with membership in a protected class. By contrast, the second defining feature of proxy discrimination is generally overlooked: in addition to producing a disparate impact, proxy discrimination requires that the predictive power of a facially-neutral characteristic is at least partially attributable to its correlation with a suspect classifier. For this to happen, the suspect classifier must itself have some predictive power, making it ‘rational’ for an insurer, employer, or other actor to take it into consideration. As AIs become even smarter and big data becomes even bigger, proxy discrimination will represent an increasingly fundamental challenge to many anti-discrimination regimes. This is because AIs are inherently structured to engage in proxy discrimination whenever they are deprived of predictive data. Simply denying AIs access to the most intuitive proxies for predictive variables does nothing to alter this process; instead it simply causes AIs to locate less intuitive proxies. The proxy discrimination produced by AIs therefore has the potential to cause substantial social and economic harms by undermining many of the central goals of existing anti-discrimination regimes. For these reasons, anti-discrimination law must adapt to combat proxy discrimination in the age of AI and big data. This Article offers a menu of potential responses to the risk of proxy discrimination by AI. These include prohibiting the use of non-approved types of discrimination, requiring the collection and disclosure of data about impacted individuals’ membership in legally protected classes, and requiring firms to eliminate proxy discrimination by employing statistical models that isolate only the predictive power of non-suspect variables.

Keywords: Proxy Discrimination, Artificial Intelligence, Insurance, Big Data, GINA

Schwarcz, Daniel B. and Prince, Anya, Proxy Discrimination in the Age of Artificial Intelligence and Big Data (March 6, 2019). Available at SSRN: https://ssrn.com/abstract=
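To make the mechanism concrete, here is a minimal synthetic sketch in Python; all variable names, correlations, and coefficients are hypothetical illustrations, not taken from the paper. It shows that when a predictive protected attribute is withheld from a simple model, a correlated, facially-neutral feature absorbs its predictive power.

# Toy illustration (synthetic data, hypothetical variable names) of proxy
# discrimination: dropping a predictive protected attribute simply shifts
# predictive weight onto a correlated, facially-neutral feature.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

protected = rng.integers(0, 2, n)                  # suspect classifier (e.g., protected class)
proxy = np.where(rng.random(n) < 0.8, protected,   # facially-neutral feature, strongly correlated
                 rng.integers(0, 2, n))
noise = rng.normal(0, 1, n)

# Outcome depends on the protected attribute itself, making it "predictive".
outcome = 2.0 * protected + noise

def fit_coefs(features):
    X = np.column_stack([np.ones(n)] + features)
    beta, *_ = np.linalg.lstsq(X, outcome, rcond=None)
    return beta

with_protected = fit_coefs([protected.astype(float), proxy.astype(float)])
without_protected = fit_coefs([proxy.astype(float)])

print("proxy coefficient, protected attribute included:", round(with_protected[2], 2))
print("proxy coefficient, protected attribute removed: ", round(without_protected[1], 2))

The proxy’s estimated weight is near zero while the protected attribute is available, and jumps once the attribute is withheld – the dynamic the authors argue AIs will replicate with ever less intuitive proxies.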

 

What if Alexa went to law school? That’s the interesting headline used to tee up some exchanges about AI and changes to LexisNexis products, including legal research and court dockets. This February 11, 2019 post at Dewey B. Strategic is worth reading for some glimpses into the past and what’s ahead; it is titled: Lexis Prepares to Launch a Research Bot – And a CourtLink Makeover

SCOTUSblog includes a February 16, 2018 announcement of two interesting events as to predicting outcomes at SCOTUS. The entry is pasted below in full since the point seems to be to spread the word.

_______________________________________________________________________________

“Event announcement: The Supreme Court and wisdom of the crowds

On February 21 at 12:45 p.m. PST, Stanford University’s CodeX will host a presentation by Daniel Martin Katz entitled, “How Crowdsourcing Accurately and Robustly Predicts Supreme Court Decisions.” More information about this event, which will include remote access, is available at this link.

Relatedly, this blog is collaborating with Good Judgment to offer the SCOTUS Challenge, which invites forecasters to predict the outcomes of Supreme Court cases from this term. This opportunity for readers is available on the SCOTUS Challenge page.”

Suppose your next jury trial involves issues about cancer. Will you be able to use AI and big data – during jury selection – to quickly find all of a prospective juror’s social media comments about cancer? Check out this April 26, 2017 article at Artificial Lawyer about a firm trying to make that happen. The future will be interesting.