June 22, 2023

Volume XIII, Number 173


Achieving Legal Compliance in AI: Minimizing Bias in Algorithms

We’ve all heard troubling stories involving emerging tools powered by artificial intelligence (AI), in which algorithms yield unintended, biased, or erroneous results. Here are a few examples:

  • A monitoring tool for sepsis that performs less well for patients of certain races
  • A selection app that prefers certain backgrounds, education, or experience, with no showing of job relatedness or business necessity
  • Facial recognition software that struggles with different skin tones
  • An employment screening tool that doesn’t account for accents
  • A clinical decision support tool for evaluating kidney disease that gives doctors inconsistent advice based on the patient’s race
  • Triage software that prioritizes one race over others

The list is long and growing, and companies that use these tools do so at increasing legal, operational, and public relations risk.

AI-powered tools, unchecked, pose real but hidden risks to our friends, neighbors, and countless others, often limiting economic opportunities or, in the extreme, causing physical harm. For organizations seeking to use these tools, they also create potentially expensive and disruptive legal liability, operational shortcomings that may impede greater success in the marketplace, and reputational damage in the court of public opinion. Currently, the impact of algorithms on organizations and target populations is poorly understood and rarely measured.

This virtual briefing focuses on the legal risks, methods for finding those risks, and solutions in the form of tailored compliance programs that address AI risks specifically.

Registration is complimentary, but pre-registration is required.

Key takeaways, with use cases in labor and employment, health care and life sciences, and consumer products:

  • identifying the key laws and regulations implicated in these domains
  • learning techniques for finding bias and discrimination
  • developing a holistic approach to establishing a compliance program specific to the creation and use of AI tools in these domains
  • navigating privacy laws while seeking solutions to bias and discrimination
  • predicting the future direction of regulation in this space


1. Laws Driving the Need for a Compliance System
1:00-1:40 p.m. ET

  • Labor and Employment/Civil Rights

Alexander J. Franchilli, Senior Counsel, Epstein Becker Green

  • Health Care and Life Sciences, Including CMS, HHS, ONC, and FDA

Bradley Merrill Thompson, Member of the Firm, Epstein Becker Green

  • Product Liability and FTC

Stuart M. Gerson, Member of the Firm, Epstein Becker Green

  • Federal and State Legislatures and Future General Regulation of Algorithms

David McNitt, Partner of The National Group and a non-lawyer Partner of the affiliated firm, the Oldaker Group

2. A Holistic Approach to Establishing a Compliance Program for the Creation and Use of AI Tools in Health Care and Beyond
1:40-2:10 p.m. ET
Lynn Shapiro Snyder, Member of the Firm, Epstein Becker Green
Nathaniel M. Glasser, Member of the Firm, Epstein Becker Green

  • Creating a Framework for the Enterprise Risks
  • NIST Risk Management Framework and Some Recent State AI Laws in the Employment Context
  • White House's AI Bill of Rights
  • Federal Sentencing Guidelines and HHS OIG 7 Elements of an Effective Corporate Compliance Program
  • The Four-Part Framework for Potential Federal Legislation

2:10-2:20 p.m. ET

3. Expertise Needed to Implement a Compliance System, Including Social Scientists and Data Scientists
2:20-2:35 p.m. ET
David Schwartz, CEO and Co-Founder, Ethics Through Analytics LLC

4. Navigating Privacy Laws While Seeking Solutions to Bias and Discrimination
2:35-2:50 p.m. ET
Alaap B. Shah, Member of the Firm, Epstein Becker Green

5. Predicting the Future - How This All Plays Out
2:50-3:10 p.m. ET

6. Interactive Panel Discussion with Audience Q&A
3:10-3:30 p.m. ET