NYC Soon To Enforce AI Bias Law, Other Jurisdictions Likely to Follow

Client Alert


New York City’s Department of Consumer and Worker Protection (DCWP) is expected to begin enforcing the City’s novel artificial intelligence (AI) bias audit law on July 5, 2023. This law prohibits the use of automated decision tools in employment decisions within New York City unless certain bias audit, notice, and reporting requirements are met. The law’s definition of an automated employment decision tool includes any automated process that either replaces or substantially assists discretionary decision-making for employment decisions. For example, an automated process that screens resumes and schedules interviews based on such screening would be deemed an automated decision tool and subject to the law’s requirements. Conversely, an automated process that simply transfers applicant information from resumes to a spreadsheet, without otherwise scoring or ranking the applicants, would not be subject to the law’s requirements. 

The AI law has taken a long, winding path to enforcement. Although it was enacted in December 2021, the DCWP did not release proposed rules until September 23, 2022, after which commentators noted that further guidance was needed. In December 2022, the DCWP released Revised Proposed Rules clarifying certain key terms, which were subsequently incorporated into the Final Rules released in April 2023. Although the law went into effect on January 1, 2023, enforcement was initially delayed until April 15, 2023, and then further delayed until July 5, 2023, due to the large number of comments the DCWP received on the proposed rules. While the City may again push back the enforcement date, given the release of the Final Rules, employers are advised to ensure their compliance with the law by the announced July 5 date.

To do so, each employer must – as an initial matter – determine whether it is using AI tools in screening for hiring or promotion in New York City. If so, the employer should take the following concrete steps to ensure compliance with the law: First, the employer should engage an independent auditor to conduct a bias audit of any AI tool used. Organizations that use the same tool may share the same bias audit, provided each shares the historical data required by the law. Second, each employer must publish the results of the bias audit on its website. Finally, each employer must provide notice of its use of AI in hiring and/or promotion decisions to applicants and employees who reside in New York City, whether via website, job posting, mail, or e-mail. Employers will be required to conduct such audits and publish the results annually. Violations of the AI law could subject an employer to fines of between $500 and $1,500 per violation, per day.

Legal Landscape Outside New York City

Employers should also be cognizant of existing AI restrictions in other jurisdictions and prepare for new ones. For example, both Illinois’ Artificial Intelligence Video Interview Act and Maryland’s H.B. 1202 restrict the use of facial recognition and video analysis tools during pre-employment interviews unless the applicant’s informed consent is first obtained.

In addition, in 2022, the California Fair Employment and Housing Council released proposed draft revisions to the state’s employment non-discrimination laws that, if implemented, would prohibit the use of AI tools and tests that screen employees/applicants on the basis of a protected characteristic, unless the AI tool or test is job-related. More recently, the New Jersey Assembly introduced bill 4909, which seeks to limit the use of AI tools used in hiring unless a bias audit is conducted.

As the use of AI in hiring and employment decisions continues to increase, employers can expect more regulation. Accordingly, employers who currently use or are considering using AI tools or software should consider what actions they can take now to prepare for future regulatory changes, mitigate legal and reputational exposure, and avoid potential liability. 

Best Practices

Regardless of whether employers are currently subject to state or local regulation in this space, employers who are using or considering using AI should take the following steps: 

  • Familiarize themselves with the AI tools being used or considered in their workplaces. To do so, they should obtain information from vendors about the following: (a) the vendor’s compliance with applicable AI laws and best practices; (b) how the AI is trained, monitored and corrected for purposes of avoiding discrimination and bias; (c) what accommodation options are available, if any; and (d) how the vendor or tool stores and deletes data, and whether it collects confidential data.
  • Create policies governing the use of AI tools as part of the employment decision-making or disciplinary process. Employees should likewise be made aware of whether AI tools are being used, how the tools are being used, and that the employer has implemented policies to monitor and govern their usage. 
  • If an employer is operating or developing its own AI software or tool, it should ensure that there are measures in place to routinely monitor, audit, and/or conduct risk assessments of the tool.
  • Train key personnel, including supervisors, managers, human resource personnel, and legal personnel on applicable AI laws and internal policies. Training should also cover how to respond to employee inquiries about the use of AI tools, as well as accommodation requests. 

Further information on the broad range of risks associated with AI can be found in WilmerHale’s prior publications on “The Top 10 Legal and Business Risks of Chatbots and Generative AI” and in guidance issued by the Federal Trade Commission in February and March 2023 to companies using AI tools. Employers should also be cognizant of the potential that federal laws and regulations may be implicated, including possible liability under federal disability and discrimination laws. While there is currently no comprehensive federal legislation on AI in employment, employers may want to consult the EEOC’s May 2022 guidance memo on the use of AI. 

WilmerHale continues to monitor developments in this space, and employers are encouraged to reach out with questions or if they need assistance implementing lawful AI tools. You can also stay on top of updates involving AI by subscribing to the WilmerHale Privacy and Cybersecurity Blog.
