Force-fed Products, Open Captioning in Motion Picture Theaters, and Automated Employment Decision Tools

Rule status: Adopted

Agency: DCWP

Effective date: August 5, 2022

Proposed Rule Full Text
Rules-re-LL202-LL144-LL37.pdf

Adopted Rule Full Text
Notice-of-Adoption-Open-Captioning-Employment-Decision-Tools-Force-Fed-Prods.pdf

Adopted rule summary:

The Department of Consumer and Worker Protection is adopting rules to implement Local Law 202 of 2019, Local Law 144 of 2021, and Local Law 37 of 2022. These new rules add penalty schedules for violations related to keeping or selling any force-fed products, open captioning in motion picture theaters, and automated employment decision tools.

Comments are now closed.

Online comments: 5

  • Shea Brown

    See the attached pdf, also submitted via email.

    Comment attachment
    BABL-AI-Local-Law-144.pdf
    Comment added June 6, 2022 10:46am
  • Merve Hickok

    Written Comments from Merve Hickok: Regarding Local Law 144 of 2021 in relation to Automated Employment Decision Tools

    Comment attachment
    NYC-Public-Comment_AIethicist.pdf
    Comment added June 6, 2022 4:38pm
  • Ryan Carrier

    Ryan Carrier
    Executive Director
    980 Broadway #506
    Thornwood, NY 10594

    Re: NYC AEDT Bias Audit Law – Local Law 144 of 2021

    Dear Chair and Department Members:

    It is ForHumanity’s pleasure to submit this letter and our Government and Regulatory services in regard to Local Law 144 of 2021, related to Automated Employment Decision Tools (AEDT). The protections afforded by the law to candidates screened by AEDTs are aligned with ForHumanity’s mission: to examine and analyze downside risk associated with the ubiquitous advance of AI, algorithmic, and autonomous systems and, where possible, to engage in risk mitigation to maximize the benefits of these systems.

    ForHumanity (https://forhumanity.center/) is a 501(c)(3) nonprofit organization dedicated to addressing ethics, bias, privacy, trust, and cybersecurity in artificial intelligence and autonomous systems. ForHumanity uses an open and transparent process that draws from a pool of more than 1,000 international contributors from more than 70 countries to construct audit criteria, certification schemes, and educational programs for legal and compliance professionals, educators, auditors, developers, and legislators to mitigate bias, enhance ethics, protect privacy, build trust, improve cybersecurity, and drive accountability and transparency in AI and autonomous systems. ForHumanity works to make AI safe for all people and makes itself available to support government agencies and instrumentalities in managing risk associated with AI and autonomous systems.

    In support of the NYC AEDT Bias Audit Law, ForHumanity has regularly convened a team of volunteers (all humans are welcome in our transparent, crowdsourced process) to draft ForHumanity’s NYC AEDT Bias Audit, a certification scheme aimed at satisfying Local Law 144’s term “bias audit.” It is our belief that “bias audit” is not a widely understood and accepted term whereby all auditors know all steps required to satisfy such an audit. In our conversations with auditors, AEDT providers, plaintiff-side attorneys, and employers, great ambiguity remains about how audit satisfaction will be achieved. In light of this ambiguity, most organizations will err on the side of minimum compliance. The ambiguity exists as a result of the law’s language, copied here: “Such bias audit shall include but not be limited to the testing of an automated employment decision tool to assess the tool’s disparate impact on persons of any component 1 category required to be reported by employers pursuant to subsection (c) of section 2000e-8 of title 42 of the United States code as specified in part 1602.7 of title 29 of the code of federal regulations”. The “but not limited to” clause rightly highlights that bias is not only about disparate impact. In fact, bias exists in many forms, such as statistical bias, cognitive bias, and non-response bias. Further, bias manifests in data, architectural inputs, and outcomes from AI, Algorithmic, and Autonomous systems (AAA Systems), like AEDTs. ForHumanity agrees with the Council that we ought to maximize bias mitigation (“but not limited to”) in AEDTs, and our audit criteria are already designed to mitigate a wider array of bias.

    The law also does not appear to fully embrace all Protected Categories (the subjects of bias), such as the Disabled. AAA Systems, by their very design (seeking “best-fit” conclusions), are often exclusionary, especially in the areas of disability and neurodivergence. ForHumanity’s audit criteria can help the Council include bias remediation in AEDTs for all New Yorkers.

    ForHumanity offers to assist the Council in overcoming these challenges with our expertise in drafting audit criteria and our focus on mitigating risk from AAA Systems for all humans. We offer this service for the Council’s consideration as a means of establishing uniformity, certainty, and an infrastructure of trust for “bias audits” of AEDTs. Such offers are not unique to ForHumanity. We have provided the UK’s Information Commissioner’s Office with a similar submission of audit criteria for the General Data Protection Regulation (GDPR), and we have been retained by CEN/CENELEC JTC 21 as a technical liaison on the conformity assessment called for in the EU’s proposed AI Act. ForHumanity is conducting this work in numerous other jurisdictions globally as lawmakers race to place guardrails around these largely ungoverned AAA Systems.

    Financial audits rest on a series of critical elements of infrastructure, including checks and balances leading to successful governance, oversight, and accountability. Those key elements are discussed in the attached document, which lays out a comprehensive framework establishing an infrastructure of trust, and are summarized here:

    Trained bias audit professionals, like CPAs
    Independent third-party rules (like Generally Accepted Accounting Principles, GAAP), accepted and approved by the Council
    A body to ensure that independence, anti-collusion, and uniformity of audits prevail
    A code of Ethics and Professional Conduct governing auditors and their actions

    This set of criteria would dramatically enhance the impact of, and compliance with, the law, providing a leveraged enforcement mechanism of trained auditors abiding by a set of rules the Council has approved. ForHumanity provides these services, under the authority of the Council, for all four elements at no cost to the Council or New York City. As a registered 501(c)(3) nonprofit public charity, ForHumanity can assure the Council that our goals are aligned: protecting New Yorkers from bias in Automated Employment Decision Tools.

    Thank you to the Council and the City of New York for the opportunity to testify on behalf of all New Yorkers, who are the beneficiaries of ForHumanity’s work and mission. We hope you will consider our assistance and would welcome any opportunity to further share our work in support of Local Law 144 of 2021.

    Kind regards,

    Ryan Carrier
    Executive Director, ForHumanity

    Comment attachment
    ForHumanitys-response-to-NYC-Council-and-Government-and-Regulatory-Services-v1.0-1.pdf
    Comment added June 6, 2022 6:13pm
  • Littler Mendelson P.C. Workplace Policy Institute (WPI)

    Attached please find comments from Littler Mendelson P.C.’s Workplace Policy Institute.

    Comment attachment
    WPI-AI-Comments.pdf
    Comment added June 6, 2022 6:17pm
  • Navrina Singh

    Written Comments from Credo AI: Regarding Local Law 144 of 2021 in relation to Automated Employment Decision Tools

    Comment attachment
    NYC-Public-Comments-By-Credo-AI.docx.pdf
    Comment added June 6, 2022 9:30pm