
Automated Employment Decision Tools (Updated)



Rule status: Proposed

Agency: DCWP

Comment by date: January 23, 2023

Rule Full Text
DCWP-NOH-AEDTs-1.pdf

The Department of Consumer and Worker Protection is proposing to add rules to implement new legislation clarifying the requirements for the use of automated employment decision tools within New York City, the notices to employees and candidates for employment regarding the use of the tool, the bias audit for the tool, and the required published results of the bias audit.

Attendees who need a reasonable accommodation for a disability, such as sign language interpretation, should contact the agency by calling 1 (212) 436-0210 or emailing Rulecomments@dcwp.nyc.gov by January 16, 2023.


Public Hearings

Date

January 23, 2023
11:00am - 12:00pm EST

Location



Connect Virtually
https://tinyurl.com/59fjw928
Dial 646-893-7101
Meeting ID: 228 381 285 379
Password: TG5jkM

Disability Accommodation
  • Sign Language Interpretation
  • Open Captioning
  • Communication Access Real-Time Translation

Comments are now closed.

Online comments: 19

  • Chris

    Dear Team,

    While most comments about the law and its definitions have already been made, I’d like to add another perspective on the role of additional players in this new market, who ought to be regulated as well (i.e., auditors).

    As the attempt is to regulate new technologies in markets (e.g., AI in hiring), questions arise alongside the development of new services that can themselves become an industry (i.e., AI auditing).

    As AI auditing itself becomes an industry, rather than an NGO activity, it can be postulated that auditors follow economic interests of their own. Thus, just as it is necessary to regulate industries (e.g., pharma, finance, technology) and auditors in other industries have to follow rules, the same principles should apply here.

    Again, because auditors are part of an economic industry and thus follow economic interests, those who are being audited cannot ensure that auditors are independent (e.g., that they do not favor large clients) if the auditors themselves do not have to follow rules.

    This should be considered when implementing these rules, because those who need and want to be audited will either a) have to do it themselves or b) rely on services in a market that is itself unregulated (i.e., AI auditing).
    I’d like to raise this point because companies that deploy AI technology are seen as the only players who need to be regulated in a new market, while in other industries (e.g., finance) not only those who offer financial products are regulated but also those who audit them.

    It is unquestionable that new technologies need to be regulated; that has never been in doubt.

    Comment added January 2, 2023 1:16pm
  • Tim

    My main comment would be to make sure you understand the status quo in hiring, and how utterly dreadful it is. Traditional hiring is fundamentally flawed from a bias perspective. Any job application typically starts with a candidate submitting a CV containing innumerable pieces of noise that reveal their gender, race, religion, socioeconomic status and many other things that are irrelevant to figuring out ‘is this the best person for the job?’

    Once the CV is received, a human then – if the candidate is lucky – scans the CV for 10 seconds and makes a yes or no call based on their gut. This process has very low accuracy (the best candidate often misses out), is fundamentally biased, costs a lot of money, slows down the hiring process, and leaves 0 candidates with any meaningful feedback – they just get the ‘sorry, not sorry’ email.

    Using products that actually measure candidates on things that are scientifically proven to predict job performance, such as their skills, personality and intelligence, is clearly a fairer way than some random person glancing at a CV and deciding they don’t like where the candidate went to school.

    There is zero auditability of current practices. No company in the world could explain why they rejected a candidate based on their CV. How could they? The logic of the decision – made in 10 seconds, let’s not forget – is not recorded anywhere. And how could it be? What would they record? ‘Oh, I didn’t like their name’, ‘Their formatting was bad’, ‘They had a spelling error’ or any number of ridiculous reasons – I really suggest you research how this is currently done in practice to realise how bad it is.

    At least with products that actually measure things, you’ll always be able to go back and say ‘Ok, this job required Python skills and this candidate scored 10% on a Python test, so that’s why they got rejected’. No matter how imperfectly the Python test is constructed, surely that’s a more legitimate reason for rejection than just someone’s whim?

    And this is just the screening stage. The rest of the traditional hiring process is also filled to the brim with bias, like unstructured interviews where all decisions are based on gut feel and something approaching astrology.

    Please, understand the current market conditions and how incredibly unfair traditional hiring is before implementing this law and throwing the baby out with the bathwater.

    Comment added January 12, 2023 4:41am
  • Mike Fetzer

    I applaud and fully support the efforts of the City of New York and the Department of Consumer and Worker Protection (DCWP) for being at the forefront of addressing the potential for bias in the use of artificial intelligence and machine learning applications to make automated employment decisions. In order to best facilitate the efficient implementation and intended impact of this legislation, I would respectfully recommend that the DCWP adopt the standard definitions of artificial intelligence and machine learning that were established by Congress in the National Artificial Intelligence Initiative Act of 2020, at sections 5002(3) and 5002(11) respectively:

    (3) ARTIFICIAL INTELLIGENCE. The term ‘‘artificial intelligence’’ means a machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations or decisions influencing real or virtual environments. Artificial intelligence systems use machine and human-based inputs to—
    (A) perceive real and virtual environments;
    (B) abstract such perceptions into models through analysis in an automated manner; and
    (C) use model inference to formulate options for information or action.

    (11) MACHINE LEARNING. The term ‘‘machine learning’’ means an application of artificial intelligence that is characterized by providing systems the ability to automatically learn and improve on the basis of data or experience, without being explicitly programmed.

    These definitions have already been adopted by the U.S. Equal Employment Opportunity Commission (EEOC), the U.S. Department of Justice (DOJ), and numerous other Federal and State agencies. Using commonly accepted definitions of artificial intelligence and machine learning will enable the DCWP to focus the definition of automated employment decision tools (AEDTs), draw upon the precedent set at the state and federal levels, and avoid any confusion and/or conflicts that might arise from the use of non-standard definitions. Further, I would recommend striking the terms “statistical modelling” and “data analytics” from LL 144 to maintain focus on artificial intelligence and machine learning.

    Alternatively, if the DCWP decides to retain or revise the current definition of machine learning, statistical modelling, data analytics, or artificial intelligence, I would recommend the following be added to part (iii) of the definition:

    Cross-validation is a statistical method of evaluating and comparing machine learning algorithms by dividing a single data set into two parts: training data and testing data. The training data can be further segmented into a training set, used to fit the parameters of the model, and a validation set, used to optimize the model parameters. The testing data is then used to provide an unbiased evaluation of the final model fit on the training data set.
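    The hold-out split described above can be sketched as follows (a minimal illustration; the function name and the 60/20/20 fractions are assumptions for the example, not part of the proposed rule or any statutory text):

```python
import random

def split_dataset(records, train_frac=0.6, val_frac=0.2, seed=0):
    """Divide a single data set into training, validation, and testing
    partitions. The training and validation partitions together play the
    role of the 'training data' in the definition above; the held-out
    testing partition provides the unbiased final evaluation."""
    rng = random.Random(seed)
    shuffled = list(records)
    rng.shuffle(shuffled)
    n_train = int(len(shuffled) * train_frac)
    n_val = int(len(shuffled) * val_frac)
    train = shuffled[:n_train]               # used to fit model parameters
    val = shuffled[n_train:n_train + n_val]  # used to optimize the model
    test = shuffled[n_train + n_val:]        # unbiased final evaluation
    return train, val, test

train, val, test = split_dataset(range(100))
# With 100 records: 60 for training, 20 for validation, 20 for testing.
```

    Full k-fold cross-validation repeats this idea by rotating which partition is held out; the sketch shows only the single split the definition describes.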

    Comment added January 19, 2023 3:35pm
  • Holistic AI Team

    Please see the attached comments relating to issues surrounding small sample sizes and concerns about how the metric for regression systems can be fooled by bimodal distributions.

    Comment attachment
    DCWP-Comment-on-Updates-.docx
    Comment added January 20, 2023 11:44am
  • Barbara Kelly

    The Institute for Workplace Equality (“IWE” or “The Institute”) submits the attached comments in response to the New York City (“NYC” or the “City”) Department of Consumer and Worker Protection’s (“DCWP” or the “Department”) invitation. The Department’s Notice of Proposed Rules is seeking to clarify the requirements set forth by NYC’s Local Law 144 that will regulate the use of automated employment decision tools (“AEDT”) wherein hiring or promotion decisions are made or substantially assisted by algorithmically-driven mechanisms.

    Comment attachment
    2023.01.20-IWE-NYC-Local-Law-144-Letter-of-Comment.pdf
    Comment added January 20, 2023 12:29pm
  • Joseph Abraham

    Thank you for the opportunity to provide comment. Note that these comments reflect my personal observations and recommendations, and not necessarily the position of my employer.

    Comment attachment
    proposed-NYC-rule-comments-v2.docx
    Comment added January 20, 2023 2:06pm
  • BABL AI Team

    Please consider the attached comments on the new proposed rules, submitted on behalf of the team at BABL AI.

    Comment attachment
    BABL-AI-LL144-Comments-III.pdf
    Comment added January 22, 2023 4:42pm
  • Merve Hickok (AIethicist.org)

    Thank you for the opportunity to provide further comments as DCWP continues to clarify this pioneering law. Please find attached recommendations and feedback submitted on behalf of AIethicist.org.

    Comment attachment
    DCWP_NYC-Public-Comment_Merve-Hickok_Jan2023.pdf
    Comment added January 22, 2023 7:33pm
  • retrain.ai Team

    Please see the attached comments relating to questions concerning adequate data sample sizes, sufficient historical data, and the definition of test data.

    Comment attachment
    JANUARY-retrain.ai-Comments-on-NYC-Local-Law-144.pdf
    Comment added January 22, 2023 11:37pm
  • Fred Oswald

    Thank you for this opportunity to provide further input on this important law. My comments are attached, in hopes they are helpful.

    Comment attachment
    DCWP-NYC-AEDT-comment-Fred-Oswald-230123.pdf
    Comment added January 23, 2023 8:52am
  • Julia Stoyanovich

    See attachment

    Comment attachment
    Stoyanovich_144_Jan23_2023.pdf
    Comment added January 23, 2023 9:18am
  • Daniel Schwarz

    Comments attached

    Comment attachment
    NYCLU-Testimony-DCWP-Employment-ADS-20230113.pdf
    Comment added January 23, 2023 11:21am
  • Mitch C. Taylor

    Attached please find the public comment from SHRM, the Society for Human Resource Management.

    Comment attachment
    SHRM-NYC-AEDT-Revision-Comment-1.23.2023.pdf
    Comment added January 23, 2023 12:31pm
  • Rose Mesina

    Thank you for this opportunity to provide further input. Please see attached for comments.

    Comment attachment
    Second-Set-of-Comments-Questions-on-the-Proposed-Rules-January-23-2023.pdf
    Comment added January 23, 2023 2:52pm
  • Jiahao Chen, PhD

    Please see attachment for further comments.

    Comment attachment
    2023-01-23-NYC-AEDT.pdf
    Comment added January 23, 2023 3:50pm
  • Workday

    Please see our comments on DCWP’s proposed amended rules attached.

    Comment attachment
    WDAY-Comments_NYC-LL-144-Rules_Second-Version_1.23.23.pdf
    Comment added January 23, 2023 4:22pm
  • Hannah Wade

    NYU Langone Health Comments on Proposed Rule Amendment, Subchapter T: Automated Employment Decision Tools

    RULE TITLE: Requirement for Use of Automated Employment Decisionmaking Tools

    REFERENCE NUMBER: 2022 RG 061

    RULEMAKING AGENCY: Department of Consumer and Worker Protection

    On behalf of NYU Langone Health, please accept our comments on the proposed rules to implement Local Law 144 of 2021 related to the use of automated employment decision tools, or AEDTs. We appreciate the Department of Consumer and Worker Protection (DCWP) for the opportunity to comment again.

    As the healthcare system strives to recover from the COVID-19 pandemic, New York City hospitals are facing significant workforce challenges. The DCWP should consider providing an exemption to the healthcare field due to ongoing public health crises. These crises, including recovery from the COVID-19 epidemic, the monkeypox outbreak, and the recent influx of asylum seekers, have put significant stress on all New York City hospitals. We are deeply troubled by any additional measures that prevent us from fulfilling our mission to provide safe, quality care for our patients.

    At NYU Langone Health, we are opposed to any additional barriers to fill urgently needed positions including nursing, allied health, clinical support and other support services. In particular, we have concerns about the potential hiring delays presented by the requirement (Section 5-304) to provide notice to candidates and employees 10 business days prior to the use of an automated employment decision tool, or AEDT. This requirement presents an unnecessary waiting period that will prolong staffing shortages and negatively impact patients in New York City.

    During our Fiscal Year 2022, we received 368,536 applications for 12,796 posted positions, a volume that requires the use of data analytics to process effectively. Delays of 10 business days in processing time would pose an undue hardship on our healthcare system as we work to recruit and employ talent to best serve our patients.

    Once again, thank you for the opportunity to comment. Please reach out to us with any questions or for additional information.

    Comment added January 23, 2023 5:00pm
  • Gibson, Dunn & Crutcher, LLP

    Please consider the attached comments.

    Comment attachment
    Comments-Regarding-Proposed-Rules-for-Implementing-Local-Law-144-of-2021.pdf
    Comment added January 23, 2023 5:41pm