• bias
    HR: Wake Up! Employees Trust AI More Than HR

    For years I have been a supporter and friend of HR. In every conversation with HR teams, I have been impressed by their passion, dedication, and goodwill. Yet despite our best efforts, a recent survey of 851 working professionals found that "employees trust AI more than HR."

    What? How can that be? Before you dismiss the result, let me walk through the data. It is not as simple as it looks.

    What does the data show?

    1. AI is seen as more trustworthy. When asked "Do you trust AI or HR professionals more?", 54% of respondents chose AI, versus 27% for HR. Strange as it sounds, this is really a question of trust. Employees know that managers are biased, so any performance review, raise, or other feedback delivered through HR may be colored by some bias (even recency bias). AI, by contrast, has no "personal opinion." When grounded in real data, its decisions often feel more "trustworthy": 65% of respondents believe AI tools will be used fairly. This makes sense. We have crossed the chasm from worrying that AI will destroy the world to seeing it as a statistical, data-driven decision system. And you can ask AI "why did you choose this candidate" or "why did you rate this employee this way," and it will give a precise, explicit answer. (People often struggle to explain their own decisions clearly.)

    2. AI is already trusted for performance reviews. Although few AI performance-review tools are on the market today (Rippling's is one example), 39% of respondents believe AI-driven performance reviews would be fairer, and 33% believe AI-based pay decisions would be free of bias. Again, this is likely because AI can clearly explain its decisions, whereas managers often rely on "gut feel."

    3. AI is welcomed as a career coach. When asked "Do you value AI tools' ability to coach you on career goal-setting?", 64% of respondents said yes. This again points to employees' appetite for feedback and guidance, an area where many managers fall short (or are not open enough).

    This is not a rejection of HR, but a question about trust in managers

    To me, the data surfaces three important points, each of which may surprise you:

    1. Employees doubt managers' decision-making. We do not always trust "managers" to make fair, unbiased decisions about hiring, performance, and pay. Employees know bias exists, so they want a system that can select and evaluate them more fairly.

    2. AI has shifted from "frightening" to "trusted." We have crossed the psychological chasm where AI felt scary and increasingly see it as a trustworthy tool, which allows companies to apply AI to people decisions at much greater scale.

    3. HR must adapt quickly to the AI era. For HR departments, the way forward is clear. We must start learning AI tools now, bring them into the most important HR domains, and invest the time to govern, train, and make use of them.

    As for HR's ability to earn trust, the logic now runs like this: building support and trust inside the company will increasingly depend on how HR selects and implements AI systems. Employee expectations are high, so we must meet them. Like it or not, AI is changing how we manage people.
    bias
    November 21, 2024
  • bias
    ACLU Files Complaint Over Aon's AI Hiring Tools

    On June 6, 2024, the American Civil Liberties Union (ACLU) filed a complaint with the U.S. Federal Trade Commission challenging the legality of, and bias in, Aon's candidate assessment tools. The ACLU alleges that Aon assessments such as the Adept-15 personality test and the vidAssess-AI video assessment tool are falsely marketed as "bias-free" and able to "improve diversity," when in practice they may discriminate against job seekers on the basis of race and disability (including autism and mental health conditions). The ACLU further notes that Aon's gridChallenge cognitive ability assessment shows racial differences in performance. In response to these allegations, Aon says its assessments follow industry best practices as well as EEOC, legal, and professional guidance. The ACLU's move highlights the tension between workplace inclusivity and compliance, and calls for stricter scrutiny of these widely used HR technology tools.

    Unveiling Bias: The Controversy Over Aon's AI Hiring Tools and the ACLU's Challenge

    In the rapidly evolving world of human resources technology, artificial intelligence (AI) plays a pivotal role, promising to streamline processes and enhance the efficiency of hiring practices.
However, the integration of AI into these practices often sparks significant debate regarding fairness and discrimination. A recent example of this controversy involves Aon, a global professional services firm, whose AI-driven hiring assessment tools have come under scrutiny by the American Civil Liberties Union (ACLU). The ACLU's allegations against Aon, leading to a formal complaint to the U.S. Federal Trade Commission (FTC), underline a critical dialogue about the implications of AI in hiring.

    The Basis of the ACLU's Complaint

    The ACLU has accused Aon of deceptively marketing its hiring assessment tools — specifically the Adept-15 personality assessment, the vidAssess-AI video interviewing tool, and the gridChallenge cognitive ability test — as bias-free and conducive to improving diversity in the workplace. According to the ACLU, these claims are not only misleading but also potentially unlawful, as the tools may perpetuate discrimination against job seekers based on race and disabilities such as autism, depression, and anxiety. These tools, which utilize algorithmic processes and AI, are said to evaluate candidates on traits like positivity, emotional awareness, and liveliness, which are often not directly relevant to job performance and may disproportionately affect individuals with certain disabilities.

    Aon's Defense and Industry Practices

    In response to the ACLU's claims, Aon has defended its products by asserting that they are designed in compliance with legal and professional guidelines, including those set forth by the Equal Employment Opportunity Commission (EEOC). Aon emphasizes that their tools are part of a broader array of assessments used by employers to make more inclusive hiring decisions. Moreover, Aon points to the efficiency and cost-effectiveness of their tools, arguing that they are less discriminatory than traditional methods.
    Legal and Ethical Implications

    The controversy raises significant legal and ethical questions about the use of AI in employment. U.S. laws, including the Americans with Disabilities Act (ADA) and Title VII of the Civil Rights Act, mandate non-discriminatory practices in employment, covering all aspects from hiring to workplace accommodation. The ACLU's complaint to the FTC, an agency tasked with protecting America's consumers and competition, suggests potential violations of these laws, framing the issue not only as one of employment discrimination but also of consumer deception.

    Broader Industry Concerns

    The ACLU's actions against Aon are part of a larger movement to scrutinize AI tools used for hiring. Critics argue that while these technologies offer the potential for unbiased decision-making, they often lack transparency and can inadvertently encode the biases of their developers or the data sets they are trained on. This issue is compounded by the proprietary nature of these tools, which prevents a thorough public assessment of their fairness and effectiveness.

    Potential Repercussions and Reforms

    The outcome of the ACLU's complaint could have far-reaching implications for the HR technology industry. A decision by the FTC to investigate or sanction Aon could lead to more stringent regulations governing the development and use of AI in hiring, potentially setting a precedent for how similar tools are marketed and implemented across the industry. For companies that rely on these tools, the case may serve as a critical prompt to reevaluate their algorithms to ensure compliance with anti-discrimination laws. Moreover, this case highlights the need for ongoing dialogue between technologists, legal experts, policymakers, and civil rights advocates to ensure that advancements in AI serve to enhance, rather than undermine, workplace equality.
As AI continues to permeate various aspects of human resources, the development of standards and best practices that safeguard against discrimination and uphold ethical principles will be crucial.

    Conclusion

    The ACLU's complaint against Aon is a reminder of the complex interplay between innovation, regulation, and rights in the age of AI. While AI offers transformative potential for HR, it also demands a cautious approach to prevent new forms of discrimination. This case may well become a landmark in the ongoing debate over AI ethics in hiring, urging all stakeholders to consider the broader implications of their technological choices. As the legal proceedings unfold, the HR technology industry will be watching closely, aware that the future of AI in hiring is now under a more discerning public and legal microscope.
    bias
    June 6, 2024