• class action
Latest ruling in the Tesla H-1B discrimination case: an "H-1B only" email keeps an engineer's claim alive, while an HR executive is dismissed over the nature of her role

Executive summary: On February 23, 2026, a U.S. federal judge declined to dismiss engineer Scott Taub's class action against Tesla. Taub alleges that staffing firm Max Eleven explicitly told him a Tesla position was "H-1B only," shutting him out as a U.S. citizen. Judge Chhabria found the email thin as evidence, but enough to sustain the discrimination claim. The claims of co-plaintiff Sofia Brander, an HR executive, were dismissed: the judge reasoned that Tesla's alleged preference for H-1B workers is concentrated in technical roles, and extending it to HR positions "wouldn't make sense."

Case background: one email sparks a federal lawsuit

On February 23, 2026, Judge Vince Chhabria of the U.S. District Court for the Northern District of California declined to dismiss the core allegations of engineer Scott Taub's class action against Tesla, Inc. The ruling means the suit, which accuses Tesla of systematically favoring H-1B visa holders over U.S. citizens in hiring, will proceed to the next stage of judicial review.

It began with an unremarkable recruiting email. Taub received a job pitch from Mr. Sainik, a contact at third-party staffing firm Max Eleven, for a QA Automation Engineer position at Tesla. The email stated the role was "H1B only" — open only to candidates holding H-1B work visas — and asked for "Travel history / i94" documentation. Taub, a U.S. citizen, abandoned the application after reading that line. He then sued Tesla in federal court for employment discrimination and sought class certification.

Case at a glance: Taub et al v. Tesla, Inc., Case No. 3:25-cv-07785, U.S. District Court for the Northern District of California, before Judge Vince Chhabria; ruling dated February 23, 2026. Plaintiff one: Scott Taub (engineer; claim sustained). Plaintiff two: Sofia Brander (HR executive; claim dismissed). Defendant: Tesla, Inc. Third-party staffing firm involved: Max Eleven.

The ruling: "thin" evidence, but enough to keep the suit alive

In his order, Judge Chhabria conceded this was a "close question," unusually candid language that signals judicial caution. His core logic: at the pleading stage, all inferences must be drawn in the plaintiff's favor.

He wrote: "The plaintiffs have pled just enough facts to allege that Tesla discriminated against US citizens." The judge also found it reasonable to infer that when Max Eleven told Taub the role was "H-1B only," it was relaying Tesla's hiring preference rather than applying its own screening criteria.

At the same time, he flagged reservations about the quality of the evidence, noting ambiguity in the email's original context: Taub's own question was not attached to the complaint, and the email's format makes it hard to tell whether it was a reply. As the order puts it: "All of this causes the court to be somewhat skeptical of Taub's allegations. But because all inferences must be drawn in Taub's favor at this stage, the Max Eleven email and the allegations related to it are sufficient to state a claim for discrimination."

Two plaintiffs, two outcomes: why was the HR executive dismissed?

The case has two plaintiffs with opposite fates. Engineer Taub's claim survives; HR executive Sofia Brander's was dismissed in full.

Brander likewise alleged she was discriminatorily excluded from job opportunities at Tesla because of her U.S. citizenship. The judge, however, found a basic logical gap in that argument: "The complaint alleges that Tesla uses H-1B workers for 'specialized roles in engineering, research, and design,' so it wouldn't make sense for Tesla to favor H-1B workers for HR roles."

In other words, the court reasoned that Tesla's reliance on H-1B workers is concentrated in engineering, R&D, and other highly technical roles, and extending that logic to HR positions is implausible. Brander was therefore held to lack a sufficient factual basis for her claims. Notably, the court gave Brander 14 days to amend her complaint; her claims are not permanently barred, and with stronger factual allegations she could re-enter the litigation.

More from the complaint: the numbers behind the systemic allegations

Can one email carry a class action? The complaint alleges far more. According to the publicly filed complaint (submitted in September 2025), the plaintiffs cite a series of figures to argue that Tesla's H-1B preference is not an isolated incident but a systemic hiring policy:

Tesla hired roughly 1,355 H-1B visa holders in 2024 while laying off more than 6,000 U.S.-based employees in the same period, the vast majority of them U.S. citizens.

Tesla brings in large numbers of H-1B workers through third-party staffing firms including Max Eleven, West Valley Staffing, ManPower, Balance Staffing, and Nelson Staffing; those hires may not appear in official H-1B filings, creating a regulatory blind spot.

Max Eleven contact Mr. Sainik's email not only said "H1B only" but also asked candidates for "Travel history / i94" — a classic visa-status screening step.

Tesla, in its court filings, called the allegations "preposterous" and denies any systemic discrimination.

Judge Chhabria was relatively cool toward the systemic allegations, stating plainly that only the staffing-firm email carried weight; the statistics alone are not direct evidence of discrimination. The figures will still matter later in the case, however, as support for a pattern-and-practice theory of systemic discrimination.

Compliance alert for North American HR practitioners: where is your risk?

For HR professionals at U.S. employers, this case is not a distant legal story; it is an industry warning that bears directly on daily practice.

Risk 1: What your third-party recruiters say is your legal exposure too

The most disruptive piece of legal logic in this case: the judge held it reasonable to infer that the staffing firm was relaying Tesla's preference, not making its own call. That means a single sentence in an email sent by your outside headhunter or staffing vendor can become the basis of a lawsuit against your company.

Compliance tip: Review all external recruiting agreements now, add explicit clauses barring vendors from using any visa-status screening language in candidate communications, and require vendors to submit communication samples for periodic review.

Risk 2: "H-1B only" is high-risk language, even when conveyed verbally

Under U.S. employment discrimination law, restricting job opportunities based on national origin or citizenship/immigration status can, in certain contexts, violate INA § 274B, which expressly prohibits employment discrimination based on citizenship status. Even a preference conveyed indirectly through a staffing firm may be treated as an extension of the employer's hiring policy.

Risk 3: Tying job descriptions to visa status is a high-risk practice

Binding a position to a specific visa type — whether in a job description, a recruiting brief, or an internal memo — can become direct evidence of discriminatory intent. When communicating role requirements to recruiters, HR teams should strictly distinguish a "work authorization requirement" from a "visa type preference." The former has a legal basis for certain restricted roles; the latter enjoys almost no legal protection.

Compliance actions to take now: Audit all agreements with external recruiting agencies to ensure they contain explicit anti-discrimination clauses, especially bans on visa-status screening. Require outside recruiting partners to submit standardized candidate-communication templates for review by in-house legal or HR compliance. Train internal recruiting teams on the line between "work authorization status" (lawful to ask about) and "visa type preference" (generally off-limits). Establish a hiring-compliance reporting channel so employees can safely flag issues. If you use multiple staffing vendors, review each one's actual practices; contract language alone does not cover execution-level risk.

The bigger picture: tightening H-1B policy and the new compliance normal

This lawsuit is not an isolated event. It arrives at a moment when U.S. politics and policy are highly sensitive about the H-1B program. Controversy around H-1B has kept building in recent years, with an "American workers first" narrative running through both parties' agendas. Against that backdrop, corporate H-1B hiring practices face scrutiny from multiple directions — not only government regulators but also employees, competitors, and public opinion.

Notably, the accuser here is not a government agency but an individual job seeker. Employment discrimination risk does not come only from enforcement by the Department of Labor or immigration authorities; it can come from any candidate who felt unfairly treated in your hiring process.

For companies operating in the U.S. that rely heavily on the H-1B talent pipeline, the signal is clear: hiring compliance must reach down from policy to operations, and extend from internal rules to the communication details of outside partners. One email from a third-party staffing firm can be the starting point of a federal class action.

What's next

Tesla has not publicly commented on the case. Sofia Brander has 14 days to decide whether to amend her complaint. Scott Taub's claims move to the merits stage, with discovery procedures to follow. HR Tech will continue tracking the case and bringing North American HR practitioners first-hand compliance updates.

This article is for professional reference only and does not constitute legal advice. For legal counsel, contact a licensed employment attorney.

1. Full original complaint (PDF), filed September 12, 2025, publicly available: https://casefilingsalert.com/wp-content/uploads/2025/09/Tesla-Suit-re-H-1B-Visas.pdf
    class action
    March 3, 2026
  • class action
Workday asks the court to dismiss age discrimination claims: AI hiring reaches the heart of a nationwide collective action — what should Chinese HR professionals in North America watch?

HR tech giant Workday has taken its latest legal step in a closely watched AI hiring lawsuit. In a filing submitted on January 21, 2026, Workday formally asked the judge to dismiss the plaintiffs' "disparate impact" age discrimination claims, arguing that the Age Discrimination in Employment Act (ADEA) protects only current employees, not job applicants. On that basis, the company contends the applicants' age-based claims should be thrown out. The motion is the latest development in Mobley v. Workday and marks the case's entry into its core legal battle.

The suit, first filed in 2023, was brought by a group of job applicants who allege that Workday's AI recruiting and screening tools systematically disadvantage protected groups, including by age, in their algorithmic decisions, and therefore discriminate. In February 2025, the court allowed the case to proceed as a nationwide collective action, elevating it from an individual dispute to a case of nationwide scope. The judge has also ordered Workday to produce a complete list of employers using its HiredScore technology, further widening the potential impact. Workday has publicly responded that its AI tools neither identify nor use protected attributes such as race, age, or disability, and stresses that final decisions remain human-led.

Legally, Workday's current strategy does not engage the question of whether the algorithm is biased. It targets a more basic one: whether job applicants have standing to bring "disparate impact" claims at all. In other words, the company hopes to narrow the case's scope through statutory interpretation. Whether or not the court ultimately accepts the argument, the move itself shows that AI hiring is shifting from a technical question to a judicial one.

For Chinese HR practitioners in North America, this deserves particular attention. Many NACSHR community members work at small and mid-sized companies, multi-state teams, or startups, where HR wears many hats: recruiting, compliance, employee relations, and systems administration. In practice, when a company buys an ATS or AI screening tool, go-live is treated as an efficiency win; but the moment a candidate challenges a screening outcome or files a complaint, the person who must explain the process, produce records, and answer the lawyer's letter is usually HR, not the technology vendor.

That is the real signal from the Workday case: the algorithm does not share the employer's liability. Even when screening is done by a system, the law treats it as the employer's employment action. A company cannot escape risk by saying "the system decided," and HR cannot fully deflect responsibility with "it's a tool problem."

More broadly, Workday is not alone. Eightfold AI has also faced litigation over FCRA compliance issues in its hiring workflows. The two cases involve different statutes — the ADEA and the FCRA — but point to the same trend: once an algorithm affects a candidate's employment prospects, it is treated as the hiring decision itself and must withstand equal or even stricter regulation and scrutiny. The HR tech industry has entered an era of hard compliance.

Meanwhile, the regulatory environment keeps tightening. Several states, including California, have begun requiring employers using automated hiring tools to offer candidates an opt-out mechanism and to conduct risk assessments and transparency disclosures. These rules effectively fold "algorithmic governance" into day-to-day HR compliance management rather than leaving it an internal matter for the technical team.

Against this backdrop, HR's competency model is quietly changing. We used to focus on time-to-hire, conversion rates, and cost control; the more important questions now are whether the system is explainable, auditable, and record-keeping, and whether it can withstand a regulator's inquiry. If you cannot clearly explain your screening logic or produce compliance evidence, a single lawsuit can wipe out every efficiency gain.

For NACSHR's Chinese HR peers, these cases are not distant big-company news but risk reminders tied directly to daily work. Whatever your company's size, once you start using AI hiring tools you are inside the same legal framework. Mature digital transformation is not simply rolling out more automation; it is balancing efficiency, compliance, and trust.

Workday's current legal maneuver may be only the opening of this shift, but it already sketches the trend clearly: the hiring race of the future will be won not by "who is smarter" but by "who is more compliant, more explainable, and more accountable." That is the new reality every North American HR professional must face.

Workday is seeking dismissal of disparate impact age discrimination claims brought by job applicants in the ongoing Mobley v. Workday lawsuit, arguing that the Age Discrimination in Employment Act (ADEA) does not extend such protections to applicants. In a court filing on January 21, 2026, the company stated that the law's "plain language" limits disparate impact claims to employees, not candidates. The case, originally filed in 2023 and certified as a nationwide collective action in 2025, alleges that Workday's AI recruiting tools discriminated based on age and other protected factors. Workday denies the claims, asserting that its AI systems neither use nor identify protected characteristics. The dispute highlights growing legal and compliance risks tied to AI-driven hiring technologies. Meanwhile, states including California are tightening regulations, requiring opt-out mechanisms and risk assessments for automated decision tools. The case could significantly shape how HR technology vendors and employers deploy AI in recruitment.
    class action
    January 27, 2026
  • class action
Agency Law and the Workday Lawsuit

This article examines the agency-law questions raised by the Workday lawsuit. The plaintiff alleges that Workday's AI screening tools discriminated against him on the basis of race, age, and disability. The case poses the question of whether an HR technology vendor can be directly liable for discriminatory outcomes. The legal complexities include AI's role in hiring decisions, agency liability, and the potential implications for both employers and AI developers. It is a reminder for employers to proceed carefully when deploying AI hiring tools and to guard against legal risk; AI developers, too, must ensure their products do not discriminate, because the suit could set an important precedent.

Editor's Note

Agency Law and the Workday Lawsuit

Agency law is so old that it used to be called master and servant law. (That's different from slavery, where human beings were considered the legal property of other humans based on their race, gender, and age, which is partly why we have discrimination laws.) Today, agency law refers to principals and agents. All employees are agents of their employer, who is the principal. And employers can have nonemployee agents too when they hire someone to do things on their behalf. Generally, agents owe principals a fiduciary duty to act in the principal's best interest, even when that isn't the agent's best interest.

Agency law gets tricky fast because you have to figure out who is in charge, what authority was granted, whether the person acting was inside or outside that authority, what duty applies, and who should be held responsible as a matter of fairness and public policy. Generally, the principal is liable for the acts of the agent, sometimes even when the agent acts outside their authority. And agents acting within their authority are rarely liable for their actions unless they also involve intentional wrongs, like punching someone in the nose.

Enter discrimination, which is generally a creature of statute that may or may not be consistent with general agency law even when the words used are exactly the same. Discrimination is generally an intentional wrong, but employees are not usually directly liable for discrimination because making employment decisions is part of the way employment works and the employer is always liable for those decisions. The big exception is harassment because harassment, particularly sexual harassment, is never part of someone's job duties.
So in harassment cases, the individual harasser is liable but the employer may not be unless they knew what was going on and didn't do anything about it. It's confusing and makes your head hurt. And that's just federal discrimination law. Other employment laws, both state and federal, deal with agent liability differently. Now, let's move to the Workday lawsuit. In that case, the plaintiff is claiming that Workday was an agent of the employer, but not in the sense of someone the employer was directing. They are claiming that Workday has independent liability as an employer too because they were acting like an employer in screening and rejecting applicants for the employer. But that's kinda the whole point of HR Technology—to save the employer time and resources by doing some of the work. The software doesn't replace the employer's decision making, and the employer is going to be liable for any discrimination regardless of whether and how the employer used their software. If this were a products liability case, the answer would turn on how the product was designed to be used and how the employer used it. But this is an employment law and discrimination case. So, the legal question here is whether a company that makes HR Technology can also be directly liable for discriminatory outcomes when the employer uses that technology. We don't have an answer to that yet and won't for a while. That's because this case is just at the pleading stage and hasn't been decided based on the evidence. What's happened so far is Workday filed a motion to dismiss based on the allegations in the complaint. Basically, Workday said, "Hey, we're just a software company. We don't make employment decisions; the employer does. It's the employer who is responsible for using our software in a way that doesn't discriminate. So, please let us out of the case." Then the plaintiff and EEOC said it's too soon to decide that.
If all of the allegations in the lawsuit are considered true, then the plaintiff has made viable legal claims against Workday. Those claims are that Workday's screening function acts like the employer in evaluating applications and rejecting or accepting them for the next level of review. This is similar to what third-party recruiters and other employment agencies do, and those folks are generally liable for those decisions under discrimination law. In addition, Workday could even be an agent of the employer if the employer has directly delegated that screening function to the software. We haven't yet reached the question of whether a software company is really an agent of the employer or is even acting like an employment agency. And even if it is, whether it's the kind of agency that has direct liability or whether it's just the employer who ends up liable. This will all depend on statutory definitions and actual evidence about how the software is designed, how it works, and how the employer used it. We also aren't at the point where we look at the contracts between the employer and Workday, how liability is allocated, whether there are indemnity clauses, and whether these types of contractual defenses even apply if Workday meets the statutory definition of an employer or agent who can be liable under Title VII. Causation will also be a big issue because how the employer sets up the software, its level of supervision of what happens with the software, and what's really going on in the screening process will all be extremely important. The only thing that's been decided so far is that the plaintiff filed a viable claim against Workday and the lawsuit can proceed. Here are the details of the case and some good general advice for employers using HR Technology in any employment decision making process.
- Heather Bussing AI Workplace Screener Faces Bias Lawsuit: 5 Lessons for Employers and 5 Lessons for AI Developers by Anne Yarovoy Khan, John Polson, and Erica Wilson at Fisher Phillips   A California federal court just allowed a frustrated job applicant to proceed with an employment discrimination lawsuit against an AI-based vendor after more than 100 employers that use the vendor’s screening tools rejected him. The judge’s July 12 decision allows the class action against Workday to continue based on employment decisions made by Workday’s customers on the theory that Workday served as an “agent” for all of the employers that rejected him and that its algorithmic screening tools were biased against his race, age, and disability status. The lawsuit can teach valuable lessons to employers and AI developers alike. What are five things that employers can learn from this case, and what are five things that AI developers need to know? AI Job Screening Tool Leads to 100+ Rejections Here is a quick rundown of the allegations contained in the complaint. It’s important to remember that this case is in the very earliest stages of litigation, and Workday has not yet even provided a direct response to the allegations – so take these points with a grain of salt and recognize that they may even be proven false. Derek Mobley is a Black man over the age of 40 who self-identifies as having anxiety and depression. He has a degree in finance from Morehouse College and extensive experience in various financial, IT help-desk, and customer service positions. Between 2017 and 2024, Mobley applied to more than 100 jobs with companies that use Workday’s AI-based hiring tools – and says he was rejected every single time. He would see a job posting on a third-party website (like LinkedIn), click on the job link, and be redirected to the Workday platform. Thousands of companies use Workday’s AI-based applicant screening tools, which include personality and cognitive tests. 
They then interpret a candidate’s qualifications through advanced algorithmic methods and can automatically reject them or advance them along the hiring process. Mobley alleges the AI systems reflect illegal biases and rely on biased training data. He notes the fact that his race could be identified because he graduated from a historically Black college, his age could be determined by his graduation year, and his mental disabilities could be revealed through the personality tests. He filed a federal lawsuit against Workday alleging race discrimination under Title VII and Section 1981, age discrimination under the ADEA, and disability discrimination under the ADA. But he didn’t file just any type of lawsuit. He filed a class action claim, seeking to represent all applicants like him who weren’t hired because of the alleged discriminatory screening process. Workday asked the court to dismiss the claim on the basis that it was not the employer making the employment decision regarding Mobley, but after over a year of procedural wrangling, the judge gave the green light for Mobley to continue his lawsuit. Judge Gives Green Light to Discrimination Claim Against AI Developer Direct Participation in Hiring Process is Key – The judge’s July 12 order says that Workday could potentially be held liable as an “agent” of the employers who rejected Mobley. The employers allegedly delegated traditional hiring functions – including automatically rejecting certain applicants at the screening stage – to Workday’s AI-based algorithmic decision-making tools. That means that Workday’s AI product directly participated in the hiring process. Middle-of-the-Night Email is Critical – One of the allegations Mobley raises to support his claim that Workday’s AI decision-making tool automatically rejected him was an application he submitted to a particular company at 12:55 a.m. 
He received a rejection email less than an hour later at 1:50 a.m., making it appear unlikely that human oversight was involved. “Disparate Impact” Theory Can Be Advanced – Once the judge decided that Workday could be a proper defendant as an agent, she then allowed Mobley to proceed against Workday on a “disparate impact” theory. That means the company didn’t necessarily intend to screen out Mobley based on race, age, or disability, but that it could have set up selection criteria that had the effect of screening out applicants based on those protected criteria. In fact, in one instance, Mobley was rejected for a job at a company where he was currently working on a contract basis doing very similar work. Not All Software Developers On the Hook – This decision doesn’t mean that all software vendors and AI developers could qualify as “agents” subject to a lawsuit. Take, for example, a vendor that develops a spreadsheet system that simply helps employers sort through applicants. That vendor shouldn’t be part of any later discrimination lawsuit, the court said, even if the employer later uses that system to purposefully sort the candidates by age and rejects all those over 40 years old. 5 Tips for Employers This lawsuit could just as easily have been filed against any of the 100+ employers that rejected Mobley, and they still may be added as parties or sued in separate actions. That is a stark reminder that employers need to tread carefully when implementing AI hiring solutions through third parties. A few tips: Vet Your Vendors – Ensure your AI vendors follow ethical guidelines and have measures in place to prevent bias before you deploy the tool. This includes understanding the data they use to train their models and the algorithms they employ. Regular audits and evaluations of the AI systems can help identify and mitigate potential biases – but it all starts with asking the right questions at the outset of the relationship and along the way.
Work with Counsel on Indemnification Language – It’s not uncommon for contracts between business partners to include language shifting the cost of litigation and resulting damages from employer to vendor. But make sure you work with counsel when developing such language in these instances. Public policy doesn’t often allow you to transfer the cost of discriminatory behavior to someone else. You may want to place limits on any such indemnity as well, like certain dollar amounts or several months of accrued damages. And you’ll want to make sure that your agreements contain specific guidance on what type of vendor behavior falls under whatever agreement you reach. Consider Legal Options – Should you be targeted in a discrimination action, consider whether you can take action beyond indemnification when it comes to your AI vendors. Breach of contract claims, deceptive business practice lawsuits, or other formal legal actions to draw the third party into the litigation could work to shield you from shouldering the full responsibility. Implement Ongoing Monitoring – Regularly monitor the outcomes of your AI hiring tools. This includes tracking the demographic data of applicants and hires to identify any patterns that may suggest bias or have a potential disparate impact. This proactive approach can help you catch and address issues before they become legal problems. Add the Human Touch – Consider where you will insert human decision-making at critical spots along your hiring process to prevent AI bias, or the appearance of bias. While an automated process that simply screens check-the-box requirements such as necessary licenses, years of experience, educational degrees, and similar objective criteria is low risk, completely replacing human judgment when it comes to making subjective decisions stands at the peak of riskiness when it comes to the use of AI. 
And make sure you train your HR staff and managers on the proper use of AI when it comes to making hiring or employment-related decisions. 5 Tips for Vendors While not a complete surprise given all the talk from regulators and others in government regarding concerns with bias in automated decision-making tools, this lawsuit should grab the attention of any developer of AI-based hiring tools. When taken in conjunction with the recent ACLU action against Aon Consulting for its use of AI screening platforms, it seems the era of government merely expressing concerns has given way to action. While plaintiffs’ attorneys and government enforcement officials have typically focused on employers when it comes to alleged algorithmic bias, it was only a matter of time before they turned their attention to the developers of these products. Here are some practical steps AI vendors can take now to deal with the threat. Commit to Trustworthy AI – Make sure the design and delivery of your AI solutions are both responsible and transparent. This includes reviewing marketing and product materials. Review Your Work – Engage in a risk-based review process throughout your product’s lifecycle. This will help mitigate any unintended consequences. Team With Your Lawyers – Work hand-in-hand with counsel to help ensure compliance with best practices and all relevant workplace laws – not just laws prohibiting intentional discrimination, but also those covering the unintentional “disparate impact” claims we see in the Workday lawsuit. Develop Bias Detection Mechanisms – Implement robust testing and validation processes to detect and eliminate bias in your AI systems. This includes using diverse training data and regularly updating your algorithms to address any identified biases. Lean Into Outside Assistance – Meanwhile, collaborate with external auditors or third-party reviewers to ensure impartiality in your bias detection efforts.
Original article: https://www.salary.com/newsletters/law-review/agency-law-and-the-workday-lawsuit/
    class action
    August 10, 2024
  • class action
Judge allows AI bias lawsuit against Workday to proceed

Workday faces a proposed class action over alleged bias in its AI screening software. Judge Rita Lin of the U.S. District Court for the Northern District of California ruled that Workday may be treated as an employer covered by federal anti-discrimination law because it performs screening functions its customers would ordinarily perform themselves. The ruling could significantly shape legal liability for AI-driven hiring. The suit was brought by Derek Mobley, who says he was rejected more than 100 times by Workday's customer companies because he is Black, over 40, and suffers from anxiety and depression. The EEOC has warned employers that they may be held liable if they fail to prevent screening software from having discriminatory effects.

July 15 (Reuters) — A federal judge in California rejected Workday Inc's bid to dismiss a proposed class action alleging that the AI software the company uses to screen job applicants for other businesses embeds existing biases.

In a first-of-its-kind ruling on Friday, U.S. District Judge Rita Lin said Workday can be treated as an employer covered by federal workplace discrimination laws because it performs screening functions its customers would otherwise carry out themselves.

Lin declined to dismiss several claims brought in 2023 by Derek Mobley, who says that because he is Black, over 40, and suffers from anxiety and depression, he was rejected from more than 100 positions at companies that contract with Workday.

The case is the first proposed class action to challenge the use of AI screening software and could set an important precedent on the legal implications of using AI to automate hiring and other employment functions — technology most large companies now use.

Lin dismissed the intentional race and age discrimination claims against Workday. She also ruled that the company cannot be treated as an "employment agency" under anti-bias laws because, unlike a staffing firm, it does not procure jobs for workers.

A Workday spokesperson said in a statement that the company was pleased Lin dismissed some of the claims. "We are confident that as we move to the next stage we will easily rebut the remaining claims, since we will have the opportunity to directly challenge their accuracy," the spokesperson said.

Mobley's lawyers did not immediately respond to a request for comment. The lawsuit alleges that Workday trains its AI software on data from companies' existing workforces to screen for the best applicants, without accounting for the discrimination that data may reflect.

Mobley accuses Workday of race, age, and disability discrimination in violation of Title VII of the Civil Rights Act of 1964 and other federal anti-discrimination laws. The proposed class could include hundreds of thousands of people.

Workday argued it is not covered by workplace bias laws because it was not Mobley's prospective employer, nor is it an employment agency that can be held liable for discrimination, since it does not make hiring decisions for its customers.

But Lin said on Friday that anti-bias laws are meant to protect workers broadly and to prevent employers from escaping liability by outsourcing tasks such as applicant screening, and that Workday can be held liable as its customers' agent.

"(The lawsuit) plausibly alleges that Workday's customers delegate traditional hiring functions, including rejecting applicants, to the algorithmic decision-making tools provided by Workday," wrote Lin, an appointee of Democratic President Joe Biden.

The U.S. Equal Employment Opportunity Commission, which enforces federal laws against workplace discrimination, had urged Lin in an April brief to let the case proceed. The agency has warned employers that they may be held legally liable if they fail to prevent screening software from having discriminatory effects.
    class action
    July 17, 2024