• ADA
From "One Person Doing Five Jobs" to Systemic Support: How PEOs Are Redrawing the Boundaries of the HR Function

For HR professionals in the United States, one feeling is nearly universal: it isn't that your skills fall short; it's that one person is expected to cover five jobs. From compliance (FLSA, OSHA, ACA) to payroll, benefits, and employee relations, nearly everything lands on HR. The root problem is not that "HR isn't professional enough" but that the company lacks a systematic allocation of resources. PEOs (Professional Employer Organizations) are becoming a key answer for small and midsize businesses. ADP TotalSource, a leading example, integrates compliance, payroll, benefits, and risk-management capabilities so that HR no longer has to go it alone. More importantly, a PEO restructures the risk itself, shifting exposure from individuals onto a professional organization. Only when HR stops being consumed by execution can it truly move into organizational development and strategy.

Anyone who has worked in HR knows the feeling: an employee asks about benefits policy, and you answer; the boss asks about payroll compliance, and you research it; a lawyer's letter arrives, and you coordinate the response; an OSHA inspection notice comes in, and you prepare the paperwork. The company has no dedicated legal counsel, no dedicated payroll team, no dedicated benefits consultant, so by default all of it falls to HR. The problem is that many small and midsize companies have an HR team of one or two people; sometimes that person is you. You are not unprofessional; you are being asked to do five jobs with one person's bandwidth.

HR's biggest vulnerability is not capability; it's resources

The U.S. labor-compliance system is a complex stack of federal, state, and local requirements. FLSA, ADA, FMLA, OSHA, ACA, EPLI: each has its own update cycle, and misreading any one of them can expose the company to fines or litigation. And the person held responsible for those risks first is usually HR.

Meanwhile, employee expectations for benefits keep rising. Industry data suggest that more than 90% of employees tie overall job satisfaction directly to their benefits package, and 78% say they would consider leaving if benefits fall short. Yet when a small company's HR sits down to negotiate with insurers, it has no leverage. Losing the talent race to big companies, absorbing compliance risk alone, drowning in administrative work: this is not HR failing at its job. It is a structural resource gap.

The PEO model is HR's strongest external reinforcement

A PEO (Professional Employer Organization) essentially gives the HR department a professional back-office team, systematically filling the functions one person cannot cover alone. It does not replace HR; it frees HR from execution so it can focus on the work that truly requires judgment and interpersonal skill.

In the North American PEO market, ADP TotalSource is the largest and most fully credentialed provider, holding both IRS certification and ESAC accreditation; fewer than 4% of PEOs nationwide hold both. Forbes Advisor named it the Best PEO of 2025, with a full 5-star rating.

Click to learn about ADP's service plans: View now

So what does an HR department actually gain by joining ADP TotalSource?
Compliance support no longer rests on HR alone

ADP TotalSource assigns a dedicated HR Business Partner holding professional credentials such as SHRM, SPHR, or PHR, covering high-frequency compliance scenarios including FLSA compliance, worker classification, ADA issues, labor-law posters, and employee handbooks. When federal, state, or local regulations change, ADP issues proactive alerts, so HR no longer has to track the Federal Register line by line.

Benefits quality jumps a tier

Through ADP TotalSource, employees of small and midsize businesses gain access to large-group benefits pools across all 50 states, including medical, dental, vision, 401(k), HSA/FSA, life insurance, accident insurance, an EAP (Employee Assistance Program), and perks such as gym memberships. ADP also manages the entire Open Enrollment process, so HR no longer has to coordinate dozens of employees' elections and changes by hand every year.

Workers' compensation and safety risks get dedicated owners

ADP TotalSource provides dedicated risk and safety advisors who handle workplace safety assessments, OSHA compliance training, workers' compensation claims, and a 24/7 employee-injury support hotline. These burdens no longer rest on HR's shoulders.

Payroll and tax are taken over by a team

A dedicated Payroll Business Partner and Tax Business Partner handle payroll processing, multi-state tax filings, and year-end tax forms. HR simply submits each pay period's data; ADP executes the rest, sharply reducing the risk of human error.

Data backs up HR's reporting

ADP DataCloud offers anonymized, aggregated datasets drawn from more than 1 million employers and 39 million employees, spanning 30-plus core dimensions such as salary benchmarks, overtime costs, turnover rates, and diversity metrics. HR can support compensation proposals with real market data instead of arguing for budget on gut feel and experience.

The dedicated support lineup HR gains

After partnering with ADP TotalSource, HR has a full professional team behind it:
HR Business Partner (compliance and strategy advisor)
Payroll Business Partner (payroll and tax execution)
Tax Business Partner (specialized tax support)
Benefits specialists (plan design and Open Enrollment management)
Risk and safety advisors (OSHA and workers' compensation compliance)
ADP MyLife Advisors (a direct employee-support hotline that doesn't consume HR's time)
Talent specialists and an HR investigations team (support for complex employee-relations cases)

This lineup is a level of resourcing that most small-business HR departments have never had.

This is not HR compromising; it's HR leveling up

Bringing in a PEO is not an admission that HR lacks ability. It is recognition that today's HR workload has long since outgrown what a small team should be expected to carry alone. Hand administrative execution to a professional system, hand compliance risk to a credentialed team, hand benefits negotiation to a platform with bargaining power, and HR can reinvest the recovered time in employee development, culture building, and organizational effectiveness, the places where HR actually creates value. That is what HR's work should look like.

Contact us

If you are evaluating a professional payroll or PEO solution for your company, there are two ways to learn more:
Click the link to explore ADP's offering directly: View now
Or follow our WeChat official account and add our assistant on WeChat with the note "ADP". A staff member will then follow up one-on-one with recommendations tailored to your company's situation.
    ADA
    April 10, 2026
  • ADA
Agency Law and the Workday Lawsuit

This article discusses the agency-law questions raised by the Workday lawsuit. The plaintiff alleges that Workday's AI screening tools discriminated against him on the basis of race, age, and disability. The case asks whether an HR technology vendor can be held directly liable for discriminatory outcomes. The legal complexities include AI's role in hiring decisions, agency liability, and the potential consequences for employers and AI developers. The case is a reminder that employers should proceed carefully when implementing AI hiring tools and guard against legal risk; AI developers must also ensure their products do not discriminate, because the lawsuit may set an important legal precedent.

Editor's Note

Agency Law and the Workday Lawsuit

Agency law is so old that it used to be called master and servant law. (That's different from slavery, where human beings were considered the legal property of other humans based on their race, gender, and age, which is partly why we have discrimination laws.) Today, agency law refers to principals and agents. All employees are agents of their employer, who is the principal. And employers can have nonemployee agents too when they hire someone to do things on their behalf. Generally, agents owe principals a fiduciary duty to act in the principal's best interest, even when that isn't the agent's best interest.

Agency law gets tricky fast because you have to figure out who is in charge, what authority was granted, whether the person acting was inside or outside that authority, what duty applies, and who should be held responsible as a matter of fairness and public policy. Generally, the principal is liable for the acts of the agent, sometimes even when the agent acts outside their authority. And agents acting within their authority are rarely liable for their actions unless they also involve intentional wrongs, like punching someone in the nose.

Enter discrimination, which is generally a creature of statute that may or may not be consistent with general agency law even when the words used are exactly the same. Discrimination is generally an intentional wrong, but employees are not usually directly liable for discrimination because making employment decisions is part of the way employment works, and the employer is always liable for those decisions. The big exception is harassment because harassment, particularly sexual harassment, is never part of someone's job duties.
So in harassment cases, the individual harasser is liable, but the employer may not be unless they knew what was going on and didn't do anything about it. It's confusing and makes your head hurt. And that's just federal discrimination law. Other employment laws, both state and federal, deal with agent liability differently.

Now, let's move to the Workday lawsuit. In that case, the plaintiff is claiming that Workday was an agent of the employer, but not in the sense of someone the employer was directing. They are claiming that Workday has independent liability as an employer too, because it was acting like an employer in screening and rejecting applicants for the employer. But that's kinda the whole point of HR Technology—to save the employer time and resources by doing some of the work. The software doesn't replace the employer's decision making, and the employer is going to be liable for any discrimination regardless of whether and how the employer used the software.

If this were a products liability case, the answer would turn on how the product was designed to be used and how the employer used it. But this is an employment law and discrimination case. So the legal question here is whether a company that makes HR Technology can also be directly liable for discriminatory outcomes when the employer uses that technology.

We don't have an answer to that yet and won't for a while. That's because this case is just at the pleading stage and hasn't been decided based on the evidence. What's happened so far is that Workday filed a motion to dismiss based on the allegations in the complaint. Basically, Workday said, "Hey, we're just a software company. We don't make employment decisions; the employer does. It's the employer who is responsible for using our software in a way that doesn't discriminate. So, please let us out of the case." Then the plaintiff and the EEOC said it's too soon to decide that.
If all of the allegations in the lawsuit are considered true, then the plaintiff has made viable legal claims against Workday. Those claims are that Workday's screening function acts like the employer in evaluating applications and rejecting or accepting them for the next level of review. This is similar to what third-party recruiters and other employment agencies do, and those folks are generally liable for those decisions under discrimination law. In addition, Workday could even be an agent of the employer if the employer has directly delegated that screening function to the software.

We're not yet at the question of whether a software company is really an agent of the employer or is even acting like an employment agency—and even if it is, whether it's the kind of agency that has direct liability or whether it's just the employer who ends up liable. This will all depend on statutory definitions and actual evidence about how the software is designed, how it works, and how the employer used it.

We also aren't at the point where we look at the contracts between the employer and Workday, how liability is allocated, whether there are indemnity clauses, and whether these types of contractual defenses even apply if Workday meets the statutory definition of an employer or agent who can be liable under Title VII.

Causation will also be a big issue, because how the employer sets up the software, its level of supervision of what happens with the software, and what's really going on in the screening process will all be extremely important.

The only thing that's been decided so far is that the plaintiff filed a viable claim against Workday and the lawsuit can proceed. Here are the details of the case and some good general advice for employers using HR Technology in any employment decision-making process.
- Heather Bussing

AI Workplace Screener Faces Bias Lawsuit: 5 Lessons for Employers and 5 Lessons for AI Developers
by Anne Yarovoy Khan, John Polson, and Erica Wilson at Fisher Phillips

A California federal court just allowed a frustrated job applicant to proceed with an employment discrimination lawsuit against an AI-based vendor after more than 100 employers that use the vendor's screening tools rejected him. The judge's July 12 decision allows the class action against Workday to continue based on employment decisions made by Workday's customers, on the theory that Workday served as an "agent" for all of the employers that rejected him and that its algorithmic screening tools were biased against his race, age, and disability status. The lawsuit can teach valuable lessons to employers and AI developers alike. What are five things that employers can learn from this case, and what are five things that AI developers need to know?

AI Job Screening Tool Leads to 100+ Rejections

Here is a quick rundown of the allegations contained in the complaint. It's important to remember that this case is in the very earliest stages of litigation, and Workday has not yet even provided a direct response to the allegations – so take these points with a grain of salt and recognize that they may even be proven false.

Derek Mobley is a Black man over the age of 40 who self-identifies as having anxiety and depression. He has a degree in finance from Morehouse College and extensive experience in various financial, IT help-desk, and customer service positions. Between 2017 and 2024, Mobley applied to more than 100 jobs with companies that use Workday's AI-based hiring tools – and says he was rejected every single time. He would see a job posting on a third-party website (like LinkedIn), click on the job link, and be redirected to the Workday platform. Thousands of companies use Workday's AI-based applicant screening tools, which include personality and cognitive tests.
They then interpret a candidate's qualifications through advanced algorithmic methods and can automatically reject them or advance them along the hiring process. Mobley alleges the AI systems reflect illegal biases and rely on biased training data. He notes that his race could be inferred because he graduated from a historically Black college, his age could be determined by his graduation year, and his mental disabilities could be revealed through the personality tests.

He filed a federal lawsuit against Workday alleging race discrimination under Title VII and Section 1981, age discrimination under the ADEA, and disability discrimination under the ADA. But he didn't file just any type of lawsuit. He filed a class action claim, seeking to represent all applicants like him who weren't hired because of the alleged discriminatory screening process. Workday asked the court to dismiss the claim on the basis that it was not the employer making the employment decision regarding Mobley, but after over a year of procedural wrangling, the judge gave the green light for Mobley to continue his lawsuit.

Judge Gives Green Light to Discrimination Claim Against AI Developer

Direct Participation in Hiring Process Is Key – The judge's July 12 order says that Workday could potentially be held liable as an "agent" of the employers who rejected Mobley. The employers allegedly delegated traditional hiring functions – including automatically rejecting certain applicants at the screening stage – to Workday's AI-based algorithmic decision-making tools. That means that Workday's AI product directly participated in the hiring process.

Middle-of-the-Night Email Is Critical – One of the allegations Mobley raises to support his claim that Workday's AI decision-making tool automatically rejected him involves an application he submitted to a particular company at 12:55 a.m.
He received a rejection email less than an hour later, at 1:50 a.m., making it appear unlikely that human oversight was involved.

"Disparate Impact" Theory Can Be Advanced – Once the judge decided that Workday could be a proper defendant as an agent, she then allowed Mobley to proceed against Workday on a "disparate impact" theory. That means the company didn't necessarily intend to screen out Mobley based on race, age, or disability, but that it could have set up selection criteria that had the effect of screening out applicants based on those protected criteria. In fact, in one instance, Mobley was rejected for a job at a company where he was currently working on a contract basis doing very similar work.

Not All Software Developers Are On the Hook – This decision doesn't mean that all software vendors and AI developers could qualify as "agents" subject to a lawsuit. Take, for example, a vendor that develops a spreadsheet system that simply helps employers sort through applicants. That vendor shouldn't be part of any later discrimination lawsuit, the court said, even if the employer later uses that system to purposefully sort the candidates by age and reject all those over 40 years old.

5 Tips for Employers

This lawsuit could just as easily have been filed against any of the 100+ employers that rejected Mobley, and they still may be added as parties or sued in separate actions. That is a stark reminder that employers need to tread carefully when implementing AI hiring solutions through third parties. A few tips:

Vet Your Vendors – Ensure your AI vendors follow ethical guidelines and have measures in place to prevent bias before you deploy the tool. This includes understanding the data they use to train their models and the algorithms they employ. Regular audits and evaluations of the AI systems can help identify and mitigate potential biases – but it all starts with asking the right questions at the outset of the relationship and along the way.
Work with Counsel on Indemnification Language – It's not uncommon for contracts between business partners to include language shifting the cost of litigation and resulting damages from employer to vendor. But make sure you work with counsel when developing such language in these instances. Public policy doesn't often allow you to transfer the cost of discriminatory behavior to someone else. You may want to place limits on any such indemnity as well, like certain dollar amounts or several months of accrued damages. And you'll want to make sure that your agreements contain specific guidance on what type of vendor behavior falls under whatever agreement you reach.

Consider Legal Options – Should you be targeted in a discrimination action, consider whether you can take action beyond indemnification when it comes to your AI vendors. Breach of contract claims, deceptive business practice lawsuits, or other formal legal actions to draw the third party into the litigation could work to shield you from shouldering the full responsibility.

Implement Ongoing Monitoring – Regularly monitor the outcomes of your AI hiring tools. This includes tracking the demographic data of applicants and hires to identify any patterns that may suggest bias or have a potential disparate impact. This proactive approach can help you catch and address issues before they become legal problems.

Add the Human Touch – Consider where you will insert human decision-making at critical spots along your hiring process to prevent AI bias, or the appearance of bias. While an automated process that simply screens check-the-box requirements such as necessary licenses, years of experience, educational degrees, and similar objective criteria is low risk, completely replacing human judgment when it comes to making subjective decisions stands at the peak of riskiness when it comes to the use of AI.
And make sure you train your HR staff and managers on the proper use of AI when it comes to making hiring or other employment-related decisions.

5 Tips for Vendors

While not a complete surprise given all the talk from regulators and others in government regarding concerns about bias in automated decision-making tools, this lawsuit should grab the attention of any developer of AI-based hiring tools. When taken in conjunction with the recent ACLU action against Aon Consulting over its AI screening platforms, it seems the era of government merely expressing concerns has given way to action. While plaintiffs' attorneys and government enforcement officials have typically focused on employers when it comes to alleged algorithmic bias, it was only a matter of time before they turned their attention to the developers of these products. Here are some practical steps AI vendors can take now to deal with the threat.

Commit to Trustworthy AI – Make sure the design and delivery of your AI solutions are both responsible and transparent. This includes reviewing marketing and product materials.

Review Your Work – Engage in a risk-based review process throughout your product's lifecycle. This will help mitigate any unintended consequences.

Team With Your Lawyers – Work hand-in-hand with counsel to help ensure compliance with best practices and all relevant workplace laws – not just laws prohibiting intentional discrimination, but also those barring unintentional "disparate impact," as we see in the Workday lawsuit.

Develop Bias Detection Mechanisms – Implement robust testing and validation processes to detect and eliminate bias in your AI systems. This includes using diverse training data and regularly updating your algorithms to address any identified biases.

Lean Into Outside Assistance – Collaborate with external auditors or third-party reviewers to ensure impartiality in your bias detection efforts.
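The ongoing-monitoring and bias-detection tips above have a well-known quantitative starting point: the EEOC's "four-fifths rule," which compares each group's selection rate against the highest-scoring group's rate and conventionally flags ratios below 0.8 for closer statistical review. The sketch below is a minimal, hypothetical illustration of that first-pass screen; the function names and the numbers are invented for the example, and a flagged ratio is a prompt for further analysis, not a legal finding.

```python
# Hypothetical sketch of a four-fifths-rule screen for adverse impact.
# All group names and counts are illustrative, not real data.

def selection_rate(selected, applicants):
    """Fraction of a group's applicants who passed the screening stage."""
    return selected / applicants if applicants else 0.0

def adverse_impact_ratios(stats):
    """Compare each group's selection rate to the highest group's rate.

    stats: dict mapping group name -> (selected, applicants).
    Returns a dict mapping group name -> impact ratio. Ratios below
    0.8 are conventionally flagged for further statistical review.
    """
    rates = {g: selection_rate(s, a) for g, (s, a) in stats.items()}
    best = max(rates.values())
    return {g: (r / best if best else 0.0) for g, r in rates.items()}

# Illustrative screening outcomes: (passed screening, total applicants).
screening = {
    "group_a": (48, 120),   # 40% selection rate
    "group_b": (30, 110),   # ~27% selection rate
}

ratios = adverse_impact_ratios(screening)
flagged = [g for g, r in ratios.items() if r < 0.8]
print(ratios)
print(flagged)  # ['group_b'] — its ratio (~0.68) falls below 0.8
```

In practice this check would run on real applicant-tracking data at regular intervals, and any flagged group would trigger the kind of deeper audit, statistical testing, and human review the tips above describe.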
Original article: https://www.salary.com/newsletters/law-review/agency-law-and-the-workday-lawsuit/
    ADA
    August 10, 2024