• Engineering
    Latest ruling in the Tesla H-1B discrimination case: one "H-1B only" email lets an engineer's claims survive first review, while an HR executive's are dismissed over the nature of her role

Core summary: On February 23, a US federal judge declined to dismiss the core claims in engineer Scott Taub's proposed class action against Tesla. Taub alleges that staffing firm Max Eleven explicitly told him a Tesla position was "H-1B only", shutting him out as a US citizen. Judge Chhabria found the email thin evidence, but enough to sustain the discrimination claim. Co-plaintiff Sofia Brander, an HR executive, saw her claims dismissed: the judge reasoned that Tesla's alleged preference for H-1B workers is concentrated in technical roles, and extending it to HR positions "wouldn't make sense."

Case background: one email, one federal lawsuit

On February 23, 2026, Judge Vince Chhabria of the US District Court for the Northern District of California declined to dismiss the core claims of the proposed class action that engineer Scott Taub filed against Tesla, Inc. The ruling means the lawsuit, which accuses Tesla of systematically favoring H-1B visa holders in hiring and discriminating against US citizens, will proceed to the next stage of judicial review.

It all began with an utterly ordinary recruiting email. Taub received an outreach from Mr. Sainik, a contact at third-party staffing firm Max Eleven, for a QA Automation Engineer position at Tesla. The email stated that the role was "H1B only", open solely to candidates holding H-1B work visas, and asked for "Travel history / i94" (entry records and arrival/departure card information). Reading that line, Taub, a US citizen, gave up on applying. He then sued Tesla in federal court for employment discrimination and sought class certification.

Case facts: Taub et al v. Tesla, Inc., No. 3:25-cv-07785, US District Court for the Northern District of California, before Judge Vince Chhabria; ruling dated February 23, 2026. Plaintiff 1: Scott Taub (engineer, claims sustained). Plaintiff 2: Sofia Brander (HR executive, claims dismissed). Defendant: Tesla, Inc. Third-party staffing firm involved: Max Eleven.

The judge's ruling: evidence thin, but enough to keep the suit alive

In the order, Judge Chhabria conceded this was a "close question", unusually candid language that signals judicial caution. His core logic: at this early stage of review, all inferences must be drawn in the plaintiff's favor.

He wrote: "The plaintiffs have pled just enough facts to allege that Tesla discriminated against US citizens."

The judge also noted that one can reasonably infer that when Max Eleven told Taub the position was "H-1B only", it was relaying Tesla's hiring preference rather than screening candidates on its own initiative.

At the same time, he expressed reservations about the quality of the evidence, pointing to ambiguity in the email's original context: Taub's own question was not attached to the complaint, and the email's formatting makes it hard to tell whether it was a reply. As the order puts it: "All of this causes the court to be somewhat skeptical of Taub's allegations. But because all inferences must be drawn in Taub's favor at this stage, the Max Eleven email and the allegations related to it are sufficient to state a claim for discrimination."

Two plaintiffs, two outcomes: why was the HR executive dismissed?

The case has two plaintiffs with starkly different outcomes. Engineer Taub's claims survive; HR executive Sofia Brander's claims were dismissed in full.

Brander likewise alleged that Tesla discriminatorily excluded her from job opportunities because of her US citizenship. The judge, however, found a fundamental logical gap in that theory. The order states: "The complaint alleges that Tesla uses H-1B workers for 'specialized roles in engineering, research, and design,' so it wouldn't make sense for Tesla to favor H-1B workers for HR roles."
In other words, the court held that Tesla's reliance on H-1B workers is concentrated in engineering, R&D, and other highly technical roles, and that extending this logic to HR positions does not hold up. Brander was therefore found to lack a sufficient basis for her claims.

Notably, the court gave Brander 14 days to amend her complaint and decide whether to pursue the case. Her claims are not permanently foreclosed; with stronger factual allegations, she could re-enter the litigation.

More from the complaint: the numbers behind a systemic allegation

Can a single email carry a class action? The complaint alleges far more. According to the publicly available complaint (filed September 2025), the plaintiffs cite a series of figures to argue that Tesla's H-1B preference is not a one-off but a systemic hiring policy:

In 2024, Tesla hired roughly 1,355 H-1B visa holders while laying off more than 6,000 US-based employees, the vast majority of them US citizens.

Tesla brings in large numbers of H-1B workers through third-party staffing firms including Max Eleven, West Valley Staffing, ManPower, Balance Staffing, and Nelson Staffing; these hires may not be counted in official H-1B filings, a regulatory blind spot.

Max Eleven's contact Mr. Sainik not only wrote "H1B only" in his email but also asked candidates for "Travel history / i94", a classic visa-status screening step.

Tesla, in court filings, has called the allegations "preposterous" and denies any systemic discrimination.

Judge Chhabria was relatively cool toward these systemic allegations, making clear that of everything pleaded, only the part about the staffing firm's email has value; the statistics by themselves are not direct evidence of discrimination. But the numbers will still matter later in the case, supporting the plaintiffs' pattern-and-practice theory of systemic discrimination.

Compliance warning for North American HR practitioners: where is your risk?

For HR professionals at companies operating in the US, this case is not a remote legal event but an industry alarm bell that bears directly on day-to-day practice.

Risk 1: what your third-party recruiter says is your legal exposure too

The most disruptive legal logic in this case: the judge made clear that it is reasonable to infer the staffing firm was relaying Tesla's preference, not making its own call. That means a single sentence in an email sent by your external headhunter or staffing vendor can become the basis for a lawsuit against your company.

Compliance advice: review all external recruiting agreements immediately, add explicit clauses barring partners from any visa-status screening language in candidate communications, and require partners to report samples of their recruiting communications on a regular schedule.

Risk 2: "H-1B only" is high-risk language, even when spoken

Under the US employment-discrimination framework, restricting job opportunities on the basis of national origin or citizenship/immigration status can, in certain circumstances, violate INA § 274B, which expressly prohibits employment discrimination based on citizenship status. Even a preference relayed indirectly through a staffing firm can be treated as an extension of the employer's hiring policy.

Risk 3: tying job descriptions to visa status is a high-risk practice

Binding a specific position to a specific visa type, whether in a job description, a recruiting brief, or an internal memo, can become direct evidence of discriminatory intent. When communicating role requirements to recruiters, HR teams should strictly distinguish a "work authorization requirement" from a "visa type preference". The former has a legal basis for certain sensitive roles; the latter has almost no legal cover.

Compliance actions to take now:

Audit all agreements with external recruiting agencies to ensure they contain explicit anti-discrimination clauses, especially prohibitions on visa-status screening. Require external recruiting partners to submit standardized candidate-communication templates for review by in-house legal or HR compliance. Train internal recruiting teams on the line between "work authorization status" (lawful to ask about) and "visa type preference" (generally off-limits). Establish a hiring-compliance reporting channel so employees can safely flag problems. If the company uses multiple staffing vendors, review each one's operating practices individually; contract clauses alone do not address execution-level risk.

The bigger picture: H-1B tightening and the new compliance normal

This lawsuit is not an isolated event. It arrives at a moment when the US political and policy environment is highly sensitive about H-1B visas. Controversy over the H-1B program has simmered for years, and the "American workers first" narrative runs through both parties' agendas. Against this backdrop, corporate H-1B hiring practices face scrutiny from multiple directions: not only government regulators, but also employees, competitors, and public opinion.

Notably, the accuser in this case is not a government agency but an individual job seeker. Employment-discrimination risk therefore comes not only from enforcement by the Department of Labor or immigration authorities, but from any candidate who felt unfairly treated during the hiring process.
For companies operating in the US that rely heavily on the H-1B talent pipeline, the case sends a clear signal: hiring compliance must move down from policy to operations, and extend from internal rules out to the communication details of external partners. A single email from a third-party staffing firm is enough to start a federal class action.

What happens next

Tesla has not yet publicly commented on the case. Sofia Brander has 14 days to decide whether to amend her complaint. Scott Taub's claims move into the substantive phase of the litigation, with discovery ahead for both sides. HR Tech will keep tracking the case and bring North American HR practitioners first-hand compliance news.

This article is for professional reference only and does not constitute legal advice. For legal counsel, contact a licensed employment attorney.

1. Full text of the original Complaint (PDF), filed September 12, 2025 and publicly available: https://casefilingsalert.com/wp-content/uploads/2025/09/Tesla-Suit-re-H-1B-Visas.pdf
    March 3, 2026
  • Engineering
    Canada's Two Staffing Industry Associations, ACSESS and NACCB, Announce Merger

Summary: The National Association of Canadian Consulting Businesses (NACCB) and the Association of Canadian Search, Employment and Staffing Services (ACSESS) have decided to merge, with the merger planned to take effect on January 1, 2025. The goal is to represent Canada's staffing industry through a single unified organization and to support members with regulatory updates, compliance education, and operational tools. NACCB president Michael Leacy noted that independent contractors will remain a priority for the combined association, while ACSESS will continue to provide guidance on legislative and regulatory change. Leaders of both associations agree the alliance will better serve firms in professional staffing for IT, engineering, finance, and law, and strengthen the industry's influence in policymaking.

The National Association of Canadian Consulting Businesses (NACCB) will merge with the Association of Canadian Search, Employment and Staffing Services (ACSESS). The plan is for the combined organization to represent Canada's staffing industry with one unified voice.

NACCB represents professional staffing firms and the consulting industry. According to the letter announcing the merger, the decision recognizes the evolution of the staffing industry, which has drawn NACCB's and ACSESS's mandates ever closer together. With that evolution, the two organizations' missions and strategic goals have also converged.

"NACCB has always focused on professional staffing, including IT, engineering, finance, and law," NACCB president Michael Leacy said in the letter announcing the merger. "In the merged association, the role of independent contractors will remain an important area of focus and support."

Leacy continued: "Our members will benefit from ACSESS's guidance on national legislative and regulatory change and will be able to receive relevant advice. Through this unified association, they will also gain compliance education and a range of operational tools."

Earlier this year, under the direction of Leacy and ACSESS president Darlene Minatel, the two sides held methodical negotiations. Both organizations unanimously approved the merger resolution, which takes effect on January 1, 2025.

"The leadership of each organization recognized that the expanded association's Canada-wide representation will help preserve the important interests of professional staffing firms and the consulting industry in policy decisions," the merger announcement said.
    November 5, 2024
  • Engineering
    Josh Bersin: AI Implementations Increasingly Look Like Traditional IT Projects

Josh Bersin's article "AI implementations look more and more like traditional IT projects" offers five main findings:

Data management: data quality, governance, and architecture matter in AI projects just as they do in IT projects. Security and access management: AI implementations demand strong security measures and access controls. Engineering and monitoring: AI needs ongoing engineering support and monitoring, much like IT infrastructure management. Vendor management: AI projects require thorough vendor evaluation and selection. Change management and training: effective change management and training are essential to AI and IT projects alike.

The original article follows:

As we learn more and more about corporate implementations of AI, I’m struck by how they feel more like traditional IT projects every day. Yes, Generative AI systems have many special characteristics: they’re intelligent, we need to train them, and they have radical and transformational impact on users. And the back-end processing is expensive. But despite the talk about advanced models and life-like behavior, these projects have traditional aspects. I’ve talked with more than a dozen large companies about their various AI strategies and I want to encourage buyers to think about the basics. Finding 1: Corporate AI projects are all about the data. Unlike the implementation of a new ERP system, payroll system, recruiting, or learning platform, an AI platform is completely data dependent. Regardless of the product you’re buying (an intelligent agent like Galileo™, an intelligent recruiting system like Eightfold, or an AI-enabling platform to provide sales productivity), success depends on your data strategy. If your enterprise data is a mess, the AI won’t suddenly make sense of it. This week I read a story about Microsoft’s Copilot promoting election lies and conspiracy theories. While I can’t tell how widespread this may be, it simply points out that “you own the data quality, training, and data security” of your AI systems. Walmart’s My Assistant AI for employees already proved itself to be 2-3x more accurate at handling employee inquiries about benefits, for example. But in order to do this the company took advantage of an amazing IT architecture that brings all employee information into a single profile, a mobile experience with years of development, and a strong architecture for global security. 
One of our clients, a large defense contractor, is exploring the use of AI to revolutionize its massive knowledge management environment. While we know that Gen AI can add tremendous value here, the big question is “what data should we load” and how do we segment the data so the right people access the right information? They’re now working on that project. During our design of Galileo we spent almost a year combing through the information we’ve amassed for 25 years to build a corpus that delivers meaningful answers. Luckily we had been focused on data management from the beginning, but if we didn’t have a solid data architecture (with consistent metadata and information types), the project would have been difficult. So core to these projects is a data management team who understands data sources, metadata, and data integration tools. And once the new AI system is working, we have to train it, update it, and remove bias and errors on a regular basis. Finding 2: Corporate AI projects need heavy focus on security and access management. Let’s suppose you find a tool, platform, or application that delivers a groundbreaking solution to your employees. It could be a sales automation system, an AI-powered recruiting system, or an AI application to help call center agents handle problems. Who gets access to what? How do you “layer” the corpus to make sure the right people see what they need? This kind of exercise is the same thing we did at IBM in the 1980s, when we implemented this complex but critically important system called RACF. I hate to promote my age, but RACF designers thought through these issues of data security and access management many years ago. AI systems need a similar set of tools, and since the LLM has a tendency to “consolidate and aggregate” everything into the model, we may need multiple models for different users. 
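The "layering" of a corpus that the passage above describes can be reduced to a simple idea: filter by the caller's entitlements before retrieval, so restricted text never reaches the model. A minimal sketch under assumed names (`Doc`, `ROLE_LAYERS`, `retrieve` are all hypothetical, not any vendor's API):

```python
# Illustrative sketch of corpus "layering": documents carry an access layer,
# and retrieval filters by the requesting user's role BEFORE any ranking,
# so restricted text never enters the LLM's context window.
from dataclasses import dataclass

@dataclass
class Doc:
    text: str
    layer: str  # e.g. "public", "hr", "payroll"

# Which corpus layers each role may read; payroll stays tightly restricted.
ROLE_LAYERS = {
    "employee":   {"public"},
    "hr_partner": {"public", "hr"},
    "comp_admin": {"public", "hr", "payroll"},
}

def retrieve(corpus: list[Doc], role: str, query: str) -> list[Doc]:
    """Return candidate context documents for a query, visible to this role."""
    allowed = ROLE_LAYERS.get(role, set())
    visible = [d for d in corpus if d.layer in allowed]
    # Toy relevance check; a real system would use embeddings or search.
    return [d for d in visible if query.lower() in d.text.lower()]

corpus = [
    Doc("Holiday calendar for 2024", "public"),
    Doc("Performance review guidelines", "hr"),
    Doc("Salary bands by level", "payroll"),
]

# An employee asking about salary gets nothing back; a comp admin does.
assert retrieve(corpus, "employee", "salary") == []
assert len(retrieve(corpus, "comp_admin", "salary")) == 1
```

The design point is that the filter runs outside the model: once private data has been blended into a single LLM context (or fine-tuned into the weights), per-user restriction is no longer enforceable, which is why the article suggests multiple models for different users.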
In the case of HR, if we build a talent intelligence database using Eightfold, Seekout, or Gloat which includes job titles, skills, levels, and details about credentials and job history, and then we decide to add “salary” … oops … all of a sudden we have a data privacy problem. I just finished an in-depth discussion with SAP-SuccessFactors going through the AI architecture, and what you see is a set of “mini AI apps” developed to operate in Joule (SAP’s copilot) for various use cases. SAP has spent years building workflows, access patterns, and various levels of user security. They designed the system to handle confidential data securely. Remember also that tools like ChatGPT, which access the internet, can possibly import or leak data in a harmful way. And users may accidentally use Gen AI tools to create unacceptable content or dangerous communications, or invoke other “jailbreak” behaviors. In your talent intelligence strategy, how will you manage payroll data and other private information? If the LLM uses this data for analysis we have to make sure that only appropriate users can see it. Finding 3: Corporate AI projects need focus on “prompt engineering” and system monitoring. In a typical IT project we spend a lot of time on the user experience. We design portals, screens, mobile apps, and experiences with the help of UI designers, artists, and craftsmen. But in Gen AI systems we want the user to “tell us what they’re looking for.” How do we train or support the user in prompting the system well? If you’ve ever tried to use a support chatbot from a company like Paypal you know how difficult this can be. I spent weeks trying to get Paypal’s bot to tell me how to shut down my account, but it never came close to giving me the right answer. (Eventually I figured it out, even though I still get invoices from a contractor who is now deceased!) We have to think about these issues. 
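The prompting difficulty described above is one reason teams curate reusable, tested prompts rather than leaving users to free-type. A minimal sketch of the idea, with all names (`PROMPT_LIBRARY`, `build_prompt`, the template IDs) hypothetical rather than any product's actual API:

```python
# Illustrative sketch of a "prompt library": short user intents map to
# curated, pre-tested prompt templates, which are filled in and sent to
# the model instead of the user's raw free-text question.
PROMPT_LIBRARY = {
    "pipeline_status": (
        "List all candidates currently at stage {stage} of the {role} "
        "pipeline, with name, source, and days in stage."
    ),
    "benefits_question": (
        "Answer the following employee benefits question using only the "
        "official policy corpus, and cite the policy section: {question}"
    ),
}

def build_prompt(template_id: str, **fields: str) -> str:
    """Expand a curated template with the caller's fields, so the model
    receives a well-formed prompt rather than an ambiguous one-liner."""
    template = PROMPT_LIBRARY[template_id]
    return template.format(**fields)

prompt = build_prompt("pipeline_status", stage="3", role="QA Engineer")
assert "stage 3" in prompt and "QA Engineer" in prompt
```

In practice the library grows out of the monitoring loop the article describes next: interactions that fail become new or revised templates.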
In our case, we’ve built a “prompt library” and series of workflows to help HR professionals get the most out of Galileo to make the system easy to use. And vendors like Paradox, Visier (Vee), and SAP are building sophisticated workflows that let users ask a simple question (“what candidates are at stage 3 of the pipeline”) and get a well-formatted answer. If you ask a recruiting bot something like “who are the top candidates for this position” and plug it into the ATS, will it give you a good answer? I’m not sure, to be honest – so the vendors (or you) have to train it and build workflows to predict what users will ask. This means we’ll be monitoring these systems, looking at interactions that don’t work, and constantly tuning them to get better. A few years ago I interviewed the VP of Digital Transformation at DBS (Digital Bank of Singapore), one of the most sophisticated digital banks in the world. He told me they built an entire team to watch every click on the website so they could constantly move buttons, simplify interfaces, and make information easier to find. We’re going to need to do the same thing with AI, since we can’t really predict what questions people will ask. Finding 4: Vendors will need to be vetted. The next “traditional IT” topic is going to be the vetting of vendors. If I were a large bank or insurance company and I was looking at advanced AI systems, I would scrutinize the vendor’s reputation and experience in detail. Just because a firm like OpenAI has built a great LLM doesn’t mean that they, as a vendor, are capable of meeting your needs. Does the vendor have the resources, expertise, and enterprise feature set you require? I recently talked with a large enterprise in the Middle East who has major facilities in Saudi Arabia, Dubai, and other countries in the region. They do not and will not let user information, queries, or generated data leave their jurisdiction. Does the vendor you select have the ability to handle this requirement? 
Small AI vendors will struggle with these issues, leading IT to do risk assessment in a new way. There are also consultants popping up who specialize in “bias detection” or testing of AI systems. Large companies can do this themselves, but I expect that over time there will be consulting firms who help you evaluate the accuracy and quality of these systems. If the system is trained on your data, how well have you tested it? In many cases the vendor-provided AI uses data from the outside world: what data is it using and how safe is it for your application? Finding 5: Change management, training, and organization design are critical. Finally, as with all technology projects, we have to think about change management and communication. What is this system designed to do? How will it impact your job? What should you do if the answers are not clear or correct? All these issues are important. There’s a need for user training. Our experience shows that users adopt these systems quickly, but they may not understand how to ask a question or how to interpret an answer. You may need to create prompt libraries (like Galileo), or interactive conversation journeys. And then offer support so users can resolve answers which are wrong, unclear, or inconsistent. And most importantly of all, there’s the issue of roles and org design. Suppose we offer an intelligent system to let sales people quickly find answers to product questions, pricing, and customer history. What is the new role of sales ops? Do we have staff to update and maintain the quality of the data? Should we reorganize our sales team as a result? We’ve already discovered that Galileo really breaks down barriers within HR, for example, showing business partners or HR leaders how to handle issues that may be in another person’s domain. These are wonderful outcomes which should encourage leaders to rethink how the roles are defined. 
In our company, as we use AI for our research, I see our research team operating at a higher level. People are sharing information, analyzing cross-domain information more quickly, and taking advantage of interviews and external data at high speed. They’re writing articles more quickly and can now translate material into multiple languages. Our member support and advisory team, who often rely on analysts for expertise, are quickly becoming consultants. And as we release Galileo to clients, the level of questions and inquiries will become more sophisticated. This process will happen in every sales organization, customer service organization, engineering team, finance, and HR team. Imagine the “new questions” people will ask. Bottom Line: Corporate AI Systems Become IT Projects At the end of the day the AI technology revolution will require lots of traditional IT practices. While AI applications are groundbreakingly powerful, the implementation issues are more traditional than you think. I will never forget the failed implementation of Siebel during my days at Sybase. The company was enamored with the platform, bought it, and forced us to use it. Yet the company never told us why they bought it, explained how to use it, or built workflows and job roles to embed it into the company. In only a year Sybase dumped the system after the sales organization simply rejected it. Nobody wants an outcome like that with something as important as AI. As you learn and become more enamored with the power of AI, I encourage you to think about the other tech projects you’ve worked on. It’s time to move beyond the hype and excitement and think about real-world success.
    December 17, 2023