• AI regulation
    From "Banning Robot Bosses" to Human-Machine Co-Governance: How California's SB 7 Will Reshape Employment AI Compliance

    California's legislature has passed the "No Robo Bosses Act" (SB 7), and the governor must decide whether to sign it by October 12, 2025. If signed, the act takes effect on January 1, 2026 and comprehensively regulates the use of AI in employment decisions. Key provisions: employers may not rely solely on AI to make discipline, termination, or deactivation decisions; if AI is the primary basis, human review is required; workers must be notified at least 30 days before a system is used, and separately notified at termination; workers may request the relevant data once per year. Violations carry a civil penalty of $500 per violation. The bill is widely viewed as the first systematic workplace AI statute, giving workers greater transparency and data rights.

    The California legislature has passed SB 7, dubbed the "No Robo Bosses Act," and sent it to the governor. Under the state's official legislative calendar, the governor must sign or veto the bill by October 12, 2025; if signed, it takes effect on January 1, 2026. The first employment-specific AI statute in the United States, it is built around a notice-limits-oversight-remedies framework, covers nearly all "employment-related decisions" from hiring through termination, and mandates substantive human review in high-risk scenarios such as discipline, termination, or deactivation.

    1. What does SB 7 actually regulate? Key definitions and scope
    - Automated decision system (ADS): any computational process derived from machine learning, statistical modeling, data analytics, or AI that outputs a simplified result such as a score, classification, or recommendation; is used to assist or replace human discretion; and materially affects natural persons.
    - Employment-related decision: extremely broad, covering wages, benefits, hours and shifts, performance, hiring, discipline, promotion, termination, task and skill requirements, work assignments, training and access to opportunities, productivity requirements, health and safety, and more.
    - Worker: includes not only employees but also independent contractors providing services to businesses or government entities.

    The "no machines alone" red line:
    - In discipline, termination, or deactivation decisions, employers may not rely solely on an ADS.
    - If an employer primarily relies on ADS output for such a decision, a human reviewer must review that output and compile and examine other relevant information (supervisor evaluations, personnel files, work product, peer reviews, witness interviews, relevant online reviews, and the like).
    - Customer ratings may not serve as the sole or primary input for an employment decision.

    2. Notice obligations (pre-use and post-use)
    - Pre-use notice: written notice to directly affected workers at least 30 days before deployment; if a system is already in use when the law takes effect, catch-up notice no later than April 1, 2026; new hires notified within 30 days of starting. In hiring, positions that use an ADS must disclose it when an application is received or in the job posting. Required content: the decision types affected, the categories, sources, and collection methods of input data, key parameters known to skew output, the ADS creator, workers' data access and correction rights, quota explanations where applicable, and an anti-retaliation statement.
    - Post-use notice: if an ADS is primarily relied on in a discipline, termination, or deactivation decision, the employer must, when communicating the decision, also issue a separate written notice naming a contact person and how to obtain the data, stating that an ADS was used, stating the worker's right to request the data used, and restating anti-retaliation protections.

    3. Data rights and compliance baselines
    - Once every 12 months, a worker may request the personal data used over the past 12 months in discipline, termination, or deactivation decisions that primarily relied on an ADS (other individuals' information must be anonymized before release).
    - Prohibited uses: an ADS may not be used to violate labor, civil rights, or health-and-safety laws; may not infer protected characteristics; may not collect worker data for undisclosed purposes; and may not be used to profile, predict, or take adverse action against workers for exercising their rights.

    4. Enforcement and liability
    - Enforcement: the California Labor Commissioner has primary authority and may investigate, grant interim relief, issue subpoenas and citations, and bring civil actions; local prosecutors may also sue.
    - Penalties: a $500 civil penalty per violation, which can accumulate. The bill also prohibits any form of retaliation against workers who exercise their rights.
    - Timing: October 12, 2025 is the governor's sign-or-veto deadline; the law takes effect January 1, 2026 if signed.

    5. Interaction with other laws
    - CCPA/CPPA: businesses subject to the California Consumer Privacy Act and the California Privacy Protection Agency's rules on automated decision-making technology must still comply with those privacy rules.
    - Union carve-out: where a valid collective bargaining agreement expressly waives SB 7 and clearly addresses wages, hours, working conditions, and algorithmic-management protections, the law does not apply within that coverage.
    - Local higher standards: SB 7 does not preempt local laws that provide equal or greater protection.

    6. Hard questions and gray areas
    - How is "primary reliance" determined? The law sets no percentage or weighting threshold, and whether human review is substantive rather than a rubber stamp will be settled by enforcement and case law.
    - Notice and data workload: multiple systems, multiple roles, repeated rounds of notice, and data retention mean significantly higher coordination costs across HR, legal, and IT.
    - Limits on customer ratings: the "not the sole or primary input" rule will force retail, delivery, and platform-economy businesses to adjust their performance and discipline models.

    7. How other jurisdictions compare
    - New York City Local Law 144: requires businesses using automated employment decision tools (AEDTs) to conduct annual bias audits, publish the results, and notify candidates and employees at the hiring and evaluation stages.
    - Colorado SB 24-205: imposes duties of reasonable care on developers and deployers of "high-risk AI," requires impact assessments, and establishes appeal and data-correction paths; effective February 1, 2026.
    - EU AI Act: a risk-tiered regulatory model; high-risk systems must establish compliance programs and conduct fundamental rights impact assessments (FRIA), with coverage spanning employment, education, finance, and other domains.

    8. A practical roadmap for employers
    Inventory and assessment:
    - List every ADS touchpoint (hiring, performance, scheduling, monitoring, training, and so on).
    - Identify whether any discipline, termination, or deactivation pipeline involves "primary reliance."
    - Review AI vendor contracts to ensure disclosure of data sources and bias-relevant parameters.
    Notice and data management:
    - Build pre-use and post-use notice templates, and complete catch-up notices before April 1, 2026.
    - Establish a data ledger that supports worker data requests and anonymization.
    Training and drills:
    - Train human reviewers on review standards and evidence checklists.
    - Maintain dual-track (human plus ADS) records for discipline, termination, and deactivation decisions to demonstrate compliance.

    9. Scenario: terminating a frontline store worker over "low ratings"
    - Non-compliant approach: triggering termination primarily on customer star ratings.
    - Compliant approach: treat customer ratings as corroborating evidence only; have a human reviewer pull supervisor evaluations, personnel files, work samples, and peer or witness input; and issue the post-use notice alongside the decision, naming a contact, disclosing the ADS's involvement, and stating the worker's data-request right.

    10. Impact on HR tech and the supply chain
    - Product design will emphasize notice generators, human-review workbenches, data forensics with anonymized export, and flagging of bias-sensitive parameters.
    - Commercial terms will shift: SLAs will add compliance-cooperation duties, log extractability, and emergency-suspension clauses, with more careful liability allocation for high-risk scenarios.

    11. Editorial take
    SB 7's real innovation is not bias auditing or macro-level risk tiers but a direct mandate: high-risk employment decisions must have human review. This behavior-oriented, process-embedded model signals a new "human-machine co-governance" phase for HR management. The hard parts are defining "primary reliance" and ensuring review quality; over the next few years, these gray areas will determine SB 7's practical effect.

    Key sources:
    - California bill text: SB 7 Employment: automated decision systems
    - California legislative calendar: October 12, 2025 sign-or-veto deadline; effective January 1, 2026 if signed
    - Related laws: NYC Local Law 144, Colorado SB 24-205, EU AI Act
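    The human-review red line and the once-per-12-months data-request rule can be sketched as a simple compliance check. The Python sketch below is an illustration of the rules as summarized in this article, not statutory text or legal advice; the field names (`ads_is_primary`, `corroborating_sources`, and so on) are hypothetical labels for how an employer might record a decision.

    ```python
    from dataclasses import dataclass
    from datetime import date, timedelta

    # Decision types the bill treats as high risk.
    HIGH_RISK = {"discipline", "termination", "deactivation"}

    @dataclass
    class AdsDecision:
        kind: str                       # e.g. "termination"
        ads_is_primary: bool            # ADS output is the primary basis
        human_reviewed: bool            # a human reviewer examined the output
        corroborating_sources: tuple    # e.g. ("supervisor_eval", "personnel_file")
        customer_ratings_primary: bool  # ratings were the sole/primary input

    def sb7_issues(d: AdsDecision) -> list:
        """Flag ways a high-risk decision would conflict with the rules above."""
        issues = []
        if d.kind in HIGH_RISK:
            if d.ads_is_primary and not d.human_reviewed:
                issues.append("primary ADS reliance without human review")
            if d.ads_is_primary and not d.corroborating_sources:
                issues.append("no corroborating information compiled")
            if d.customer_ratings_primary:
                issues.append("customer ratings used as sole/primary input")
        return issues

    def may_request_data(last_request, today):
        """Workers may request their data once in any 12-month window."""
        return last_request is None or today - last_request >= timedelta(days=365)
    ```

    A decision like the "low ratings" scenario below (ADS-primary, no human review, ratings as the main input) would trip all three flags, while the same decision with a human reviewer and corroborating evidence would pass the first two.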
    September 23, 2025
  • AI regulation
    Workday: It’s Time to Close the AI Trust Gap

    Workday, a leading provider of enterprise cloud applications for finance and human resources, recently released a global study highlighting the importance of addressing the AI trust gap. The company argues that trust is a critical factor when implementing artificial intelligence (AI) systems, especially in areas such as workforce management and human resources. Key findings:

    - At the leadership level, only 62% welcome AI, and only 62% are confident their organization will ensure AI is implemented in a responsible and trustworthy way.
    - At the employee level, these figures drop to 52% and 55%, respectively.
    - 70% of leaders say AI should be developed in a way that easily allows for human review and intervention, yet 42% of employees believe their company does not have a clear understanding of which systems should be fully automated and which require human intervention.
    - Roughly 1 in 4 employees (23%) are not confident that their organization will put employee interests above its own when implementing AI (compared to 21% of leaders).
    - 23% of employees are not confident that their organization will prioritize innovating with care for people over innovating with speed (compared to 17% of leaders).
    - 23% of employees are not confident that their organization will ensure AI is implemented in a responsible and trustworthy way (compared to 17% of leaders).

    “We know how these technologies can benefit economic opportunities for people—that’s our business. But people won’t use technologies they don’t trust. Skills are the way forward, and not only skills, but skills backed by a thoughtful, ethical, responsible implementation of AI that has regulatory safeguards that help facilitate trust,” said Chandler C. Morse, VP, Public Policy, Workday.

    Workday’s study covers several key areas:

    Section 1: Perspectives align on AI’s potential and responsible use.
    “At the outset of our research, we hypothesized that there would be a general alignment between business leaders and employees regarding their overall enthusiasm for AI. Encouragingly, this has proven true: leaders and employees are aligned in several areas, including AI’s potential for business transformation, as well as efforts to reduce risk and ensure trustworthy AI.”

    - Both leaders and employees believe in and hope for a transformation scenario with AI.
    - Both groups agree AI implementation should prioritize human control.
    - Both groups cite regulation and frameworks as most important for trustworthy AI.

    Section 2: On the development of AI, the trust gap between leaders and employees widens further.

    “While most leaders and employees agree on the value of AI and the need for its careful implementation, the existing trust gap becomes even more pronounced when it comes to developing AI in a way that facilitates human review and intervention.”

    - Employees aren’t confident their company takes a people-first approach.
    - At all levels, there is a worry that human welfare isn’t a leadership priority.

    Section 3: Data on AI governance and use is not readily visible to employees.

    “While employees are calling for regulation and ethical frameworks to ensure that AI is trustworthy, there is a lack of awareness across all levels of the workforce when it comes to collaborating on AI regulation and sharing responsible AI guidelines.”

    Closing remarks: How Workday is closing the AI trust gap.

    - Transparency: Workday can prioritize transparency in its AI systems. Clear explanations of how AI algorithms make decisions can help build trust among users. By revealing the factors, data, and processes that contribute to AI-driven outcomes, Workday can ensure transparency in its AI applications.
    - Explainability: Workday can work towards making its AI systems more explainable, enabling users to understand the reasoning behind AI-generated recommendations or decisions. Techniques such as interpretable machine learning can help users comprehend the logic and factors influencing AI-driven outcomes.
    - Ethical considerations: ethical frameworks and guidelines for AI use can play a crucial role in closing the trust gap. Workday can ensure that its AI systems align with ethical principles such as fairness, accountability, and bias avoidance. This may involve rigorous testing, auditing, and ongoing monitoring of AI models to detect and mitigate potential biases or unintended consequences.
    - User feedback and collaboration: engaging with users and seeking their feedback is key to building trust. Workday can involve its customers and end users in the AI development process, gathering insights and acting on user concerns. Collaboration and open communication will help Workday enhance its AI systems based on real-world feedback and user needs.
    - Data privacy and security: robust data privacy and security measures are vital for instilling trust in AI systems. Workday can prioritize data protection and encryption, complying with industry standards and regulations. By demonstrating strong data privacy practices, it can alleviate concerns associated with AI-driven data processing.

    Source: Workday
    January 11, 2024