• transparency
77% of U.S. Workers Say AI Helps Them Do Their Jobs Better

77% of U.S. workers who use AI say it makes their work more effective, 52% already use it daily, and writing and data analysis are the most popular uses. But 60% are uneasy about AI being involved in hiring and promotion decisions. How to balance efficiency with trust is becoming a new question for HR, and the data is worth a look for every HR leader.

As artificial intelligence is rapidly woven into day-to-day business operations, workplace attitudes toward AI are shifting in substance. A recent nationwide survey from Sonara, a U.S. AI-powered job-search automation platform, shows that AI is no longer a "pilot tool" but a standard capability that more and more employees rely on to get their work done.

The survey of more than 3,000 U.S. professionals finds that 77% of AI users say the technology helps them do their jobs better, a clear productivity dividend. At the same time, 70% of employers already allow AI tools at work, and 37% actively encourage adoption, a sign that organizational openness to AI is forming quickly.

In terms of actual use cases, AI has reached deep into the core workflows of knowledge jobs. The most common uses are writing and editing (69%), ideation and brainstorming (53%), data analysis and report generation (29%), plus design work and meeting-note summarization. These high-frequency, repetitive tasks are the first to be automated or augmented, freeing employees to spend time on work that calls for more judgment and creativity.

Efficiency gains have not erased concerns, however. The survey also finds that 55% of employees believe AI is already involved in hiring or promotion evaluations, and 60% feel uncomfortable or worried about it. This coexistence of recognized productivity and missing trust in decisions points to the governance challenge of putting AI into practice: employees want more transparency, clearer rules, and guarantees of algorithmic fairness.

Skills readiness is another key variable. Only 30% of respondents believe their abilities fully match the demands of future roles, while more than half say they are only a partial match. Companies that introduce the technology without systematic reskilling and capability building may struggle to realize the value of their AI investment.

Sonara career expert Keith Spencer notes that the core question facing companies has shifted from "whether to use AI" to "how to use AI responsibly." Transparent usage policies, clear decision-making boundaries, and continuous upskilling, he stresses, will be key to earning employee trust.

Taken as a whole, the report sends a clear signal: AI is becoming workplace infrastructure, not an optional add-on. For HR and management, the next stage of competitive advantage will come not just from how fast tools are deployed, but from whether governance, organizational change, and talent upskilling advance in step, balancing efficiency gains with employee trust. As generative AI permeates everyday office work, the truly leading companies may not be the ones that use it the most, but the ones that use it most reliably, most transparently, and with the broadest acceptance from their employees.
    February 9, 2026
  • transparency
[AI "Hallucinations" Backfire] Deloitte Refunds the Australian Government Part of an AU$440,000 Fee: AI Accountability and Trust in the Spotlight

Deloitte has returned part of its project fee to the Australian government after an AI-assisted report was found to contain fabricated citations. The report, worth roughly AU$440,000, was drafted with help from OpenAI's GPT-4o. The heart of the matter is not that "the AI made mistakes" but that "no human checked them." Once AI enters professional decision-making scenarios such as compliance review, HR management, and performance evaluation, the risk is no longer a technical problem; it is a question of accountability.

On October 6, 2025, major international outlets including the Financial Times, The Guardian, and Business Insider reported that Deloitte, one of the "Big Four" accounting firms, had refunded part of its consulting fee to the Australian government because an AI-assisted official report contained serious citation errors and fabricated references.

The report, valued at roughly AU$440,000 (about RMB 2.1 million) and titled "Targeted Compliance Framework Assurance Review," was commissioned by Australia's Department of Employment and Workplace Relations (DEWR) to assess how well the government's welfare and compliance systems were working. Shortly after publication, academics found numerous inaccurate citations, erroneous footnotes, and even fabricated academic sources, and media reports subsequently revealed that parts of the report had been written with generative AI.

[AI helped write the report? Deloitte admits it and refunds the fee]

As the controversy grew, Deloitte acknowledged that parts of the report had been generated with Microsoft Azure OpenAI GPT-4o. Because AI "hallucinations" produced fabricated references and distorted examples, Deloitte agreed to refund the final installment of the contract, and the Australian government confirmed it would publish the contract details.

In its statement, Deloitte stressed that the report's core conclusions and policy recommendations were unaffected and that the errors were confined to citations and footnotes. That did little to quiet public and legislative criticism. Australian Labor Senator Deborah O'Neill put it bluntly: Deloitte has "a human intelligence problem," not an AI problem. She called for all consulting firms working with government to disclose the scope and extent of AI use in their projects and to establish AI review and verification mechanisms.

[Timeline]
July: Deloitte delivers the final report without disclosing its use of AI.
August: Academics find the report cites non-existent cases and research literature.
September: Media report that Deloitte is conducting an internal review and preparing revisions.
October 6: Deloitte publicly acknowledges using AI and confirms the refund; the government says it will tighten disclosure requirements in future contracts.

[A trust crisis for consulting: from AI to the boundaries of responsibility]
The episode exposes the risks of AI-generated text, but it also forces the consulting and public-service sectors to rethink where responsibility sits. When AI drafts reports, policies, or analyses without a human review mechanism, errors are no longer merely "technical problems"; they become a crisis of professional trust.

AI hallucinations are not rare: when a model lacks factual grounding, it generates plausible-sounding but fabricated information. When such content lands in government reports, policy research, or corporate audits, the consequences go beyond "academic blemishes" and can directly affect public decisions and fiscal accountability.

Experts note that AI can be used for information gathering and preliminary analysis, but in contexts involving law, policy, and public governance there must be three lines of defense: algorithm auditing, human review, and clear attribution of responsibility (a minimal illustration of automated citation pre-screening follows this article). Otherwise, the more efficient AI becomes, the faster errors spread.

[Lessons for HR and corporate management]
This "AI hallucination" storm is not just a lesson for the consulting industry; it is a loud signal for human resources and organizational management. AI adoption is rapidly reshaping recruiting, performance, training, and compliance, but if HR does not hold the authority to oversee how AI is used and governed, that authority will be taken over by the technology function.

Responsible AI governance is not a technical topic; it is an extension of organizational culture and values. HR must know not only how to apply AI to improve efficiency, but also how to set boundaries for it. AI can accelerate decisions, but only humans can take responsibility for them. In the AI era, HR is the organization's last gatekeeper of trust.

Deloitte's refund may be just one of many "hallucination cases" of the AI era, but it brings a basic point back into focus: the technology does not fail on its own; what fails is the people who give up reviewing and judging.
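The "human review" line of defense described above can be partly tool-assisted. Below is a minimal, hypothetical Python sketch, not Deloitte's or DEWR's process and no substitute for a human reviewer: it pulls DOI-style citations out of a draft and checks whether each one resolves against the public Crossref API, so references that cannot be found get flagged for manual checking. A reference that does resolve still has to be read to confirm it supports the claim attached to it.

import re
import urllib.error
import urllib.parse
import urllib.request

DOI_PATTERN = re.compile(r"10\.\d{4,9}/[^\s\"<>]+")  # rough DOI matcher

def extract_dois(text: str) -> list[str]:
    """Pull DOI-like strings out of a draft report."""
    return sorted({m.group(0).rstrip(".,;)") for m in DOI_PATTERN.finditer(text)})

def doi_resolves(doi: str, timeout: float = 10.0) -> bool:
    """Return True if the public Crossref registry knows this DOI."""
    url = "https://api.crossref.org/works/" + urllib.parse.quote(doi)
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status == 200
    except urllib.error.HTTPError:
        return False   # e.g. 404: DOI not registered, flag for human review
    except urllib.error.URLError:
        raise          # network problem: fail loudly rather than pass the check

def screen_draft(text: str) -> dict[str, bool]:
    """Map each cited DOI to whether it resolves; unresolved ones need review."""
    return {doi: doi_resolves(doi) for doi in extract_dois(text)}

if __name__ == "__main__":
    draft = "See Smith (2021), doi:10.1000/xyz123, for the framework."  # illustrative text and DOI only
    for doi, ok in screen_draft(draft).items():
        print(("OK     " if ok else "REVIEW ") + doi)

A pre-screen like this only catches references that do not exist at all; hallucinated citations that point to real but irrelevant papers, or quotes that misrepresent a real source, still require a human reader.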
    October 7, 2025
  • transparency
From "Banning Robo Bosses" to "Human-Machine Co-Governance": How California's SB 7 Would Reshape Workplace AI Compliance

California's legislature has passed the "No Robo Bosses Act" (SB 7), and the governor must decide by October 12 whether to sign it. If enacted, the law takes effect on January 1, 2026 and comprehensively regulates the use of AI in employment decisions. Key provisions: employers may not rely solely on AI to make discipline, termination, or deactivation decisions; if they rely on AI primarily, there must be human review; workers must be notified at least 30 days before use and separately notified at termination; and workers may request their related data once a year. Violations carry a civil penalty of $500 per violation. The bill is seen as one of the first systematic attempts anywhere to regulate AI in the workplace, giving workers greater transparency and data rights.

The full text follows for reference:

The California legislature has passed SB 7, known as the "No Robo Bosses Act," and sent it to the governor. Under the state's legislative calendar, the governor must sign or veto the bill by October 12, 2025; if signed, the law takes effect on January 1, 2026. This first-of-its-kind employment AI statute is built around "notice, limits, oversight, and remedies," covers nearly every "employment-related decision" from hiring to termination, and mandates substantive human review in high-risk scenarios such as discipline, termination, or deactivation.

1. What exactly does SB 7 regulate?

Key definitions and scope
Automated decision system (ADS): any computational process derived from machine learning, statistical modeling, data analytics, or AI that produces simplified outputs such as scores, classifications, or recommendations, is used to assist or replace human discretion, and materially affects natural persons.
Employment-related decision: extremely broad, covering pay, benefits, hours and shifts, performance, hiring, discipline, promotion, termination, task and skill requirements, work assignments, training and access to opportunities, productivity requirements, health and safety, and more.
Worker: not only employees but also independent contractors providing services to businesses or government entities.

The "no machines alone" red line
In discipline, termination, or deactivation decisions, employers may not rely solely on an ADS.
If an employer relies primarily on ADS output for such a decision, a human reviewer must examine the ADS output and compile and review other relevant information (such as supervisor evaluations, personnel files, work product, peer feedback, witness interviews, or relevant online reviews).
Customer ratings may not be used as the sole or primary input for employment decisions.

2. Notice obligations

Pre-use notice: written notice must go to directly affected workers at least 30 days before deployment; for systems already in use when the law takes effect, catch-up notice is due no later than April 1, 2026; new hires must be notified within 30 days of starting.
Hiring: for roles where an ADS is used, applicants must be informed when their application is received or in the job posting. The notice must cover the types of decisions affected; the categories, sources, and collection methods of input data; key parameters known to skew outputs; who created the ADS; workers' rights to access and correct their data; quota explanations where applicable; and an anti-retaliation statement.
Post-use notice: if an employer relies primarily on an ADS in a discipline, termination, or deactivation decision, it must, alongside that decision, give the worker a separate written notice that identifies a human contact and how to obtain the data, states that an ADS was used, explains the worker's right to request the data used, and restates the prohibition on retaliation.

3. Data rights and the compliance floor

Workers may make one request every 12 months for the data about them used over the past 12 months in discipline, termination, or deactivation decisions that relied primarily on an ADS (other individuals' information must be anonymized before release).
Prohibited uses: an ADS may not be used in violation of labor, civil-rights, or health-and-safety laws; may not infer protected characteristics; may not collect worker data for undisclosed purposes; and may not be used to profile, predict, or take adverse action against workers for exercising their rights.

4. Enforcement and liability

Enforcement: the California Labor Commissioner takes the lead and may investigate, grant interim relief, issue subpoenas and citations, and bring civil actions; local prosecutors may also sue.
Penalties: a civil penalty of $500 per violation, which can accumulate. The bill also bans any form of retaliation against workers who exercise their rights.
Timing: October 12, 2025 is the governor's deadline to sign or veto; the law takes effect January 1, 2026 if signed.

5. Interaction with other laws

CCPA/CPPA: businesses subject to the California Consumer Privacy Act and the California Privacy Protection Agency's rules on automated decision-making technology must still comply with those privacy rules.
Union carve-out: where a valid collective bargaining agreement expressly waives SB 7 and clearly addresses wages, hours, working conditions, and protections around algorithmic management, the law does not apply within that coverage.
Higher local standards: SB 7 does not displace local rules that provide equal or greater protection.

6. Hard questions and gray areas

How is "primary reliance" determined? The law sets no percentage or weighting threshold, and whether review is substantive rather than a box-ticking exercise will depend on enforcement and case law.
Notice and data workload: multiple systems, multiple roles, and multiple rounds of notice plus data retention mean noticeably higher coordination costs across HR, legal, and IT.
The limits of customer ratings: the rule that ratings may not be the "sole or primary input" will force retail, delivery, and platform businesses to adjust their performance and discipline models.

7. How other jurisdictions compare

New York City Local Law 144: employers using automated employment decision tools (AEDTs) must conduct annual bias audits, publish the results, and notify candidates and employees at the hiring and evaluation stage.
Colorado SB 24-205: imposes a duty of reasonable care on developers and deployers of "high-risk AI," requires impact assessments, and establishes appeal and data-correction pathways; it takes effect February 1, 2026.
EU AI Act: adopts risk-tiered regulation; high-risk systems must operate a compliance program and conduct fundamental rights impact assessments (FRIA), with coverage spanning employment, education, finance, and other domains.

8. A practical roadmap for employers

Inventory and assessment: list every point where an ADS is used (hiring, performance, scheduling, monitoring, training, and so on); identify whether any discipline/termination/deactivation workflow involves "primary reliance"; review AI vendor contracts to ensure required disclosure of data sources and bias-sensitive parameters.
Notice and data management: build pre-use and post-use notice templates and complete catch-up notices before April 1, 2026; keep a data register that supports worker data requests and anonymization (a small date-arithmetic sketch follows this article).
Training and rehearsal: train human reviewers with clear review standards and evidence checklists; keep dual-track records of discipline/termination/deactivation decisions to demonstrate compliance.

9. Scenario walk-through: firing a frontline store worker over "low ratings"

Non-compliant approach: triggering termination primarily on customer star ratings.
Compliant approach: treat customer ratings as corroborating evidence only; have a human reviewer pull supervisor evaluations, the personnel file, work samples, and peer or witness input; and issue the post-use notice along with the decision, naming a contact, disclosing the ADS's involvement, and explaining the right to request data.

10. Implications for HR tech and the supply chain

Product design will put more weight on notice generators, human-review workbenches, data forensics and anonymized export, and labeling of bias-sensitive parameters.
Commercial terms will shift toward SLAs covering compliance cooperation, extractable logs, and suspension clauses for anomalies, with more careful allocation of liability in high-risk scenarios.

11. Editors' note

SB 7's real innovation is not bias audits or macro-level risk tiers but a direct mandate: high-risk employment decisions must have human review. This behavior-oriented, process-embedded model signals that HR management is entering a new phase of "human-machine co-governance." The hard parts are defining "primary reliance" and ensuring the quality of review; over the next few years those gray areas will determine how SB 7 works in practice.

Key references
California bill text: SB 7, Employment: automated decision systems
California legislative calendar: October 12, 2025 deadline for the governor to sign or veto; effective January 1, 2026 if signed
Related rules: New York City Local Law 144, Colorado SB 24-205, the EU AI Act
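Several of the roadmap items above reduce to tracking a handful of statutory dates. The sketch below is a hypothetical Python helper, not legal advice and not an official reading of SB 7; it simply encodes, under the assumption that the bill is signed and takes effect on January 1, 2026, the 30-day pre-use notice, the April 1, 2026 catch-up deadline, the 30-day new-hire notice, and the once-per-12-months data request described in the article.

from dataclasses import dataclass
from datetime import date, timedelta
from typing import Optional

EFFECTIVE_DATE = date(2026, 1, 1)       # assumed effective date if SB 7 is signed
CATCH_UP_DEADLINE = date(2026, 4, 1)    # catch-up notice deadline for ADS already in use
PRE_USE_NOTICE_DAYS = 30                # written notice at least 30 days before deployment
NEW_HIRE_NOTICE_DAYS = 30               # notify new hires within 30 days of starting

@dataclass
class ADSDeployment:
    name: str
    deploy_date: date                   # planned (or past) go-live date of the automated decision system

def pre_use_notice_deadline(ads: ADSDeployment) -> date:
    """Latest date for the written pre-use notice (catch-up rule for systems predating the law)."""
    if ads.deploy_date < EFFECTIVE_DATE:        # already in use when the law takes effect
        return CATCH_UP_DEADLINE
    return ads.deploy_date - timedelta(days=PRE_USE_NOTICE_DAYS)

def new_hire_notice_deadline(start_date: date) -> date:
    """Latest date to notify a newly hired worker about ADS already in use."""
    return start_date + timedelta(days=NEW_HIRE_NOTICE_DAYS)

def data_request_allowed(last_request: Optional[date], today: date) -> bool:
    """Workers may request their ADS-related data once in any 12-month period."""
    return last_request is None or today >= last_request + timedelta(days=365)

if __name__ == "__main__":
    scoring_tool = ADSDeployment("shift-scheduling score", date(2026, 3, 15))      # hypothetical system
    legacy_tool = ADSDeployment("legacy productivity monitor", date(2025, 6, 1))   # hypothetical system
    print(pre_use_notice_deadline(scoring_tool))                      # 2026-02-13
    print(pre_use_notice_deadline(legacy_tool))                       # 2026-04-01
    print(data_request_allowed(date(2025, 11, 1), date(2026, 2, 1)))  # False

In a real compliance program these dates would feed an HRIS workflow with documented human review and legal sign-off, not a standalone script.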
    September 23, 2025
  • transparency
The Evolution of the Employee Experience Platform: A Key Engine for AI Transformation

New research from The Josh Bersin Company finds that employee experience platforms (EXPs) are becoming critical infrastructure for enterprise AI transformation. The EXP is no longer just an HR tool; it is the core platform for organizational learning, transparent communication, and employee enablement. The research identifies five strategies: put people first, work bottom-up, keep learning, communicate transparently, and optimize in real time. Cases include Microsoft's AI-driven HR transformation, ASOS's AI automation, and Clifford Chance's AI-assisted legal drafting. EXPs enable agile change and make AI adoption stick.

AI is rapidly changing the workplace, not just as technology but as a deep shift in organizational culture and ways of working. The broad adoption of artificial intelligence brings unprecedented opportunities for productivity, efficiency, and business growth. Yet AI transformation is not simply "deploying new technology"; it profoundly reshapes the employee experience, affecting culture, collaboration, and workflows.

In this transformation, the employee experience platform (EXP) is evolving from a traditional HR tool into a key engine for implementing AI successfully. The EXP is no longer a portal for requesting leave or looking up policies; it is an intelligent platform that integrates communication, learning, collaboration, data, and automation, helping organizations drive AI adoption, raise employee readiness, and ensure AI actually delivers business value.

The evolution of the employee experience platform

EXPs originally handled transactional processes such as leave requests and payroll inquiries. With advances in AI, they have evolved into intelligent interaction hubs that integrate:
Employee communication and collaboration across systems
Real-time insight into AI usage and employee sentiment
Personalized learning and skill building
Automation of repetitive tasks so employees can focus on higher-value work

With AI agents built in, today's EXPs are also easier to use: employees can interact with them in natural language and run processes across systems without logging into multiple transactional applications. The EXP is therefore no longer a nice-to-have; it is critical infrastructure for successful enterprise AI transformation.

Enterprise AI transformation cases

We studied three representative companies and how they used an EXP to land AI and deliver results:

1. ASOS (online fashion retail): deployed Microsoft Copilot and Microsoft Viva across multiple business units; used AI-driven HR case-handling tools to improve service efficiency; streamlined transactions through a self-service portal; and automated its sustainability certification process with a custom AI bot. Results: higher employee productivity, stronger engagement, and seamless AI adoption.

2. Microsoft (building an AI-driven HR function): delivered AI training through Viva Learning modules; strengthened employee support with self-service HR tools; and analyzed AI usage in real time to keep refining its strategy. Results: significant gains in HR efficiency, with thousands of HR leaders participating in the AI community.

3. Clifford Chance (international law firm): used AI to draft legal documents, giving lawyers a first draft; bridged differences in legal language with AI language tools; and used AI to manage legal knowledge and surface relevant precedents quickly. Results: faster drafting, accelerated knowledge sharing, and more precise decisions.

What AI transformation demands: agility

Unlike traditional change programs, an AI rollout is not a one-time event but an ongoing process of experimentation, iteration, and adaptation. Companies therefore need "change agility": flexible mechanisms that drive employee learning and organizational alignment.

Five strategies for AI success through the EXP

We identified five strategies that successful companies follow in AI transformation, with the EXP as the core platform supporting each:

1. Focus on people and purpose. AI adoption must align with the organization's mission, values, and employee needs. The EXP keeps every AI tool designed around the employee experience, lifting engagement, productivity, and wellbeing. Example: Microsoft HR used Viva Amplify to tailor AI rollout communications so HR teams received strategic messaging on time and AI initiatives stayed aligned with business goals.

2. Take a bottom-up, iterative approach. AI transformation cannot be driven by top-down mandate alone; it depends on frontline feedback and experimentation. Through real-time feedback and learning loops, the EXP lets employees try, iterate on, and refine AI tools in their actual work. Example: ASOS used Viva community features to launch a "Work Smarter" campaign where employees openly share AI use cases, building a culture of knowledge sharing.

3. Encourage transparent communication and experimentation. Employees need to know where, why, and with what risks AI tools are being used before they will trust and engage with them. The EXP provides a structured, open mechanism for experimentation that keeps the process transparent. Example: Clifford Chance embedded AI workflows in Microsoft Viva so employees could test AI-assisted drafting in real time while understanding how it works.

4. Drive continuous learning and skill building. Employees need basic AI literacy to work effectively with AI tools. The EXP provides role-based learning paths that support upskilling and long-term growth. Example: Clifford Chance used Viva Learning to train employees in prompt engineering, AI literacy, and data analysis, laying the groundwork for AI tool adoption.

5. Measure and improve in real time. Unlike traditional IT projects, AI rollouts must be continuously monitored and quickly adjusted. The EXP provides real-time analytics to track employee sentiment, productivity, and AI usage. Example: Microsoft HR used Viva Insights to track AI usage frequency, workload relief, and sentiment in real time and adjust its AI strategy dynamically.

HR's new role in AI transformation

As AI reshapes work, HR is no longer just a support function. It now:
Leads employee upskilling and reskilling
Helps redefine jobs and redesign workflows
Bridges AI strategy across HR, IT, and the business
Operationalizes responsible AI policy so AI use stays consistent with ethics and culture

HR will play the central role of "strategic guide and catalyst for managing change" in the AI era.

Recommendations and outlook

To succeed in AI transformation, companies should:
✅ Adopt a "change agility" mindset: keep learning and iterate in real time
✅ Build an AI-driven employee experience platform that fuses process and culture
✅ Break down the walls between HR, IT, and the business to enable cross-functional collaboration
✅ Put real-time measurement in place and keep optimizing AI strategy based on feedback

The EXP has become the infrastructure for stepping into an AI future. AI will keep reshaping the workplace, but what decides success is not the technology itself; it is whether the organization can get employees to genuinely embrace and use AI well. The EXP is no longer just an HR tool; it is the "central nervous system" for building a learning organization, growing trust, and enabling flexible change. Companies that want to stay competitive in the AI era must put employee experience at the center of strategy.

Author: Kathi Enderes | Senior Vice President, Research and Global Industry Analyst | The Josh Bersin Company
    July 19, 2025
  • transparency
Despite Political Firestorm, Diversity Investments Are Alive And Well

Josh Bersin writes that despite mounting political pressure and public criticism of diversity and inclusion (DEI) programs, many companies still value these investments. Rather than running DEI as a standalone HR program, they are embedding it into leadership, performance management, and recruiting strategy, a more holistic approach to building culture. Against a backdrop of declining employee trust in corporate leadership (the Edelman Trust Barometer finds 68% of employees believe CEOs are not honest), trust, transparency, and fairness have become core elements of corporate culture.

Companies are now focused on a performance culture, building inclusive environments based on capability and high performance that attract strong talent of every age, gender, and race. Leaders such as Jamie Dimon have publicly voiced support for DEI, showing that high performance and inclusion together are key to modern corporate success. Although standalone DEI roles are shrinking, the practices have been woven deeply into how companies operate. Leading companies across industries are using this approach to transform and grow faster, underscoring how much DEI matters to culture and performance.

The full text follows:

As the WSJ has reported extensively, companies like Harley Davidson, Tractor Supply, Walmart, and McDonald's are publicly pulling back on DEI programs, largely under pressure from political activists. Fueled by the Supreme Court's striking down of affirmative action in 2023, there is a political movement to dismantle the "social justice" movement that took hold in corporate HR departments.

Now, driven by the new administration, the Federal Government is "ending radical and wasteful" government DEI programs. And the executive order is asking the Justice Department to litigate up to 9 private companies as examples. As a part of this plan, each agency shall identify up to nine potential civil compliance investigations of publicly traded corporations, large non-profit corporations or associations, foundations with assets of 500 million dollars or more, State and local bar and medical associations, and institutions of higher education with endowments over 1 billion dollars.

Of course this has created a firestorm of debate, and many companies are doing away with dedicated DEI roles in HR. But our research, which includes discussions with many dozens of Chief HR Officers, heads of recruitment, and others, finds that the investments are alive and well. Here's where I sense we are.

While DEI and pay equity programs have been around since the 1960s (companies like Coca Cola and Google have been sued for gender and racial pay inequities), the topic got out of hand. Post George Floyd, which was a traumatic event in the United States, companies went overboard with training and messaging about social justice, oppression, micro-aggression, and other uncomfortable topics. Many programs included discussions of topics like "white fragility," "intersectionality," "oppression," and other social topics. While this was trending in the media, many employees told us these programs made them uncomfortable.

In a country like the United States (I just got back from two weeks in South Africa, where these issues are front and center) where we have a long history of immigration and diversity, this topic has been debated for hundreds of years. I worked at IBM during the days of affirmative action (1970s and 1980s) and my personal experience was very positive. Black and Asian professionals were actively recruited and promoted at IBM during my tenure and I have fond memories of IBM as a company with a powerful culture of "respect for the individual" (IBM's motto). (Read Thomas Watson's 1963 manifesto: it's a bit gender-biased but remains relevant today. Watson, the founder of IBM, talks extensively about equity between white and blue collar work, fair wages and benefits, and opportunities for all. Note that IBM is one of the only tech companies that has survived more than 100 years, so these principles have served the company well.)

Now that we've entered a business focus on productivity, AI, and technology transformation, companies want to build a culture of meritocracy, skills, leadership, and internal mobility.
The #1 issue we hear from CHROs and CEOs is "how do we transform our company faster?" Sitting around to debate diversity targets or DEI agendas just doesn't feel important.

That said, as we discuss regularly with leaders in every industry, CEOs and CHROs are very concerned about corporate culture. The new Edelman Trust Barometer describes a shocking drop in trust among workers. More than half of all employees believe CEOs are overpaid and 68% believe they lie on a regular basis. So cultural topics of inclusion, fairness, and respect are extremely important. (The Edelman research even points out that 40% of employees believe that hostile activism against their employer, such as violence, property damage, and social media attacks, is acceptable.) So building a culture of trust, transparency, and listening remains essential. And that's why culture still matters.

As I discuss in our research "The Rise of the Superworker" (and PwC's 2025 CEO survey also points this out), companies that transform faster make more money. And transformation, regardless of the technology behind it, is always dependent on people. So when we read about corporate transformations at companies like Boeing, Intel, and Nike, we know that there are always issues of culture.

Where does the DEI agenda now fit? As I talk with leaders around the world, it has clearly not gone away. Today, rather than focus on representation targets or social issues, companies are embedding their focus on meritocracy within the business, moving it out of the world of an "HR program." And this, despite the political backlash, is a good thing.

As even Robby Starbuck points out, every leader believes in meritocracy. We want our teams to reward high performance and encourage everyone to learn, grow, and advance in a fair way. DEI, which became a standalone mission of its own, is now a part of "building a culture of performance," and that means respecting high performance among all genders, races, disabilities, and ages. It means creating a culture of psychological safety where people can speak up, and it means being crystal clear with feedback, accountability, and behaviors we value.

Finally, let me celebrate the public statement by Jamie Dimon, one of the most respected CEOs in the world. When asked about DEI activists at the World Economic Forum, he answered "bring them on, we're proud of what we do."

While much of the political focus against DEI seems to focus on "moving companies to the right," I think the real trend is quite different. Leaders and HR departments are taking the high-profile DEI agenda and embedding it into the disciplines of leadership, recruitment, performance management, and rewards. And even today, as Lightcast data shows, there are more than 7,000 DEI roles posted for hire. The highest performing companies in the world are inclusive and fair by nature – that's why high-performers want to work there.

Let's let "DEI" as an HR agenda move aside, and move the topic back into the business of leadership where it belongs. (Listen to real-world case studies in The Josh Bersin Academy or browse all our DEI research in Galileo.)
    January 27, 2025
  • transparency
Appeals Court Strikes Down Nasdaq's Board Diversity Rules

A U.S. federal appeals court has struck down Nasdaq's board diversity rules, which required listed companies to include at least one woman, minority, or LGBTQ+ director on their boards or explain why they had not. The court held that the rule exceeded the regulatory authority of the Securities and Exchange Commission (SEC). Although the ruling is a significant blow to diversity and inclusion efforts, many investors and companies still see diversity as essential to governance and returns. Some firms, such as Goldman Sachs, are keeping their diverse-board policies in response to investor expectations of transparency. The ruling may also affect future disclosure rules on climate risk and diversity, putting transparency for investors under pressure.

Main points

As the Black Lives Matter movement swept the United States, Nasdaq's message to listed companies was: diversify, or be ready to explain yourselves.

Four years later, a U.S. federal appeals court has overturned the rule from the Nasdaq exchange (home to companies including Apple, Nvidia, Microsoft, and Tesla) that sought to push companies to add more women, people of color, and LGBTQ+ directors to their boards.

This week's ruling confirms what many executives privately accept: the diversity initiatives they adopted and celebrated are not only under attack, they are being rolled back. With the incoming Trump administration, that pressure will only intensify.

Nasdaq said it accepts the ruling and will not seek further review. "We respect the court's decision and do not intend to seek further review," a spokesperson said in a statement.

The fight over DEI

The ruling came from the conservative Fifth Circuit Court of Appeals in New Orleans, the latest blow in the war between right-wing activists and corporate America. Although the case was framed as a discrimination suit, it actually turned on the SEC's authority. Yet with a growing wave of legal challenges targeting internship programs, startup grants, and even ATM fee waivers, even the most committed companies have begun to compromise publicly.

The ruling also raises concerns about other SEC disclosure rules, such as those on climate-related risk and greenhouse gas emissions, which could be watered down, depriving investors of important information. Even diversity-related disclosures could become hard to compare as companies diverge, a problem for shareholders trying to evaluate businesses.

Against this backdrop, Goldman Sachs is keeping a notable diversity requirement: since 2020 the bank has declined to underwrite IPOs of U.S. or European companies without at least one diverse director (a woman, person of color, or LGBTQ+ individual), and since 2021 it has required at least two. Goldman confirmed on Thursday that the policy remains in effect.

Nasdaq's diversity requirement was contentious from the moment it was proposed four years ago. The rule required boards to include one woman, one "underrepresented minority," or one LGBTQ+ director, or explain in the proxy statement or on the company website why they had not, with compliance due by December 31 of last year. Nasdaq had planned to raise the bar to at least two diverse directors, one of whom must be a woman, by the end of next year.

"In theory, the rule might not push leading companies to do better, but it could force laggards, companies with zero or one woman on the board, to make progress," said Rob Du Boff, senior ESG analyst at Bloomberg Intelligence. He added that about 25% of Nasdaq board seats are already held by women, so the threshold was very low.

The critics

From the start, critics called Nasdaq's rule a quota. Although board appointments are not governed by traditional anti-discrimination employment law, quotas remain legally contested. The Alliance for Fair Board Recruitment, led by Edward Blum, the activist behind the successful lawsuits against affirmative action in college admissions, quickly challenged the SEC's approval of the rule, calling it a driver of "harmful discrimination." Several state attorneys general made similar arguments.

The court's decision sidestepped the question of discrimination, holding only that the diversity rule fell outside the SEC's regulatory authority.

"The Fifth Circuit, given its overall bent, is intent on striking down rules from any regulator no matter how important they are to investors, the markets, and the public interest," said Stephen Hall, legal director of the financial-markets nonprofit Better Markets, adding that he believes the court misread the law. "This decision is a huge setback for transparency, which is the lifeblood of the securities markets."

Lawyers expect the SEC, the main party to the litigation, not to challenge the outcome, especially with the agency soon to be led by a Trump appointee.

Several prominent companies had backed Nasdaq: Airbnb, Microsoft, and Micron Technology, among others, described the rule in a legal brief as "a common-sense measure." Investment giants such as Lord Abbett & Co. and Northern Trust Investments Inc. also voiced support, complaining that reliable information on board diversity is "hard to obtain, sometimes inaccurate," and often inconsistent.

What investors want

Jared Landaw, a former activist investor and now an adjunct professor at Syracuse University, said publishing board diversity information is a sensible corporate signal. In more than 16 years as chief operating officer at Barington Capital Group, he found that "many underperforming companies have some kind of homogeneity inside the boardroom that either causes problems or keeps the board from correcting itself." Bringing in directors from different demographic and life backgrounds can help fix that.

"Most S&P 500 companies disclose their diversity statistics whether or not they are listed on Nasdaq," Landaw said. "I think that reflects investor expectations."

Heather Spilsbury, who leads the board-diversity advocacy group 50/50 Women on Boards, pointed to California's experience. A 2019 state law required most publicly traded companies based in California to have at least three women on their boards by 2021. Although the law was struck down in 2022, while it was in force the share of women on California company boards rose from 20.6% to 34.2%, and it has held at roughly 34% since.

"We saw its impact ripple well beyond California," she added.

Others are less sure. Without the rule, companies will have "greater flexibility to decide what board diversity-related information to include, and to what extent, in SEC filings, on websites, or in other public disclosures," Davis Polk lawyers wrote in a client update on Thursday.

The Davis Polk lawyers said courts may apply the same logic to the SEC's climate-related risk and greenhouse gas disclosure rules, and the ruling could also affect corporate governance rules drawn up by the Federal Deposit Insurance Corporation (FDIC).

Whatever investors want, this is a legal playbook likely to be used again and again.

(With assistance from Andrew Ramonas and Sridhar Natarajan.)
    December 16, 2024
  • transparency
LinkedIn Fined €310 Million by Ireland's Data Protection Commission for Violating the EU GDPR

LinkedIn has been fined €310 million (about US$334 million) by Ireland's Data Protection Commission (DPC) for violating the EU General Data Protection Regulation (GDPR). The inquiry began in 2018 with a complaint by the French nonprofit La Quadrature du Net about LinkedIn's processing of data for advertising. The Irish regulator found that LinkedIn's processing of personal data failed to meet the legal-basis requirements of Article 6(1) GDPR. LinkedIn says it has always sought to comply, but it has been ordered to bring its processing into compliance within a set period. The full decision will be published later.

LinkedIn, the world's largest online recruitment and professional networking platform, has been fined €310 million (about US$334.3 million) by Ireland's Data Protection Commission over how it processes personal data. LinkedIn also received a reprimand and was ordered to bring its data processing into compliance.

The inquiry: personal data processed without valid consent

According to the Irish DPC's inquiry, LinkedIn used personal data without a valid legal basis for targeted advertising. Specifically, LinkedIn tracked users' online behavior, without their knowledge, to serve more personalized ads. LinkedIn argued that it relied on "consent" or "legitimate interests" as the legal basis for collection, but the inquiry found that those grounds did not meet the GDPR's strict standards. The DPC noted that LinkedIn lacked transparency in its data collection and failed to obtain valid consent from users, in serious breach of the GDPR.

European officials began examining LinkedIn's practices in August 2018 following a complaint by the French nonprofit La Quadrature du Net. The complaint was initially filed with the French data protection authority and then referred to the Irish Data Protection Commission as LinkedIn's lead supervisory authority.

In its decision, the Irish regulator noted that:
Processing personal data without an appropriate legal basis under Article 6(1) GDPR is a serious infringement of data subjects' fundamental right to data protection.
The GDPR requires processing to be fair: personal data must not be processed in ways that are detrimental, discriminatory, unexpected, or misleading to data subjects.
Compliance with the transparency provisions ensures data subjects fully understand the scope and consequences of processing before it occurs and can exercise their rights.

The full final decision will be published at a later date.

A warning for the tech industry

The case is a blow to LinkedIn and a warning for the entire technology sector. In a data-driven world, more and more companies rely on user data to deliver personalized services such as targeted advertising. But when handling personal data, companies must fully comply with the GDPR, particularly on transparency and user consent. As the EU tightens data protection enforcement, every technology company operating in the EU must take rigorous steps to keep data processing compliant and avoid similar penalties.

Looking ahead: privacy protection moves to the top of the agenda

The case shows that privacy protection will be a core issue for the technology industry. As technology advances and data collection and analysis grow more complex, companies will have to work harder to meet the requirements of the GDPR and similar regulations. That means regular privacy reviews, continually updated privacy policies to keep user data secure, and employee training that raises awareness of data protection and reduces the risk of violations.

LinkedIn's case underscores that data privacy cannot be ignored. Companies must strike a balance between transparency, user consent, and lawfulness, or face both legal and reputational damage. As the regulatory environment tightens, only proactive compliance will keep technology companies competitive.
    October 24, 2024
  • transparency
U.S. Department of Labor Releases Principles for Workplace AI Use to Protect Worker Rights (Original Text Appended)

On May 16, the U.S. Department of Labor released a set of principles for the use of artificial intelligence (AI) in the workplace, intended to guide employers so that AI technology is developed and used with workers at the center and improves the quality of work and life for all workers. Acting Secretary of Labor Julie Su said in a statement: "Workers must be at the center of our nation's approach to AI technology development and use. These principles reflect the Biden-Harris administration's belief that AI must not only comply with existing law, but also improve the quality of work and life for all workers."

According to the Department, the AI principles include:

Centering worker empowerment: workers and their representatives, especially those from underserved communities, should be informed of and have genuine input into the design, development, testing, training, use, and oversight of AI systems, so that AI reflects workers' needs and feedback across its entire lifecycle.

Ethically developing AI: AI systems should be designed, developed, and trained in ways that protect workers, prioritizing their safety, health, and wellbeing and preventing harm.

Establishing AI governance and human oversight: organizations should have clear governance systems, procedures, human oversight, and evaluation processes so that workplace AI is used ethically and misuse is prevented.

Ensuring transparency in AI use: employers should be transparent with workers and job seekers about the AI systems they use, explaining what the systems do, why they are used, and how they are applied to work, which builds trust.

Protecting labor and employment rights: AI systems should not violate or undermine workers' right to organize, health and safety rights, wage and hour rights, or anti-discrimination and anti-retaliation protections, so basic labor rights are preserved as AI is deployed.

Using AI to support workers: AI systems should assist, complement, and support workers and improve job quality, raising efficiency and comfort rather than replacing workers or adding to their burden.

Supporting workers impacted by AI: employers should support or upskill workers during AI-related job transitions, including training and career-development opportunities that help them adapt to new environments and requirements.

Ensuring responsible use of worker data: data collected, used, or created by AI systems should be limited to legitimate business purposes and protected and handled responsibly, safeguarding privacy and preventing misuse.

The principles were developed under President Biden's Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence and are meant as a roadmap for developers and employers, ensuring that workers benefit from the new opportunities AI creates while being protected from its potential harms.

The administration emphasizes that the principles are not specific to any one industry and should be applied broadly. They are not an exhaustive list but a guiding framework for businesses to customize to their own circumstances and implement as best practices with worker input. In this way, the administration hopes to protect workers' rights and avoid potential harms while AI drives innovation and opportunity.

Now that the principles are out, how do you think they will affect your company's use of AI and its protection of employee rights?

The original English text follows:

Department of Labor's Artificial Intelligence and Worker Well-being: Principles for Developers and Employers

Since taking office, President Biden, Vice President Harris, and the entire Biden-Harris Administration have moved with urgency to harness AI's potential to spur innovation, advance opportunity, and transform the nature of many jobs and industries, while also protecting workers from the risk that they might not share in these gains. As part of this commitment, the AI Executive Order directed the Department of Labor to create Principles for Developers and Employers when using AI in the workplace. These Principles will create a roadmap for developers and employers on how to harness AI technologies for their businesses while ensuring workers benefit from new opportunities created by AI and are protected from its potential harms.

The precise scope and nature of how AI will change the workplace remains uncertain. AI can positively augment work by replacing and automating repetitive tasks or assisting with routine decisions, which may reduce the burden on workers and allow them to better perform other responsibilities. Consequently, the introduction of AI-augmented work will create demand for workers to gain new skills and training to learn how to use AI in their day-to-day work. AI will also continue creating new jobs, including those focused on the development, deployment, and human oversight of AI. But AI-augmented work also poses risks if workers no longer have autonomy and direction over their work or their job quality declines. The risks of AI for workers are greater if it undermines workers' rights, embeds bias and discrimination in decision-making processes, or makes consequential workplace decisions without transparency, human oversight and review. There are also risks that workers will be displaced entirely from their jobs by AI. In recent years, unions and employers have come together to collectively bargain new agreements setting sensible, worker-protective guardrails around the use of AI and automated systems in the workplace.
In order to provide AI developers and employers across the country with a shared set of guidelines, the Department of Labor developed "Artificial Intelligence and Worker Well-being: Principles for Developers and Employers" as directed by President Biden's Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence, with input from workers, unions, researchers, academics, employers, and developers, among others, and through public listening sessions.

APPLYING THE PRINCIPLES

The following Principles apply to the development and deployment of AI systems in the workplace, and should be considered during the whole lifecycle of AI – from design to development, testing, training, deployment and use, oversight, and auditing. The Principles are applicable to all sectors and intended to be mutually reinforcing, though not all Principles will apply to the same extent in every industry or workplace. The Principles are not intended to be an exhaustive list but instead a guiding framework for businesses. AI developers and employers should review and customize the best practices based on their own context and with input from workers.

The Department's AI Principles for Developers and Employers include:

[North Star] Centering Worker Empowerment: Workers and their representatives, especially those from underserved communities, should be informed of and have genuine input in the design, development, testing, training, use, and oversight of AI systems for use in the workplace.

Ethically Developing AI: AI systems should be designed, developed, and trained in a way that protects workers.

Establishing AI Governance and Human Oversight: Organizations should have clear governance systems, procedures, human oversight, and evaluation processes for AI systems for use in the workplace.

Ensuring Transparency in AI Use: Employers should be transparent with workers and job seekers about the AI systems that are being used in the workplace.

Protecting Labor and Employment Rights: AI systems should not violate or undermine workers' right to organize, health and safety rights, wage and hour rights, and anti-discrimination and anti-retaliation protections.

Using AI to Enable Workers: AI systems should assist, complement, and enable workers, and improve job quality.

Supporting Workers Impacted by AI: Employers should support or upskill workers during job transitions related to AI.

Ensuring Responsible Use of Worker Data: Workers' data collected, used, or created by AI systems should be limited in scope and location, used only to support legitimate business aims, and protected and handled responsibly.
    May 16, 2024
  • transparency
7 Ways to Improve Work-From-Home Accountability

With the long pandemic and ever more convenient online communication, working from home has become increasingly normal. Remote work is now a major workplace trend, yet managers often assume it seriously undermines employee productivity. Rather than filtering it through their own misgivings, managers would do better to face up to its benefits and adapt to hybrid and more varied ways of working.

To understand work-from-home accountability, you need to know what it is, what its benefits are, and seven ways to improve accountability in your team or organization.

Working from home (WFH) is the new normal for many U.S. professionals, whether fully remote or hybrid. But while this way of working brings many benefits to companies and employees, it also brings particular challenges, above all around accountability. Without a traditional office environment, productivity and reliability take deliberate effort and effective strategies to maintain. In this article, we explore how to improve work-from-home accountability so individuals and teams can thrive in this new environment.

What is work-from-home accountability?

Accountability is taking ownership of one's actions, decisions and outcomes in the remote work context. This means being responsible for meeting deadlines, maintaining quality levels and honoring commitments made to colleagues and stakeholders.

When working from home, it's important to establish clear expectations and guidelines for accountability. This includes:
Defining specific goals and objectives
Setting realistic deadlines
Providing regular feedback
Giving support

Accountability in remote work also requires effective communication. Inform all team members about progress, challenges and any changes that may affect the workflow. This promotes transparency and allows for better collaboration and problem-solving among teams.

Benefits of work-from-home accountability

To enhance accountability in remote work, it's important to recognize its significance. Some of the benefits of focusing on accountability in remote workers include:
Improved responsibility: When people hold themselves accountable for their work, they're more likely to step up and take responsibility for the outcome of their tasks. This also gives employees a sense of accomplishment and improves job satisfaction.
More transparency: When you set clear expectations for remote teams, it's easier for them to be clear about what they're working on and when they may need help. This also increases trust among team members.
Improved collaboration: Remote team accountability helps employees collaborate by outlining who's responsible for what, so they know who to communicate with to ensure work is completed.
Fewer missed deadlines: When working remotely, it's easy to let deadlines slide past without colleagues reminding you when work is due. Improving accountability among WFH team members helps reduce the number of missed deadlines and streamlines workflows.
Better work-life balance: Accountability also improves work-life balance for employees by making sure no team members have to pick up the slack for others.

7 ways to improve work-from-home accountability

Leaders and managers can establish and improve WFH accountability through a few methods. Every organization is different, so you'll need to find what works best for your situation.

1. Set a clear WFH policy

The first step in establishing WFH accountability is to have a clear policy in place. It's a good idea to ensure team members have buy-in so they don't feel that they can't follow the rules. Some items your policy should cover include expected working hours, hybrid schedules and technology usage policies. Some virtual teams may work on their own schedules and timelines while others will need to have set hours in place to ensure collaboration. Many virtual teams will need more structure than others. It's important your policy encompasses the best system for your entire organization. Work with your managers and team leaders to find out what policies will work best for everyone.

2. Clarify responsibilities

If employees know what's expected of them, they'll be more likely to hold themselves accountable to those expectations.
Make sure you set clear goals, deadlines and benchmarks so employees can hold themselves to them. Workers need to know what they're responsible for and who to ask if a project is running late or they need more help. Key performance indicators (KPIs) help teams measure the quality and efficiency of their work to make changes where needed. This is particularly important in a remote work environment where team members don't have regular physical interactions with each other.

3. Provide the right tools

Remote employees may need additional technology and tools to communicate, collaborate and complete tasks. Make sure you provide your teams with the right technology to help them meet goals and stay on track. Virtual teams will need the right communication tool for team meetings, plus project management and collaboration tools to keep each other accountable in real time. Time management and tracking tools help teams determine how to assign project deadlines and prioritize as well. Cloud-based systems help employees work from anywhere and at any time, helping them complete projects when working from home or traveling. Leadership also needs specialized software like ActivTrak to maintain visibility and manage hybrid and remote workforces.

4. Encourage clear communication

The best-performing virtual teams are those who can communicate regularly and clearly about their work. Many of the tools you provide your team members will help them communicate about work status, bottlenecks and processes. However, you should also encourage communication among teams through other means, such as weekly newsletters and quarterly all-staff meetings. Just make sure that you're not scheduling unnecessary meetings for your team's needs.

5. Give regular check-ins

Beyond clear communication about the team or organization as a whole, structured check-ins for individual employees help ensure work-from-home policies are working for each person. Give employees a chance to voice their concerns with existing policies or let their managers know where they may be struggling. This also provides an opportunity for managers to help employees see where they're hitting goals or where they may need to work harder. WFH environments may change over time as your team members and their needs change, so flexibility and regular feedback are key.

6. Measure productivity

Remote employee management requires understanding how your teams work best and what blockers may keep them from productivity. One way to make sure you're setting realistic goals and that team members are accountable for their work when they work from home is to monitor productivity. There are many benefits to using WFH productivity tracking software like ActivTrak, including helping team members with time management, task management and accountability. It also gives your leaders insight to make decisions driven by data rather than guesswork, so you can see where workflows and processes may need tweaking or what's working for your remote teams. You can also see if team members may be working too much or too little and redistribute the workload as needed.

7. Reward employees for achievements

Create a culture of engagement by rewarding employees for being accountable and meeting (or exceeding) expectations. Bonuses, extra paid time off or gifts can be special rewards, but even publicly praising employees for their contributions can go a long way toward improving accountability in your team.
Other rewards can include new opportunities to further their careers or take on new challenges. Different teams and employees will have different needs for feeling valued and rewarded, so let your managers find the best way to let employees know they're appreciated.

Use ActivTrak to improve work-from-home accountability

If you're ready to take the next step to enhance work-from-home accountability for your team, ActivTrak offers a comprehensive workforce analytics platform customizable to your needs. Get insights to assess and improve employee productivity and well-being and gain visibility into how work gets done within your company. Use data to inform key decisions and optimize outcomes for your remote or hybrid teams. To see how ActivTrak can empower your team, contact our sales team for a free demo.

SOURCE ActivTrak
    January 22, 2024
  • transparency
Workday: It's Time to Close the AI Trust Gap

Workday, a leading provider of enterprise cloud applications for finance and human resources, recently published a global study highlighting the importance of addressing the AI trust gap. The company believes that trust is a critical factor when it comes to implementing artificial intelligence (AI) systems, especially in areas such as workforce management and human resources.

Research results are as follows:
At the leadership level, only 62% welcome AI, and only 62% are confident their organization will ensure AI is implemented in a responsible and trustworthy way. At the employee level, these figures drop even lower to 52% and 55%, respectively.
70% of leaders say AI should be developed in a way that easily allows for human review and intervention. Yet 42% of employees believe their company does not have a clear understanding of which systems should be fully automated and which require human intervention.
1 in 4 employees (23%) are not confident that their organization will put employee interests above its own when implementing AI (compared to 21% of leaders).
1 in 4 employees (23%) are not confident that their organization will prioritize innovating with care for people over innovating with speed (compared to 17% of leaders).
1 in 4 employees (23%) are not confident that their organization will ensure AI is implemented in a responsible and trustworthy way (compared to 17% of leaders).

"We know how these technologies can benefit economic opportunities for people—that's our business. But people won't use technologies they don't trust. Skills are the way forward, and not only skills, but skills backed by a thoughtful, ethical, responsible implementation of AI that has regulatory safeguards that help facilitate trust," said Chandler C. Morse, VP, Public Policy, Workday.

Workday's study focuses on several key areas:

Section 1: Perspectives align on AI's potential and responsible use.
"At the outset of our research, we hypothesized that there would be a general alignment between business leaders and employees regarding their overall enthusiasm for AI. Encouragingly, this has proven true: leaders and employees are aligned in several areas, including AI's potential for business transformation, as well as efforts to reduce risk and ensure trustworthy AI."
Both leaders and employees believe in and hope for a transformation scenario with AI.
Both groups agree AI implementation should prioritize human control.
Both groups cite regulation and frameworks as most important for trustworthy AI.

Section 2: When it comes to the development of AI, the trust gap between leaders and employees diverges even more.
"While most leaders and employees agree on the value of AI and the need for its careful implementation, the existing trust gap becomes even more pronounced when it comes to developing AI in a way that facilitates human review and intervention."
Employees aren't confident their company takes a people-first approach.
At all levels, there's the worry that human welfare isn't a leadership priority.

Section 3: Data on AI governance and use is not readily visible to employees.
"While employees are calling for regulation and ethical frameworks to ensure that AI is trustworthy, there is a lack of awareness across all levels of the workforce when it comes to collaborating on AI regulation and sharing responsible AI guidelines."

Closing remarks: How Workday is closing the AI trust gap.

Transparency: Workday can prioritize transparency in their AI systems.
Providing clear explanations of how AI algorithms make decisions can help build trust among users. By revealing the factors, data, and processes that contribute to AI-driven outcomes, Workday can ensure transparency in their AI applications.

Explainability: Workday can work towards making their AI systems more explainable. This means enabling users to understand the reasoning behind AI-generated recommendations or decisions. Employing techniques like interpretable machine learning can help users comprehend the logic and factors influencing the AI-driven outcomes.

Ethical considerations: Working on ethical frameworks and guidelines for AI use can play a crucial role in closing the trust gap. Workday can ensure that their AI systems align with ethical principles, such as fairness, accountability, and avoiding bias. This might involve rigorous testing, auditing, and ongoing monitoring of AI models to detect and mitigate any potential biases or unintended consequences.

User feedback and collaboration: Engaging with users and seeking their feedback can be key to building trust. Workday can involve their customers and end-users in the AI development process, gathering insights and acting on user concerns. Collaboration and open communication will help Workday enhance their AI systems based on real-world feedback and user needs.

Data privacy and security: Ensuring robust data privacy and security measures is vital for instilling trust in AI systems. Workday can prioritize data protection and encryption, complying with industry standards and regulations. By demonstrating strong data privacy practices, they can alleviate concerns associated with AI-driven data processing.

SOURCE Workday
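The explainability recommendation above can be made concrete with a deliberately simple model. The sketch below is a generic illustration, not Workday's product or methodology: every feature name and data point is synthetic. It fits a logistic regression on a made-up "promotion recommendation" dataset and prints per-feature coefficients, the kind of output that lets a human reviewer see what a model actually weighs before acting on its recommendation.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler

# Synthetic, illustrative data only: 200 workers, 3 features a reviewer can reason about.
rng = np.random.default_rng(0)
feature_names = ["tenure_years", "goals_met_pct", "peer_review_score"]
X = np.column_stack([
    rng.uniform(0, 15, 200),    # tenure in years
    rng.uniform(40, 100, 200),  # percentage of goals met
    rng.uniform(1, 5, 200),     # average peer-review score
])
# Toy label: the recommendation is driven mostly by goals met and peer reviews.
logits = 0.08 * (X[:, 1] - 70) + 1.2 * (X[:, 2] - 3)
y = (logits + rng.normal(0, 1, 200) > 0).astype(int)

model = LogisticRegression().fit(StandardScaler().fit_transform(X), y)

# An interpretable model exposes the direction and size of each factor's influence.
for name, coef in zip(feature_names, model.coef_[0]):
    print(f"{name:>18}: {coef:+.2f}")
print(f"{'intercept':>18}: {model.intercept_[0]:+.2f}")

With a model like this, a reviewer can check that the weights match stated policy (for example, that tenure is not quietly dominating the recommendation); with more opaque models, the same visibility has to come from post-hoc explanation techniques and documentation.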
    January 11, 2024