• Technology Adoption
    HR: Wake Up! Employees Trust AI More Than HR

    For years I have been a supporter and friend of HR. In every interaction with HR teams, I have been impressed by their enthusiasm, dedication, and goodwill. Yet despite our best efforts, a recent survey of 851 working professionals found that "employees trust AI more than HR."

    What? How can that be? Before you dismiss the result, let me walk through the data. It is not as simple as it looks.

    What does the data show?

    1. AI is perceived as more trustworthy. When asked "Do you trust AI or HR professionals more?", 54% of respondents said they trust AI more, versus 27% who trust HR more. Strange as it sounds, this is really a question about trust itself. Employees know managers are biased, so any performance review, raise, or other feedback coming through HR may be colored by some bias (even recency bias). AI, by contrast, has no "personal opinion." When grounded in real data, its decisions are often seen as more trustworthy: 65% of respondents believe AI tools will be used fairly. This makes sense: we have crossed the chasm from fearing that AI will destroy the world to viewing it as a statistical, data-driven decision system. And you can ask AI "why did you pick this candidate" or "why did you rate this employee this way" and get a precise, explicit answer. (People often struggle to explain their own decisions clearly.)

    2. AI is already trusted for performance reviews. Although few AI performance-review tools are on the market today (Rippling's is one example), 39% of respondents believe AI-driven performance reviews would be fairer, and 33% believe AI-based pay decisions would be unbiased. Again, this is likely because AI can clearly explain its decisions, while managers often rely on "gut feel."

    3. AI is popular as a career coach. When asked "Do you value AI tools' ability to coach you on career goal setting?", 64% of respondents said yes. This again points to employees' appetite for feedback and coaching, something many managers do poorly (or are not open enough about).

    This is not a rejection of HR; it is a question about trust in managers. To me, the data reveals three important points, each of which may surprise you:

    1. Employees doubt managers' decision-making. We do not always trust "managers" to make fair, unbiased decisions about hiring, performance, and pay. Employees know bias exists, so they want a system that can select and evaluate them more fairly.

    2. AI has shifted from "scary" to "trusted." We have crossed the psychological chasm of "AI is frightening" and now see it more as a trustworthy tool, which lets companies apply AI to people decisions at much larger scale.

    3. HR must adapt quickly to the AI era. For HR departments, the way forward is clear: we must learn AI tools now, bring them into the most important HR domains, and invest the time to manage, train, and leverage them.

    As for HR's ability to earn trust, the logic now runs like this: building support and trust inside the company will increasingly depend on how HR selects and implements AI systems. Employee expectations are high, and we must meet them. Like it or not, AI is changing how we manage people.
    Technology Adoption
    November 21, 2024
  • Technology Adoption
    According to Mercer's 2024 Global Talent Trends Study, executives see AI as the key to productivity, but most employees are not yet ready for the transition. Mercer's 2024 Global Talent Trends Study unveils critical insights from over 12,000 global leaders and employees, highlighting the increasing importance of AI in productivity, discrepancies between executive and HR perceptions, the necessity of human-centric work design, and the growing challenges in trust, diversity, and resilience within the workforce. The study emphasizes the urgency of adapting talent strategies to foster greater agility and employee well-being amidst technological advances and shifting workforce dynamics.

    Mercer today released its 2024 Global Talent Trends Study. Drawing on insights from more than 12,000 executives, HR leaders, employees, and investors worldwide, the study reveals what employers are doing to thrive in this new era.

    "This year's findings highlight startling shifts at work," said Mercer President Pat Tomlinson. "They point to a notable divide between the C-suite and HR on the 2024 business outlook, and a lag in how employees perceive technology's impact. As we enter the era of human-machine teaming, organizations need to put people at the heart of transformation."

    Generative AI seen as the key to productivity gains

    The rapid growth of generative AI capabilities has raised hopes for workforce productivity gains, with 40% of executives predicting AI will deliver gains of more than 30%. However, three in five (58%) believe technology is advancing faster than their companies can retrain employees, and fewer than half (47%) believe they can meet this year's demand with their current talent model.

    "Boosting productivity through AI is top of mind for executives, but the answer is not technology alone. Improving workforce productivity requires intentional, human-centric work design," said Kate Bravery, Mercer's Global Talent Advisory Leader and author of the study. "Leading companies recognize that AI is only part of the picture. They are taking a holistic approach to declining productivity and delivering greater agility through new models of human-machine collaboration."

    Finding a sustainable path to the future of work is proving challenging. Three-quarters (74%) of executives worry about their talent's ability to pivot, and fewer than one-third (28%) of HR leaders are very confident they can make human-machine teams successful. The key to greater agility is a skills-powered talent model, something high-growth companies have already mastered.

    Employee trust is down across the board

    In 2023, trust in employers fell from its 2022 all-time high — a red flag, because the research shows trust has a significant impact on employees' energy, sense of thriving, and intention to stay. Those who trust their employer to do the right thing for them and for society are twice as likely to report that they are thriving and feel a strong sense of purpose, belonging, and being valued.

    Nearly half of employees say they want to work for an organization they are proud of, and some companies are responding by prioritizing sustainability work and "good work" principles. Given that fair pay (34%) and development opportunities (28%) are the top reasons employees are staying this year, employers have an incentive to make faster progress on pay equity, transparency, and equitable access to career opportunities in the year ahead.

    Globally, employees are clear that a sense of belonging helps them thrive, yet only 39% of HR leaders say women and minorities are well represented in their organization's leadership team, and only 18% say recent diversity, equity, and inclusion efforts have improved retention of key diverse groups. Three in four employees (76%) have witnessed ageism. With these challenges compounded by persistent skills shortages, a greater focus on inclusion and on meeting employees' needs will help all employees thrive.

    Resilience will be critical in the years ahead

    Recent investments in risk mitigation have paid off, with 64% of executives saying their business can withstand unforeseen challenges, up from 40% two years ago. Near-term concerns such as inflation weigh heavily on executives' three-year plans, but longer-term risks such as cyber and climate may not be getting the attention they require.

    Building personal resilience is as important as building corporate resilience: four in five (82%) employees are worried about burning out this year. Redesigning work for employee well-being is critical to mitigating this risk, and 51% of high-growth companies (those with revenue growth of 10% or more in 2023) have already done so, compared with only 39% of their lower-growth peers.

    Employee experience is a top priority

    More than half of executives (58%)
worry that their companies are not doing enough to inspire employees to adopt new technology, and two-thirds (67%) of HR leaders worry they have implemented new technology solutions without changing the way work gets done. Employee experience is HR's top priority this year — and with good reason, since thriving employees are 2.6 times more likely to say their employer designs work experiences that bring out their best.

    HR plays a key role in making work better for everyone, but it increasingly needs to partner with risk and digital leaders to deliver the necessary change at the required pace. To meet the expectations of organizations and employees, 96% of companies plan some redesign of the HR function this year, with a focus on cross-functional delivery and leading digital ways of working.

    Investors value an engaged workforce

    This year, for the first time, Mercer gathered views from asset managers on how organizations' talent strategies influence their investment decisions. Nearly nine in ten (89%) see workforce engagement as a key driver of company performance, and 84% believe a "churn and burn" approach destroys business value. Investors also say that fostering a climate of trust and fairness is the most important factor in building real, sustainable value over the next five years.

    Click here to learn more and download this year's study.

    About Mercer's 2024 Global Talent Trends Study

    Now in its ninth year, Mercer's Global Talent Trends brings together insights from more than 12,200 executives, HR leaders, employees, and investors across 17 geographies and 16 industries, highlighting what leading organizations are doing today to ensure the long-term sustainability of their people. Organizations that are further along on this journey have made progress in four areas. (1) They recognize that people-centric productivity requires attention to how work is evolving and to the skills and motivations of the people doing it. (2) They recognize that trust is built through real conversations about work, reinforced by transparency and equitable work practices. (3) As risks become more interconnected and harder to predict, they recognize that raising risk awareness and mitigation is essential to building a ready, resilient workforce. (4) They acknowledge that as work becomes more complex, simplifying, engaging, and energizing people toward a digital future is critical.

    About Mercer

    Mercer believes in building brighter futures by redefining the world of work, reshaping retirement and investment outcomes, and unlocking real health and well-being. Mercer has about 25,000 employees in 43 countries, and the firm operates in over 130 countries. Mercer is a business of Marsh McLennan (NYSE: MMC), the world's leading professional services firm in the areas of risk, strategy and people, with more than 85,000 colleagues and annual revenue of $23 billion. Through its market-leading businesses, including Marsh, Guy Carpenter and Oliver Wyman, Marsh McLennan helps clients navigate an increasingly dynamic and complex environment.
    Technology Adoption
    March 7, 2024
  • Technology Adoption
    Josh Bersin: AI Implementations Look More and More Like Traditional IT Projects

    Josh Bersin's article "AI Implementations Look More and More Like Traditional IT Projects" presents five main findings:

    Data management: emphasizes the importance of data quality, governance, and architecture in AI projects, just as in IT projects.
    Security and access management: highlights the need for strong security measures and access controls in AI implementations.
    Engineering and monitoring: discusses the need for ongoing engineering support and monitoring, similar to IT infrastructure management.
    Vendor management: points out the importance of thorough vendor evaluation and selection in AI projects.
    Change management and training: stresses the need for effective change management and training, critical to both AI and IT projects.

    The original article follows:

    As we learn more and more about corporate implementations of AI, I'm struck by how they feel more like traditional IT projects every day. Yes, Generative AI systems have many special characteristics: they're intelligent, we need to train them, and they have radical and transformational impact on users. And the back-end processing is expensive. But despite the talk about advanced models and life-like behavior, these projects have traditional aspects. I've talked with more than a dozen large companies about their various AI strategies and I want to encourage buyers to think about the basics.

    Finding 1: Corporate AI projects are all about the data.

    Unlike the implementation of a new ERP system, payroll system, recruiting, or learning platform, an AI platform is completely data dependent. Regardless of the product you're buying (an intelligent agent like Galileo™, an intelligent recruiting system like Eightfold, or an AI-enabling platform to provide sales productivity), success depends on your data strategy. If your enterprise data is a mess, the AI won't suddenly make sense of it.

    This week I read a story about Microsoft's Copilot promoting election lies and conspiracy theories. While I can't tell how widespread this may be, it simply points out that "you own the data quality, training, and data security" of your AI systems.

    Walmart's My Assistant AI for employees already proved itself to be 2-3x more accurate at handling employee inquiries about benefits, for example. But in order to do this the company took advantage of an amazing IT architecture that brings all employee information into a single profile, a mobile experience with years of development, and a strong architecture for global security.
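The data-management point above can be made concrete. Below is a minimal, hypothetical sketch (the record fields, the controlled vocabulary, and the two-year staleness rule are illustrative assumptions, not anything the article prescribes) of enforcing consistent metadata on corpus documents before they are loaded into an AI system:

```python
from dataclasses import dataclass
from datetime import date

# Hypothetical controlled vocabulary -- a real project would derive this
# from its own information architecture.
ALLOWED_TYPES = {"policy", "research_note", "faq", "case_study"}

@dataclass
class CorpusDocument:
    doc_id: str
    title: str
    doc_type: str          # must come from ALLOWED_TYPES
    owner: str             # accountable data owner
    last_reviewed: date    # stale documents degrade answer quality
    body: str

def validate(doc: CorpusDocument, max_age_days: int = 730) -> list[str]:
    """Return a list of metadata problems; an empty list means the doc is loadable."""
    problems = []
    if doc.doc_type not in ALLOWED_TYPES:
        problems.append(f"unknown doc_type: {doc.doc_type!r}")
    if not doc.owner:
        problems.append("missing data owner")
    if (date.today() - doc.last_reviewed).days > max_age_days:
        problems.append("not reviewed within the allowed window")
    if not doc.body.strip():
        problems.append("empty body")
    return problems
```

Gating ingestion on checks like these is one way a data management team can keep "garbage in" from reaching the model, and the same validation can run on a schedule to flag documents that have gone stale since loading.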
    One of our clients, a large defense contractor, is exploring the use of AI to revolutionize its massive knowledge management environment. While we know that Gen AI can add tremendous value here, the big question is "what data should we load" and how do we segment the data so the right people access the right information? They're now working on that project.

    During our design of Galileo we spent almost a year combing through the information we've amassed for 25 years to build a corpus that delivers meaningful answers. Luckily we had been focused on data management from the beginning, but if we didn't have a solid data architecture (with consistent metadata and information types), the project would have been difficult.

    So core to these projects is a data management team who understands data sources, metadata, and data integration tools. And once the new AI system is working, we have to train it, update it, and remove bias and errors on a regular basis.

    Finding 2: Corporate AI projects need heavy focus on security and access management.

    Let's suppose you find a tool, platform, or application that delivers a groundbreaking solution to your employees. It could be a sales automation system, an AI-powered recruiting system, or an AI application to help call center agents handle problems. Who gets access to what? How do you "layer" the corpus to make sure the right people see what they need?

    This kind of exercise is the same thing we did at IBM in the 1980s, when we implemented the complex but critically important system called RACF. I hate to reveal my age, but RACF designers thought through these issues of data security and access management many years ago. AI systems need a similar set of tools, and since the LLM has a tendency to "consolidate and aggregate" everything into the model, we may need multiple models for different users.
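The "layering" idea described here is essentially retrieval-time access control: filter what the model is allowed to see before it sees it. A minimal sketch, under the assumption of a retrieval-augmented setup (the chunk labels, role names, and keyword matching are all invented for illustration):

```python
# Tag every chunk of the corpus with an access label, and filter
# retrieved chunks against the user's clearances *before* they are
# passed to the LLM -- restricted text never reaches the model context.

CHUNKS = [
    {"text": "Benefits enrollment opens in November.", "label": "all_employees"},
    {"text": "Grade 7 salary band details.",           "label": "hr_comp"},
    {"text": "Pending reorg of the sales team.",       "label": "executives"},
]

ROLE_CLEARANCES = {
    "employee":   {"all_employees"},
    "hr_partner": {"all_employees", "hr_comp"},
    "executive":  {"all_employees", "hr_comp", "executives"},
}

def retrieve(query: str, role: str) -> list[str]:
    """Naive keyword retrieval, restricted to chunks the role may see."""
    allowed = ROLE_CLEARANCES[role]
    visible = [c for c in CHUNKS if c["label"] in allowed]
    words = query.lower().split()
    return [c["text"] for c in visible
            if any(w in c["text"].lower() for w in words)]
```

The design choice here is that access control lives in the retrieval layer rather than in the model: a single shared model can then serve all roles, which is the alternative to maintaining "multiple models for different users."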
    In the case of HR, if we build a talent intelligence database using Eightfold, Seekout, or Gloat which includes job titles, skills, levels, and details about credentials and job history, and then we decide to add "salary"... oops. All of a sudden we have a data privacy problem.

    I just finished an in-depth discussion with SAP SuccessFactors going through the AI architecture, and what you see is a set of "mini AI apps" developed to operate in Joule (SAP's copilot) for various use cases. SAP has spent years building workflows, access patterns, and various levels of user security. They designed the system to handle confidential data securely.

    Remember also that tools like ChatGPT, which access the internet, can possibly import or leak data in a harmful way. And users may accidentally use the Gen AI tools to create unacceptable content or dangerous communications, or invoke other "jailbreak" behaviors. In your talent intelligence strategy, how will you manage payroll data and other private information? If the LLM uses this data for analysis we have to make sure that only appropriate users can see it.

    Finding 3: Corporate AI projects need focus on "prompt engineering" and system monitoring.

    In a typical IT project we spend a lot of time on the user experience. We design portals, screens, mobile apps, and experiences with the help of UI designers, artists, and craftsmen. But in Gen AI systems we want the user to "tell us what they're looking for." How do we train or support the user in prompting the system well?

    If you've ever tried to use a support chatbot from a company like Paypal you know how difficult this can be. I spent weeks trying to get Paypal's bot to tell me how to shut down my account, but it never came close to giving me the right answer. (Eventually I figured it out, even though I still get invoices from a contractor who is now deceased!) We have to think about these issues.
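One lightweight way to support users in prompting well is the prompt-library pattern: named, parameterized templates so users fill in slots instead of writing free-form prompts. A hypothetical sketch (the task names and template wording are invented for illustration, not taken from any product mentioned here):

```python
# A prompt library maps a task name to a parameterized template.
# Users pick a task and supply slot values; the library produces a
# well-structured prompt that has been tested against the system.

PROMPT_LIBRARY = {
    "pipeline_status": (
        "List all candidates for the {role} requisition currently at "
        "stage {stage} of the pipeline, with days in stage for each."
    ),
    "policy_lookup": (
        "Summarize our policy on {topic} for an employee in {country}, "
        "citing the source document."
    ),
}

def build_prompt(task: str, **slots: str) -> str:
    """Render a library prompt, failing loudly on unknown tasks or missing slots."""
    if task not in PROMPT_LIBRARY:
        raise KeyError(f"no template for task {task!r}")
    # str.format raises KeyError if a required slot is missing.
    return PROMPT_LIBRARY[task].format(**slots)
```

Because every rendered prompt comes from a known template, the monitoring work described next gets easier too: interactions can be grouped by task name to find which templates produce answers that don't work.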
    In our case, we've built a "prompt library" and a series of workflows to help HR professionals get the most out of Galileo and make the system easy to use. And vendors like Paradox, Visier (Vee), and SAP are building sophisticated workflows that let users ask a simple question ("what candidates are at stage 3 of the pipeline") and get a well-formatted answer. If you ask a recruiting bot something like "who are the top candidates for this position" and plug it into the ATS, will it give you a good answer? I'm not sure, to be honest, so the vendors (or you) have to train it and build workflows to predict what users will ask.

    This means we'll be monitoring these systems, looking at interactions that don't work, and constantly tuning them to get better. A few years ago I interviewed the VP of Digital Transformation at DBS (Development Bank of Singapore), one of the most sophisticated digital banks in the world. He told me they built an entire team to watch every click on the website so they could constantly move buttons, simplify interfaces, and make information easier to find. We're going to need to do the same thing with AI, since we can't really predict what questions people will ask.

    Finding 4: Vendors will need to be vetted.

    The next "traditional IT" topic is going to be the vetting of vendors. If I were a large bank or insurance company and I was looking at advanced AI systems, I would scrutinize the vendor's reputation and experience in detail. Just because a firm like OpenAI has built a great LLM doesn't mean that they, as a vendor, are capable of meeting your needs. Does the vendor have the resources, expertise, and enterprise feature set you require?

    I recently talked with a large enterprise in the Middle East which has major facilities in Saudi Arabia, Dubai, and other countries in the region. They do not and will not let user information, queries, or generated data leave their jurisdiction. Does the vendor you select have the ability to handle this requirement?
    Small AI vendors will struggle with these issues, leading IT to do risk assessment in a new way. There are also consultants popping up who specialize in "bias detection" or testing of AI systems. Large companies can do this themselves, but I expect that over time there will be consulting firms who help you evaluate the accuracy and quality of these systems. If the system is trained on your data, how well have you tested it? In many cases the vendor-provided AI uses data from the outside world: what data is it using and how safe is it for your application?

    Finding 5: Change management, training, and organization design are critical.

    Finally, as with all technology projects, we have to think about change management and communication. What is this system designed to do? How will it impact your job? What should you do if the answers are not clear or correct? All these issues are important.

    There's a need for user training. Our experience shows that users adopt these systems quickly, but they may not understand how to ask a question or how to interpret an answer. You may need to create prompt libraries (like Galileo), or interactive conversation journeys. And then offer support so users can resolve answers which are wrong, unclear, or inconsistent.

    And most importantly of all, there's the issue of roles and org design. Suppose we offer an intelligent system to let sales people quickly find answers to product questions, pricing, and customer history. What is the new role of sales ops? Do we have staff to update and maintain the quality of the data? Should we reorganize our sales team as a result? We've already discovered that Galileo really breaks down barriers within HR, for example, showing business partners or HR leaders how to handle issues that may be in another person's domain. These are wonderful outcomes which should encourage leaders to rethink how the roles are defined.
    In our company, as we use AI for our research, I see our research team operating at a higher level. People are sharing information, analyzing cross-domain information more quickly, and taking advantage of interviews and external data at high speed. They're writing articles more quickly and can now translate material into multiple languages. Our member support and advisory team, who often rely on analysts for expertise, are quickly becoming consultants. And as we release Galileo to clients, the level of questions and inquiries will become more sophisticated. This process will happen in every sales organization, customer service organization, engineering team, finance team, and HR team. Imagine the "new questions" people will ask.

    Bottom Line: Corporate AI Systems Become IT Projects

    At the end of the day the AI technology revolution will require lots of traditional IT practices. While AI applications are groundbreaking and powerful, the implementation issues are more traditional than you think.

    I will never forget the failed implementation of Siebel during my days at Sybase. The company was enamored with the platform, bought it, and forced us to use it. Yet the company never told us why they bought it, explained how to use it, or built workflows and job roles to embed it into the company. Within a year Sybase dumped the system after the sales organization simply rejected it. Nobody wants an outcome like that with something as important as AI.

    As you learn and become more enamored with the power of AI, I encourage you to think about the other tech projects you've worked on. It's time to move beyond the hype and excitement and think about real-world success.
    Technology Adoption
    December 17, 2023