• Training
The 21 Key Roles of HR Departments in Organizations in 2024 (from AIHR)

The 21 key roles of an organization's HR department fall into three groups: "key roles", "compliance roles", and "emerging roles", as follows:

Key roles
- Attracting candidates: developing and executing strategies to attract the right candidates.
- Selecting candidates: choosing the best-suited candidates from the pool of applicants.
- Hiring from within and from outside: managing internal promotions and external recruitment.
- Performance appraisals: evaluating employees' work performance.
- Compensation: designing and implementing compensation strategies.
- Employee benefit management: designing and administering employee benefit programs.
- Learning & development: keeping employee skills aligned with organizational needs.

Compliance roles
- Promotions: designing and implementing promotion mechanisms.
- Problem-solving groups: creating and managing groups that solve problems.
- Total quality management (TQM): implementing TQM to improve the quality of services or products.
- Information sharing: ensuring important information reaches all employees in a timely way.
- Organizational development: improving organizational effectiveness through strategic HR management.
- Survey management: running employee surveys and collecting feedback to improve the work environment.
- Compliance management: ensuring the company complies with all relevant laws and regulations.
- Business partnering: HR acting as a strategic partner to management, providing HR solutions.

Emerging roles
- Data & analytics management: using data analysis to support decision-making.
- HR technology management: managing and optimizing HR-related technology and systems.
- Change management: leading and managing organizational change.
- Employee experience: designing and improving the overall employee experience at work.
- Diversity, equity, inclusion, and belonging (DEIB): promoting and implementing diversity and inclusion strategies.
- PR: managing the company's public image and handling PR crises.

Original article: https://www.aihr.com/blog/human-resources-roles/
    Training
    May 12, 2024
  • Training
Valoir report shows HR isn't ready for AI. Are you?

The research shows that the main problems facing HR leaders include a lack of AI expertise and concerns about risk and compliance.

Arlington, Virginia -- A new global report from Valoir shows that although AI-driven automation appears inevitable, HR does not appear ready for it. The survey of more than 150 HR executives reveals a significant opportunity to leverage AI, but also widespread gaps in the policies, practices, and training needed to apply AI to HR safely and effectively.

The following points deserve particular attention:

"AI is rapidly being woven into HR, particularly in recruiting, talent development, and workforce management. But introducing AI also brings risks such as data leaks, misinterpretation, bias, and inappropriate content," said Rebecca Wettemann, CEO of Valoir. "HR departments that face these challenges and take steps to reduce the risks can significantly increase the benefit they get from AI."

Automation and the potential for strategic transformation in HR

The report finds that 35% of the day-to-day work of HR staff is well suited to automation. Of all HR activities, recruiting has the greatest potential for AI and has become the area with the highest adoption: nearly a quarter of organizations are already using AI-supported recruiting processes. Talent development, workforce management, and learning and development are also seen as key areas for AI automation.

Generative AI is accelerating both productivity gains and risk in HR

Although more than three quarters of HR workers had experimented with some form of generative AI by mid-2023, only 16% of organizations have specific policies on its use, and policies on its ethical use are rarer still. HR leaders see the lack of AI skills and expertise as the biggest barrier to adoption, yet only 14% of organizations have effective policies for AI-usage training. These policies are essential if all employees are to capture the benefits of AI while minimizing its risks.

"Although generative AI is being widely adopted, almost no organizations have put the necessary policies, guidelines, and safeguards in place. As the custodians of employee data and the authors of company policy, HR leaders must get ahead on AI policy and training, not only for their own teams but for the workforce as a whole," said Wettemann.

Key takeaways from the report:
- Integration Challenges: HR faces challenges in managing AI use due to lack of policies, practices, and training.
- Early Adoption vs. Preparedness: While HR has been an early adopter of AI, most organizations still lack the proper frameworks for safe and effective AI adoption.
- Rapid Product Release: Following the ChatGPT announcement, HR software vendors have rapidly released generative AI products with varying capabilities.
- AI's Double-Edged Sword: AI offers great benefits but also poses risks of "accidents" due to immature technology, inadequate policies, and lack of training.
- AI Experimentation and Automation Opportunity: Over three-quarters of HR workers have experimented with generative AI; 35% of HR tasks could potentially be automated by AI.
- Current AI Utilization: The main opportunities for HR to benefit from AI are in recruiting, learning and development, and talent management, with recruiting leading in AI adoption.
- Adoption Barriers: Main hurdles include lack of AI expertise (28%), fear of compliance and risk issues (23%), and lack of resources (21%).
- Policy and Training Deficiencies: Only 16% of organizations have policies on generative AI use, and less than 16% have training policies for AI usage.
- Risk Areas in AI: Data compromises, AI hallucinations, bias and toxicity, and recommendation bias are identified as the primary risks.
- Future Plans for AI: Over 50% of organizations plan to apply AI in recruiting, talent management, and training within the next 24 months.
- Least Likely AI Adoption: Benefits management has the lowest likelihood of current or future AI adoption due to data sensitivity concerns.
- AI Skills and Expertise: The significant gap in AI skills and expertise impacts the adoption and effective use of AI in HR.
- HR's Role in AI Adoption: HR needs to develop policies, provide training, and ensure ethical AI use aligned with organizational principles.
- Recommendations for HR: Suggestions include experimenting with generative AI, developing policies for ethical AI use, creating role-specific AI training, and identifying employee groups at risk from AI automation.
    Training
    March 12, 2024
  • Training
Josh Bersin: AI Implementations Are Looking More and More Like Traditional IT Projects

Josh Bersin's article "AI Implementations Are Looking More and More Like Traditional IT Projects" presents five main findings:

- Data management: data quality, governance, and architecture matter in AI projects, much as they do in IT projects.
- Security and access management: AI implementations need strong security measures and access controls.
- Engineering and monitoring: AI systems need ongoing engineering support and monitoring, similar to IT infrastructure management.
- Vendor management: thorough vendor evaluation and selection are essential in AI projects.
- Change management and training: effective change management and training are critical for AI projects, just as they are for IT projects.

The original article follows; let's take a look:

As we learn more and more about corporate implementations of AI, I'm struck by how they feel more like traditional IT projects every day. Yes, Generative AI systems have many special characteristics: they're intelligent, we need to train them, and they have radical and transformational impact on users. And the back-end processing is expensive. But despite the talk about advanced models and life-like behavior, these projects have traditional aspects. I've talked with more than a dozen large companies about their various AI strategies and I want to encourage buyers to think about the basics.

Finding 1: Corporate AI projects are all about the data.

Unlike the implementation of a new ERP system, payroll system, recruiting, or learning platform, an AI platform is completely data dependent. Regardless of the product you're buying (an intelligent agent like Galileo™, an intelligent recruiting system like Eightfold, or an AI-enabling platform to provide sales productivity), success depends on your data strategy. If your enterprise data is a mess, the AI won't suddenly make sense of it.

This week I read a story about Microsoft's Copilot promoting election lies and conspiracy theories. While I can't tell how widespread this may be, it simply points out that "you own the data quality, training, and data security" of your AI systems. Walmart's My Assistant AI for employees already proved itself to be 2-3x more accurate at handling employee inquiries about benefits, for example. But in order to do this the company took advantage of an amazing IT architecture that brings all employee information into a single profile, a mobile experience with years of development, and a strong architecture for global security.

One of our clients, a large defense contractor, is exploring the use of AI to revolutionize its massive knowledge management environment. While we know that Gen AI can add tremendous value here, the big question is "what data should we load" and how do we segment the data so the right people access the right information? They're now working on that project.

During our design of Galileo we spent almost a year combing through the information we've amassed over 25 years to build a corpus that delivers meaningful answers. Luckily we had been focused on data management from the beginning, but if we didn't have a solid data architecture (with consistent metadata and information types), the project would have been difficult.

So core to these projects is a data management team who understands data sources, metadata, and data integration tools. And once the new AI system is working, we have to train it, update it, and remove bias and errors on a regular basis.
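Bersin's point about "consistent metadata and information types" can be made concrete with a small sketch. The snippet below is a hypothetical illustration, not anything from Galileo or any vendor's stack; the metadata field names are assumptions. It shows a corpus ingestion step that rejects documents whose metadata is incomplete, which is the kind of discipline a data management team would enforce before an AI system ever sees the content.

```python
from dataclasses import dataclass
from datetime import date

# Illustrative metadata schema; these field names are assumptions, not a real product's schema.
REQUIRED_FIELDS = {"title", "source", "audience", "effective_date", "info_type"}

@dataclass
class CorpusDocument:
    text: str
    metadata: dict

def metadata_problems(doc: CorpusDocument) -> list[str]:
    """Return a list of problems; an empty list means the document is safe to ingest."""
    problems = [f"missing field: {name}" for name in sorted(REQUIRED_FIELDS - doc.metadata.keys())]
    if not doc.text.strip():
        problems.append("empty document body")
    return problems

corpus: list[CorpusDocument] = []

candidate = CorpusDocument(
    text="Employees accrue 1.5 vacation days per month of service.",
    metadata={
        "title": "PTO accrual policy",
        "source": "HR policy manual",
        "audience": "all_employees",
        "effective_date": date(2024, 1, 1).isoformat(),
        "info_type": "policy",
    },
)

issues = metadata_problems(candidate)
if issues:
    print("rejected:", issues)   # inconsistent metadata is caught before the model ever sees it
else:
    corpus.append(candidate)     # only consistently tagged documents enter the corpus
```

The design choice the sketch illustrates is simply that metadata consistency is enforced at ingestion time, rather than hoping the model will compensate for messy data later.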
Finding 2: Corporate AI projects need heavy focus on security and access management.

Let's suppose you find a tool, platform, or application that delivers a groundbreaking solution to your employees. It could be a sales automation system, an AI-powered recruiting system, or an AI application to help call center agents handle problems. Who gets access to what? How do you "layer" the corpus to make sure the right people see what they need? This kind of exercise is the same thing we did at IBM in the 1980s, when we implemented a complex but critically important system called RACF. I hate to promote my age, but RACF's designers thought through these issues of data security and access management many years ago.

AI systems need a similar set of tools, and since the LLM has a tendency to "consolidate and aggregate" everything into the model, we may need multiple models for different users. In the case of HR, if we build a talent intelligence database using Eightfold, Seekout, or Gloat which includes job titles, skills, levels, and details about credentials and job history, and then decide to add "salary"... oops, all of a sudden we have a data privacy problem.

I just finished an in-depth discussion with SAP SuccessFactors going through the AI architecture, and what you see is a set of "mini AI apps" developed to operate in Joule (SAP's copilot) for various use cases. SAP has spent years building workflows, access patterns, and various levels of user security. They designed the system to handle confidential data securely.

Remember also that tools like ChatGPT, which access the internet, can possibly import or leak data in a harmful way. And users may accidentally use Gen AI tools to create unacceptable content, dangerous communications, and invoke other "jailbreak" behaviors. In your talent intelligence strategy, how will you manage payroll data and other private information? If the LLM uses this data for analysis, we have to make sure that only appropriate users can see it.
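To illustrate the "layering the corpus" idea, here is a minimal, hypothetical sketch; the role names and sensitivity tiers are assumptions, not SAP's or any vendor's design. It filters retrieved documents by the requesting user's clearance before they are placed in a model's context, so salary data never reaches a prompt built for a general employee.

```python
from dataclasses import dataclass

# Hypothetical access tiers; a real deployment would map to existing entitlements (RACF-style groups, SSO roles).
ROLE_CLEARANCE = {"employee": 1, "hr_business_partner": 2, "compensation_analyst": 3}

@dataclass
class Document:
    text: str
    sensitivity: int  # 1 = general, 2 = HR-internal, 3 = compensation / private data

def visible_to(role: str, documents: list[Document]) -> list[Document]:
    """Drop anything above the caller's clearance before it is ever placed in the model's context."""
    clearance = ROLE_CLEARANCE.get(role, 0)
    return [d for d in documents if d.sensitivity <= clearance]

corpus = [
    Document("Company holiday calendar for 2024.", sensitivity=1),
    Document("Succession planning notes for the finance team.", sensitivity=2),
    Document("Salary bands by level and geography.", sensitivity=3),
]

# An employee asking the assistant a question never gets salary data in the prompt context.
print([d.text for d in visible_to("employee", corpus)])
```

The point of filtering before generation, rather than after, is that content the LLM never receives cannot be consolidated into an answer or leaked to the wrong audience.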
Finding 3: Corporate AI projects need focus on "prompt engineering" and system monitoring.

In a typical IT project we spend a lot of time on the user experience. We design portals, screens, mobile apps, and experiences with the help of UI designers, artists, and craftsmen. But in Gen AI systems we want the user to "tell us what they're looking for." How do we train or support the user in prompting the system well?

If you've ever tried to use a support chatbot from a company like Paypal you know how difficult this can be. I spent weeks trying to get Paypal's bot to tell me how to shut down my account, but it never came close to giving me the right answer. (Eventually I figured it out, even though I still get invoices from a contractor who has since passed away!) We have to think about these issues.

In our case, we've built a "prompt library" and a series of workflows to help HR professionals get the most out of Galileo and make the system easy to use. And vendors like Paradox, Visier (Vee), and SAP are building sophisticated workflows that let users ask a simple question ("what candidates are at stage 3 of the pipeline") and get a well-formatted answer.

If you ask a recruiting bot something like "who are the top candidates for this position" and plug it into the ATS, will it give you a good answer? I'm not sure, to be honest, so the vendors (or you) have to train it and build workflows to predict what users will ask. This means we'll be monitoring these systems, looking at interactions that don't work, and constantly tuning them to get better.

A few years ago I interviewed the VP of Digital Transformation at DBS (Digital Bank of Singapore), one of the most sophisticated digital banks in the world. He told me they built an entire team to watch every click on the website so they could constantly move buttons, simplify interfaces, and make information easier to find. We're going to need to do the same thing with AI, since we can't really predict what questions people will ask.
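As a rough illustration of what a "prompt library" can look like in practice, the sketch below is purely hypothetical; the template names and wording are not Galileo's or any vendor's actual prompts. It stores vetted prompt templates and fills them with user-supplied parameters, so users ask a simple question and the system sends the model a well-formed request.

```python
import string

# A toy prompt library; template names and wording are illustrative, not any vendor's actual prompts.
PROMPT_LIBRARY = {
    "pipeline_status": string.Template(
        "List the candidates for the $job_title requisition who are at stage $stage of the pipeline, "
        "and summarize the next action for each one."
    ),
    "policy_question": string.Template(
        "Using only the HR policy documents provided, answer this employee question: $question. "
        "If the policy does not cover it, say so explicitly."
    ),
}

def build_prompt(name: str, **params: str) -> str:
    """Fill a vetted template so users send the model a consistent, well-formed request."""
    return PROMPT_LIBRARY[name].substitute(**params)

print(build_prompt("pipeline_status", job_title="Senior Recruiter", stage="3"))
```

A library like this also gives the monitoring team something concrete to tune: when an interaction fails, the template can be revised once for every user, rather than coaching each user individually.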
Finding 4: Vendors will need to be vetted.

The next "traditional IT" topic is going to be the vetting of vendors. If I were a large bank or insurance company looking at advanced AI systems, I would scrutinize the vendor's reputation and experience in detail. Just because a firm like OpenAI has built a great LLM doesn't mean that they, as a vendor, are capable of meeting your needs. Does the vendor have the resources, expertise, and enterprise feature set you require?

I recently talked with a large enterprise in the Middle East with major facilities in Saudi Arabia, Dubai, and other countries in the region. They do not and will not let user information, queries, or generated data leave their jurisdiction. Does the vendor you select have the ability to handle this requirement? Small AI vendors will struggle with these issues, leading IT to do risk assessment in a new way.

There are also consultants popping up who specialize in "bias detection" or testing of AI systems. Large companies can do this themselves, but I expect that over time there will be consulting firms who help you evaluate the accuracy and quality of these systems. If the system is trained on your data, how well have you tested it? In many cases the vendor-provided AI uses data from the outside world: what data is it using and how safe is it for your application?

Finding 5: Change management, training, and organization design are critical.

Finally, as with all technology projects, we have to think about change management and communication. What is this system designed to do? How will it impact your job? What should you do if the answers are not clear or correct? All these issues are important.

There's a need for user training. Our experience shows that users adopt these systems quickly, but they may not understand how to ask a question or how to interpret an answer. You may need to create prompt libraries (like Galileo), or interactive conversation journeys. And then offer support so users can resolve answers which are wrong, unclear, or inconsistent.

And most importantly of all, there's the issue of roles and org design. Suppose we offer an intelligent system to let sales people quickly find answers to product questions, pricing, and customer history. What is the new role of sales ops? Do we have staff to update and maintain the quality of the data? Should we reorganize our sales team as a result? We've already discovered that Galileo really breaks down barriers within HR, for example, showing business partners or HR leaders how to handle issues that may be in another person's domain. These are wonderful outcomes which should encourage leaders to rethink how roles are defined.

In our company, as we use AI for our research, I see our research team operating at a higher level. People are sharing information, analyzing cross-domain information more quickly, and taking advantage of interviews and external data at high speed. They're writing articles more quickly and can now translate material into multiple languages. Our member support and advisory team, who often rely on analysts for expertise, are quickly becoming consultants. And as we release Galileo to clients, the level of questions and inquiries will become more sophisticated. This process will happen in every sales organization, customer service organization, engineering team, finance, and HR team. Imagine the "new questions" people will ask.

Bottom Line: Corporate AI Systems Become IT Projects

At the end of the day, the AI technology revolution will require lots of traditional IT practices. While AI applications are groundbreakingly powerful, the implementation issues are more traditional than you think. I will never forget the failed implementation of Siebel during my days at Sybase. The company was enamored with the platform, bought it, and forced us to use it. Yet the company never told us why they bought it, explained how to use it, or built workflows and job roles to embed it into the company. Within a year Sybase dumped the system after the sales organization simply rejected it. Nobody wants an outcome like that with something as important as AI.

As you learn and become more enamored with the power of AI, I encourage you to think about the other tech projects you've worked on. It's time to move beyond the hype and excitement and think about real-world success.
    Training
    December 17, 2023
  • Training
AI Is Transforming Corporate Learning Even Faster Than I Expected

In "AI Is Transforming Corporate Learning Even Faster Than I Expected", Josh Bersin highlights AI's revolutionary impact on corporate learning and development (L&D). The L&D market is worth $340 billion and covers everything from employee onboarding to operating procedures. The traditional model is evolving with generative AI technologies such as Galileo™, which are changing how content is created, personalized, and delivered. The article explores the main use cases for AI in L&D, including content generation, personalized learning experiences, skills development, and the replacement of traditional training with AI-powered knowledge tools. Examples include Arist's AI content creation, Uplimit's personalized AI coaching, and Walmart's use of AI for on-demand training. The transformation is profound, pointing to a future in which AI not only augments but redefines L&D strategy.

Of all the areas affected by AI, perhaps the biggest change is happening in corporate learning. After a year of experimentation, it is now clear that AI will completely transform this field.

Let's talk about what L&D really is. Corporate training is everywhere, which is why it is a $340 billion market. Everything that happens at work, from onboarding to filling out expense reports to complex operating procedures, requires some form of training. Even during downturns, corporate spending on L&D has held steady at roughly $1,200-1,500 per employee.

Yet, as L&D professionals know, the problem is enormously complex. There are hundreds of training platforms, tools, content libraries, and methodologies. I estimate the L&D technology space at more than $14 billion, and that does not even include systems such as search engines, knowledge management tools, and platforms like Zoom, Teams, and Webex. Over the years we have been through many evolutions: e-learning, blended learning, micro-learning, and now learning in the flow of work.

Generative AI is about to change all of this forever.

Consider the problem we face. Corporate training is not really about teaching; it is about creating an environment in which people learn. Traditional instructional design is instructor-led and process-centric, and it often falls short on the job. People learn in many ways, often without a teacher: they look up reference material, copy what others are doing, and rely on managers, peers, and experts for help. So the traditional instructional design model has to be extended to help people learn what they actually need.

Enter generative AI, a technology designed to synthesize information. Generative AI tools such as Galileo™ can understand, integrate, restructure, and deliver information from a large corpus in ways a traditional instructional designer cannot. This AI-driven approach to learning is not only more efficient but more effective, enabling learning in the flow of work.

In the early days, learning in the flow of work meant searching for information and hoping to find something relevant. The process was time-consuming and often fruitless. Generative AI, through the magic of its neural networks, is now poised to solve these problems, acting like a Swiss Army knife for L&D.

Here is a simple example. I asked Galileo™ (which is backed by 25 years of research and case studies): "How do I deal with an employee who is always late? Can you give me a narrative to help?" Rather than sending me to a management course or showing me a pile of videos, it simply answered the question. This type of interaction is what most corporate learning consists of.

Let me summarize the four main use cases for AI in learning and development:

- Generating content: AI can dramatically reduce the time and complexity involved in content creation. For example, the mobile learning tool Arist has an AI generation feature, Sidekick, that can turn comprehensive operational information into a series of instructional activities. A process that used to take weeks or even months can now be completed in days or even hours. We use Arist at the Josh Bersin Academy, and our new mobile courses now launch almost every month. Other tools such as Sana, Docebo Shape, and the user-centric learning platform 360 Learning are equally exciting.

- Personalizing the learner experience: AI can help tailor learning paths to individual needs, improving on the traditional model of assigning learning paths by job role. AI can understand the details of the content and use that information to personalize the learning experience. This approach is far more effective than cluttered learning experience platforms (LXPs), which typically cannot truly understand the details of the content. Uplimit, a startup building an AI platform to help teach AI, is using its Cobot and other tools to give technical professionals learning AI personalized coaching and tips. Cornerstone's new AI architecture recommends courses by skill, the Sana platform connects tools like Galileo with learning, and new AI features in SuccessFactors give users a curated view of learning based on role and activity.

- Identifying and developing skills: AI can help identify the skills in content and infer the skills of individuals. This helps deliver the right training and determine whether it is effective. While many companies are working on advanced skills taxonomy strategies, the real value lies in the granular, domain-specific skills that AI can identify and develop. Talent intelligence pioneers Eightfold, Gloat, and SeekOut can infer employee skills and immediately recommend learning solutions. In fact, we are using this technology to launch our HR career navigator, which will be released early next year.

- Replacing training with knowledge tools: Perhaps the most disruptive use case for AI in L&D is its potential to replace certain types of training entirely. AI can create intelligent agents or chatbots that deliver information and solve problems, potentially eliminating the need for some kinds of training. This approach is not only more efficient but more effective, because it gives people the information they need at the moment they need it (a minimal sketch of this pattern appears at the end of this post). Walmart is implementing this today, and our new platform Galileo is helping companies such as Mastercard and Rolls-Royce find HR and policy information on demand, without training. LinkedIn Learning is opening its soft-skills content to Gen AI search, and soon Microsoft Copilot will surface training through Viva Learning.

The potential here is enormous

In all my years as an analyst, I have never seen a technology with this much potential. AI will completely reshape the L&D landscape and the way we work, so that L&D professionals can spend their time consulting to the business.

What should L&D professionals do? Spend some time getting to know the technology, or take some of the new AI courses at the Josh Bersin Academy to learn more. As we continue to roll out tools like Galileo, I know every one of you will be amazed by the opportunities ahead. The future of L&D has arrived, and it is powered by AI.
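As a rough, hypothetical illustration of the "replacing training with knowledge tools" pattern described above (nothing here is from Galileo's or Walmart's implementation; the policy snippets and keyword scoring are made-up stand-ins), the sketch below retrieves the most relevant policy text for an employee question and assembles a grounded prompt for a generative model.

```python
# The policy text and keyword scoring below are illustrative stand-ins; a production system would use
# a real search index and a vendor API for the generation step.
POLICIES = {
    "expenses": "Expense reports must be filed within 30 days and approved by your direct manager.",
    "late_arrival": "Repeated lateness should first be addressed in a documented one-on-one conversation.",
    "pto": "Unused vacation days carry over up to a maximum of five days per calendar year.",
}

def retrieve(question: str) -> str:
    """Pick the policy whose words overlap most with the question (a stand-in for semantic search)."""
    q_words = set(question.lower().replace("?", "").split())
    return max(POLICIES.values(), key=lambda text: len(q_words & set(text.lower().rstrip(".").split())))

def build_grounded_prompt(question: str) -> str:
    """Assemble the prompt a generative model would receive: the retrieved policy plus the question."""
    return (
        "Answer the employee's question using only this policy text:\n"
        f"{retrieve(question)}\n\n"
        f"Question: {question}"
    )

print(build_grounded_prompt("How many unused vacation days can I carry over?"))
```

The value of this pattern for L&D is that the answer arrives at the moment of need, grounded in the organization's own policy text, instead of requiring a course the employee would have to take in advance.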
    Training
    December 13, 2023