• Training
    HR: Wake up! Employees trust AI more than HR

    For years I have been a supporter and friend of HR. In every interaction with HR teams, I have been impressed by their passion, dedication, and goodwill. Yet despite our best efforts, a recent survey of 851 workplace professionals found that "employees trust AI more than HR." What? How can that be? Before you dismiss the result, let me walk through the data. It is not as simple as it looks.

    What does the data reveal?

    1. AI is perceived as more trustworthy. Asked "Do you trust AI or HR professionals more?", 54% of respondents said AI, versus 27% for HR. Strange as it sounds, this is really a question of trust. Employees know managers are biased, so any performance review, raise, or other feedback delivered through HR may be colored by some bias (including recency bias). AI, by contrast, has no "personal opinions." When grounded in real data, its decisions are often seen as more trustworthy: 65% of respondents believe AI tools will be used fairly. This makes sense. We have crossed the chasm from fearing that AI will destroy the world to viewing it as a statistical, data-driven decision system. And you can ask AI "why did you select this candidate" or "why did you rate this employee this way" and get a precise, explicit answer. (People often struggle to explain their own decisions clearly.)

    2. AI is already trusted for performance reviews. Although few AI performance-review tools are on the market today (Rippling's is one example), 39% of respondents believe AI performance reviews would be fairer, and 33% believe AI-based pay decisions would be unbiased. Again, this likely reflects AI's ability to explain its decisions clearly, where managers often rely on "gut feel."

    3. AI is popular as a career coach. Asked "Do you value AI tools' ability to coach you on career goal setting?", 64% of respondents said yes. This again signals employees' appetite for feedback and coaching, something many managers do poorly (or are not open enough about).

    This is not a rejection of HR; it is a question about trust in managers. To me, the data makes three important points, each of which may surprise you:

    1. Employees doubt managers' decision-making. We do not always trust managers to make fair, unbiased decisions about hiring, performance, and pay. Employees know bias exists, so they want a system that can select and evaluate them more fairly.

    2. AI has shifted from "feared" to "trusted." We have crossed the psychological chasm of "AI is scary" and increasingly see it as a trustworthy tool, which lets companies apply AI to people decisions at much greater scale.

    3. HR must adapt to the AI era quickly. For HR departments, the way forward is clear: learn AI tools now, bring them into the most important HR domains, and invest the time to manage, train, and leverage them.

    As for HR's ability to earn trust, the logic now runs like this: building support and trust inside the company will increasingly depend on how HR selects and implements AI systems. Employee expectations are high, so we must meet them. Like it or not, AI is changing how we manage people.
    Training
    November 21, 2024
  • Training
    Josh Bersin: Rapid growth through efficiency, the theme of a new era

    The recent election sent mixed messages, but one call rang in leaders' ears: the US government must become more efficient. American voters seem unimpressed by the billions spent on the CHIPS and infrastructure acts: they want lower taxes and more accountable government.

    As Elon Musk explains it, cutting costs is a job of thousands of details. Every manager you hire creates more cost centers. This week, Amazon CEO Andy Jassy announced he wants to reduce the number of managers. As I discussed in my recent HBR article (Grow Your Company with Force Multipliers), if you design around "fewer people," your company can actually grow faster.

    What does it mean to optimize a company around fewer employees? It means changing many things:

    - Do not allocate headcount without an organizational development plan
    - Do not hire ahead of growth and expect revenue to follow (Salesforce hiring 1,000 sales reps to sell AI?)
    - Push managers to automate at the front line and continually rethink job responsibilities
    - Eliminate complicated job titles and reduce levels so people can move more easily
    - Stop paying managers based on "span of control"; evaluate on output, revenue, profitability, or growth
    - Double down on training, and start cross-training across career categories
    - Tell leaders who ask for more headcount to "come back with a plan for fewer"
    - Pay people with bonuses, and avoid big merit raises that institutionalize inequity
    - Invest heavily in AI and automation pilots, and let frontline employees bring you ideas
    - Avoid massive ERP upgrades unless you have a very clear business case
    - Foster a meritocratic culture that rewards skills and performance, not "hitting the numbers"

    Many HR practices will have to adapt. Most important is the idea of talent density, enabling everyone to perform at a high level, and rethinking how we hire so we do not unknowingly over-hire.

    We grew up with the old bell-curve idea: only 10% of people can be rated a 1, 20% a 2, and so on. This foolish idea was supposed to force people to compete for the coveted 1 rating. Logical as it sounds, it backfires. If you believe (as I do) that everyone can be a high performer, this system hurts the performance of the most ambitious people.

    Every employee can deliver outsized performance in the right role. Research shows that real team performance follows a "power curve": a few people (basketball's LeBron James or Stephen Curry) perform 10 to 100 times better than the rest. Other team members witness their success and find their own "hyper-performance" roles. If all the top-rated slots are taken, what motivates everyone else? We want everyone to feel they can be a superstar, and we want the company to help them find that opportunity.

    When we hire, the goal is not additive, filling "missing skills" or "missing headcount," but a "force multiplier" effect. Will the new hire multiply the performance of the whole team? Or are they just "filling a seemingly open position"? The latter is a journey into bureaucracy; the former is the secret to hyper-competitive growth. (We call this "talent density.")

    Why raise these points now? In a world with fewer employees, we will all face talent shortages. As AI accelerates, we must think of companies as networks of hyper-performing employees.

    I cannot predict what the federal government will do, but I hope this kind of thinking influences Washington. Yes, there are unions and other issues to consider, but even the world's largest institutions have limits. With automation now within reach, any "big company" can be threatened by a small one. The sooner you start thinking "lean," the greater your advantage.

    By Josh Bersin
    Training
    November 13, 2024
  • Training
    The 21 key roles of HR departments in organizations in 2024, from AIHR

    The 21 key HR roles, grouped in this list into "core roles," "compliance roles," and "emerging roles":

    Core roles
    - Attracting candidates: developing and executing strategies to attract the right candidates.
    - Selecting candidates: choosing the best fit from among applicants.
    - Hiring from within and from outside: managing internal promotion and external recruitment.
    - Performance appraisals: evaluating employees' work performance.
    - Compensation: designing and implementing compensation strategy.
    - Employee benefit management: designing and managing employee benefit programs.
    - Learning & development: keeping employee skills aligned with organizational needs.

    Compliance roles
    - Promotions: designing and operating promotion mechanisms.
    - Problem-solving groups: creating and managing problem-solving groups.
    - Total quality management (TQM): implementing TQM to raise product or service quality.
    - Information sharing: making sure important information reaches all employees in time.
    - Organizational development: improving organizational effectiveness through strategic HR management.
    - Survey management: running employee surveys and gathering feedback to improve the work environment.
    - Compliance management: ensuring the company follows all relevant laws and regulations.
    - Business partnering: HR as a strategic partner to management, providing people solutions.

    Emerging roles
    - Data & analytics management: using data analytics to support decision-making.
    - HR technology management: managing and optimizing HR technology and systems.
    - Change management: leading and managing organizational change.
    - Employee experience: designing and improving the overall employee experience.
    - Diversity, equity, inclusion, and belonging (DEIB): promoting and implementing diversity and inclusion strategy.
    - PR: managing the company's public image and handling PR crises.

    Source: https://www.aihr.com/blog/human-resources-roles/
    Training
    May 12, 2024
  • Training
    Valoir report: HR is not ready for AI. Are you?

    Research shows the top issues facing HR leaders are a lack of AI expertise and concerns about risk and compliance.

    Arlington, Virginia: A new global report from Valoir finds that although AI-driven automation appears inevitable, HR does not appear to be ready for it. The survey of more than 150 HR executives reveals a significant opportunity to leverage AI, but also widespread gaps in the policies, practices, and training needed to apply AI to HR safely and effectively.

    "Although many organizations have begun adopting generative AI, few have established the necessary policies, guidelines, and guardrails. As the protectors of employee data and the makers of company policy, HR leaders need to lead on AI policy and training, not only for their own teams but for the broader workforce."

    Points worth particular attention:

    "AI is rapidly being woven into HR, especially in recruiting, talent development, and workforce management. But introducing AI also brings risks such as data compromise, misinterpretation, bias, and inappropriate content," said Valoir CEO Rebecca Wettemann. "HR departments that face these challenges and take steps to reduce risk can significantly increase the benefit they get from AI."

    Automation and HR's potential for strategic transformation

    The report finds that 35% of the day-to-day work of HR staff is well suited to automation. Of all HR activities, recruiting has the most potential for AI and has become the area with the highest adoption: nearly a quarter of organizations already use AI-supported recruiting. Talent development, workforce management, and learning and development are also seen as key areas for AI automation.

    Generative AI is accelerating both HR productivity and HR risk

    Although more than three-quarters of HR workers had experimented with some form of generative AI by mid-2023, only 16% of organizations have specific policies on its use, and policies on its ethical use are rarer still. HR leaders cite a lack of AI skills and expertise as the biggest barrier to adoption, yet only 14% of organizations have effective AI-use training policies. Such policies are essential to ensure all employees can capture AI's benefits while minimizing its risks.

    "Although generative AI is being widely adopted, almost no organizations have established the necessary policies, guidelines, and protections. As the guardians of employee data and the makers of company policy, HR leaders must get ahead on AI policy and training, not only for their own teams but for the whole workforce," Wettemann said.

    Key takeaways from the report:
    - Integration challenges: HR faces challenges in managing AI use due to a lack of policies, practices, and training.
    - Early adoption vs. preparedness: While HR has been an early adopter of AI, most organizations still lack the frameworks for safe and effective AI adoption.
    - Rapid product release: Since the ChatGPT announcement, HR software vendors have rapidly released generative AI products with varying capabilities.
    - AI's double-edged sword: AI offers great benefits but also poses risks of "accidents" due to immature technology, inadequate policies, and lack of training.
    - Experimentation and automation opportunity: Over three-quarters of HR workers have experimented with generative AI; 35% of HR tasks could potentially be automated.
    - Current utilization: The main opportunities for HR benefit from AI are in recruiting, learning and development, and talent management, with recruiting leading in adoption.
    - Adoption barriers: The main hurdles are lack of AI expertise (28%), fear of compliance and risk (23%), and lack of resources (21%).
    - Policy and training deficiencies: Only 16% of organizations have policies on generative AI use, and less than 16% have training policies for AI usage.
    - Risk areas: Data compromises, AI hallucinations, bias and toxicity, and recommendation bias are identified as the primary risks.
    - Future plans: Over 50% of organizations plan to apply AI to recruiting, talent management, and training within the next 24 months.
    - Least likely adoption: Benefits management has the lowest likelihood of current or future AI adoption due to data-sensitivity concerns.
    - Skills gap: The significant gap in AI skills and expertise impacts the adoption and effective use of AI in HR.
    - HR's role: HR needs to develop policies, provide training, and ensure ethical AI use aligned with organizational principles.
    - Recommendations for HR: Experiment with generative AI, develop ethical AI-usage policies, create role-specific AI training, and identify employee groups at risk from AI automation.
    Training
    March 12, 2024
  • Training
    Josh Bersin: AI implementations look more and more like traditional IT projects

    Josh Bersin's article "AI Implementations Look More And More Like Traditional IT Projects" presents five main findings:

    - Data management: data quality, governance, and architecture matter in AI projects just as they do in IT projects.
    - Security and access management: AI implementations need strong security measures and access controls.
    - Engineering and monitoring: AI systems need ongoing engineering support and monitoring, much like IT infrastructure.
    - Vendor management: AI projects require thorough vendor evaluation and selection.
    - Change management and training: effective change management and training are essential to both AI and IT projects.

    The original article follows:

    As we learn more and more about corporate implementations of AI, I'm struck by how they feel more like traditional IT projects every day. Yes, Generative AI systems have many special characteristics: they're intelligent, we need to train them, and they have radical and transformational impact on users. And the back-end processing is expensive. But despite the talk about advanced models and life-like behavior, these projects have traditional aspects. I've talked with more than a dozen large companies about their various AI strategies and I want to encourage buyers to think about the basics.

    Finding 1: Corporate AI projects are all about the data.

    Unlike the implementation of a new ERP system, payroll system, recruiting, or learning platform, an AI platform is completely data dependent. Regardless of the product you're buying (an intelligent agent like Galileo™, an intelligent recruiting system like Eightfold, or an AI-enabling platform to provide sales productivity), success depends on your data strategy. If your enterprise data is a mess, the AI won't suddenly make sense of it.

    This week I read a story about Microsoft's Copilot promoting election lies and conspiracy theories. While I can't tell how widespread this may be, it simply points out that "you own the data quality, training, and data security" of your AI systems. Walmart's My Assistant AI for employees already proved itself to be 2-3x more accurate at handling employee inquiries about benefits, for example. But in order to do this the company took advantage of an amazing IT architecture that brings all employee information into a single profile, a mobile experience with years of development, and a strong architecture for global security.
    One of our clients, a large defense contractor, is exploring the use of AI to revolutionize its massive knowledge management environment. While we know that Gen AI can add tremendous value here, the big question is "what data should we load," and how do we segment the data so the right people access the right information? They're now working on that project.

    During our design of Galileo we spent almost a year combing through the information we've amassed over 25 years to build a corpus that delivers meaningful answers. Luckily we had been focused on data management from the beginning, but if we didn't have a solid data architecture (with consistent metadata and information types), the project would have been difficult. So core to these projects is a data management team who understands data sources, metadata, and data integration tools. And once the new AI system is working, we have to train it, update it, and remove bias and errors on a regular basis.

    Finding 2: Corporate AI projects need heavy focus on security and access management.

    Let's suppose you find a tool, platform, or application that delivers a groundbreaking solution to your employees. It could be a sales automation system, an AI-powered recruiting system, or an AI application to help call center agents handle problems. Who gets access to what? How do you "layer" the corpus to make sure the right people see what they need?

    This kind of exercise is the same thing we did at IBM in the 1980s, when we implemented the complex but critically important system called RACF. I hate to reveal my age, but RACF's designers thought through these issues of data security and access management many years ago. AI systems need a similar set of tools, and since the LLM has a tendency to "consolidate and aggregate" everything into the model, we may need multiple models for different users.
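    The "layering" described here, segmenting a corpus so each role can only retrieve what it is cleared to see, amounts to applying an access filter before any retrieval step ever runs. A minimal sketch follows; the role names, documents, and `allowed_roles` field are invented for illustration and are not taken from any product mentioned in the article.

```python
# Sketch: role-based corpus layering applied before retrieval.
# Each document carries an access label; retrieval only ever sees
# the slice of the corpus the requesting role is cleared for.

CORPUS = [
    {"id": 1, "text": "Benefits enrollment steps", "allowed_roles": {"employee", "hrbp", "admin"}},
    {"id": 2, "text": "Manager calibration guide", "allowed_roles": {"hrbp", "admin"}},
    {"id": 3, "text": "Salary band definitions",   "allowed_roles": {"admin"}},
]

def visible_corpus(role: str) -> list[dict]:
    """Return only the documents the given role may retrieve from."""
    return [doc for doc in CORPUS if role in doc["allowed_roles"]]

def retrieve(query: str, role: str) -> list[int]:
    """Toy keyword retrieval, restricted to the role's visible slice."""
    docs = visible_corpus(role)
    return [d["id"] for d in docs if query.lower() in d["text"].lower()]

if __name__ == "__main__":
    print(retrieve("salary", "employee"))  # [] : employees never see doc 3
    print(retrieve("salary", "admin"))     # [3]
```

    The point of the design is that the filter sits in front of retrieval, so a confidential document can never even be a candidate answer for an unauthorized user, which is the same guarantee RACF-style access management gave to files.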
    In the case of HR: if we build a talent intelligence database using Eightfold, SeekOut, or Gloat that includes job titles, skills, levels, and details about credentials and job history, and then we decide to add "salary"... oops. All of a sudden we have a data privacy problem.

    I just finished an in-depth discussion with SAP SuccessFactors going through the AI architecture, and what you see is a set of "mini AI apps" developed to operate in Joule (SAP's copilot) for various use cases. SAP has spent years building workflows, access patterns, and various levels of user security. They designed the system to handle confidential data securely.

    Remember also that tools like ChatGPT, which access the internet, can possibly import or leak data in a harmful way. And users may accidentally use Gen AI tools to create unacceptable content or dangerous communications, or invoke other "jailbreak" behaviors. In your talent intelligence strategy, how will you manage payroll data and other private information? If the LLM uses this data for analysis, we have to make sure that only appropriate users can see it.

    Finding 3: Corporate AI projects need focus on "prompt engineering" and system monitoring.

    In a typical IT project we spend a lot of time on the user experience. We design portals, screens, mobile apps, and experiences with the help of UI designers, artists, and craftsmen. But in Gen AI systems we want the user to "tell us what they're looking for." How do we train or support the user in prompting the system well?

    If you've ever tried to use a support chatbot from a company like PayPal you know how difficult this can be. I spent weeks trying to get PayPal's bot to tell me how to shut down my account, but it never came close to giving me the right answer. (Eventually I figured it out, even though I still get invoices from a contractor who is now deceased!) We have to think about these issues.
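    A "prompt library" of the kind the article goes on to describe can be as simple as a set of vetted templates that the application fills in from user input, so end users never have to engineer prompts themselves. This is a hedged sketch of that idea; the task names and template fields are made up for illustration and do not come from Galileo or any vendor named here.

```python
# Sketch: a tiny prompt library. The application exposes named tasks;
# each maps to a vetted prompt template filled from structured user
# input, so users ask simple questions instead of crafting prompts.

PROMPT_LIBRARY = {
    "pipeline_status": (
        "List all candidates for requisition {req_id} currently at "
        "stage {stage} of the pipeline, with days-in-stage."
    ),
    "policy_lookup": (
        "Summarize our policy on {topic} for an employee in {country}, "
        "citing the source document."
    ),
}

def build_prompt(task: str, **fields: str) -> str:
    """Fill the vetted template for a task; fail loudly on unknown
    tasks rather than sending a malformed prompt to the model."""
    try:
        template = PROMPT_LIBRARY[task]
    except KeyError:
        raise ValueError(f"unknown task: {task!r}")
    return template.format(**fields)  # raises KeyError if a field is missing

if __name__ == "__main__":
    print(build_prompt("pipeline_status", req_id="R-1043", stage="3"))
```

    Because every prompt passes through a known template, the monitoring the article calls for becomes tractable: you can log which tasks fail or produce bad answers and tune those templates, instead of trying to analyze free-form prompts one by one.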
    In our case, we've built a "prompt library" and a series of workflows to help HR professionals get the most out of Galileo and make the system easy to use. And vendors like Paradox, Visier (Vee), and SAP are building sophisticated workflows that let users ask a simple question ("what candidates are at stage 3 of the pipeline") and get a well-formatted answer.

    If you ask a recruiting bot something like "who are the top candidates for this position" and plug it into the ATS, will it give you a good answer? I'm not sure, to be honest, so the vendors (or you) have to train it and build workflows to predict what users will ask. This means we'll be monitoring these systems, looking at interactions that don't work, and constantly tuning them to get better.

    A few years ago I interviewed the VP of Digital Transformation at DBS (the Development Bank of Singapore), one of the most sophisticated digital banks in the world. He told me they built an entire team to watch every click on the website so they could constantly move buttons, simplify interfaces, and make information easier to find. We're going to need to do the same thing with AI, since we can't really predict what questions people will ask.

    Finding 4: Vendors will need to be vetted.

    The next "traditional IT" topic is going to be the vetting of vendors. If I were a large bank or insurance company looking at advanced AI systems, I would scrutinize the vendor's reputation and experience in detail. Just because a firm like OpenAI has built a great LLM doesn't mean that they, as a vendor, are capable of meeting your needs. Does the vendor have the resources, expertise, and enterprise feature set you require?

    I recently talked with a large enterprise in the Middle East with major facilities in Saudi Arabia, Dubai, and other countries in the region. They do not and will not let user information, queries, or generated data leave their jurisdiction. Does the vendor you select have the ability to handle this requirement?
    Small AI vendors will struggle with these issues, leading IT to do risk assessment in a new way. There are also consultants popping up who specialize in "bias detection" or testing of AI systems. Large companies can do this themselves, but I expect that over time there will be consulting firms who help you evaluate the accuracy and quality of these systems. If the system is trained on your data, how well have you tested it? In many cases the vendor-provided AI uses data from the outside world: what data is it using, and how safe is it for your application?

    Finding 5: Change management, training, and organization design are critical.

    Finally, as with all technology projects, we have to think about change management and communication. What is this system designed to do? How will it impact your job? What should you do if the answers are not clear or correct? All these issues are important.

    There's a need for user training. Our experience shows that users adopt these systems quickly, but they may not understand how to ask a question or how to interpret an answer. You may need to create prompt libraries (as with Galileo) or interactive conversation journeys, and then offer support so users can resolve answers that are wrong, unclear, or inconsistent.

    And most important of all, there's the issue of roles and org design. Suppose we offer an intelligent system that lets salespeople quickly find answers to product questions, pricing, and customer history. What is the new role of sales ops? Do we have staff to update and maintain the quality of the data? Should we reorganize our sales team as a result? We've already discovered that Galileo breaks down barriers within HR, for example by showing business partners or HR leaders how to handle issues that may be in another person's domain. These are wonderful outcomes which should encourage leaders to rethink how roles are defined.
    In our company, as we use AI for our research, I see our research team operating at a higher level. People are sharing information, analyzing cross-domain information more quickly, and taking advantage of interviews and external data at high speed. They're writing articles faster and can now translate material into multiple languages. Our member support and advisory team, who often rely on analysts for expertise, are quickly becoming consultants. And as we release Galileo to clients, the level of questions and inquiries will become more sophisticated. This process will happen in every sales organization, customer service organization, engineering team, finance team, and HR team. Imagine the "new questions" people will ask.

    Bottom Line: Corporate AI Systems Become IT Projects

    At the end of the day, the AI technology revolution will require lots of traditional IT practices. While AI applications are groundbreakingly powerful, the implementation issues are more traditional than you think.

    I will never forget the failed implementation of Siebel during my days at Sybase. The company was enamored with the platform, bought it, and forced us to use it. Yet the company never told us why they bought it, explained how to use it, or built the workflows and job roles to embed it into the company. In only a year Sybase dumped the system after the sales organization simply rejected it. Nobody wants an outcome like that with something as important as AI.

    As you learn about and become more enamored with the power of AI, I encourage you to think about the other tech projects you've worked on. It's time to move beyond the hype and excitement and think about real-world success.
    Training
    December 17, 2023
  • Training
    AI Is Transforming Corporate Learning Even Faster Than I Expected

    In "AI Is Transforming Corporate Learning Even Faster Than I Expected," Josh Bersin highlights AI's revolutionary impact on corporate learning and development (L&D). The L&D market is worth $340 billion and spans everything from onboarding to operating procedures. Traditional models are evolving with generative AI technologies such as Galileo™, which are changing how content is created, personalized, and delivered. The article explores the main use cases of AI in L&D, including content generation, personalized learning experiences, skills development, and replacing traditional training with AI-powered knowledge tools. Examples include Arist's AI content creation, Uplimit's personalized AI coaching, and Walmart's use of AI for just-in-time training. The transformation is profound, pointing to a future in which AI not only augments but redefines L&D strategy.

    Of all the areas AI touches, perhaps the biggest transformation is happening in corporate learning. After a year of experimentation, it is now clear that AI will revolutionize this field.

    Let's discuss what L&D really is. Corporate training is everywhere, which is why it is a $340 billion market. Everything that happens at work, from onboarding to filling out expense reports to complex operating procedures, requires training in some form. Even during downturns, corporate L&D spending holds steady at $1,200 to $1,500 per employee.

    Yet as L&D professionals know, the problem is enormously complex. There are hundreds of training platforms, tools, content libraries, and methodologies. I estimate the L&D technology space at more than $14 billion, and that does not even count search engines, knowledge management tools, or platforms like Zoom, Teams, and Webex. Over the years we have been through many evolutions: e-learning, blended learning, micro-learning, and now learning in the flow of work.

    Generative AI is about to change all of this forever.

    Consider the problem we face. Corporate training is not really about teaching; it is about creating an environment for learning. Traditional instructional design is instructor-led and process-centric, and it often performs poorly on the job. People learn in many ways, usually without a teacher: they look up references, copy what others are doing, and rely on managers, peers, and experts for help. So the traditional instructional design model must be extended to help people learn what they actually need.

    Enter generative AI, a technology designed to synthesize information. Generative AI tools like Galileo™ can understand, integrate, restructure, and deliver information from a large corpus in ways a traditional instructional designer cannot. This AI-driven approach to learning is not only more efficient but more effective, enabling learning in the flow of work.

    In the early days, learning in the flow of work meant searching for information and hoping to find something relevant. That process was time-consuming and often fruitless. Generative AI, through the magic of its neural networks, is now ready to solve these problems, like a Swiss Army knife for L&D.

    Here is a simple example. I asked Galileo™ (which is backed by 25 years of our research and case studies): "How do I deal with an employee who is always late? Give me a narrative to help." Instead of sending me to a management course or showing me a pile of videos, it simply answered the question. That kind of interaction is most of what corporate learning really is.

    Let me summarize the four main use cases of AI in learning and development:

    Generating content: AI can dramatically reduce the time and complexity of content creation. For example, the mobile learning tool Arist has an AI generation feature, Sidekick, that turns comprehensive operational information into a series of instructional activities. A process that could take weeks or even months can now be done in days or even hours. We use Arist at The Josh Bersin Academy, and our new mobile courses now launch almost monthly. Other tools such as Sana, Docebo Shape, and the user-centric learning platform 360Learning are just as exciting.

    Personalizing the learner experience: AI can tailor learning paths to individual needs, improving on the traditional model of assigning learning paths by job role. AI can understand the details of the content and use that information to personalize the learning experience. This approach is far more effective than cluttered learning experience platforms (LXPs), which typically cannot truly understand content details. Uplimit, a startup building an AI platform to help teach AI, uses its Cobot and other tools to give technical professionals studying AI personalized coaching and tips. Cornerstone's new AI fabric recommends courses by skill, the Sana platform connects tools like Galileo with learning, and new AI features in SuccessFactors give users curated views of learning based on role and activity.

    Identifying and developing skills: AI can identify skills in content and infer individuals' skills. This helps deliver the right training and determine whether it is effective. While many companies are working on advanced skills taxonomy strategies, the real value lies in the fine-grained, domain-specific skills that AI can identify and develop.
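    The skills matching sketched in that last use case, inferring what someone can do and recommending learning that closes the gap to a target role, reduces at its simplest to set operations over skill tags. A toy illustration follows; the skill names and courses are invented, and real talent-intelligence systems infer skills from much richer signals than explicit tags.

```python
# Sketch: recommend courses that teach skills a target role requires
# but the employee does not yet have. Real systems infer skills from
# job history, content, and peers; here they are explicit tags.

COURSES = {
    "People Analytics 101":  {"hr analytics", "sql"},
    "Workforce Planning":    {"workforce planning", "hr analytics"},
    "Coaching Foundations":  {"coaching"},
}

def skill_gap(employee_skills: set[str], role_skills: set[str]) -> set[str]:
    """Skills the role requires that the employee lacks."""
    return role_skills - employee_skills

def recommend(employee_skills: set[str], role_skills: set[str]) -> list[str]:
    """Courses teaching at least one missing skill, most relevant first."""
    gap = skill_gap(employee_skills, role_skills)
    scored = [(len(taught & gap), name) for name, taught in COURSES.items()]
    return [name for score, name in sorted(scored, reverse=True) if score > 0]

if __name__ == "__main__":
    employee = {"coaching", "sql"}
    target_role = {"hr analytics", "workforce planning", "coaching"}
    print(recommend(employee, target_role))
```

    The interesting work in real products is in the inference step (deriving the skill sets themselves); once the tags exist, matching and recommendation are straightforward, which is why fine-grained, accurate skill identification is where the value lies.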
    Talent intelligence pioneers Eightfold, Gloat, and SeekOut can infer employee skills and immediately recommend learning solutions. In fact, we are using this technology to launch our HR career navigator, due early next year.

    Replacing training with knowledge tools: Perhaps the most disruptive use case of AI in L&D is its potential to replace certain kinds of training entirely. AI can create intelligent agents or chatbots that deliver information and solve problems, potentially eliminating the need for some types of training. This approach is not only more efficient but more effective, because it gives people the information they need at the moment they need it. Walmart is implementing this today, and our new platform Galileo is helping companies like Mastercard and Rolls-Royce find HR and policy information on demand, without training. LinkedIn Learning is opening its soft-skills content to Gen AI search, and soon Microsoft Copilot will surface training through Viva Learning.

    The potential here is enormous. In all my years as an analyst, I have never seen a technology with this much potential. AI will transform the L&D landscape and reshape how we work, so that L&D professionals can spend their time consulting with the business.

    What should L&D professionals do? Spend some time understanding the technology, or take some of the new AI courses at The Josh Bersin Academy to learn more. As we continue to roll out tools like Galileo, I know every one of you will be amazed by the opportunities ahead. The future of L&D is here, and it is powered by AI.
    Training
    December 13, 2023