• AI applications
    Josh Bersin: AI implementations increasingly look like traditional IT projects. Josh Bersin's article of that title presents five major findings:

Data management: emphasizes the importance of data quality, governance, and architecture in AI projects, much as in IT projects.
Security and access management: highlights the importance of strong security measures and access controls in AI implementations.
Engineering and monitoring: discusses the need for ongoing engineering support and monitoring, similar to IT infrastructure management.
Vendor management: points out the importance of thorough vendor evaluation and selection in AI projects.
Change management and training: emphasizes the need for effective change management and training, which is critical to both AI and IT projects.

The original text follows; let's take a look:

As we learn more and more about corporate implementations of AI, I'm struck by how they feel more like traditional IT projects every day. Yes, Generative AI systems have many special characteristics: they're intelligent, we need to train them, and they have radical and transformational impact on users. And the back-end processing is expensive. But despite the talk about advanced models and life-like behavior, these projects have traditional aspects. I've talked with more than a dozen large companies about their various AI strategies and I want to encourage buyers to think about the basics.

Finding 1: Corporate AI projects are all about the data.

Unlike the implementation of a new ERP system, payroll system, recruiting, or learning platform, an AI platform is completely data dependent. Regardless of the product you're buying (an intelligent agent like Galileo™, an intelligent recruiting system like Eightfold, or an AI-enabling platform to provide sales productivity), success depends on your data strategy. If your enterprise data is a mess, the AI won't suddenly make sense of it. This week I read a story about Microsoft's Copilot promoting election lies and conspiracy theories. While I can't tell how widespread this may be, it simply points out that "you own the data quality, training, and data security" of your AI systems. Walmart's My Assistant AI for employees already proved itself to be 2-3x more accurate at handling employee inquiries about benefits, for example. But in order to do this the company took advantage of an amazing IT architecture that brings all employee information into a single profile, a mobile experience with years of development, and a strong architecture for global security.
One of our clients, a large defense contractor, is exploring the use of AI to revolutionize its massive knowledge management environment. While we know that Gen AI can add tremendous value here, the big question is "what data should we load" and how do we segment the data so the right people access the right information? They're now working on that project. During our design of Galileo we spent almost a year combing through the information we've amassed for 25 years to build a corpus that delivers meaningful answers. Luckily we had been focused on data management from the beginning, but if we didn't have a solid data architecture (with consistent metadata and information types), the project would have been difficult. So core to these projects is a data management team who understands data sources, metadata, and data integration tools. And once the new AI system is working, we have to train it, update it, and remove bias and errors on a regular basis.

Finding 2: Corporate AI projects need heavy focus on security and access management.

Let's suppose you find a tool, platform, or application that delivers a groundbreaking solution to your employees. It could be a sales automation system, an AI-powered recruiting system, or an AI application to help call center agents handle problems. Who gets access to what? How do you "layer" the corpus to make sure the right people see what they need? This kind of exercise is the same thing we did at IBM in the 1980s, when we implemented this complex but critically important system called RACF. I hate to promote my age, but RACF designers thought through these issues of data security and access management many years ago. AI systems need a similar set of tools, and since the LLM has a tendency to "consolidate and aggregate" everything into the model, we may need multiple models for different users.
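One way to picture the corpus "layering" described above is an entitlement check that filters documents before any of them reach the model's context. The sketch below is purely illustrative, not any vendor's actual API: the `Document` and `User` structures, the group names, and `visible_corpus` are all assumptions made up for the example.

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class Document:
    doc_id: str
    text: str
    groups: frozenset  # access groups allowed to see this document


@dataclass(frozen=True)
class User:
    user_id: str
    groups: frozenset  # entitlements granted to this user


def visible_corpus(corpus, user):
    """Return only the documents whose access groups overlap the user's.

    The filter runs *before* retrieval/generation, so content a user is
    not entitled to never enters the model's context window.
    """
    return [d for d in corpus if d.groups & user.groups]


corpus = [
    Document("d1", "Benefits policy overview", frozenset({"all-employees"})),
    Document("d2", "Executive compensation bands", frozenset({"hr-comp"})),
]

employee = User("u1", frozenset({"all-employees"}))
comp_analyst = User("u2", frozenset({"all-employees", "hr-comp"}))

print([d.doc_id for d in visible_corpus(corpus, employee)])      # ['d1']
print([d.doc_id for d in visible_corpus(corpus, comp_analyst)])  # ['d1', 'd2']
```

In a real system the same check would be expressed as a metadata filter on the vector-store query rather than a Python list, but the principle is the one the RACF comparison points at: entitlements are enforced before generation, not after.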
In the case of HR, if we build a talent intelligence database using Eightfold, Seekout, or Gloat that includes job titles, skills, levels, and details about credentials and job history, and then we decide to add "salary" … oops … all of a sudden we have a data privacy problem. I just finished an in-depth discussion with SAP SuccessFactors going through the AI architecture, and what you see is a set of "mini AI apps" developed to operate in Joule (SAP's copilot) for various use cases. SAP has spent years building workflows, access patterns, and various levels of user security. They designed the system to handle confidential data securely. Remember also that tools like ChatGPT, which access the internet, can possibly import or leak data in a harmful way. And users may accidentally use the Gen AI tools to create unacceptable content or dangerous communications, or invoke other "jailbreak" behaviors. In your talent intelligence strategy, how will you manage payroll data and other private information? If the LLM uses this data for analysis we have to make sure that only appropriate users can see it.

Finding 3: Corporate AI projects need focus on "prompt engineering" and system monitoring.

In a typical IT project we spend a lot of time on the user experience. We design portals, screens, mobile apps, and experiences with the help of UI designers, artists, and craftsmen. But in Gen AI systems we want the user to "tell us what they're looking for." How do we train or support the user in prompting the system well? If you've ever tried to use a support chatbot from a company like PayPal you know how difficult this can be. I spent weeks trying to get PayPal's bot to tell me how to shut down my account, but it never came close to giving me the right answer. (Eventually I figured it out, even though I still get invoices from a contractor who is now deceased!) We have to think about these issues.
In our case, we've built a "prompt library" and a series of workflows to help HR professionals get the most out of Galileo and make the system easy to use. And vendors like Paradox, Visier (Vee), and SAP are building sophisticated workflows that let users ask a simple question ("what candidates are at stage 3 of the pipeline") and get a well-formatted answer. If you ask a recruiting bot something like "who are the top candidates for this position" and plug it into the ATS, will it give you a good answer? I'm not sure, to be honest – so the vendors (or you) have to train it and build workflows to predict what users will ask. This means we'll be monitoring these systems, looking at interactions that don't work, and constantly tuning them to get better. A few years ago I interviewed the VP of Digital Transformation at DBS (the Development Bank of Singapore), one of the most sophisticated digital banks in the world. He told me they built an entire team to watch every click on the website so they could constantly move buttons, simplify interfaces, and make information easier to find. We're going to need to do the same thing with AI, since we can't really predict what questions people will ask.

Finding 4: Vendors will need to be vetted.

The next "traditional IT" topic is going to be the vetting of vendors. If I were a large bank or insurance company and I was looking at advanced AI systems, I would scrutinize the vendor's reputation and experience in detail. Just because a firm like OpenAI has built a great LLM doesn't mean that they, as a vendor, are capable of meeting your needs. Does the vendor have the resources, expertise, and enterprise feature set you require? I recently talked with a large enterprise in the Middle East that has major facilities in Saudi Arabia, Dubai, and other countries in the region. They do not and will not let user information, queries, or generated data leave their jurisdiction. Does the vendor you select have the ability to handle this requirement?
Small AI vendors will struggle with these issues, leading IT to do risk assessment in a new way. There are also consultants popping up who specialize in "bias detection" or testing of AI systems. Large companies can do this themselves, but I expect that over time there will be consulting firms who help you evaluate the accuracy and quality of these systems. If the system is trained on your data, how well have you tested it? In many cases the vendor-provided AI uses data from the outside world: what data is it using and how safe is it for your application?

Finding 5: Change management, training, and organization design are critical.

Finally, as with all technology projects, we have to think about change management and communication. What is this system designed to do? How will it impact your job? What should you do if the answers are not clear or correct? All these issues are important. There's a need for user training. Our experience shows that users adopt these systems quickly, but they may not understand how to ask a question or how to interpret an answer. You may need to create prompt libraries (like Galileo), or interactive conversation journeys. And then offer support so users can resolve answers which are wrong, unclear, or inconsistent.

And most importantly of all, there's the issue of roles and org design. Suppose we offer an intelligent system to let sales people quickly find answers to product questions, pricing, and customer history. What is the new role of sales ops? Do we have staff to update and maintain the quality of the data? Should we reorganize our sales team as a result? We've already discovered that Galileo really breaks down barriers within HR, for example, showing business partners or HR leaders how to handle issues that may be in another person's domain. These are wonderful outcomes which should encourage leaders to rethink how the roles are defined.
In our company, as we use AI for our research, I see our research team operating at a higher level. People are sharing information, analyzing cross-domain information more quickly, and taking advantage of interviews and external data at high speed. They're writing articles more quickly and can now translate material into multiple languages. Our member support and advisory team, who often rely on analysts for expertise, are quickly becoming consultants. And as we release Galileo to clients, the level of questions and inquiries will become more sophisticated. This process will happen in every sales organization, customer service organization, engineering team, finance, and HR team. Imagine the "new questions" people will ask.

Bottom Line: Corporate AI Systems Become IT Projects

At the end of the day the AI technology revolution will require lots of traditional IT practices. While AI applications are groundbreaking in their power, the implementation issues are more traditional than you think. I will never forget the failed implementation of Siebel during my days at Sybase. The company was enamored with the platform, bought it, and forced us to use it. Yet the company never told us why they bought it, explained how to use it, or built workflows and job roles to embed it into the company. Within a year Sybase dumped the system after the sales organization simply rejected it. Nobody wants an outcome like that with something as important as AI. As you learn and become more enamored with the power of AI, I encourage you to think about the other tech projects you've worked on. It's time to move beyond the hype and excitement and think about real-world success.
    AI applications
    December 17, 2023
  • AI applications
    Video: Leading Through Transformation: The Future of HR in the AI Era. Jiajia Chen, Senior Group Product Manager, Nvidia. Watch it at: https://www.youtube.com/watch?v=toiy_sBDXHs

The following is a translated and edited transcript of the talk, for reference only:

Leading Through Transformation: The Future of HR in the AI Era

Welcome, everyone. I'm delighted to have the chance to discuss a topic that has been in the spotlight since late 2022. With the widespread success of ChatGPT, many people have begun asking: will I still have a job? For some parents the question takes a different form: will my children have jobs in the future? Before diving into that question, let me briefly introduce myself. My early career spanned several business domains, including human resources, and I later focused on AI product management. I hold several degrees, including a law degree, an MBA, a degree in economic science, and a degree in software engineering. I managed Nvidia's AI infrastructure product portfolio for several years. Late last year I moved to another product group, Nvidia Omniverse, a digital-twin platform for the industrial metaverse. Our enterprise customers can use Omniverse to create digital twins for the industrial metaverse by leveraging simulation and generative AI and by collaborating with a large ecosystem. Through these experiences I have developed a deep understanding of both AI and HR. In this talk I hope to offer a framework for thinking about how to lead transformation in the AI era, and how to stay relevant and grow faster than AI does.

AI is not a new concept. Let's quickly review a brief history of AI to set the stage for today's conversation. The field of AI was born in the 1950s. In 1950, Alan Turing proposed the concept of a universal machine that could imitate human intelligence. In 1956 the term "artificial intelligence" was coined. In the 1970s and 1980s, the early optimism around AI began to fade as progress fell short of high expectations; funding for AI research shrank, and the field went through a period known as the AI winter. During the AI winter, researchers focused on developing expert systems: rule-based systems designed to imitate the knowledge and decision-making of human experts in specific domains. This approach made some progress in practical applications such as medical diagnosis and industrial automation. In the 1980s, the focus of AI shifted to machine learning and neural networks. Machine-learning algorithms allow computers to learn from data and make predictions or decisions without being explicitly programmed. Neural networks, inspired by the structure of the human brain, attracted attention and were applied to a variety of tasks, including image and speech recognition. Thanks to the availability of massive amounts of data and advances in computing power, AI experienced a renaissance. Nvidia's contribution was pivotal.

The launch of ChatGPT in November 2022 marked a pivotal moment for AI. Generative AI is pushing the boundaries of what machines can create. AI is increasingly woven into all kinds of applications and industries, playing a role in finance, healthcare, cybersecurity, and beyond, transforming industries and creating new opportunities.

How many of you have tried ChatGPT? What do you like about it? Have you used it to draft emails, create training materials, or ask tricky questions to try to fool ChatGPT and prove that your human intelligence is superior? AI is expected to bring significant changes to the workplace along many dimensions.

Here are nine changes AI may bring.

First, higher productivity: will AI raise productivity and economic growth? Many expect so, but plenty of others will tell you that so far this generative-AI wave has not substantially raised productivity beyond providing some interesting toys. Some of you may have heard of the "productivity paradox," a phenomenon observed in the United States in the 1970s and 1980s. My prediction is that this will not happen with AI. AI can spread faster and requires less capital investment, because in the near term AI adoption is primarily a software revolution: most of the required infrastructure, such as computing devices, networks, and cloud services, is already in place. You can use ChatGPT, and the rapidly growing set of similar tools, on your phone right now.

Second, income inequality: will AI usher in an age of automated abundance, or will it only deepen existing inequality? A report from the U.S. National Bureau of Economic Research found that 50 to 70 percent of the change in U.S. wages since 1980 can be attributed to wage declines among blue-collar workers who were displaced or demoted by automation. AI, robotics, and sophisticated new technologies have led to a high concentration of wealth. Until recently, college-educated white-collar professionals had largely been spared the fate of less-educated workers. People with graduate degrees saw their salaries rise, while the pay of less-educated workers fell significantly. This problem will intensify, and lower-skilled white-collar workers will be affected as well.
Third, workforce upskilling and displacement risk: as certain tasks are automated, AI demands a focus on upskilling and reskilling the workforce. Employees need to acquire new skills and knowledge to adapt to changing job requirements and to collaborate effectively with AI systems. There is a great deal of research on this topic, and the figures differ across studies. Research from Bloomberg suggests that because of AI's impact on jobs, more than 120 million workers worldwide will need retraining within the next three years. More than 50 million workers in China are believed to need retraining as a result of AI-related deployments. The United States will need to retrain 11.5 million people to meet labor-market demands. Millions of workers in Brazil, Japan, and Germany will also need help coping with the changes brought by AI, robotics, and related technologies. According to a McKinsey study, as many as 375 million workers may need to switch occupational categories because of rapid automation adoption.

Fourth, redefined job roles: AI has the potential to reshape job roles and create new ones. Some tasks and jobs may be fully automated, leading to job losses in certain areas. But AI also creates opportunities for new roles involving managing and collaborating with AI systems, analyzing AI-generated content, and developing and maintaining AI technologies. One example is the U.S. government's attempt to bring manufacturing back to the United States. Many believe that, as after World War II, millions of well-paid blue-collar jobs will be created. That most likely will not happen, because the new factories built in the U.S. will employ very few human workers; nearly everything will be automated by robots or management systems.

Fifth, enhanced decision-making: AI systems can analyze huge volumes of data, detect patterns, and generate insights that support decision processes. This can give employees and managers more accurate and timely information, enabling better-informed decisions across functions such as operations, marketing, finance, and HR. In 2019, Harvard Business Review proposed a concept called AI-driven decision-making which, compared with data-driven decision-making, lets us overcome our inherent limitations as human processors, such as inefficiency and cognitive bias: you can assign machines to process vast amounts of data while we humans apply judgment, cultural values, and context to choose among decision options.

Sixth, AI-human collaboration: AI technologies make collaboration between people and intelligent systems possible. Such collaboration may draw on AI's strengths in data analysis, pattern recognition, and prediction, while humans contribute critical thinking, creativity, empathy, and complex problem-solving skills. Done well, human-AI collaboration can deliver better outcomes and innovation. Many companies have indeed used AI to automate processes, but so far the evidence suggests that deployments aimed at replacing employees deliver only short-term productivity gains. A foundational study of 1,500 companies found that firms achieved the most significant performance gains when humans and machines worked together.

Seventh, augmented intelligence: AI can augment human intelligence by complementing and amplifying human capabilities. It can assist people with tasks such as information retrieval, data analysis, and problem-solving. AI-powered virtual assistants and bots can give people instant support and guidance, improving their efficiency and effectiveness.

Eighth, ethical considerations: integrating AI into the workplace raises ethical questions around privacy, security, fairness, transparency, and accountability. Organizations need to establish ethical frameworks and guidelines to ensure that AI systems are developed and deployed responsibly and in a trustworthy manner.

Ninth, monitoring and evaluating AI implementations: this change involves continuously monitoring AI's impact in the workplace and gathering feedback from employees to identify areas for improvement. Regular evaluation and feedback loops help refine how AI is implemented and used, ensuring that its application to productivity, innovation, and other goals is effective and appropriate. (This last point was supplemented by AI; for reference only.)

So far we have discussed in detail the changes AI is creating in the workplace and how HR should respond to them. Now let me share a slide I used some time ago at an HR conference. In 2016 I gave a talk at a conference called "A New Model for HR." Let's look at that model. A typical org structure includes the CEO, HR business partners, shared services, and an operations function supporting managers and the employee population. Can a company use this model to cope with the changes AI brings to the workplace? Do we need a different model? Before answering that, let's look at what has to happen to address each type of change. On this slide I show my simple color-coding technique: I categorized all the types of capabilities and skills and highlighted each category in a different color. Now we can see a few major categories and some scattered items. Let's go a bit deeper into the color-coded challenges.

First, highlighted in blue: the assistance challenge, appearing in two of the workplace changes. HR can assess the skill and capability requirements for using AI and give employees the resources they need to understand and use AI technologies, and to see how AI can augment their work. This includes training on AI concepts, data analysis, automation tools, and AI-supported decision-making. HR can cultivate a culture of continuous learning.

Second, highlighted in green: change management and communication, appearing in four different workplace changes. HR can proactively communicate to employees the purpose and benefits of AI implementations for raising productivity and efficiency. HR can help managers and employees analyze jobs and redesign workflows to take advantage of AI technologies. This involves identifying tasks and activities that can be automated or augmented by AI, streamlining workflows, eliminating redundant or low-value tasks, and determining how humans and AI can work together to optimize productivity and efficiency.
Third, highlighted in hot pink: career development and internal mobility, appearing in three different workplace changes. HR can run skills assessments to identify the skills that already exist in the organization and the gaps that must be closed for AI-related roles. This includes identifying the technical skills needed to work with AI technologies, such as machine learning, as well as the soft skills essential for effective communication, critical thinking, and problem-solving.

Finally, highlighted in gray-blue: ethical guidance and governance, appearing in three different workplace changes. HR can collaborate with legal and compliance teams and other relevant stakeholders to establish a governance framework for the AI transformation. The functions still shown in black will see more automation and displacement in the coming years, with less investment, because those capabilities are less relevant to the AI transformation.

To keep up with, or even lead, the AI trend and its impact on the workplace, HR can take several proactive steps. Here are some key actions to consider: continuous learning — HR professionals can build a deep understanding of AI technologies, applications, and implications; identifying AI use cases in HR — HR can explore the many AI applications that can enhance its functions and streamline processes, such as automating routine administrative tasks, improving candidate screening and selection, and delivering personalized learning and development; assessing the organization's AI readiness — HR can evaluate the organization's current infrastructure, technical capabilities, and culture to determine how ready it is to adopt AI; communication and transparency — communication and transparency during AI implementation are essential for easing concerns about job security, clarifying the benefits of AI adoption, and ensuring employees understand that AI technologies will augment rather than replace their work; monitoring and evaluating AI implementations — HR can continuously monitor AI's impact on the workplace and gather feedback from employees to identify areas for improvement. Regular evaluation and feedback loops will help refine AI implementations.
    AI applications
    July 2, 2023