U.S. Department of Labor Releases Principles for Workplace AI Use to Protect Worker Rights (Original Text Appended)

May 16, 2024


Today, May 16, the U.S. Department of Labor released a set of principles for the use of artificial intelligence (AI) in the workplace, intended to guide employers in ensuring that AI technologies are developed and used with workers at the center, improving job quality and well-being for all workers. Acting Secretary of Labor Julie Su said in a statement: "Workers must be at the heart of our nation's approach to the development and use of AI technology. These principles reflect the Biden-Harris administration's belief that AI must not only comply with existing law, but also improve the quality of jobs and lives for all workers."

According to the Department of Labor, the AI principles are as follows:

  1. Centering Worker Empowerment: Workers and their representatives, especially those from underserved communities, should be informed of and have genuine input in the design, development, testing, training, use, and oversight of AI systems. This ensures that workers' needs and feedback are taken into account throughout the AI lifecycle.

  2. Ethically Developing AI: AI systems should be designed, developed, and trained in a way that protects workers. This means prioritizing workers' safety, health, and well-being during development and preventing the technology from harming workers.

  3. Establishing AI Governance and Human Oversight: Organizations should have clear governance systems, procedures, human oversight, and evaluation processes to ensure that workplace AI systems are used ethically, with appropriate oversight mechanisms to guard against misuse.

  4. Ensuring Transparency in AI Use: Employers should be transparent with workers and job seekers about the AI systems they use. This includes explaining the systems' functions, purpose, and specific applications on the job, building workers' trust.

  5. Protecting Labor and Employment Rights: AI systems should not violate or undermine workers' right to organize, health and safety rights, wage and hour rights, or anti-discrimination and anti-retaliation protections. This ensures that workers' fundamental labor rights are not infringed as AI is deployed.

  6. Using AI to Enable Workers: AI systems should assist, complement, and support workers, and improve job quality. AI should be used to make workers more effective and comfortable, not to replace them or add to their workload.

  7. Supporting Workers Impacted by AI: Employers should support or upskill workers during AI-related job transitions, including by providing training and career development opportunities to help workers adapt to new work environments and technical requirements.

  8. Ensuring Responsible Use of Worker Data: Worker data collected, used, or created by AI systems should be limited to legitimate business purposes and be protected and handled responsibly. This safeguards the privacy and security of worker data and prevents its misuse.


These principles were developed pursuant to President Biden's Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence, and are intended to give developers and employers a roadmap for ensuring that workers benefit from the new opportunities AI creates while being protected from its potential harms.

The Biden administration emphasizes that the principles are not limited to particular industries but are meant to apply broadly across sectors. They are not an exhaustive list but a guiding framework for businesses to customize to their own circumstances and implement, as best practices, with input from workers. In this way, the administration hopes to protect workers' rights and avoid the technology's potential downsides while ensuring that AI continues to drive innovation and opportunity.

Now that these principles have been published, how do you expect them to affect your company's use of AI and its protection of worker rights?



The original English text follows:

Department of Labor's Artificial Intelligence and Worker Well-being: Principles for Developers and Employers

Since taking office, President Biden, Vice President Harris, and the entire Biden-Harris Administration have moved with urgency to harness AI's potential to spur innovation, advance opportunity, and transform the nature of many jobs and industries, while also protecting workers from the risk that they might not share in these gains. As part of this commitment, the AI Executive Order directed the Department of Labor to create Principles for Developers and Employers when using AI in the workplace. These Principles will create a roadmap for developers and employers on how to harness AI technologies for their businesses while ensuring workers benefit from new opportunities created by AI and are protected from its potential harms.

The precise scope and nature of how AI will change the workplace remains uncertain. AI can positively augment work by replacing and automating repetitive tasks or assisting with routine decisions, which may reduce the burden on workers and allow them to better perform other responsibilities. Consequently, the introduction of AI-augmented work will create demand for workers to gain new skills and training to learn how to use AI in their day-to-day work. AI will also continue creating new jobs, including those focused on the development, deployment, and human oversight of AI. But AI-augmented work also poses risks if workers no longer have autonomy and direction over their work or their job quality declines. The risks of AI for workers are greater if it undermines workers' rights, embeds bias and discrimination in decision-making processes, or makes consequential workplace decisions without transparency, human oversight and review. There are also risks that workers will be displaced entirely from their jobs by AI.

In recent years, unions and employers have come together to collectively bargain new agreements setting sensible, worker-protective guardrails around the use of AI and automated systems in the workplace. In order to provide AI developers and employers across the country with a shared set of guidelines, the Department of Labor developed "Artificial Intelligence and Worker Well-being: Principles for Developers and Employers" as directed by President Biden's Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence, with input from workers, unions, researchers, academics, employers, and developers, among others, and through public listening sessions.

APPLYING THE PRINCIPLES

The following Principles apply to the development and deployment of AI systems in the workplace, and should be considered during the whole lifecycle of AI – from design to development, testing, training, deployment and use, oversight, and auditing. The Principles are applicable to all sectors and intended to be mutually reinforcing, though not all Principles will apply to the same extent in every industry or workplace. The Principles are not intended to be an exhaustive list but instead a guiding framework for businesses. AI developers and employers should review and customize the best practices based on their own context and with input from workers.

The Department's AI Principles for Developers and Employers include:

  • [North Star] Centering Worker Empowerment: Workers and their representatives, especially those from underserved communities, should be informed of and have genuine input in the design, development, testing, training, use, and oversight of AI systems for use in the workplace.

  • Ethically Developing AI: AI systems should be designed, developed, and trained in a way that protects workers.

  • Establishing AI Governance and Human Oversight: Organizations should have clear governance systems, procedures, human oversight, and evaluation processes for AI systems for use in the workplace.

  • Ensuring Transparency in AI Use: Employers should be transparent with workers and job seekers about the AI systems that are being used in the workplace.

  • Protecting Labor and Employment Rights: AI systems should not violate or undermine workers' right to organize, health and safety rights, wage and hour rights, and anti-discrimination and anti-retaliation protections.

  • Using AI to Enable Workers: AI systems should assist, complement, and enable workers, and improve job quality.

  • Supporting Workers Impacted by AI: Employers should support or upskill workers during job transitions related to AI.

  • Ensuring Responsible Use of Worker Data: Workers' data collected, used, or created by AI systems should be limited in scope and location, used only to support legitimate business aims, and protected and handled responsibly.