Market cap: $2.6585T -1.350%
24h volume: $43.2279B -59.180%
Coin           Price               Change
bitcoin        $83866.330841 USD    1.10%
ethereum       $1813.856658 USD     1.17%
tether         $0.999635 USD       -0.01%
xrp            $2.119598 USD        3.11%
bnb            $597.151856 USD      0.66%
solana         $121.000827 USD      4.92%
usd-coin       $0.999962 USD       -0.01%
dogecoin       $0.169845 USD        5.02%
cardano        $0.659954 USD        1.59%
tron           $0.238468 USD       -0.33%
unus-sed-leo   $9.192940 USD       -3.85%
chainlink      $12.887613 USD       1.16%
toncoin        $3.312822 USD       -6.18%
stellar        $0.259431 USD       -0.16%
avalanche      $18.154746 USD       0.32%

Cryptocurrency News

OpenAI Exodus: Departing Researchers Sound Alarm on Safety Concerns

2024/05/18 19:03

Following the departure of chief scientist Ilya Sutskever, Jan Leike, co-lead of OpenAI's Superalignment team, resigned over concerns that the company was prioritizing product development over AI safety. OpenAI subsequently disbanded the Superalignment team, folding its functions into other research projects as part of an ongoing internal restructuring.

A seismic shift has occurred within the hallowed halls of OpenAI, the pioneering artificial intelligence (AI) research laboratory. In a chorus of resignations, key figures tasked with safeguarding against the existential dangers posed by advanced AI have bid farewell, leaving an ominous void in the organization's ethical foundation.

Following the departure of Ilya Sutskever, OpenAI's esteemed chief scientist and co-founder, the company has been rocked by the resignation of Jan Leike, another prominent researcher who co-led the "superalignment" team. Leike's departure stems from deep-rooted concerns about OpenAI's priorities, which he believes have shifted away from AI safety and towards a relentless pursuit of product development.

In a series of thought-provoking public posts, Leike lambasted OpenAI's leadership for prioritizing short-term deliverables over the urgent need to establish a robust safety culture and mitigate the potential risks associated with the development of artificial general intelligence (AGI). AGI, a hypothetical form of AI, holds the promise of surpassing human capabilities across a broad spectrum of tasks, but also raises profound ethical and existential questions.

Leike's critique centers on the glaring absence of adequate resources allocated to his team's safety research, particularly computing power. He maintains that OpenAI's management has consistently overlooked the critical importance of investing in safety initiatives, despite the looming threat posed by AGI.

"I have been disagreeing with OpenAI leadership about the company's core priorities for quite some time until we finally reached a breaking point. Over the past few months, my team has been sailing against the wind," Leike lamented.

In an effort to address mounting concerns surrounding AI safety, OpenAI established a dedicated research team in July 2023, tasked with preparing for the advent of advanced AI systems that could potentially outmaneuver and even overpower their creators. Sutskever was appointed co-lead of the newly formed team, which was granted 20% of OpenAI's computational resources.

However, the recent departures of Sutskever and Leike have cast a long shadow over the future of OpenAI's AI safety research program. In a move that has sent shockwaves throughout the AI community, OpenAI has reportedly disbanded the "superalignment" team, effectively integrating its functions into other research projects within the organization. This decision is widely seen as a consequence of the ongoing internal restructuring, which was initiated in response to a governance crisis that shook OpenAI to its core in November 2023.

Sutskever, who played a pivotal role in the board effort that briefly ousted Sam Altman as CEO before Altman was reinstated amid employee backlash, has consistently emphasized the paramount importance of ensuring that OpenAI's AGI development aligns with the interests of humanity. As a member of OpenAI's six-member board, he has repeatedly stressed the need to align the organization's goals with the greater good.

Leike's resignation serves as a stark reminder of the profound challenges facing OpenAI and the broader AI community as they grapple with the immense power and ethical implications of AGI. His departure signals a growing concern that OpenAI's priorities have become misaligned, potentially jeopardizing the safety and well-being of humanity in the face of rapidly advancing AI technologies.

The exodus of key researchers from OpenAI's AI safety team should serve as a wake-up call to all stakeholders, including policymakers, industry leaders, and the general public. It is imperative that we heed the warnings of these experts and prioritize the implementation of robust safety measures as we venture into the uncharted territory of AGI. The future of humanity may depend on it.
