OpenAI Exodus: Departing Researchers Sound Alarm on Safety Concerns

2024/05/18 19:03

Following the departure of chief scientist Ilya Sutskever, Jan Leike, co-lead of OpenAI's Superalignment team, resigned over concerns that the company was prioritizing product development over AI safety. OpenAI subsequently disbanded the Superalignment team, folding its functions into other research projects amid an ongoing internal restructuring.

A seismic shift has occurred within the hallowed halls of OpenAI, the pioneering artificial intelligence (AI) research laboratory. In a chorus of resignations, key figures tasked with guarding against the existential dangers posed by advanced AI have bid farewell, leaving an ominous void in the organization's ethical foundation.

Following the departure of Ilya Sutskever, OpenAI's esteemed chief scientist and co-founder, the company has been rocked by the resignation of Jan Leike, another prominent researcher who co-led the "superalignment" team. Leike's departure stems from deep-rooted concerns about OpenAI's priorities, which he believes have shifted away from AI safety and towards a relentless pursuit of product development.

In a series of pointed public posts, Leike lambasted OpenAI's leadership for prioritizing short-term deliverables over the urgent need to establish a robust safety culture and mitigate the potential risks of developing artificial general intelligence (AGI). AGI, a still-hypothetical class of AI, holds the promise of surpassing human capabilities across a broad spectrum of tasks, but it also raises profound ethical and existential questions.

Leike's critique centers on the glaring shortage of resources allocated to his team's safety research, particularly computing power. He maintains that OpenAI's management has consistently overlooked the critical importance of investing in safety initiatives, despite the looming threat posed by AGI.

"I have been disagreeing with OpenAI leadership about the company's core priorities for quite some time until we finally reached a breaking point. Over the past few months, my team has been sailing against the wind," Leike lamented.

To address mounting concerns surrounding AI safety, OpenAI established a dedicated research team in July 2023, tasking it with preparing for the advent of advanced AI systems that could potentially outmaneuver and even overpower their creators. Sutskever was appointed co-lead of this newly formed team, which was promised 20% of OpenAI's computational resources.

However, the recent departures of Sutskever and Leike have cast a long shadow over the future of OpenAI's AI safety research program. In a move that has sent shockwaves throughout the AI community, OpenAI has reportedly disbanded the "superalignment" team, effectively integrating its functions into other research projects within the organization. This decision is widely seen as a consequence of the ongoing internal restructuring, which was initiated in response to a governance crisis that shook OpenAI to its core in November 2023.

Sutskever, who played a pivotal role in the board effort that briefly ousted Sam Altman as CEO before Altman was reinstated amid employee backlash, has consistently emphasized the paramount importance of ensuring that OpenAI's AGI work aligns with the interests of humanity. As one of the six members of OpenAI's board at the time, he repeatedly stressed the need to align the organization's goals with the greater good.

Leike's resignation serves as a stark reminder of the profound challenges facing OpenAI and the broader AI community as they grapple with the immense power and ethical implications of AGI. His departure signals a growing concern that OpenAI's priorities have become misaligned, potentially jeopardizing the safety and well-being of humanity in the face of rapidly advancing AI technologies.

The exodus of key researchers from OpenAI's AI safety team should serve as a wake-up call to all stakeholders, including policymakers, industry leaders, and the general public. It is imperative that we heed the warnings of these experts and prioritize the implementation of robust safety measures as we venture into the uncharted territory of AGI. The future of humanity may depend on it.
