
Cryptocurrency News Article

OpenAI Faces Privacy Complaint Over AI-Generated Data Errors

2024/04/29 16:04

The Austrian data protection organization Noyb has filed a privacy complaint against OpenAI, alleging that its ChatGPT chatbot provided false information and refused to correct or delete it. The complaint claims OpenAI's conduct violates EU privacy rules and highlights concerns about the accuracy and transparency of AI-generated data. Noyb has urged the Austrian data protection authority to investigate OpenAI's data processing practices and ensure compliance with EU law.

OpenAI Faces Privacy Complaint Over Allegedly Inaccurate and Untraceable AI-Generated Data

In a groundbreaking move, the data rights protection advocacy group Noyb has filed a complaint against OpenAI, the renowned artificial intelligence (AI) developer, alleging violations of privacy rules within the European Union (EU). The complaint stems from concerns over incorrect information provided by OpenAI's generative AI chatbot, ChatGPT, and the company's alleged refusal to address or provide transparency into its data handling practices.

According to Noyb, the complainant, an unnamed public figure, sought information about themselves from ChatGPT, only to receive repeated instances of inaccurate data. Upon requesting corrections or erasure of the erroneous information, OpenAI reportedly denied their request, claiming it was not feasible. Furthermore, OpenAI declined to disclose details about the training data used for ChatGPT and its sources.

Maartje de Graaf, a data protection lawyer at Noyb, expressed the group's concerns in a statement: "If a system cannot produce accurate and transparent results, it cannot be used to generate data about individuals. The technology has to follow the legal requirements, not the other way around."

The complaint underscores the growing scrutiny faced by AI-driven language models, particularly regarding their potential implications for data privacy and accuracy. Noyb has taken its case to the Austrian data protection authority, requesting an investigation into OpenAI's data processing practices and the measures it employs to ensure the accuracy of personal data processed by its large language models.

"It's clear that companies are currently unable to make chatbots like ChatGPT comply with EU law when processing data about individuals," de Graaf stated.

Noyb, also known as the European Center for Digital Rights, is based in Vienna, Austria, and has been instrumental in pursuing legal actions and media initiatives to uphold the EU's General Data Protection Regulation (GDPR) laws.

The complaint against OpenAI is not an isolated incident. In December 2023, a study conducted by two European nonprofit organizations exposed inaccuracies and misleading information provided by Microsoft's Bing AI chatbot, rebranded as Copilot, during political elections in Germany and Switzerland. The chatbot furnished incorrect details about candidates, polls, scandals, and voting procedures, while misrepresenting its sources.

Furthermore, Google faced criticism for its Gemini AI chatbot's "woke" and inaccurate image generation capabilities. The company apologized for the incident and announced plans to refine its model.

These incidents highlight the urgent need for greater transparency, accountability, and adherence to legal frameworks by companies developing and deploying AI-powered chatbots. The potential for misuse of personal data, dissemination of misinformation, and algorithmic bias calls for robust regulatory oversight and ethical considerations to safeguard individuals' privacy rights in the digital age.

Disclaimer: info@kdj.com

The information provided is not trading advice. kDJ.com assumes no liability for any investments made based on the information provided in this article. Cryptocurrencies are highly volatile, so please research thoroughly and invest with caution!
