bitcoin: $88045.97 USD (-3.25%)
ethereum: $3108.09 USD (-2.64%)
tether: $1.00 USD (-0.10%)
solana: $213.66 USD (0.16%)
bnb: $634.65 USD (2.72%)
dogecoin: $0.388167 USD (-0.42%)
xrp: $0.782510 USD (13.51%)
usd-coin: $0.999954 USD (0.00%)
cardano: $0.561541 USD (-3.02%)
tron: $0.177981 USD (0.87%)
shiba-inu: $0.000025 USD (-1.68%)
toncoin: $5.37 USD (2.30%)
avalanche: $32.04 USD (-2.69%)
sui: $3.38 USD (3.48%)
pepe: $0.000022 USD (16.39%)

Cryptocurrency News

OpenAI Faces Privacy Complaint Over AI-Generated Data Errors

2024/04/29 16:04

The Austrian data protection organization Noyb has filed a privacy complaint against OpenAI, alleging that its ChatGPT chatbot provides false information and that the company refuses to correct or delete it. The complaint argues that OpenAI's conduct violates EU privacy rules and highlights concerns about the accuracy and transparency of AI-generated data. Noyb has urged the Austrian data protection authority to investigate OpenAI's data handling practices and ensure compliance with EU law.

OpenAI Faces Privacy Complaint Over Alleged Inaccurate and Untraceable AI-Generated Data

In a groundbreaking move, the data rights protection advocacy group Noyb has filed a complaint against OpenAI, the renowned artificial intelligence (AI) developer, alleging violations of privacy rules within the European Union (EU). The complaint stems from concerns over incorrect information provided by OpenAI's generative AI chatbot, ChatGPT, and the company's alleged refusal to address or provide transparency into its data handling practices.

According to Noyb, the complainant, an unnamed public figure, sought information about themselves from ChatGPT, only to receive repeated instances of inaccurate data. Upon requesting corrections or erasure of the erroneous information, OpenAI reportedly denied their request, claiming it was not feasible. Furthermore, OpenAI declined to disclose details about the training data used for ChatGPT and its sources.

Maartje de Graaf, a data protection lawyer at Noyb, expressed the group's concerns in a statement: "If a system cannot produce accurate and transparent results, it cannot be used to generate data about individuals. The technology has to follow the legal requirements, not the other way around."

The complaint underscores the growing scrutiny faced by AI-driven language models, particularly regarding their potential implications for data privacy and accuracy. Noyb has taken its case to the Austrian data protection authority, requesting an investigation into OpenAI's data processing practices and the measures it employs to ensure the accuracy of personal data processed by its large language models.

"It's clear that companies are currently unable to make chatbots like ChatGPT comply with EU law when processing data about individuals," de Graaf stated.

Noyb, also known as the European Center for Digital Rights, is based in Vienna, Austria, and has been instrumental in pursuing legal actions and media initiatives to uphold the EU's General Data Protection Regulation (GDPR) laws.

The complaint against OpenAI is not an isolated incident. In December 2023, a study conducted by two European nonprofit organizations exposed inaccuracies and misleading information provided by Microsoft's Bing AI chatbot, rebranded as Copilot, during political elections in Germany and Switzerland. The chatbot furnished incorrect details about candidates, polls, scandals, and voting procedures, while misrepresenting its sources.

Furthermore, Google faced criticism for its Gemini AI chatbot's "woke" and inaccurate image generation capabilities. The company apologized for the incident and announced plans to refine its model.

These incidents highlight the urgent need for greater transparency, accountability, and adherence to legal frameworks by companies developing and deploying AI-powered chatbots. The potential for misuse of personal data, dissemination of misinformation, and algorithmic bias calls for robust regulatory oversight and ethical considerations to safeguard individuals' privacy rights in the digital age.

Disclaimer: info@kdj.com

The information provided is not trading advice. kdj.com assumes no liability for any investments made based on the information in this article. Cryptocurrencies are highly volatile; research thoroughly before investing, and proceed with caution.

If you believe content used on this website infringes your copyright, please contact us immediately (info@kdj.com) and we will remove it promptly.
