Cryptocurrency News

AI's Brave New World: Washington Sounds Urgent Alarms on Security and Privacy

2024/03/31 08:48

In the emerging field of artificial intelligence, fundamental concerns have arisen about the security, privacy, and integrity of generative AI models. The absence of comprehensive training-data validation, porous security measures, and indiscriminate data ingestion pose significant risks. Privacy concerns are compounded by the models' reliance on vast data sets, raising worries about the protection of dynamic conversational prompts, employer confidentiality, and the possibility of malicious content infiltrating training data.


AI's Brave New World: Sounding the Alarm on Security and Privacy


In the vibrant heart of Washington, D.C., a sobering conversation unfolded last week, a discussion that laid bare the profound implications of artificial intelligence (AI) on the pillars of security and privacy.


As the echoes of academic laboratories and venture capital chambers reverberate through the corridors of progress, the unbridled enthusiasm surrounding generative AI is reminiscent of the nascent days of the internet. However, this time, the speed with which we are hurtling towards AI's "Brave New World" is fueled by the relentless ambition of vendors, the sirens of minor-league venture capital, and the amplification of Twitter echo chambers.


Therein lies the genesis of our current predicament. The so-called "public" foundation models upon which generative AI rests are marred by blemishes that render them both unreliable and unsuitable for widespread consumer and commercial use. Privacy protections, when they exist at all, are riddled with holes, leaking sensitive data like a sieve. Security constructs are a work in progress, with the sprawling attack surface and the myriad threat vectors still largely unexplored. And as for the illusory guardrails, the less said, the better.


How did we arrive at this precarious juncture? How did security and privacy become casualties on the path to AI's brave new world?


Tainted Foundation Models: A Pandora's Box of Data


The very foundation of generative AI is built on shaky ground, as these so-called "open" models are anything but. Vendors tout varying degrees of openness, granting access to model weights, documentation, or test data. Yet none provide the critical training data sets, their manifests, or lineage, rendering it impossible to replicate or reproduce their models.


This lack of transparency means that consumers and organizations using these models have no way of verifying or validating the data they ingest, exposing themselves to potential copyright infringements, illegal content, and malicious code. Moreover, without a manifest of the training data sets, there is no way to ascertain whether nefarious actors have planted trojan horse content, leading to unpredictable and potentially devastating consequences when the models are deployed.

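A manifest of the kind the vendors withhold need not be elaborate: at minimum it is a list of content digests that lets anyone re-verify the corpus later. As a minimal sketch (the directory layout and function names here are illustrative, not any vendor's API), a training-data manifest could be generated and re-checked like this:

```python
import hashlib
from pathlib import Path

def build_manifest(data_dir: str) -> dict:
    """Record a SHA-256 digest for every file in a training-data directory."""
    manifest = {}
    root = Path(data_dir)
    for path in sorted(root.rglob("*")):
        if path.is_file():
            digest = hashlib.sha256(path.read_bytes()).hexdigest()
            manifest[str(path.relative_to(root))] = digest
    return manifest

def verify_manifest(data_dir: str, manifest: dict) -> list:
    """Return the files whose current contents no longer match the manifest."""
    tampered = []
    for rel_path, expected in manifest.items():
        actual = hashlib.sha256((Path(data_dir) / rel_path).read_bytes()).hexdigest()
        if actual != expected:
            tampered.append(rel_path)
    return tampered
```

A published manifest like this would not prove the data is benign, but it would at least make silent substitution of training files detectable after the fact.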

Once a model is compromised, there is no going back. The only recourse is to obliterate it, a costly and irreversible solution.


Porous Security: A Hacker's Paradise


Generative AI models are veritable security honeypots, with all data amalgamated into a single, vulnerable container. This creates an unprecedented array of attack vectors, leaving the industry grappling with the daunting task of safeguarding these models from cyber threats and preventing their exploitation as tools of malicious actors.


Attackers can poison the index, corrupt the weights, extract sensitive data, and even determine whether specific data was used in the training set. These are but a fraction of the security risks that lurk within the shadows of generative AI.

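The last of those attacks, determining whether specific data was in the training set, can be surprisingly simple in its basic form: models tend to assign lower loss to examples they memorized during training, so an attacker who can observe per-example loss can simply threshold it. A toy illustration of this loss-threshold membership inference (the stand-in "model" and the threshold value are invented for the sketch):

```python
def membership_inference(loss_fn, candidates, threshold):
    """Flag candidates whose loss falls below the threshold as likely
    members of the training set (the classic loss-threshold attack)."""
    return [x for x in candidates if loss_fn(x) < threshold]

# Stand-in "model": it has memorized its training set, so those
# examples receive near-zero loss while unseen text does not.
training_set = {"alice's ssn", "internal memo"}

def toy_loss(example: str) -> float:
    return 0.01 if example in training_set else 2.5

suspected = membership_inference(
    toy_loss, ["alice's ssn", "weather report"], threshold=1.0
)
```

Real attacks calibrate the threshold against reference models rather than picking it by hand, but the underlying signal, memorization showing up as anomalously low loss, is the same.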

State-sponsored cyber activities are a further source of concern, as malicious actors can embed trojan horses and other cyber threats within the vast data sets that AI models consume. This poses a serious threat to national security and the integrity of critical infrastructure.


Leaky Privacy: A Constant Flow of Data


The very strength of AI models, their ability to learn from vast data sets, is also their greatest vulnerability when it comes to privacy. The indiscriminate ingestion of data, often without regard for consent or confidentiality, creates unprecedented privacy risks for individuals and society as a whole.


In an era defined by AI, privacy has become a societal imperative, and regulations focused solely on individual data rights are woefully inadequate. Beyond static data, it is crucial to safeguard dynamic conversational prompts as intellectual property. These prompts, which guide the creative output of AI models, should not be used to train the model or shared with other users.


Similarly, employers have a vested interest in protecting the confidentiality of prompts and responses generated by employees using AI models. In the event of liability issues, a secure audit trail is essential to establish the provenance and intent behind these interactions.

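Such an audit trail need not be elaborate to be secure: chaining each prompt/response record to the hash of the record before it makes retroactive tampering detectable. A minimal sketch, with illustrative record fields:

```python
import hashlib
import json

GENESIS = "0" * 64  # sentinel hash anchoring the start of the chain

def append_record(log: list, prompt: str, response: str) -> None:
    """Append a record whose hash covers the previous record's hash,
    forming a tamper-evident chain."""
    prev_hash = log[-1]["hash"] if log else GENESIS
    record = {"prompt": prompt, "response": response, "prev": prev_hash}
    record["hash"] = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()
    ).hexdigest()
    log.append(record)

def verify_chain(log: list) -> bool:
    """Recompute every hash; editing any earlier record breaks the chain."""
    prev_hash = GENESIS
    for record in log:
        if record["prev"] != prev_hash:
            return False
        body = {k: v for k, v in record.items() if k != "hash"}
        digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        if digest != record["hash"]:
            return False
        prev_hash = record["hash"]
    return True
```

A production system would add timestamps, signatures, and append-only storage, but even this skeleton is enough to establish that a disputed interaction was not quietly rewritten after the fact.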

A Call to Action: Regulators and Policymakers Must Step In


The technology we are grappling with is unlike anything we have encountered before in the history of computing. AI exhibits emergent, latent behavior at scale, rendering traditional approaches to security, privacy, and confidentiality obsolete.


Industry leaders have acted with reckless abandon, leaving regulators and policymakers with no choice but to intervene. It is imperative that governments establish clear guidelines and regulations to govern the development and deployment of generative AI, with a particular focus on addressing the pressing concerns of security and privacy.


Conclusion


The Brave New World of AI holds immense promise, but it is imperative that we proceed with caution, ensuring that our pursuit of progress does not come at the expense of our security and privacy. The time for complacency has passed. It is time for regulators, policymakers, and the technology industry to work together to establish a robust framework that safeguards these fundamental rights in the age of AI.


Disclaimer: info@kdj.com

The information provided is not trading advice. kdj.com assumes no liability for any investments made based on the information provided in this article. Cryptocurrencies are highly volatile; research thoroughly and invest with caution!

If you believe content used on this site infringes your copyright, please contact us immediately (info@kdj.com) and we will remove it promptly.
