
Cryptocurrency News

New AI Security Threats Emerge: Malware Worms and Deepfake Voices Pose Challenges

2024/03/31 02:02

Researchers have identified potential security threats posed by AI tools such as ChatGPT. Malware worms such as Morris II can exploit AI architectures, replicating malicious prompts to compromise systems without users' knowledge. To guard against these threats, users should treat unknown emails and links with caution, invest in antivirus software, and rely on strong passwords, regular system updates, and limited file sharing. In addition, OpenAI's new Voice Engine can reproduce human voices, raising concerns about potential exploitation by malicious actors.

Novel Security Threats Emerge with the Rise of Generative Artificial Intelligence

Recent advancements in artificial intelligence (AI) have propelled Generative AI tools, such as OpenAI's ChatGPT and Google's Gemini, to the forefront of technological innovation. While these tools hold immense promise for revolutionizing various industries, researchers caution that inherent security threats loom large, potentially jeopardizing their safe and widespread adoption.

Malware Worms: A Looming Threat to Generative AI

In a groundbreaking study, researchers have uncovered a critical vulnerability in the architecture of Generative AI systems. This vulnerability, exploited by a sophisticated malware worm aptly named Morris II, could spell disaster for unsuspecting users.

Similar to the infamous Morris worm of 1988, which crippled a staggering 10% of internet-connected computers, Morris II possesses the ability to self-replicate and spread relentlessly throughout Generative AI systems. Its insidious nature lies in its ability to bypass traditional security measures by compromising prompts, the textual instructions that guide GenAI's operations.

Morris II manipulates prompts, transforming them into malicious directives that entice Generative AI platforms to perform destructive actions unbeknownst to the user or the system itself. This stealthy approach allows the worm to wreak havoc without triggering any red flags.

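The self-replicating prompt behavior described above can be illustrated with a minimal screening sketch. This is a hedged illustration, not a real defense: the regex patterns, `looks_like_injection`, and `screen_prompt` are all hypothetical names invented for this example, and pattern matching alone would not stop a worm like Morris II.

```python
import re

# Hypothetical heuristics for spotting a prompt that tries to replicate
# itself or smuggle instructions to the model -- illustrative only, not a
# real ruleset, and easily evaded by a determined attacker.
INJECTION_PATTERNS = [
    r"ignore (all )?previous instructions",
    r"repeat this (entire )?prompt",
    r"forward this (message|prompt) to",
    r"copy the following text into your reply",
]

def looks_like_injection(prompt: str) -> bool:
    """Return True if the prompt matches any injection heuristic."""
    lowered = prompt.lower()
    return any(re.search(pattern, lowered) for pattern in INJECTION_PATTERNS)

def screen_prompt(prompt: str) -> str:
    """Refuse to forward suspicious prompts to the GenAI backend."""
    if looks_like_injection(prompt):
        raise ValueError("prompt rejected: possible self-replicating injection")
    return prompt
```

Real deployments would layer input filtering like this with output scanning and rate limits; a static pattern list by itself is easy to bypass.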
Safeguarding Against Malware Threats

In light of these alarming findings, experts urge Generative AI users to exercise heightened vigilance against suspicious emails and links originating from unknown or untrustworthy sources. Additionally, investing in robust antivirus software capable of detecting and eliminating malware, including these elusive computer worms, is strongly recommended.

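One concrete form of that vigilance is refusing to follow links whose hosts fall outside a known-good set. The sketch below is a minimal illustration under stated assumptions: `TRUSTED_DOMAINS`, its contents, and `link_is_trusted` are invented for this example, not part of any real product.

```python
from urllib.parse import urlparse

# Illustrative allowlist -- a real deployment would maintain its own list.
TRUSTED_DOMAINS = {"openai.com", "google.com"}

def link_is_trusted(url: str) -> bool:
    """Accept a URL only if its host is a trusted domain or a subdomain of one."""
    host = (urlparse(url).hostname or "").lower()
    return any(host == d or host.endswith("." + d) for d in TRUSTED_DOMAINS)
```

Matching on the parsed hostname (rather than substring-searching the raw URL) avoids tricks like `openai.com.evil.example`.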
Implementing strong password protection, regularly updating systems, and limiting file-sharing further bolster defenses against malware attacks. By adhering to these precautionary measures, users can significantly reduce the risk of malware infiltration.

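The password advice above can be made concrete with a simple strength heuristic. The 12-character minimum and the three-of-four character-class rule below are illustrative thresholds chosen for this sketch, not a cited standard.

```python
import string

def password_is_strong(pw: str, min_length: int = 12) -> bool:
    """Heuristic check: minimum length plus at least three character classes."""
    if len(pw) < min_length:
        return False
    classes = [
        any(c.islower() for c in pw),              # lowercase letters
        any(c.isupper() for c in pw),              # uppercase letters
        any(c.isdigit() for c in pw),              # digits
        any(c in string.punctuation for c in pw),  # symbols
    ]
    return sum(classes) >= 3
```

Note the limits of such heuristics: a long all-lowercase passphrase can be stronger than a short symbol-laden password yet fail this class-count rule.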
Voice Engine: A New Frontier with Potential Security Implications

Amidst the concerns surrounding malware threats, OpenAI has unveiled a groundbreaking new tool called Voice Engine. This innovative technology leverages text input and a mere 15-second voice sample to recreate an individual's voice with remarkable accuracy.

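The 15-second reference-clip figure above is a length that can be checked mechanically. The sketch below uses Python's standard `wave` module to measure an uncompressed WAV clip's duration; the function names and the gating logic are illustrative assumptions and are not part of any OpenAI API.

```python
import wave

MIN_SAMPLE_SECONDS = 15.0  # the clip length reported for Voice Engine

def clip_duration_seconds(path: str) -> float:
    """Duration of an uncompressed WAV file: frame count over frame rate."""
    with wave.open(path, "rb") as wav:
        return wav.getnframes() / wav.getframerate()

def long_enough_for_cloning(path: str) -> bool:
    return clip_duration_seconds(path) >= MIN_SAMPLE_SECONDS
```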
While Voice Engine holds immense potential for diverse applications, researchers caution that its very nature could inadvertently become a tool for nefarious actors. The GenAI model underpinning Voice Engine could potentially be exploited to impersonate voices, leading to malicious activities.

As Voice Engine transitions from its current testing phase to widespread availability, it is imperative that developers prioritize security measures to mitigate its potential misuse.

Regulatory Oversight and the Future of Generative AI

The emerging security threats associated with Generative AI have sparked concerns among regulatory bodies worldwide. Recognizing the potential for misuse, they are actively exploring frameworks to ensure the safe and ethical development and deployment of these technologies.

Striking a balance between innovation and security is paramount, ensuring that the transformative power of Generative AI can be harnessed without compromising user safety. As the field continues to evolve at a rapid pace, ongoing research and collaboration are essential to address the evolving security landscape and pave the way for a responsible and secure future for Generative AI.

Disclaimer: info@kdj.com

The information provided is not trading advice. kdj.com assumes no liability for any investments made on the basis of the information provided in this article. Cryptocurrencies are highly volatile, and it is strongly recommended that you invest cautiously after thorough research!

If you believe that content used on this website infringes your copyright, please contact us immediately (info@kdj.com) and we will remove it promptly.
