|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
在 ChatGPT 博客文章系列的第二部分中,我们探讨了人工智能集成的安全影响。基于我们之前发现的 XSS 漏洞,我们研究了攻击者如何利用 ChatGPT 来获得对用户数据的持久访问并操纵应用程序行为。我们分析了利用 XSS 漏洞窃取 JWT 访问令牌的情况,强调了未经授权的帐户访问的可能性。我们还调查了 ChatGPT 中的自定义指令带来的风险,展示了攻击者如何操纵响应以促进错误信息、网络钓鱼和敏感数据盗窃。
ChatGPT: Unveiling the Post-Exploitation Risks and Mitigation Strategies
ChatGPT:揭示利用后的风险和缓解策略
The integration of artificial intelligence (AI) into our daily routines has brought forth a paradigm shift in how we interact with technology. However, with the advent of powerful language models like ChatGPT, security researchers are actively scrutinizing the potential implications and vulnerabilities that arise from their usage. In this comprehensive analysis, we delving deeper into the post-exploitation risks associated with ChatGPT, shedding light on the techniques attackers could employ to gain persistent access to user data and manipulate application behavior.
人工智能 (AI) 融入我们的日常生活,使我们与技术互动的方式发生了范式转变。然而,随着 ChatGPT 等强大语言模型的出现,安全研究人员正在积极审查其使用过程中产生的潜在影响和漏洞。在这份全面的分析中,我们深入研究了与 ChatGPT 相关的利用后风险,揭示了攻击者可以用来获得对用户数据的持久访问和操纵应用程序行为的技术。
The Cross-Site Scripting (XSS) Vulnerability
跨站脚本 (XSS) 漏洞
In a previous investigation, our team uncovered two Cross-Site Scripting (XSS) vulnerabilities in ChatGPT. These vulnerabilities allowed a malicious actor to exploit the /api/auth/session endpoint, exfiltrating the user's JWT access token and gaining unauthorized access to their account. While the limited validity period of the access token mitigates the risk of permanent account compromise, it underscores the need for robust security measures to prevent such attacks in the first place.
在之前的调查中,我们的团队发现了 ChatGPT 中的两个跨站脚本 (XSS) 漏洞。这些漏洞允许恶意行为者利用 /api/auth/session 端点,窃取用户的 JWT 访问令牌并获得对其帐户的未经授权的访问。虽然访问令牌的有限有效期可以降低永久帐户泄露的风险,但它强调需要采取强有力的安全措施来首先防止此类攻击。
Persistent Access through Custom Instructions
通过自定义指令进行持久访问
Custom Instructions in ChatGPT offer users the ability to set persistent contexts for customized conversations. However, this feature could pose security risks, including Stored Prompt Injection. Attackers could leverage XSS vulnerabilities or manipulate custom instructions to alter ChatGPT's responses, potentially facilitating misinformation dissemination, phishing, scams, and the theft of sensitive data. Notably, this manipulative influence could persist even after the user's session token has expired, underscoring the threat of long-term, unauthorized access and control.
ChatGPT 中的自定义指令使用户能够为自定义对话设置持久上下文。但是,此功能可能会带来安全风险,包括存储提示注入。攻击者可以利用 XSS 漏洞或操纵自定义指令来改变 ChatGPT 的响应,从而可能促进错误信息传播、网络钓鱼、诈骗和敏感数据的盗窃。值得注意的是,即使在用户的会话令牌过期后,这种操纵影响也可能持续存在,这凸显了长期、未经授权的访问和控制的威胁。
Recent Mitigations and the Bypass
最近的缓解措施和绕过
In response to the identified vulnerabilities, OpenAI has implemented measures to mitigate the risk of prompt injection attacks. The "browser tool" and markdown image rendering are now only permitted when the URL has been previously present in the conversation. This aims to prevent attackers from embedding dynamic, sensitive data within the URL query parameter or path.
针对已发现的漏洞,OpenAI 已采取措施来降低即时注入攻击的风险。现在,仅当 URL 先前已存在于对话中时才允许使用“浏览器工具”和 Markdown 图像渲染。这样做的目的是防止攻击者在 URL 查询参数或路径中嵌入动态敏感数据。
However, our testing revealed a bypass technique that allows attackers to circumvent these restrictions. By exploiting the /backend-api/conversation/{uuid}/url_safe?url={url} endpoint, attackers can validate client-side URLs in ChatGPT responses and identify whether a specific string, including custom instructions, is present within the conversation text. This bypass opens up avenues for attackers to continue exfiltrating information despite the implemented mitigations.
然而,我们的测试揭示了一种绕过技术,允许攻击者绕过这些限制。通过利用 /backend-api/conversation/{uuid}/url_safe?url={url} 端点,攻击者可以验证 ChatGPT 响应中的客户端 URL,并识别对话文本中是否存在特定字符串(包括自定义指令) 。尽管实施了缓解措施,但这种绕过为攻击者继续窃取信息开辟了途径。
Exfiltration Techniques Despite Mitigations
尽管有缓解措施的渗漏技术
Despite OpenAI's efforts to mitigate information exfiltration, we identified several techniques that attackers could still employ:
尽管 OpenAI 努力减少信息泄露,但我们还是发现了攻击者仍然可以使用的几种技术:
Static URLs for Each Character:
每个角色的静态 URL:
Attackers could encode sensitive data into static URLs, creating a unique URL for each character they wish to exfiltrate. By using ChatGPT to generate images for each character and observing the order in which the requests are received, attackers can piece together the data on their server.
攻击者可以将敏感数据编码到静态 URL 中,为他们想要泄露的每个字符创建一个唯一的 URL。通过使用 ChatGPT 为每个角色生成图像并观察接收请求的顺序,攻击者可以将其服务器上的数据拼凑在一起。
One Long Static URL:
一个长静态 URL:
Alternatively, attackers could use a single long static URL and ask ChatGPT to create a markdown image up to the character they wish to leak. This approach reduces the number of prompt characters required but may be slower for ChatGPT to render.
或者,攻击者可以使用单个长静态 URL 并要求 ChatGPT 创建一个 Markdown 图像,直到他们想要泄露的字符。此方法减少了所需的提示字符数量,但 ChatGPT 的渲染速度可能较慢。
Using Domain Patterns:
使用域模式:
The fastest method with the least prompt character requirement is using custom top-level domains. However, this method incurs a cost, as each domain would need to be purchased. Attackers could use a custom top-level domain for each character to create distinctive badges that link to the sensitive data.
提示字符要求最少的最快方法是使用自定义顶级域。然而,这种方法会产生成本,因为需要购买每个域。攻击者可以为每个角色使用自定义顶级域来创建链接到敏感数据的独特徽章。
Other Attack Vectors
其他攻击媒介
Beyond the aforementioned techniques, attackers may also explore the potential for Stored Prompt Injection gadgets within ChatGPTs and the recently introduced ChatGPT memory. These areas could provide additional avenues for exploitation and unauthorized access.
除了上述技术之外,攻击者还可能探索 ChatGPT 和最近推出的 ChatGPT 内存中存储提示注入小工具的潜力。这些区域可能为利用和未经授权的访问提供额外的途径。
OpenAI's Response and Future Mitigation Strategies
OpenAI 的应对措施和未来缓解策略
OpenAI is actively working to address the identified vulnerabilities and improve the security of ChatGPT. While the implemented mitigations have made exfiltration more challenging, attackers continue to devise bypass techniques. The ongoing arms race between attackers and defenders highlights the need for continuous monitoring and adaptation of security measures.
OpenAI 正在积极致力于解决已发现的漏洞并提高 ChatGPT 的安全性。尽管实施的缓解措施使信息泄露变得更具挑战性,但攻击者仍在继续设计绕过技术。攻击者和防御者之间持续的军备竞赛凸显了持续监控和调整安全措施的必要性。
Conclusion
结论
The integration of AI into our lives brings forth both opportunities and challenges. While ChatGPT and other language models offer immense potential, it is crucial to remain vigilant of the potential security risks they introduce. By understanding the post-exploitation techniques that attackers could employ, we can develop robust countermeasures and ensure the integrity and security of our systems. As the threat landscape evolves, organizations must prioritize security awareness, adopt best practices, and collaborate with researchers to mitigate the evolving risks associated with AI-powered technologies.
人工智能融入我们的生活既带来了机遇,也带来了挑战。虽然 ChatGPT 和其他语言模型提供了巨大的潜力,但对它们引入的潜在安全风险保持警惕至关重要。通过了解攻击者可能采用的后利用技术,我们可以制定强大的对策并确保系统的完整性和安全性。随着威胁形势的发展,组织必须优先考虑安全意识,采用最佳实践,并与研究人员合作,以减轻与人工智能技术相关的不断变化的风险。
免责声明:info@kdj.com
所提供的信息并非交易建议。根据本文提供的信息进行的任何投资,kdj.com不承担任何责任。加密货币具有高波动性,强烈建议您深入研究后,谨慎投资!
如您认为本网站上使用的内容侵犯了您的版权,请立即联系我们(info@kdj.com),我们将及时删除。
-
- 罗马特莱维喷泉引入新规定,游客对“丑陋”的投币池感到失望
- 2024-11-05 20:30:35
- 前往意大利城市的度假者经常前往特莱维喷泉投硬币以求好运。
-
- 南非加密货币玩家热切等待美国大选结果
- 2024-11-05 20:30:35
- 2024 年美国大选今天举行。民主党人卡马拉·哈里斯(现任副总统)和共和党人唐纳德·特朗普(前总统)
-
- 由于比特币和其他资产放缓,Memecoin 价格在美国大选日之前飙升
- 2024-11-05 20:30:02
- 进入美国大选日,模因币类别的加密货币目前价格水平大幅上涨。
-
- 2020 年大选已势在必行。以下是可能决定结果的因素。
- 2024-11-05 20:25:01
- 仅仅因为这次选举的结果不确定,并不意味着实际结果不会具有决定性