|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
在 ChatGPT 部落格文章系列的第二部分中,我們探討了人工智慧整合的安全影響。基於我們先前發現的 XSS 漏洞,我們研究了攻擊者如何利用 ChatGPT 來獲得對用戶資料的持久存取並操縱應用程式行為。我們分析了利用 XSS 漏洞竊取 JWT 存取權杖的情況,強調了未經授權的帳戶存取的可能性。我們還調查了 ChatGPT 中的自訂指令帶來的風險,顯示了攻擊者如何操縱回應以促進錯誤訊息、網路釣魚和敏感資料竊取。
ChatGPT: Unveiling the Post-Exploitation Risks and Mitigation Strategies
ChatGPT:揭示利用後的風險和緩解策略
The integration of artificial intelligence (AI) into our daily routines has brought forth a paradigm shift in how we interact with technology. However, with the advent of powerful language models like ChatGPT, security researchers are actively scrutinizing the potential implications and vulnerabilities that arise from their usage. In this comprehensive analysis, we delving deeper into the post-exploitation risks associated with ChatGPT, shedding light on the techniques attackers could employ to gain persistent access to user data and manipulate application behavior.
人工智慧 (AI) 融入我們的日常生活,使我們與科技互動的方式發生了典範轉移。然而,隨著 ChatGPT 等強大語言模型的出現,安全研究人員正在積極審查其使用過程中產生的潛在影響和漏洞。在這份全面的分析中,我們深入研究了與 ChatGPT 相關的利用後風險,揭示了攻擊者可以用來獲得對用戶資料的持久存取和操縱應用程式行為的技術。
The Cross-Site Scripting (XSS) Vulnerability
跨站腳本 (XSS) 漏洞
In a previous investigation, our team uncovered two Cross-Site Scripting (XSS) vulnerabilities in ChatGPT. These vulnerabilities allowed a malicious actor to exploit the /api/auth/session endpoint, exfiltrating the user's JWT access token and gaining unauthorized access to their account. While the limited validity period of the access token mitigates the risk of permanent account compromise, it underscores the need for robust security measures to prevent such attacks in the first place.
在先前的調查中,我們的團隊發現了 ChatGPT 中的兩個跨站腳本 (XSS) 漏洞。這些漏洞允許惡意行為者利用 /api/auth/session 端點,竊取使用者的 JWT 存取權令牌並獲得對其帳戶的未經授權的存取。雖然存取令牌的有限有效期可以降低永久帳戶洩露的風險,但它強調需要採取強有力的安全措施來首先防止此類攻擊。
Persistent Access through Custom Instructions
透過自訂指令進行持久訪問
Custom Instructions in ChatGPT offer users the ability to set persistent contexts for customized conversations. However, this feature could pose security risks, including Stored Prompt Injection. Attackers could leverage XSS vulnerabilities or manipulate custom instructions to alter ChatGPT's responses, potentially facilitating misinformation dissemination, phishing, scams, and the theft of sensitive data. Notably, this manipulative influence could persist even after the user's session token has expired, underscoring the threat of long-term, unauthorized access and control.
ChatGPT 中的自訂指令使用戶能夠為自訂對話設定持久上下文。但是,此功能可能會帶來安全風險,包括儲存提示注入。攻擊者可以利用 XSS 漏洞或操縱自訂指令來改變 ChatGPT 的回應,從而可能促進錯誤訊息傳播、網路釣魚、詐騙和敏感資料的盜竊。值得注意的是,即使在使用者的會話令牌過期後,這種操縱影響也可能持續存在,這凸顯了長期、未經授權的存取和控制的威脅。
Recent Mitigations and the Bypass
最近的緩解措施和繞過
In response to the identified vulnerabilities, OpenAI has implemented measures to mitigate the risk of prompt injection attacks. The "browser tool" and markdown image rendering are now only permitted when the URL has been previously present in the conversation. This aims to prevent attackers from embedding dynamic, sensitive data within the URL query parameter or path.
針對已發現的漏洞,OpenAI 已採取措施來降低即時注入攻擊的風險。現在,只有當 URL 先前已存在於對話中時才允許使用「瀏覽器工具」和 Markdown 圖像渲染。這樣做的目的是防止攻擊者在 URL 查詢參數或路徑中嵌入動態敏感資料。
However, our testing revealed a bypass technique that allows attackers to circumvent these restrictions. By exploiting the /backend-api/conversation/{uuid}/url_safe?url={url} endpoint, attackers can validate client-side URLs in ChatGPT responses and identify whether a specific string, including custom instructions, is present within the conversation text. This bypass opens up avenues for attackers to continue exfiltrating information despite the implemented mitigations.
然而,我們的測試揭示了一種繞過技術,允許攻擊者繞過這些限制。透過利用 /backend-api/conversation/{uuid}/url_safe?url={url} 端點,攻擊者可以驗證 ChatGPT 回應中的客戶端 URL,並識別對話文字中是否存在特定字串(包括自訂指令) 。儘管實施了緩解措施,但這種繞過為攻擊者繼續竊取資訊開闢了途徑。
Exfiltration Techniques Despite Mitigations
儘管有緩解措施的滲漏技術
Despite OpenAI's efforts to mitigate information exfiltration, we identified several techniques that attackers could still employ:
儘管 OpenAI 努力減少資訊洩露,但我們還是發現了攻擊者仍然可以使用的幾種技術:
Static URLs for Each Character:
每個角色的靜態 URL:
Attackers could encode sensitive data into static URLs, creating a unique URL for each character they wish to exfiltrate. By using ChatGPT to generate images for each character and observing the order in which the requests are received, attackers can piece together the data on their server.
攻擊者可以將敏感資料編碼到靜態 URL 中,為他們想要洩漏的每個字元建立一個唯一的 URL。透過使用 ChatGPT 為每個角色產生影像並觀察接收請求的順序,攻擊者可以將其伺服器上的資料拼湊在一起。
One Long Static URL:
一個長靜態 URL:
Alternatively, attackers could use a single long static URL and ask ChatGPT to create a markdown image up to the character they wish to leak. This approach reduces the number of prompt characters required but may be slower for ChatGPT to render.
或者,攻擊者可以使用單一長靜態 URL 並要求 ChatGPT 創建一個 Markdown 圖像,直到他們想要洩漏的字元。此方法減少了所需的提示字元數量,但 ChatGPT 的渲染速度可能較慢。
Using Domain Patterns:
使用域模式:
The fastest method with the least prompt character requirement is using custom top-level domains. However, this method incurs a cost, as each domain would need to be purchased. Attackers could use a custom top-level domain for each character to create distinctive badges that link to the sensitive data.
提示字元要求最少的最快方法是使用自訂頂級網域。然而,這種方法會產生成本,因為需要購買每個網域。攻擊者可以為每個角色使用自訂頂級網域來建立連結到敏感資料的獨特徽章。
Other Attack Vectors
其他攻擊媒介
Beyond the aforementioned techniques, attackers may also explore the potential for Stored Prompt Injection gadgets within ChatGPTs and the recently introduced ChatGPT memory. These areas could provide additional avenues for exploitation and unauthorized access.
除了上述技術之外,攻擊者還可能探索 ChatGPT 和最近推出的 ChatGPT 記憶體中儲存提示注入小工具的潛力。這些區域可能為利用和未經授權的存取提供額外的途徑。
OpenAI's Response and Future Mitigation Strategies
OpenAI 的應對措施和未來緩解策略
OpenAI is actively working to address the identified vulnerabilities and improve the security of ChatGPT. While the implemented mitigations have made exfiltration more challenging, attackers continue to devise bypass techniques. The ongoing arms race between attackers and defenders highlights the need for continuous monitoring and adaptation of security measures.
OpenAI 正在積極致力於解決已發現的漏洞並提高 ChatGPT 的安全性。儘管實施的緩解措施使資訊外洩變得更具挑戰性,但攻擊者仍在繼續設計繞過技術。攻擊者和防禦者之間持續的軍備競賽凸顯了持續監控和調整安全措施的必要性。
Conclusion
結論
The integration of AI into our lives brings forth both opportunities and challenges. While ChatGPT and other language models offer immense potential, it is crucial to remain vigilant of the potential security risks they introduce. By understanding the post-exploitation techniques that attackers could employ, we can develop robust countermeasures and ensure the integrity and security of our systems. As the threat landscape evolves, organizations must prioritize security awareness, adopt best practices, and collaborate with researchers to mitigate the evolving risks associated with AI-powered technologies.
人工智慧融入我們的生活既帶來了機遇,也帶來了挑戰。雖然 ChatGPT 和其他語言模型提供了巨大的潛力,但對它們引入的潛在安全風險保持警惕至關重要。透過了解攻擊者可能採用的後利用技術,我們可以建立強大的對策並確保系統的完整性和安全性。隨著威脅情勢的發展,組織必須優先考慮安全意識,採用最佳實踐,並與研究人員合作,以減輕與人工智慧技術相關的不斷變化的風險。
免責聲明:info@kdj.com
所提供的資訊並非交易建議。 kDJ.com對任何基於本文提供的資訊進行的投資不承擔任何責任。加密貨幣波動性較大,建議您充分研究後謹慎投資!
如果您認為本網站使用的內容侵犯了您的版權,請立即聯絡我們(info@kdj.com),我們將及時刪除。
-
- 週一美國現貨比特幣 ETF 出現顯著的淨現金流出
- 2024-11-05 21:30:24
- 當美國選民前往投票箱選出第 47 任總統時,以比特幣為首的加密貨幣市場 BTC $68,647 24 小時波動:0.8%
-
- Gartner 2024 年第三季業績超乎市場預期
- 2024-11-05 20:35:01
- Gartner, Inc.(NYSE:IT)報告了 2024 年第三季的可喜業績,展示了各項指標的穩健財務成長。
-
- 羅馬特萊維噴泉引入新規定,遊客對「醜陋」的投幣池感到失望
- 2024-11-05 20:30:35
- 前往義大利城市的度假者經常前往特萊維噴泉投硬幣以求好運。
-
- 南非加密貨幣玩家熱切等待美國大選結果
- 2024-11-05 20:30:35
- 2024 年美國大選今天舉行。民主黨人卡馬拉·哈里斯(現任副總統)和共和黨人唐納德·特朗普(前總統)