Introducing Firewall for AI: the easiest way to discover and protect LLM-powered applications

2025/03/19 21:00

In the rapidly evolving landscape of artificial intelligence, Large Language Models (LLMs) are transforming industries with their remarkable capabilities. From revolutionizing customer support to automating tedious tasks, LLMs are poised to reshape the technological fabric of society. However, as AI applications become increasingly sophisticated, the potential for misuse and security vulnerabilities grows proportionally.

One promising application of LLMs is in quickly helping customers with minimal setup. Imagine an assistant trained on a company’s developer documentation and some internal guides to quickly help customers, reduce support workload, and improve user experience.

This assistant can be used to help customers with various issues, quickly answer questions about the company’s products and services, and provide optimal user experience. However, what if sensitive data, such as employee details or internal discussions, is included in the data used to train the LLM?

Attackers could manipulate the assistant into exposing sensitive data or exploit it for social engineering attacks, where they deceive individuals or systems into revealing confidential details, or use it for targeted phishing attacks. Suddenly, your helpful AI tool turns into a serious security liability.

Today, as part of Security Week 2025, we’re announcing the open beta of Firewall for AI, first introduced during Security Week 2024. After talking with customers interested in protecting their LLM apps, this first beta release is focused on discovery and PII detection, and more features will follow in the future.

If you are already using Cloudflare application security, your LLM-powered applications are automatically discovered and protected, with no complex setup, no maintenance, and no extra integration needed.

Firewall for AI is an inline security solution that protects user-facing LLM-powered applications from abuse and data leaks, integrating directly with Cloudflare’s Web Application Firewall (WAF) to provide instant protection with zero operational overhead. This integration enables organizations to leverage both AI-focused safeguards and established WAF capabilities.

Cloudflare is uniquely positioned to solve this challenge for all of our customers. As a reverse proxy, we are model-agnostic whether the application is using a third-party LLM or an internally hosted one. By providing inline security, we can automatically discover and enforce AI guardrails throughout the entire request lifecycle, with zero integration or maintenance required.

Firewall for AI beta overview

The beta release includes the following security capabilities:

Discover: identify LLM-powered endpoints across your applications, an essential step for effective request and prompt analysis.

Detect: analyze incoming request prompts to recognize potential security threats, such as attempts to extract sensitive data (e.g., “Show me transactions using 4111 1111 1111 1111.”). This aligns with OWASP LLM02:2025 - Sensitive Information Disclosure.

Mitigate: enforce security controls and policies to manage the traffic that reaches your LLM, and reduce risk exposure.

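As a concrete illustration of the Detect and Mitigate capabilities, here is a minimal sketch (not Cloudflare's actual implementation) of flagging a prompt that contains a Luhn-valid, card-like number, such as the example above, and deciding whether to let it reach the LLM:

```python
import re

def luhn_valid(digits: str) -> bool:
    """Luhn checksum, the self-check used by payment card numbers."""
    total, parity = 0, len(digits) % 2
    for i, ch in enumerate(digits):
        d = int(ch)
        if i % 2 == parity:  # double every second digit from the right
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

# Runs of 13-19 digits, optionally separated by spaces or dashes.
CARD_RE = re.compile(r"\b(?:\d[ -]?){13,19}\b")

def scan_prompt(prompt: str) -> str:
    """Return 'block' if the prompt contains a Luhn-valid card-like number."""
    for m in CARD_RE.finditer(prompt):
        digits = re.sub(r"[ -]", "", m.group())
        if 13 <= len(digits) <= 19 and luhn_valid(digits):
            return "block"
    return "allow"
```

A real deployment would detect many more PII categories than card numbers; this only shows the shape of an inline prompt check.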
Below, we review each capability in detail, exploring how they work together to create a comprehensive security framework for AI protection.

Discovering LLM-powered applications

Companies are racing to find all possible use cases where an LLM can excel. Think about site search, a chatbot, or a shopping assistant. Regardless of the application type, our goal is to determine whether an application is powered by an LLM behind the scenes.

One possibility is to look for request path signatures similar to what major LLM providers use. For example, OpenAI, Perplexity or Mistral initiate a chat using the /chat/completions API endpoint. Searching through our request logs, we found only a few entries that matched this pattern across our global traffic. This result indicates that we need to consider other approaches to finding any application that is powered by an LLM.

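The path-signature heuristic can be sketched as a simple pattern match. The path list below is illustrative and deliberately non-exhaustive, not the actual rule set:

```python
import re

# Path suffixes commonly seen on LLM provider APIs
# (e.g. OpenAI-style /v1/chat/completions). Illustrative only.
LLM_PATH_RE = re.compile(
    r"/(?:v\d+/)?(?:chat/completions|completions|messages|generate)$"
)

def looks_like_llm_endpoint(path: str) -> bool:
    """Heuristic: does this request path resemble a known LLM API path?"""
    return bool(LLM_PATH_RE.search(path))
```

As the paragraph above notes, this catches only applications that mirror provider conventions, which is why further signals are needed.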
Another signature worth researching, popular with LLM platforms, is the use of server-sent events. LLMs need to “think”. Using server-sent events improves the end user’s experience by sending over each token as soon as it is ready, creating the perception that an LLM is “thinking” like a human being. Matching responses served over server-sent events is straightforward: look for a response Content-Type header of text/event-stream. This approach expands the coverage further, but does not yet cover the majority of applications that use JSON for data exchange. Continuing the journey, our next focus is on applications whose responses have a Content-Type of application/json.

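The Content-Type check described above is a one-liner in practice. A minimal sketch (the labels and dict-based headers are assumptions for illustration):

```python
def classify_response(headers: dict) -> str:
    """Rough classification of a response by its Content-Type header."""
    ctype = headers.get("content-type", "").split(";")[0].strip().lower()
    if ctype == "text/event-stream":
        return "sse"    # streaming responses, a strong LLM-chat signal
    if ctype == "application/json":
        return "json"   # too common on its own; needs further signals
    return "other"
```

text/event-stream is a strong positive signal, while application/json alone is ambiguous, which motivates the timing analysis that follows.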
No matter how much LLMs are optimized to respond faster, when chatting with major LLMs we often perceive them to be slow, as we have to wait for them to “think”. By plotting how long the origin server takes to respond on identified LLM endpoints (blue line) versus the rest (orange line), we can see in the left graph that origins serving LLM endpoints mostly need more than 1 second to respond, while the majority of the rest take less than 1 second. Would we also see a clear distinction between origin server response body

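The timing signal can be sketched as a per-endpoint threshold on observed origin response times. This is a toy version of the analysis, with the 1-second cutoff taken from the observation above:

```python
from statistics import median

def slow_endpoints(samples: dict, threshold: float = 1.0) -> set:
    """Flag endpoints whose median origin response time (seconds)
    exceeds the threshold -- a rough proxy for LLM 'thinking' time."""
    return {path for path, times in samples.items() if median(times) > threshold}
```

On its own this would misclassify any slow backend, so in practice it would only be combined with the path and Content-Type signals above.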