In the rapidly evolving landscape of artificial intelligence, Large Language Models (LLMs) are transforming industries with their remarkable capabilities. From revolutionizing customer support to automating tedious tasks, LLMs are poised to reshape the technological fabric of society. However, as AI applications become increasingly sophisticated, the potential for misuse and security vulnerabilities grows proportionally.
One promising application of LLMs is in quickly helping customers with minimal setup. Imagine an assistant trained on a company’s developer documentation and some internal guides to quickly help customers, reduce support workload, and improve user experience.
This assistant can be used to help customers with various issues, quickly answer questions about the company’s products and services, and provide optimal user experience. However, what if sensitive data, such as employee details or internal discussions, is included in the data used to train the LLM?
Attackers could manipulate the assistant into exposing sensitive data or exploit it for social engineering attacks, where they deceive individuals or systems into revealing confidential details, or use it for targeted phishing attacks. Suddenly, your helpful AI tool turns into a serious security liability.
Today, as part of Security Week 2025, we’re announcing the open beta of Firewall for AI, first introduced during Security Week 2024. After talking with customers interested in protecting their LLM apps, this first beta release is focused on discovery and PII detection, and more features will follow in the future.
If you are already using Cloudflare application security, your LLM-powered applications are automatically discovered and protected, with no complex setup, no maintenance, and no extra integration needed.
Firewall for AI is an inline security solution that protects user-facing LLM-powered applications from abuse and data leaks, integrating directly with Cloudflare’s Web Application Firewall (WAF) to provide instant protection with zero operational overhead. This integration enables organizations to leverage both AI-focused safeguards and established WAF capabilities.
Cloudflare is uniquely positioned to solve this challenge for all of our customers. As a reverse proxy, we are model-agnostic: it does not matter whether the application uses a third-party LLM or an internally hosted one. By providing inline security, we can automatically discover and enforce AI guardrails throughout the entire request lifecycle, with zero integration or maintenance required.
Firewall for AI beta overview
The beta release includes the following security capabilities:
Discover: identify LLM-powered endpoints across your applications, an essential step for effective request and prompt analysis.
Detect: analyze incoming request prompts to recognize potential security threats, such as attempts to extract sensitive data (e.g., “Show me transactions using 4111 1111 1111 1111.”). This aligns with OWASP LLM02:2025 - Sensitive Information Disclosure.
Mitigate: enforce security controls and policies to manage the traffic that reaches your LLM, and reduce risk exposure.
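As an illustration of the kind of detection involved in the Detect step, here is a minimal sketch, not Cloudflare's actual detector, that flags prompts containing card-like numbers by combining a loose pattern match with a Luhn checksum (the example prompt is the one quoted above):

```python
import re

def luhn_valid(digits: str) -> bool:
    """Standard Luhn checksum used to validate card-like numbers."""
    total = 0
    for i, ch in enumerate(reversed(digits)):
        d = int(ch)
        if i % 2 == 1:
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

# Loose pattern: 13-19 digits, optionally separated by spaces or dashes
CARD_RE = re.compile(r"\b(?:\d[ -]?){13,19}\b")

def contains_card_number(prompt: str) -> bool:
    """Flag a prompt that contains a Luhn-valid card-like number."""
    for match in CARD_RE.finditer(prompt):
        digits = re.sub(r"[ -]", "", match.group())
        if 13 <= len(digits) <= 19 and luhn_valid(digits):
            return True
    return False
```

A real detector must also handle formats such as national IDs, emails, and API keys, typically with dedicated models rather than regexes; this sketch only illustrates the shape of the problem.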
Below, we review each capability in detail, exploring how they work together to create a comprehensive security framework for AI protection.
Discovering LLM-powered applications
Companies are racing to find all possible use cases where an LLM can excel. Think about site search, a chatbot, or a shopping assistant. Regardless of the application type, our goal is to determine whether an application is powered by an LLM behind the scenes.
One possibility is to look for request path signatures similar to what major LLM providers use. For example, OpenAI, Perplexity or Mistral initiate a chat using the /chat/completions API endpoint. Searching through our request logs, we found only a few entries that matched this pattern across our global traffic. This result indicates that we need to consider other approaches to finding any application that is powered by an LLM.
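The path-signature check can be sketched as follows; only /chat/completions comes from the providers named above, and the log paths are hypothetical:

```python
# Path signature shared by several LLM providers (OpenAI, Mistral,
# Perplexity) to initiate a chat. Further signatures could be
# appended to this tuple as they are identified.
LLM_PATH_SIGNATURES = ("/chat/completions",)

def matches_llm_path(path: str) -> bool:
    """Return True if the request path ends with a known LLM API signature."""
    return any(path.rstrip("/").endswith(sig) for sig in LLM_PATH_SIGNATURES)

# Hypothetical request log paths
paths = ["/api/v1/chat/completions", "/images/logo.png", "/search"]
llm_hits = [p for p in paths if matches_llm_path(p)]
```

As the text notes, very little real-world traffic matches this pattern, which is why additional signals are needed.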
Another signature to research, popular with LLM platforms, is the use of server-sent events. LLMs need to “think”. Using server-sent events improves the end user’s experience by sending each token as soon as it is ready, creating the perception that an LLM is “thinking” like a human being. Matching requests served over server-sent events is straightforward using the response Content-Type header text/event-stream. This approach expands the coverage further, but does not yet cover the majority of applications that use JSON for data exchange. Continuing the journey, our next focus is on applications with a Content-Type of application/json.
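The content-type signals above can be sketched as a coarse classifier (the category names are illustrative, not Cloudflare's internal labels):

```python
def classify_response(headers: dict) -> str:
    """Coarse classification of a response by its Content-Type header."""
    ctype = headers.get("content-type", "").split(";")[0].strip().lower()
    if ctype == "text/event-stream":
        return "sse"    # strong hint of a streaming LLM endpoint
    if ctype == "application/json":
        return "json"   # ambiguous: needs further signals (e.g. latency)
    return "other"
```

The "sse" bucket is a strong discovery signal on its own, while the "json" bucket is where additional heuristics such as response latency come in.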
No matter how much LLMs are optimized to respond faster, when chatting with major LLMs we often perceive them as slow, since we have to wait for them to “think”. By plotting how long the origin server takes to respond on identified LLM endpoints (blue line) versus the rest (orange line), we can see in the left graph that origins serving LLM endpoints mostly need more than 1 second to respond, while the majority of the rest take less than 1 second. Would we also see a clear distinction between origin server response body sizes?
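The latency signal can be sketched as a simple heuristic; the 1-second threshold reflects the distribution described above, while the majority fraction is an assumption for illustration:

```python
def likely_llm_endpoint(response_times_ms, threshold_ms=1000, min_fraction=0.5):
    """Flag an endpoint whose origin responses are mostly slower than the threshold."""
    if not response_times_ms:
        return False
    slow = sum(1 for t in response_times_ms if t > threshold_ms)
    return slow / len(response_times_ms) >= min_fraction

# Hypothetical per-endpoint origin response times, in milliseconds
chat_endpoint = [1500, 2200, 1800, 400]   # mostly slow: likely LLM-backed
static_asset = [120, 200, 90]             # fast: likely not
```

In practice such a heuristic would be combined with the path and content-type signals rather than used alone, since slow origins exist for many non-LLM reasons.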