Introducing Cloudflare Containers: Run Containers Natively on the Workers Platform

2025/04/11 22:00

It is almost the end of Developer Week and we haven’t talked about containers: until now. As some of you may know, we’ve been working on a container platform behind the scenes for some time.

In late June, we plan to release Containers in open beta, and today we’ll give you a sneak peek at what makes it unique.

Workers are the simplest way to ship software around the world with little overhead. But sometimes you need to do more. You might want to:

Run user-generated code in any language

Execute a CLI tool that needs a full Linux environment

Use several gigabytes of memory or multiple CPU cores

Port an existing application from AWS, GCP, or Azure without a major rewrite

Cloudflare Containers let you do all of that while being simple, scalable, and global.

Through a deep integration with Workers and an architecture built on Durable Objects, Workers can be your:

API Gateway: Letting you control routing, authentication, caching, and rate-limiting before requests reach a container

Service Mesh: Creating private connections between containers with a programmable routing layer

Orchestrator: Allowing you to write custom scheduling, scaling, and health checking logic for your containers

Instead of having to deploy new services, write custom Kubernetes operators, or wade through control plane configuration to extend the platform, you just write code.

Let’s see what it looks like.

Deploying different application types

A stateful workload: executing AI-generated code

First, let’s take a look at a stateful example.

Imagine you are building a platform where end-users can run code generated by an LLM. This code is untrusted, so each user needs their own secure sandbox. Additionally, you want users to be able to run multiple requests in sequence, potentially writing to local files or saving in-memory state.

To do this, you need to create a container on-demand for each user session, then route subsequent requests to that container. Here’s how you can accomplish this:

First, you write some basic Wrangler config, then you route requests to containers via your Worker:
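The code samples did not survive this copy of the post, so here is a minimal sketch of what they might look like. The CODE_EXECUTOR binding, the `Container` class from `cloudflare:workers`, and the one-minute sleep timeout come from the surrounding text; the exact config field names, the `Container` options, and the session-ID scheme are assumptions for illustration, not verbatim from the original post.

```jsonc
// wrangler.jsonc — field names approximate
{
  "containers": [
    { "class_name": "CodeExecutor", "image": "./Dockerfile" }
  ],
  "durable_objects": {
    "bindings": [{ "name": "CODE_EXECUTOR", "class_name": "CodeExecutor" }]
  }
}
```

```typescript
// Worker — route each user session to its own container instance
import { Container } from "cloudflare:workers";

interface Env {
  CODE_EXECUTOR: DurableObjectNamespace;
}

export class CodeExecutor extends Container {
  defaultPort = 8080; // port the sandboxed app listens on (assumed)
  sleepAfter = "1m";  // matches the 1-minute timeout described below
}

export default {
  async fetch(request: Request, env: Env): Promise<Response> {
    // Same session ID -> same container instance, so local files and
    // in-memory state survive across a user's requests
    const sessionId = new URL(request.url).searchParams.get("session") ?? "anon";
    const id = env.CODE_EXECUTOR.idFromName(sessionId);
    return env.CODE_EXECUTOR.get(id).fetch(request);
  }
};
```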

Then, deploy your code with a single command: wrangler deploy. This builds your container image, pushes it to Cloudflare’s registry, readies containers to boot quickly across the globe, and deploys your Worker.

That’s it.

How does it work?

Your Worker creates and starts up containers on-demand. Each time you call env.CODE_EXECUTOR.get(id) with a unique ID, it sends requests to a unique container instance. The container will automatically boot on the first fetch, then put itself to sleep after a configurable timeout, in this case 1 minute. You only pay for the time that the container is actively running.

When you request a new container, we boot one in a Cloudflare location near the incoming request. This means that low-latency workloads are well-served no matter the region. Cloudflare takes care of all the pre-warming and caching so you don’t have to think about it.

This allows each user to run code in their own secure environment.

Stateless and global: FFmpeg everywhere

Stateless and autoscaling applications work equally well on Cloudflare Containers.

Imagine you want to run a container that takes a video file and turns it into an animated GIF using FFmpeg. Unlike the previous example, any container can serve any request, but you still don’t want to send bytes across an ocean and back unnecessarily. So, ideally the app can be deployed everywhere.

To do this, you declare a container in Wrangler config and turn on autoscaling. This specific configuration ensures that one instance is always running and if CPU usage increases beyond 75% of capacity, additional instances are added:
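The config block itself is missing from this copy of the post; a sketch of what it might look like follows. The always-on instance and the 75% CPU threshold come from the text; the field names are assumptions, not taken from the original post.

```jsonc
// wrangler.jsonc — field names approximate
{
  "containers": [
    {
      "class_name": "GifMaker",
      "image": "./Dockerfile",
      "autoscaling": {
        "minimum_instances": 1, // keep one instance always running
        "cpu_target": 75        // add instances above 75% CPU usage
      }
    }
  ],
  "durable_objects": {
    "bindings": [{ "name": "GIF_MAKER", "class_name": "GifMaker" }]
  }
}
```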

To route requests, you just call env.GIF_MAKER.fetch and requests are automatically sent to the closest container:
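The routing snippet is also missing here; based on the description, it might look like the following. The GIF_MAKER binding name is from the text; the handler shape around it is an assumption.

```typescript
// Worker — any GifMaker instance can serve any request
interface Env {
  GIF_MAKER: { fetch(request: Request): Promise<Response> };
}

export default {
  async fetch(request: Request, env: Env): Promise<Response> {
    // Requests are routed automatically to the closest running instance
    return env.GIF_MAKER.fetch(request);
  }
};
```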

Going beyond the basics

From the examples above, you can see that getting a basic container service running on Workers just takes a few lines of config and a little Workers code. There’s no need to worry about capacity, artifact registries, regions, or scaling.

For more advanced use, we’ve designed Cloudflare Containers to run on top of Durable Objects and work in tandem with Workers. Let’s take a look at the underlying architecture and see some of the advanced use cases it enables.

Durable Objects as programmable sidecars

Routing to containers is enabled using Durable Objects under the hood. In the examples above, the Container class from cloudflare:workers just wraps a container-enabled Durable Object and provides helper methods for common patterns. In the rest of this post, we’ll look at examples using Durable Objects directly, as this should shed light on the platform’s underlying design.

Each Durable Object acts as a programmable sidecar that can proxy requests to the container and manage its lifecycle. This allows you to control and extend your containers in ways that are hard to achieve on other platforms.
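A sketch of what that programmable-sidecar pattern might look like with a container-enabled Durable Object. The `ctx.container` surface shown here (`running`, `start`, `getTcpPort`) is an assumption for illustration and may not match the final API.

```typescript
// Durable Object as a programmable sidecar in front of a container
import { DurableObject } from "cloudflare:workers";

export class SandboxSidecar extends DurableObject {
  async fetch(request: Request): Promise<Response> {
    // Custom logic runs here before the container sees the request:
    // authentication, rate limiting, caching, request rewriting, ...
    if (!this.ctx.container?.running) {
      this.ctx.container?.start();
    }
    // Proxy the request to the app listening inside the container
    return this.ctx.container!.getTcpPort(8080).fetch(request);
  }
}
```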
