Introducing Cloudflare Containers: Run Containers Natively on the Workers Platform

2025/04/11 22:00

It is almost the end of Developer Week and we haven’t talked about containers: until now. As some of you may know, we’ve been working on a container platform behind the scenes for some time.

In late June, we plan to release Containers in open beta, and today we’ll give you a sneak peek at what makes it unique.

Workers are the simplest way to ship software around the world with little overhead. But sometimes you need to do more. You might want to:

Run user-generated code in any language

Execute a CLI tool that needs a full Linux environment

Use several gigabytes of memory or multiple CPU cores

Port an existing application from AWS, GCP, or Azure without a major rewrite

Cloudflare Containers let you do all of that while being simple, scalable, and global.

Through a deep integration with Workers and an architecture built on Durable Objects, Workers can be your:

API Gateway: Letting you control routing, authentication, caching, and rate-limiting before requests reach a container

Service Mesh: Creating private connections between containers with a programmable routing layer

Orchestrator: Allowing you to write custom scheduling, scaling, and health checking logic for your containers

Instead of having to deploy new services, write custom Kubernetes operators, or wade through control plane configuration to extend the platform, you just write code.

Let’s see what it looks like.

Deploying different application types

A stateful workload: executing AI-generated code

First, let’s take a look at a stateful example.

Imagine you are building a platform where end-users can run code generated by an LLM. This code is untrusted, so each user needs their own secure sandbox. Additionally, you want users to be able to run multiple requests in sequence, potentially writing to local files or saving in-memory state.

To do this, you need to create a container on-demand for each user session, then route subsequent requests to that container. Here’s how you can accomplish this:

First, you write some basic Wrangler config, then you route requests to containers via your Worker:
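The code samples here did not survive republication. As a rough sketch of what such a setup might look like (the binding name `CODE_EXECUTOR` and the 1-minute sleep timeout come from the article; the class name, image path, port, and helper are illustrative assumptions):

```jsonc
// wrangler.jsonc (sketch; image path and instance limit are assumptions)
{
  "name": "code-executor",
  "main": "src/index.ts",
  "containers": [
    { "class_name": "CodeExecutor", "image": "./Dockerfile", "max_instances": 50 }
  ],
  "durable_objects": {
    "bindings": [{ "name": "CODE_EXECUTOR", "class_name": "CodeExecutor" }]
  },
  "migrations": [{ "tag": "v1", "new_sqlite_classes": ["CodeExecutor"] }]
}
```

```typescript
// src/index.ts (sketch): route each user session to its own container.
import { Container } from "cloudflare:workers";

export class CodeExecutor extends Container {
  defaultPort = 8080; // port the sandboxed app listens on (assumption)
  sleepAfter = "1m";  // the configurable idle timeout from the article
}

interface Env {
  CODE_EXECUTOR: DurableObjectNamespace<CodeExecutor>;
}

// Hypothetical session lookup; a real app would authenticate the user.
const getSessionId = (req: Request): string =>
  new URL(req.url).searchParams.get("session") ?? "default";

export default {
  async fetch(request: Request, env: Env): Promise<Response> {
    // Each unique ID maps to a unique container instance.
    const id = env.CODE_EXECUTOR.idFromName(getSessionId(request));
    return env.CODE_EXECUTOR.get(id).fetch(request);
  }
};
```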

Then, deploy your code with a single command: wrangler deploy. This builds your container image, pushes it to Cloudflare’s registry, readies containers to boot quickly across the globe, and deploys your Worker.

That’s it.

How does it work?

Your Worker creates and starts up containers on-demand. Each time you call env.CODE_EXECUTOR.get(id) with a unique ID, it sends requests to a unique container instance. The container will automatically boot on the first fetch, then put itself to sleep after a configurable timeout, in this case 1 minute. You only pay for the time that the container is actively running.

When you request a new container, we boot one in a Cloudflare location near the incoming request. This means that low-latency workloads are well-served no matter the region. Cloudflare takes care of all the pre-warming and caching so you don’t have to think about it.

This allows each user to run code in their own secure environment.

Stateless and global: FFmpeg everywhere

Stateless and autoscaling applications work equally well on Cloudflare Containers.

Imagine you want to run a container that takes a video file and turns it into an animated GIF using FFmpeg. Unlike the previous example, any container can serve any request, but you still don’t want to send bytes across an ocean and back unnecessarily. So, ideally the app can be deployed everywhere.

To do this, you declare a container in Wrangler config and turn on autoscaling. This specific configuration ensures that one instance is always running and if CPU usage increases beyond 75% of capacity, additional instances are added:
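The configuration block referenced here was also lost. A hypothetical sketch matching the described behavior (one always-on instance, scale out above 75% CPU) might look like this; the exact field names are assumptions:

```jsonc
// wrangler.jsonc excerpt (sketch; field names are assumptions)
{
  "containers": [
    {
      "class_name": "GifMaker",
      "image": "./Dockerfile",
      "autoscaling": {
        "minimum_instances": 1,   // keep one instance always running
        "cpu_target_percent": 75  // add instances beyond 75% CPU usage
      }
    }
  ]
}
```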

To route requests, you just call env.GIF_MAKER.fetch and requests are automatically sent to the closest container:
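A minimal Worker sketch of that call (the binding name `GIF_MAKER` comes from the article; the `Env` shape is an assumption):

```typescript
// Sketch: with autoscaling on, fetch() on the binding routes the request
// to the closest running container instance; the Worker picks nothing.
interface Env {
  GIF_MAKER: { fetch(request: Request): Promise<Response> };
}

export default {
  async fetch(request: Request, env: Env): Promise<Response> {
    return env.GIF_MAKER.fetch(request);
  }
};
```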

Going beyond the basics

From the examples above, you can see that getting a basic container service running on Workers just takes a few lines of config and a little Workers code. There’s no need to worry about capacity, artifact registries, regions, or scaling.

For more advanced use, we’ve designed Cloudflare Containers to run on top of Durable Objects and work in tandem with Workers. Let’s take a look at the underlying architecture and see some of the advanced use cases it enables.

Durable Objects as programmable sidecars

Routing to containers is enabled using Durable Objects under the hood. In the examples above, the Container class from cloudflare:workers just wraps a container-enabled Durable Object and provides helper methods for common patterns. In the rest of this post, we’ll look at examples using Durable Objects directly, as this should shed light on the platform’s underlying design.

Each Durable Object acts as a programmable sidecar that can proxy requests to the container and manage its lifecycle. This allows you to control and extend your containers in ways that are hard to achieve on other platforms.
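As an illustration of that sidecar pattern, a container-enabled Durable Object might look roughly like this; the `ctx.container` surface shown (`running`, `start`, `getTcpPort`) follows the beta's described API but should be treated as an assumption, and the class name and port are hypothetical:

```typescript
import { DurableObject } from "cloudflare:workers";

// Sketch: the Durable Object proxies requests to its container and
// manages the container's lifecycle in ordinary Workers code.
export class MyContainer extends DurableObject {
  async fetch(request: Request): Promise<Response> {
    // Boot the container on first use; custom scheduling or health
    // checking logic could be added here.
    if (!this.ctx.container.running) {
      this.ctx.container.start();
    }
    // Proxy the request to the app listening on port 8080 (assumed).
    return this.ctx.container.getTcpPort(8080).fetch(request);
  }
}
```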

Disclaimer: info@kdj.com

The information provided is not trading advice. kdj.com assumes no liability for any investments made based on the information provided in this article. Cryptocurrencies are highly volatile, so please research thoroughly and invest with caution.

If you believe content used on this website infringes your copyright, please contact us immediately (info@kdj.com) and we will remove it promptly.
