

CRAM: A New Chip Design That Could Cut the Power Consumption of AI Protocols by Orders of Magnitude

2024/08/12 19:03


Artificial Intelligence (AI) continues to power the fourth industrial revolution, and its energy demands are growing with it. Today, anyone can access advanced AI tools and integrate them into their systems to improve efficiency and reduce workload. The energy required to run these algorithms grows as demand for AI applications rises. As such, environmentalists are already pointing out sustainability concerns surrounding the tech. Thankfully, a team of researchers has created a highly efficient alternative. Here's what you need to know.


Growing AI Energy Demands Creating an Energy Crisis


New AI systems continue to launch with increasing frequency. The most recent global energy use forecast predicts that AI energy consumption will more than double, from 460 terawatt-hours (TWh) in 2022 to 1,000 TWh by 2026. These protocols include recommenders, large language models (LLMs), image and video processing and creation, Web3 services, and more.


According to the researchers' study, AI systems require data transference that equates to “200 times the energy used for computation when reading three 64-bit source operands from and writing one 64-bit destination operand to an off-chip main memory.” As such, reducing energy consumption for artificial intelligence (AI) computing applications is a prime concern for developers, who will need to overcome this roadblock to achieve large-scale adoption and mature the tech.
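To make the scale of that overhead concrete, here is a rough back-of-envelope sketch. The 1 pJ compute figure is an assumed placeholder; only the ~200x data-movement ratio comes from the study quoted above.

```python
# Illustration of the transfer-vs-compute energy gap. The absolute
# compute energy (1 pJ) is an assumed placeholder value; only the
# ~200x data-movement ratio is taken from the study.

def energy_per_op_pj(compute_pj: float, transfer_ratio: float = 200.0) -> float:
    """Total energy for one operation: computation plus data movement."""
    return compute_pj * (1.0 + transfer_ratio)

# With off-chip memory, data movement dominates the energy budget:
off_chip = energy_per_op_pj(1.0)        # 201.0 pJ per operation
# If the computation happens where the data lives, the transfer term vanishes:
in_place = energy_per_op_pj(1.0, 0.0)   # 1.0 pJ per operation
print(off_chip / in_place)
```

Under these assumptions, eliminating the off-chip round trips cuts per-operation energy by roughly two hundredfold, which is the gap in-memory designs target.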


Thankfully, a group of innovative engineers from the University of Minnesota has stepped up with a possible solution that could reduce the power consumption of AI protocols by orders of magnitude. To accomplish this task, the researchers introduced a new chip design that improves on the von Neumann architecture found in most chips today.


Von Neumann Architecture


John von Neumann revolutionized the computer sector in 1945 when he separated logic and memory units, enabling more efficient computing at the time. In this arrangement, the logic and data are stored in different physical locations. His invention improved performance because it allowed both to be accessed simultaneously.


Today, most computers still use the von Neumann structure, with the hard drive storing programs and the random access memory (RAM) housing program instructions and temporary data. RAM accomplishes this task using various methods, including DRAM, which leverages capacitors, and SRAM, which stores each bit in a small flip-flop circuit.


Notably, this structure worked great for decades. However, the constant transfer of data between logic and memory requires lots of energy. This energy transfer increases as data requirements and computational load increase. As such, it creates a performance bottleneck that limits efficiency as computing power increases.
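The bottleneck can be sketched with a toy model that simply counts trips across the memory bus. The class and its transfer counter are illustrative only, not a hardware model.

```python
# Toy sketch of the von Neumann bottleneck: logic and memory are
# separate, so every operation pays for data movement across a bus.

class VonNeumannMachine:
    def __init__(self, memory):
        self.memory = list(memory)  # separate storage for data
        self.transfers = 0          # trips across the memory bus

    def load(self, addr):
        self.transfers += 1         # each operand read crosses the bus
        return self.memory[addr]

    def store(self, addr, value):
        self.transfers += 1         # each result write crosses the bus
        self.memory[addr] = value

    def add(self, a_addr, b_addr, dest_addr):
        # Two operand reads plus one result write: three bus transfers
        # for a single arithmetic operation.
        self.store(dest_addr, self.load(a_addr) + self.load(b_addr))

m = VonNeumannMachine([2, 3, 0])
m.add(0, 1, 2)
print(m.memory[2], m.transfers)  # 5 3
```

Even this one addition costs three transfers; at scale, that data movement, not the arithmetic itself, dominates the energy budget.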


Attempted Improvements on Energy Demands


Over the years, many attempts have been made to improve von Neumann's architecture. These attempts have created different variations of the memory process with the goal of bringing logic and memory physically closer together. Currently, the three main variations are:


Near-memory Processing


This upgrade moves logic physically closer to memory. This was accomplished using a 3D-stacked infrastructure. Moving the logic closer reduced the distance and energy needed to transfer the data required to power computations. This architecture provided improved efficiency.


In-memory Computing


Another current method of improving computational architecture is in-memory computing. Notably, there are two variations of this style of chip. The original integrates clusters of logic next to the memory on a single chip. This deployment eliminates transistors used in its predecessors. However, many consider this method not “true” in-memory computing because it still keeps logic and memory in separate locations, meaning the performance issues caused by data transfer persist, albeit on a smaller scale.


True In-memory


The final type of chip architecture is “true in-memory.” To qualify as this type of architecture, the memory needs to perform computations directly. This structure enhances capabilities and performance because the data for logic operations remains in place. The researchers' latest version of true in-memory architecture is CRAM.


CRAM

Computational random-access memory (CRAM) enables true in-memory computation, as data is processed within the same array. The researchers modified a standard 1T1M STT-MRAM architecture to make CRAM possible. The CRAM layout integrates micro transistors into each cell and builds on magnetic tunnel junction (MTJ)-based devices.


This approach provides better control and performance. The team then stacked an additional transistor, logic line (LL), and logic bit line (LBL) in each cell, enabling real-time computation within the same memory bank.
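As a simplified sketch of the idea (not the researchers' actual cell design), consider a toy memory array that performs a logic operation inside the array itself, so the operands never cross a bus. NAND is chosen here because it is a common primitive in MTJ-logic demonstrations; the class and word width are illustrative assumptions.

```python
# Toy sketch of in-memory computation: the logic operation runs where
# the data lives, so no bus transfers to an external CPU are needed.
# This is an illustrative model, not the researchers' CRAM cell design.

class ComputationalMemory:
    def __init__(self, rows):
        self.rows = list(rows)      # 16-bit words stored in the array
        self.transfers = 0          # bus transfers to an external CPU

    def in_memory_nand(self, a, b, dest):
        # The NAND is computed inside the array; operands stay in place,
        # so the transfer counter never increments.
        self.rows[dest] = ~(self.rows[a] & self.rows[b]) & 0xFFFF

mem = ComputationalMemory([0b1100, 0b1010, 0])
mem.in_memory_nand(0, 1, 2)
print(hex(mem.rows[2]), mem.transfers)  # 0xfff7 0
```

Contrast this with the von Neumann model, where the same operation would cost two operand reads and one result write across the memory bus.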


History of CRAM


Today's AI systems require a new structure that can meet their computational demands without deepening sustainability concerns. Recognizing this demand, engineers decided to delve deep into CRAM capabilities for the first time. Their results were published in an npj scientific journal under the title “Experimental demonstration of magnetic tunnel junction-based computational random-access memory.”


The first CRAM leveraged an MTJ device structure. These spintronic devices improved on previous storage methods by using electron spin rather than electric charge to transfer and store data.


News source: www.securities.io
