Artificial Intelligence (AI) continues to power the fourth industrial revolution, and its energy demands are growing with it. Today, anyone can access advanced AI tools and integrate them into their systems to improve efficiency and reduce workload. The energy required to run these algorithms rises as demand for AI applications grows. As such, environmentalists are already raising sustainability concerns about the technology. Thankfully, a team of researchers has created a highly efficient alternative. Here's what you need to know.
Growing AI Energy Demands Creating an Energy Crisis
New AI systems continue to launch at an increasing pace. The most recent global energy-use forecast predicts that AI energy consumption will more than double, from 460 terawatt-hours (TWh) in 2022 to 1,000 TWh by 2026. These systems include recommender engines, large language models (LLMs), image and video processing and generation, Web3 services, and more.
According to the researchers' study, the data transfer AI systems require consumes “200 times the energy used for computation when reading three 64-bit source operands from and writing one 64-bit destination operand to an off-chip main memory.” As such, reducing the energy consumption of AI computing is a prime concern for developers, who will need to overcome this roadblock to achieve large-scale adoption and mature the technology.
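A quick back-of-envelope calculation shows how a ratio on the order of 200x can arise. The per-access energy numbers below are illustrative assumptions chosen for the sketch (in the spirit of widely cited order-of-magnitude estimates), not figures taken from the CRAM paper:

```python
# Back-of-envelope sketch of the ~200x figure quoted above.
# Both energy constants are assumed, illustrative values.

PJ_PER_64BIT_DRAM_ACCESS = 1280.0  # one off-chip 64-bit read or write, picojoules (assumed)
PJ_PER_64BIT_ALU_OP = 25.0         # one 64-bit arithmetic operation, picojoules (assumed)

def transfer_vs_compute_ratio(reads: int = 3, writes: int = 1) -> float:
    """Energy to move the operands off-chip vs. one ALU operation."""
    transfer_pj = (reads + writes) * PJ_PER_64BIT_DRAM_ACCESS
    return transfer_pj / PJ_PER_64BIT_ALU_OP

if __name__ == "__main__":
    # Three source-operand reads plus one destination write, as in the quote.
    print(f"transfer/compute energy ratio: {transfer_vs_compute_ratio():.0f}x")
```

With these assumed constants, four off-chip accesses cost roughly 205 times one arithmetic operation, which is why moving data, not computing on it, dominates the energy budget.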
Thankfully, a group of innovative engineers from the University of Minnesota has stepped up with a possible solution that could reduce the power consumption of AI workloads by orders of magnitude. To accomplish this, the researchers introduced a new chip design that improves on the von Neumann architecture found in most chips today.
Von Neumann Architecture
John von Neumann revolutionized the computer sector in 1945 when he separated logic and memory units, enabling more efficient computing at the time. In this arrangement, the logic and data are stored in different physical locations. His invention improved performance because it allowed both to be accessed simultaneously.
Today, most computers still use the von Neumann structure, with your hard drive storing your programs and random-access memory (RAM) housing programming instructions and temporary data. Today's RAM accomplishes this using various technologies, including DRAM, which stores bits in capacitors, and SRAM, which stores them in multi-transistor flip-flop circuits.
Notably, this structure worked well for decades. However, the constant transfer of data between logic and memory consumes considerable energy, and that cost grows as data volumes and computational loads increase. As such, it creates a performance bottleneck that limits efficiency as computing power scales.
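The bottleneck can be seen even in a toy accounting exercise: in a cache-less von Neumann model, every useful arithmetic operation drags operand traffic across the memory/processor boundary. The function below is our own minimal sketch, not a model from the paper:

```python
# Minimal sketch of the von Neumann bottleneck for an n-element dot
# product with no caching: each multiply-add must fetch its two
# operands from memory, so traffic grows linearly with data size.

def dot_product_traffic(n: int, word_bytes: int = 8):
    """Return (arithmetic_ops, bytes_moved) for an n-element dot product."""
    ops = 2 * n                       # n multiplies + n adds
    bytes_moved = 2 * n * word_bytes  # a[i] and b[i] fetched for every element
    return ops, bytes_moved

ops, traffic = dot_product_traffic(1_000_000)
print(ops, "ops move", traffic, "bytes")  # traffic scales with the data, not the logic
```

No amount of faster logic helps here: the bytes-moved term is fixed by the data size, which is exactly the wall the architectures below try to tear down.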
Attempted Improvements on Energy Demands
Over the years, many attempts have been made to improve on von Neumann's architecture. These attempts have produced different variations of the memory process, each with the goal of bringing computation and storage physically closer together. Currently, the three main variations are the following:
Near-memory Processing
This upgrade moves logic physically closer to memory, typically via a 3D-stacked design. Shrinking the distance reduces the energy needed to transfer the data that feeds computations, which improves efficiency.
In-memory Computing
Another current method of improving computational architecture is in-memory computing. Notably, there are two variations of this style of chip. The original integrates clusters of logic next to the memory on a single chip, eliminating some of the transistors used in its predecessors. However, many consider this approach not “true” in-memory computing because logic and memory still occupy separate locations, which means the performance issues caused by data transfer persist, albeit on a smaller scale.
True In-memory
The final type of chip architecture is “true in-memory.” To qualify, the memory must perform computations directly. This structure enhances capability and performance because the data involved in logic operations never leaves its location. The researchers' latest take on true in-memory architecture is CRAM.
CRAM
Computational random-access memory (CRAM) enables true in-memory computation because data is processed within the same memory array. The researchers modified a standard 1T1M STT-MRAM architecture to make CRAM possible. The CRAM layout integrates additional transistors into each cell and builds on magnetic tunnel junction (MTJ) devices.
This approach provides better control and performance. The team then stacked an additional transistor, a logic line (LL), and a logic bit line (LBL) into each cell, enabling real-time computation within the same memory bank.
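The key behavioral idea, stripped of the circuit details, is that operands live in rows of the array and a logic operation writes its result into another row of the same array, so nothing crosses a memory/processor boundary. The toy model below is our own highly simplified software illustration of that idea; the gate choice (NAND) and class layout are assumptions for the sketch, not the paper's MTJ circuit:

```python
# Toy software model of in-memory logic in the spirit of CRAM:
# a column-parallel NAND over two rows whose result stays inside
# the same array. Purely illustrative; real CRAM computes with
# voltages across magnetic tunnel junctions, not Python lists.

class ToyCRAM:
    def __init__(self, rows: int, cols: int):
        self.array = [[0] * cols for _ in range(rows)]  # bit cells

    def write_row(self, r: int, bits):
        self.array[r] = list(bits)

    def nand_rows(self, a: int, b: int, dest: int):
        """Column-parallel NAND; the result never leaves the array."""
        self.array[dest] = [
            1 - (x & y) for x, y in zip(self.array[a], self.array[b])
        ]

mem = ToyCRAM(rows=4, cols=8)
mem.write_row(0, [1, 1, 0, 0, 1, 0, 1, 1])
mem.write_row(1, [1, 0, 1, 0, 1, 1, 0, 1])
mem.nand_rows(0, 1, dest=2)   # NAND is universal: any logic composes from it
print(mem.array[2])           # → [0, 1, 1, 1, 0, 1, 1, 0]
```

Because NAND is functionally complete, chaining such row operations can, in principle, evaluate any Boolean function entirely inside the array, which is what makes the architecture attractive for data-heavy AI workloads.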
History of CRAM
Today's AI systems require a new structure that can meet their computational demands without deepening sustainability concerns. Recognizing this need, the engineers decided to explore CRAM's capabilities in depth for the first time. Their results were published in the journal npj Unconventional Computing under the title “Experimental demonstration of magnetic tunnel junction-based computational random-access memory.”
The first CRAM leveraged an MTJ device structure. These spintronic devices improve on previous storage methods by using electron spin, rather than transistors, to transfer and store data.