Cryptocurrency prices:

Coin          Price (USD)   Change
bitcoin       $98,715.65    0.59%
ethereum      $3,443.40     4.25%
tether        $1.00         0.03%
solana        $259.49       1.74%
bnb           $671.29       8.14%
xrp           $1.56         10.38%
dogecoin      $0.470793     18.73%
usd-coin      $0.999844     0.01%
cardano       $1.10         22.59%
tron          $0.220420     11.59%
avalanche     $43.26        14.07%
shiba-inu     $0.000028     12.69%
toncoin       $6.16         12.45%
stellar       $0.439160     51.48%
polkadot-new  $8.15         34.47%

History of AI: Part Six — The Revolution

Fetch.ai · Published in Fetch.ai · 3 min read · 2024/04/09 22:04

[Image: Artist’s view of Artificial Intelligence in 2000–2010 art style.]

In this series chronicling AI history, we’re finally entering the 21st century.

So far, we have seen how AI underwent a remarkable transformation over its first 50 years and slowly reshaped technology. Let’s now look at how it began reshaping our daily lives. Over the first two decades of this century, AI evolved from a mere concept into a reality.

The Emergence of New Applications

It was the early 2000s. The world had survived Y2K and was more excited about computers than ever. This is when new AI applications emerged: no longer confined to research labs, AI was slowly being integrated into daily life, from household gadgets to outer-space exploration. For instance, in 2002 the Roomba, a groundbreaking autonomous vacuum cleaner, was put to the test on the market. Just two years later, NASA’s Mars rovers, Spirit and Opportunity, made history by autonomously navigating the Martian terrain.

From simplifying daily chores to tackling the complexities of space exploration, AI had arrived. By the mid-2000s, it was taking significant strides forward. One pivotal moment came in 2006, when the concept of “machine reading” was introduced. This breakthrough opened the door for AI systems to process and understand text independently, revolutionizing language comprehension.

This is when three key pieces of technology emerged: Big Data, Deep Learning, and Large Language Models.

Big Data and Economic Impact

By 2009, around the time the world was scrambling to recover from the great economic collapse, nearly every sector of the U.S. economy was managing colossal volumes of data, by most reports averaging around 200 terabytes per sector. The decade saw a big change in how we deal with data: it became more available, and computers got faster and cheaper, allowing us to apply advanced machine learning techniques. This period was defined by the rise of big data, which transformed how industries handled information.

Instead of limiting ourselves to select samples, we began utilizing all available data for analysis. This comprehensive approach enhanced decision-making and optimization processes. Big data was distinguished by its large scale, rapid pace, diverse nature, intrinsic value, and accuracy (the “five Vs”: volume, velocity, variety, value, and veracity). This necessitated the development of innovative processing models to capitalize fully on its potential.

Deep Learning: Advancements and Challenges

Deep learning emerged as an important piece of technology during this period. It modelled complex data abstractions using deep neural networks with multiple processing layers. Although the Universal Approximation Theorem implies that depth is not strictly necessary to approximate continuous functions (a single hidden layer suffices in principle), deep learning proved effective in practice, addressing issues such as overfitting that are common in shallow networks. As a result, deep neural networks could generate far more intricate models than their shallow counterparts.

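To make “multiple processing layers” concrete, here is a minimal sketch of a deep feed-forward network’s forward pass (plain NumPy; the layer sizes are illustrative assumptions, not taken from the article). Each hidden layer re-represents the output of the layer below it, which is how deep models build up the intricate abstractions described above.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(0.0, x)

# A deep network is a stack of layers; each one transforms the
# representation produced by the layer below it.
layer_sizes = [784, 256, 128, 64, 10]   # e.g. image pixels -> class scores
weights = [rng.normal(0.0, 0.01, (m, n))
           for m, n in zip(layer_sizes, layer_sizes[1:])]
biases = [np.zeros(n) for n in layer_sizes[1:]]

def forward(x):
    """Forward pass: affine map plus nonlinearity at every hidden layer."""
    h = x
    for W, b in zip(weights[:-1], biases[:-1]):
        h = relu(h @ W + b)              # progressively more abstract features
    return h @ weights[-1] + biases[-1]  # final linear layer: raw class scores

print(forward(rng.normal(size=784)).shape)  # (10,)
```
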
However, deep learning faced its own set of challenges. One significant issue was the vanishing gradient problem in recurrent neural networks, in which gradients shrink as they are propagated back through many layers and time steps, making early inputs hard to learn from. Innovations such as Long Short-Term Memory (LSTM) units were developed to mitigate this problem.

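The vanishing-gradient effect is easy to demonstrate numerically. Below is a minimal sketch (NumPy; the state size, sequence length, and weight scale are illustrative assumptions): in a vanilla RNN with h_t = tanh(W h_{t-1}), the gradient reaching early time steps is a product of one Jacobian per step, so its norm shrinks roughly geometrically. LSTM units counter this with an additive cell state and gating.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy vanilla RNN: h_t = tanh(W @ h_{t-1}). Backpropagating through T steps
# picks up a Jacobian factor diag(1 - h_t**2) @ W at every step.
n, T = 16, 50
W = rng.normal(0.0, 0.3 / np.sqrt(n), (n, n))  # modest recurrent weights

h, states = rng.normal(size=n), []
for _ in range(T):                 # forward pass, remembering hidden states
    h = np.tanh(W @ h)
    states.append(h)

grad, norms = np.ones(n), []       # pretend dLoss/dh_T is all ones
for h_t in reversed(states):       # backward pass through time
    grad = W.T @ ((1.0 - h_t ** 2) * grad)  # chain rule through tanh(W @ h)
    norms.append(np.linalg.norm(grad))

print(f"|grad| after  1 step : {norms[0]:.3e}")
print(f"|grad| after {T} steps: {norms[-1]:.3e}")  # many orders of magnitude smaller
```
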
State-of-the-art deep neural networks began rivalling human accuracy in fields like computer vision, as seen in tasks involving the MNIST database and traffic sign recognition. Furthermore, language-processing engines, exemplified by IBM’s Watson, outperformed humans at general-knowledge trivia, and advances in deep learning achieved remarkable feats in games like Go and Doom.

Large Language Models

In 2017, Google researchers published a paper titled “Attention Is All You Need”. This paper introduced the transformer architecture, which improved upon the existing Seq2seq technology. The transformer relied heavily on the attention mechanism developed by Bahdanau and others in 2014, and this innovation laid the foundation for many subsequent advancements in AI language models. Large language models slowly started revolutionizing the field of artificial intelligence. In 2018, BERT, an encoder-only model, became widespread.

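The attention mechanism at the heart of the transformer fits in a few lines. Here is a minimal sketch of the scaled dot-product attention described in “Attention Is All You Need” (NumPy; the sequence length and embedding size are illustrative assumptions): each position’s output is a weighted average of all value vectors, with weights given by a softmax over query-key similarity.

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)  # subtract max for stability
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def scaled_dot_product_attention(Q, K, V):
    """Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V  (Vaswani et al., 2017)."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)  # pairwise query-key similarity
    weights = softmax(scores)        # each row sums to 1
    return weights @ V               # weighted average of the values

# Illustrative self-attention over a sequence of 5 tokens with 8-dim embeddings:
rng = np.random.default_rng(0)
X = rng.normal(size=(5, 8))
Wq, Wk, Wv = (rng.normal(size=(8, 8)) for _ in range(3))
out = scaled_dot_product_attention(X @ Wq, X @ Wk, X @ Wv)
print(out.shape)  # (5, 8): one contextualized vector per token
```
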
Then came GPT.

The GPT model was actually introduced in 2018, to a lukewarm reception. It was GPT-2, in 2019, that garnered widespread attention: the model was so powerful that OpenAI initially hesitated to release it to the public, out of concern about its potential for misuse. Its ability to generate contextually relevant text raised ethical questions about the responsible use of AI.

But then, right at the onset of the next decade, came GPT-3.
