History of AI: Part Six — The Revolution
Fetch.ai · 3 min read
In this series chronicling AI history, we’re finally entering the 21st century.
So far, we have seen how AI underwent a remarkable transformation over the course of 50 years, slowly reshaping technology. Let's now look at how it started reshaping our daily lives. Over the first two decades of this century, AI evolved from a mere concept into a reality.
The Emergence of New Applications
It was the early 2000s. The world had survived Y2K and was more excited about computers than ever. This is when new AI applications emerged. AI was no longer confined to research labs; it was slowly being integrated into daily life, from household gadgets to outer-space exploration. For instance, in 2002, the Roomba, a groundbreaking autonomous vacuum cleaner, was brought to market. Just two years later, NASA's Mars rovers, Spirit and Opportunity, made history by autonomously navigating the Martian terrain.
From simplifying daily chores to tackling the complexities of space exploration, AI had arrived. By the mid-2000s, AI was taking significant strides forward. One pivotal moment came in 2006, when the concept of "machine reading" was introduced. This breakthrough opened the door for AI systems to process and understand text independently, revolutionizing language comprehension.
This is when three key technologies emerged: big data, deep learning, and large language models.
Big Data and Economic Impact
By 2009, around the time the world was scrambling to recover from the financial crisis, nearly every sector of the U.S. economy was managing colossal volumes of data, averaging around 200 terabytes per sector by most reports. The decade saw a big change in how we deal with data: it became far more plentiful, and computers got faster and cheaper, which made advanced machine learning techniques practical at scale. This period was defined by the rise of big data, which transformed how industries handled information.
Instead of limiting ourselves to select samples, we began utilizing all available data for analysis. This comprehensive approach enhanced decision-making and optimization. Big data was distinguished by its large scale, rapid pace, diverse nature, intrinsic value, and accuracy (often summarized as volume, velocity, variety, value, and veracity), which necessitated the development of innovative processing models to capitalize fully on its potential.
Deep Learning: Advancements and Challenges
Deep learning emerged as an important technology during this period. It modelled complex data abstractions using deep neural networks with multiple processing layers. Although the Universal Approximation Theorem suggests that deep networks aren't strictly necessary for approximating continuous functions, deep learning proved effective in addressing issues such as overfitting that are common in shallow networks. As a result, deep neural networks could generate far more intricate models than their shallow counterparts.
However, deep learning faced its own set of challenges. One significant issue was the vanishing gradient problem in recurrent neural networks, where gradients shrank as they were propagated back through many time steps. Innovations such as Long Short-Term Memory (LSTM) units were developed to mitigate this problem.
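The vanishing gradient problem can be illustrated with a few lines of code. In this toy sketch (not from the original article; the weight and activation values are illustrative), backpropagating through a chain of tanh steps multiplies the gradient by a factor smaller than one at every step, so it decays geometrically with sequence length:

```python
import math

def gradient_through_time(steps, w=0.5, z=1.0):
    """Toy backprop through `steps` identical tanh units.

    Each step multiplies the gradient by tanh'(z) * w (chain rule).
    Since |tanh'(z)| <= 1 and w is modest, the product shrinks
    geometrically -- the vanishing gradient problem in RNNs.
    """
    grad = 1.0
    for _ in range(steps):
        grad *= (1.0 - math.tanh(z) ** 2) * w  # derivative of tanh times weight
    return grad

# The gradient after 50 steps is vastly smaller than after 5:
print(gradient_through_time(5))
print(gradient_through_time(50))
```

LSTM cells counter this by carrying information along an additively updated cell state, so the gradient is not forced through a squashing multiplication at every step.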
State-of-the-art deep neural networks began rivalling human accuracy in fields like computer vision, as seen in tasks involving the MNIST database and traffic sign recognition. Furthermore, language processing engines, exemplified by IBM's Watson, outperformed humans at general trivia, and advances in deep learning achieved remarkable feats in games like Go and Doom.
Large Language Models
In 2017, Google researchers published a paper titled "Attention Is All You Need." It introduced the transformer architecture, which improved upon the existing seq2seq technology. The transformer relied heavily on the attention mechanism, developed by Bahdanau and others in 2014. This innovation laid the foundation for many subsequent advances in AI language models, and large language models slowly started revolutionizing the field. In 2018, BERT, an encoder-only model, became widespread.
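The core of the transformer is scaled dot-product attention: each query scores every key, the scores are softmax-normalized, and the output is the resulting weighted average of the values. The sketch below is a minimal, dependency-free illustration with made-up toy vectors, not a faithful reimplementation of any production model:

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def attention(queries, keys, values):
    """Scaled dot-product attention: softmax(QK^T / sqrt(d_k)) V."""
    d_k = len(keys[0])
    out = []
    for q in queries:
        # Score each key against the query, scaled by sqrt(d_k)
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d_k)
                  for k in keys]
        weights = softmax(scores)
        # Weighted average of the value vectors
        out.append([sum(w * v[j] for w, v in zip(weights, values))
                    for j in range(len(values[0]))])
    return out

# Toy example: the query aligns with the first key,
# so the first value row dominates the output.
q = [[1.0, 0.0]]
k = [[1.0, 0.0], [0.0, 1.0]]
v = [[10.0, 0.0], [0.0, 10.0]]
print(attention(q, k, v))
```

Because every query attends to every key in parallel, transformers avoid the step-by-step recurrence of seq2seq RNNs, which is a large part of why they scale so well.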
Then came GPT.
The original GPT model was introduced in 2018 and met with a lukewarm reception. It was GPT-2, in 2019, that garnered widespread attention. It was so powerful that OpenAI initially hesitated to release it to the public, citing concerns about its potential for misuse. The model's ability to generate contextually relevant text raised ethical questions about the responsible use of AI.
But then, right at the onset of the next decade, came GPT-3.