Market cap: $3.688T (1.470%)
24h trading volume: $372.191B (61.800%)
Coin        Price (USD)       24h change
bitcoin     $108064.256573    +2.62%
ethereum    $3416.451426      +4.04%
xrp         $3.182014         -0.61%
tether      $0.998286         -0.06%
solana      $258.371362       -5.60%
bnb         $703.182066       -0.59%
dogecoin    $0.378176         -4.38%
usd-coin    $1.000010         -0.01%
cardano     $1.062758         -0.47%
tron        $0.239600         -1.00%
chainlink   $25.901897        +10.66%
avalanche   $38.079479        -2.52%
sui         $4.720134         -3.00%
stellar     $0.462876         -3.68%
hedera      $0.354732         +0.20%

Cryptocurrency News

Large Concept Models (LCMs) Offer Some Exciting Prospects

2025/01/07 11:13

Large concept models (LCMs) offer some exciting prospects. In today’s column, I explore an intriguing new advancement for generative AI and large language models (LLMs) consisting of moving beyond contemporary words-based approaches to sentence-oriented approaches.

The extraordinary deal is this. You might be vaguely aware that most LLMs currently focus on words and accordingly generate responses on a word-at-a-time basis. Suppose that instead of looking at the world via individual words, we could use sentences as a core element. Whole sentences come into AI, and complete sentences are generated out of AI.
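
As a toy illustration of that word-at-a-time behavior, the sketch below generates a reply one word per step from a tiny hand-built bigram table (the table and the next_word helper are hypothetical, purely illustrative; a real LLM samples each next token from a learned distribution):

```python
# Toy sketch of word-at-a-time generation (hypothetical data, not a real LLM).
import random

# A tiny hand-made bigram table: current word -> possible next words.
BIGRAMS = {
    "<start>": ["large", "whole"],
    "large": ["concept"],
    "concept": ["models"],
    "models": ["generate"],
    "generate": ["sentences"],
    "whole": ["sentences"],
    "sentences": ["<end>"],
}

def next_word(current: str) -> str:
    """Pick the next word given only the current word (one word per step)."""
    return random.choice(BIGRAMS.get(current, ["<end>"]))

word, output = "<start>", []
while True:
    word = next_word(word)
    if word == "<end>":
        break
    output.append(word)          # the response grows one word at a time
print(" ".join(output))
```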

To do this, the twist is that sentences are reducible to underlying concepts, and those computationally ferreted-out concepts become the esteemed coinage of the realm for this groundbreaking architectural upheaval of conventional generative AI and LLMs. The new angle radically becomes that we then design, build, and field so-called large concept models (LCMs) in lieu of old-fashioned large language models.
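
To make the sentences-to-concepts step concrete, here is a minimal sketch that maps whole sentences to fixed-size "concept" vectors using the open-source sentence-transformers library (the model name is my own illustrative choice; the column does not prescribe a particular encoder, and this shows only the sentence-to-vector step, not a full LCM):

```python
# Minimal sketch: whole sentences in, one "concept" vector per sentence out.
# Assumes `pip install sentence-transformers`; the model choice is illustrative.
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")

sentences = [
    "Large concept models offer some exciting prospects.",
    "Whole sentences come into the AI, and complete sentences are generated out of it.",
]
concept_vectors = model.encode(sentences)  # one fixed-size vector per whole sentence

# Each row now stands in for an entire sentence: the kind of unit a
# concept-level model would reason over instead of word tokens.
print(concept_vectors.shape)  # (2, 384) for this particular model
```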

Let’s talk about it.

This analysis of an innovative AI breakthrough is part of my ongoing Forbes column coverage on the latest in AI including identifying and explaining various impactful AI complexities (see the link here). For my coverage of the top-of-the-line OpenAI ChatGPT o1 and o3 models and their advanced reasoning functionality, see the link here and the link here.

There is an ongoing concern in the AI community that perhaps AI researchers and AI developers are treading too much of the same ground right now. We seem to have landed on an impressive architecture contrivance for how to shape generative AI and LLMs and few want to depart from the success so far attained.

If it isn’t broken, don’t fix it.

The problem is that not everyone concurs that the prevailing architecture isn’t actually broken. By broken, to quickly clarify, I mean a matter of limitations and constraints rather than something being inherently wrong. A strong and vocal viewpoint is that we are hitting the topmost thresholds of what contemporary LLMs can accomplish. There isn’t much left in the gas tank, and we are soon to hit a veritable wall.

As such, there are brave souls who are seeking alternative architectural avenues. Exciting but a gamble at the same time. They might hit the jackpot and discover the next level of AI. Fame and fortune await. On the other hand, they might waste time on a complete dead-end. Smarmy cynics will call them foolish for their foolhardy ambitions. It could harm your AI career and knock you out of getting that sweet AI high-tech freewheeling job you’ve been eyeing for the longest time.

I continue to give airtime to those who are heads-down seriously aiming to upset the apple cart. For example, my analysis of the clever chain-of-continuous thought approach for LLMs merits dutiful consideration, see the link here. Another exciting possibility is the neuro-symbolic or hybrid AI approach that marries artificial neural networks (ANNs) with rules-based reasoning, see my discussion at the link here.

There is no doubt in my mind that a better mousetrap is still to be found, and all legitimate new-world explorers should keep sailing the winds of change. May your voyage be fruitful.

The approach I’ll be identifying this time around has to do with the existing preoccupation with words.

Actually, it might be more appropriate to say a preoccupation with tokens. When you enter words into a prompt, those words are converted into numeric values referred to as tokens. The rest of the AI processing computationally crunches on those numeric values or tokens, see my detailed description of how this works at the link here. Ultimately, the AI-generated response is in token format and must be converted back into text so that you get a readable answer.
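
For readers who want to see that round trip in code, here is a minimal sketch using the open-source tiktoken tokenizer (the choice of encoding is my assumption; any BPE tokenizer would illustrate the same point):

```python
# Minimal sketch of the token round trip: words -> token IDs -> words.
# Assumes `pip install tiktoken`; the encoding choice is illustrative.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")

prompt = "Large concept models offer some exciting prospects."
token_ids = enc.encode(prompt)   # text in, numeric token IDs out
print(token_ids)                 # the values the model actually crunches on
print(enc.decode(token_ids))     # token IDs converted back to readable text
```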

In a sense, you give words to AI, and the AI gives you words in return (albeit via the means of tokenization).

Do we have to do things that way?

No, there doesn’t seem to be a fundamental irrefutable law of nature that says we must confine ourselves to a word-at-a-time focus. Feel free to consider alternatives. Let your wild thoughts flow.

Here is an idea. Imagine that whole sentences were the unit of interest. Rather than parsing and aiming at single words, we conceive of a sentence as our primary unit of measure. A sentence is admittedly a collection of words. No disagreement there. The gist is that the sentence is seen as a sentence. Right now, a sentence happens to be treated as a string of words.
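
As a small sketch of that shift in the unit of measure, the snippet below splits the same passage into word-level units and into sentence-level units (the regex is a deliberately naive segmenter, purely for illustration; production systems use proper sentence splitters):

```python
# Naive sketch: treat sentences, not words, as the unit of interest.
import re

passage = (
    "A sentence is admittedly a collection of words. "
    "The gist is that the sentence is seen as a sentence. "
    "Right now, a sentence happens to be treated as a string of words."
)

words = passage.split()  # word-level units
sentences = [s.strip() for s in re.split(r"(?<=[.!?])\s+", passage) if s.strip()]

print(len(words), "word-level units")
print(len(sentences), "sentence-level units")
for s in sentences:
    print("-", s)
```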

Give the AI a sentence, and you get back a generated sentence in return.

Boom, drop the mic.

Making sense of sentences is a bit of a head-scratcher. How do you look at an entire sentence and identify what the meaning or significance of the sentence is?

Aha, let’s assume that sentences are representative of concepts. Each sentence will

Disclaimer: info@kdj.com

The information provided is not trading advice. kdj.com does not assume any responsibility for any investments made based on the information provided in this article. Cryptocurrencies are highly volatile and it is highly recommended that you invest with caution after thorough research!

If you believe that the content used on this website infringes your copyright, please contact us immediately (info@kdj.com) and we will delete it promptly.
