Chain of Thought: Reasoning Emerges in Language Models
Jan 29, 2025 at 05:00 am
The new models trained to express extended chain of thought are going to generalize outside of their breakthrough domains of code and math.
This post is early to accommodate some last-minute travel on my end!
The new models trained to express extended chain of thought are going to generalize outside of their breakthrough domains of code and math. The “reasoning” process of the language models we use today is chain-of-thought reasoning. We ask the model to work step by step because it helps the model manage complexity, especially in domains where the answer requires precision across multiple specific tokens. The domains where chain of thought (CoT) is most useful today are code, mathematics, and other “reasoning” tasks1. These are the domains that models like o1, R1, Gemini-Thinking, etc. were designed for.
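To make the technique concrete, here is a minimal sketch of chain-of-thought prompting, assuming the OpenAI Python client; the model name and prompt wording are illustrative choices, not anything specified in this post, and any chat-completions endpoint would work the same way.

```python
# A minimal sketch of chain-of-thought prompting (illustrative, not the
# post's method). Assumes the OpenAI Python client and an API key in the
# OPENAI_API_KEY environment variable.
from openai import OpenAI

client = OpenAI()

def solve_with_cot(question: str, model: str = "gpt-4o") -> str:
    """Ask the model to reason step by step before committing to an answer."""
    response = client.chat.completions.create(
        model=model,
        messages=[
            {
                "role": "system",
                "content": "Work through the problem step by step, "
                           "then state the final answer on its own line.",
            },
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

print(solve_with_cot("A train travels 60 km in 45 minutes. What is its average speed in km/h?"))
```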
Different intelligences reason in different ways that correspond to how they store and manipulate information. Humans compress a lifetime of experience into our spectacular, low-power brains that draw on past experience almost magically. The words that follow in this blog are also autoregressive, like the output of a language model, but draw on hours and hours of background processing as I converge on this argument.
Language models, on the other hand, are extremely general and do not today have architectures (or use-cases) that continually re-expose them to relevant problems and fold information back in a compressed form. Language models are very large, sophisticated, parametric probability distributions; all of their knowledge and information-processing power is stored in the raw weights. Therefore, they need a way of processing information that matches this. Chain of thought is that match.
Chain-of-thought reasoning lets information be processed naturally in smaller chunks, letting the large, brute-force probability distribution work one token at a time. Beyond spending more compute per important token, chain of thought also lets the model store intermediate information in its context window without needing explicit recurrence.
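The way the context window substitutes for recurrence shows up in a toy decoding loop: every intermediate “thought” token is appended back to the sequence, so the model’s working state lives in the text itself. This is a conceptual sketch with a hypothetical model interface, not any particular library’s API.

```python
# Toy autoregressive decoding loop. `model` is a stand-in for any
# autoregressive LM; `most_likely_next_token` and `eos_token` are
# hypothetical interface names used only for illustration.

def generate(model, prompt_tokens: list[int], max_new_tokens: int) -> list[int]:
    context = list(prompt_tokens)
    for _ in range(max_new_tokens):
        # One forward pass over the full context. Emitting more tokens
        # before the final answer means more total compute spent on it.
        next_token = model.most_likely_next_token(context)
        context.append(next_token)  # intermediate state is stored here
        if next_token == model.eos_token:
            break
    return context[len(prompt_tokens):]
```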
Recurrence is required for reasoning, and it can happen either in parameter space or in state space. Chain of thought with transformers handles all of this in the state space of the problem. The humans we regard as most intelligent have embedded information directly in the parameters of their brains, ready to be drawn on.
Here is the only assumption of this piece: chain of thought is a natural fit for language models to “reason,” and therefore one should be optimistic that training methods designed to enhance it will generalize to many domains.2 By the end of 2025 we should have ample evidence of this, given the pace of technological development.
If the analogies between types of intelligence aren’t convincing enough, a far more practical way to view the new style of training is as a method that teaches the model to allocate more compute to harder problems, as sketched below. If the skill being learned is compute allocation, it is fundamental to models handling a wide variety of tasks. Today’s reasoning models do not solve this perfectly, but they open the door to doing so precisely.
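As a sketch of what explicit compute allocation could look like, consider routing easy questions to one short pass and hard ones to long chains of thought with self-consistency voting. The difficulty estimate and model interface here are hypothetical; trained reasoning models learn this behavior implicitly rather than through hand-written routing like this.

```python
# Sketch of compute allocation: spend more decoding budget (longer chains
# of thought, more samples) on harder problems. `estimated_difficulty` and
# `generate` are hypothetical helpers, not a real API.
from collections import Counter

def answer_with_budget(model, question: str) -> str:
    difficulty = model.estimated_difficulty(question)  # hypothetical signal in [0, 1]
    # Easy questions: one short pass. Hard questions: long CoT, many samples.
    n_samples = 1 if difficulty < 0.3 else 8
    budget = 256 if difficulty < 0.3 else 4096
    answers = [
        model.generate(question, max_new_tokens=budget, temperature=0.7)
        for _ in range(n_samples)
    ]
    # Majority vote (self-consistency) over the sampled answers.
    return Counter(answers).most_common(1)[0][0]
```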
The nature of this coming generalization is not that these models are one-size-fits-all, best in every case of speed, intelligence, and price. There’s still no free lunch. A realistic outcome for reasoning-heavy models in the next 0-3 years is a world where:
Reasoning-trained models are superhuman on tasks in verifiable domains, like those where initial progress was made: code, math, etc.
Reasoning-trained models have far better peak performance than existing autoregressive models in many domains we would not expect, including ones that are not necessarily verifiable.
Reasoning-trained models still perform better on the long tail of tasks, but cost more given the high price of long-context inference.
Many of the leading figures in AI have been saying for quite some time that powerful AI is going to be “spikey” when it shows up, meaning that the capabilities and improvements will vary substantially across domains, but encountering this reality is very unintuitive.
Some evidence for generalization of reasoning models already exists.
OpenAI has already published multiple safety-oriented research projects using their new reasoning models: Deliberative Alignment: Reasoning Enables Safer Language Models and Trading Inference-Time Compute for Adversarial Robustness. These papers show their new methods can be translated to various safety domains, e.g. applying model safety policies and resisting jailbreaks. The deliberative alignment paper shows them integrating a softer reward signal into the reasoning training: having a language model check how the safety policies apply to outputs.
An unsurprising quote from the deliberative alignment release related to generalization:
we find that deliberative alignment enables strong generalization to out-of-distribution safety scenarios.
Safety, qualitatively, is largely orthogonal to traditional reasoning problems. Safety is highly sensitive to the information provided and to subtle context, whereas math and coding problems are often about many small, forward processing steps toward a final goal. Many more behaviors will fall in between those two poles.
This generative verifier for safety is not a ground-truth signal and could in theory be subject to reward hacking, but that appears to have been avoided here. Generative verifiers will be crucial to expanding this training to countless domains: they’re easy to use and largely a new development.
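For intuition, a generative verifier can be as simple as a second model prompted to judge a response against a policy and emit a score that serves as the reward. The judge prompt, model choice, and binary scoring below are illustrative assumptions, not the procedure from the deliberative alignment paper.

```python
# Sketch of a generative verifier: a second language model reads a policy
# and a candidate response, and its judgment becomes a (soft, in-principle
# hackable) reward signal. Prompt and scoring are illustrative assumptions.
from openai import OpenAI

client = OpenAI()

def verifier_reward(policy: str, user_request: str, response_text: str) -> float:
    judgment = client.chat.completions.create(
        model="gpt-4o",
        messages=[{
            "role": "user",
            "content": (
                f"Safety policy:\n{policy}\n\n"
                f"User request:\n{user_request}\n\n"
                f"Model response:\n{response_text}\n\n"
                "Does the response comply with the policy? "
                "Answer COMPLIES or VIOLATES, then explain briefly."
            ),
        }],
    ).choices[0].message.content
    return 1.0 if "COMPLIES" in judgment else 0.0
```

Because the reward comes from a model’s judgment rather than a checkable answer, a setup like this is exactly where reward hacking could creep in, which is why the papers above monitor for it.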