TokenSkip: Optimizing Chain-of-Thought Processing in Large Language Models

Feb 23, 2025 at 08:50 am

Despite the breakthrough advances achieved through Chain-of-Thought (CoT) prompting, Large Language Models (LLMs) face significant challenges in complex reasoning tasks.

Large Language Models (LLMs) have made substantial progress on Chain-of-Thought (CoT) reasoning tasks, but the computational overhead of longer CoT sequences remains a challenge, directly affecting inference latency and memory requirements.

Because LLM decoding is autoregressive, processing time and memory usage grow as CoT sequences get longer, and the cost of the attention layers scales quadratically with sequence length. Striking a balance between reasoning accuracy and computational efficiency has therefore become a critical challenge, as attempts to shorten the reasoning often compromise the model’s problem-solving capabilities.
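
As a rough illustration of this scaling (not taken from the paper), the short Python sketch below counts attention-score computations during autoregressive decoding; the function name and the token counts are illustrative assumptions.

```python
# Rough illustration: decoding token t attends to all t previous positions,
# so the total attention work over a chain of thought grows quadratically.

def total_attention_cost(cot_len: int) -> int:
    """Attention-score computations to decode cot_len tokens:
    1 + 2 + ... + n = n * (n + 1) // 2, i.e. O(n^2)."""
    return cot_len * (cot_len + 1) // 2

if __name__ == "__main__":
    for n in (100, 200, 400):
        print(f"{n} CoT tokens -> {total_attention_cost(n):,} score computations")
    # Doubling the CoT length roughly quadruples the attention work.
```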

To address the computational challenges of Chain-of-Thought (CoT) reasoning, various methodologies have been developed. Some approaches focus on streamlining the reasoning process by simplifying or skipping certain thinking steps, while others attempt to generate steps in parallel. A different strategy involves compressing reasoning steps into continuous latent representations, enabling LLMs to reason without generating explicit word tokens.

Moreover, prompt compression techniques for handling complex instructions and long-context inputs more efficiently range from using lightweight language models to generate concise prompts, to representing tasks with implicit continuous tokens, to directly compressing prompts by filtering for highly informative tokens.

In this work, researchers from The Hong Kong Polytechnic University and the University of Science and Technology of China propose TokenSkip, an approach to optimize CoT processing in LLMs. It enables models to skip less important tokens within CoT sequences while maintaining connections between critical reasoning tokens, with adjustable compression ratios.

The system works by first constructing compressed CoT training data through token pruning, followed by a supervised fine-tuning process. Initial testing across multiple models, including LLaMA-3.1-8B-Instruct and the Qwen2.5-Instruct series, shows promising results, particularly in maintaining reasoning capabilities while significantly reducing computational overhead.

The architecture of TokenSkip is built on the fundamental principle that different reasoning tokens contribute varying levels of importance to reaching the final answer. It consists of two main phases: training data preparation and inference.

During the training phase, the system generates CoT trajectories with the target LLM, and each retained trajectory is then pruned at a randomly selected compression ratio. The pruning is guided by an “importance scoring” mechanism, which assigns higher scores to tokens that matter more for reaching the final answer.
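
As a minimal sketch of that training-data step, assuming a generic `score_tokens` scorer standing in for the importance-scoring mechanism and an illustrative grid of compression ratios (neither is specified in this article), the pruning could look like this:

```python
# Minimal sketch of importance-based CoT pruning. `score_tokens` stands in for
# the importance-scoring mechanism described above; the ratio grid below is an
# assumption made for illustration.

import random
from typing import Callable, List

def prune_cot(tokens: List[str],
              score_tokens: Callable[[List[str]], List[float]],
              ratio: float) -> List[str]:
    """Keep roughly `ratio` of the CoT tokens, preferring high-importance ones
    and preserving their original order."""
    scores = score_tokens(tokens)
    keep_n = max(1, int(len(tokens) * ratio))
    top = sorted(range(len(tokens)), key=lambda i: scores[i], reverse=True)[:keep_n]
    return [tokens[i] for i in sorted(top)]

def make_training_example(tokens: List[str],
                          score_tokens: Callable[[List[str]], List[float]]):
    """Pair a pruned trajectory with the randomly sampled ratio used to prune it."""
    ratio = random.choice([0.5, 0.6, 0.7, 0.8, 0.9])  # illustrative ratio grid
    return ratio, prune_cot(tokens, score_tokens, ratio)
```

The pruned trajectories, paired with their ratios, then serve as targets for the supervised fine-tuning step described above.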

At inference time, TokenSkip keeps the standard autoregressive decoding approach but improves efficiency by letting the LLM skip less important tokens. In the input format, the question and the compression ratio are separated by end-of-sequence tokens.
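
A minimal sketch of that input layout, assuming a LLaMA-style "</s>" end-of-sequence string (the actual separator token depends on the tokenizer in use), might look like:

```python
# Sketch of the described input layout: question and compression ratio
# separated by end-of-sequence tokens. The "</s>" string is an assumption.

EOS = "</s>"

def build_prompt(question: str, ratio: float) -> str:
    """Compose the inference input; the model then generates a compressed
    chain of thought followed by the final answer."""
    return f"{question}{EOS}{ratio}{EOS}"

print(build_prompt("If 3x + 5 = 20, what is x?", 0.6))
# -> If 3x + 5 = 20, what is x?</s>0.6</s>
```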

The results show that larger language models are more capable of maintaining performance while achieving higher compression rates. The Qwen2.5-14B-Instruct model achieves remarkable results with only a 0.4% performance drop while reducing token usage by 40%.
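
As a back-of-envelope estimate (not a figure reported by the authors), keeping 60% of the CoT tokens also shrinks the quadratic attention term for decoding those tokens:

```python
# Back-of-envelope only: 40% fewer CoT tokens means 60% of the decoding steps,
# and roughly (0.6)^2 of the attention-score work for the CoT portion.

kept = 0.6
print(f"decoding steps: {kept:.0%} of original")
print(f"attention-score work (quadratic term): {kept**2:.0%} of original")
```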

When compared with alternative approaches like prompt-based reduction and truncation, TokenSkip shows superior performance. While prompt-based reduction fails to achieve target compression ratios and truncation leads to significant performance degradation, TokenSkip maintains the specified compression ratio while preserving reasoning capabilities. On the MATH-500 dataset, it achieves a 30% reduction in token usage with less than a 4% performance drop.

In this paper, the researchers introduce TokenSkip, a method that advances CoT processing for LLMs through a controllable compression mechanism based on token importance. Its success lies in maintaining reasoning accuracy while significantly reducing computational overhead, by selectively preserving critical tokens and skipping less important ones. The approach has proven effective across LLMs, showing minimal performance degradation even at substantial compression ratios.

This research opens new possibilities for advancing efficient reasoning in LLMs, establishing a foundation for future developments in computational efficiency while maintaining robust reasoning capabilities.

Check out the Paper. All credit for this research goes to the researchers of this project.

Disclaimer: info@kdj.com

The information provided is not trading advice. kdj.com does not assume any responsibility for any investments made based on the information provided in this article. Cryptocurrencies are highly volatile and it is highly recommended that you invest with caution after thorough research!

If you believe that the content used on this website infringes your copyright, please contact us immediately (info@kdj.com) and we will delete it promptly.
