1. What story does DeFAI tell?
1.1 What is DeFAI?
In simple terms, DeFAI is AI + DeFi. The market has gone through multiple rounds of AI hype, from AI compute to AI memes, and from competing technical architectures to various infrastructure plays. Although the overall market capitalization of AI agents has recently declined, DeFAI is emerging as a new breakout narrative. Currently, DeFAI can be broadly divided into three categories: AI abstraction, autonomous DeFi agents, and market analysis and prediction. The specific breakdown of these categories is illustrated in the figure below.
Image source: Created by the author
1.2 How does DeFAI work?
In the DeFi system, the core of an AI agent is an LLM (large language model), wrapped in a multi-layered pipeline of processes and technologies covering everything from data collection to decision execution. According to research by @3sigma cited in the IOSG document, most models follow six specific workflows: data collection, model inference, decision-making, custody and operation, interoperability, and wallet management. These processes are summarized below:
1. Data Collection: The primary task of the AI agent is to gain a comprehensive understanding of its operating environment, which means obtaining real-time data from multiple sources (see the sketch after this list):
On-chain data: Real-time blockchain data such as transaction records, smart contract statuses, and network activity are obtained through indexers, oracles, etc. This helps the agent stay synchronized with market dynamics;
Off-chain data: Price information, market news, and macroeconomic indicators are obtained from external data providers (e.g., CoinMarketCap, CoinGecko), typically via API, to keep the agent informed about external market conditions;
Decentralized data sources: Some agents may obtain price oracle data through decentralized data feed protocols, ensuring the decentralization and reliability of the data.
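To make the data-collection step concrete, here is a minimal Python sketch, not taken from any specific DeFAI project: it pulls an off-chain spot price from CoinGecko's public simple-price endpoint and a basic on-chain snapshot via web3.py. The RPC URL is a placeholder and error handling is kept to the bare minimum.

```python
# Minimal data-collection sketch: one off-chain and one on-chain source.
# Assumptions: CoinGecko's public simple-price endpoint and an Ethereum
# JSON-RPC node at RPC_URL (placeholder) reachable via web3.py.
import requests
from web3 import Web3

RPC_URL = "https://example-rpc.invalid"  # placeholder RPC endpoint

def fetch_offchain_price(coin_id: str = "ethereum", vs: str = "usd") -> float:
    """Pull a spot price from an external data provider (CoinGecko)."""
    url = "https://api.coingecko.com/api/v3/simple/price"
    resp = requests.get(url, params={"ids": coin_id, "vs_currencies": vs}, timeout=10)
    resp.raise_for_status()
    return resp.json()[coin_id][vs]

def fetch_onchain_snapshot(w3: Web3) -> dict:
    """Read basic chain state: latest block number, base fee, and timestamp."""
    block = w3.eth.get_block("latest")
    return {
        "block_number": block["number"],
        "base_fee_wei": block.get("baseFeePerGas"),
        "timestamp": block["timestamp"],
    }

if __name__ == "__main__":
    w3 = Web3(Web3.HTTPProvider(RPC_URL))
    print("ETH/USD:", fetch_offchain_price())
    print("Chain snapshot:", fetch_onchain_snapshot(w3))
```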
2. Model Inference: After data collection is complete, the AI agent enters the inference and computation phase, where it relies on multiple AI models for complex reasoning and prediction (see the sketch after this list):
Supervised and unsupervised learning: By training on labeled or unlabeled data, AI models can analyze behaviors in markets and governance forums. For example, they can predict future market trends by analyzing historical trading data or infer the outcome of a voting proposal by analyzing governance forum data;
Reinforcement learning: Through trial and error and feedback mechanisms, AI models can autonomously optimize strategies. For instance, in token trading, the AI agent can simulate various trading strategies to determine the best time to buy or sell. This learning method allows the agent to continuously improve under changing market conditions;
Natural Language Processing (NLP): By understanding and processing user natural language inputs, the agent can extract key information from governance proposals or market discussions, helping users make better decisions. This is particularly important when scanning decentralized governance forums or processing user commands.
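As an illustration of the supervised-learning idea above, the following toy sketch (not from any real agent) trains a classifier on rolling returns of a synthetic price series to predict next-step direction; real systems would use far richer features, labels, and validation.

```python
# Illustrative supervised-learning sketch: predict next-period price direction
# from simple engineered features. Purely a toy on synthetic data.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

def make_features(prices: np.ndarray, window: int = 5):
    """Build (features, labels): last `window` returns -> next-step up/down label."""
    returns = np.diff(prices) / prices[:-1]
    X, y = [], []
    for i in range(window, len(returns)):
        X.append(returns[i - window:i])   # trailing window of returns
        y.append(int(returns[i] > 0))     # 1 if the next return is positive
    return np.array(X), np.array(y)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    prices = np.cumprod(1 + rng.normal(0, 0.01, 2000)) * 100  # synthetic series
    X, y = make_features(prices)
    X_train, X_test, y_train, y_test = train_test_split(X, y, shuffle=False)
    model = GradientBoostingClassifier().fit(X_train, y_train)
    print("Directional accuracy on held-out data:", model.score(X_test, y_test))
```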
3. Decision-Making: Based on the collected data and inference results, the AI agent enters the decision-making phase. Here it must analyze current market conditions and weigh multiple variables (see the sketch after this list):
Optimization engine: The agent uses an optimization engine to find the best execution plan under various conditions. For example, when providing liquidity or executing arbitrage strategies, the agent must consider factors such as slippage, transaction fees, network latency, and capital size to find the optimal execution path;
Multi-agent system collaboration: To cope with complex market conditions, a single agent may not be able to optimize all decisions comprehensively. In such cases, multiple AI agents can be deployed, each focusing on different task areas, collaborating to improve the overall decision-making efficiency of the system. For example, one agent focuses on market analysis while another agent focuses on executing trading strategies.
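The optimization-engine idea can be sketched as a simple venue-selection routine. Everything here is hypothetical: the `Venue` parameters and the linear price-impact model are illustrative stand-ins for the much more detailed slippage, fee, and latency models a production agent would use.

```python
# Sketch of a tiny "optimization engine": pick the execution venue with the
# best expected output after slippage, fees, and a fixed gas cost.
from dataclasses import dataclass

@dataclass
class Venue:
    name: str
    fee_rate: float   # proportional trading fee, e.g. 0.003 = 0.3%
    liquidity: float  # pool depth in quote units, used for crude slippage
    gas_cost: float   # fixed execution cost on this venue, in quote units

def expected_output(amount_in: float, v: Venue) -> float:
    """Very rough model: price impact grows linearly with trade size / liquidity."""
    slippage = amount_in / v.liquidity
    gross = amount_in * (1 - v.fee_rate) * (1 - slippage)
    return gross - v.gas_cost

def best_venue(amount_in: float, venues: list[Venue]) -> Venue:
    """Score every candidate execution path and return the best one."""
    return max(venues, key=lambda v: expected_output(amount_in, v))

if __name__ == "__main__":
    venues = [
        Venue("dex_a", fee_rate=0.003, liquidity=5_000_000, gas_cost=8.0),
        Venue("dex_b", fee_rate=0.001, liquidity=1_000_000, gas_cost=12.0),
    ]
    choice = best_venue(50_000, venues)
    print(choice.name, round(expected_output(50_000, choice), 2))
```

A real optimization engine would also account for network latency, MEV exposure, and splitting an order across venues; the point here is only the structure of scoring candidate execution paths and choosing the best one.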
4. Custody and Operation: Since AI agents need to handle a large amount of computation, their models are typically hosted on off-chain servers or distributed computing networks (see the sketch after this list):
Centralized hosting: Some AI agents may rely on centralized cloud computing services like AWS to host their computing and storage needs. This approach helps ensure the efficient operation of the models but also brings potential risks of centralization;
Decentralized hosting: To reduce centralization risks, some agents use decentralized distributed computing networks (like Akash) and distributed storage solutions (like Arweave) to host models and data. Such solutions ensure the decentralized operation of models while providing data storage persistence;
On-chain interaction: Although the models themselves are hosted off-chain, AI agents need to interact with on-chain protocols to execute smart contract functions (such as trade execution and liquidity management) and manage assets. This requires secure key management and transaction signing mechanisms, such as MPC (Multi-Party Computation) wallets or smart contract wallets.
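The on-chain interaction step can be sketched with web3.py as follows. This is a simplified illustration: the RPC endpoint is a placeholder, and a production agent would keep its key inside an MPC or smart-contract wallet rather than reading a raw private key from an environment variable.

```python
# Sketch of off-chain logic driving an on-chain action via web3.py:
# build, sign, and submit a simple ETH transfer.
import os
from web3 import Web3

RPC_URL = "https://example-rpc.invalid"  # placeholder RPC endpoint
w3 = Web3(Web3.HTTPProvider(RPC_URL))

def execute_transfer(to_address: str, amount_eth: float) -> str:
    # In practice the key would live in an MPC or smart-contract wallet.
    acct = w3.eth.account.from_key(os.environ["AGENT_PRIVATE_KEY"])
    tx = {
        "to": Web3.to_checksum_address(to_address),
        "value": w3.to_wei(amount_eth, "ether"),
        "nonce": w3.eth.get_transaction_count(acct.address),
        "gas": 21_000,
        "gasPrice": w3.eth.gas_price,
        "chainId": w3.eth.chain_id,
    }
    signed = acct.sign_transaction(tx)
    # Attribute is `rawTransaction` on older web3.py/eth-account versions
    # and `raw_transaction` on newer ones.
    raw = getattr(signed, "raw_transaction", None) or signed.rawTransaction
    return w3.eth.send_raw_transaction(raw).hex()
```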
5. Interoperability: The key role of AI agents in the DeFi ecosystem is to interact seamlessly with multiple different DeFi protocols and platforms:
API integration: Agents exchange data and interact with various decentralized exchanges, liquidity pools, and other DeFi protocols through their APIs and smart-contract interfaces (see the sketch below).
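A minimal example of such an integration, under the assumption of a Uniswap V2-style router: the agent asks the router's getAmountsOut view function for a swap quote. The router address is a placeholder and the ABI is trimmed to the single function used; a real agent would load verified addresses and full ABIs.

```python
# Sketch of protocol interoperability: query a Uniswap V2-style router for a
# swap quote through its getAmountsOut view function.
from web3 import Web3

RPC_URL = "https://example-rpc.invalid"                          # placeholder
ROUTER_ADDRESS = "0x0000000000000000000000000000000000000000"    # placeholder
ROUTER_ABI = [{
    "name": "getAmountsOut",
    "type": "function",
    "stateMutability": "view",
    "inputs": [
        {"name": "amountIn", "type": "uint256"},
        {"name": "path", "type": "address[]"},
    ],
    "outputs": [{"name": "amounts", "type": "uint256[]"}],
}]

def quote_swap(w3: Web3, amount_in: int, path: list[str]) -> int:
    """Return the expected output amount for `amount_in` along `path`."""
    router = w3.eth.contract(
        address=Web3.to_checksum_address(ROUTER_ADDRESS), abi=ROUTER_ABI
    )
    checksummed = [Web3.to_checksum_address(a) for a in path]
    amounts = router.functions.getAmountsOut(amount_in, checksummed).call()
    return amounts[-1]
```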