
Luuu
@luuu
#dailychallenge day19
PondAI
- A sleeper AI infra project
- A crypto AI layer that runs incentive-driven competitions for onchain prediction models, powering smarter DeFAI, security, and trading agents (a toy sketch of the competition mechanic follows this post)
- The best models receive incentives and their developers retain ownership, with winning models integrated into real-time data and inference infrastructure.
Gaianet
- A decentralized computing infrastructure that enables everyone to create, deploy, scale, and monetize their own AI agents that reflect their styles, values, knowledge, and expertise.
- Supports AI-powered dApps & smart contracts
- Open-source framework
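How such a competition could work is easy to picture with a toy simulation. The sketch below is purely illustrative and assumes nothing about Pond's actual contracts or APIs: the `Submission` and `run_competition` names, the offchain mean-squared-error scoring, and the equal split of the reward pool among the top-k models are all hypothetical.

```python
# Hypothetical sketch of an incentive-driven model competition (not Pond's real API).
# Developers submit prediction models; the best-scoring models split a reward pool.
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Submission:
    developer: str                      # developers retain ownership of their entry
    predict: Callable[[float], float]   # prediction model (illustrative signature)

def run_competition(subs: List[Submission], data, reward_pool: float, top_k: int = 2):
    """Score every submission on held-out data and split the pool among the top_k."""
    def mse(model: Submission) -> float:
        return sum((model.predict(x) - y) ** 2 for x, y in data) / len(data)

    ranked = sorted(subs, key=mse)      # lower error ranks first
    winners = ranked[:top_k]
    payout = reward_pool / top_k        # equal split; real schemes may weight by rank
    return {w.developer: payout for w in winners}

if __name__ == "__main__":
    data = [(x / 10, 2 * x / 10 + 1) for x in range(10)]     # toy target: y = 2x + 1
    subs = [
        Submission("alice", lambda x: 2 * x + 1),            # perfect model
        Submission("bob",   lambda x: 1.5 * x + 1.2),        # decent model
        Submission("carol", lambda x: 0.0),                  # weak baseline
    ]
    print(run_competition(subs, data, reward_pool=100.0))    # {'alice': 50.0, 'bob': 50.0}
```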
#dailychallenge day15
AGI Modularity Hypothesis
• The idea that Artificial General Intelligence (AGI) will be built from modular components, each specializing in different cognitive functions.
• Inspired by the Human Brain: Just as the brain has specialized regions (e.g., vision, language, memory), AGI could have distinct modules for different tasks.
• Decentralized Processing: Instead of a single monolithic system, modular AGI would distribute intelligence across multiple specialized units (see the routing sketch after this list).
• Improved Efficiency: Specialized modules could work together efficiently, reducing computational overhead compared to a single large model.
• Scalability: Allows for incremental improvements by upgrading or adding modules without retraining the entire system.
• Adaptability: Enables dynamic learning, where AGI can develop new skills by integrating new modules.
• Challenges: Ensuring seamless communication between modules, preventing conflicts, and maintaining generalization across tasks.
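A minimal sketch of the routing idea, assuming a simple skill-keyed dispatcher (all class and module names here are hypothetical, not any real AGI design): specialized modules are registered independently, new skills plug in without retraining the rest of the system, and only the relevant specialist runs for each task.

```python
# Minimal sketch of the modularity hypothesis: a router dispatches each task to a
# specialized module instead of one monolithic model. All names are hypothetical.
from typing import Callable, Dict

class ModularAgent:
    def __init__(self):
        self.modules: Dict[str, Callable[[str], str]] = {}

    def register(self, skill: str, module: Callable[[str], str]) -> None:
        """Scalability/adaptability: add a new skill by plugging in a module,
        without touching or retraining the other modules."""
        self.modules[skill] = module

    def route(self, skill: str, task: str) -> str:
        """Decentralized processing: only the relevant specialist runs."""
        if skill not in self.modules:
            raise KeyError(f"no module registered for skill {skill!r}")
        return self.modules[skill](task)

if __name__ == "__main__":
    agent = ModularAgent()
    agent.register("vision",   lambda t: f"[vision module] described: {t}")
    agent.register("language", lambda t: f"[language module] parsed: {t}")
    agent.register("memory",   lambda t: f"[memory module] stored: {t}")

    print(agent.route("language", "summarize this cast"))
    print(agent.route("memory", "day15 notes"))
```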
#dailychallenge
Research on DeepSeek R1 (part 2)
* The cost-saving details about training were already announced in the V3 paper released last Christmas, not with the R1 release.
* While MoE (Mixture of Experts) was implemented starting from V2, meaningful results only began to appear with V3.
* DeepSeekMoE, a Mixture-of-Experts design, activates only the experts relevant to a given token. In contrast, dense models like GPT-3.5 activate the entire network during both training and inference, regardless of the input token (see the routing sketch after this list).
* The headline cost of $5,576,000 covers only the final training run (the V3 paper derives it from roughly 2.788M H800 GPU-hours at an assumed rate of $2 per GPU-hour). Expenses for model architecture design, algorithm development, data preparation, preliminary research, and comparative experiments are excluded.
* It is presumed that DeepSeek distilled GPT-4o and Claude Sonnet to generate training tokens.
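The MoE point is easiest to see in a toy forward pass. The sketch below is a simplification under stated assumptions (random weights, a plain top-k softmax gate; the real DeepSeekMoE adds refinements such as shared experts and finer-grained expert segmentation): only the selected experts compute anything for a given token, which is where the savings over a dense model come from.

```python
# Toy sketch of Mixture-of-Experts routing (illustrative only, not DeepSeekMoE's
# actual architecture): a gate picks the top-k experts per token, so only those
# experts run, while a dense model runs every parameter for every token.
import numpy as np

rng = np.random.default_rng(0)
d_model, n_experts, top_k = 8, 4, 2

# Each "expert" here is just a small linear layer.
experts = [rng.normal(size=(d_model, d_model)) for _ in range(n_experts)]
gate_w  = rng.normal(size=(d_model, n_experts))

def moe_forward(x: np.ndarray) -> np.ndarray:
    """Route one token through its top_k experts and mix their outputs."""
    logits = x @ gate_w                                        # gating score per expert
    top = np.argsort(logits)[-top_k:]                          # indices of chosen experts
    weights = np.exp(logits[top]) / np.exp(logits[top]).sum()  # softmax over chosen only
    # Only top_k of n_experts do any work for this token:
    return sum(w * (x @ experts[i]) for w, i in zip(weights, top))

token = rng.normal(size=d_model)
print(moe_forward(token).shape)   # (8,) -- computed with ~top_k/n_experts of the expert FLOPs
```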