
Makushi 🐧

@makushi

45 Following
42 Followers


Makushi 🐧
@makushi
Our project won 2 prizes during the ETHGlobal Agentic Ethereum hackathon! 🎉 This two-week hackathon was an amazing experience for my teammate @zouzie and me. We had tons of fun, pushed our limits once again, and learned so much together. 🔥 This was the 2nd biggest ETHGlobal hackathon in terms of submissions (1,710 hackers for 518 projects), so even though we were very proud of our project, we felt our chances of standing out were quite low. We are extremely grateful that two of the companies sponsoring the event chose our project among all these amazing builds. I'd like to extend a big thank you to AltLayer and @coinbase! ♥️
1 reply
0 recast
1 reaction

Makushi 🐧
@makushi
In a future post, I’ll share one example of a potential improvement leveraging Mobula’s “smart money” API, which tracks the best traders by realized PnL and other useful metrics that Viktor could use to weight some decisions. Thanks for reading, have a good one 👍 (Happy to hear your thoughts on this :))
0 reply
0 recast
0 reaction

Makushi 🐧
@makushi
However, it would require multiple months or even years of running to draw an unbiased conclusion about whether this system is reliable, which is why I’m still working to shorten that delay and improve its performance. 🔧
1 reply
0 recast
0 reaction

Makushi 🐧
@makushi
⚪ Results and conclusion
All daily results are saved on Supabase and can be monitored on an interface (https://viktor-monitor.wakushi.com) so I can track how performance evolves as I improve the system and tweak some of its rules. This first version of Viktor was launched after it achieved 55-60% correct predictions when comparing past window analyses against its vector-embedded training data across thousands of days.
1 reply
0 recast
0 reaction
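For context, each day's outcome ends up as a simple row insert on Supabase; a minimal sketch of what that can look like (the table and column names here are illustrative, not the actual schema):

```typescript
import { createClient } from "@supabase/supabase-js";

const supabase = createClient(process.env.SUPABASE_URL!, process.env.SUPABASE_KEY!);

// Table and column names are illustrative placeholders.
async function saveDailyResult(result: {
  day: string; // ISO date, e.g. "2025-03-01"
  tokenId: string;
  predicted: "up" | "down";
  realizedReturn: number;
}) {
  const { error } = await supabase.from("daily_results").insert(result);
  if (error) throw error;
}
```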

Makushi 🐧
@makushi
This lets us compute weighted averages for each outcome group, estimate potential returns, and assign a confidence score to the forecast. Tokens are then ranked based on their potential for positive price movement in the days ahead. 🚀
1 reply
0 recast
0 reaction
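To give an idea of what that aggregation step can look like, here's a simplified TypeScript sketch; the `Match` shape, the grouping by sign of the observed return, and the confidence formula are illustrative assumptions, not Viktor's exact rules:

```typescript
interface Match {
  similarity: number;   // cosine similarity of the historical window (0..1)
  futureReturn: number; // price change observed after that window, e.g. +0.08 = +8%
}

function forecast(matches: Match[]) {
  const up = matches.filter((m) => m.futureReturn > 0);

  // Similarity-weighted average return of an outcome group.
  const weightedAvg = (group: Match[]) => {
    const weight = group.reduce((s, m) => s + m.similarity, 0);
    return weight === 0
      ? 0
      : group.reduce((s, m) => s + m.similarity * m.futureReturn, 0) / weight;
  };

  // Confidence: share of total similarity mass sitting in the dominant group.
  const totalSim = matches.reduce((s, m) => s + m.similarity, 0);
  const upWeight = up.reduce((s, m) => s + m.similarity, 0);

  return {
    expectedUpMove: weightedAvg(up),
    expectedDownMove: weightedAvg(matches.filter((m) => m.futureReturn <= 0)),
    bullish: upWeight > totalSim / 2,
    confidence: Math.max(upWeight, totalSim - upWeight) / totalSim,
  };
}
```

Ranking is then just sorting tokens by the bullish forecasts' expected move and confidence.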

Makushi 🐧
@makushi
🟡 Analysis and cosine similarity search
For each token, Viktor fetches the last 10 days of OHLCV data (Open, High, Low, Close, Volume: the underlying data of market charts). This window is then normalized, converted to natural language, embedded as a vector, and compared, via Supabase pgvector, against ~56,000 (at time of writing) training windows stored in memory.
1 reply
0 recast
0 reaction
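The search itself follows the usual Supabase pgvector pattern: embed the query window, then call a Postgres function that orders training rows by cosine distance. A minimal sketch, where the `match_windows` function name and its parameters are my assumptions:

```typescript
import { createClient } from "@supabase/supabase-js";

const supabase = createClient(process.env.SUPABASE_URL!, process.env.SUPABASE_KEY!);

// "match_windows" would be a Postgres function wrapping a pgvector query,
// e.g. ordering by the `<=>` cosine distance operator and returning
// `1 - distance` as the similarity score.
async function findSimilarWindows(embedding: number[], k = 20) {
  const { data, error } = await supabase.rpc("match_windows", {
    query_embedding: embedding,
    match_count: k,
  });
  if (error) throw error;
  return data; // the k most similar training windows, best match first
}
```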

Makushi 🐧
@makushi
Finally, Viktor checks for the presence of ETH or USDC pools using RPC calls to the Uniswap Factory contract on each token’s chain. After this, the token discovery phase is complete and the tokens are ready for vector analysis.
1 reply
0 recast
0 reaction
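Assuming the Uniswap V3 factory (the SwapRouter mention suggests V3), that check is a couple of `eth_call`s per token; a sketch with ethers.js, where the per-chain addresses are passed in as config:

```typescript
import { ethers } from "ethers";

// Uniswap V3 factory ABI fragment; getPool is part of the real interface.
const FACTORY_ABI = [
  "function getPool(address tokenA, address tokenB, uint24 fee) view returns (address)",
];

// rpcUrl, factoryAddress, weth and usdc come from an assumed per-chain config.
async function hasEthOrUsdcPool(
  token: string,
  rpcUrl: string,
  factoryAddress: string,
  weth: string,
  usdc: string,
): Promise<boolean> {
  const provider = new ethers.JsonRpcProvider(rpcUrl);
  const factory = new ethers.Contract(factoryAddress, FACTORY_ABI, provider);

  for (const quote of [weth, usdc]) {
    for (const fee of [500, 3000, 10000]) {
      // 0.05%, 0.3%, and 1% fee tiers
      const pool: string = await factory.getPool(token, quote, fee);
      if (pool !== ethers.ZeroAddress) return true;
    }
  }
  return false;
}
```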

Makushi 🐧
@makushi
This initial filter leaves us with ~300 tokens on average. These are scored using volatility, volume, liquidity, market cap, and social metrics, then sorted to keep only the most active and liquid ones.
1 reply
0 recast
0 reaction
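A rough idea of what that scoring and sorting step could look like; the metric names, min-max normalization, and equal weights below are illustrative assumptions:

```typescript
interface TokenMetrics {
  symbol: string;
  volatility: number;  // e.g. stddev of daily returns
  volume24h: number;
  liquidity: number;
  marketCap: number;
  socialScore: number; // whatever social metric the data source exposes
}

// Normalize each metric to 0..1 across the candidate set, then sum them.
function rankTokens(tokens: TokenMetrics[], keep = 50): TokenMetrics[] {
  const keys = ["volatility", "volume24h", "liquidity", "marketCap", "socialScore"] as const;
  const min: Record<string, number> = {};
  const max: Record<string, number> = {};
  for (const k of keys) {
    min[k] = Math.min(...tokens.map((t) => t[k]));
    max[k] = Math.max(...tokens.map((t) => t[k]));
  }
  const score = (t: TokenMetrics) =>
    keys.reduce((s, k) => s + (t[k] - min[k]) / (max[k] - min[k] || 1), 0);

  return [...tokens].sort((a, b) => score(b) - score(a)).slice(0, keep);
}
```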

Makushi 🐧
@makushi
🔵 Token Discovery
Next, Viktor uses the Mobula APIs to query all listed tokens (~27,000), then filters them based on:
- Minimum market cap
- Minimum liquidity
- Exclusion of stablecoins
- Compatibility with supported chains (enabling auto-trading via Uniswap’s SwapRouter)
1 reply
0 recast
0 reaction
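The filter itself is straightforward once the token list is fetched; a sketch over an already-fetched list, where the field names, thresholds, and chain IDs are placeholders rather than Mobula's actual response shape:

```typescript
// Shape is illustrative; adapt it to what the Mobula response actually returns.
interface DiscoveredToken {
  symbol: string;
  marketCap: number;
  liquidity: number;
  isStablecoin: boolean;
  chainId: number;
}

const SUPPORTED_CHAINS = new Set([1, 8453, 42161]); // assumed: mainnet, Base, Arbitrum
const MIN_MARKET_CAP = 1_000_000; // illustrative thresholds
const MIN_LIQUIDITY = 100_000;

function filterTokens(all: DiscoveredToken[]): DiscoveredToken[] {
  return all.filter(
    (t) =>
      t.marketCap >= MIN_MARKET_CAP &&
      t.liquidity >= MIN_LIQUIDITY &&
      !t.isStablecoin &&
      SUPPORTED_CHAINS.has(t.chainId),
  );
}
```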

Makushi 🐧
@makushi
🟢 Performance Evaluation
Using a daily CRON job, the system (let’s call it Viktor from now on) starts by evaluating the previous day’s predictions. It fetches the current prices of all selected tokens and records them to track daily performance easily.
1 reply
0 recast
0 reaction
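As a sketch, that daily loop could be wired up with something like node-cron; the helper functions below are placeholders standing in for the actual storage and price-fetching logic:

```typescript
import cron from "node-cron";

interface Prediction {
  id: string;
  tokenId: string;
  priceAtPrediction: number;
  bullish: boolean;
}

// Placeholders for the actual database reads/writes and price fetching.
declare function loadYesterdaysPredictions(): Promise<Prediction[]>;
declare function fetchCurrentPrice(tokenId: string): Promise<number>;
declare function recordOutcome(id: string, outcome: object): Promise<void>;

// Every day at 00:05 (server time, unless a timezone is configured):
// grade yesterday's predictions before the new run starts.
cron.schedule("5 0 * * *", async () => {
  for (const p of await loadYesterdaysPredictions()) {
    const price = await fetchCurrentPrice(p.tokenId);
    await recordOutcome(p.id, {
      priceNow: price,
      realizedReturn: (price - p.priceAtPrediction) / p.priceAtPrediction,
      correct: (price > p.priceAtPrediction) === p.bullish,
    });
  }
});
```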

Makushi 🐧
@makushi
A month ago, I shared some theoretical work on how vector embeddings could be used to build market predictions. Today, let's dive into some of the integration details behind the analysis, and give a peek at the results. 👀⬇️
1 reply
0 recast
0 reaction

ETHPrague
@ethprague
📢 Beyond excited to announce a historic fireside chat between Tim Berners-Lee, inventor of the World Wide Web, and @vitalik.eth, creator of Ethereum, at ETHPrague! The two brilliant minds will explore tech acceleration, open networks, and visions for Web3.0! 🔭 🗓️ May 27-29
7 replies
10 recasts
57 reactions

Makushi 🐧
@makushi
Does anyone have book or course recommendations for learning about cryptography?
0 reply
0 recast
0 reaction

Makushi 🐧
@makushi
Actually hilarious 😂 That's a tough way to learn
0 reply
0 recast
1 reaction

Makushi 🐧
@makushi
Depends on what you're building, I guess. People seem to vibe code for weekend projects, but at the same time, weekend projects are moments to enjoy exploring and getting things done on our own, right?
0 reply
0 recast
1 reaction

Makushi 🐧
@makushi
That's the best description I've seen for Devin so far 😂
0 reply
0 recast
1 reaction

Makushi 🐧
@makushi
That's actually amazing, thanks for sharing. This makes experimenting with stuff way easier.
0 reply
0 recast
0 reaction

Makushi 🐧
@makushi
For the past few months I’ve been experimenting with this concept, and I have a system running on a daily basis on a VPS. In the next post I’ll share the interesting results I’ve gotten so far, what I’ve done to improve it, and some implementation details. I'm also very interested in hearing your thoughts on this. Thanks for reading, have a good one 👍
0 reply
0 recast
0 reaction

Makushi 🐧
@makushi
Imagine a trader documenting market observations daily for 5 years, converting each analysis into vector embeddings. When we analyze today's market and transform this analysis into a vector, we can instantly search this historical dataset to find the most similar past market conditions using cosine similarity search.

This search measures the angle between vector representations, not their size: values closer to 1 indicate higher similarity. This means similar market patterns are identified regardless of overall market scale.

By providing any new market observation, the system returns historically similar situations based on semantic meaning, not just surface-level metrics. This would allow the system to learn from past trading decisions made in genuinely comparable market conditions.
1 reply
0 recast
0 reaction
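For reference, cosine similarity over two embedding vectors is just a few lines; the magnitudes cancel out, so only the direction (the "pattern") matters:

```typescript
// Cosine similarity: cos(theta) = (a . b) / (|a| * |b|).
// Depends only on the angle between the vectors, not their length.
function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0;
  let normA = 0;
  let normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

// 1.0 = identical direction (very similar markets), 0 = unrelated, -1 = opposite.
```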

Makushi 🐧
@makushi
The key benefit is that it wouldn’t just match simple metrics like "prices dropped 5%" but deeper patterns like "altcoins falling while BTC dominance rises during regulatory uncertainty," enabling much more sophisticated pattern recognition, surfacing patterns we would otherwise overlook or aren’t even aware of. That said, the actual embedding-ready text observations should avoid specific token names, and all prices should be normalized to prevent bias in the similarity matching. Normalizing removes the scale difference between assets (a $3,000 drop in BTC is very different from a $3,000 drop in a $5,000 token), makes historical comparisons possible regardless of price levels at different times, and focuses the analysis on the pattern of movement rather than absolute numbers.
1 reply
0 recast
0 reaction
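One simple way to do that normalization, as a minimal example (one option among several), is to convert absolute prices into day-over-day percentage returns so that scale drops out entirely:

```typescript
// Convert a price window into day-over-day percentage returns.
function toReturns(prices: number[]): number[] {
  return prices.slice(1).map((p, i) => (p - prices[i]) / prices[i]);
}

// BTC dropping 3,000 from 60,000 and a small cap dropping 250 from 5,000
// both become -0.05: the same movement pattern at very different scales.
toReturns([60000, 57000]); // [-0.05]
toReturns([5000, 4750]);   // [-0.05]
```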