
Ape/rture

@aperture

141 Following
71 Followers


Ape/rture
@aperture
It's never too late to improve your sleep 😉
0 reply
0 recast
0 reaction

Ape/rture
@aperture
If you need help with the indexer, you might want to check out our tooling since it is real-time (and can also handle historical data). Let me know if I can set you up with an API key so you can test it. docs.indexing.co
0 reply
0 recast
0 reaction

Ape/rture
@aperture
If the indexer is lagging, you might want to check out our tooling since it is real-time (and can also handle historical data). Let me know if I can set you up with an API key so you can test it. https://docs.indexing.co/
0 reply
0 recast
1 reaction

Ape/rture
@aperture
Not if you have:
1. Multiple parties using the network
2. Very small, cheap compute nodes, since each one only processes a single block
3. A plan (because nodes are that small) to tap into excess compute from devices like laptops/phones etc.
0 reply
0 recast
2 reactions
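The single-block-per-node idea above can be sketched in a few lines. This is a minimal illustration, not The Neighborhood's actual implementation; `process_block` and `run_network` are hypothetical names, and the "work" is stubbed out where a real node would fetch and decode the block.

```python
from concurrent.futures import ThreadPoolExecutor

def process_block(block_number: int) -> dict:
    """Hypothetical per-node work unit: each node handles exactly one
    block, so the compute footprint stays small enough for a laptop
    or phone. A real node would fetch the block from an RPC endpoint
    and extract the events of interest; here we just tag it as done."""
    return {"block": block_number, "status": "processed"}

def run_network(blocks, max_workers=4):
    """Fan the blocks out across many small workers in parallel,
    standing in for many small nodes in a distributed network."""
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        return list(pool.map(process_block, blocks))

# Eight blocks, processed concurrently, results back in block order.
results = run_network(range(100, 108))
```

Because each unit of work is one block, throughput scales by adding workers rather than by making any single worker bigger.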

Ape/rture
@aperture
Or you scale horizontally with a distributed network like we do with The Neighborhood and The Indexing Company
1 reply
0 recast
1 reaction

Ape/rture
@aperture
We can handle this throughput with The Neighborhood https://neighborhood.indexing.co/ It's all because we process the data in parallel through our network. What helps for storage is filtering before data hits the DBs. For MegaETH specifically you can also choose to use the mini blocks for front-end confirmations, while using the EVM blocks (1s) to land the data in the DBs. Pipelines can luckily use the same types of filtering.
1 reply
0 recast
0 reaction
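The "filter before data hits the DBs" point can be sketched as a tiny pipeline stage. Everything here is hypothetical (the contract addresses and `filter_events` are placeholders), but it shows why storage only ever sees the subset you care about.

```python
# Hypothetical watch list -- substitute real contract addresses.
WATCHED_CONTRACTS = {"0xaaa", "0xbbb"}

def filter_events(events):
    """Drop events from contracts we don't track. Running this inside
    the pipeline, before any write, means the DB never stores the rest."""
    return [e for e in events if e["contract"] in WATCHED_CONTRACTS]

events = [
    {"contract": "0xaaa", "value": 1},
    {"contract": "0xccc", "value": 2},  # not watched, dropped
    {"contract": "0xbbb", "value": 3},
]
kept = filter_events(events)
```

The same predicate works whether the input is a 1s EVM block or a mini block, which is why one filtering setup can serve both pipelines.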

Ape/rture
@aperture
You can use The Indexing Company's public BigQuery dataset. BigQuery gives lots of free credits, and you can also get other types of FC data. https://console.cloud.google.com/bigquery?sq=867429816176:87fef9a0cc334c199363075701d50e74
0 reply
0 recast
0 reaction
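A minimal sketch of querying such a public dataset from Python. The post only links a saved console query, so the table name below is a placeholder, not the real dataset path; swap in the table from that link before running.

```python
# Placeholder table path -- substitute the real project.dataset.table
# from the saved console query linked above.
TABLE = "project.dataset.casts"

def build_query(fid: int, limit: int = 10) -> str:
    """Assemble a simple SQL query for one Farcaster ID's latest rows."""
    return (
        f"SELECT * FROM `{TABLE}` "
        f"WHERE fid = {fid} "
        f"ORDER BY timestamp DESC LIMIT {limit}"
    )

query = build_query(3)
# Executing it needs the google-cloud-bigquery client and credentials:
#   from google.cloud import bigquery
#   rows = bigquery.Client().query(query).result()
```

The free-tier credits mentioned in the post apply to the bytes scanned by queries like this, so selecting only needed columns instead of `*` stretches them further.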

Ape/rture
@aperture
We are launching The Neighborhood by The Indexing Company, the new solution for fetching on-chain data. This is the first iteration. We are building towards an open distributed network that can process any data while tapping into excess compute from anywhere. https://neighborhood.indexing.co/
0 reply
0 recast
0 reaction

Brock
@runninyeti.eth
Excited to announce that /indexing has joined Avalanche Codebase to begin protocolizing our infra.

The thesis remains simple:
- Web1 = my compute
- Web2 = their compute
- Web3 = our compute

While "crypto" has been a clear first use case for these distributed primitives, it certainly won't be the last. Idle compute is *everywhere* and we're on a mission to make it useful. https://x.com/avax/status/1907464995790545090
8 replies
11 recasts
28 reactions

Ape/rture
@aperture
Are service providers also welcome? With The Indexing Company we could help with indexing + data infra for both Farcaster and the related on-chain activity. That probably saves you hiring an additional data engineer and reduces setup time to a few weeks (vs. a few months). Cc: @runninyeti.eth
0 reply
0 recast
1 reaction

Ape/rture
@aperture
And then to the point that we can do 1-3 at the same time at some point. We are developing Just-in-Time Indexing, where you can get any data point or set from any chain at the speed of an API request. We know it's possible with good parallelization because we have done it; it's just a matter of how much processing power you throw at the problem. Hence why we are building a distributed protocol to do this.
0 reply
0 recast
0 reaction

Ape/rture
@aperture
I see your point and I think it is in the details. Although 2., the complete backfill, can be a one-time trigger. And 3., the time window, can be automated if the system knows the period it was down for, so no extra automation is needed for 3. Then there is a case where shipping the whole history can be easier vs. 3., where you filter on specific data (contracts/tokens/etc.) for a specific period in which only that data is active. For the latter you need to know when those txs happened.
1 reply
0 recast
0 reaction
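The downtime-window case above reduces to simple arithmetic: if the system knows when it stopped and when it resumed, the gap converts directly into a backfill range, fired as a one-time trigger. A minimal sketch, with `missed_block_range` as a hypothetical helper and an assumed 1s block time:

```python
from datetime import datetime

def missed_block_range(last_indexed_at, resumed_at, seconds_per_block=1):
    """Convert a known downtime window into a count of blocks to
    backfill. One trigger with this range replaces any standing
    automation for the gap."""
    gap = (resumed_at - last_indexed_at).total_seconds()
    return int(gap // seconds_per_block)

# One hour of downtime on a 1s-block chain -> 3600 blocks to backfill.
down_at = datetime(2024, 1, 1, 12, 0, 0)
back_at = datetime(2024, 1, 1, 13, 0, 0)
blocks_to_backfill = missed_block_range(down_at, back_at)
```

The filtered variant from the post is the harder one precisely because this arithmetic isn't enough there: you also need to know in which of those blocks the watched contracts/tokens were actually active.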

Ape/rture
@aperture
Seems on top of 1, 2 and 3 we are also building 4: do steps 1-3 at the same time @runninyeti.eth
1 reply
0 recast
1 reaction

Ape/rture
@aperture
Maybe @runninyeti.eth can help. We've done indexing for NFTs many times (including Metadata, regardless of chain). Only start building it yourself if you want to be locked in for a few months and don't want to work on your actual product 😅
0 reply
0 recast
1 reaction

Ape/rture
@aperture
Then start messaging @runninyeti.eth or get in touch with us here: https://www.indexing.co/get-in-touch Let us know if we can help you out, since we can help you bridge this downtime
0 reply
0 recast
2 reactions

Ape/rture
@aperture
To prove this is a human attempt
0 reply
0 recast
1 reaction

Brock
@runninyeti.eth
building /indexing, distributed infra for open data

This week we shipped:
- v0 of docs.indexing.co (wip, feedback welcome - reach out to get started!)
- chugging along with bare metal optimizations 🚂
- substrate chains: Bittensor, Astar, Enjin, Kusama, Polkadot

Things are about to get fun🤘
1 reply
4 recasts
17 reactions

Ape/rture
@aperture
A few years back we did a series with Deus Ex DAO. The concepts are still valid, but maybe the terminology is a bit dated. https://deusexdao.substack.com/p/tokenomics-guide-3-the-launch Also some info on the points part: https://0xkepler.substack.com/p/improving-point-airdrops
1 reply
0 recast
2 reactions

Brock
@runninyeti.eth
Seeking $250k in dedicated funds for indexing.co to pick up where SimpleHash left off. We have the pipes, we have the templates, and we've been doing similar work for customers for years. The industry needs a dedicated offering though. SimpleHash is (was...) great, but often limited by their API-first approach to accessing data. Imagine token metadata, ownership, and pricing available in *any* pipeline and delivered directly to users. Open to investment, partnerships, grants, ... DC me if interested.
10 replies
18 recasts
39 reactions

Brock
@runninyeti.eth
Live and on demand with indexing.co when, and where, you need it 🦄
1 reply
2 recasts
6 reactions