Joe Petrich 🟪 pfp
Joe Petrich 🟪
@jpetrich
An engineer on my team has been working on a migration of some indexed blockchain data to Postgres and took our queries from an initial average latency of almost 2 seconds to under 100ms. The process of getting there was fascinating, imo. Would anyone else be interested in a writeup of the optimization techniques he used?
11 replies
2 recasts
28 reactions
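
The cast doesn't say which techniques were used, so the following is a rough illustration only: for indexed blockchain data in Postgres, a drop from roughly 2 seconds to under 100ms most often comes from getting the planner onto an index that matches the query's filter and sort order. The table and column names below are hypothetical.

-- Inspect where the time is actually going:
EXPLAIN (ANALYZE, BUFFERS)
SELECT *
FROM transfers                          -- hypothetical table of on-chain transfers
WHERE contract_address = '\x1234abcd'   -- placeholder contract address
  AND block_number > 19000000
ORDER BY block_number DESC
LIMIT 50;

-- If the plan shows a sequential scan, a composite index matching the
-- predicate and the sort order is usually the single biggest win:
CREATE INDEX CONCURRENTLY idx_transfers_contract_block
    ON transfers (contract_address, block_number DESC);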

Pierre H. — q/dau pfp
Pierre H. — q/dau
@rektorship
I'd be interested 🙌 happy to share in return our experience at Ledger serving web3 data at web2 scale :)) https://www.ledger.com/blog/serving-web3-at-web2-scale
2 replies
0 recast
1 reaction

Joe Petrich 🟪 pfp
Joe Petrich 🟪
@jpetrich
This is great! Thank you! I think what's interesting about our use case, and relevant to a lot of builders in consumer crypto, is that the subset of data we care about is actually tiny compared with indexers that need the entire chain, or multiple chains. It's a very different problem and deserves different solutions.
1 reply
0 recast
2 reactions

Samuel ツ pfp
Samuel ツ
@samuellhuber.eth
what are you interested in? mainly your own transactions coming from the platform itself?
1 reply
0 recast
0 reaction

Joe Petrich 🟪 pfp
Joe Petrich 🟪
@jpetrich
Exclusively transactions that involve our single contract's tokens on the blockchain side. Since our token IDs are pseudorandom, you need to join with info from the metadata to do anything interesting, whereas lots of NFT indexers do tricks with token ID ranges to parse a contract into "collections".
1 reply
0 recast
1 reaction
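
Roughly what that metadata join looks like compared with the range trick, using hypothetical table names (the thread doesn't share the actual schema):

-- token_transfers: on-chain transfers for the single contract
-- token_metadata: maps each pseudorandom token ID to its attributes
SELECT m.collection, count(*) AS transfers_last_day
FROM token_transfers t
JOIN token_metadata m ON m.token_id = t.token_id
WHERE t.block_time > now() - interval '1 day'
GROUP BY m.collection
ORDER BY transfers_last_day DESC;

-- A range-based indexer can skip the join entirely because collection
-- membership is encoded in the token ID itself, e.g.:
--   WHERE token_id BETWEEN 1000000 AND 1999999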

Samuel ツ pfp
Samuel ツ
@samuellhuber.eth
Is the pseudorandomness a design flaw then? Or what was the reason/need for that? It sounds easier with token ID ranges, or ERC1155 minting out custom ERC20s with a new ID? (Not familiar with the exact contracts, that's why I am asking, forgive me.)
1 reply
0 recast
0 reaction