W1NTΞR
@w1nt3r
After years of "web3" development, I still can't get over how arbitrarily stupid RPC node limitations are. Take eth_getLogs: Alchemy offers a 100k block range, QuickNode 10k, and these numbers can change depending on how many results a query would return. You either have to code against a single RPC provider (lock-in) or run something like /ponder or The Graph (ops complexity++)
7 replies
0 recasts
67 reactions
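To make the lock-in tradeoff concrete, here is a minimal sketch of the usual workaround: paginating eth_getLogs in fixed-size block chunks small enough for every provider. The endpoint, contract address, and chunk size below are illustrative placeholders, not values from the thread.

```ts
const RPC_URL = "https://example-rpc.invalid"; // hypothetical endpoint
const CONTRACT = "0x0000000000000000000000000000000000000000"; // placeholder address
const CHUNK_SIZE = 2_000n; // a conservative "X", below typical provider limits

// Minimal JSON-RPC call over fetch (Node 18+ or browser)
async function rpc(method: string, params: unknown[]): Promise<any> {
  const res = await fetch(RPC_URL, {
    method: "POST",
    headers: { "content-type": "application/json" },
    body: JSON.stringify({ jsonrpc: "2.0", id: 1, method, params }),
  });
  const json = await res.json();
  if (json.error) throw new Error(json.error.message);
  return json.result;
}

// Fetch logs for [fromBlock, toBlock] in CHUNK_SIZE-block pages so no single
// eth_getLogs request exceeds the smallest provider limit we code against.
async function getLogsChunked(fromBlock: bigint, toBlock: bigint): Promise<any[]> {
  const logs: any[] = [];
  for (let start = fromBlock; start <= toBlock; start += CHUNK_SIZE) {
    const end = start + CHUNK_SIZE - 1n < toBlock ? start + CHUNK_SIZE - 1n : toBlock;
    logs.push(...(await rpc("eth_getLogs", [{
      address: CONTRACT,
      fromBlock: "0x" + start.toString(16), // eth_getLogs takes hex block numbers
      toBlock: "0x" + end.toString(16),
    }])));
  }
  return logs;
}
```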

treethought
@treethought.eth
These aren't really "arbitrary" limits (and Alchemy's limit is 10k as well, btw). Logs fundamentally put a heavy load on nodes, and if you go beyond that (especially depending on the node client), nodes will start to struggle and fall behind, drop your request, return a timeout/RPC error, or even never respond at all. There may also be limits baked into certain node clients themselves, though I'm not sure about that
1 reply
0 recasts
0 reactions
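A common defensive pattern against the failure modes treethought lists is to bisect on error: if a range query is rejected or times out, split it in half and retry each side. A minimal sketch, assuming the rpc() helper and CONTRACT placeholder from the earlier sketch:

```ts
// If a provider rejects or times out on a large range, bisect and retry.
// Recursion bottoms out at a single block; a failure there is a real error.
async function getLogsAdaptive(from: bigint, to: bigint): Promise<any[]> {
  try {
    return await rpc("eth_getLogs", [{
      address: CONTRACT,
      fromBlock: "0x" + from.toString(16),
      toBlock: "0x" + to.toString(16),
    }]);
  } catch (err) {
    if (from === to) throw err; // a single block still fails: give up
    const mid = (from + to) / 2n; // bigint division floors
    return [
      ...(await getLogsAdaptive(from, mid)),
      ...(await getLogsAdaptive(mid + 1n, to)),
    ];
  }
}
```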

W1NTΞR
@w1nt3r
Well, by "arbitrary" I mean more "non-standard". If the Ethereum RPC spec had a clause saying you can only query X blocks (where X can be computed from more fundamental parameters like gas limit, block size, etc.), then I'd have no problem with it; at least all RPC providers would behave the same. This wouldn't stop Alchemy from supporting, say, a 100*X block limit. Great for them! But if I wanted to code a universal client, I'd just stick with X. Today, however, there's no X, and I have to figure out X empirically based on whatever popular RPC providers do. This adds complexity
1 reply
0 recasts
1 reaction
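"Empirically figuring out X" can itself be automated: binary-search the largest block range a given provider will accept. A hedged sketch; probeMaxRange and its maxProbe bound are illustrative names, not any real API.

```ts
// Binary-search the largest eth_getLogs block range a provider accepts.
async function probeMaxRange(rpcUrl: string, maxProbe: bigint = 100_000n): Promise<bigint> {
  const call = async (method: string, params: unknown[]) => {
    const res = await fetch(rpcUrl, {
      method: "POST",
      headers: { "content-type": "application/json" },
      body: JSON.stringify({ jsonrpc: "2.0", id: 1, method, params }),
    });
    const json = await res.json();
    if (json.error) throw new Error(json.error.message);
    return json.result;
  };
  const latest = BigInt(await call("eth_blockNumber", []));
  let lo = 1n; // assume a 1-block query always succeeds
  let hi = maxProbe;
  while (lo < hi) {
    const range = (lo + hi + 1n) / 2n; // midpoint, biased upward
    try {
      await call("eth_getLogs", [{
        fromBlock: "0x" + (latest - range + 1n).toString(16),
        toBlock: "0x" + latest.toString(16),
      }]);
      lo = range; // accepted: the real limit is at least `range`
    } catch {
      hi = range - 1n; // rejected: the real limit is below `range`
    }
  }
  return lo;
}
```

As both posts note, limits can also depend on how many results a range would return, so a probe like this yields at most a rough lower bound for one provider at one moment; it doesn't replace a spec-defined X.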