Varun Srinivasan pfp
Varun Srinivasan
@v
Scaling Gossip with Bundles

Hubs are running into scaling issues with libp2p. We're proposing a change to "bundle" messages to fix some of these issues. This may add a ~1s delay to casts moving between clients. https://warpcast.notion.site/Scaling-Gossip-in-Hubble-e66c766fa6b04afcb407f4800134cd72?pvs=25
11 replies
11 recasts
183 reactions
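
A minimal sketch of what the bundling described above could look like, assuming Hubble's TypeScript environment; `MessageBundler`, the `Message` shape, and the `publish` callback are illustrative names, not the proposal's actual API. The flush window is where the ~1s delay would come from.

```typescript
// Illustrative sketch only, not the actual Hubble implementation.
// Messages accumulate for a fixed window (~1s) and are gossiped as
// one bundle, trading per-message latency for fewer gossip envelopes.

type Message = { hash: string; payload: Uint8Array };

class MessageBundler {
  private pending: Message[] = [];

  constructor(
    private windowMs: number,
    private publish: (bundle: Message[]) => void,
  ) {
    setInterval(() => this.flush(), this.windowMs);
  }

  submit(msg: Message): void {
    this.pending.push(msg);
  }

  private flush(): void {
    if (this.pending.length === 0) return;
    const bundle = this.pending;
    this.pending = [];
    this.publish(bundle); // one gossip publish instead of bundle.length publishes
  }
}

// Usage: hold casts for ~1s, then publish them as a single bundle.
const bundler = new MessageBundler(1000, (bundle) => {
  console.log(`gossiping bundle of ${bundle.length} messages`);
});
```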

Varun Srinivasan pfp
Varun Srinivasan
@v
Blockchains use libp2p, but they generate only 10s or 100s of items per second today. Hubs generate 10,000 items per second at peak traffic, which is 100x–1000x your average blockchain. Importantly, hubs do not have any notion of a "block": each cast or like is treated as a separate gossip message.
2 replies
0 recast
2 reactions
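
A rough back-of-envelope on why per-message gossip hurts at this rate; the peak rate is from the cast above, while the mesh degree of 8 (gossipsub's default D) is an assumption standing in for per-hop fanout.

```typescript
// Per-hop send counts: one gossip envelope per message vs one per bundle.
const messagesPerSec = 10_000; // peak rate quoted above
const meshDegree = 8;          // assumed gossipsub mesh degree (default D)

const perMessageSends = messagesPerSec * meshDegree; // 80,000 sends/sec per hop
const bundledSends = 1 * meshDegree;                 // ~1 bundle/sec -> 8 sends/sec per hop

console.log({ perMessageSends, bundledSends });
```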

Britt Kim pfp
Britt Kim
@brittkim.eth
“A bundle is valid if (1) At least one message merges successfully” Wouldn’t there now be a risk of a bad actor submitting various permutations of valid messages, each time producing unique bundles for the network to propagate? For a set of n messages, aren’t there 2^n possible bundles?
1 reply
0 recast
0 reaction
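
One hypothetical mitigation for the permutation concern (not from the proposal): dedupe at the message level rather than the bundle level, so the 2^n re-bundlings of n known messages collapse to n dedupe entries. `novelMessages` and the SHA-256 keying are assumptions for illustration.

```typescript
import { createHash } from "node:crypto";

// Track hashes of individual messages, not bundles, so a re-shuffled
// bundle of already-seen messages yields nothing new to forward.
const seen = new Set<string>();

function messageHash(payload: Uint8Array): string {
  return createHash("sha256").update(payload).digest("hex");
}

// Returns only the messages we have not merged before; an
// all-duplicate bundle yields [] and can be dropped without forwarding.
function novelMessages(bundle: Uint8Array[]): Uint8Array[] {
  return bundle.filter((payload) => {
    const h = messageHash(payload);
    if (seen.has(h)) return false;
    seen.add(h);
    return true;
  });
}
```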

makemake  pfp
makemake
@makemake
Thought about doing Solana Turbine-style p2p?
1 reply
0 recast
2 reactions

jj 🛟 pfp
jj 🛟
@jj
I believe the code is here, if anyone else is following along: https://github.com/farcasterxyz/hub-monorepo/blob/main/apps/hubble/src/network/p2p/gossipNode.ts
0 reply
0 recast
1 reaction

jj 🛟 pfp
jj 🛟
@jj
Have you guys thought of just queueing at each hub instead of bundling, packing, and unpacking? You could maintain low latencies and use the queues to dedup.
0 reply
0 recast
1 reaction
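
A sketch of the queue-and-dedup idea above, under the assumption that messages are forwarded immediately and a bounded per-hub queue only filters duplicates; `DedupQueue` and its capacity are hypothetical.

```typescript
// Bounded dedupe queue: forward a message only the first time its
// hash is seen, evicting the oldest entries to cap memory use.
class DedupQueue {
  private seen = new Set<string>();
  private order: string[] = [];

  constructor(private capacity: number) {}

  // Returns true if the hash is new and the message should be forwarded.
  offer(hash: string): boolean {
    if (this.seen.has(hash)) return false;
    this.seen.add(hash);
    this.order.push(hash);
    if (this.order.length > this.capacity) {
      this.seen.delete(this.order.shift()!); // evict oldest
    }
    return true;
  }
}

const queue = new DedupQueue(1_000_000);
```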

jj 🛟 pfp
jj 🛟
@jj
How did you determine the 1s delay? Are these dynamic batches or fixed in size?
2 replies
0 recast
1 reaction

b5 pfp
b5
@bfive
Is the bigger problem traffic, or memory pressure caused by the long list of dedupe hashes? If it’s the latter, we could likely optimize libp2p gossip to use a probabilistic filter (like a Bloom filter) for hash-set membership checks.
0 reply
0 recast
0 reaction
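
A minimal Bloom filter along the lines suggested above; this is not libp2p's actual seen-message cache, and the size and hash-count parameters are illustrative. The trade-off: a fixed-size bitset replaces a growing hash set, at the cost of occasional false positives (which, for gossip dedupe, means very rarely dropping a genuinely new message).

```typescript
import { createHash } from "node:crypto";

// Probabilistic set membership: k hash functions map each item to k
// bit positions; an item "might be present" only if all k bits are set.
class BloomFilter {
  private bits: Uint8Array;

  constructor(private sizeBits: number, private numHashes: number) {
    this.bits = new Uint8Array(Math.ceil(sizeBits / 8));
  }

  private indexes(item: string): number[] {
    const out: number[] = [];
    for (let i = 0; i < this.numHashes; i++) {
      const digest = createHash("sha256").update(`${i}:${item}`).digest();
      out.push(digest.readUInt32BE(0) % this.sizeBits);
    }
    return out;
  }

  add(item: string): void {
    for (const idx of this.indexes(item)) {
      this.bits[idx >> 3] |= 1 << (idx & 7);
    }
  }

  mightContain(item: string): boolean {
    return this.indexes(item).every(
      (idx) => (this.bits[idx >> 3] & (1 << (idx & 7))) !== 0,
    );
  }
}

// ~1M bits and 4 hashes: ~128 KB total instead of storing every hash.
const seenFilter = new BloomFilter(1 << 20, 4);
seenFilter.add("abc123");
console.log(seenFilter.mightContain("abc123")); // true
console.log(seenFilter.mightContain("def456")); // almost certainly false
```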

jj 🛟 pfp
jj 🛟
@jj
Something else I just thought of: something like a read-only mode. A bunch of hubs are probably glorified read replicas, so for those hubs that are just reading, you can hyper-optimize that path.
0 reply
0 recast
0 reaction

vrypan |--o--| pfp
vrypan |--o--|
@vrypan.eth
I think there's room to improve syncing (for example, bundles) if you take into account that Farcaster has particular usage patterns. For example, most users use a single hub (their client's hub) 99% of the time. Can we optimize by assuming that a hub will bundle messages in a specific way, and treat bundles as probably unique?
1 reply
0 recast
0 reaction
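
A sketch of how "probably unique" bundles might be keyed, assuming each originating hub stamps its bundles with a sequence number; the `(origin, seq)` key and `shouldProcessBundle` are hypothetical, not part of the proposal.

```typescript
// If one hub originates nearly all of a user's messages, a bundle can
// be deduped wholesale by (origin peer id, sequence) without
// inspecting every message inside it.
const seenBundles = new Set<string>();

function shouldProcessBundle(originPeerId: string, sequence: number): boolean {
  const key = `${originPeerId}:${sequence}`;
  if (seenBundles.has(key)) return false;
  seenBundles.add(key);
  return true;
}
```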

Brock pfp
Brock
@runninyeti.eth
Curious if there were any out-of-the-box ideas on the table that got thrown out but are worth exploring longer term? For instance, reading this, my mind immediately goes toward federation, i.e. solving scaling longer term by clustering hubs (by channel?) and letting clusters communicate.
1 reply
0 recast
0 reaction

d_ttang pfp
d_ttang
@ttang.eth
Looks good
0 reply
0 recast
0 reaction