
Sanjay

@sanjay

210 Following
5288 Followers


Sanjay
@sanjay
We've been so focused on snapchain that I forgot to do the Hub protocol release for Nov 27. Just released Hubble 1.17. Please upgrade before the current version expires at midnight UTC on Dec 11. Apologies for the short notice!
0 replies
24 recasts
195 reactions

Sanjay
@sanjay
Considering buying a house, and came across this very cool and very legal covenant (from 1946) for the land in the disclosures. Can't say I wasn't tempted to buy just to stick it to them.
6 replies
11 recasts
77 reactions

Sanjay
@sanjay
I was originally leaning towards account ordering. But happy with where we ended up. The biggest issue with blockchains is managing block state growth. Thanks to @cassie for inspiring the solution on how to handle this in snapchains. Also, special thanks to @vrypan.eth for the App Ordering idea.
3 replies
132 recasts
412 reactions

Sanjay
@sanjay
If you enjoy coming up with novel distributed systems algorithms, we have just the challenge for you.
3 replies
6 recasts
145 reactions

Sanjay
@sanjay
Please upgrade Hubble to 1.14.4, which implements the storage changes defined in https://github.com/farcasterxyz/protocol/discussions/191 We'll min version on Monday at the latest so all hubs are ready for Aug 28, when storage units were originally scheduled to expire. If you are manually calculating storage, you'll need to update your logic. You may find the new helper functions in the hub-nodejs package useful: https://github.com/farcasterxyz/hub-monorepo/blob/main/packages/core/src/limits.ts
1 reply
5 recasts
39 reactions
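For anyone recalculating storage by hand, the math scales linearly with the number of units rented. A minimal sketch; the per-unit numbers below are illustrative assumptions, and the authoritative constants live in the linked limits.ts:

```typescript
// Illustrative per-storage-unit limits (assumed values; check
// @farcaster/core's limits.ts for the real constants).
type StoreLimits = {
  casts: number;
  reactions: number;
  links: number;
  verifications: number;
  userData: number;
};

const PER_UNIT: StoreLimits = {
  casts: 5000,
  reactions: 2500,
  links: 2500,
  verifications: 25,
  userData: 50,
};

// Total limits scale linearly with the number of storage units rented.
function getStoreLimits(units: number): StoreLimits {
  return {
    casts: PER_UNIT.casts * units,
    reactions: PER_UNIT.reactions * units,
    links: PER_UNIT.links * units,
    verifications: PER_UNIT.verifications * units,
    userData: PER_UNIT.userData * units,
  };
}
```

`getStoreLimits` here is a hypothetical stand-in for the helpers in the package, not its actual API.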

Sanjay
@sanjay
Since this day, we're at 50x the message count, 180x the peer count, and ~40x the db size. Perf metrics are harder to compare since they were captured during an incident, but currently p95 merge latency is ~30ms and p95 gossip delay is <1s (vs 2000ms and ~2.5hrs during the incident)
14 replies
108 recasts
522 reactions

Sanjay
@sanjay
Hub message disruption today was caused by our hub losing gossip connectivity to all other hubs (unclear exactly why, due to a logging bug). It was unable to regain connection due to a bad interaction with a libp2p upgrade. Released 1.14.2 with a fix.
0 replies
6 recasts
39 reactions

Sanjay
@sanjay
Hubble 1.14 is out. It includes a bunch of fixes around follows (consistency issues and large compaction events breaking event streams). If you are using shuttle, make sure you're on the latest version before upgrading hubs; there's a breaking API change for events.
4 replies
123 recasts
547 reactions

Sanjay
@sanjay
Message processing was broken today because some events exceeded grpc client default size. If you're using shuttle, please upgrade to 0.4.1 to get the fix. If you are constructing hub clients manually and listening to events, make sure to pass in the following param
3 replies
8 recasts
141 reactions
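The exact param isn't reproduced in this cast, but for clients built directly on @grpc/grpc-js the relevant knob is the `grpc.max_receive_message_length` channel option, which defaults to roughly 4 MB. A hedged sketch; the 100 MiB value is an assumption, not the exact number shipped in the fix:

```typescript
// The grpc-js default max receive size is small enough that a large hub
// event can exceed it; raising it via a channel option avoids the failure.
// 100 MiB is an illustrative value, not the one from the actual fix.
const MAX_RECEIVE_MESSAGE_LENGTH = 100 * 1024 * 1024; // 100 MiB

const channelOptions = {
  "grpc.max_receive_message_length": MAX_RECEIVE_MESSAGE_LENGTH,
};

// Hypothetical usage when constructing a hub client manually, e.g.:
//   const client = getSSLHubRpcClient(hubUrl, channelOptions);
// (client constructor name is an assumption about the hub-nodejs API)
```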

Sanjay
@sanjay
This one was tricky to track down. There were a lot of dead ends. Special thanks to @wazzymandias.eth for basically fixing it last night and not telling the rest of us 😂 @cassie and @sds for some deep libp2p and tcp tuning magic, which we thankfully didn't need. And finally to my good friend Claude, who pointed me to the `node --prof` command, which can profile worker threads; it would've been much more difficult to narrow down the root cause without it.
9 replies
22 recasts
122 reactions

Sanjay
@sanjay
Released Hubble 1.13.2 with a crash fix (thanks @cassie). Please upgrade for improved stability. We're also planning to min version to 1.13.1 early next week so all hubs support long casts. We're noticing some sync issues due to older hubs that don't support it.
4 replies
4 recasts
69 reactions

Sanjay
@sanjay
Reminder that the replicator is deprecated in favor of shuttle. We're going to remove the replicator from the hub codebase by the end of next week to avoid any confusion. If you're still using the replicator, please migrate to the shuttle package. Let me know if you have any questions about migration.
15 replies
344 recasts
2044 reactions

Sanjay
@sanjay
We’re seeing message processing delays again. The team is working on scaling our systems to be able to handle it.
0 replies
0 recasts
23 reactions

Sanjay
@sanjay
alpha version of the package is out https://github.com/farcasterxyz/hub-monorepo/tree/main/packages/hub-shuttle
1 reply
6 recasts
34 reactions

Sanjay
@sanjay
@ted @gregan the RWA thesis is finally playing out. When does the goldfinch goat pool open? There's clear investor demand https://x.com/NeerajKA/status/1775881228656181312?s=20
1 reply
0 recasts
15 reactions

Sanjay
@sanjay
Please reach out if you’re interested in using this, or have any feedback
3 replies
2 recasts
14 reactions

Sanjay
@sanjay
We're planning to min version the hubs to 1.11.2 later today to improve network health. This version includes the fix to have all hubs use snapshots to catch up if they are too far behind. If you're on an older version, please run `./hubble.sh upgrade` to get up to date. https://warpcast.com/sanjay/0x4a7d7839
2 replies
3 recasts
11 reactions

Sanjay
@sanjay
We've released 1.11.1. @wazzymandias.eth made it so that hubs will now automatically use snapshot sync to catch up if they are too many messages behind. Note that this will reset the db; if you're running a replicator, you can disable this to be safe by setting `CATCHUP_SYNC_WITH_SNAPSHOT=false` in your .env file
6 replies
2 recasts
13 reactions

Sanjay
@sanjay
This is a very cool talk. Interestingly, it's almost exactly the same algorithm the hubs use right now. Prolly trees are more efficient since they collapse the number of levels required (we use timestamp-prefix tries, so it's always at least 10 levels), but apart from that it's exactly the same.
1 reply
1 recast
23 reactions
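The "at least 10 levels" figure follows from keying the sync trie by a decimal timestamp prefix: assuming the timestamp is padded to 10 digits with one trie level per digit, any two messages share a level for every leading digit their timestamps have in common. A rough sketch (the 10-digit padding is an assumption about the hub's encoding):

```typescript
// Each sync ID is assumed to start with the message timestamp padded to
// 10 decimal digits; every digit corresponds to one trie level, so the
// trie is always at least 10 levels deep.
const TIMESTAMP_PREFIX_DIGITS = 10;

function timestampPrefix(farcasterTimestamp: number): string {
  return farcasterTimestamp.toString().padStart(TIMESTAMP_PREFIX_DIGITS, "0");
}

// Number of leading trie levels two messages share, i.e. the length of
// the common prefix of their padded timestamps.
function commonPrefixLevels(a: number, b: number): number {
  const pa = timestampPrefix(a);
  const pb = timestampPrefix(b);
  let n = 0;
  while (n < pa.length && pa[n] === pb[n]) n++;
  return n;
}
```

A Prolly tree avoids this fixed overhead by letting content-defined boundaries decide the fan-out, so fewer levels are needed for the same key space.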

Sanjay
@sanjay
Our existing sync architecture for Hubs is running into scaling issues. Been thinking about a new design that can scale to billions of messages and millions of fids. If you enjoy complex distributed systems problems, I would appreciate feedback https://warpcast.notion.site/Sync-V2-a9c0fd81d7b245a0b3fbd51e6007909f
16 replies
15 recasts
92 reactions