Sanjay

@sanjay

213 Following
3604 Followers


Sanjay
@sanjay
Probably the most challenging and fun project I’ve ever worked on. Incredible work by @dynemyte @suurkivi and @cassie. Special shoutout to the Informal Systems team for building Malachite, a rock solid rust Tendermint library which powers snapchain.
7 replies
36 recasts
170 reactions

Sanjay
@sanjay
We've been so focused on snapchain that I forgot to do the Hub protocol release for Nov 27. Just released Hubble 1.17. Please upgrade before the current version expires at midnight UTC on Dec 11. Apologies for the short notice!
0 replies
12 recasts
148 reactions

Sanjay
@sanjay
Considering buying a house, and came across this very cool and very legal covenant (from 1946) for the land in the disclosures. Can't say I wasn't tempted to buy just to stick it to them.
7 replies
11 recasts
66 reactions

Sanjay
@sanjay
I was originally leaning towards account ordering. But happy with where we ended up. The biggest issue with blockchains is managing block state growth. Thanks to @cassie for inspiring the solution on how to handle this in snapchains. Also, special thanks to @vrypan.eth for the App Ordering idea.
8 replies
105 recasts
293 reactions

Sanjay
@sanjay
If you enjoy coming up with novel distributed systems algorithms, we have just the challenge for you.
3 replies
10 recasts
92 reactions

Sanjay
@sanjay
Please upgrade Hubble to 1.14.4, which implements the storage changes defined in https://github.com/farcasterxyz/protocol/discussions/191. We'll min version on Monday at the latest so all hubs are ready for Aug 28, when storage units were originally scheduled to expire. If you are manually calculating storage, you'll need to update your logic. You may find the new helper functions in the hub-nodejs package useful: https://github.com/farcasterxyz/hub-monorepo/blob/main/packages/core/src/limits.ts
1 reply
5 recasts
29 reactions
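For anyone updating their manual storage math, the idea can be sketched like this. The per-unit numbers below are illustrative placeholders only; the authoritative values live in the `limits.ts` file linked in the post:

```typescript
// Sketch of per-unit storage accounting. The real limits are defined in
// packages/core/src/limits.ts; the numbers here are ILLUSTRATIVE ONLY.
const LIMITS_PER_UNIT = {
  casts: 5000,
  reactions: 2500,
  links: 2500,
} as const;

type StoreKind = keyof typeof LIMITS_PER_UNIT;

// An fid's limit for a given store scales linearly with rented units.
function storageLimit(kind: StoreKind, rentedUnits: number): number {
  return LIMITS_PER_UNIT[kind] * rentedUnits;
}
```

Under these placeholder numbers, an fid with 2 rented units would get `storageLimit("casts", 2)` casts before old messages start being pruned.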

Sanjay
@sanjay
Since this day, we're at 50x the message count, 180x the peer count, ~40x the db size. Perf metrics are harder to compare since it was during an incident but currently p95 merge latency is ~30ms and p95 gossip delay is <1s (vs 2000ms and ~2.5hrs during the incident)
23 replies
38 recasts
278 reactions

Sanjay
@sanjay
Hub message disruption today was caused by our hub losing gossip connectivity to all other hubs (exactly why is unclear, due to a logging bug). It was unable to regain connection due to a bad interaction with a libp2p upgrade. Released 1.14.2 with a fix.
0 replies
6 recasts
24 reactions

Sanjay
@sanjay
Hubble 1.14 is out. It includes a bunch of fixes around follows (consistency issues and large compaction events breaking event streams). If you are using shuttle, make sure you're on the latest version before upgrading hubs; there's a breaking API change for events.
4 replies
66 recasts
370 reactions

Sanjay
@sanjay
Message processing was broken today because some events exceeded the gRPC client's default message size limit. If you're using shuttle, please upgrade to 0.4.1 to get the fix. If you are constructing hub clients manually and listening to events, make sure to pass in the following param
3 replies
19 recasts
113 reactions
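For hand-built clients, a minimal sketch of raising the receive limit, assuming the standard `@grpc/grpc-js` channel option; the 10 MB value and the client constructor in the comment are examples, not an official recommendation:

```typescript
// Sketch: grpc-js caps received messages at 4 MB by default, so unusually
// large hub events can fail to arrive. Raising the channel option below
// lifts that cap. The 10 MB figure here is an example value only.
const TEN_MB = 10 * 1024 * 1024;

const channelOptions = {
  "grpc.max_receive_message_length": TEN_MB,
};

// Hypothetical usage when constructing a hub client by hand:
//   const client = getInsecureHubRpcClient(address, channelOptions);
```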

Sanjay
@sanjay
This one was tricky to track down. There were a lot of dead ends. Special thanks to @wazzymandias.eth for basically fixing it last night and not telling the rest of us 😂 @cassie and @sds for some deep libp2p and tcp tuning magic, which we thankfully didn't need. And finally to my good friend Claude, who pointed me to the `node --prof` command, which can profile worker threads; it would've been much more difficult to narrow down the root cause without it.
13 replies
10 recasts
100 reactions

Sanjay
@sanjay
Released Hubble 1.13.2 with a crash fix (thanks @cassie). Please upgrade for improved stability. We're also planning to min version to 1.13.1 early next week so all hubs support long casts. We're noticing some sync issues due to older hubs that don't support it.
10 replies
4 recasts
66 reactions

Sanjay
@sanjay
Reminder that the replicator has been deprecated in favor of shuttle. We're going to remove the replicator from the hub codebase by the end of next week to avoid any confusion. If you're still using the replicator, please migrate to the shuttle package. Let me know if you have any questions about migration.
17 replies
204 recasts
1472 reactions

Sanjay
@sanjay
We’re seeing message processing delays again. The team is working on scaling our systems to be able to handle it.
0 replies
2 recasts
14 reactions

Sanjay
@sanjay
alpha version of the package is out https://github.com/farcasterxyz/hub-monorepo/tree/main/packages/hub-shuttle
2 replies
6 recasts
33 reactions

Sanjay
@sanjay
@ted @gregan the RWA thesis is finally playing out. When does the goldfinch goat pool open? There's clear investor demand https://x.com/NeerajKA/status/1775881228656181312?s=20
2 replies
0 recasts
21 reactions

Sanjay
@sanjay
Please reach out if you’re interested in using this, or have any feedback
18 replies
2 recasts
24 reactions

Sanjay
@sanjay
We're planning to min version the hubs to 1.11.2 later today to improve network health. This version includes the fix to have all hubs use snapshots to catch up if they are too far behind. If you're on an older version, please run `./hubble.sh upgrade` to get up to date. https://warpcast.com/sanjay/0x4a7d7839
5 replies
3 recasts
22 reactions

Sanjay
@sanjay
We've released 1.11.1. @wazzymandias.eth made it so that hubs will now automatically use snapshot sync to catch up if they are too many messages behind. Note that this will reset the db. If you're running a replicator, you can disable this to be safe by setting `CATCHUP_SYNC_WITH_SNAPSHOT=false` in your .env file.
10 replies
2 recasts
28 reactions

Sanjay
@sanjay
This is a very cool talk. Interestingly, it's almost exactly the same algorithm the hubs use right now. Prolly trees are more efficient since they collapse the number of levels required (we use timestamp prefix tries, so it's always at least 10 levels), but apart from that it's exactly the same.
3 replies
1 recast
28 reactions
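A toy sketch of the timestamp-prefix idea (not the hub implementation): keying a trie on the decimal digits of a 10-digit timestamp forces every entry at least 10 levels deep, which is exactly the overhead Prolly trees avoid by collapsing shared prefixes.

```typescript
// Toy timestamp-prefix trie: one level per character of the key.
// Keys start with the 10 decimal digits of the timestamp, so every
// inserted id sits at least 10 levels below the root.
type TrieNode = { children: Map<string, TrieNode> };

function insert(root: TrieNode, timestamp: number, id: string): void {
  const key = String(timestamp).padStart(10, "0") + id;
  let node = root;
  for (const ch of key) {
    let child = node.children.get(ch);
    if (!child) {
      child = { children: new Map() };
      node.children.set(ch, child);
    }
    node = child;
  }
}
```

Sync between peers can then compare subtrees level by level; the fixed 10-digit prefix is what keeps the tree tall even when only a few messages share a timestamp range.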