Ryan J. Shaw
@rjs
Update: I have doubts about shuttle. I trimmed the shuttle example app down to bare bones: just write the messages into pgsql. I'm running on a Hetzner CX52 VPS (16 cores / 32 GB RAM). The docs are vague, but imply you can run more than 1 worker. I started with 1 and observed high CPU utilization, so I left it there. There's no guidance on when to increase workers, and no throughput measure that I can see. In 12 hours, I have 8.7M messages in Postgres. The Hubble dashboard shows 502M messages, and I assume that's what I'll end up with in PG, so at this rate the process will complete in 29 days. I've started up 3 more workers.
3 replies
0 recast
9 reactions
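A quick sanity check on the 29-day figure above, assuming the single-worker ingest rate stays constant (all numbers are from the post):

```python
# Back-of-the-envelope sync ETA from the observed ingest rate.
messages_synced = 8_700_000    # rows in Postgres after 12 hours
hours_elapsed = 12
total_messages = 502_000_000   # total shown on the Hubble dashboard

rate_per_hour = messages_synced / hours_elapsed           # 725,000 msgs/hour
days_total = total_messages / rate_per_hour / 24          # ~28.9 days

print(f"{rate_per_hour:,.0f} msgs/hour, ~{days_total:.0f} days to sync everything")
```

At 725k messages/hour, 502M messages takes about 692 hours, i.e. roughly 29 days, matching the estimate in the post.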
downshift.thief
@downshift.eth
it’s pretty shocking that we don’t have a better solution than this…a ~130 GB sql dump would take far less time to restore from than this. speaking of…why aren’t those available to bootstrap from?
1 reply
0 recast
1 reaction
Ryan J. Shaw
@rjs
Right, this is actually what I wanted to do with my indexer. I want to start by producing a clean event stream that anybody can download from BitTorrent, or from S3 (if they pay me). You should be able to bootstrap in 140GB/your_download_bw seconds.
2 replies
0 recast
0 reaction
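To make the 140GB/your_download_bw figure concrete, here is a minimal sketch; the 1 Gbit/s link speed is a hypothetical value, not from the thread:

```python
# Rough bootstrap time for downloading a ~140 GB snapshot.
snapshot_gb = 140
download_mbps = 1000  # assumed 1 Gbit/s link; adjust for your connection

# GB -> gigabits -> seconds at the given megabits-per-second rate
seconds = snapshot_gb * 8 * 1000 / download_mbps
print(f"~{seconds / 60:.0f} minutes at {download_mbps} Mbit/s")
```

On a 1 Gbit/s connection that works out to roughly 19 minutes, versus the weeks-long replay the thread describes.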