Varun Srinivasan
@v
Anyone here worked on an eng team that processed more than 100M background jobs/day? Starting to run into issues with our setup. Looking for ideas on what people typically turn to at this scale.
28 replies
20 recasts
142 reactions
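For context, 100M jobs/day works out to a bit under 1,200 jobs/second sustained; real traffic is bursty, so peak rates would be a multiple of that (the 10x factor below is an illustrative assumption, not a number from the thread):

```python
# Back-of-the-envelope throughput for 100M background jobs/day.
jobs_per_day = 100_000_000
seconds_per_day = 24 * 60 * 60  # 86,400

avg_rate = jobs_per_day / seconds_per_day
print(f"average: {avg_rate:,.0f} jobs/sec")      # ~1,157 jobs/sec

# Assumed burst factor for sizing peak capacity.
print(f"peak (10x): {avg_rate * 10:,.0f} jobs/sec")
```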
Alberto Ornaghi
@alor
Interesting challenge. It heavily depends on the kind of jobs you need to process. Do they have shared state? Are they fire & forget? CPU or IO intensive? So many cases… Would love to follow the conversation on this topic.
1 reply
0 recast
2 reactions
Varun Srinivasan
@v
Shared state. Think generating home feeds, updating reaction counts, etc.
3 replies
0 recast
4 reactions
shazow
@shazow.eth
Don't underestimate how much a single machine can handle with everything in-memory, at least for coordinating the tasks (e.g. a single Redis instance can handle upwards of 100k qps, and you could add several read replicas too); the workers can fan out as much as necessary from there.
0 reply
0 recast
2 reactions
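The shape shazow describes is a single coordinator holding the queue, with workers fanning out from it. In production the coordinator would be the Redis instance (e.g. `LPUSH` from producers, blocking `BRPOP` in workers); the sketch below models the same pattern in-process with `queue.Queue` so it runs without a Redis server:

```python
import queue
import threading

jobs = queue.Queue()      # stands in for the single Redis instance
results = queue.Queue()

def worker():
    while True:
        job = jobs.get()  # stands in for a blocking BRPOP
        if job is None:   # sentinel: shut down
            break
        results.put(job * 2)  # placeholder "work"
        jobs.task_done()

NUM_WORKERS = 8           # fan out as wide as needed
threads = [threading.Thread(target=worker) for _ in range(NUM_WORKERS)]
for t in threads:
    t.start()

for i in range(10_000):
    jobs.put(i)           # stands in for LPUSH from producers

jobs.join()               # wait until every real job is processed
for _ in threads:
    jobs.put(None)        # one sentinel per worker
for t in threads:
    t.join()

print(results.qsize())    # 10000 jobs processed across 8 workers
```

The single-coordinator design keeps ordering and dedup logic in one place; the usual failure mode at this scale is the coordinator becoming a write bottleneck, at which point people typically shard the queue by key.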