Varun Srinivasan
@v
Anyone here worked on an eng team that processed more than 100M background jobs /day? Starting to run into issues with our setup. Looking for ideas on what people typically turn to at this scale.
28 replies
20 recasts
107 reactions

brian is live on unlonely
@briang
have you tried refreshing?
3 replies
0 recast
5 reactions

Daniel Sinclair
@danielsinclair
Uber scaled Cadence (now Temporal) to something like ~12B workflows and ~270B actions per month. Probably the wrong stack for your hub architecture though.
3 replies
0 recast
1 reaction

Alberto Ornaghi
@alor
Interesting challenge. It heavily depends on the kind of jobs you need to process. Do they have shared state? Are they fire & forget? CPU- or IO-intensive? So many cases… Would love to follow the conversation on this topic.
1 reply
0 recast
2 reactions

CodinCowboy 🍖
@codincowboy
did warpcast end up rearchitecting the stack away from temporal?
1 reply
0 recast
0 reaction

Jacek.degen.eth 🎩
@jacek
Consider shifting focus from batch jobs to streaming data. You could implement websockets or leverage AWS Kinesis. Identify the primary purposes and time intervals of the jobs. Short-interval tasks should migrate to streaming data for optimal performance.
2 replies
0 recast
10 reactions
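The migration Jacek describes — moving short-interval batch jobs onto a stream — usually means replacing a periodic "re-scan and aggregate" job with a windowed aggregation that updates as events arrive. A minimal sketch of the idea, assuming a tumbling (fixed, non-overlapping) window; the event shapes and key names are made up for illustration, and a real deployment would read from Kinesis or Kafka rather than a list:

```python
from collections import defaultdict

def tumbling_counts(events, window_seconds=60):
    """Bucket (key, unix_ts) events into tumbling windows as they arrive,
    replacing a batch job that would re-scan the table every minute."""
    counts = defaultdict(lambda: defaultdict(int))
    for key, ts in events:
        # Start of the window this event falls into.
        window_start = int(ts) - int(ts) % window_seconds
        counts[window_start][key] += 1
    return counts

# Two casts in the first minute, one reaction in the second.
events = [("cast", 0), ("cast", 10), ("react", 70)]
counts = tumbling_counts(events)
print(counts[0]["cast"])    # 2
print(counts[60]["react"])  # 1
```

The win at 100M+ jobs/day is that each event is touched once on arrival instead of being re-read by every batch pass; the trade-off is that you now own windowing, late-event, and checkpointing semantics that a batch scheduler gave you implicitly.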

jj 🛟
@jj
👋
0 reply
0 recast
2 reactions

MOLO
@molo
At Slack we had to scale up from a straightforward Redis task queue: https://slack.engineering/scaling-slacks-job-queue/
0 reply
0 recast
1 reaction

Shashank
@0xshash
dataflow + bigtable from gcp might be a good choice for low-latency / high-throughput systems (this is what X does iirc)
1 reply
0 recast
1 reaction

Tiago
@alvesjtiago.eth
depends on the requirements for the background jobs, but currently running more than that with a setup of Kafka and Golang stream processors for the fast-processing ones
1 reply
0 recast
1 reaction
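The core property a Kafka-plus-stream-processors setup like Tiago's buys you is partitioned parallelism with per-key ordering: all jobs for one key hash to the same partition, so independent workers can drain partitions concurrently without reordering any single user's jobs. A toy sketch of that partitioning idea (in Python for brevity, though Tiago's stack is Go; the partition count and `fid:` key format are invented for illustration — Kafka handles this routing for you):

```python
import hashlib
from collections import defaultdict, deque

NUM_PARTITIONS = 8  # stand-in for Kafka topic partitions

def partition_for(key: str) -> int:
    # Stable hash: every job for a given key lands on the same
    # partition, preserving per-key ordering across workers.
    digest = hashlib.sha256(key.encode()).digest()
    return int.from_bytes(digest[:4], "big") % NUM_PARTITIONS

class PartitionedQueue:
    """Toy stand-in for a Kafka topic: one FIFO per partition,
    each drained by an independent consumer/worker."""
    def __init__(self):
        self.partitions = defaultdict(deque)

    def enqueue(self, key, job):
        self.partitions[partition_for(key)].append((key, job))

    def drain(self, partition):
        q = self.partitions[partition]
        out = []
        while q:
            out.append(q.popleft())
        return out

q = PartitionedQueue()
q.enqueue("fid:42", "index-cast")
q.enqueue("fid:42", "notify-followers")
p = partition_for("fid:42")
print([job for _, job in q.drain(p)])  # ['index-cast', 'notify-followers']
```

To scale past 100M jobs/day with this model you mostly add partitions and workers; the per-key FIFO guarantee is what lets you avoid the locking that a shared job table needs.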

Daniel Lombraña
@teleyinex.eth
Nothing to add, as several people have already suggested streaming DBs to handle high capacity like yours. You probably know this book, but just in case: https://www.oreilly.com/library/view/designing-data-intensive-applications/9781491903063/ It uses Twitter feeds as an example in several places to explain how to do it
1 reply
0 recast
3 reactions

osama
@osama
yeah, did this at Segment. queues and durable execution all the way down, with some nifty services. some peers started Stealthrocket.tech. happy to chat and also connect
0 reply
0 recast
3 reactions

Gabriel Ayuso
@gabrielayuso.eth
Maybe a combo of pubsub / stream processing and batch jobs. A quick search brings up Kafka and Flink as options to explore. Apache Beam is a good way to model the workflows while staying decoupled from the underlying engine.
1 reply
0 recast
2 reactions
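The decoupling Gabriel points at in Beam is that the pipeline — a chain of transforms — is defined once and handed to whichever runner executes it (batch or streaming). A minimal sketch of just that separation, not using the Beam SDK itself; the transform chain and runner names here are invented to illustrate the shape:

```python
def pipeline(records):
    """Business logic: a chain of transforms defined once,
    independent of any execution engine (the Beam idea)."""
    parsed = (r.strip().lower() for r in records)
    kept = (r for r in parsed if r)       # filter out empties
    return (f"job:{r}" for r in kept)     # map to job names

def batch_runner(records):
    # Materialize the whole result at once, like a batch engine.
    return list(pipeline(records))

def streaming_runner(records):
    # Yield one element at a time, like a streaming engine.
    yield from pipeline(records)

data = ["  Recast ", "", "Notify"]
print(batch_runner(data))                  # ['job:recast', 'job:notify']
print(next(streaming_runner(iter(data))))  # 'job:recast'
```

In real Beam the same split holds at scale: the pipeline stays fixed while the runner (Dataflow, Flink, Spark, the local DirectRunner) can be swapped as throughput needs change.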

downshift.thief
@downshift.eth
sounds to me like this is getting closer to a message queue than a job queue, but i have no details on your setup
1 reply
0 recast
1 reaction

Sam (crazy candle person) ✦
@samantha
@matallo.eth probs (?)
0 reply
0 recast
1 reaction

EmpiricalLagrange - tevm/acc
@eulerlagrange.eth
Sneaky way to get prospective hires
0 reply
0 recast
0 reaction

0xChris
@0xchris
Erlang
0 reply
0 recast
0 reaction

jtgi
@jtgi
@martinamps u
0 reply
0 recast
0 reaction

Chinmay 🕹️🍿
@chinmay.eth
@kijijij any input ??
0 reply
0 recast
0 reaction

Furqan
@furqan
Ran multiple teams and projects that did this level of scale and more. Happy to share thoughts / feedback if it’s useful!
1 reply
0 recast
0 reaction