Varun Srinivasan
@v
Anyone here worked on an eng team that processed more than 100M background jobs /day? Starting to run into issues with our setup. Looking for ideas on what people typically turn to at this scale.
24 replies
27 recasts
178 reactions

brian is live on unlonely
@br1an.eth
have you tried refreshing?
3 replies
0 recast
9 reactions

Daniel Sinclair
@danielsinclair
Uber scaled Cadence (now Temporal) to something like ~12B workflows and ~270B actions per month. Probably the wrong stack for your hub architecture though.
3 replies
0 recast
1 reaction
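For anyone curious what the Cadence/Temporal model looks like in practice, here is a minimal sketch using the Temporal Python SDK. It assumes a Temporal server reachable at localhost:7233; the workflow/activity names and the "jobs" task queue are invented for illustration, not anything from Uber's or Varun's setup.

```python
import asyncio
from datetime import timedelta

from temporalio import activity, workflow
from temporalio.client import Client
from temporalio.worker import Worker


@activity.defn
async def process_event(event_id: str) -> str:
    # The actual job logic would go here (fan-out writes, API calls, etc.).
    return f"processed {event_id}"


@workflow.defn
class ProcessEventWorkflow:
    @workflow.run
    async def run(self, event_id: str) -> str:
        # Temporal persists this step durably and retries it if a worker dies.
        return await workflow.execute_activity(
            process_event,
            event_id,
            start_to_close_timeout=timedelta(seconds=30),
        )


async def main() -> None:
    client = await Client.connect("localhost:7233")
    # A worker polls the task queue and executes workflows and activities.
    worker = Worker(
        client,
        task_queue="jobs",
        workflows=[ProcessEventWorkflow],
        activities=[process_event],
    )
    async with worker:
        result = await client.execute_workflow(
            ProcessEventWorkflow.run,
            "event-123",
            id="process-event-123",
            task_queue="jobs",
        )
        print(result)


if __name__ == "__main__":
    asyncio.run(main())
```

The appeal at high volume is that retries, timeouts, and workflow state live in the Temporal cluster, so a worker crash mid-job doesn't lose work.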

Alberto Ornaghi
@alor
Interesting challenge. It heavily depends on the kind of jobs you need to process. Do they have shared state? Are they fire & forget? CPU or IO intensive? So many cases… Would love to follow the conversation on this topic.
1 reply
0 recast
2 reactions

Jacek 🎩
@jacek
Consider shifting focus from batch jobs to streaming data. You could implement websockets or leverage AWS Kinesis. Identify the primary purposes and time intervals of the jobs. Short-interval tasks should migrate to streaming data for optimal performance.
2 replies
0 recast
13 reactions
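To make the Kinesis suggestion concrete, here is a minimal producer sketch with boto3; the stream name, region, and record shape are placeholders, assuming a stream called "jobs" already exists.

```python
import json

import boto3

kinesis = boto3.client("kinesis", region_name="us-east-1")


def emit_event(user_id: str, payload: dict) -> None:
    # The partition key controls shard assignment; use something with high
    # cardinality so load spreads evenly across shards.
    kinesis.put_record(
        StreamName="jobs",
        Data=json.dumps({"user_id": user_id, **payload}).encode("utf-8"),
        PartitionKey=user_id,
    )


emit_event("fid:2", {"type": "reaction", "target": "0xabc"})
```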

John Jung
@jj
👋
0 reply
0 recast
3 reactions

Tiago
@alvesjtiago.eth
depends on the requirements for the background jobs, but currently running more than that with a setup of Kafka and Golang stream processors for the fast-processing ones
1 reply
0 recast
1 reaction
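A rough sketch of the consumer side of a setup like that, here with kafka-python rather than Go; the topic, consumer group, and job shape are invented for illustration.

```python
import json

from kafka import KafkaConsumer

consumer = KafkaConsumer(
    "background-jobs",
    bootstrap_servers=["localhost:9092"],
    group_id="job-processors",
    enable_auto_commit=False,
    value_deserializer=lambda v: json.loads(v.decode("utf-8")),
)

for message in consumer:
    job = message.value
    # ... do the actual work here ...
    # Commit only after the job succeeds so a crashed worker replays it.
    consumer.commit()
```

Disabling auto-commit and committing only after the work succeeds gives at-least-once processing, which is usually the right default for background jobs.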

Shashank
@0xshash
dataflow + bigtable from gcp might be a good choice for low latency / high throughput systems (this is what X does iirc)
1 reply
0 recast
1 reaction
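For the Bigtable half of that suggestion, a hedged sketch of writing job results with the google-cloud-bigtable client (the Dataflow/Beam side is sketched further down); the project, instance, table, and column names are placeholders.

```python
from google.cloud import bigtable

client = bigtable.Client(project="my-project")
instance = client.instance("jobs-instance")
table = instance.table("job-results")

# Row keys are the main design decision in Bigtable: reads and writes scale
# with how evenly keys distribute across nodes.
row = table.direct_row(b"job#123")
row.set_cell("results", "status", b"done")
row.set_cell("results", "payload", b'{"ok": true}')
row.commit()
```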

MOLO
@molo
At slack we had to scale up from a straightforward redis task queue: https://slack.engineering/scaling-slacks-job-queue/
0 reply
0 recast
1 reaction
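For contrast, the "straightforward redis task queue" baseline that post starts from looks roughly like this, sketched with redis-py; the queue name and job shape are made up.

```python
import json

import redis

r = redis.Redis(host="localhost", port=6379)

# Producer: push a job onto the list.
r.rpush("jobs", json.dumps({"type": "send_email", "to": "user@example.com"}))

# Worker: block until a job is available, then process it.
_queue, raw = r.blpop("jobs")
job = json.loads(raw)
print("processing", job["type"])
```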

osama
@osama
yea, did at segment. queues and durable execution all the way down with some nifty services. some peers started Stealthrocket.tech. happy to chat and also connect
0 reply
0 recast
3 reactions

Gabriel Ayuso ⌁ brewing
@gabrielayuso.eth
Maybe a combo of pubsub / stream processing and batch jobs. A quick search brings up Kafka and Flink as options to explore. Apache Beam is a good way to model the workflows decoupled from the underlying engine.
1 reply
0 recast
3 reactions
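A small Apache Beam sketch of that "model once, swap the engine" point: the same pipeline runs locally on the DirectRunner, or on Flink/Dataflow by changing --runner. The data and transforms here are placeholders.

```python
import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions

# Pass e.g. ["--runner=FlinkRunner"] or ["--runner=DataflowRunner", ...]
# to run the same code on a different engine.
options = PipelineOptions()

with beam.Pipeline(options=options) as p:
    (
        p
        | "CreateJobs" >> beam.Create([{"fid": 2, "casts": 3}, {"fid": 3, "casts": 7}])
        | "ExtractCounts" >> beam.Map(lambda job: (job["fid"], job["casts"]))
        | "SumPerUser" >> beam.CombinePerKey(sum)
        | "Print" >> beam.Map(print)
    )
```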

Daniel Lombraña
@teleyinex.eth
Nothing to add, as several people have already suggested using streaming DBs to handle high volumes like yours. You probably know this book, but just in case: https://www.oreilly.com/library/view/designing-data-intensive-applications/9781491903063/ It uses Twitter feeds as an example in several cases to explain how to do it.
1 reply
0 recast
3 reactions

Sam is at FarCon ✦
@samantha
@matallo.eth probs (?)
0 reply
0 recast
1 reaction

EulerLagrange.eth @Farcon
@eulerlagrange.eth
Sneaky way to get prospective hires
0 reply
0 recast
1 reaction

jtgi (@farcon)
@jtgi
@martinamps u
0 reply
0 recast
0 reaction

0xChris
@0xchris
Erlang
0 reply
0 recast
0 reaction

Ethspresso 🚌🔵🎩
@ethspresso.eth
Airflow + K8s on bare metal is serving us well. At smaller scale than this though, something like 100k+ daily jobs of varying size and runtime. We use this for everything from ML training jobs to data transformations and 3rd party integrations. As @jacek mentioned, streaming is a good idea for certain use cases.
1 reply
0 recast
2 reactions
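For readers who haven't used it, an Airflow DAG at its smallest looks roughly like this (Airflow 2.x-style imports); the dag_id, schedule, and task body are placeholders, not Ethspresso's actual pipeline.

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def transform_data():
    # Real work (a data transformation, kicking off an ML training run, etc.)
    # would go here.
    print("transforming...")


with DAG(
    dag_id="daily_transform",
    start_date=datetime(2024, 1, 1),
    schedule="@hourly",
    catchup=False,
) as dag:
    PythonOperator(task_id="transform", python_callable=transform_data)
```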

st3ve.eth
@st3ve
what's ur tech stack to do this rn?
1 reply
0 recast
1 reaction

downshift $ ▯ 🎩🍖✨↑
@downshift
sounds to me like this is getting closer to a message queue than a job queue, but i have no details on your setup
1 reply
0 recast
1 reaction

Furqan
@furqan
Ran multiple teams and projects that did this level of scale and more. Happy to share thoughts / feedback if it’s useful!
1 reply
0 recast
1 reaction