eggman
@eggman.eth
lil behind-the-scenes eggery on /nova's WaifuCast: WaifuCasts are genned with a pipeline of AI agents working in tandem, with Claude 3.5 Sonnet and a fine-tuned SDXL being the final models hit. Each agent I wrote had around 7,000 chars of system prompting to get the exact output desired (a LOT of trial and error here).

So, gigantic prompts + lots of transformed data, meaning our Claude 3.5 endpoint alone has already handled 35 million tokens of input. That's about 277 tokens per second, and it doesn't include SDXL, or any standalone/small models hit BEFORE heading off to Claude. It also doesn't include the massive data retrieval/storage done via @neynar.

Anthropic actually cut our compute off temporarily because we went way over their accepted limits on daily token input (within 2 hours); awesome team tho, got us de-limited fast after I explained we were just waifu-izing an entire social network. The FC community probably broke some records on this one. Absolutely crazy amount of compute processed.
9 replies
13 recasts
57 reactions
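The "pipeline of AI agents working in tandem" above could be sketched roughly like this. This is a hypothetical illustration, not the actual WaifuCast code: the `Agent` and `run_pipeline` names and the stub transforms are mine, and real stages would call Claude / SDXL instead of local functions. The token tally is a crude words-as-tokens estimate to show why huge system prompts multiplied across agents add up so fast.

```python
# Hypothetical sketch of a chained-agent pipeline: each agent wraps a model
# call behind its own large system prompt, and each stage's output feeds the
# next stage. Stub lambdas stand in for the actual model calls.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Agent:
    name: str
    system_prompt: str                # in the post, ~7,000 chars each
    transform: Callable[[str], str]   # stand-in for the real model call

def run_pipeline(agents: list[Agent], payload: str) -> tuple[str, int]:
    """Run payload through each agent in order, tallying input tokens.

    Words-as-tokens is only an estimate; a real system would use the
    provider's tokenizer for billing-accurate counts.
    """
    tokens_in = 0
    for agent in agents:
        # Every stage re-sends its full system prompt plus the payload,
        # which is why chained agents burn input tokens so quickly.
        tokens_in += len(agent.system_prompt.split()) + len(payload.split())
        payload = agent.transform(payload)
    return payload, tokens_in

# Toy usage: two stub agents that uppercase and then tag the text.
agents = [
    Agent("cleaner", "strip junk", lambda s: s.upper()),
    Agent("tagger", "add tag", lambda s: s + " [waifu]"),
]
result, tokens_used = run_pipeline(agents, "hello world")
```

Under this (rough) accounting, the post's numbers are consistent: 35 million input tokens at ~277 tokens/second works out to about 35 hours of sustained traffic to the Claude endpoint alone.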
Cryptoversal
@cryptoversal
Still blown away by the consistent quality of the output given the variety of inputs into the model. How much testing was needed to tweak the prompts? 1000 $degen
2 replies
0 recast
8 reactions
eggman
@eggman.eth
An annoying amount! The Anthropic endpoint shows I ran through 139 different iterations of system prompting - but I think that includes every test version I did too (i.e. it increments 10 times if I test the same input with a diff image 10 times). This doesn't include the smol local model stuff, or SDXL testing with prefix/suffix/uc's etc. It was sort of just a constant part of building it: I'd do some work on the backend, then spend an hour or two back on the prompts, write up some stuff for parsing them / detecting unwanted content, etc. I'd say it was maybe 65% direct dev/code work, 35% prompting work in total at a ballpark guess! 1001 $degen
1 reply
0 recast
4 reactions
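The "parsing them / detecting unwanted content" step eggman mentions could look something like the sketch below. This is an assumed, minimal illustration: the `BLOCKLIST` contents and the `is_clean` helper are hypothetical placeholders, not the production filter, which would presumably combine keyword checks with model-based moderation before anything reaches image generation.

```python
# Hypothetical sketch of a pre-image-gen content filter: reject model text
# output if it contains any blocklisted term as a whole word. Blocklist
# entries here are placeholders, not the real list.
import re

BLOCKLIST = {"gore", "nsfl"}  # placeholder terms

def is_clean(text: str) -> bool:
    """Return True if no blocklisted word appears as a whole word."""
    words = set(re.findall(r"[a-z']+", text.lower()))
    return words.isdisjoint(BLOCKLIST)
```

A whole-word check like this avoids false positives on substrings (e.g. a blocklisted term hiding inside a longer innocent word), at the cost of missing deliberate misspellings, which is why a second, model-based pass usually backs it up.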