greg pfp
greg
@gregfromstl
Trying to build a frame that uses an LLM, and I'm realizing the 5 second timeout is going to be pretty prohibitive for what's possible
7 replies
1 recast
7 reactions

kevin pfp
kevin
@kevinoconnell
GPT-3.5 Turbo?
1 reply
0 recast
1 reaction

greg pfp
greg
@gregfromstl
My current app responds in right around 5 seconds when slow, so it's 50/50 whether or not the error hits. A faster model definitely helps; I'm just concerned about these scenarios where you can't guarantee the frame will work every time
1 reply
0 recast
0 reaction
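One way to keep the frame from erroring when the model is slow is to race the LLM call against a deadline and serve a fallback response if it loses. A minimal TypeScript sketch of that pattern follows; `withDeadline` is a hypothetical helper (not part of any frames SDK), and the timings are simulated.

```typescript
// Sketch: race the LLM call against a deadline so the frame always
// responds before Farcaster's 5-second cutoff. `withDeadline` is a
// hypothetical helper, not a real frames API.
async function withDeadline<T>(work: Promise<T>, ms: number, fallback: T): Promise<T> {
  let timer: ReturnType<typeof setTimeout> | undefined;
  const deadline = new Promise<T>((resolve) => {
    timer = setTimeout(() => resolve(fallback), ms);
  });
  const result = await Promise.race([work, deadline]);
  if (timer !== undefined) clearTimeout(timer); // don't leave the timer pending
  return result;
}

// Simulated slow LLM call; in a real handler you'd use a deadline of
// ~4000ms to leave headroom under the 5s limit.
const slowLLM = new Promise<string>((resolve) =>
  setTimeout(() => resolve("model answer"), 50),
);
withDeadline(slowLLM, 20, "Still thinking, tap to refresh").then((r) =>
  console.log(r), // the 20ms deadline wins here, so this logs the fallback
);
```

The fallback could be a "still generating" frame image with a refresh button, so the user retries instead of hitting the timeout error.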

kevin pfp
kevin
@kevinoconnell
Mm, I wonder if you can use a faster model off Replicate
0 reply
0 recast
0 reaction