greg
@gregfromstl
Trying to build a frame that uses an LLM, and I'm realizing the 5 second timeout is going to be pretty prohibitive for what's possible
8 replies
1 recast
7 reactions

Tony D’Addeo
@deodad
you could do it with multiple actions: one to initiate and a follow-up to fetch the results
1 reply
0 recast
1 reaction

jp 🦊🎩
@jpfraneto
what if the user spends that time reading something and you display the result on the next frame?
1 reply
0 recast
2 reactions

horsefacts
@horsefacts.eth
Yeah, async patterns are still underexplored, hopefully this constraint leads to some creativity. For some things you might save a claim check server side and let the user come back for it or notify when it's ready. (Could do this with a reply like the cookie bot).
1 reply
0 recast
1 reaction
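The claim-check idea above can be sketched in a few lines. This is a minimal, hypothetical TypeScript sketch, not any particular frame framework's API: the first button press kicks off the slow LLM call and immediately returns a claim ID (so the frame server responds within the 5-second timeout), and a later button press redeems the claim. `callLLM` is a made-up stand-in for the real model call, and the in-memory `Map` stands in for a persistent store such as Redis.

```typescript
// Sketch of the claim-check pattern for a frame that calls an LLM.
// All names here (callLLM, startJob, checkJob) are illustrative, not a real API.
type Job = { status: "pending" | "done"; result?: string };
const jobs = new Map<string, Job>(); // a real server would persist this

// Hypothetical stand-in for a slow LLM call.
async function callLLM(prompt: string): Promise<string> {
  await new Promise((r) => setTimeout(r, 50)); // simulate model latency
  return `echo: ${prompt}`;
}

// First button press: start the job and respond before the timeout.
function startJob(prompt: string): string {
  const id = Math.random().toString(36).slice(2);
  jobs.set(id, { status: "pending" });
  // Fire and forget; the result lands in the store when ready.
  callLLM(prompt).then((result) => jobs.set(id, { status: "done", result }));
  return id; // embed this claim ID in the next frame's state
}

// Follow-up button press: redeem the claim if the result is ready.
function checkJob(id: string): Job {
  return jobs.get(id) ?? { status: "pending" };
}
```

The "come back for it" and "notify when it's ready" variants differ only in who initiates the second request: the user pressing a refresh button, or a bot reply carrying the claim ID.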

kevin
@kevinoconnell
GPT-3.5 Turbo?
1 reply
0 recast
1 reaction

Matt
@mane
5 second timeout with request responses?
1 reply
0 recast
0 reaction

vrypan |--o--|
@vrypan.eth
Is this a 5sec limit to the response time?
1 reply
0 recast
0 reaction

Pierre Pauze 🔵 🚽
@pierrepauze
What would you wanna do with an LLM?
0 reply
0 recast
0 reaction

Lucas Baker
@alpha
Only if you're relying on a generic GPT-4 API! Try running Mistral locally and observe the possibilities
0 reply
0 recast
0 reaction