Harris pfp
Harris
@harris-
had a stupid idea and made a stupid thing in case you want to test system prompts with a local llama3.2 3b model (maybe different ones in the future, but running off CPU only for now). Currently it doesn't follow up on any messages, but it could get the job done for some basic summary or similar (reply to this cast with ideas for other input prompts and I'll throw them in for future initial posts). note: there's a 1024 limit due to the cast length. it may crash if there is unicode and the length approaches that limit, because of how the length is calculated on either side, but idk, I haven't really tested it and I didn't think the silliness was worth the extra effort (it already took way more effort than it was worth to get to this toy level). https://warpcast.com/~/conversations/0xe5aa346dcc5baab2fdd328c0b95fdc0a39039ab9
2 replies
1 recast
2 reactions

Jorge Pablo Franetovic 🎩 pfp
Jorge Pablo Franetovic 🎩
@jpfraneto.eth
man, the way you speak about the consequences of your genius made me feel weird physical reactions. they are not stupid, and this is awesome
1 reply
0 recast
2 reactions

Harris pfp
Harris
@harris-
I should've said silly (in the whimsical sense), not stupid, though without testing my code much I couldn't be sure whether the implementation was broken in some obvious way. The sunk cost also made me feel that way, at least: I had to work across like 3 different codebases to put it all together, and there are a lot of inefficiencies at each step of the way for something that seems like the most foundational of things πŸ˜… Thank you for the kind words πŸ™
1 reply
0 recast
1 reaction

Jorge Pablo Franetovic 🎩 pfp
Jorge Pablo Franetovic 🎩
@jpfraneto.eth
yep. silly is less self-sabotaging than stupid imho. but there are a lot of things that I see happening there: you created a new fid. how? you are listening for replies to the cast. how? you are talking to an llm api. how? the bot didn't reply to me as I expected tho
1 reply
0 recast
1 reaction
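On the "talking to an llm api" question: the thread doesn't say which runtime Harris used to serve llama3.2 3b on CPU. As one plausible sketch, assuming a local Ollama server on its default port (the URL, model tag, and function names here are all assumptions, not the bot's actual code), a one-shot system-prompt test looks like:

```python
import json
import urllib.request

# Assumed: a local Ollama server exposing its chat endpoint on the default port.
OLLAMA_URL = "http://localhost:11434/api/chat"

def build_request(system_prompt: str, user_prompt: str) -> dict:
    # One-shot exchange: a system prompt plus a single user message,
    # with no follow-up turns (matching the bot's current behavior).
    return {
        "model": "llama3.2:3b",
        "messages": [
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": user_prompt},
        ],
        "stream": False,  # return one complete JSON response, not a stream
    }

def ask(system_prompt: str, user_prompt: str) -> str:
    body = json.dumps(build_request(system_prompt, user_prompt)).encode("utf-8")
    req = urllib.request.Request(
        OLLAMA_URL, data=body, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["message"]["content"]
```

The reply text from `ask(...)` would then need the byte-limit handling discussed earlier before being posted back as a cast.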