https://warpcast.com/~/channel/togethercrew
kbc
@kbc
Do you remember a world without GPT? It wasn't that long ago that everyone was complaining about AIs hallucinating. Useless for any research. Good for a fictional storyteller, bad for a helpful assistant. That's when we started looking at LLMs. Yes, December 2022, I checked our docs. The advantage of working with PhDs who love shiny new objects.
2 replies
0 recast
2 reactions
kbc
@kbc
At that time we were only looking at DAOs. Every remote-work company knows that Slack isn't the place to document stuff or make decisions. DAOs quickly learned the same lesson for Discord. Knowing this, we decided that summarising content is doable, but only when feeding the bot context-specific data (aka information about your DAO/project). Fail to do that and the LLM bot will confidently give you garbage.
1 reply
0 recast
1 reaction
kbc
@kbc
We knew that for an LLM-powered bot, the quality of the grounding data makes or breaks the AI. Grounding data is text you give your AI with the instruction "read and remember this; it is the most correct and relevant information you have access to" (an approach now known as RAG, retrieval-augmented generation).

But giving your LLM a ton of context will not automatically make it high quality. There's more work: the data itself needs to be high quality. And that's a challenge for us. Our AI works with whatever people post in Discord/Telegram. And if you have been in the Discord or Telegram of any infra project, I don't have to tell you that a lot of posts are 💩, and even those that aren't outright garbage are kinda spammy.

tldr: most of the data TogetherCrew analyses for communities isn't great for building high-quality AI agents. We need to trim a LOT of the fat to get data with high information density (aka meaningful and relevant information). And there was still more work to be done.
1 reply
0 recast
0 reaction
kbc
@kbc
Humans aren't the best at asking questions. Unless it's super specific, like "where is StoryProtocol's side event?", the first question is usually just an opening statement. Test this when talking with your friends. Knowing this, we designed our AI bot as a multi-agent system before people were talking about multi-agent systems: the main agent takes the user's prompt and comes up with follow-up questions, and each question is handed to a tiny sub-agent. Fun fact: we're adding another agent right now that decides which questions are worth answering and which can be ignored.
0 reply
0 recast
0 reaction
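The multi-agent flow described above can be sketched roughly like this. All three agent functions are stand-ins (in the real system each would call an LLM); the "ignore:" convention and the hardcoded questions are illustrative assumptions, not TogetherCrew's implementation.

```python
def main_agent(prompt: str) -> list[str]:
    # Expands the user's prompt into follow-up questions.
    # An LLM would generate these; hardcoded here for illustration.
    return [
        f"What is the user really asking with: '{prompt}'?",
        "What community context is relevant to this prompt?",
        "ignore: restate the prompt verbatim",
    ]

def triage_agent(question: str) -> bool:
    # The newest agent: decides which questions are worth answering.
    return not question.startswith("ignore:")

def sub_agent(question: str) -> str:
    # Each tiny sub-agent answers exactly one narrow question.
    return f"[answer to] {question}"

def answer(prompt: str) -> list[str]:
    questions = main_agent(prompt)
    worthy = [q for q in questions if triage_agent(q)]
    return [sub_agent(q) for q in worthy]

replies = answer("How do I get started contributing?")  # 2 of 3 questions survive triage
```

The design choice is that the user never has to ask the "right" question: the main agent decomposes the opening statement, and triage keeps the sub-agents from wasting tokens on questions not worth answering.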