Colin
@colin
Prompt engineering is more challenging than expected. Learned this by building @paragraph.
- token length too large, so need to preprocess the post before summarization
- ChatGPT does a bad job at estimating the character count of its response, so we often hit FC char limits (need retry logic)
- often encounters hallucinations
4 replies
0 recast
0 reaction
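The retry logic Colin mentions could look something like the sketch below: since the model can't reliably hit a character budget, re-prompt with a smaller target until the output fits. All names and numbers here are hypothetical, not from @paragraph's actual code.

```python
# Hedged sketch of retry logic for hitting a hard character limit.
# `generate` is any callable that takes a target character count and
# returns a string (e.g. a wrapper around an OpenAI chat completion).

def generate_within_limit(generate, char_limit, max_retries=3):
    """Call `generate(target_chars)` until the output fits `char_limit`."""
    target = char_limit
    for _ in range(max_retries):
        text = generate(target)
        if len(text) <= char_limit:
            return text
        # Models tend to overshoot, so ask for less on each retry.
        target = int(target * 0.8)
    # Last resort: hard-truncate at a word boundary.
    text = generate(target)
    if len(text) <= char_limit:
        return text
    return text[:char_limit].rsplit(" ", 1)[0]
```

A real version would also bake the target length into the prompt ("respond in under N characters"), since that's the only lever the API gives you over output size besides `max_tokens`.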
Giuliano Giacaglia
@giu
I've learned the same about token length. I'm building netbox and have been working on a prototype. Token length is def an issue. Only when building do you realize the bottlenecks of new tech
1 reply
0 recast
0 reaction
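The preprocessing step both posters describe, trimming a long post to fit the context window before summarizing, might be sketched like this. The 4-characters-per-token ratio is a common rule of thumb, not exact; real code would count tokens with a tokenizer such as tiktoken.

```python
# Hypothetical pre-summarization trim: cut the input so its estimated
# token count fits the model's budget before sending it to the API.

def truncate_to_token_budget(text, max_tokens, chars_per_token=4):
    """Clip `text` so its estimated token count fits within `max_tokens`."""
    budget = max_tokens * chars_per_token
    if len(text) <= budget:
        return text
    clipped = text[:budget]
    # Prefer cutting at the last sentence boundary inside the budget.
    last_period = clipped.rfind(". ")
    return clipped[: last_period + 1] if last_period != -1 else clipped
```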
Giuliano Giacaglia
@giu
Btw are you using the OpenAI API or just an open source model? The problem I see with the OpenAI API is that there's no way to control a spending limit, as far as I can tell
1 reply
0 recast
0 reaction
Colin
@colin
I'm using the OpenAI API. They do have monthly spending limits you can enforce. IIRC $100/mo is the default, but we submitted a quota request to bump it
0 reply
0 recast
0 reaction