Colin
@colin
Prompt engineering is more challenging than expected. Learned this by building @paragraph.
- Token length gets too large, so we need to preprocess posts before summarization
- ChatGPT does a bad job of estimating the character count of its responses, so we often hit FC char limits (need retry logic)
- It often hallucinates
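Roughly the shape of the workaround, if anyone's curious (limits and function names below are placeholders, not our actual code):

```python
# Sketch only: trim the input to a token budget before summarizing,
# then retry when the model's reply blows past the cast char limit.
# FC_CHAR_LIMIT / MAX_INPUT_TOKENS / summarize() are illustrative.
import tiktoken

FC_CHAR_LIMIT = 320        # Farcaster cast limit (illustrative)
MAX_INPUT_TOKENS = 3000    # leave room for the completion

enc = tiktoken.encoding_for_model("gpt-3.5-turbo")

def preprocess(post: str) -> str:
    """Truncate the post to a token budget before asking for a summary."""
    tokens = enc.encode(post)
    return enc.decode(tokens[:MAX_INPUT_TOKENS])

def summarize_with_retry(post: str, summarize, max_attempts: int = 3) -> str:
    """Re-ask when the summary exceeds the cast character limit."""
    text = preprocess(post)
    for _ in range(max_attempts):
        summary = summarize(text, max_chars=FC_CHAR_LIMIT)
        if len(summary) <= FC_CHAR_LIMIT:
            return summary
    # last resort: hard truncate so the cast never exceeds the limit
    return summary[:FC_CHAR_LIMIT]
```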
4 replies
0 recast
0 reaction
Tom Winzig
@tomwinzig
Which model are you seeing hallucinations on? I'm just getting started building with GPT-4 and was hoping it would have fewer hallucinations
1 reply
0 recast
0 reaction
Noah Bragg 🔥
@nbragg
What are you using ChatGPT for in the business?
1 reply
0 recast
0 reaction
Giuliano Giacaglia
@giu
I’ve learned the same about token length. I’m building netbox and have been working on a prototype, and token length is def an issue. Only when building do you realize the bottlenecks of new tech
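Fwiw, one cheap way to spot the bottleneck before sending anything is to count tokens up front (tiktoken here is just one option; the limit below is the gpt-3.5-turbo context size at the time):

```python
# Illustrative only: measure how many tokens a prompt uses so you
# know when you're about to hit the model's context window.
import tiktoken

enc = tiktoken.encoding_for_model("gpt-3.5-turbo")

def token_count(text: str) -> int:
    return len(enc.encode(text))

prompt = "a long post you want summarized..."
if token_count(prompt) > 4096:  # context window, gpt-3.5-turbo era
    print("too long, needs preprocessing/chunking first")
```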
1 reply
0 recast
0 reaction
Nickolas
@franceschina
I've been experiencing this these last few days... attempting to feed in a SQL database schema and then asking it to generate test data. I have to do one table at a time or else it hallucinates. Eventually it starts drifting/hallucinating anyway, then needs corrections, and then I run out of requests and have to wait a few hours :|
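What it boils down to (totally simplified sketch; the prompt wording and `ask_model` helper are made up, not my actual setup):

```python
# Simplified sketch: generate test data one table at a time instead of
# sending the whole schema at once, which is where the drift started.
# `ask_model` stands in for whatever client call you're using.
def generate_test_data(tables: dict[str, str], ask_model) -> dict[str, str]:
    """tables maps table name -> CREATE TABLE DDL."""
    results = {}
    for name, ddl in tables.items():
        prompt = (
            "Here is one table from my schema:\n"
            f"{ddl}\n"
            "Generate 10 rows of realistic test data as INSERT statements."
        )
        results[name] = ask_model(prompt)  # one request per table
    return results
```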
0 reply
0 recast
0 reaction