Justin Hunter
@polluterofminds
I’ve been trying some of the larger context window models locally using Ollama, and my takeaway is that these models still perform poorly when given a lot of information. Chunking and summarization perform significantly better, even when the token window is large enough for the entire document.
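
A minimal sketch of the chunk-and-summarize pattern described above, assuming the `ollama` Python client and a locally pulled model. The model name, chunk size, prompts, and function names are illustrative assumptions, not details from the post.

```python
import ollama

MODEL = "llama3.1"   # assumption: any locally pulled Ollama model works here
CHUNK_SIZE = 2000    # assumption: rough character budget per chunk


def chunk_text(text: str, size: int = CHUNK_SIZE) -> list[str]:
    """Split the document into fixed-size character chunks."""
    return [text[i:i + size] for i in range(0, len(text), size)]


def summarize_chunk(chunk: str) -> str:
    """Ask the local model to summarize a single chunk."""
    response = ollama.chat(
        model=MODEL,
        messages=[{
            "role": "user",
            "content": f"Summarize the following text concisely:\n\n{chunk}",
        }],
    )
    return response["message"]["content"]


def answer_over_document(document: str, question: str) -> str:
    """Summarize each chunk, then answer over the combined summaries
    instead of stuffing the entire document into the context window."""
    summaries = [summarize_chunk(c) for c in chunk_text(document)]
    combined = "\n\n".join(summaries)
    response = ollama.chat(
        model=MODEL,
        messages=[{
            "role": "user",
            "content": f"Using these summaries:\n\n{combined}\n\nAnswer this question: {question}",
        }],
    )
    return response["message"]["content"]
```

The tradeoff is extra round trips per chunk in exchange for keeping each prompt small, which is the behavior the post reports working better than a single long-context call.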