tani
@tanishq
Well, I don't want to keep jumping between different AI models as things progress and companies one-up each other. I wonder if most people will shift to aggregator wrappers that create a pipeline and allow easy switching between models, like LangChain does, but targeted at consumers instead.
1 reply
0 recast
0 reaction
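The switchable-pipeline idea above could be sketched roughly like this. This is a hypothetical consumer-facing aggregator, not LangChain's actual API: each backend is just a callable from prompt to completion, and the wrapper lets the user swap models without touching the rest of the pipeline. The stub backends stand in for real provider SDKs.

```python
from typing import Callable, Dict, Optional

class ModelAggregator:
    """Hypothetical aggregator wrapper: register model backends, switch freely."""

    def __init__(self) -> None:
        self._backends: Dict[str, Callable[[str], str]] = {}
        self._active: Optional[str] = None

    def register(self, name: str, backend: Callable[[str], str]) -> None:
        self._backends[name] = backend
        if self._active is None:
            self._active = name  # first registered model becomes the default

    def switch(self, name: str) -> None:
        # switching models is a one-liner for the consumer; the pipeline is unchanged
        if name not in self._backends:
            raise KeyError(f"unknown model: {name}")
        self._active = name

    def complete(self, prompt: str) -> str:
        return self._backends[self._active](prompt)

# usage with stub backends standing in for real providers
agg = ModelAggregator()
agg.register("model-a", lambda p: f"[a] {p}")
agg.register("model-b", lambda p: f"[b] {p}")
print(agg.complete("hello"))   # [a] hello
agg.switch("model-b")
print(agg.complete("hello"))   # [b] hello
```

The point of the sketch is that the switching cost lives entirely in the wrapper, so the "one-upping" between providers never leaks into the consumer's pipeline.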
m_j_r
@m-j-r.eth
100%. I'd extend this even further: perhaps the knowledge base is really static/global & outcomes are just recombinations of non-inferential elements, so wrappers could also bargain for lower cost by publicly aggregating as many instances of recombination as possible, rather than internalizing them into the models they chain together for their service.
1 reply
0 recast
0 reaction
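One way to read the "publicly aggregated recombination" idea is a shared cache sitting in front of the model: if an outcome has already been produced once, the wrapper serves it without paying for another inference. This is a minimal hypothetical sketch of that economics, not anything from the thread's linked projects.

```python
class CachedWrapper:
    """Hypothetical cost-saving wrapper: check a shared store before calling the model."""

    def __init__(self, backend):
        self.backend = backend
        self.cache = {}   # shared prompt -> output store (the "public aggregate")
        self.calls = 0    # how many times we actually paid for a model call

    def complete(self, prompt):
        if prompt not in self.cache:
            self.calls += 1
            self.cache[prompt] = self.backend(prompt)
        return self.cache[prompt]

# stub backend standing in for a paid model call
w = CachedWrapper(lambda p: p.upper())
w.complete("hi")
w.complete("hi")
print(w.calls)  # 1 — the repeated request was served from the shared store
```

If many wrappers share one such store, each recombination is paid for once across all of them, which is the bargaining leverage the post describes.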
tani
@tanishq
That makes even more sense! It won't work with closed models, though. But at the same time, aggregation over multiple open models might just work?
1 reply
0 recast
0 reaction
m_j_r
@m-j-r.eth
I assume closed models will pivot away from data like Common Crawl imminently: too much blowback from creatives, & small & open LLMs already ate that moat, unlike synthetic data. imho the "polymath generalist" vibe with a service like GPT is a red herring. the stochastic element just needs to reflect, plan, & reason from external data.
1 reply
0 recast
0 reaction
m_j_r
@m-j-r.eth
imho aggregation will occur as soon as the meta for open models switches to job-specific supervision over task-specific "appendages", both being considerably smaller LLMs with higher specificity for formats like RJP (https://paragraph.xyz/@metaend/introducing-rjp-technique) & embodied sensory data (e.g. RT-X)
0 reply
0 recast
0 reaction
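The "job-specific supervision over task-specific appendages" architecture could be sketched as a small router in front of specialist models. Everything here is hypothetical illustration (the specialist names and stubs are invented); the structural point is that the supervisor only plans and routes, while each smaller, higher-specificity model does the task work.

```python
# Hypothetical specialist "appendages": small models keyed by task type.
# Stub callables stand in for actual fine-tuned LLMs.
SPECIALISTS = {
    "code": lambda p: f"<code-model> {p}",
    "vision": lambda p: f"<vision-model> {p}",
}

def supervise(task: str, prompt: str) -> str:
    """Job-specific supervisor: route the request, don't do the task itself."""
    specialist = SPECIALISTS.get(task)
    if specialist is None:
        raise ValueError(f"no specialist registered for task: {task}")
    return specialist(prompt)

print(supervise("code", "write a sort"))  # <code-model> write a sort
```

Aggregation then happens naturally at the supervisor layer: swapping in a better specialist for one task never disturbs the others.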