shazow
@shazow.eth
Hot take: If models can't be a moat, then perhaps hidden agents can be? We've been practicing witchcraft incantations with system prompts and agent loops to squeeze out performance from the models we use. Things they couldn't do without additional context token repricing, or without additional steps of "reasoning". Are we headed to a world where only open models give "raw" access to make our own clever system prompts and agents, and AI platform providers continue to improve behind hidden agent systems like o1? 🍓
4 replies
35 recasts
28 reactions
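The "witchcraft incantations" shazow describes can be made concrete with a minimal sketch: a platform wraps the raw model behind a hidden system prompt and an iterative loop, exposing only a plain ask/answer interface. Everything here is hypothetical, assuming a stand-in `call_model` function in place of any real chat-completion API, and a `FINAL:` marker as an invented internal protocol.

```python
def call_model(messages):
    # Hypothetical raw model call; returns a string completion.
    # Stubbed here so the sketch is self-contained and runnable.
    return "FINAL: stubbed answer to: " + messages[-1]["content"]

# The end user never sees this prompt -- it is part of the platform's
# hidden scaffolding, not the caller's input.
HIDDEN_SYSTEM_PROMPT = (
    "You are a careful assistant. Think step by step. "
    "Prefix your answer with FINAL: when you are done."
)

def hidden_agent(user_query, max_steps=3):
    """Looks like a single model call from the outside; runs a hidden
    system prompt plus an iterative reasoning loop on the inside."""
    messages = [
        {"role": "system", "content": HIDDEN_SYSTEM_PROMPT},
        {"role": "user", "content": user_query},
    ]
    for _ in range(max_steps):
        reply = call_model(messages)
        if reply.startswith("FINAL:"):
            # Strip the internal protocol marker before returning,
            # so the scaffolding never leaks to the user.
            return reply[len("FINAL:"):].strip()
        # Otherwise feed the intermediate output back in and ask the
        # model to continue -- the extra "reasoning" steps shazow mentions.
        messages.append({"role": "assistant", "content": reply})
        messages.append({"role": "user", "content": "Continue."})
    return reply

print(hidden_agent("What is 2 + 2?"))
```

With a closed platform, only the outer `hidden_agent` interface is visible; with an open model, you can write and tune the loop and prompt yourself.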

Brent Fitzgerald
@bf
Yes, I’ve been thinking of it as the agent “secret sauce” which is anything that determines outward behavior, from super cutting edge stuff to just better system prompts or some proprietary fine tuning data. It’s easy to copy traditional UX. It’s going to be a lot harder to copy intelligence-based UX, which is determined by agent behavior.
1 reply
0 recast
1 reaction

tina
@howdai
If models can't be a moat, then the moat shifts toward complexity or owning distribution. This is why I think Sam Altman has been signaling toward investing in the infrastructure stack - i.e., even if OpenAI's models were free, you would still pick OpenAI's grid to run them because it's optimized to do so. Hidden agents in this case could be a moat if they are a pathway to gaining / retaining distribution.
1 reply
0 recast
0 reaction

Raz
@tzumby
The model is becoming a primitive now and all the hard work is moving into the hidden agent workflows you mentioned, 100%. But as models improve, I feel like some of those hidden agent flows will become obsolete and keep getting folded back into the main model.
0 reply
0 recast
0 reaction