shazow
@shazow.eth
Hot take: If models can't be a moat, then perhaps hidden agents can be? We've been practicing witchcraft incantations with system prompts and agent loops to squeeze out performance from the models we use. Things they couldn't do without additional context token repricing, or without additional steps of "reasoning". Are we headed to a world where only open models give "raw" access to make our own clever system prompts and agents, and AI platform providers continue to improve behind hidden agent systems like o1?
4 replies
35 recasts
28 reactions
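
[Note: a minimal sketch of the kind of "hidden agent" wrapper described above — a fixed system prompt plus a small loop around a raw chat model. It assumes an OpenAI-compatible chat API via the openai Python client; the prompt text, model name, and step budget are illustrative, not any provider's actual internals.]

```python
# Sketch of a "hidden agent": a system prompt + agent loop layered on top of a raw model.
# Prompt wording, model id, and step budget are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SYSTEM_PROMPT = (
    "You are a careful assistant. Think step by step in a scratchpad, then answer. "
    "If the answer is not final, end your message with CONTINUE."
)

def hidden_agent(user_question: str, max_steps: int = 3) -> str:
    messages = [
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": user_question},
    ]
    answer = ""
    for _ in range(max_steps):
        resp = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
        answer = resp.choices[0].message.content
        if "CONTINUE" not in answer:
            break  # model considers itself done; stop spending extra "reasoning" steps
        # Feed its own partial output back in as additional context, then loop again.
        messages.append({"role": "assistant", "content": answer})
        messages.append({"role": "user", "content": "Continue and finalize the answer."})
    return answer

print(hidden_agent("Is a hidden agent loop a moat?"))
```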
tina
@howdai
If models can't be a moat - then the moat goes towards complexity or owning distribution. This is why I think Sam Altman has been signaling towards investing in the infrastructure stack - aka if OpenAI models are now free, you would still pick OpenAI grid to run the models because it's optimized to do so. Hidden agents in this case could be a moat if it is a pathway to gaining / retaining distribution.
1 reply
0 recast
0 reaction
shazow
@shazow.eth
What's the moat if the interfaces are fungible and it's just a matter of shoving a different API key into an environment variable to switch to another provider?
1 reply
0 recast
0 reaction
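
[Note: a minimal sketch of the fungibility point — if two providers expose an OpenAI-compatible endpoint, switching really is just different environment variables. The env var names, base URL default, and model id are illustrative assumptions, not a specific provider's setup.]

```python
# Switching providers by "shoving a different API key into an environment variable".
# Assumes an OpenAI-compatible endpoint; env var names and model id are hypothetical.
import os
from openai import OpenAI

client = OpenAI(
    base_url=os.environ.get("LLM_BASE_URL", "https://api.openai.com/v1"),
    api_key=os.environ["LLM_API_KEY"],
)

resp = client.chat.completions.create(
    model=os.environ.get("LLM_MODEL", "gpt-4o-mini"),
    messages=[{"role": "user", "content": "Hello from whichever provider is configured."}],
)
print(resp.choices[0].message.content)
```

Point LLM_BASE_URL, LLM_API_KEY, and LLM_MODEL at a different OpenAI-compatible provider and the calling code doesn't change at all.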
tina
@howdai
imo some version of consumer habit / whoever owns the interface - but agreed that moats erode in fungible environments
0 reply
0 recast
1 reaction