Kyle Mathews
@kam
My increasing conviction is that LLMs are just another tool in the product/engineering/startup toolkit: they're too easy to use to be a moat. There are lots of tricks you can learn, but they're easy to learn and immediately & widely copied. So the alpha remains in product sense and solving hard problems.
4 replies
1 recast
10 reactions

Kyle Mathews
@kam
LLMs basically give you fuzzy logic functions. Individual functions are easy to write; the magic is in combining them.
1 reply
0 recast
1 reaction
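
To make that framing concrete, here's a minimal sketch that treats each LLM call as a small fuzzy function and composes them into a pipeline. The `llm` helper, the prompts, and the `triage` pipeline are all hypothetical illustrations, not any particular provider's API:

```python
# Hypothetical stand-in for a chat-completion call; swap in a real client.
# (Assumed helper, not any specific provider's API.)
def llm(prompt: str) -> str:
    raise NotImplementedError("wire up your model provider here")

# Each "fuzzy function" is just a prompt behind a typed signature.
def classify_sentiment(text: str) -> str:
    return llm(
        "Reply with exactly one word: positive, negative, or neutral.\n" + text
    ).strip().lower()

def summarize(text: str) -> str:
    return llm("Summarize in one sentence:\n" + text).strip()

# The magic is in the composition: individually trivial functions
# combined into a pipeline that encodes real product decisions.
def triage(ticket: str) -> dict:
    sentiment = classify_sentiment(ticket)
    return {
        "summary": summarize(ticket),
        "sentiment": sentiment,
        "escalate": sentiment == "negative",
    }
```

Each function on its own is trivial to replicate; the composition, and the product judgment about what to compose, is where any differentiation would live.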

July
@july
Honestly I couldn’t agree more. Foundation models are going to become commoditized over time, and there’s no way they’re going to be a moat in any capacity. As you said, the real products are going to emerge from careful thought and conviction behind product, and from solving real problems.
2 replies
1 recast
3 reactions

Callum Wanderloots ✨
@wanderloots.eth
Completely agree. This is where everyone got a bit misdirected imo with the GPTs. Wrapping an LLM just signals to the original model provider that the feature is worth implementing internally, and the wrapper instantly becomes less effective.
1 reply
0 recast
1 reaction

BrightFutureGuy 🎩🔮↑
@bfg
This might be missing the point that those LLMs might soon be better at building the product … and more importantly, better at finding which products to build 🫣🤷‍♂️
1 reply
0 recast
0 reaction