Jordan Messina
@jomessin
Our contrarian take with Nash is that you only need one MCP. These models are so good at writing code that, given the ability to execute arbitrary code and access to up-to-date, LLM-friendly API docs, they can do anything. I bought the play-by-play data for the current NBA season for $50 and signed up for the free tier at the-odds-api.com. With just that, the Nash MCP was able to:
- get the current DraftKings lines for tonight's games (the-odds-api.com)
- get the injury reports for the games (not exactly sure how it got these)
- find the play-by-play data on my desktop
- build a model
- show me the +EV bets to make

Check out the demo video (two minutes at 2x). More info coming soon. https://www.loom.com/share/71549a6d6a6d4b2f937926112347d9e8?sid=25bd9d92-dd24-421f-9b1a-4c8e362b9695
4 replies
3 recasts
38 reactions
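The last step in the list above ("show me the +EV bets") can be sketched in a few lines of Python. This is a minimal illustration, not Nash's actual model: the teams, lines, and model probabilities are made up, and the real workflow would pull the lines from the-odds-api.com.

```python
# Sketch: flag +EV bets by comparing hypothetical model win
# probabilities to DraftKings moneyline prices.

def american_to_decimal(american: int) -> float:
    """Convert American odds (e.g. -150, +130) to decimal odds."""
    if american > 0:
        return 1 + american / 100
    return 1 + 100 / -american

def expected_value(model_prob: float, american: int, stake: float = 1.0) -> float:
    """EV of a bet: p * profit - (1 - p) * stake."""
    dec = american_to_decimal(american)
    return model_prob * (dec - 1) * stake - (1 - model_prob) * stake

# Made-up lines and model outputs for one game.
lines = {"Celtics": -150, "Knicks": 130}
model_probs = {"Celtics": 0.65, "Knicks": 0.35}

positive_ev = {
    team: round(expected_value(model_probs[team], price), 4)
    for team, price in lines.items()
    if expected_value(model_probs[team], price) > 0
}
print(positive_ev)
```

With these numbers the model likes the Celtics side (EV ≈ +0.083 per unit staked) and passes on the Knicks.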

1dolinski
@1dolinski
Say you wanted these bets to go into Google Sheets, would you get to 2 MCPs?
1 reply
0 recast
0 reaction

Jordan Messina
@jomessin
No. Nash would do that via code + API
1 reply
0 recast
0 reaction
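A sketch of what that "code + API" path could look like, assuming the standard Google Sheets v4 `values:append` endpoint. The spreadsheet ID and row data are placeholders, and actually sending the request would also need an OAuth access token; this only builds it.

```python
# Sketch: push bet rows into Google Sheets with plain code + the
# Sheets REST API, no dedicated Sheets MCP. Builds (but does not
# send) a values:append request.
import json
from urllib.parse import quote

def build_append_request(spreadsheet_id: str, sheet_range: str, rows: list) -> tuple:
    """Return (url, body) for a Sheets v4 values:append call."""
    url = (
        "https://sheets.googleapis.com/v4/spreadsheets/"
        f"{spreadsheet_id}/values/{quote(sheet_range)}:append"
        "?valueInputOption=RAW"
    )
    body = json.dumps({"values": rows})
    return url, body

url, body = build_append_request(
    "SPREADSHEET_ID",  # placeholder, not a real spreadsheet
    "Sheet1!A1",
    [["Celtics ML", "-150", "+0.083 EV"]],
)
```

Sending it is then one authenticated POST, which the agent can write on the fly.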

1dolinski
@1dolinski
Ok, so why need an MCP at all then?
1 reply
0 recast
0 reaction

Jordan Messina
@jomessin
Good question. What is your experience like with no MCP?
1 reply
0 recast
0 reaction

1dolinski
@1dolinski
For sure, but in those cases why not use code and an API as well? I’m a huge MCP fan; I believe we’ll likely have many, since there are many API providers.
2 replies
0 recast
1 reaction

Jordan Messina
@jomessin
Can you elaborate? Given the example, why would the-odds-api.com make a one-off MCP?
1 reply
0 recast
0 reaction

1dolinski
@1dolinski
Let’s say "one-off" means: why would they implement it? All sorts of reasons they might add an MCP:
- it’s a natural extension of their API, the same reason you’re doing it, but now it’s accessible from them directly
- they can add it as an AI offering and lead the way in marketing
- they can react faster than aggregators to changes/indexing of their schema
1 reply
0 recast
1 reaction

Jordan Messina
@jomessin
“coming from them directly” - do you mean from a trust perspective? Meaning you’re likely to only use MCPs built by the company whose API the MCP is wrapping? I think this is fair in the short term. A week ago I would have disagreed, but I’ve seen so much hype around the Supabase MCP. They are definitely profiting from jumping on the opportunity.

Long term, excellent llms.txt will win. LLMs don’t need a million MCPs; they need the Google for llms.txt, and then they can do anything.

If their schema changes, their docs need to change. If they have a good llms.txt, then the model keeps up with their schema (and the model is just using their API directly, not a one-off MCP).
1 reply
0 recast
0 reaction
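For context, llms.txt is a proposed convention: a markdown file served at `/llms.txt` that gives models a curated map of a site's docs and API. A hypothetical fragment for an odds provider might look like the following; the endpoint descriptions are illustrative, not the-odds-api.com's actual file (it may not publish one).

```markdown
# The Odds API

> JSON REST API for live sports odds across bookmakers.

## API docs

- /v4/sports: list available sports and their keys
- /v4/sports/{sport}/odds: current odds, filterable by region, market, and bookmaker
- Authentication: pass your key as the `apiKey` query parameter
```

The argument in the post above is that a file like this, kept current, lets the model call the API directly without a per-provider MCP.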

1dolinski
@1dolinski
If trust means stability, then yes, that's one way of looking at it. There is a world with llms.txt on each service; however, there needs to be validation so minor deviations don't slip in, or the AI needs a memory to community-note itself and recognize the deviation.

My current definition set is:
- OpenAPI -> handles specification
- Validation -> the API scope is understandable and does not deviate
- Connection -> a way to bring AIs together

A live MCP is basically all 3. The standards it enforces allow AIs to move faster, similar to a USB-C port: the AI (the computer) can then connect with tools (phone, charger).
1 reply
0 recast
0 reaction
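The "Validation" piece of that definition set can be sketched in stdlib Python: check a live response against the shape the spec declares, so schema drift gets flagged instead of silently breaking the agent. The field names here are illustrative, not the-odds-api.com's actual schema, and a real setup would use a proper OpenAPI validator rather than this hand-rolled stand-in.

```python
# Sketch: detect schema drift by validating a response object
# against a minimal, spec-declared shape.

SPEC = {  # declared shape for one odds object (illustrative)
    "home_team": str,
    "away_team": str,
    "commence_time": str,
    "bookmakers": list,
}

def validate(obj: dict, spec: dict) -> list:
    """Return a list of deviations from the declared schema (empty = OK)."""
    errors = []
    for field, expected_type in spec.items():
        if field not in obj:
            errors.append(f"missing field: {field}")
        elif not isinstance(obj[field], expected_type):
            errors.append(f"{field}: expected {expected_type.__name__}")
    return errors

ok = {"home_team": "Celtics", "away_team": "Knicks",
      "commence_time": "2025-01-15T00:10:00Z", "bookmakers": []}
drifted = {"home_team": "Celtics", "away_team": "Knicks",
           "start_time": "2025-01-15T00:10:00Z", "bookmakers": []}  # renamed field

print(validate(ok, SPEC))       # no deviations
print(validate(drifted, SPEC))  # flags the missing commence_time
```

If the provider renames a field, the agent sees a validation error instead of quietly building a model on bad data.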

Jordan Messina
@jomessin
And when the API changes the MCP breaks?
2 replies
0 recast
1 reaction