Jordan Messina
@jomessin
Our contrarian take with Nash is that you only need 1 MCP. These models are so good at writing code that, given the ability to execute arbitrary code and access to up-to-date, LLM-friendly API docs, they can do anything. I bought the play-by-play data for the current NBA season for $50 and signed up for the free tier at the-odds-api.com. With just that, the Nash MCP was able to:
- get the current DraftKings lines for tonight's games (the-odds-api.com)
- get the injury reports for the games (not exactly sure how it got these)
- find the play-by-play data on my desktop
- build a model
- show me the +EV bets to make
Check out the demo video (two minutes at 2x). More info coming soon. https://www.loom.com/share/71549a6d6a6d4b2f937926112347d9e8?sid=25bd9d92-dd24-421f-9b1a-4c8e362b9695
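The "+EV" step in the list above can be sketched in a few lines. This is not Nash's actual code, just a minimal illustration of the underlying math, assuming American-style odds like the ones DraftKings quotes:

```python
def american_to_implied(odds: int) -> float:
    """Convert American odds to the bookmaker's implied win probability."""
    if odds < 0:
        return -odds / (-odds + 100)
    return 100 / (odds + 100)

def expected_value(model_prob: float, odds: int, stake: float = 1.0) -> float:
    """EV per bet: (model win prob * payout) minus (model loss prob * stake)."""
    payout = stake * (100 / -odds if odds < 0 else odds / 100)
    return model_prob * payout - (1 - model_prob) * stake

# A bet is +EV when the model's probability beats the book's implied probability:
# the book offers +120 (implied ~45.5%), but the model says 50%.
print(expected_value(0.50, 120))  # positive, so this is a +EV bet
```

A model built from play-by-play data supplies `model_prob`; the odds API supplies `odds`.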
4 replies
3 recasts
39 reactions
1dolinski
@1dolinski
Say you wanted these bets to go into Google Sheets, would that get you to 2 MCPs?
1 reply
0 recast
0 reaction
Jordan Messina
@jomessin
No. Nash would do that via code + API
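The "code + API" route for Google Sheets would look roughly like this: the agent builds a plain REST call against the Sheets v4 `values:append` endpoint instead of needing a dedicated Sheets MCP. The spreadsheet ID, range, and token below are placeholders, not anything from this thread:

```python
import json

def sheets_append_request(spreadsheet_id: str, rows: list) -> dict:
    """Build the HTTP request an agent would send to append rows to a sheet.

    Uses the real Google Sheets v4 REST endpoint; the token and IDs are
    placeholders an agent would fill in at runtime.
    """
    return {
        "method": "POST",
        "url": (
            "https://sheets.googleapis.com/v4/spreadsheets/"
            f"{spreadsheet_id}/values/Sheet1!A1:append"
            "?valueInputOption=USER_ENTERED"
        ),
        "headers": {
            "Authorization": "Bearer <oauth-token>",
            "Content-Type": "application/json",
        },
        "body": json.dumps({"values": rows}),
    }

req = sheets_append_request("<spreadsheet-id>", [["Lakers ML", "+120", "0.10"]])
```

Any HTTP library (or generated code) can send `req`; no Sheets-specific MCP server is involved.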
1 reply
0 recast
0 reaction
1dolinski
@1dolinski
Ok, so why is an MCP needed at all then?
1 reply
0 recast
0 reaction
Jordan Messina
@jomessin
Good question. What's your experience like with no MCP?
1 reply
0 recast
0 reaction
1dolinski
@1dolinski
For sure. In those cases, why not use code and an API as well? I'm a huge MCP fan and believe we'll likely have many, since there are many API providers.
2 replies
0 recast
1 reaction
Jordan Messina
@jomessin
Can you elaborate? Given the example, why would the-odds-api.com make a one-off MCP?
1 reply
0 recast
0 reaction
jj 🛟
@jj
While building skeet.build I'm quickly finding out a few things:
1. Community MCP servers are half broken all the time and don't really work.
2. People think MCP servers are just wrappers around APIs, so they're designed poorly.
The best example is the Linear MCP: to move an issue from todo to in progress, the API requires the UUIDs of all the different states. If you plug in an OpenAPI v3 spec, your LLM will just say "I'm sorry, I don't have the proper inputs" and hang. Or you have to make something like 3 other API calls to do one task. It actually requires thought to make the experience just "work".
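The Linear example above is the core design point: a good MCP tool resolves the state-name-to-UUID lookup internally instead of making the model hunt for UUIDs. A minimal sketch of that idea, with hypothetical data shapes (Linear's real API is GraphQL and differs in detail):

```python
def move_issue(issue_id: str, state_name: str, workflow_states: list) -> dict:
    """Resolve a human state name like 'In Progress' to its UUID, then build
    the update payload, so the calling LLM never has to supply a UUID.

    workflow_states is assumed to be pre-fetched once by the MCP server,
    e.g. [{"name": "Todo", "id": "uuid-1"}, ...] (hypothetical shape).
    """
    by_name = {s["name"].lower(): s["id"] for s in workflow_states}
    state_id = by_name.get(state_name.lower())
    if state_id is None:
        # Fail with the valid options, so the model can self-correct
        # instead of hanging on "I don't have the proper inputs".
        raise ValueError(
            f"unknown state {state_name!r}; options: {sorted(by_name)}"
        )
    return {"issueId": issue_id, "stateId": state_id}
```

One tool call does the whole task; the extra API round-trips and the UUID bookkeeping live inside the server, not in the model's context.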
1 reply
0 recast
1 reaction