jacopo pfp
jacopo
@jacopo
cursor's ability to index docs is extremely powerful. while some docs sites are entirely indexable from a single entry point (foundry, gh readmes), others require more effort (wagmi @jxom). how do we optimize documentation for indexing? seems increasingly important as we rely more and more on ai for this
3 replies
5 recasts
20 reactions

weeb3dev pfp
weeb3dev
@weeb3dev
1. Easily crawlable structure
2. Turn the doc site into an API to call?
3. Enable export of docs into well-formatted output via Markdown file(s) or a JSON schema w/ URLs of the site

you might find this firecrawl + cursor workflow helpful: https://x.com/daniel_mac8/status/1871524305684623602?s=46&t=W5NKnRc0GRxYHUY6v-G-ZA
0 reply
0 recast
1 reaction
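The export idea above (Markdown files plus a JSON schema of URLs) could be sketched roughly like this; the URLs, page contents, and the `build_index` helper are all made-up placeholders, not any real site's API:

```python
import json

# Hypothetical doc pages keyed by URL. In a real setup these would come
# from your static-site generator's build output.
pages = {
    "https://example.com/docs/getting-started": "# Getting Started\n\nInstall the package...",
    "https://example.com/docs/api": "# API Reference\n\n## connect()\n\n...",
}

def build_index(pages):
    """Build a JSON index mapping each URL to its Markdown source,
    so a crawler (or an AI tool) can fetch everything from one entry point."""
    return {
        "version": 1,
        "docs": [{"url": url, "markdown": md} for url, md in pages.items()],
    }

index = build_index(pages)
print(json.dumps(index, indent=2))
```

Serving a single JSON file like this gives an indexer one stable entry point instead of forcing it to crawl the whole site.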

Tangled Circuit pfp
Tangled Circuit
@tangledcircuit
Save the site as a complete website ("save as, full"), write a parser in Python, and parse the website docs into a Markdown file. Add the Markdown as a knowledge base, or just keep it in a documentation folder and have a .cursorrules file that specifically uses it as a golden rule. Most indexing seems to work well now, but for the others I do this.
0 reply
0 recast
1 reaction
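The save-and-parse workflow above could be sketched with Python's stdlib html.parser; the `DocToMarkdown` class and sample HTML are hypothetical, and a real converter would also need to handle links, lists, and code blocks:

```python
from html.parser import HTMLParser

class DocToMarkdown(HTMLParser):
    """Tiny HTML -> Markdown converter: headings and paragraphs only."""

    def __init__(self):
        super().__init__()
        self.out = []      # collected markdown lines
        self.prefix = ""   # pending markdown prefix for the next text node

    def handle_starttag(self, tag, attrs):
        if tag in ("h1", "h2", "h3"):
            # <h2> becomes "## ", etc.
            self.prefix = "#" * int(tag[1]) + " "
        elif tag == "p":
            self.prefix = ""

    def handle_data(self, data):
        text = data.strip()
        if text:
            self.out.append(self.prefix + text)
            self.prefix = ""

    def markdown(self):
        return "\n\n".join(self.out)

parser = DocToMarkdown()
parser.feed("<h1>Guide</h1><p>Install it.</p><h2>Usage</h2><p>Run it.</p>")
print(parser.markdown())
# → # Guide
#
#   Install it.
#
#   ## Usage
#
#   Run it.
```

Run this over each saved HTML page, concatenate the results into one Markdown file, and point the .cursorrules file at the folder containing it.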

r4to pfp
r4to
@r4topunk
maybe vector sitemaps (i'm a newbie on the subject, maybe i'm talking shit)
0 reply
0 recast
1 reaction