July
@july
https://arxiv.org/pdf/1812.07252.pdf
1 reply
0 recast
0 reaction

𝚐𝔪𝟾𝚑𝚑𝟾
@gm8xx8
now I’m down the rabbit 🕳️ …again. currently reading https://language-to-reward.github.io 😂 thanks for sharing July
3 replies
0 recast
3 reactions

July
@july
MuJoCo seems interesting - specifically the MuJoCo MPC they talk about: "combining this with a real-time optimizer, MuJoCo MPC, empowers an interactive behavior creation experience where users can immediately observe the results and provide feedback to the system..." https://mujoco.readthedocs.io/en/stable/overview.html (rough sketch of that loop below)
0 reply
0 recast
1 reaction
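A minimal sketch of that interactive loop, under my own assumptions rather than the paper's actual code: a stand-in llm_generate_reward_params call maps a user instruction to reward parameters, and a toy random-shooting MPC on a 1D point mass plays the role MuJoCo MPC plays in the paper as the fast optimizer. The names, dynamics, and parameters here are all illustrative.

```python
# Hedged sketch of the "language to reward" pattern (not the paper's code):
# an LLM turns an instruction into reward parameters once, then a fast
# model-predictive optimizer searches for actions against that reward every step.
import numpy as np

def llm_generate_reward_params(instruction: str) -> dict:
    """Stand-in for the LLM call: map an instruction to reward parameters."""
    # In the real system this would be a prompted LLM emitting reward code/params.
    return {"target_pos": 1.0, "speed_penalty": 0.1}

def reward(state, params):
    pos, vel = state
    return -abs(pos - params["target_pos"]) - params["speed_penalty"] * abs(vel)

def step(state, action, dt=0.05):
    """Toy 1D point-mass dynamics standing in for the simulator."""
    pos, vel = state
    return np.array([pos + vel * dt, vel + action * dt])

def mpc_action(state, params, horizon=20, samples=256):
    """Toy random-shooting MPC: sample action sequences, keep the best first action."""
    best_a, best_ret = 0.0, -np.inf
    for _ in range(samples):
        seq = np.random.uniform(-1, 1, horizon)
        s, ret = state.copy(), 0.0
        for a in seq:
            s = step(s, a)
            ret += reward(s, params)
        if ret > best_ret:
            best_ret, best_a = ret, seq[0]
    return best_a

params = llm_generate_reward_params("move the mass to x = 1 and stop there")
state = np.array([0.0, 0.0])
for _ in range(100):                      # fast inner loop; the LLM sits outside it
    state = step(state, mpc_action(state, params))
print("final state:", state)
```

The point of the split is that the user (via the LLM) only touches the reward, while the optimizer re-solves fast enough that edits show up immediately.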

July
@july
also, higher levels of autonomy (motion planning and above) run on longer loops, so they can work with something like an LLM generating commands. obviously the lower-level control loops are not going to be fast enough for an LLM to generate commands - which makes a lot of sense, as the authors discuss (toy sketch of the two loop rates below)
0 reply
0 recast
0 reaction
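A toy illustration of those two loop rates, with assumed numbers (100 Hz low-level control, a re-plan every 2 s of sim time where an LLM call could plausibly fit). slow_planner and fast_controller are hypothetical stand-ins, not anything from the paper.

```python
# Hedged sketch of the loop-rate point: a slow "planner" (where an LLM could sit,
# emitting setpoints every couple of seconds) wrapped around a fast low-level
# controller (a simple PD loop here) that has to run every tick.
CONTROL_HZ = 100          # low-level loop: far too fast to put an LLM inside it
PLAN_EVERY_STEPS = 200    # re-plan every 2 s of sim time: plausible for an LLM call

def slow_planner(t: float) -> float:
    """Stand-in for an LLM / motion planner returning the next position setpoint."""
    return 1.0 if t < 4.0 else -1.0

def fast_controller(pos: float, vel: float, setpoint: float,
                    kp: float = 5.0, kd: float = 3.0) -> float:
    """Low-level PD controller; runs every control tick."""
    return kp * (setpoint - pos) - kd * vel

pos, vel, setpoint = 0.0, 0.0, 0.0
dt = 1.0 / CONTROL_HZ
for step_i in range(800):
    t = step_i * dt
    if step_i % PLAN_EVERY_STEPS == 0:        # slow outer loop (planning rate)
        setpoint = slow_planner(t)
    accel = fast_controller(pos, vel, setpoint)  # fast inner loop (control rate)
    vel += accel * dt
    pos += vel * dt
print(f"t={t:.2f}s pos={pos:.2f} setpoint={setpoint}")
```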

July
@july
I think https://palm-e.github.io/ PaLM-E was the eye-opening moment for me
1 reply
0 recast
0 reaction