Giuliano Giacaglia
@giu
We're doing reinforcement learning from human feedback, but that's a super weak form of reinforcement learning. What would the equivalent of RLHF's reward model be in AlphaGo? It's what I call a vibe check. Imagine if you wanted to train AlphaGo with RLHF: you would give 2 people 2 boards and ask, "which one do you prefer?"
2 replies
7 recasts
34 reactions
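[The "vibe check" described above is the pairwise-preference reward model at the heart of RLHF: annotators pick which of two outputs they prefer, and a scorer is trained to rank the preferred one higher (the Bradley-Terry objective). Below is a minimal PyTorch sketch of that setup applied to the Go-board analogy; the RewardModel class, its dimensions, and the random "board embeddings" are all hypothetical illustrations, not from any actual AlphaGo or RLHF codebase.]

```python
import torch
import torch.nn as nn

# Hypothetical reward model: maps a fixed-size "board embedding"
# to a single scalar preference score.
class RewardModel(nn.Module):
    def __init__(self, dim: int = 128):
        super().__init__()
        self.score = nn.Sequential(nn.Linear(dim, 64), nn.ReLU(), nn.Linear(64, 1))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.score(x).squeeze(-1)

def preference_loss(model: RewardModel,
                    preferred: torch.Tensor,
                    rejected: torch.Tensor) -> torch.Tensor:
    # Bradley-Terry loss: maximize the probability that the
    # annotator-preferred sample scores higher than the rejected one.
    return -torch.nn.functional.logsigmoid(model(preferred) - model(rejected)).mean()

# One toy "which board do you prefer?" round: a batch of 4 comparisons,
# where the annotators preferred board A over board B.
model = RewardModel()
board_a = torch.randn(4, 128)  # preferred
board_b = torch.randn(4, 128)  # rejected
loss = preference_loss(model, board_a, board_b)
loss.backward()  # gradients now push A's scores up and B's down
```

[AlphaGo, by contrast, gets an exact reward from the game rules themselves: did you win or lose. The learned preference score above is only as reliable as the annotators' vibes, which is the cast's point.]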

yangwao ↑
@yangwao
Waiting till we have an LLM with Asperger's
0 replies
0 recasts
1 reaction