July
@july
8 replies
3 recasts
66 reactions

July
@july
it's like we're all running transformer models - sustained attention is difficult

focusing on incoming input using self-attention is easy for transformers; keeping attention on key pieces of info over long periods of time is difficult - mostly because of the limited context window, and also because that info gets diluted as it passes through many layers

makes me think of how LSTMs used to handle sustained attention: the cell state acted as a sort of memory and evolved with each time step

whereas RAG feels more like how humans do it - we remember that some specific thing happened, and query our knowledge base specifically about it
1 reply
1 recast
25 reactions
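A minimal toy sketch of the contrast July is drawing, written in Python with numpy (my own illustration, not code from the thread; the function names are made up for clarity): self-attention mixes everything inside a fixed window at once, an LSTM-style cell carries one evolving state forward step by step, and a RAG-style lookup queries an external store for one specific memory.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 8  # toy embedding size

def self_attention(window: np.ndarray) -> np.ndarray:
    """Attend over everything inside the (limited) context window at once."""
    q, k, v = window, window, window            # single head, no learned projections
    scores = q @ k.T / np.sqrt(d)               # every token compares itself to every other token
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ v                          # anything outside the window simply isn't there

def lstm_like_memory(inputs: np.ndarray) -> np.ndarray:
    """Carry one evolving cell state forward through every time step."""
    cell = np.zeros(d)
    for x in inputs:                            # sequential: the whole history is squeezed into one state
        forget = 1 / (1 + np.exp(-x.sum()))     # crude scalar gate, for illustration only
        cell = forget * cell + (1 - forget) * x
    return cell

def rag_like_lookup(query: np.ndarray, knowledge_base: np.ndarray) -> np.ndarray:
    """Remember that something happened, then query the store for the specifics."""
    sims = knowledge_base @ query               # similarity search over stored memories
    return knowledge_base[np.argmax(sims)]      # retrieve only the most relevant item

tokens = rng.normal(size=(16, d))               # a short "conversation"
print(self_attention(tokens[-4:]).shape)        # only the last 4 tokens fit the window
print(lstm_like_memory(tokens).shape)           # one compressed state for the whole history
print(rag_like_lookup(tokens[3], tokens).shape) # pull back a specific past moment
```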

MOΞ
@moe
google’s greatest contribution to humanity.
1 reply
0 recasts
2 reactions

Leo
@lsn
these people did not sustain attention at google
1 reply
0 recasts
1 reaction

Chukwuka Osakwe
@chukwukaosakwe
woah. the near guy worked at google?
1 reply
0 recasts
1 reaction

notdevin
@notdevin.eth
“Attention Viagra Is All You Need”
0 replies
0 recasts
1 reaction

Ben - [C/x]
@benersing
Classic
0 replies
0 recasts
1 reaction

Naomi
@naomiii
Also the rarest and purest form of generosity according to Simone Weil.
0 replies
0 recasts
1 reaction

↑ j4ck 🥶 icebreaker.xyz ↑
@j4ck.eth
/microsub tip: 277 $DEGEN
1 reply
0 recasts
0 reactions