Vitalik Buterin
@vitalik.eth
New way to encode a profile picture dropped: https://x.com/Ethan_smith_20/status/1801493585155526675 320 bits is basically a hash. Small enough to go on chain for every user.
5 replies
1621 recasts
6307 reactions
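A rough sketch of the arithmetic behind the 320-bit figure, assuming the encoder emits 32 discrete tokens drawn from a 1024-entry codebook (10 bits per token; the codebook size is an assumption, the thread only gives 32 tokens and 320 bits):

import math

num_tokens = 32                      # per the linked post
codebook_size = 1024                 # assumed; 2**10 entries gives 10 bits per token
bits_per_token = int(math.log2(codebook_size))
total_bits = num_tokens * bits_per_token   # 320 bits
total_bytes = total_bits // 8              # 40 bytes

# For comparison, a keccak256 hash (what Ethereum contracts typically store)
# is 256 bits / 32 bytes, so a 40-byte code is in the same ballpark,
# which is what makes per-user on-chain storage plausible.
print(total_bits, total_bytes)  # 320 40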
Thomas
@aviationdoctor.eth
Very cool concept for when high fidelity is unimportant (which it usually is for images that serve illustrative purposes). Also, I’d love to see the results of repeating this process, with 32 tokens sampled at each nth generation, in a progressively more hallucinogenic game of Chinese whispers
1 reply
21 recasts
132 reactions
Hocilef
@hocilef
What would be a case where that much high fidelity is actually needed?
1 reply
0 recast
1 reaction
Thomas
@aviationdoctor.eth
If you’re intending to show a picture of a particular individual or a news event as it happened, for instance, you don’t want the AI to extrapolate from a minimal baseline, or you’ll end up with something inaccurate and untruthful
1 reply
0 recast
1 reaction
Hocilef
@hocilef
I get it, but it would be the same with any lossy compression, AI or not. A human may not be able to tell the difference, while most pictures ending up on the Internet are filtered already.
1 reply
0 recast
1 reaction
Thomas
@aviationdoctor.eth
I think this is different. The lossy compression algos used today, such as JPEG, are very effective at keeping details, but at the cost of larger files. Here, the idea is to rebuild an image from just 320 bits, so the extrapolation is necessarily stochastic. The reconstruction goes in the opposite direction of compression, and in the process it can make up details that don’t exist in the original. That becomes sensitive if you’re showing the image of a political figure looking different from how they really look (imagine the current context around aging politicians being a matter of debate), or something outright wrong (imagine documenting the war in Gaza)
1 reply
0 recast
1 reaction
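A toy illustration of that last point, not the method from the linked post: a decoder that rebuilds an image from a tiny code has to sample the detail the code cannot pin down, so two decodes of the same 320-bit code can differ, whereas JPEG decoding of the same file is bit-identical every time. All names and shapes below are made up for illustration.

import numpy as np

def toy_decode(code_bits: np.ndarray, rng: np.random.Generator) -> np.ndarray:
    # Hypothetical decoder: deterministic low-frequency content derived from
    # the code, plus sampled high-frequency "detail" the code cannot pin down.
    base = np.repeat(code_bits.astype(float), 8)[:256].reshape(16, 16)
    detail = rng.normal(scale=0.3, size=base.shape)  # hallucinated detail
    return base + detail

code = np.random.default_rng(0).integers(0, 2, size=320)  # one fixed 320-bit code
img_a = toy_decode(code, np.random.default_rng(1))
img_b = toy_decode(code, np.random.default_rng(2))
print(np.allclose(img_a, img_b))  # False: same code, two different reconstructions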