Shane Glynn (@cno):
Reading a lot of “AI image vs cell phone camera image” stuff (like from @vgr earlier today) and… hoo boy… how do I explain this… :-)

Shane Glynn (@cno):
The image straight off a cell phone camera sensor has only a tangential relationship to what you see on the screen. And that’s before adding “filters”. You’re not comparing an AI image to a photo. You’re comparing two AI images to each other, which is why they look similar.
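For context on the claim above: even the pre-ML, fixed-function part of a phone's image signal processor reshapes the sensor data substantially. A toy sketch in Python/numpy; the RGGB Bayer layout, white-balance gains, and gamma here are made-up illustrative values, not any particular phone's tuning, and real pipelines stack learned stages (multi-frame HDR fusion, learned denoising, semantic tone mapping) on top of these steps:

```python
import numpy as np

# Toy ISP: the classic fixed-function steps that sit between a phone
# sensor and the screen, before any ML stage runs. All constants here
# are illustrative assumptions.

def demosaic_nearest(raw):
    """Expand an RGGB Bayer mosaic to RGB by nearest-neighbor fill."""
    h, w = raw.shape
    rgb = np.zeros((h, w, 3))
    rgb[0::2, 0::2, 0] = raw[0::2, 0::2]  # red photosites
    rgb[0::2, 1::2, 1] = raw[0::2, 1::2]  # green (even rows)
    rgb[1::2, 0::2, 1] = raw[1::2, 0::2]  # green (odd rows)
    rgb[1::2, 1::2, 2] = raw[1::2, 1::2]  # blue photosites
    for c in range(3):  # copy neighbors into the empty sites
        p = rgb[:, :, c]
        p = np.maximum(p, np.roll(p, 1, axis=0))  # wraps at borders;
        p = np.maximum(p, np.roll(p, 1, axis=1))  # fine for a sketch
        rgb[:, :, c] = p
    return rgb

def process(raw):
    rgb = demosaic_nearest(raw.astype(float) / 1023.0)  # 10-bit raw
    wb_gains = np.array([2.0, 1.0, 1.6])   # made-up white balance
    rgb = np.clip(rgb * wb_gains, 0.0, 1.0)
    rgb = rgb ** (1 / 2.2)                 # sensor is linear; displays aren't
    return (rgb * 255).round().astype(np.uint8)

rng = np.random.default_rng(0)
fake_raw = rng.integers(0, 1024, size=(8, 8))  # stand-in sensor frame
print(process(fake_raw))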

Venkatesh Rao ☀️ (@vgr):
Bullshit. One is generated from a prompt. The other starts with a real raw image. The CNN's inference process has some cosmetic similarities to filter convolutions, but they are entirely different pipelines. This is like saying cows and leather handbags have similar textures.

Shane Glynn (@cno):
Three quick things: 1. Go swear someplace else. 2. You really think all that extra ML silicon in SoCs is for filter convolutions? 3. Here’s a closeup from the “real photo”. Do you think the chef has those lines across his head IRL? https://i.imgur.com/BAXGMuv.jpg

Shane Glynn (@cno):
My point is that we have a solid decade of passing ML-manipulated images off as “real”, and they are the only photos most people take, so of course people are going to have a hard time differentiating between ML-generated and ML-edited photos.

Shane Glynn (@cno):
There’s a parallel to humans failing other humans on the Turing test. We trained people to act like reactionary trolls, then we trained models on that, so *of course* it’s hard to differentiate the people acting like trolls from the models acting like people acting like trolls.