ThinPicking
Mr Sausage expert, stick me in a room with a 60-inch 4K OLED displaying lossless output from a GAN at the display's native resolution and I'll tell you what's fake.
Give me the master and I’ll run it through one of the myriad GitHub projects capable of detecting GAN output to indisputably prove my results without any confirmation from you.
That's before we've even touched on my reference to the fact that the observer's determination of authenticity depends on more than the visual component. Assuming there are contextual, behavioural and audible features, those can also be detected computationally and perceptually (depending on the competency of the observer, which is really what's being tested). Expression and context are a big part of this. And if you have any idea how to seriously control for that in an experiment, paint it.
For example, let's say some sperg retard put out a deepfake of Joe Biden in the Oval Office denouncing the Fed, proposing to recapitalise the population of the US, reduce the tax code to a single sheet of A4 and make significant capital accumulation mathematically impossible (which could be done in a week). Do you think my determination of authenticity is going to be made at the moment I view it? And do you think my determination of authenticity is going to depend on any individual or combination of the visual, audible or behavioural components? Are you getting it yet?
Now, you can imagine and make statements about some infallible endgame of undetectable methods for synthetic image/audio generation all you want (PBR shares the same fate as these optical flow techniques, btw). The fact of the matter is this: it's always going to be visible to your mind and to tools you could easily obtain. Assuming your cognitive, auricular and ocular functions are intact. Assuming you haven't deferred them.