Diffusion-generated image detection
Consideration
- generalization: evaluate detectors on images generated by architectures unseen during training
- Some special features (forensic traces) may be removed when image quality is degraded in the real world, e.g. by compression (Paper 01)
- Frequency analysis is important (Papers 00 and 01 both use it), but DM artifacts are hard to detect in the frequency domain
- DIRE is a solid baseline, and its experiments are thorough.
Summary
Dataset
GenImage: A Million-Scale Benchmark for Detecting AI-Generated Image: images generated by GANs and diffusion models.
Detection
Towards the Detection of Diffusion Model Deepfakes: Ruhr University Bochum + CISPA
On the detection of synthetic images generated by diffusion models: University Federico II of Naples + NVIDIA
DIRE for Diffusion-Generated Image Detection: USTC + MSRA
Exposing the Fake: Effective Diffusion-Generated Images Detection: UESTC
Detecting Images Generated by Deep Diffusion Models using their Local Intrinsic Dimensionality: Heidelberg University
DE-FAKE: Detection and Attribution of Fake Images Generated by Text-to-Image Generation Models: CISPA + Salesforce
Generalizable Synthetic Image Detection via Language-guided Contrastive Learning: University of Macau
Other reference
Self-supervised Learning of Adversarial Example: Towards Good Generalizations for Deepfake Detection
OST: Improving Generalization of DeepFake Detection via One-Shot Test-Time Training
Are Diffusion Models Vulnerable to Membership Inference Attacks? (related to Exposing the Fake: Effective Diffusion-Generated Images Detection)
Hierarchical Fine-Grained Image Forgery Detection and Localization
Paper 00 - Towards the Detection of Diffusion Model Deepfakes
Contribution
- Shows a structural difference between images synthesized by GANs and by DMs (diffusion models): GAN-generated images carry characteristic patterns, while DM-generated images are harder to detect
- Detection by frequency-domain analysis: spectral artifacts are obvious in GAN-generated images but not in DM-generated images
- Proposes an evaluation metric: Pd@FAR
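The frequency-domain analysis above can be sketched in a few lines: average the centred log-magnitude FFT spectra over a batch of grayscale images; GAN upsampling artifacts show up as off-centre peaks, while DM spectra are much flatter. A minimal numpy sketch (random arrays stand in for real image batches):

```python
import numpy as np

def mean_log_spectrum(images):
    """Average centred log-magnitude spectrum over a batch of grayscale images.

    Per Paper 00: GAN-generated images leave visible spectral peaks here,
    while DM-generated spectra show far weaker artifacts.
    """
    spectra = []
    for img in images:
        # remove the DC offset so it does not dominate the spectrum
        f = np.fft.fftshift(np.fft.fft2(img - img.mean()))
        spectra.append(np.log1p(np.abs(f)))
    return np.mean(spectra, axis=0)

rng = np.random.default_rng(0)
batch = rng.random((8, 64, 64))   # placeholder for a batch of images
spec = mean_log_spectrum(batch)
```

In practice the spectrum is usually computed on a denoising residual rather than the raw image, which makes the artifacts stand out more clearly.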
Consideration
- Table 2 shows that existing detectors trained on the LSUN-Bedroom dataset still have room for improvement, especially on large datasets such as ImageNet. Pd@FAR is the probability of detection at a fixed false-alarm rate.
- How should the classifier be evaluated? Is Pd@FAR reasonable, and is there a metric better than Pd@FAR?
- Can we propose a universal detection framework that detects both GAN- and DM-generated images?
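Pd@FAR is straightforward to compute from raw detector scores: set the threshold at the (1 − FAR)-quantile of the real-image scores, then read off the detection rate on the fakes. A minimal sketch, with synthetic Gaussian scores standing in for a real detector's outputs:

```python
import numpy as np

def pd_at_far(scores_real, scores_fake, far=0.05):
    """Probability of detection (TPR on fakes) when the threshold is set
    so that at most `far` of the real images are falsely flagged."""
    thr = np.quantile(scores_real, 1.0 - far)
    return float(np.mean(scores_fake > thr))

rng = np.random.default_rng(0)
real = rng.normal(0.0, 1.0, 10_000)   # synthetic detector scores on real images
fake = rng.normal(2.0, 1.0, 10_000)   # synthetic detector scores on fakes
pd5 = pd_at_far(real, fake, far=0.05)
```

The appeal over raw accuracy or AUROC is operational: it reports performance at exactly the false-alarm budget a deployment can tolerate, instead of averaging over all thresholds.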
Paper 01 - On the detection of synthetic images generated by diffusion models
Contribution
- Partially demonstrates that DM-generated images also carry distinctive fingerprints.
- Shows that detection performance drops sharply when images are compressed (e.g. JPEG compression)
Consideration
- The disappearance of forensic features in compressed images is a good point to explore –> how to deal with it?
- A fixed threshold cannot be used; Pd@FAR is at least better than that.
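Why compression hurts can be illustrated with a crude proxy: real JPEG quantizes 8×8 DCT blocks, but its dominant effect is suppressing high frequencies, which is where the forensic traces live. The sketch below models compression as an FFT low-pass mask (an assumption for illustration, not actual JPEG) and measures how much high-frequency energy survives:

```python
import numpy as np

def lowpass(img, keep=0.25):
    """Crude compression proxy: zero all but the lowest `keep` fraction of
    frequencies. Real JPEG quantizes 8x8 DCT blocks, but the effect on
    high frequencies is similar in spirit."""
    f = np.fft.fftshift(np.fft.fft2(img))
    h, w = img.shape
    mask = np.zeros_like(f)
    ch, cw = int(h * keep / 2), int(w * keep / 2)
    mask[h // 2 - ch: h // 2 + ch, w // 2 - cw: w // 2 + cw] = 1
    return np.fft.ifft2(np.fft.ifftshift(f * mask)).real

def highfreq_energy(img):
    """Energy outside the central (low-frequency) half-band of the spectrum."""
    f = np.fft.fftshift(np.fft.fft2(img))
    h, w = img.shape
    f[h // 4: 3 * h // 4, w // 4: 3 * w // 4] = 0
    return float(np.sum(np.abs(f) ** 2))

rng = np.random.default_rng(0)
img = rng.random((64, 64))            # placeholder for an image residual
e_before = highfreq_energy(img)
e_after = highfreq_energy(lowpass(img, keep=0.25))
```

Any detector relying on that high-frequency band loses its signal once `e_after` collapses, which matches the performance drops Paper 01 reports under compression.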
Paper 02 - DIRE for Diffusion-Generated Image Detection
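DIRE (Diffusion Reconstruction Error) scores an image by how well a pretrained diffusion model reconstructs it via DDIM inversion plus reverse sampling: diffusion-generated images round-trip almost perfectly, real images do not, and a binary classifier is trained on the resulting error maps. The model round trip is replaced below by placeholder functions (a box blur stands in for an imperfect reconstruction), so this is only a sketch of the quantity, not the method:

```python
import numpy as np

def dire_map(x0, reconstruct):
    """DIRE = per-pixel |x0 - x0'|, where x0' is the image reconstructed by
    DDIM inversion + reverse sampling. `reconstruct` is a hypothetical
    placeholder for that model-dependent round trip."""
    return np.abs(x0 - reconstruct(x0))

def blur(x):
    """3x3 box blur: crude stand-in for an imperfect reconstruction of a
    real image (NOT the actual diffusion round trip)."""
    k = np.ones((3, 3)) / 9.0
    padded = np.pad(x, 1, mode="edge")
    return sum(padded[i:i + x.shape[0], j:j + x.shape[1]] * k[i, j]
               for i in range(3) for j in range(3))

rng = np.random.default_rng(0)
img = rng.random((32, 32))
score_real = dire_map(img, blur).mean()          # real image: large residual
score_fake = dire_map(img, lambda x: x).mean()   # perfect round trip: zero
```

The detection signal is the gap between the two scores; in the paper the DIRE map itself (not just its mean) is fed to a ResNet classifier.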
Paper 03 - Exposing the Fake: Effective Diffusion-Generated Images Detection
SeDID (Stepwise Error for Diffusion-generated Image Detection): a statistics-based variant (SeDID_Stat) and a neural-network-based variant (SeDID_NNs).
Contribution
- Proposes a new detection scheme for diffusion-generated images. Difference from DIRE: instead of only comparing $x_0$ with its reconstruction $x_0'$, it focuses on the error at specific timesteps of the generation process
- Adapts insights from membership inference attacks to exploit the distributional disparities between real and generated data
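The stepwise error can be sketched with a toy linear noise schedule: diffuse $x_0$ to $x_t$, take one deterministic DDIM step back to $t-\delta$ using the model's noise estimate, and compare against the ground-truth $x_{t-\delta}$. A perfect (oracle) noise predictor drives the error to zero, which is the membership-inference intuition: samples the model "knows" score low. All values below are synthetic and the oracle/noisy predictors are stand-ins for a real $\epsilon$-network:

```python
import numpy as np

T = 1000
betas = np.linspace(1e-4, 0.02, T)          # toy linear schedule
alphas_bar = np.cumprod(1.0 - betas)

def forward(x0, eps, t):
    # closed-form forward diffusion q(x_t | x_0) with shared noise eps
    return np.sqrt(alphas_bar[t]) * x0 + np.sqrt(1.0 - alphas_bar[t]) * eps

def ddim_step(x_t, eps_hat, t, t_prev):
    # one deterministic DDIM step t -> t_prev using noise estimate eps_hat
    x0_hat = (x_t - np.sqrt(1.0 - alphas_bar[t]) * eps_hat) / np.sqrt(alphas_bar[t])
    return np.sqrt(alphas_bar[t_prev]) * x0_hat + np.sqrt(1.0 - alphas_bar[t_prev]) * eps_hat

rng = np.random.default_rng(0)
x0 = rng.standard_normal((32, 32))
eps = rng.standard_normal((32, 32))
t, delta = 500, 10

x_t = forward(x0, eps, t)
x_true = forward(x0, eps, t - delta)                      # ground-truth x_{t-delta}
err_oracle = np.mean((ddim_step(x_t, eps, t, t - delta) - x_true) ** 2)
noisy_eps = eps + 0.1 * rng.standard_normal(eps.shape)    # imperfect predictor
err_model = np.mean((ddim_step(x_t, noisy_eps, t, t - delta) - x_true) ** 2)
```

SeDID thresholds (or feeds to a network) exactly this kind of error at chosen timesteps, which is why the choice of $t$ and $\delta$ matters so much in the Consideration below.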
Consideration
- The hyperparameter step size $\delta$ is selected manually (by experiment); if the diffusion model changes, does the hyperparameter have to be re-tuned (by re-running the experiments)?
- The best $T_{SE}$ depends on the dataset