Checking Image Integrity for Anticipating Influence Attacks CI2(IA)
The wide availability of deepfake generation tools makes creating and disseminating fake images easy, and these models now produce results convincing enough to deceive humans. The CI2(IA) project aims to verify the trustworthiness of images.
Objectives
Domain Adaptation for Digital Image Forensics
Real, annotated data are scarce in digital image forensics, which makes model generalization particularly hard. Domain adaptation techniques aim to bridge the gap between the data a detector is trained on and the images it encounters in the wild.
Ethical Deepfakes
Embedding a watermark during deepfake generation makes it possible to later authenticate a deepfake as such.
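As an illustration only (the project does not specify its watermarking scheme), the idea can be sketched with a simple least-significant-bit watermark: a binary mark is written into the low-order bits of a generated image and read back at verification time. The function names and the LSB approach are assumptions for this sketch, not the project's actual method.

```python
import numpy as np


def embed_watermark(img: np.ndarray, bits: np.ndarray) -> np.ndarray:
    """Write a binary mark into the least significant bits of an image.

    Illustrative LSB scheme only; a real deepfake watermark would be
    embedded inside the generator and made robust to post-processing.
    """
    flat = img.flatten().copy()
    # Clear the lowest bit of the first len(bits) pixels, then set it to the mark.
    flat[: bits.size] = (flat[: bits.size] & 0xFE) | bits
    return flat.reshape(img.shape)


def extract_watermark(img: np.ndarray, n_bits: int) -> np.ndarray:
    """Read back the first n_bits least significant bits."""
    return img.flatten()[:n_bits] & 1


rng = np.random.default_rng(0)
image = rng.integers(0, 256, size=(8, 8), dtype=np.uint8)  # stand-in for a generated frame
mark = rng.integers(0, 2, size=16, dtype=np.uint8)

stamped = embed_watermark(image, mark)
assert np.array_equal(extract_watermark(stamped, mark.size), mark)
```

The embedding changes each affected pixel by at most one intensity level, so the mark is visually imperceptible, though a plain LSB mark is fragile to recompression, which is why production schemes embed the signature in the generative model itself.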
Semantic Analysis of Forged Images
Detect image manipulations and understand the semantics of the changes.