Deepfakes, powered by deep learning, are advancing rapidly in creating hyper-realistic audio and video, offering benefits in entertainment and creative industries. However, they also present serious security, privacy, and ethical risks, including identity fraud, disinformation, and content manipulation. As deepfakes become more prevalent, concerns around detection and mitigation grow. Detection techniques aim to identify altered content and prevent misuse, while privacy risks arise from ML models replicating identities without consent. Additionally, fairness and bias in datasets can distort both generation and detection algorithms.
For our research projects, where we investigate open challenges in deepfake generation and detection, we are looking for excellent student assistants who are motivated to contribute to these cutting-edge research areas. Your tasks include:
Conduct experiments to analyze the generation and detection of deepfakes in both centralized and distributed systems.
Implement audio and video deepfake generation algorithms and analyze their security, privacy, and fairness aspects.
Develop detection techniques to counter deepfake threats and protect against misuse.
Explore new attack and defense strategies to improve the robustness and security of deepfake-related technologies.
If this topic intrigues you, please get in touch with us at info@trust.tu-darmstadt.de to obtain further information. To facilitate the process, please include a summary of your academic background and a copy of your transcripts.
Your profile:
Good knowledge of computer security, privacy, and deep learning.
Experience with Python.
Recommended: Experience with ML libraries in Python, such as PyTorch or TensorFlow.
Familiarity with audio/video processing techniques is a plus.
Strong analytical skills and motivation to work both independently and in a team.
Job details
Your employment type:
Part-time (fixed-term)
Your salary:
By agreement
Your workplace:
On-site
Your office:
Darmstadt area