ReFIP 2024

1st International Workshop on
Responsible Face Image Processing

Side event at the 18th IEEE International Conference on Automatic Face and Gesture Recognition (FG 2024)
27-31 May 2024, ITU Campus, Istanbul, Turkey


Call for papers

In the rapidly evolving landscape of facial biometric technology, the interplay between bias, fairness, transparency, and privacy is critical to building reliable artificial intelligence systems. Recent studies have raised numerous questions about the risks of implementing facial recognition technologies without comprehensive management of these concerns, fostering an ethical discussion about their social impact. Indeed, there is a growing demand for integrating facial image processing systems into various aspects of daily life, reflecting the potential for greater security and efficiency. On the other hand, this demand is encountering significant resistance because of these ethical concerns.

Biases in facial biometric systems can lead to discrimination, often reinforcing pre-existing societal biases. Moreover, prioritizing fairness in facial image processing research is becoming increasingly crucial for ethical and legislative reasons, with the aim of providing equal performance and treatment for all users regardless of their race, gender, age, or other protected attributes. Due to the black-box nature of modern deep learning techniques, the transparency of such systems is also an increasingly discussed and relevant topic, concerning both how data are managed and how the decision-making mechanisms behind biometric technologies can be explained and understood. Transparency (and, consequently, explainability) of these systems is essential for exercising effective supervision and building trust in their use. Finally, privacy issues are among the most pressing in facial image processing research because of the sensitivity of the data collected: it is imperative to protect individual biometric information from misuse and abuse.

This workshop aims to foster contributions on sophisticated strategies addressing critical challenges in responsible facial image processing. In particular, it seeks to promote interdisciplinary exchange among scholars, practitioners, and decision-makers to tackle these challenges and propose innovative solutions, encompassing broader ethical knowledge and the development of effective and transparent biometric solutions. The main goal is to explore approaches that enhance inclusiveness and equity in facial image processing systems, involving advanced algorithmic methods to minimize bias, improve privacy compliance, and enhance decision-making transparency. Furthermore, the workshop seeks to identify best practices for integrating these dimensions into the resulting systems.

Topics of interest include, but are not limited to:
  • Fairness
    • Dataset collection and preparation for fair facial image processing (e.g., designing methods for dealing with imbalances in data, collecting datasets for the analysis of biased and unfair situations);
    • Countermeasure design and development for fair facial image processing (e.g., formalizing and operationalizing fairness concepts, designing treatments that mitigate unfairness in pre-/in-/post-processing);
    • Evaluation protocols and metric formulation for fair facial image processing (e.g., formulating fairness-aware protocols to evaluate models, evaluating existing mitigation strategies in unexplored domains);
    • Applications of fair facial image processing (e.g., fairness methods for access control, border control, healthcare, entertainment, e-learning systems).
  • Accountability
    • Methods for accountable facial image processing (e.g., requirements to enable accountability, mechanisms for reporting or accounting, interfaces empowering users to control their facial data);
    • Processes for accountable facial image processing (e.g., feasibility and effectiveness of independent audits, certification processes that verify adherence to accountability standards);
    • Studies to assess and increase accountability in facial image processing (e.g., metrics and user studies of accountability mechanisms, studies to assess accountability of existing systems).
  • Transparency
    • Technical methods for explainable facial image processing (e.g., latent spaces explainability, neuro-symbolic reasoning, post-hoc methods, self-explainable methods);
    • Ethical considerations for explainable facial image processing (e.g., philosophical consideration of synthetic explanations, expected epistemic and moral goods, responsibility in policy guidelines);
    • Psychological concepts for explainable facial image processing (e.g., persuasiveness and robustness of explanations, psychometrics of human explanations, cognitive approaches for explanations);
    • Legal and administrative considerations for explainable facial image processing (e.g., black-box model auditing, explainability in regulatory compliance, human rights for explanations);
    • Applications of explainable facial image processing (e.g., explainable methods for access/border control, healthcare, entertainment, e-learning systems).
  • Privacy
    • Technical methods for privacy-preserving facial image processing (e.g., cryptographic techniques, federated learning, differential privacy);
    • Ethical considerations for privacy-preserving facial image processing (e.g., ethical frameworks, societal impact assessment, data sharing, anonymization, and privacy of synthetic data);
    • Psychological concepts for privacy-preserving facial image processing (e.g., user perceptions of privacy, trust in privacy-preserving systems, cognitive approaches to privacy);
    • Legal and administrative considerations for privacy-preserving facial image processing (e.g., compliance with privacy regulations, legal frameworks for privacy-preserving techniques, human rights in privacy preservation);
    • Applications of privacy-preserving facial image processing (e.g., privacy-preserving methods for access control, border control, healthcare, entertainment, e-learning systems).

Submission and publication

All contributions will be reviewed by at least three members of the Program Committee. All papers should be anonymized (double-blind review process). We strongly encourage making code and data available anonymously (e.g., in an anonymous repository via Anonymous GitHub).

The following kinds of submissions will be considered:

  • Full papers (8 pages excluding references)
  • Short papers (4 pages + 1 page for references)

Submitted papers should not have been previously published or accepted for publication in substantially similar form in any peer-reviewed venue, such as journals, conferences, or workshops.

Submissions will be evaluated on the basis of relevance to the workshop, novelty/originality, significance, technical quality and correctness, quality and clarity of presentation, quality of references, and reproducibility.

Submitted papers must be formatted according to the main conference templates; authors should consult the main conference paper guidelines when preparing their submissions.

All contributions must be submitted as PDF files to: https://cmt3.research.microsoft.com/FGReFIP2024/

Submitted papers will be rejected without review if they are not properly anonymized, do not comply with the template, or do not follow the above guidelines.

Accepted papers will be published in the FG 2024 main conference proceedings.

Important dates

  • Submission deadline: April 4, 2024, 11:59 PM PST (firm)
  • Accept/reject notification: April 19, 2024
  • Camera-ready deadline: April 22, 2024, 11:59 PM PST
  • Workshop: May 31, 2024

Keynote speakers

Vitomir Štruc

University of Ljubljana
Slovenia


Title: Face Image Quality Assessment (FIQA): Recent Advancements and Future Challenges

Abstract: Understanding the quality characteristics of facial images is of key importance for the reliability of various face-related tasks, ranging from biometric verification systems to problems in surveillance and security. In this talk, I will first introduce the general problem of Face Image Quality Assessment (FIQA) and discuss how it differs from the more perceptually driven Image Quality Assessment (IQA) task used for quality assessment of arbitrary natural images. I will then elaborate on the most interesting trends and solutions in face image quality assessment and present two of our recent models for this task: (i) FaceQAN, which predicts quality based on the analysis of adversarial noise, and (ii) DifFIQA, which uses probabilistic denoising diffusion models to estimate face image quality. Next, I will discuss possibilities for making FIQA techniques lightweight and applicable to computing platforms with limited resources (through our eDifFIQA approach), as well as mechanisms for making FIQA techniques robust to geometric perturbations. Finally, I will share some insights on face image quality assessment and highlight open issues and future research directions in this space.

Short Bio: Vitomir Štruc is a Full Professor at the University of Ljubljana, Slovenia. His research interests include problems related to biometrics, computer vision, image processing, and machine learning. He has (co-)authored more than 150 research papers in leading international peer-reviewed journals and conferences in these and related areas. Vitomir is a Senior Area Editor for the IEEE Transactions on Information Forensics and Security, a Subject Editor for Elsevier's Signal Processing, and an Associate Editor for Pattern Recognition and IET Biometrics. He regularly serves on the organizing committees of major international conferences, including IJCB, FG, WACV, and CVPR. He was a General Co-Chair for IJCB 2023 and currently acts as a Program Chair for IEEE Face and Gesture 2024 and a Tutorial Chair for CVPR 2024. Dr. Štruc is a Senior Member of the IEEE, a member of IAPR and EURASIP, Slovenia's ambassador for the European Association for Biometrics (EAB), and the former president and current executive committee member of the Slovenian Pattern Recognition Society, the Slovenian member of IAPR. Vitomir is also the current VP for Technical Activities of the IEEE Biometrics Council, the secretary of the IAPR Technical Committee on Biometrics (TC4), and a member of the Supervisory Board of the EAB.

Program Co-Chairs

  • Andrea Atzori - University of Cagliari, Italy
  • Fadi Boutros - Fraunhofer IGD, Germany
  • Lucia Cascone - University of Salerno, Italy
  • Naser Damer - Fraunhofer IGD, Germany
  • Mirko Marras - University of Cagliari, Italy
  • Ruben Tolosana - Universidad Autonoma de Madrid, Spain
  • Ruben Vera-Rodriguez - Universidad Autonoma de Madrid, Spain

Program Committee

  • Andrea Abate - BIPLab, University of Salerno
  • Anubhav Jain - New York University
  • Aythami Morales - Universidad Autonoma de Madrid
  • Casandra Rusti - University of Southern California, Information Sciences Institute
  • Chiara Pero - University of Salerno
  • Christian Rathgeb - Hochschule Darmstadt
  • Cunjian Chen - Monash University
  • Darian Tomašević - University of Ljubljana
  • David Freire Obregon - Universidad de Las Palmas de Gran Canaria
  • Giacomo Medda - University of Cagliari
  • Gian Luca Marcialis - University of Cagliari
  • Giulia Orrù - University of Cagliari
  • Jan Niklas Kolf - Fraunhofer IGD
  • Marco Huber - Fraunhofer IGD
  • Maria De Marsico - Sapienza University of Rome
  • Marija Ivanovska - University of Ljubljana
  • Marta Gomez-Barrero - Hochschule Ansbach
  • Massimo Tistarelli - University of Sassari
  • Modesto Castrillón-Santana - Universidad de Las Palmas de Gran Canaria
  • P. Jonathon Phillips - NIST
  • Sinan Kalkan - Middle East Technical University
  • Žiga Babnik - University of Ljubljana

Program

  • TBD

Venue

The workshop will take place as part of the 18th IEEE International Conference on Automatic Face and Gesture Recognition (FG 2024) at the ITU Campus, Istanbul, Turkey.

Registration

  • TBD

Contacts

All inquiries should be sent to:

Email: refip.fg2024@gmail.com