
A Contrastive Learning-Guided Confident Meta-Learning for Zero Shot Anomaly Detection

Muhammad Aqeel (Project Administration); Marco Cristani (Writing – Review & Editing); Francesco Setti (Supervision)

2025-01-01

Abstract

Industrial and medical anomaly detection faces critical challenges from data scarcity and prohibitive annotation costs, particularly in evolving manufacturing and healthcare settings. To address this, we propose CoZAD, a novel zero-shot anomaly detection framework that integrates soft confident learning with meta-learning and contrastive feature representation. Unlike traditional confident learning, which discards uncertain samples, our method assigns confidence-based weights to all training data, preserving boundary information while emphasizing prototypical normal patterns. The framework quantifies data uncertainty through IQR-based thresholding and model uncertainty via covariance-based regularization within a Model-Agnostic Meta-Learning (MAML) framework. Contrastive learning creates discriminative feature spaces in which normal patterns form compact clusters, enabling rapid domain adaptation. Comprehensive evaluation across 10 datasets spanning industrial and medical domains demonstrates state-of-the-art performance: the method outperforms existing approaches on 6 out of 7 industrial benchmarks, with notable improvements on texture-rich datasets (99.2% I-AUROC on DTD-Synthetic, 97.2% on BTAD) and superior pixel-level localization (96.3% P-AUROC on MVTec-AD). The framework eliminates dependence on elaborate vision-language alignments or model ensembles, making it valuable for resource-constrained environments requiring rapid deployment.
Keywords: Zero-Shot Anomaly Detection, Contrastive Learning, Meta-Learning, Soft Confident Learning
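The abstract's central idea — replacing hard sample rejection with IQR-based soft confidence weights — can be illustrated with a minimal sketch. Note that `iqr_soft_weights`, the exponential decay beyond the IQR fence, and the 1.5 multiplier are all illustrative assumptions, not the paper's actual formulation:

```python
import numpy as np

def iqr_soft_weights(scores: np.ndarray) -> np.ndarray:
    """Assign soft confidence weights from per-sample anomaly scores.

    Samples within the upper IQR fence (Q3 + 1.5 * IQR) keep full weight;
    samples beyond it are smoothly down-weighted rather than discarded,
    so boundary information is preserved. This is a hypothetical reading
    of the paper's IQR-based thresholding, not its exact method.
    """
    q1, q3 = np.percentile(scores, [25, 75])
    iqr = q3 - q1
    upper = q3 + 1.5 * iqr
    weights = np.ones_like(scores, dtype=float)
    over = scores > upper
    # Decay weight exponentially with distance past the fence.
    weights[over] = np.exp(-(scores[over] - upper) / (iqr + 1e-8))
    return weights

scores = np.array([0.10, 0.20, 0.15, 0.18, 0.22, 2.50])
w = iqr_soft_weights(scores)
```

In this toy run, the five low-scoring samples keep weight 1.0 while the outlier at 2.50 receives a small but nonzero weight, in contrast to traditional confident learning, which would drop it entirely.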
Files in this item:
File: Aqeel_A_Contrastive_Learning-Guided_Confident_Meta-learning_for_Zero_Shot_Anomaly_Detection_ICCVW_2025_paper.pdf
Access: open access
Description: Manuscript
Type: Post-print document
License: Not specified
Size: 505.42 kB
Format: Adobe PDF

Documents in IRIS are protected by copyright, and all rights are reserved unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11562/1187191