Persistent URL of this record https://hdl.handle.net/1887/4239055
Documents
- Title Pages (open access)
- Chapter 1 (open access; full text at publisher's site)
- Chapter 2 (open access; full text at publisher's site)
- Chapter 3 (open access; full text at publisher's site)
- Chapter 4 (open access; full text at publisher's site)
- Chapter 6 (open access; full text at publisher's site)
- Bibliography_Acknowledgements (open access)
- Summary in English (open access)
- Summary in Dutch (open access)
- SIKS Dissertation Series_Curriculum Vitae (open access)
- Propositions (open access)
Trustworthy anomaly detection for smart manufacturing
This dissertation explores how we can make anomaly detection (identifying unusual or faulty behavior in complex systems) more trustworthy and effective, with a focus on smart manufacturing. In high-tech industries, early detection of faults is crucial to avoid downtime, reduce waste, and ensure product quality.
The research combines two powerful perspectives: improving the quality of data (data-centric AI) and enhancing the models that analyze this data (model-centric AI). From the data side, new techniques were developed to turn technical system logs into understandable graphs, making it easier to detect problems and trace their root causes. The methods also select the most relevant information in large and complex datasets, improving fault detection and prediction. From the model side, the work addresses key challenges in explainability, robustness, generalization, and automation. It introduces explainable AI tools that help engineers understand why something is flagged as abnormal, reveals weaknesses in current explanation methods under attack, and offers solutions to make models more resilient and adaptable across different environments. It also presents an automated way to fine-tune models, which is especially useful when labeled data is scarce. These contributions go beyond manufacturing: they offer practical tools for building reliable, understandable AI systems in any domain where detecting anomalies is important, such as cybersecurity, healthcare, or finance. Ultimately, this research supports the development of safer, smarter, and more transparent technologies that serve both industry and society.
- All authors
- Li, Z.
- Supervisor
- Leeuwen, M. van; Bäck, T.H.W.
- Committee
- Davis, J.; Pechenizkiy, M.; Bonsangue, M.M.; Batenburg, K.J.; Baratchi, M.
- Qualification
- Doctor (dr.)
- Awarding Institution
- Leiden Institute of Advanced Computer Science (LIACS), Faculty of Science, Leiden University
- Date
- 2025-05-01
- Title of host publication
- SIKS Dissertation Series
- ISBN (print)
- 9789465222226
Publication Series
- Name
- 2025-24
Funding
- Sponsorship
- NWO