Incremental one-class learning using regularized null-space training for industrial defect detection
(2024)
One-class incremental learning is a special case of class-incremental learning, where only a single novel class is incrementally added to an existing classifier instead of multiple classes. This case is relevant in industrial defect detection scenarios, where novel defects usually appear during operation. Existing rolled-out classifiers must be updated incrementally in this scenario with only a few novel examples. In addition, it is often required that the base classifier must not be altered due to approval and warranty restrictions. While simple finetuning often gives the best performance across old and new classes, it comes with the drawback of potentially losing performance on the base classes (catastrophic forgetting [1]). Simple prototype approaches [2] work without changing existing weights and perform very well when the classes are well separated but fail dramatically when they are not. In theory, null-space training (NSCL) [3] should retain the base classifier entirely, as parameter updates are restricted to the null space of the network with respect to existing classes. However, as we show, this technique promotes overfitting in the case of one-class incremental learning. In our experiments, we found that unconstrained weight growth in the null space is the underlying issue, leading us to propose a regularization term (R-NSCL) that penalizes the magnitude of amplification. The regularization term is added to the standard classification loss and stabilizes null-space training in the one-class scenario by counteracting overfitting. We test the method's capabilities on two industrial datasets, namely AITEX and MVTec, and compare the performance to state-of-the-art algorithms for class-incremental learning.
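The core mechanism described in the abstract — restricting updates to the null space of the old-class constraints and penalizing the magnitude of the accumulated weight change — can be sketched as follows. This is a minimal linear (first-order) illustration, not the paper's implementation; the function names, the SVD-based projector, and the L2 form of the penalty are assumptions for exposition.

```python
import numpy as np

def null_space_projector(A, tol=1e-10):
    """Projector onto the null space of A.

    Rows of A represent the old-class constraints (e.g. per-sample
    gradients or feature directions). Any update projected by this
    matrix leaves A @ w unchanged, so old-class outputs are preserved
    to first order.
    """
    _, S, Vt = np.linalg.svd(A, full_matrices=True)
    rank = int((S > tol).sum())
    V_null = Vt[rank:].T                 # basis of the null space of A
    return V_null @ V_null.T

def r_nscl_step(w, w0, grad_new, P, lam=0.1, lr=0.1):
    """One projected gradient step with a magnitude penalty.

    grad_new: gradient of the new-class loss at w.
    lam:      weight of the regularizer lam * ||w - w0||^2, which
              counteracts unconstrained weight growth in the null space.
    """
    g = grad_new + 2.0 * lam * (w - w0)  # loss gradient + penalty gradient
    return w - lr * (P @ g)              # step restricted to the null space
```

A base classifier trained on the old classes supplies `w0` and the constraint matrix `A`; iterating `r_nscl_step` then fits the single novel class while, by construction, `A @ (w - w0)` stays zero.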
Particularly for manufactured products subject to aesthetic evaluation, the industrial manufacturing process must be monitored and visual defects detected. For this purpose, more and more computer-vision-integrated inspection systems are being used. In optical inspection based on cameras or range scanners, only a few examples are typically known before novel examples are inspected. Consequently, no large data set of non-defective and defective examples can be used to train a classifier, and methods that work with limited or weak supervision must be applied. For such scenarios, I propose new data-efficient machine learning approaches based on one-class learning that reduce the need for supervision in industrial computer vision tasks. The developed novelty detection model automatically extracts features from the input images and is trained only on available non-defective reference data. On top of the feature extractor, a one-class classifier based on recent developments in deep learning is placed. I evaluate the novelty detector on an industrial inspection scenario and on state-of-the-art benchmarks from the machine learning community. In the second part of this work, the model is improved by using a small number of novel defective examples, thereby incorporating another source of supervision. The targeted real-world inspection unit is based on a camera array and a flashing-light illumination, allowing inline capturing of multichannel images at a high rate. Optionally, the integration of range data, such as laser or Lidar signals, is possible by using the developed targetless data fusion method.
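The scheme of training only on non-defective reference data and scoring new samples for novelty can be illustrated with a deliberately simple one-class scorer. This is an illustrative stand-in for the deep one-class head described in the abstract: the class name and the mean-distance scoring rule are assumptions, not the thesis's actual model.

```python
import numpy as np

class OneClassScorer:
    """Minimal one-class novelty scorer.

    Fit on feature vectors of non-defective reference samples only;
    score new samples by their standardized distance to the training
    mean. A larger score means "more novel", i.e. more likely defective.
    """

    def fit(self, feats):
        # feats: (n_samples, n_features) array of non-defective features.
        self.mu = feats.mean(axis=0)
        self.scale = feats.std(axis=0) + 1e-8  # avoid division by zero
        return self

    def score(self, x):
        # Distance of one feature vector to the non-defective cluster.
        return float(np.linalg.norm((x - self.mu) / self.scale))
```

In practice the feature vectors would come from the learned feature extractor; the point here is only that no defective examples are needed at training time, matching the one-class setting.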
The 2020 sabbatical semester was used for research on different surveying systems in historical building research. Due to the Covid-19 pandemic, however, the work program developed differently than planned and shifted largely into the virtual space. In the discipline of historical building research, the temporary impossibility of travel and in-person teaching gave rise to an intensive exchange on methodology in numerous online conferences.
When using multi-camera matching techniques for 3D reconstruction, there is usually a trade-off between the quality of the computed depth map and the speed of the computation. Whereas high-quality matching methods take several seconds to several minutes to compute a depth map for one set of images, real-time methods achieve only low-quality results. In this paper we present a multi-camera matching method that runs in real time and yields high-resolution depth maps. Our method is based on a novel multi-level combination of normalized cross correlation, deformed matching windows based on the multi-level depth map information, and sub-pixel-precise disparity maps. The whole process is implemented completely on the GPU. With this approach we can process four 0.7-megapixel images in 129 milliseconds into a full-resolution 3D depth map. Our technique is tailored for the recognition of non-technical shapes, because our target application is face recognition.
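The normalized cross correlation at the heart of the matching step can be sketched as follows; this is a plain NumPy reference formulation (the paper's version runs on the GPU with multi-level deformed windows, which is not reproduced here):

```python
import numpy as np

def ncc(patch_a, patch_b, eps=1e-8):
    """Normalized cross correlation of two equally sized image patches.

    Both patches are mean-centered and the product is normalized by the
    patch energies, so the result lies in [-1, 1]; 1 indicates a perfect
    match up to an affine brightness change, which makes the measure
    robust to lighting differences between cameras.
    """
    a = patch_a - patch_a.mean()
    b = patch_b - patch_b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum()) + eps
    return float((a * b).sum() / denom)
```

A matcher would evaluate `ncc` between a reference patch and candidate patches along the epipolar line in the other views, taking the disparity with the highest score; sub-pixel precision then comes from interpolating around that maximum.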
This master's thesis explores the potential of large language models in the construction industry, focusing on digital transformation, efficiency gains, and sustainability. Through a comprehensive literature review and qualitative expert interviews, it examines specific use cases, implementation challenges, and ethical as well as data protection considerations.
The thesis highlights how large language models can optimize planning processes, improve risk management, and develop tailored solutions to achieve economic and ecological benefits. It also presents practice-oriented recommendations for successfully integrating this technology into the construction sector, taking into account both technological feasibility and social acceptance.
Finally, future research directions are outlined that aim to accelerate the digital transformation of the construction sector while incorporating ethical standards and data protection.
The results of this thesis demonstrate the potential of large language models to revolutionize traditional construction processes and emphasize the need for careful implementation in order to fully exploit the benefits of this technology.
This paper broadens the resource-based approach to explaining the survival of new technology-based firms (NTBFs) by focusing on the entrepreneur's ability to transform resources in response to triggers resulting from market interactions. Network theory is used to define a construct that allows determining the status of venture emergence (VE). The operationalization of the VE construct is built on the firm's value network maturity in the four market dimensions customer, investor, partner, and human resource. Business plans of NTBFs represent the artifact that contains this data in the form of transaction relation descriptions. Using content analysis, a multi-step combined human and computer coding process has been developed to empirically determine NTBFs' status of VE. Results of the business plan analysis suggest that the level of transaction relations allows conclusions to be drawn about the status of VE. Moreover, applying the developed process, a business plan coding test shows that the transaction-relation-based VE status significantly relates to NTBFs' survival capabilities.
Research Report
(2024)