Black-box variational inference (BBVI) is a technique to approximate the posterior of Bayesian models by optimization. Similar to MCMC, the user only needs to specify the model; the inference procedure is then carried out automatically. In contrast to MCMC, BBVI scales to many observations, is faster for some applications, and can take advantage of highly optimized deep learning frameworks since it can be formulated as a minimization task. In the case of complex posteriors, however, state-of-the-art BBVI approaches often yield unsatisfactory posterior approximations. This paper presents Bernstein flow variational inference (BF-VI), a robust and easy-to-use method flexible enough to approximate complex multivariate posteriors. BF-VI combines ideas from normalizing flows and Bernstein polynomial-based transformation models. In benchmark experiments, we compare BF-VI solutions with exact posteriors, MCMC solutions, and state-of-the-art BBVI methods, including normalizing flow-based BBVI. We show for low-dimensional models that BF-VI accurately approximates the true posterior; in higher-dimensional models, BF-VI compares favorably against other BBVI methods. Further, using BF-VI, we develop a Bayesian model for the semi-structured melanoma challenge data, combining a CNN model part for image data with an interpretable model part for tabular data, and demonstrate, for the first time, the use of BBVI in semi-structured models.
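The Bernstein-polynomial transformation at the core of BF-VI can be illustrated with a minimal sketch: a flexible map built as a weighted sum of Bernstein basis polynomials, where monotonicity is guaranteed by non-decreasing coefficients. The coefficient values and the polynomial degree below are arbitrary illustrations, not taken from the paper:

```python
import math

def bernstein_basis(z, k, M):
    # Bernstein basis polynomial B_{k,M}(z) on [0, 1]
    return math.comb(M, k) * z**k * (1 - z)**(M - k)

def bernstein_flow(z, theta):
    # Transformation h(z) = sum_k theta_k * B_{k,M}(z);
    # h is monotone increasing when the coefficients theta are non-decreasing.
    M = len(theta) - 1
    return sum(t * bernstein_basis(z, k, M) for k, t in enumerate(theta))

# Non-decreasing (hypothetical) coefficients -> monotone map on [0, 1]
theta = [-3.0, -1.0, 0.5, 2.0, 4.0]
ys = [bernstein_flow(z, theta) for z in (0.1, 0.5, 0.9)]
assert all(a < b for a, b in zip(ys, ys[1:]))  # monotone increasing
```

In BF-VI, a transformation of this kind is applied to a simple base distribution so that its pushforward can approximate a complex posterior; the coefficients play the role of variational parameters being optimized.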
Apnea is a sleep disorder characterized by breathing interruptions during sleep, impacting cardiorespiratory function and overall health. Traditional diagnostic methods, like polysomnography (PSG), are obtrusive, motivating noninvasive monitoring. This study aims to develop and validate a novel sleep monitoring system using noninvasive sensor technology to estimate cardiorespiratory parameters and detect sleep apnea. We designed a seamless monitoring system integrating noncontact force-sensitive resistor sensors to collect ballistocardiogram signals associated with cardiorespiratory activity. We enhanced the sensor's sensitivity and reduced noise by designing a new concept of edge-measuring sensor that uses a hemispherical dome and a mechanical hanger to distribute the force and mechanically amplify the micromovements caused by cardiac and respiratory activity. In total, we deployed three edge-measuring sensors, two under the thoracic region and one under the abdominal region. The system is supported by onboard signal preprocessing in multiple physical layers deployed under the mattress. We collected data in four sleeping positions from 16 subjects and analyzed them using ensemble empirical mode decomposition (EEMD) to avoid frequency mixing. We also developed an adaptive thresholding method to identify sleep apnea. The error was reduced to 3.98 and 1.43 beats/min (BPM) in heart rate (HR) and respiration rate estimation, respectively. Apnea was detected with an accuracy of 87%. We optimized the system such that a single edge-measuring sensor suffices to measure the cardiorespiratory parameters. This reduction in complexity and simplification of the instructions for use shows excellent potential for in-home and continuous monitoring.
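The adaptive-thresholding idea for apnea detection can be sketched as follows: flag segments where the respiration envelope collapses relative to its typical level. This is a simplified stand-in; the window length, envelope estimator, and threshold fraction are assumptions, not the authors' exact pipeline:

```python
import numpy as np

def detect_apnea(resp, fs, win_s=10.0, frac=0.3):
    """Flag apnea-like segments where the respiration envelope drops
    below a fraction of its running median (illustrative parameters)."""
    win = int(win_s * fs)
    # moving RMS envelope of the respiration signal
    sq = np.convolve(resp**2, np.ones(win) / win, mode="same")
    env = np.sqrt(sq)
    thresh = frac * np.median(env)
    return env < thresh

fs = 10  # Hz
t = np.arange(0, 60, 1 / fs)
resp = np.sin(2 * np.pi * 0.25 * t)   # normal breathing at 15 breaths/min
resp[300:450] *= 0.05                 # simulated 15 s apnea episode
flags = detect_apnea(resp, fs)
assert flags[350:400].all()           # apnea region flagged
assert not flags[:200].any()          # normal breathing not flagged
```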
Incremental one-class learning using regularized null-space training for industrial defect detection
(2024)
One-class incremental learning is a special case of class-incremental learning in which only a single novel class, rather than multiple classes, is incrementally added to an existing classifier. This case is relevant in industrial defect detection scenarios, where novel defects usually appear during operation. Existing rolled-out classifiers must then be updated incrementally with only a few novel examples. In addition, it is often required that the base classifier must not be altered due to approval and warranty restrictions. While simple finetuning often gives the best performance across old and new classes, it comes with the drawback of potentially losing performance on the base classes (catastrophic forgetting [1]). Simple prototype approaches [2] work without changing existing weights and perform very well when the classes are well separated, but fail dramatically when they are not. In theory, null-space training (NSCL) [3] should retain the base classifier entirely, as parameter updates are restricted to the null space of the network with respect to existing classes. However, as we show, this technique promotes overfitting in the case of one-class incremental learning. In our experiments, we found that unconstrained weight growth in the null space is the underlying issue, leading us to propose a regularization term (R-NSCL) that penalizes the magnitude of weight amplification. The regularization term is added to the standard classification loss and stabilizes null-space training in the one-class scenario by counteracting overfitting. We test the method's capabilities on two industrial datasets, namely AITEX and MVTec, and compare the performance to state-of-the-art algorithms for class-incremental learning.
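The null-space constraint with regularization can be sketched in a few lines: project each regularized weight update onto the null space of the base-class activations, so that base-class outputs are provably unchanged. This is an illustrative linear-layer sketch, not the paper's exact R-NSCL formulation:

```python
import numpy as np

def null_space_step(W, grad, A, lr=0.1, reg=0.01):
    """One regularized null-space update (illustrative sketch).
    W    : weight matrix (out x in)
    grad : loss gradient w.r.t. W for the novel class
    A    : activations of base-class data (n x in); updates are projected
           onto null(A), so base-class outputs stay fixed
    reg  : penalty on weight magnitude, counteracting the unconstrained
           growth in the null space discussed above
    """
    # projector onto null(A): P = I - A^+ A
    P = np.eye(A.shape[1]) - np.linalg.pinv(A) @ A
    update = (grad + reg * W) @ P     # regularize, then project
    return W - lr * update

rng = np.random.default_rng(0)
A = rng.normal(size=(3, 8))           # base-class activations
W = rng.normal(size=(2, 8))
g = rng.normal(size=(2, 8))
W_new = null_space_step(W, g, A)
# base-class outputs are preserved up to numerical precision
assert np.allclose(W_new @ A.T, W @ A.T, atol=1e-8)
```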
Particularly for manufactured products subject to aesthetic evaluation, the industrial manufacturing process must be monitored and visual defects detected. For this purpose, more and more computer-vision-integrated inspection systems are being used. In optical inspection based on cameras or range scanners, typically only a few examples are known before novel examples are inspected. Consequently, no large data set of non-defective and defective examples can be used to train a classifier, and methods that work with limited or weak supervision must be applied. For such scenarios, I propose new data-efficient machine learning approaches based on one-class learning that reduce the need for supervision in industrial computer vision tasks. The developed novelty detection model automatically extracts features from the input images and is trained only on available non-defective reference data. On top of the feature extractor, a one-class classifier based on recent developments in deep learning is placed. I evaluate the novelty detector in an industrial inspection scenario and on state-of-the-art benchmarks from the machine learning community. In the second part of this work, the model is improved using a small number of novel defective examples, thereby incorporating another source of supervision. The targeted real-world inspection unit is based on a camera array and flashing-light illumination, allowing inline capturing of multichannel images at a high rate. Optionally, the integration of range data, such as laser or lidar signals, is possible by using the developed targetless data fusion method.
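The one-class classification step on top of extracted features can be sketched with a simple Mahalanobis-style novelty score fitted only on non-defective reference data; this is an illustrative stand-in for the deep one-class head described above, with synthetic data:

```python
import numpy as np

def fit_one_class(feats):
    # Fit mean and inverse covariance of non-defective reference features;
    # a small ridge term keeps the covariance invertible.
    mu = feats.mean(axis=0)
    cov = np.cov(feats, rowvar=False) + 1e-6 * np.eye(feats.shape[1])
    return mu, np.linalg.inv(cov)

def novelty_score(x, mu, cov_inv):
    # Squared Mahalanobis distance: larger = more novel / more defect-like
    d = x - mu
    return float(d @ cov_inv @ d)

rng = np.random.default_rng(1)
ref = rng.normal(0, 1, size=(500, 4))   # non-defective reference features
mu, ci = fit_one_class(ref)
normal = rng.normal(0, 1, size=4)       # in-distribution sample
defect = np.full(4, 6.0)                # far from the reference distribution
assert novelty_score(defect, mu, ci) > novelty_score(normal, mu, ci)
```

In practice a threshold on this score separates non-defective from novel (defective) examples.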
When using multi-camera matching techniques for 3D reconstruction, there is usually a trade-off between the quality of the computed depth map and the speed of the computation. Whereas high-quality matching methods take several seconds to several minutes to compute a depth map for one set of images, real-time methods achieve only low-quality results. In this paper, we present a multi-camera matching method that runs in real time and yields high-resolution depth maps. Our method is based on a novel multi-level combination of normalized cross correlation, matching windows deformed according to the multi-level depth map information, and sub-pixel-precise disparity maps. The whole process is implemented entirely on the GPU. With this approach, we can process four 0.7-megapixel images into a full-resolution 3D depth map in 129 milliseconds. Our technique is tailored to the recognition of non-technical shapes, because our target application is face recognition.
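The normalized cross correlation at the heart of the matching can be sketched as follows; this is plain single-window NCC, on which the paper's multi-level scheme with deformed windows builds:

```python
import numpy as np

def ncc(patch_a, patch_b):
    # Zero-mean normalized cross correlation of two equally sized patches;
    # returns a score in [-1, 1], invariant to brightness and contrast changes.
    a = patch_a - patch_a.mean()
    b = patch_b - patch_b.mean()
    denom = np.sqrt((a**2).sum() * (b**2).sum())
    return float((a * b).sum() / denom) if denom > 0 else 0.0

a = np.arange(25, dtype=float).reshape(5, 5)
assert abs(ncc(a, 2 * a + 10) - 1.0) < 1e-12   # invariant to gain and offset
assert abs(ncc(a, -a) + 1.0) < 1e-12           # anti-correlated patches
```

In stereo or multi-camera matching, this score is evaluated for candidate disparities and the disparity maximizing it is selected per pixel.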
This paper broadens the resource-based approach to explaining survival of new technology-based firms (NTBFs) by focusing on the entrepreneur's ability to transform resources in response to triggers resulting from market interactions. Network theory is used to define a construct that allows determining the status of venture emergence (VE). The operationalization of the VE construct is built on the firm's value network maturity in the four market dimensions: customer, investor, partner, and human resources. Business plans of NTBFs represent the artifact that contains this data in the form of transaction relation descriptions. Using content analysis, a multi-step combined human and computer coding process has been developed to empirically determine NTBFs' status of VE. Results of the business plan analysis suggest that the level of transaction relations allows conclusions to be drawn on the status of VE. Moreover, applying the developed process, a business plan coding test shows that the transaction-relation-based VE status significantly relates to NTBFs' survival capabilities.
Research Report
(2024)
We quantify the effects of GATT/WTO membership on trade and welfare. Using an extensive database covering manufacturing trade for 186 countries over the period 1980–2016, we find that the average partial equilibrium impact of GATT/WTO membership on trade among member countries is large, positive, and significant. We contribute to the literature by estimating country-specific estimates and find them to vary widely across the countries in our sample with poorer members benefitting more. Using these estimates, we simulate the general equilibrium effects of GATT/WTO on welfare, which are sizable and heterogeneous across members. We show that countries not experiencing positive trade effects from joining GATT/WTO can still gain in terms of welfare, due to lower import prices and higher export demand.
Misbehave like Nobody’s Watching? Investor Attention to Corporate Misconduct and its Implications
(2023)
Healthy and good sleep is a prerequisite for a rested mind and body. Both form the basis for physical and mental health. Healthy sleep is hindered by sleep disorders, the medically diagnosed frequency of which increases sharply from the age of 40. This chapter describes the formal specification of an ongoing practical implementation of a non-invasive system based on biomedical signal processing to support the diagnosis and treatment of sleep-related diseases. The system aims to continuously monitor vital data during sleep in a patient's home environment over long periods by using non-invasive technologies. At the center of the development is the MORPHEUS Box (MoBo), which consists of five main components: the MoBo core, the MoBo-HW, the MoBo algorithm, the MoBo API, and the MoBo app. These synergistic elements aim to support the diagnosis and treatment of sleep-related diseases. Although there are related developments in individual aspects of the system, no comparable approach is known that offers a similar scope of functionality, deployment flexibility, extensibility, or possibility of use by multiple user groups. With the specification provided in this chapter, the MORPHEUS project provides a solid platform, data model, and transmission strategies to bring forward an innovative proposal to measure sleep quality and detect sleep diseases using non-invasive sensors.
With advances in sensor technology and the shift of health measurement from treatment after diagnosis toward detecting abnormalities long before they occur, the approach of turning private spaces into diagnostic spaces has gained much attention. In this work, we designed and implemented a low-cost, compact-form-factor module that can be deployed on car steering wheels as well as on frequently touched objects at home in order to measure physiological signals from the subject's fingertip as well as environmental parameters. We estimated heart rate and SpO2 with errors of 2.83 bpm and 3.52%, respectively. The signal evaluation of skin temperature shows a promising output with respect to environmental recalibration. In addition, the electrodermal activity sensor followed the reference signal appropriately, which indicates the potential for further development and application in stress measurement.
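SpO2 estimation from red and infrared fingertip signals is commonly computed with the empirical ratio-of-ratios formula; here is a minimal sketch using textbook calibration constants, which are an assumption and not the module's actual calibration:

```python
def spo2_ratio_of_ratios(red_ac, red_dc, ir_ac, ir_dc):
    # Classic empirical ratio-of-ratios estimate: R compares the pulsatile
    # (AC) to baseline (DC) absorption of red vs. infrared light.
    # The linear calibration SpO2 = 110 - 25*R is a common textbook
    # approximation, not a device-specific calibration.
    r = (red_ac / red_dc) / (ir_ac / ir_dc)
    return 110.0 - 25.0 * r

# Example: weaker red pulsation relative to infrared -> high saturation
assert abs(spo2_ratio_of_ratios(0.02, 1.0, 0.04, 1.0) - 97.5) < 1e-9
```

Real devices replace the linear formula with per-device calibration curves fitted against reference oximetry.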
The perception of the amount of stress is subjective to every person, and it changes depending on many factors. One factor that has an impact on perceived stress is the emotional state. In this work, we compare the emotional states of 40 German driving students and present different partitions that can be advantageous for applying artificial intelligence and classification. In this way, we evaluate the data quality and prepare it for the specific use. The Perceived Stress Questionnaire (PSQ-20) was employed to assess the level of stress experienced by individuals while participating in a driving simulation for 5 and 25 min. As a result of our analysis, we present a categorisation of various emotional states into intervals, comparing different classifications and facilitating a more straightforward implementation of artificial intelligence for classification purposes.
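The categorisation of scores into intervals can be sketched as simple binning; the interval edges below are hypothetical placeholders, since the paper compares several partitions:

```python
def to_interval(score, edges=(0.33, 0.66)):
    # Bin a normalized stress score into three classes.
    # The edges are illustrative assumptions, not the paper's partition.
    if score < edges[0]:
        return "low"
    if score < edges[1]:
        return "medium"
    return "high"

labels = [to_interval(s) for s in (0.1, 0.5, 0.9)]
assert labels == ["low", "medium", "high"]
```

Discretizing continuous questionnaire scores this way turns the problem into a standard multi-class classification task.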
Evaluation of a Contactless Accelerometer Sensor System for Heart Rate Monitoring During Sleep
(2024)
The monitoring of a patient's heart rate (HR) is critical in the diagnosis of diseases. It also plays an important role in the detection of sleep disorders. Several techniques have been proposed, including using sensors to record physiological signals that are automatically examined and analysed. This work aims to evaluate a contactless HR monitoring system based on an accelerometer sensor during sleep. For this purpose, the oscillations caused by chest movements during heart contractions are recorded by an installation mounted under the bed mattress. The processing algorithm presented in this paper filters the signals and determines the HR. As a result, an average error of about 5 bpm has been documented, i.e., the system can be considered suitable for the intended domain.
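The beat-detection step after filtering can be sketched as thresholded peak picking; the threshold rule and refractory period below are illustrative assumptions, not the paper's exact algorithm:

```python
import numpy as np

def estimate_hr(sig, fs, min_dist_s=0.4):
    """Estimate heart rate by simple peak picking (illustrative sketch).
    min_dist_s enforces a refractory period between detected beats
    (0.4 s here, i.e. at most 150 BPM)."""
    min_dist = int(min_dist_s * fs)
    thresh = sig.mean() + sig.std()       # simple adaptive threshold
    peaks, last = [], -min_dist
    for i in range(1, len(sig) - 1):
        if (sig[i] > thresh and sig[i] >= sig[i - 1]
                and sig[i] > sig[i + 1] and i - last >= min_dist):
            peaks.append(i)
            last = i
    intervals = np.diff(peaks) / fs       # inter-beat intervals in seconds
    return 60.0 / intervals.mean()

fs = 100  # Hz
t = np.arange(0, 30, 1 / fs)
bcg = np.sin(2 * np.pi * 1.2 * t) ** 21   # sharp peaks at 1.2 Hz = 72 BPM
hr = estimate_hr(bcg, fs)
assert abs(hr - 72.0) < 2.0
```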
Infrastructure-making in interwar India was a dynamic, multilayered process involving roads and vehicles in urban and rural sites. One of their strongest playgrounds was Bombay Presidency and the Central Provinces in central and western India. Focusing on this region in the interwar period, this paper analyzes the varied relationship between peasant households and town-centred modernizing agents in the making of road transport infrastructures. The central argument of this paper is about the persistence of bullock carts over motor cars in the region. This persistence was grounded in the specific regional environment, the effects of the 1930s economic depression, and the priorities of social classes. Pinpointing these connections, the paper highlights that “modernization” of infrastructure was not a simple, linear process of progressivist change, nor did it mean the survival of apparently “old” technologies in the modern era. Instead, the paper pays attention to conflicting social complexities, implications, and meanings of the connection between infrastructure and modernity that modernization assumptions often overlook. Here, the paper shows how technological change occurred as a result of real, material class interests pulling infrastructural technology in different directions. This was where and why arguments of road-motor lobbyists and cart advocates eventually clashed, and Gandhian social workers resisted motor transport in defense of peasant interests.
In the digital age, information technology (IT) is a strategic asset for organizations. As a result, IT costs are rising, and cost-effective management of IT is crucial. Nevertheless, organizations still face major challenges, and former studies lack comprehensiveness and depth. The goal of this paper is to generate a deep and holistic view of current management challenges of IT costs. In 15 expert interviews, we identify 23 challenges divided into 7 categories. The main challenges are to ensure transparency of IT cost information, to demonstrate the business impact of IT, and to change the mindset regarding the value of IT; overcoming them requires attention to their interactions. Hence, this paper leads to a better understanding of the issues that IT cost management (ITCM) faces in the digital age and builds a base for future research.
Nowadays, organizations must invest strategically in information technology (IT) and choose the right digital initiatives to maximize their benefit. Nevertheless, Chief Information Officers still struggle to communicate IT costs and demonstrate the business value of IT. The goal of this paper is to support their effective communication. In focus groups, we analyzed how different stakeholders perceive IT costs and the business value of IT as the basis of communication. We identified 16 success factors to establish effective communication. Hence, this paper enables a better understanding of the perception and the operationalization of effective communication.
Prior quantitative research identified, in the text of technology-based ventures' business plans, distinctive performance patterns of evolving business models. Accordingly, interactions with customers, financiers, and people, as well as the status of the patenting strategy, evolved and served as indicators of early-stage tech ventures' performance. With longitudinal data from five venture cases, this research sheds light on the evolving business model by validating the performance patterns and elucidating how and why the ventures' business models evolved. Based on a generic systems theory framework for the indicators, the explanatory case studies re-contextualize the performance patterns, taken from the snapshot perspective of business plans, into the longitudinal perspective of technology-based ventures' life cycle. This research confirms the relation of business model patterns of digital and non-digital ventures to the performance groups of failure, survival, or success and suggests a broader systems perspective for further research.
The digital transformation of business processes and the integration of IT systems lead to opportunities and risks for small and medium-sized enterprises (SMEs), risks that can result from a lack of IT Governance, Risk and Compliance (GRC). The purpose of this paper is to present the Design and Evaluation phases of creating an artefact to reduce these risks. To this end, the Design Science Research approach based on Hevner is used. The artefact is developed by selecting relevant existing frameworks and identifying SME-specific competencies. The method enables IT GRC managers to transfer or adapt the frameworks to an SME organizational structure. The results from ten interviews and three further feedback loops showed that the method can be applied in practice and that a tailoring of established frameworks can take place. Contrary to the previous basic orientation of the research, this paper focuses on the concretization of approaches.