Incremental one-class learning using regularized null-space training for industrial defect detection
(2024)
One-class incremental learning is a special case of class-incremental learning, where only a single novel class is incrementally added to an existing classifier instead of multiple classes. This case is relevant in industrial defect detection scenarios, where novel defects usually appear during operation. In this scenario, existing rolled-out classifiers must be updated incrementally with only a few novel examples. In addition, it is often required that the base classifier must not be altered due to approval and warranty restrictions. While simple finetuning often gives the best performance across old and new classes, it comes with the drawback of potentially losing performance on the base classes (catastrophic forgetting [1]). Simple prototype approaches [2] work without changing existing weights and perform very well when the classes are well separated but fail dramatically when they are not. In theory, null-space training (NSCL) [3] should retain the base classifier entirely, as parameter updates are restricted to the null space of the network with respect to existing classes. However, as we show, this technique promotes overfitting in the case of one-class incremental learning. In our experiments, we found that unconstrained weight growth in the null space is the underlying issue, leading us to propose a regularization term (R-NSCL) that penalizes the magnitude of amplification. The regularization term is added to the standard classification loss and stabilizes null-space training in the one-class scenario by counteracting overfitting. We test the method's capabilities on two industrial datasets, namely AITEX and MVTec, and compare the performance to state-of-the-art algorithms for class-incremental learning.
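The core mechanism described above, restricting weight updates to the null space of the base-class features while penalizing the update's magnitude, can be sketched in a few lines. This is an illustrative toy version with made-up dimensions and names (`F`, `delta`, the 0.01 penalty), not the paper's implementation:

```python
import numpy as np

# Updates delta with F @ delta == 0 leave the base-class logits of a
# frozen backbone unchanged; projecting every step into null(F) and
# shrinking delta mimics the regularized null-space training idea.
rng = np.random.default_rng(0)
F = rng.normal(size=(5, 8))                 # base-class feature matrix (frozen)
P = np.eye(8) - np.linalg.pinv(F) @ F       # projector onto null(F)

delta = np.zeros((8, 1))                    # incremental weight update
for _ in range(20):
    grad = rng.normal(size=(8, 1))          # stand-in for the task gradient
    delta -= 0.1 * (P @ grad + 0.01 * delta)  # projected step + shrinkage
```

Because both the projected gradient and the shrinkage term lie in the null space, the update never perturbs the base classes, while the penalty counteracts the unconstrained weight growth described above.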
Multi-camera matching techniques for 3D reconstruction usually face a trade-off between the quality of the computed depth map and the speed of the computation. Whereas high-quality matching methods take several seconds to several minutes to compute a depth map for one set of images, real-time methods achieve only low-quality results. In this paper we present a multi-camera matching method that runs in real time and yields high-resolution depth maps. Our method is based on a novel multi-level combination of normalized cross correlation, deformed matching windows based on the multi-level depth map information, and sub-pixel precise disparity maps. The whole process is implemented completely on the GPU. With this approach we can process four 0.7-megapixel images in 129 milliseconds into a full-resolution 3D depth map. Our technique is tailored for the recognition of non-technical shapes, because our target application is face recognition.
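The building block named in the abstract, normalized cross correlation, is standard and can be stated compactly. A minimal sketch (the multi-level scheme, deformed windows, and GPU implementation are omitted):

```python
import numpy as np

def ncc(a, b, eps=1e-9):
    """Normalized cross correlation of two equally sized image patches.

    Returns a score in [-1, 1]: ~1 for matching patches, ~-1 for
    inverted ones; invariant to brightness offset and contrast scale.
    """
    a = a - a.mean()
    b = b - b.mean()
    return float((a * b).sum() / (np.sqrt((a * a).sum() * (b * b).sum()) + eps))
```

In a matcher, this score would be evaluated for candidate disparities and the maximum taken; the brightness/contrast invariance is why NCC is preferred over plain sum-of-squared-differences for multi-camera setups.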
This paper broadens the resource-based approach to explaining the survival of new technology-based firms (NTBFs) by focusing on the entrepreneur's ability to transform resources in response to triggers resulting from market interactions. Network theory is used to define a construct that allows determining the status of venture emergence (VE). The operationalization of the VE construct is built on the firm's value network maturity in the four market dimensions: customer, investor, partner, and human resource. Business plans of NTBFs represent the artifact that contains these data in the form of transaction relation descriptions. Using content analysis, a multi-step combined human and computer coding process has been developed to empirically determine NTBFs' status of VE. Results of the business plan analysis suggest that the level of transaction relations allows conclusions to be drawn about the status of VE. Moreover, applying the developed process, a business plan coding test shows that the transaction-relation-based VE status relates significantly to NTBFs' survival capabilities.
Misbehave like Nobody’s Watching? Investor Attention to Corporate Misconduct and its Implications
(2023)
With the advancement of sensor technology and the shift of health measurement from treatment after diagnosis to the detection of abnormalities long before they occur, the approach of turning private spaces into diagnostic spaces has gained much attention. In this work, we designed and implemented a low-cost, compact form factor module that can be deployed on the steering wheel of cars as well as on frequently touched objects at home in order to measure physiological signals from the fingertip of the subject as well as environmental parameters. We estimated the heart rate and SpO2 with errors of 2.83 bpm and 3.52%, respectively. The signal evaluation of skin temperature shows a promising output with respect to environmental recalibration. In addition, the electrodermal activity sensor followed the reference signal appropriately, which indicates the potential for further development and application in stress measurement.
The perception of the amount of stress is subjective to every person, and it changes depending on many factors. One of the factors that has an impact on perceived stress is the emotional state. In this work, we compare the emotional states of 40 German driving students and present different partitions that can be advantageous for applying artificial intelligence and classification. In this way, we evaluate the data quality and prepare the data for its specific use. The Perceived Stress Questionnaire (PSQ-20) was employed to assess the level of stress experienced by individuals while participating in a driving simulation for 5 and 25 minutes. As a result of our analysis, we present a categorisation of various emotional states into intervals, comparing different classifications and facilitating a more straightforward application of artificial intelligence for classification purposes.
Evaluation of a Contactless Accelerometer Sensor System for Heart Rate Monitoring During Sleep
(2024)
Monitoring a patient's heart rate (HR) is critical in the diagnosis of diseases, and it also plays an important role in the detection of sleep disorders. Several techniques have been proposed, including using sensors to record physiological signals that are automatically examined and analysed. This work aims to evaluate a contactless HR monitoring system based on an accelerometer sensor during sleep. For this purpose, the oscillations caused by chest movements during heart contractions are recorded by an installation mounted under the bed mattress. The processing algorithm presented in this paper filters the signals and determines the HR. As a result, an average error of about 5 bpm has been documented, i.e., the system can be considered suitable for the intended application domain.
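As a rough illustration of the final step of such a pipeline, the heart rate can be read off an already filtered cardiac oscillation by counting peaks. The function name and synthetic signal below are illustrative assumptions, not the paper's algorithm:

```python
import numpy as np

def estimate_hr(signal, fs):
    """Crude HR estimate: count local maxima above a threshold in a
    (pre-filtered) cardiac oscillation and convert to beats per minute."""
    thr = 0.5 * signal.max()
    peaks = sum(
        1 for i in range(1, len(signal) - 1)
        if signal[i] > thr and signal[i] >= signal[i - 1] and signal[i] > signal[i + 1]
    )
    return 60.0 * peaks * fs / len(signal)

fs = 100
t = np.arange(0, 30, 1 / fs)           # 30 s of synthetic "chest" data
sig = np.sin(2 * np.pi * 1.2 * t)      # 1.2 Hz oscillation ~ 72 bpm
print(round(estimate_hr(sig, fs)))     # → 72
```

A real ballistocardiography signal would first need bandpass filtering to isolate the cardiac band before such peak counting becomes meaningful, which is what the abstract's filtering stage provides.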
In the digital age, information technology (IT) is a strategic asset for organizations. As a result, IT costs are rising, and the cost-effective management of IT is crucial. Nevertheless, organizations still face major challenges, and former studies lack comprehensiveness and depth. The goal of this paper is to generate a deep and holistic view of current management challenges of IT costs. In 15 expert interviews, we identify 23 challenges divided into 7 categories. The main challenges are to ensure transparency of IT cost information, to demonstrate the business impact of IT, and to change the mindset regarding the value of IT; overcoming them requires attention to their interactions. Hence, this paper leads to a better understanding of the issues that IT cost management (ITCM) faces in the digital age and builds a base for future research.
Nowadays, organizations must invest strategically in information technology (IT) and choose the right digital initiatives to maximize their benefit. Nevertheless, Chief Information Officers still struggle to communicate IT costs and demonstrate the business value of IT. The goal of this paper is to support their effective communication. In focus groups, we analyzed how different stakeholders perceive IT costs and the business value of IT as the basis of communication. We identified 16 success factors to establish effective communication. Hence, this paper enables a better understanding of the perception and the operationalization of effective communication.
Prior quantitative research identified in the text of technology-based ventures' business plans distinctive performance patterns of evolving business models. Accordingly, interactions with customers, financiers, and people and the patenting strategy's status evolved and served as indicators of early-stage tech ventures' performance. With longitudinal data from five venture cases, this research sheds light on the evolving business model by validating the performance patterns, and elucidating how and why the ventures' business models evolved. Based on a generic systems theory framework for the indicators, the explanatory case studies re-contextualize the performance patterns taken from the snapshot perspective of business plans to the longitudinal perspective of technology-based ventures' life-cycle. This research confirms the relation of business model patterns of digital and non-digital ventures to the performance groups of failure, survival, or success and suggests a broader systems perspective for further research.
The digital transformation of business processes and the integration of IT systems lead to opportunities and risks for small and medium-sized enterprises (SMEs), risks that can arise from a lack of IT Governance, Risk and Compliance (GRC). The purpose of this paper is to present the design and evaluation phases of creating an artefact to reduce these risks. For this, the Design Science Research approach based on Hevner is used. The artefact is developed by selecting relevant existing frameworks and identifying SME-specific competencies. The method enables IT GRC managers to transfer or adapt the frameworks to an SME organizational structure. The results from ten interviews and three further feedback loops showed that the method can be applied in practice and that a tailoring of established frameworks can take place. Contrary to the previous basic orientation of the research, this paper focuses on the concretization of approaches.
The random matrix approach is a robust algorithm for filtering the mean and covariance matrix of noisy observations of a dynamic object. Afterward, virtual measurement models can be used to iteratively find the extent parameters of an object that would cause the same statistical moments within their measurements. In previous work, this was limited to elliptical targets and contour measurements only. In this paper, we introduce the parallel use of elliptical, triangular, and rectangular-shaped virtual measurement models and a shape classification that selects the model that best fits the measurements. The measurement likelihood is modeled either via ray tracing, a uniform or normal spatial distribution over the object's extent, or a combination of those. The results show that the extent estimation works precisely and that the classification accuracy highly depends on the measurement noise.
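The idea behind a virtual measurement model can be illustrated for the elliptical case with a uniform spatial distribution: the second moment of the measurements determines the extent. A toy sketch with illustrative numbers, not the paper's filter:

```python
import numpy as np

# For measurements spread uniformly over an axis-aligned ellipse with
# semi-axes a and b, the spatial covariance is diag(a^2, b^2) / 4, so
# the extent can be recovered from the filtered second moment.
rng = np.random.default_rng(1)
cand = rng.uniform(-1, 1, size=(400_000, 2))
pts = cand[(cand ** 2).sum(axis=1) <= 1] * [4.0, 2.0]  # uniform on ellipse

cov = np.cov(pts.T)                  # stands in for the filtered covariance
a_hat, b_hat = 2 * np.sqrt(np.diag(cov))
print(round(a_hat, 1), round(b_hat, 1))   # → 4.0 2.0
```

The triangular and rectangular models of the paper follow the same principle with different moment-to-extent mappings; this inversion step is what "virtual measurement model" refers to here.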
Regional economies clearly benefit from thriving entrepreneurial ecosystems. However, ecosystems are not yet entirely gender-inclusive and therefore are not tapping their full potential. This is most critical with respect to technology-based entrepreneurship which features the largest gender imbalance. Despite the considerably growing amount of literature in the two research fields of female entrepreneurship and entrepreneurial ecosystems, the intersection of the two areas has not yet been outlined. We depict the state of knowledge with a structured review of the literature highlighting bibliometric information, methods used, and the main topics addressed in current articles. From there, recommendations for future research are derived.
Think BIQ: Gender Differences, Entrepreneurship Support and the Quality of Business Idea Description
(2023)
Entrepreneurship support, its influencing factors, and female entrepreneurship are recently discussed topics with great relevance for society and politics. However, research on the subject has been divergent in its results and lacks a focus on the impact of support programs' characteristics concerning different types of entrepreneurs. Thus, we conduct a fuzzy-set Qualitative Comparative Analysis of entrepreneurship support characteristics, aiming to shed light on possible gender differences occurring in respective programs. We investigate the quality of business idea descriptions, as a predecessor of a high-potential business model, operationalized using inter alia causation and effectuation theory and social role theory as possible explanations. In our fuzzy-set Qualitative Comparative Analysis of a sample of 911 Norwegian ventures, we find a variety of differences related to the entrepreneurs' gender. For instance, financial support combined with a well-described key contribution or careful planning seems to be a more important antecedent for female entrepreneurs' business idea quality than for male entrepreneurs'. Moreover, it seems a well-described key contribution has a positive effect on the outcome variable in most cases. Another interesting finding concerns the entrepreneurs' network partners, where we found evident gender differences in our combinations. Female entrepreneurs seemingly benefited from rather small networks, and males from big networks, although the former possess larger networks in the sample. In conclusion, we find that gender differences in combinations of entrepreneurship support for high business idea quality still occur even in a country like Norway, calling for an adaptation of the provided support and environment.
Strategic renewal and the development of new types of innovation pose special challenges to established small and medium-sized companies. This paper aims to answer the questions of what the underlying mechanisms of these challenges are and which approaches might help to properly counteract them. This case study investigates the strategic renewal process and its corresponding interventions in a high-tech SME over a four-year period. We analyse the findings in relation to existing frameworks for dynamic capabilities and strategic learning and provide new recommendations for practice and future research.
Research credits corporate entrepreneurship (CE) with enabling established companies to create new types of innovation. Scholars have focused on the organizational design of CE activities, proposing specific organizational units. These semi-autonomous units create a tense management situation between the core organization and its CE activities. Management and organization research considers control a key managerial function in this regard. However, control has received limited research attention regarding CE units, leaving design issues for appropriate control of CE units unanswered. In this study, we link management control and CE to illustrate how control is understood in the context of CE. For this, we scanned the CE literature to identify underlying attributes and characteristics that allow control for CE to be specified. In a first round, we identified 11 attributes to describe control for CE activities and to derive future research paths.
Corporate Entrepreneurship (CE) units have become an increasingly important part of established companies’ development activities enabling them to also create more discontinuous innovations. As a result, companies have developed and implemented different forms of CE units, such as corporate accelerators, incubators, startup supplier programs, and corporate venture capital. Driven by the need to innovate, companies have even begun to use multiple CE units simultaneously. However, this has not been empirically investigated yet. Thus, with this study, we aim to shed some light on this by investigating the parallel use of multiple CE units in the German business landscape. We conducted an extensive desk research, combining, coding, and analyzing different sources. We found that 55 out of 165 large established companies have multiple CE units, which allowed us to characterize the parallel use and identify differences and similarities, e.g., in terms of industry, company size, and CE forms implemented. We conclude by presenting different implications for both practice and research and by pointing out directions for future research.
Spatial modulation (SM) is a low-complexity multiple-input/multiple-output transmission technique that combines index modulation and quadrature amplitude modulation for wireless communications. In this work, we consider the problem of link adaptation for generalized spatial modulation (GSM) systems that use multiple active transmit antennas simultaneously. Link adaptation algorithms require a real-time estimate of the link quality of the time-variant communication channels, e.g., by means of estimating the mutual information. However, determining the mutual information of SM is challenging because no closed-form expressions have been found so far. Recently, multilayer feedforward neural networks were applied to compute the achievable rate of an index modulation link; however, only a small SM system with two transmit and two receive antennas was considered. In this work, we take a similar approach but investigate larger GSM systems with multiple active antennas. We analyze the portions of mutual information related to antenna selection and the IQ modulation process, which depend on the GSM variant and the signal constellation.
Reliability is a crucial aspect of non-volatile NAND flash memories, and it is essential to thoroughly analyze the channel to prevent errors and ensure accurate readout. Estimating the read reference voltages (RRVs) is a significant challenge due to the multitude of physical effects involved. The question arises as to which features are useful and necessary for RRV estimation. Various possible features require specialized hardware or specific readout techniques to be usable. In contrast, we consider sparse histograms based on the decision thresholds for hard-input and soft-input decoding. These offer a distinct advantage, as they are derived directly from the raw readout data without the need for decoding. This paper focuses on an information-theoretic study of different features, especially the exploration of the mutual information (MI) between the feature vector and the RRV. In particular, we investigate the dependency of the MI on the resolution of the histograms. With respect to RRV estimation, sparse histograms provide sufficient information for near-optimum estimation.
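Mutual information between a discrete feature (such as a histogram bin index) and a target can be computed directly from joint counts. A generic sketch, not tied to the paper's NAND-flash data:

```python
import numpy as np

def mutual_information(joint):
    """MI in bits from a 2-D array of joint counts."""
    p = joint / joint.sum()                    # joint distribution
    px = p.sum(axis=1, keepdims=True)          # marginal of the rows
    py = p.sum(axis=0, keepdims=True)          # marginal of the columns
    nz = p > 0                                 # skip zero-probability cells
    return float((p[nz] * np.log2(p[nz] / (px @ py)[nz])).sum())

# Perfectly dependent variables: MI equals the entropy (here 1 bit) ...
print(mutual_information(np.array([[5, 0], [0, 5]])))  # → 1.0
# ... independent variables: MI is 0.
print(mutual_information(np.array([[5, 5], [5, 5]])))  # → 0.0
```

Coarsening the histogram resolution merges rows of the joint count table, which can only decrease the MI; this is the dependency on resolution that the paper investigates.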
Sleep is a multi-dimensional influencing factor on physical health, cognitive function, emotional well-being, mental health, daily performance, and productivity. Barriers such as time consumption, invasiveness, and expense have caused a gradual shift in sleep monitoring from the traditional, standardized in-lab approach, e.g., polysomnography (PSG), to unobtrusive and noninvasive in-home sleep monitoring, yet further improvement is required. Despite an increasing interest in fiber-optic methods for cardiorespiratory estimation, traditional mechanical sensors, consisting of force-sensitive resistors (FSR), lead zirconate titanate piezoelectrics (PZT), and accelerometers, still serve as the dominant approach. Part of their popularity lies in reduced system complexity, low expense, easy maintenance, and user-friendliness. However, care must be taken regarding the performance of such sensors with respect to accuracy and calibration.
Polysomnography (PSG) is the clinical standard procedure and reference for sleep measurement and the classification of individual sleep stages. Alternative approaches to this elaborate procedure could offer several advantages if the measurements were carried out in a more comfortable way. The main goal of this research study is to develop an algorithm for the automatic classification of sleep stages that uses only movement and respiration signals.
Analysing observability is an important step in the process of designing state feedback controllers. While for linear systems observability has been widely studied and easy-to-check necessary and sufficient conditions are available, for nonlinear systems such a general recipe does not exist and different classes of systems require different techniques. In this paper, we analyse observability for an industrial heating process where a stripe-shaped plastic workpiece is moving through a heating zone where it is heated up to a specific temperature by applying hot air to its surface through a nozzle. A modeling approach for this process is briefly presented, yielding a nonlinear Ordinary Differential Equation model. Sensitivity-based observability analysis is used to identify unobservable states and make suggestions for additional sensor locations. In practice, however, it is not possible to place additional sensors, so the available measurements are used to implement a simple open-loop state estimator with offset compensation, and numerical and experimental results are presented.
Requirements Engineering in Business Analytics for Innovation and Product Lifecycle Management
(2014)
Considering Requirements Engineering (RE) in business analytics, involving market-oriented management, computer science, and statistics, may be valuable for managing innovation in Product Lifecycle Management (PLM). RE and business analytics can help maximize the value of corporate product information throughout the value chain, starting with innovation management. Innovation and PLM must address 1) big data, 2) development of well-defined business goals and principles, 3) cost/benefit analysis, 4) continuous change management, and 5) statistical and reporting science. This paper is a positioning note that addresses some business case considerations for analytics projects involving PLM data, patents, and innovations. We describe a number of research challenges in RE that address business analytics when large amounts of PLM data are to be turned into a successful market-oriented innovation management strategy. We provide a draft of how to address these research challenges.
Digitalization is one of the most frequently discussed topics in industry. New technologies, platform concepts, and integrated data models enable disruptive business models and drive changes in organization, processes, and tools. The goal is to make a company more efficient, productive, and ultimately profitable. However, many companies face the challenge of how to approach digital transformation in a structured way and realize these potential benefits. What they realize is that Product Lifecycle Management plays a key role in digitalization initiatives, as object, structure, and process management along the life cycle is a foundation for many digitalization use cases. The introduced maturity model for assessing a firm's capabilities along the product lifecycle has been used almost two hundred times. It allows a company to compare its performance with an industry-specific benchmark to reveal individual strengths and weaknesses. Furthermore, an empirical study produced multidimensional correlation coefficients, which identify dependencies between business model characteristics and the maturity level of capabilities.
Nowadays, established companies use Corporate Entrepreneurship (CE) as a means to create discontinuous innovations. Many companies thereby even implement multiple CE units that typically involve several entrepreneurial activities. This explorative study aimed to identify the reasons why established companies implement multiple CE units concurrently. In conducting a comparative case study with eight companies from different industries, valuable insights for science and practice were gained. We provide an overview of 11 different reasons for implementing multiple CE units. This shows that the combination of CE units used by companies differs depending on the reason. It further allowed us to derive general approaches taken by established companies to the implementation of CE units. Last, we identify the concept of co-specialization as a central driver explaining the need to set up multiple units. We conclude by indicating implications and subjects for future research.
Entrepreneurial motivations have become a frequently discussed topic in entrepreneurship research. However, few studies have investigated entrepreneurs' motivation across gender and different venture types, and those that have tend to rely on surveys or case studies. Using a text mining approach, we investigate whether there are differences between male and female entrepreneurs' motivation and whether female entrepreneurs' motivation differs across venture types. This text mining approach, in combination with a qualitative content analysis, was used to examine unique motivational data from 472 entrepreneurial projects from three different entrepreneurship support programs in Norway and Sweden. Findings suggest that the motivations of female and male entrepreneurs differ only slightly, while the motivation of female entrepreneurs differs according to venture type. We thus contribute to a better understanding of entrepreneurial motivation and of why female entrepreneurs start a business. This can, for instance, benefit the improvement of future female entrepreneurship support programs.
Corporate Entrepreneurship (CE) has now evolved into an imperative innovation practice of established companies. Despite organizational design models for CE activities and companies' frequent initiation of new activities, effectively managing them remains a challenging endeavor, which results in disappointment about the outcomes of CE and its early termination. We assume specific types of goals for CE to be one element of this unresolved management issue. While both practice and literature address goals in different contexts, no uniform picture has emerged so far. Although goals are commonly used to categorize CE activities, they seldom seem to be the core subject of investigation. Based on this preliminary analysis and consolidation, we put the goals of CE in focus. In a systematic literature review, we reveal aspects of goals to unmask the different types of goals and their underlying dimensions and characteristics. Our review contributes to a better understanding of goals by (1) organizing relevant literature on goals of CE in a specific classification process; (2) describing dimensions and attributes for a systematic classification of CE goals; and (3) providing a framework showing differences of goals for the CE context. We conclude with a discussion and hints for future research paths.
This paper compares novel methods to efficiently include input constraints in the nonlinear Model Predictive Path Integral (MPPI) approach. The MPPI algorithm solves stochastic optimal control problems and is based on sampled trajectories. MPPI originates from the physical path integral framework. Sample-based algorithms are characterized by the fact that they can be computed in parallel and offer the possibility to handle discontinuous dynamics and cost functions. However, using standard MPPI, the input costs in the Lagrange term have to be chosen to be quadratic. This is unfavorable for various real applications. Further, in standard nonlinear model predictive control (NMPC) approaches, hard box constraints on the control input trajectory can be treated directly. In this contribution, novel architectures based on integrator action are compared. The investigated input-constrained MPPI controllers were tested on an autonomous self-balancing vehicle; therefore, both simulation and real-world experiments are presented. This paper addresses the question of how the MPPI algorithm can be further developed to consider input box constraints. Videos of the self-balancing vehicle are available at: https://tinyurl.com/mvn8j7vf
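For context, the core MPPI update with input box constraints handled by clamping the sampled inputs can be sketched as follows. This is a generic toy version with hypothetical names and a 1-D system, not one of the integrator-based architectures compared in the paper:

```python
import numpy as np

def mppi_step(x0, u_nom, dyn, cost, n_samples=256, sigma=0.5,
              lam=1.0, u_min=-1.0, u_max=1.0, seed=0):
    """One MPPI update: sample perturbed input sequences, clamp them to
    the box [u_min, u_max], roll out, and average with exponentiated-cost
    weights."""
    rng = np.random.default_rng(seed)
    H = len(u_nom)
    U = np.clip(u_nom + sigma * rng.normal(size=(n_samples, H)), u_min, u_max)
    costs = np.empty(n_samples)
    for k in range(n_samples):
        x, c = x0, 0.0
        for t in range(H):                 # roll out the sampled sequence
            x = dyn(x, U[k, t])
            c += cost(x, U[k, t])
        costs[k] = c
    w = np.exp(-(costs - costs.min()) / lam)   # path-integral weights
    w /= w.sum()
    return w @ U   # weighted average of clamped samples stays in the box
```

Because every sampled sequence is clamped before the rollout, the weighted average automatically satisfies the box constraint; this simple clamping is one baseline against which integrator-based designs can be judged.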
Comparison of Data-Driven Modeling and Identification Approaches for a Self-Balancing Vehicle
(2023)
This paper gives a systematic comparison of different state-of-the-art modeling approaches and the corresponding parameter identification processes for a self-balancing vehicle. In detail, a nonlinear grey-box model, its extension to consider friction effects, a parametric black-box model based on regression neural networks, and a hybrid approach are presented. The parameters of the models are identified by solving a nonlinear least squares problem. The training, validation, and test datasets are collected in full-scale experiments using a self-balancing vehicle. The performance of the different models used for ego-motion prediction is compared in full-scale scenarios as well. The investigated model architectures can be used to improve both simulation environments and model-based controller design. This paper shows the upsides and downsides arising from the different modeling approaches. Videos showing the self-balancing vehicle in action are available at: https://tinyurl.com/mvn8j7vf
A key objective of this research is to take a more detailed look at a central aspect of resilience in small and medium-sized enterprises (SMEs). A literature review and expert interviews were used to investigate which factors have an impact on the innovative capacity of start-ups and whether these can also be adapted by SMEs. First of all, it must be stated that there are considerable structural and process-related differences between start-ups and SMEs. These can considerably inhibit cooperation between the two forms of enterprise. However, in the same context, success factors and issues in the start-up sector could also be identified that can improve cooperation with SMEs. These and other findings are then discussed in both an economic and an academic context. This article was written as part of the research activities of the Smart Services Competence Centre (proper name: Kompetenzzentrum Smart Services), a central contact point for all questions in the area of smart service digitalization in Baden-Wuerttemberg. Here, companies can obtain information about various digital technologies and take advantage of various measures for the development of new ideas and innovative services (Kompetenzzentrum Smart Services BW: Über das Kompetenzzentrum, 2021).
Measuring cardiorespiratory parameters during sleep using non-contact sensors and the ballistocardiography technique has received much attention as a low-cost, unobtrusive, and non-invasive method. Designing a user-friendly, simple-to-use, and easy-to-deploy system that remains less error-prone is still open and challenging due to the complex morphology of the signal. In this work, using four force-sensitive resistor sensors, we conducted a study with four sensor distributions in order to simplify the complexity of the system by identifying the region of interest for heartbeat and respiration measurement. The sensors are deployed under the mattress and attached to the bed frame without any interference with the subjects. The four distributions comprise two linear horizontal, one linear vertical, and one square arrangement, covering the region that influences cardiorespiratory activities. We recruited 4 subjects and acquired data in four regular sleeping positions, each for a duration of 80 seconds. The signal processing was performed using the discrete wavelet transform (bior3.9) with a smoothing level of 4 as well as bandpass filtering. The results indicate that we achieved mean absolute errors of 2.35 and 4.34 for respiration and heartbeat, respectively. The results suggest the efficiency of a triangle-shaped structure of three sensors for measuring heartbeat and respiration parameters in all four regular sleeping positions.
Monitoring heart rate and breathing is essential to understanding the physiological processes in sleep analysis. Polysomnography (PSG) systems have traditionally been used for sleep monitoring, but alternative methods can help make sleep monitoring more portable in someone's home. This study conducted a series of experiments to investigate pressure sensors placed under the bed as an alternative to PSG for monitoring heart rate and breathing during sleep. A further set of experiments involved adding small rubber domes (transparent and black) glued to the pressure sensor. The resulting data were compared with the PSG system to determine the accuracy of the pressure sensor readings. The study found that the pressure sensor provided reliable data for extracting heart rate and respiration rate, with mean absolute errors (MAE) of 2.32 and 3.24 for respiration and heart rate, respectively. However, the addition of the small rubber hemispheres did not significantly improve the accuracy of the readings, with MAEs of 2.3 breaths per minute and 7.56 beats per minute for respiration rate and heart rate, respectively. The findings suggest that pressure sensors placed under the bed may serve as a viable alternative to traditional PSG systems for monitoring heart rate and breathing during sleep, providing a more comfortable and non-invasive method of sleep monitoring. The addition of small rubber domes, however, did not significantly enhance the accuracy of the readings, indicating that it may not be a worthwhile extension of the pressure sensor system.
Sleep is an essential part of human existence, as we spend approximately a third of our lives in this state. Sleep disorders are common conditions that can affect many aspects of life. They are diagnosed in special laboratories with a polysomnography system, a costly procedure requiring considerable effort from the patient. Several systems have been proposed to address this situation by performing the examination and analysis at the patient's home, using sensors to detect physiological signals that are automatically analysed by algorithms. This work aims to evaluate a contactless respiratory recording system based on an accelerometer sensor for sleep apnea detection. For this purpose, an installation mounted under the bed mattress records the oscillations caused by chest movements during breathing. The presented processing algorithm filters the obtained signals and detects the presence of apnea events. The performance of the developed system and apnea event detection algorithm (average accuracy, specificity and sensitivity of 94.6%, 95.3%, and 93.7%, respectively) confirms the suitability of the proposed method and system for further ambulatory and in-home use.
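The reported accuracy, specificity, and sensitivity follow from the confusion-matrix counts of detected apnea events. A small self-contained sketch with hypothetical per-epoch labels (not the study's data):

```python
def detection_metrics(true_events, predicted_events):
    """Accuracy, sensitivity and specificity from binary per-epoch labels."""
    tp = sum(t == 1 and p == 1 for t, p in zip(true_events, predicted_events))
    tn = sum(t == 0 and p == 0 for t, p in zip(true_events, predicted_events))
    fp = sum(t == 0 and p == 1 for t, p in zip(true_events, predicted_events))
    fn = sum(t == 1 and p == 0 for t, p in zip(true_events, predicted_events))
    accuracy = (tp + tn) / len(true_events)
    sensitivity = tp / (tp + fn) if tp + fn else 0.0
    specificity = tn / (tn + fp) if tn + fp else 0.0
    return accuracy, sensitivity, specificity

# hypothetical per-epoch labels: 1 = apnea event, 0 = normal breathing
truth = [1, 1, 0, 0, 0, 1, 0, 0]
pred  = [1, 0, 0, 0, 0, 1, 1, 0]
acc, sens, spec = detection_metrics(truth, pred)  # 0.75, ~0.667, 0.8
```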
Healthy sleep is one of the prerequisites for good physical and mental condition, including general well-being. Unfortunately, several sleep disorders can negatively affect this. One of the most common is sleep apnoea, in which breathing is impaired. Studies have shown that this disorder often remains undiagnosed. To avoid this, it is essential to develop a system that can be widely used in a home environment to detect apnoea and to monitor changes once therapy has been initiated. The conceptualisation of such a system is the main aim of this research. After a thorough analysis of the available literature and the state of the art in this area, a system concept was created comprising the following main components: data acquisition (in two parts), data storage, an apnoea detection algorithm, user and device management, and data visualisation. The modules are interchangeable, and interfaces have been defined for data transfer, most of which operate using the MQTT protocol. System diagrams and detailed component descriptions, including signal requirements and visualisation mockups, have also been developed. The system's design includes the concepts necessary for implementation and can be realised as a prototype in the next phase.
The influence of sleep on human health is enormous, and sleep disorders can accordingly have a negative impact on it. To avoid this, they should be identified and treated in time. For this purpose, objective (with an appropriate device) or subjective (based on perceived values) measurement methods are used for sleep analysis to understand the problem. The aim of this work is to find out whether the two methods can be exchanged for one another and still provide reliable results. In accordance with this goal, a study was conducted with people aged over 65 (a total of 154 night-time recordings) in which both measurement methods were compared. Sleep questionnaires and electronic sleep-assessment devices placed under the mattress were applied to achieve the study aims. The results indicated that a correlation between the two measurement methods could be observed for sleep characteristics such as total sleep time, total time in bed and sleep efficiency. However, there are also significant differences in the absolute values of the two measurement approaches for some subjects and nights, which leads us to conclude that substitution is more plausible for long-term monitoring, where trends matter more than the absolute values of individual nights.
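The reported agreement can be quantified per sleep characteristic with a correlation coefficient, e.g. for total sleep time. A minimal sketch with hypothetical per-night values (not the study's data):

```python
import numpy as np

# hypothetical total sleep time (minutes) per night from both methods
questionnaire = np.array([420, 390, 450, 360, 400, 430])   # subjective
under_mattress = np.array([410, 400, 440, 350, 395, 445])  # objective

# Pearson correlation between the two methods
r = np.corrcoef(questionnaire, under_mattress)[0, 1]
# mean signed difference (objective minus subjective)
bias = np.mean(under_mattress - questionnaire)
```

A high correlation with a non-zero bias would match the paper's conclusion: trends agree even where absolute values differ.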
The principal objective of this study is to investigate the impact of perceived stress on traffic and road safety. Therefore, we designed a study that allows the generation and collection of stress-relevant data. Drivers often experience stress due to a perceived lack of control during the driving process, which can increase the likelihood of traffic accidents, driver errors, and traffic violations. To explore this phenomenon, we used the Perceived Stress Questionnaire (PSQ) to evaluate perceived stress levels during driving simulations and the EPQR questionnaire to determine the driver's personality. With the presented study, participants can be categorised based on their emotional stability and personality traits. Wearable devices were utilised to monitor each participant's instantaneous heart rate (HR) due to their non-intrusive and portable nature. The findings of this study provide an overview of the link between stress and traffic and road safety, and can be utilised for future research and for implementing strategies to reduce road accidents and promote traffic safety.
Development of an expert system to overpass citizens technological barriers on smart home and living
(2023)
Adopting new technologies can be overwhelming, even for people with experience in the field. For the general public, keeping up with new implementations, releases, brands, and enhancements can cause them to lose interest. There is a clear need for central information sources and platforms that provide helpful information about novel smart technologies, assisting users, technicians, and providers with products and technologies. The purpose of these platforms is twofold, as they can gather and share information on interests common to manufacturers and vendors. This paper presents the "Finde-Dein-SmartHome" tool, developed in association with the Smart Home & Living competence center [5] to help users learn about, understand, and purchase available technologies that meet their home automation needs. This tool aims to lower the usability barrier and guide potential customers in clearing their doubts about privacy and pricing. Communities can use the information provided by this tool to identify market trends that could eventually lower costs for providers and incentivize access to innovative home technologies and devices supporting long-term care.
The development of automatic solutions for detecting physiological events of interest is booming. Improvements in the collection and storage of large amounts of healthcare data allow faster and more efficient access to these data. As a result, developing artificial intelligence models for the detection and monitoring of a large number of pathologies is becoming increasingly common in the medical field. In particular, the development of deep learning models for detecting obstructive sleep apnea (OSA) events is at the forefront. Numerous scientific studies focus on model architectures and on the results these models can provide in terms of OSA classification and Apnea-Hypopnea Index (AHI) calculation. However, little attention is paid to other highly relevant aspects that are crucial for the training and performance of the models, among them the set of physiological signals used and the preprocessing tasks prior to model training. This paper covers the essential requirements that must be considered before training a deep learning model for obstructive sleep apnea detection, and reviews solutions that currently exist in the scientific literature by analyzing the preprocessing tasks prior to training.
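A typical preprocessing step before training an OSA detector is segmenting the physiological signals into fixed-length epochs and normalizing each one. A minimal sketch of that step, with an assumed sampling rate and synthetic data (the paper surveys such tasks rather than prescribing this exact one):

```python
import numpy as np

def segment_and_normalize(signal, fs, epoch_s=30.0):
    """Split a physiological signal into fixed-length epochs and z-score each."""
    samples = int(epoch_s * fs)
    n_epochs = len(signal) // samples
    epochs = signal[:n_epochs * samples].reshape(n_epochs, samples)
    mean = epochs.mean(axis=1, keepdims=True)
    std = epochs.std(axis=1, keepdims=True)
    return (epochs - mean) / np.where(std > 0, std, 1.0)  # avoid div by zero

fs = 10.0                                                 # assumed sampling rate
signal = np.random.default_rng(0).normal(size=int(5 * 60 * fs))  # 5 minutes
epochs = segment_and_normalize(signal, fs)                # shape (10, 300)
```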
Recently published nonlinear model-based control approaches achieve impressive performance in complex real-world applications. However, due to model-plant mismatches and unforeseen disturbances, the model-based controller's performance is limited in full-scale applications. In most applications, low-level control loops mitigate the model-plant mismatch and the sensitivity to disturbances. But what is the influence of these low-level control loops? In this paper, we present the model predictive path integral (MPPI) control of a self-balancing vehicle and investigate the influence of subordinate control loops on closed-loop performance. To this end, simulation and full-scale experiments are performed and analyzed. Subordinate control loops empower the MPPI controller because they dampen the influence of disturbances and thus improve the model's accuracy. This is the basis for the successful application of model-based control approaches in real-world systems. All in all, a model is used to design a low-level controller, then its closed-loop behavior is determined, and this model is used within the superimposed MPPI control loop: modeling for control and vice versa.
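MPPI itself samples perturbed control sequences, rolls them out through a model, and combines the perturbations with exponentiated-cost weights. A compact generic sketch on a toy point-mass system (not the paper's self-balancing vehicle model, and without the subordinate control loops discussed above; horizon, noise, and cost are assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)

def dynamics(x, u, dt=0.05):
    """Point mass: state (position, velocity), control = acceleration."""
    pos, vel = x
    return np.array([pos + vel * dt, vel + u * dt])

def mppi_step(x0, u_nom, K=256, lam=1.0, sigma=1.0):
    """One MPPI update of a nominal control sequence u_nom over horizon H."""
    H = len(u_nom)
    eps = rng.normal(0.0, sigma, size=(K, H))      # sampled perturbations
    costs = np.zeros(K)
    for k in range(K):
        x = x0.copy()
        for h in range(H):
            x = dynamics(x, u_nom[h] + eps[k, h])  # roll out the model
            costs[k] += x[0] ** 2 + 0.1 * x[1] ** 2  # drive state to origin
    w = np.exp(-(costs - costs.min()) / lam)       # importance weights
    w /= w.sum()
    return u_nom + w @ eps                         # weighted perturbation update

x = np.array([2.0, 0.0])
u = np.zeros(20)
for _ in range(50):          # receding-horizon loop
    u = mppi_step(x, u)
    x = dynamics(x, u[0])    # apply first control, then shift the sequence
    u = np.roll(u, -1)
    u[-1] = 0.0
```

In a real system, the sampled control would be handed to the subordinate low-level loops rather than applied directly, which is exactly the interaction the paper investigates.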
In 3D extended object tracking (EOT), well-established models exist for tracking the object extent using various shape priors. With these models, however, a single update has to be performed for every measurement, leading to a high computational runtime for high-resolution sensors. In this paper, we address this problem by using various model-independent downsampling schemes, based on distance heuristics and random sampling, as pre-processing before the update. We investigate the methods in a simulated and a real-world tracking scenario using two different measurement models with measurements gathered from a LiDAR sensor. We found that there is considerable potential for speeding up 3D EOT: up to 95% of the measurements could be dropped in our investigated scenarios when using random sampling. Since random sampling can, however, also yield a subset that represents the total set poorly, leading to poor tracking performance, there is still a high demand for further research.
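Random sampling as a pre-processing step simply keeps a small random subset of the measurements before the update. A minimal sketch of dropping 95% of a mock point cloud (sizes and data are illustrative, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(42)

def random_downsample(points, keep_fraction=0.05):
    """Keep a random fraction of the measurements (here 5%, i.e. drop 95%)."""
    n_keep = max(1, int(len(points) * keep_fraction))
    idx = rng.choice(len(points), size=n_keep, replace=False)
    return points[idx]

scan = rng.normal(size=(20000, 3))   # mock LiDAR point cloud (x, y, z)
subset = random_downsample(scan)     # 1000 of 20000 points survive
```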
Foil-air bearings (FABs) are predominantly used for high-speed, oil-free applications. While they offer many advantages, such as low friction losses at high speed, stability and low price, they lack load capacity as well as resistance to friction wear during start-up and coast-down.
The friction losses of FABs have been studied experimentally by many authors. In order to predict the friction and, consequently, the lifespan of a FAB, the start-up and coast-down regimes are modelled in a way that allows for accurate, efficient simulation and later optimisation of lift-off speed and wear characteristics. The proposed simulation method applies the Kirchhoff-Love plate theory to the top foil mapping [20]. This system of differential equations is coupled with the underlying compliant foil to simulate the displacement due to the pressure buildup. Consequently, this coupled system allows for simulation from almost zero revolutions per minute (rpm) up to full speed. The underlying simulation model uses the finite difference method for spatial discretisation and an explicit Runge-Kutta method for temporal discretisation.
Difficulties to overcome include the smooth combination of the various friction regimes across the sliding surfaces as well as the synchronous coupling of the Reynolds, deformation and kinematic equations with their highly non-linear terms. Introducing an exponential pressure component based on Greenwood and Tripp's theory avoids impingement between the rotor and the foil.
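The abstract names an explicit Runge-Kutta scheme for temporal discretisation. A minimal sketch of one classical RK4 step, applied to a toy scalar ODE rather than the coupled foil equations:

```python
import numpy as np

def rk4_step(f, t, y, dt):
    """One explicit classical Runge-Kutta (RK4) step for y' = f(t, y)."""
    k1 = f(t, y)
    k2 = f(t + dt / 2, y + dt / 2 * k1)
    k3 = f(t + dt / 2, y + dt / 2 * k2)
    k4 = f(t + dt, y + dt * k3)
    return y + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

# toy stand-in for the coupled foil/pressure system: y' = -y, y(0) = 1
y, t, dt = np.array([1.0]), 0.0, 0.01
for _ in range(100):          # integrate to t = 1; exact solution exp(-t)
    y = rk4_step(lambda t, y: -y, t, y, dt)
    t += dt
```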
Under certain contact conditions between wheel and rail, self-excited vibrations can be induced that lead to anti-phase rotational oscillations of the wheel discs and high torsional moments in the wheelset axle. Until now, elaborate test runs have been required to determine the maximum torsional moment, since no methods were known that allow a conservative calculation of the torsional moment [1]. In recent years, the following three calculation methods have been investigated in depth to compute the maximum dynamic torsional moment:
- Simulations of complex multi-body systems (MBS)
- Systems of differential equations solved numerically
- Analytical calculation by reduction to a minimal model
In this publication, these calculation methods are presented in more detail, and a comparison of the respective calculated and measured results demonstrates their possibilities as well as their limitations.
In the past years, algorithms for 3D shape tracking using radial functions in spherical coordinates, represented with different methods, have been proposed. However, we have observed that in many dynamic scenarios mainly measurements from the lateral surface of the target can be expected and only a few from the top and bottom parts, leading to an error-prone shape estimate in the top and bottom regions when using a representation in spherical coordinates. We therefore propose to represent the shape of the target using a radial function in cylindrical coordinates, as these only represent regions of the lateral surface, and no information from the top or bottom parts is needed. In this paper, we use a Fourier-Chebyshev double series for 3D shape representation, since a mixture of Fourier and Chebyshev series is a suitable basis for expanding a radial function in cylindrical coordinates. We investigate the method in a simulated and a real-world maritime scenario with a CAD model of the target boat as a reference. We have found that shape representation in cylindrical coordinates has decisive advantages over shape representation in spherical coordinates and should preferably be used if no prior knowledge of the measurement distribution on the surface of the target is available.
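A radial function in cylindrical coordinates can be expanded as r(phi, z) = sum over m, n of (a[m,n] cos(m phi) + b[m,n] sin(m phi)) T_n(z), with z normalised to [-1, 1] along the object's height. A minimal evaluation sketch with hypothetical coefficients (the paper's exact parameterisation may differ):

```python
import numpy as np
from numpy.polynomial.chebyshev import chebval

def radial_fourier_chebyshev(phi, z, a, b):
    """Evaluate r(phi, z) for Fourier coefficients a (cos), b (sin) over
    Chebyshev polynomials T_n(z); a and b have shape (M, N)."""
    M, _ = a.shape
    r = np.zeros(np.broadcast(phi, z).shape)
    for m in range(M):
        # chebval sums the coefficient row over the Chebyshev basis T_n(z)
        r += np.cos(m * phi) * chebval(z, a[m]) + np.sin(m * phi) * chebval(z, b[m])
    return r

# hypothetical coefficients: mean radius 2 with a small elliptic term
a = np.zeros((3, 3))
b = np.zeros((3, 3))
a[0, 0] = 2.0   # mean radius
a[2, 0] = 0.3   # cos(2 phi) term -> elliptic cross-section
phi = np.linspace(0, 2 * np.pi, 8, endpoint=False)
radii = radial_fourier_chebyshev(phi, 0.0, a, b)  # slice at mid-height z = 0
```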
Random matrices are used to filter the center of gravity (CoG) and the covariance matrix of measurements. However, these quantities do not always correspond directly to the position and the extent of the object, e.g. when a lidar sensor is used. In this paper, we propose a Gaussian process regression model (GPRM) to predict the position and extent of the object from the filtered CoG and covariance matrix of the measurements. Training data for the GPRM are generated by a sampling method and a virtual measurement model (VMM). The VMM is a function that generates artificial measurements using ray tracing and allows us to obtain the CoG and covariance matrix that any object would cause. This enables the GPRM to be trained without real data yet still be applicable to real data, thanks to the precise modeling in the VMM. The results show an accurate extent estimation as long as reality behaves like the model and, e.g., lidar measurements only occur on the side facing the sensor.
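The quantities the random-matrix filter works with, CoG and covariance, can be computed directly from a set of artificial measurements, mimicking what the VMM provides. A minimal sketch with mock lidar hits restricted to the sensor-facing side (the data and geometry are illustrative, not the paper's ray-tracing model):

```python
import numpy as np

def measurement_statistics(points):
    """CoG and covariance matrix of a set of measurements, the quantities
    filtered by the random-matrix approach."""
    cog = points.mean(axis=0)
    cov = np.cov(points.T, bias=True)
    return cog, cov

# mock 'virtual measurements': lidar hits only the sensor-facing edge
rng = np.random.default_rng(0)
edge = np.column_stack([np.full(100, -1.0),            # x fixed on near side
                        rng.uniform(-2.0, 2.0, 100)])  # y spread along the edge
cog, cov = measurement_statistics(edge)
# the CoG sits on the visible edge, not at the object's geometric center,
# which is exactly the mismatch the GPRM is trained to correct
```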
This paper aims to apply the basics of the Service-Dominant Logic, especially the concept of creating benefit through serving, to the stationary retail industry. In the industrial context, the shift from a product-driven point of view to a service-driven perspective has been discussed widely. However, there are only a few connections to how this can be applied to the retail sector on a B2C level and how retailers can use smart services to enable customer engagement, loyalty and retention. The expectations of customers towards future stationary retail are changing significantly, as consumers have become used to the comfort of online shopping. Especially the younger generation, Generation Z, seems to have shifted its priorities from the bare purchase of products to an experience- and service-driven approach when shopping over-the-counter. To stay successful in the long term, companies from this sector need to adapt to the expectations of their future main customer group. Therefore, this paper analyses the specific needs of Generation Z, explains how smart services contribute to creating benefit for this customer group, and discusses how this affects the economic sustainability of these firms.