We consider classes of n-by-n sign regular matrices, i.e., of matrices with the property that all their minors of fixed order k have one specified sign or are allowed also to vanish, k = 1, …, n. If the sign is nonpositive for all k, such a matrix is called totally nonpositive. The application of the Cauchon algorithm to nonsingular totally nonpositive matrices is investigated and a new determinantal test for these matrices is derived. Matrix intervals with respect to the checkerboard ordering are also considered. This order is obtained from the usual entry-wise ordering on the set of n-by-n matrices by reversing the inequality sign for each entry in a checkerboard fashion. For some classes of sign regular matrices, it is shown that if the two bound matrices of such a matrix interval both belong to the same class, then all matrices lying between these two bound matrices belong to that class, too.
This work investigates data compression algorithms for applications in non-volatile flash memories. The main goal of the data compression is to minimize the amount of user data such that the redundancy of the error correction coding can be increased and the reliability of the error correction can be improved. A compression algorithm is proposed that combines a modified move-to-front algorithm with Huffman coding. The proposed data compression algorithm has low complexity, but provides a compression gain comparable to the Lempel-Ziv-Welch algorithm.
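The move-to-front stage can be sketched as follows; this is the textbook transform, not the authors' modified variant, and the function names are illustrative. Repeated symbols map to small indices, which a subsequent Huffman coder compresses well:

```python
def mtf_encode(data, alphabet):
    """Move-to-front transform: emit each symbol's current table index,
    then move that symbol to the front of the table."""
    table = list(alphabet)
    indices = []
    for symbol in data:
        i = table.index(symbol)
        indices.append(i)
        table.pop(i)
        table.insert(0, symbol)
    return indices

def mtf_decode(indices, alphabet):
    """Inverse transform: the same table updates recover the symbols."""
    table = list(alphabet)
    data = []
    for i in indices:
        symbol = table.pop(i)
        data.append(symbol)
        table.insert(0, symbol)
    return data
```

For example, `mtf_encode(list("banana"), "abn")` yields mostly small indices because of the repeated symbols, and `mtf_decode` inverts the encoding exactly.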
As part of the course "Nachhaltigkeit im industriellen Umfeld" (Sustainability in an Industrial Environment) in the master's programme in Environmental and Process Engineering at the universities of applied sciences in Konstanz and Ravensburg-Weingarten, a student conference was held in 2015.
Working individually or in teams of two, the students developed conference contributions on the following topics:
- innovations and notable developments in the field of energy generation and conversion
- aspects of closing material cycles and preventing the release of pollutants into the environment
- opportunities and challenges of renewable raw materials in various applications, as well as sustainability topics in agriculture
- different perspectives on the topic of water (from wastewater treatment to consumers' water consumption)
- the examination of specific industries and companies and their tools for implementing sustainability
The results of the student conference on "Nachhaltigkeit im industriellen Umfeld" are presented in this publication.
Design of tension components
(2017)
The paper gives an introduction as well as background information on proposed changes and amendments in EN 1993-1-11 "Design of structures with tension components", implemented during the ongoing revision. Due to some deficits in the currently applicable standard, this revision is not limited to restructuring and editorial changes, but also includes major technical changes in the following fields: safety concept and structural analysis, actions and loads, robustness and reparability, design of tension components, and design of clamps and saddles.
Adjusting the friction response of the wheel-rail interface is a key factor in the mitigation of wear and rolling contact fatigue (RCF) in rails. The use of top-of-rail (TOR) friction conditioners has the potential to reduce maintenance costs significantly. Unfortunately, conflicting results on the use of commercial TOR conditioners have been presented in the literature. In this work, the performance of commercial TOR conditioners and a laboratory-made formulation were tested, both on the lab scale and in field measurements. Friction results are discussed together with the structural and chemical analysis of the tested materials.
In this paper, we propose a novel method for real-time control of electric distribution grids with a limited number of measurements. The method copes with the changing grid behaviour caused by the increasing number of renewable energy sources and electric vehicles. Three AI-based models are used. First, a probabilistic forecast estimates possible scenarios at unobserved grid nodes. Second, a state estimation is used to detect grid congestion. Finally, a grid control suggests multiple possible solutions for the detected problem. The best countermeasures are then selected by evaluating the system's stability for the next time step.
Probabilistic Short-Term Low-Voltage Load Forecasting using Bernstein-Polynomial Normalizing Flows
(2021)
The transition to a fully renewable energy grid requires better forecasting of demand at the low-voltage level. However, high fluctuations and increasing electrification cause huge forecast errors with traditional point estimates. Probabilistic load forecasts take future uncertainties into account and thus enable various applications in low-carbon energy systems. We propose an approach for flexible conditional density forecasting of short-term load based on Bernstein polynomial normalizing flows, where a neural network controls the parameters of the flow. In an empirical study with 363 smart meter customers, our density predictions compare favorably against Gaussian and Gaussian mixture densities and also outperform a non-parametric approach based on the pinball loss for 24h-ahead load forecasting for two different neural network architectures.
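The core building block can be sketched as follows (an illustrative evaluation only, not the authors' implementation): a Bernstein polynomial of degree M maps y in [0, 1] through a weighted sum of Bernstein basis functions, and if the coefficients are increasing the map is strictly monotone, hence invertible, which is what a normalizing flow requires:

```python
from math import comb

def bernstein_poly(theta, y):
    """Evaluate z = sum_k theta_k * B_{k,M}(y) for y in [0, 1],
    where B_{k,M}(y) = C(M, k) * y^k * (1-y)^(M-k).
    Increasing theta gives a monotone, invertible transformation."""
    M = len(theta) - 1
    return sum(t * comb(M, k) * y**k * (1 - y)**(M - k)
               for k, t in enumerate(theta))
```

Two useful sanity checks: the basis functions sum to one (so constant coefficients give a constant map), and coefficients theta_k = k/M reproduce the identity map exactly (linear precision of the Bernstein basis).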
Nowadays, the importance of early active patient mobilization in the recovery and rehabilitation phase has increased significantly. One way to involve patients in the treatment is a gamification-like approach, one of the methods of motivation in various life processes. This article presents a system prototype for patients who require physical activity because of active early mobilization after medical interventions or during illness. Bedridden patients and people with a sedentary lifestyle (predominantly lying in bed) are also potential users. The main idea of the concept was a non-contact implementation, so that using the system requires no extra effort from the patients. The system consists of three related parts: hardware, software, and a game application. To test the relevance and coherence of the system, it was used by 35 people. The participants were asked to play a video game requiring them to make body movements while lying down. They were then asked to take part in a small survey to evaluate the system's usability. As a result, we offer a prototype consisting of hardware and software parts that can increase and diversify physical activity during the active early mobilization of patients and prevent possible health problems caused by predominantly low activity. The proposed design could be implemented in hospitals, rehabilitation centers, and even at home.
Monitoring heart rate and breathing is essential in understanding the physiological processes for sleep analysis. Polysomnography (PSG) systems have traditionally been used for sleep monitoring, but alternative methods can help make sleep monitoring more portable in someone's home. This study conducted a series of experiments to investigate the use of pressure sensors placed under the bed as an alternative to PSG for monitoring heart rate and breathing during sleep. Subsequent sets of experiments involved the addition of small rubber domes, transparent and black, that were glued to the pressure sensor. The resulting data were compared with the PSG system to determine the accuracy of the pressure sensor readings. The study found that the pressure sensor provided reliable data for extracting heart rate and respiration rate, with mean absolute errors (MAE) of 2.32 and 3.24 for respiration and heart rate, respectively. However, the addition of small rubber hemispheres did not significantly improve the accuracy of the readings, with MAEs of 2.3 breaths per minute and 7.56 bpm for respiration rate and heart rate, respectively. The findings of this study suggest that pressure sensors placed under the bed may serve as a viable alternative to traditional PSG systems for monitoring heart rate and breathing during sleep. These sensors provide a more comfortable and non-invasive method of sleep monitoring. However, since the small rubber domes did not significantly enhance the accuracy of the readings, they may not be a worthwhile addition to the pressure sensor system.
Sleep analysis using a polysomnography system is difficult and expensive, which is why we suggest a non-invasive and unobtrusive measurement. Very few people want cables or devices attached to their bodies during sleep. The proposed approach is to implement a monitoring system that does not bother the subject. The result is a non-invasive monitoring system based on detecting the pressure distribution. This system should be able to measure, through the mattress, the pressure differences that occur during a single heartbeat and during breathing. The system consists of two blocks: signal acquisition and signal processing. The whole system should be economical enough to be affordable for every user. As a result, preprocessed data is obtained for further detailed analysis, using different filters for heartbeat and respiration detection. In the initial filtering stage, Butterworth filters are used.
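A second-order Butterworth low-pass of the kind mentioned can be sketched as follows; this is a generic bilinear-transform design, not the authors' exact filter chain, and the cutoff values below are illustrative (respiration sits roughly below 0.5 Hz, heartbeat components above it):

```python
import math

def butter2_lowpass(fc, fs):
    """Second-order Butterworth low-pass coefficients via the bilinear
    transform of H(s) = 1 / (s^2 + sqrt(2) s + 1), prewarped to fc."""
    k = math.tan(math.pi * fc / fs)
    norm = 1 + math.sqrt(2) * k + k * k
    b = [k * k / norm, 2 * k * k / norm, k * k / norm]
    a = [2 * (k * k - 1) / norm, (1 - math.sqrt(2) * k + k * k) / norm]
    return b, a

def iir_filter(b, a, x):
    """Direct-form biquad: y[n] = b0 x[n] + b1 x[n-1] + b2 x[n-2]
                                  - a1 y[n-1] - a2 y[n-2]."""
    y, x1, x2, y1, y2 = [], 0.0, 0.0, 0.0, 0.0
    for xn in x:
        yn = b[0] * xn + b[1] * x1 + b[2] * x2 - a[0] * y1 - a[1] * y2
        y.append(yn)
        x2, x1, y2, y1 = x1, xn, y1, yn
    return y
```

The design has unit gain at DC and a zero at the Nyquist frequency; a band-pass for the heartbeat can be formed by subtracting the outputs of two such low-pass filters with different cutoffs.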
In this paper, a gain-scheduled nonlinear control structure is proposed for a surface vessel, taking advantage of extended linearisation techniques. Thereby, accurate tracking of desired trajectories can be guaranteed, which contributes to safe and reliable water transport. The PI state feedback control is extended by a feedforward control based on an inverse system model. To achieve accurate trajectory tracking, however, an observer-based disturbance compensation is necessary: external disturbances from cross currents or wind forces in the lateral direction, as well as wave-induced measurement disturbances, are estimated by a nonlinear observer and used for compensation. The efficiency and the achieved tracking performance are shown by simulation results using a validated model of the ship Korona at the HTWG Konstanz, Germany. Both the tracking behaviour and the rejection of lateral disturbance forces are considered.
We analyse the results of a finite element simulation of a macroscopic model that describes the movement of a crowd treated as a continuum. A new formulation based on the macroscopic model from Hughes [2] is given. We present a stable numerical algorithm by approximating with a viscosity solution. The fundamental setting is an arbitrary domain that can contain several obstacles and several entries, and must have at least one exit. All pedestrians have the goal of leaving the room as quickly as possible; nobody prefers a particular exit.
List decoding for concatenated codes based on the Plotkin construction with BCH component codes
(2021)
Reed-Muller codes are a popular code family based on the Plotkin construction. Recently, these codes have regained some interest due to their close relation to polar codes and their low-complexity decoding. We consider a similar code family, i.e., the Plotkin concatenation with binary BCH component codes. This construction is more flexible regarding the attainable code parameters. In this work, we consider a list-based decoding algorithm for the Plotkin concatenation with BCH component codes. The proposed list decoding leads to a significant coding gain with only a small increase in computational complexity. Simulation results demonstrate that the Plotkin concatenation with the proposed decoding achieves near maximum likelihood decoding performance. This coding scheme can outperform polar codes for moderate code lengths.
The encoding of antenna patterns with generalized spatial modulation, as well as other index modulation techniques, requires w-out-of-n encoding, where all binary vectors of length n have the same weight w. This constant-weight property cannot be obtained by conventional linear coding schemes. In this work, we propose a new class of constant-weight codes that result from the concatenation of convolutional codes with constant-weight block codes. These constant-weight convolutional codes are nonlinear binary trellis codes that can be decoded with the Viterbi algorithm. Some constructed constant-weight convolutional codes are optimum free distance codes. Simulation results demonstrate that the decoding performance with Viterbi decoding is close to the performance of the best-known linear codes. Similarly, simulation results for spatial modulation with a simple on-off keying show a significant coding gain with the proposed coded index modulation scheme.
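The w-out-of-n property can be illustrated with a naive enumerative block construction (illustrative only; the codes proposed above are convolutional, not plain enumerations). Every codeword activates exactly w of the n antenna indices, and the code is nonlinear since the all-zero word is excluded:

```python
from itertools import combinations

def constant_weight_code(n, w):
    """All binary vectors of length n with Hamming weight exactly w.
    A message index into this list realizes w-out-of-n encoding."""
    words = []
    for ones in combinations(range(n), w):
        v = [0] * n
        for i in ones:
            v[i] = 1
        words.append(tuple(v))
    return words
```

For n = 6 antennas and w = 2 active antennas this yields C(6, 2) = 15 codewords, so up to floor(log2(15)) = 3 message bits can be carried per channel use by the index selection alone.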
Spatial modulation (SM) is a low-complexity multiple-input/multiple-output transmission technique that combines index modulation and quadrature amplitude modulation for wireless communications. In this work, we consider the problem of link adaptation for generalized spatial modulation (GSM) systems that use multiple active transmit antennas simultaneously. Link adaptation algorithms require a real-time estimation of the link quality of the time-variant communication channels, e.g., by means of estimating the mutual information. However, determining the mutual information of SM is challenging because no closed-form expressions have been found so far. Recently, multilayer feedforward neural networks were applied to compute the achievable rate of an index modulation link; however, only a small SM system with two transmit and two receive antennas was considered. In this work, we consider a similar approach but investigate larger GSM systems with multiple active antennas. We analyze the portions of mutual information related to antenna selection and the IQ modulation processes, which depend on the GSM variant and the signal constellation.
Automotive computing applications like AI databases, ADAS, and advanced infotainment systems have a huge need for persistent memory. This trend requires NAND flash memories designed for extreme automotive environments. However, the error probability of NAND flash memories has increased in recent years due to higher memory density and production tolerances. Hence, strong error correction coding is needed to meet automotive storage requirements. Many errors can be corrected by soft decoding algorithms. However, soft decoding is very resource-intensive and should be avoided when possible. NAND flash memories are organized in pages, and the error correction codes are usually encoded page-wise to reduce the latency of random reads. This page-wise encoding does not reach the maximum achievable capacity. Reading soft information increases the channel capacity but at the cost of higher latency and power consumption. In this work, we consider cell-wise encoding, which also increases the capacity compared to page-wise encoding. We analyze the cell-wise processing of data in triple-level cell (TLC) NAND flash and show the performance gain when using Low-Density Parity-Check (LDPC) codes. In addition, we investigate a coding approach with page-wise encoding and cell-wise reading.
Reliability is a crucial aspect of non-volatile NAND flash memories, and it is essential to thoroughly analyze the channel to prevent errors and ensure accurate readout. Estimating the read reference voltages (RRVs) is a significant challenge due to the multitude of physical effects involved. The question arises as to which features are useful and necessary for the RRV estimation. Various possible features require specialized hardware or specific readout techniques to be usable. In contrast, we consider sparse histograms based on the decision thresholds for hard-input and soft-input decoding. These offer a distinct advantage, as they are derived directly from the raw readout data without the need for decoding. This paper focuses on the information-theoretic study of different features, especially on the exploration of the mutual information (MI) between the feature vector and the RRV. In particular, we investigate the dependency of the MI on the resolution of the histograms. With respect to the RRV estimation, sparse histograms provide sufficient information for near-optimum estimation.
Large persistent memory is crucial for many applications in embedded systems and automotive computing like AI databases, ADAS, and cutting-edge infotainment systems. Such applications require reliable NAND flash memories made for harsh automotive conditions. However, due to high memory densities and production tolerances, the error probability of NAND flash memories has risen. As the number of program/erase cycles and the data retention times increase, non-volatile NAND flash memories' performance and dependability suffer. The read reference voltages of the flash cells vary due to these aging processes. In this work, we consider the issue of reference voltage adaption. The considered estimation procedure uses shallow neural networks to estimate the read reference voltages for different life-cycle conditions with the help of histogram measurements. We demonstrate that the training data for the neural networks can be enhanced by using shifted histograms, i.e., a training of the neural networks is possible based on a few measurements of some extreme points used as training data. The trained neural networks generalize well for other life-cycle conditions.
Soft-input decoding of concatenated codes based on the Plotkin construction and BCH component codes
(2020)
Low latency communication requires soft-input decoding of binary block codes with small to medium block lengths. In this work, we consider generalized multiple concatenated (GMC) codes based on the Plotkin construction. These codes are similar to Reed-Muller (RM) codes. In contrast to RM codes, BCH codes are employed as component codes. This leads to improved code parameters. Moreover, a decoding algorithm is proposed that exploits the recursive structure of the concatenation. This algorithm enables efficient soft-input decoding of binary block codes with small to medium lengths. The proposed codes and their decoding achieve significant performance gains compared with RM codes and recursive GMC decoding.
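The underlying (u | u+v) Plotkin construction can be sketched generically; the component codes below are small stand-ins, not the BCH codes of the paper. Combining a length-n code U of minimum distance d_U with a length-n code V of minimum distance d_V gives a length-2n code with minimum distance min(2 d_U, d_V):

```python
from itertools import product

def plotkin(U, V):
    """(u | u+v) construction: concatenate u with the mod-2 sum u+v
    for every pair of component codewords."""
    return [u + tuple((ui + vi) % 2 for ui, vi in zip(u, v))
            for u in U for v in V]

# Stand-in components: the [4,3,2] single parity check code and the
# [4,1,4] repetition code yield the [8,4,4] extended Hamming code.
spc = [bits + (sum(bits) % 2,) for bits in product((0, 1), repeat=3)]
rep = [(0, 0, 0, 0), (1, 1, 1, 1)]
code = plotkin(spc, rep)
```

Applying the construction recursively, with BCH codes in place of the stand-ins above, gives the GMC code family described in the abstract.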
Digitale Transformation
(2015)
Strategie der digitalen Ära
(2015)
A key objective of this research is to take a more detailed look at a central aspect of resilience in small and medium-sized enterprises (SMEs). A literature review and expert interviews were used to investigate which factors have an impact on the innovative capacity of start-ups and whether these can also be adapted by SMEs. First of all, it must be stated that there are considerable structural and process-related differences between start-ups and SMEs, which can substantially inhibit cooperation between the two forms of enterprise. However, in the same context, success factors and issues in the start-up sector were also identified that can improve cooperation with SMEs. These and other findings are then discussed in both an economic and an academic context. This article was written as part of the research activities of the Smart Services Competence Centre (proper name: Kompetenzzentrum Smart Services), a central contact point for all questions in the area of smart service digitalization in Baden-Wuerttemberg. Here, companies can obtain information about various digital technologies and take advantage of various measures for the development of new ideas and innovative services (Kompetenzzentrum Smart Services BW: Über das Kompetenzzentrum, 2021).
Improving the tribological properties of Stainless Steels by low-temperature surface hardening
(2022)
This paper presents the integration of a spline-based extension model into a probability hypothesis density (PHD) filter for extended targets. Using this filter, the position and extension of each object as well as the number of present objects can be jointly estimated. To this end, the spline extension model and the PHD filter are addressed and merged in a Gaussian mixture (GM) implementation. Simulation results using artificial laser measurements are used to evaluate the performance of the presented filter. Finally, the results are illustrated and discussed.
In 3D extended object tracking (EOT), well-established models exist for tracking the object extent using various shape priors. With these models, however, a single update has to be performed for every measurement, leading to a high computational runtime for high-resolution sensors. In this paper, we address this problem by using various model-independent downsampling schemes based on distance heuristics and random sampling as pre-processing before the update. We investigate the methods in a simulated and a real-world tracking scenario using two different measurement models with measurements gathered from a LiDAR sensor. We found that there is huge potential for speeding up 3D EOT, as up to 95% of the measurements could be dropped in our investigated scenarios when using random sampling. Since random sampling can, however, also produce a subset that represents the total set poorly, leading to poor tracking performance, there is still a high demand for further research.
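Random downsampling as a pre-processing step can be sketched in a few lines; the 5% keep fraction mirrors the best case reported above, and the function name is illustrative:

```python
import random

def downsample(measurements, keep_fraction=0.05, seed=None):
    """Uniform random subsampling of sensor returns before the EOT
    update step; a seeded RNG makes experiments reproducible."""
    rng = random.Random(seed)
    k = max(1, int(len(measurements) * keep_fraction))
    return rng.sample(measurements, k)
```

Because the subset is drawn uniformly, each EOT update then processes only k measurements instead of the full scan, at the risk, noted above, of an unrepresentative subset.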
With the high resolution of modern sensors such as multilayer LiDARs, estimating the 3D shape in an extended object tracking procedure is possible. In recent years, 3D shapes have been estimated in spherical coordinates using Gaussian processes, spherical double Fourier series or spherical harmonics. However, observations have shown that in many scenarios only a few measurements are obtained from top or bottom surfaces, leading to error-prone estimates in spherical coordinates. Therefore, in this paper we propose to estimate the shape in cylindrical coordinates instead, applying harmonic functions. Specifically, we derive an expansion for 3D shapes in cylindrical coordinates by solving a boundary value problem for the Laplace equation. This shape representation is then integrated in a plain greedy association model and compared to shape estimation procedures in spherical coordinates. Since the shape representation is only integrated in a basic estimator, the results are preliminary and a detailed discussion for future work is presented at the end of the paper.
In this paper, a novel measurement model based on spherical double Fourier series (DFS) for estimating the 3D shape of a target concurrently with its kinematic state is introduced. Here, the shape is represented as a star-convex radial function, decomposed as spherical DFS. In comparison to ordinary DFS, spherical DFS do not suffer from ambiguities at the poles. Details will be given in the paper. The shape representation is integrated into a Bayesian state estimator framework via a measurement equation. As range sensors only generate measurements from the target side facing the sensor, the shape representation is modified to enable application of shape symmetries during the estimation process. The model is analyzed in simulations and compared to a shape estimation procedure using spherical harmonics. Finally, shape estimation using spherical and ordinary DFS is compared to analyze the effect of the pole problem in extended object tracking (EOT) scenarios.
In this paper, approximating the shape of a sailing boat using elliptic cones is investigated. Measurements are assumed to be gathered from the target's surface recorded by 3D scanning devices such as multilayer LiDAR sensors. Therefore, different models for estimating the sailing boat's extent are presented and evaluated in simulated and real-world scenarios. In particular, the measurement source association problem is addressed in the models. Simulated investigations are conducted with a static and a moving elliptic cone. The real-world scenario was recorded with a Velodyne Alpha Prime (VLP-128) mounted on a ferry of Lake Constance. Final results of this paper constitute the extent estimation of a single sailing boat using LiDAR data applying various measurement models.
In the past years, algorithms for 3D shape tracking using radial functions in spherical coordinates, represented with different methods, have been proposed. However, we have seen that in many dynamic scenarios mainly measurements from the lateral surface of the target can be expected, and only a few from the top and bottom parts, leading to an error-prone shape estimate in the top and bottom regions when a representation in spherical coordinates is used. We therefore propose to represent the shape of the target using a radial function in cylindrical coordinates, as these only represent regions of the lateral surface, and no information from the top or bottom parts is needed. In this paper, we use a Fourier-Chebyshev double series for the 3D shape representation, since a mixture of Fourier and Chebyshev series is a suitable basis for expanding a radial function in cylindrical coordinates. We investigate the method in a simulated and a real-world maritime scenario with a CAD model of the target boat as a reference. We have found that shape representation in cylindrical coordinates has decisive advantages compared to a shape representation in spherical coordinates and should preferably be used if no prior knowledge of the measurement distribution on the surface of the target is available.
Summary of the 8th Workshop on Metallization and Interconnection for Crystalline Silicon Solar Cells
(2019)
This article gives a summary of the 8th Metallization and Interconnection workshop and attempts to place each contribution in the appropriate context. The field of metallization and interconnection continues to progress at a very fast pace. Several printing techniques can now achieve linewidths below 20 μm. Screen printing is more than ever the dominating metallization technology in the industry, with finger widths of 45 μm in routine mass production and values below 20 μm in the lab. Plating technology is also being improved, particularly through the development of lower-cost patterning techniques. Interconnection technology is changing fast, with the introduction of multiwire and shingled-cell technologies in mass production. New models and characterization techniques are being introduced to study and understand these new interconnection technologies in detail.
Summary of the 9th workshop on metallization and interconnection for crystalline silicon solar cells
(2021)
The 9th edition of the Workshop on Metallization and Interconnection for Crystalline Silicon Solar Cells was held as an online event but nevertheless reached the workshop goals of knowledge sharing and networking. The technology of screen-printed contacts of high temperature pastes continues its fast progress enabled by better understanding of the phenomena taking place during printing and firing, and progress in materials. Great improvements were also achieved in low temperature paste printing and plated metallization. In the field of interconnection, progress was reported on multiwire approaches, electrically conductive adhesives and on foil-based approaches. Common to many contributions at the workshop was the use of advanced laser processes to improve performance or throughput.
The evaluation of the effectiveness of different machine learning algorithms on a publicly available database of signals derived from wearable devices is presented, with the goal of optimizing human activity recognition and classification. Among the wide range of body signals, we chose two, namely photoplethysmographic (optically detected subcutaneous blood volume) and tri-axis acceleration signals, which are easy to acquire simultaneously using widespread commercial devices (e.g. smartwatches) as well as custom wearable wireless devices designed for sport, healthcare, or clinical purposes. To this end, two widely used algorithms (decision tree and k-nearest neighbor) were tested, and their performance was compared to two recent algorithms (particle Bernstein and a Monte Carlo-based regression) in terms of both accuracy and processing time. A data preprocessing phase was also considered to improve the performance of the machine learning procedures and to reduce the problem size; a detailed analysis of the compression strategy and results is also presented.
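A minimal from-scratch k-nearest-neighbor classifier of the kind tested can be sketched as follows; the toy feature vectors stand in for windowed acceleration features and are not the paper's dataset or implementation:

```python
import math
from collections import Counter

def knn_classify(train, query, k=3):
    """train: list of (feature_vector, label) pairs.
    Returns the majority label among the k samples closest to the
    query in Euclidean distance."""
    nearest = sorted(train, key=lambda sample: math.dist(sample[0], query))
    votes = Counter(label for _, label in nearest[:k])
    return votes.most_common(1)[0][0]
```

For example, with clusters of feature vectors labelled "rest" near the origin and "run" near (5, 5), a query close to (5, 5) is assigned the "run" label by majority vote.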
The improvement of collision avoidance for vessels in close-range encounter situations is an important topic for maritime traffic safety. Typical approaches generate evasive trajectories or optimise the trajectories of all involved vessels. Such a collision avoidance system has to produce evasive manoeuvres that do not confuse other navigators. To achieve this behaviour, a probabilistic obstacle handling based on information from a radar sensor with target tracking, which considers measurement and tracking uncertainties, is proposed. A grid-based path search algorithm that takes the information from the probabilistic obstacle handling into account is then used to generate evasive trajectories. The proposed algorithms have been tested and verified in a simulated environment for inland waters.
Motion safety for vessels
(2015)
The improvement of collision avoidance for vessels in close-range encounter situations is an important topic for maritime traffic safety. Typical approaches generate evasive trajectories or optimise the trajectories of all involved vessels. The idea of this work is to validate these trajectories with respect to guaranteed motion safety, which means that it is not sufficient for a trajectory to be collision-free; it must additionally ensure that an evasive manoeuvre is performable at any time. An approach using the distance and the evolution of the distance to the other vessels is proposed. The concept of Inevitable Collision States (ICS) is adopted to identify the states for which no evasive manoeuvre exists. Furthermore, it is implemented into a collision avoidance system for recreational craft to demonstrate the performance.
Linear and nonlinear response functions (RFs) are extracted for the climate system and the carbon cycle, represented by the MPI-ESM and cGENIE models, respectively. Appropriately designed simulations are run for this purpose. Joining these RFs, we obtain a climate emulator with carbon emissions as the forcing and any desired observable quantity (provided the data is saved), such as the surface air temperature or precipitation, as the predictand. As, e.g., for the atmospheric CO2 concentration, we also have RFs for the solar constant as a forcing, mimicking solar radiation management (SRM) geoengineering. We consider two application cases. The first is based on the Paris 2015 agreement: determining the least amount of SRM geoengineering needed to keep the global mean surface air temperature below a certain threshold, e.g. 1.5 or 2 °C, given a certain amount of carbon emission abatement (ABA) and carbon dioxide removal (CDR) geoengineering. The second application considers the conservation of the Greenland ice sheet (GrIS). Using a zero-dimensional simplification of a complex ice sheet model, we determine (a) whether SRM is needed given some ABA and CDR and, if possible, (b) the least amount of SRM required to avoid the collapse of the GrIS. Even keeping temperatures below 2 °C is hardly possible without sustained SRM (first case); however, the collapse of the GrIS can be avoided by applying SRM even for moderate levels of CDR and ABA, an overshoot being affordable (second case).
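The emulator idea, a linear response to an emission history, can be sketched as a discrete convolution (illustrative only; the actual response functions are extracted from MPI-ESM and cGENIE simulations, and the kernel below is a made-up placeholder):

```python
def emulate(emissions, rf):
    """Linear response emulator: the predicted anomaly at step t is the
    discrete convolution of the emission history with the response
    function rf, i.e. sum over s of emissions[s] * rf[t - s]."""
    T = len(emissions)
    return [sum(emissions[s] * rf[t - s] for s in range(t + 1))
            for t in range(T)]
```

By construction, an emission impulse simply reproduces the response function itself, which is also how such kernels are diagnosed from step- or pulse-forcing model runs.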
As one of the most important branches of industry in Germany and the European Union, the mechanical and plant engineering sector is confronted with fundamental changes due to ever shorter innovation cycles and increased competitive pressure. This makes it even more important to increase the share of service components in business models with a low service level, which are still frequently found in SMEs. This paper is dedicated to the changes that the individual components of a business model have experienced and will experience. Special attention is paid to economic sustainability, since service business models can also positively influence the long-term viability of a business. Seven interviews conducted with relevant companies serve as the empirical basis of this paper. The analysed effects of smart services and active customer integration are structured and summarized within the three pillars of every business model (the value proposition, the value creation architecture and the revenue mechanics).
Gamification is one of the recognized methods of motivating people in various life processes, and it has spread to many spheres of life, including healthcare. This article proposes a system design for long-term care patients using the mentioned method. The proposed system aims to increase patient engagement in the treatment and rehabilitation process via gamification. Literature research on available and earlier proposed systems was conducted to develop a suitable system design. The primary target group includes bedridden patients and patients with a sedentary lifestyle (predominantly lying in bed). One of the main criteria for selecting a suitable option was its contactless realization for the mentioned target groups in long-term care. As a result, we developed a hardware and software system design that could prevent bedsores and other health problems caused by low activity. The proposed design can be tested in hospitals, nursing homes, and rehabilitation centers.
Evaluation of a Contactless Accelerometer Sensor System for Heart Rate Monitoring During Sleep
(2024)
The monitoring of a patient's heart rate (HR) is critical in the diagnosis of diseases and also plays an important role in the detection of sleep disorders. Several techniques have been proposed, including sensors that record physiological signals which are automatically examined and analysed. This work aims to evaluate a contactless HR monitoring system based on an accelerometer sensor during sleep. For this purpose, the oscillations caused by chest movements during heart contractions are recorded by an installation mounted under the bed mattress. The processing algorithm presented in this paper filters the signals and determines the HR. As a result, an average error of about 5 bpm has been documented, i.e., the system can be considered suitable for the intended domain.
Sleep is extremely important for physical and mental health. Although polysomnography is an established approach in sleep analysis, it is quite intrusive and expensive. Consequently, developing a non-invasive and non-intrusive home sleep monitoring system with minimal influence on patients, which can reliably and accurately measure cardiorespiratory parameters, is of great interest. The aim of this study is to validate a non-invasive and unobtrusive cardiorespiratory parameter monitoring system based on an accelerometer sensor. This system includes a special holder to install the system under the bed mattress. An additional aim is to determine the optimum relative system position (in relation to the subject) at which the most accurate and precise values of the measured parameters can be achieved. The data were collected from 23 subjects (13 males and 10 females). The obtained ballistocardiogram signal was sequentially processed using a sixth-order Butterworth bandpass filter and a moving average filter. As a result, an average error (compared to reference values) of 2.24 beats per minute for heart rate and 1.52 breaths per minute for respiratory rate was achieved, regardless of the subject’s sleep position. For males and females, the errors were 2.28 bpm and 2.19 bpm for heart rate and 1.41 and 1.30 breaths per minute for respiratory rate, respectively. We determined that placing the sensor and system at chest level is the preferred configuration for cardiorespiratory measurement. Further studies of the system’s performance in larger groups of subjects are required, despite the promising results of the current tests in healthy subjects.
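The signal-processing chain described above (a sixth-order Butterworth bandpass followed by a moving average) might look as follows; the sampling rate, passband edges and window length are our assumptions, as the abstract does not state them:

```python
import numpy as np
from scipy.signal import butter, filtfilt

fs = 100.0  # assumed sampling rate in Hz

# Sixth-order Butterworth bandpass; the 0.7-3.5 Hz band (about 42-210 bpm)
# is an illustrative choice for heart-beat extraction.
b, a = butter(6, [0.7, 3.5], btype="bandpass", fs=fs)

def moving_average(x, n=10):
    """Smooth the filtered signal with an n-sample moving average."""
    return np.convolve(x, np.ones(n) / n, mode="same")

t = np.arange(0, 10, 1 / fs)
# Synthetic BCG-like signal: a 1.2 Hz "heartbeat" plus slow respiratory drift
raw = np.sin(2 * np.pi * 1.2 * t) + 0.5 * np.sin(2 * np.pi * 0.1 * t)
filtered = moving_average(filtfilt(b, a, raw))  # drift removed, beat kept
```

Peak detection on `filtered` would then yield beat-to-beat intervals and thus the heart rate.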
Sleep is an essential part of human existence, as we are in this state for approximately a third of our lives. Sleep disorders are common conditions that can affect many aspects of life. They are diagnosed in special laboratories with a polysomnography system, a costly procedure requiring much effort from the patient. Several systems have been proposed to address this situation, including performing the examination and analysis at the patient's home, using sensors to detect physiological signals that are automatically analysed by algorithms. This work aims to evaluate the use of a contactless respiratory recording system based on an accelerometer sensor for sleep apnea detection. For this purpose, an installation mounted under the bed mattress records the oscillations caused by the chest movements during breathing. The presented processing algorithm filters the obtained signals and determines the presence of apnea events. The performance of the developed system and apnea event detection algorithm (average values of accuracy, specificity and sensitivity of 94.6%, 95.3%, and 93.7%, respectively) confirms the suitability of the proposed method and system for further ambulatory and in-home use.
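The reported accuracy, specificity and sensitivity follow the standard confusion-matrix definitions, which can be restated compactly (the counts below are illustrative, not the study's data):

```python
# Standard binary-classification metrics computed from confusion-matrix counts.
def detection_metrics(tp, tn, fp, fn):
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    sensitivity = tp / (tp + fn)   # true positive rate
    specificity = tn / (tn + fp)   # true negative rate
    return accuracy, sensitivity, specificity

acc, sens, spec = detection_metrics(tp=93, tn=96, fp=4, fn=7)
print(f"{acc:.3f} {sens:.3f} {spec:.3f}")  # → 0.945 0.930 0.960
```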
The respiratory rate is a vital sign indicating breathing illness. To measure it, the mechanical oscillations of the patient's body arising from chest movements must be analyzed. An inappropriate holder on which the sensor is mounted, or an inappropriate sensor position, are among the external factors that should be minimized during signal registration. This paper considers a non-invasive device placed under the bed mattress to evaluate the respiratory rate. The aim of the work is the development of an accelerometer sensor holder for this system. Normal and deep breathing signals were analyzed, corresponding to the relaxed state and to taking deep breaths. The evaluation criterion for the holder's model is its influence on the amplitude of the patient's respiratory signal in each state. As a result, we offer a non-invasive system for respiratory rate detection, including a mechanical component that provides the most accurate respiratory rate values.
Determination of accelerometer sensor position for respiration rate detection: Initial research
(2022)
Continuous monitoring of a patient's vital signs is essential in many chronic illnesses. The respiratory rate (RR) is one of the vital signs indicating breathing diseases. This article proposes an initial investigation for determining the accelerometer sensor position of a non-invasive and unobtrusive respiratory rate monitoring system. This research aims to determine the sensor position, in relation to the patient, that can provide the most accurate values of the mentioned physiological parameter. To achieve this, a particular system setup, including a mechanical sensor holder construction, was used. Breathing signals from 5 participants in the relaxed state were analyzed. The main criterion for selecting a suitable sensor position was each patient's average acceleration amplitude excursion, which corresponds to the respiratory signal. As a result, we determined one more important parameter for the considered system, which had not been defined before.
Deep neural networks (DNNs) are known for their high prediction performance, especially in perceptual tasks such as object recognition or autonomous driving. Still, DNNs are prone to yield unreliable predictions when encountering completely new situations without indicating their uncertainty. Bayesian variants of DNNs (BDNNs), such as MC dropout BDNNs, do provide uncertainty measures. However, BDNNs are slow during test time because they rely on a sampling approach. Here we present a single shot MC dropout approximation that preserves the advantages of BDNNs without being slower than a DNN. Our approach is to analytically approximate for each layer in a fully connected network the expected value and the variance of the MC dropout signal. We evaluate our approach on different benchmark datasets and a simulated toy example. We demonstrate that our single shot MC dropout approximation resembles the point estimate and the uncertainty estimate of the predictive distribution that is achieved with an MC approach, while being fast enough for real-time deployments of BDNNs.
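The layer-wise moment propagation behind such a single-shot approximation can be hedged-sketched for one dropout-plus-linear step: under inverted dropout with rate p and assuming independent units, the mean passes through unchanged while the variance picks up a closed-form term. Layer sizes, rates and the independence assumption below are illustrative, not the paper's exact scheme:

```python
import numpy as np

# Analytic moment propagation for inverted MC dropout followed by a linear
# layer (independence assumed across units); a sketch of the idea only.
def dropout_moments(mean, var, rate):
    q = 1.0 - rate                        # keep probability
    return mean, (var + mean**2) / q - mean**2

def linear_moments(mean, var, W, b):
    return W @ mean + b, (W**2) @ var     # variances add under independence

rng = np.random.default_rng(0)
x = np.array([1.0, -2.0, 0.5])            # deterministic input: zero variance
W = rng.normal(size=(2, 3))
m, v = dropout_moments(x, np.zeros(3), rate=0.5)
m, v = linear_moments(m, v, W, np.zeros(2))
```

Iterating these two steps through all layers yields mean and variance of the predictive distribution in a single forward pass, instead of averaging many sampled dropout passes.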
The aim of the master's thesis was to characterize the moisture properties of screeds under different climates with the help of sorption isotherms. The few literature references on sorption isotherms of mineral screeds essentially relate to calcium sulfate screeds and standardized cement screeds (without distinguishing the cement type: Portland cements, blast furnace cements, i.e. CEM I, CEM II, CEM III, etc.) and, as a rule, only to one air temperature (20 °C). The intention of the thesis was additionally to investigate the ternary rapid cements that have been common on the market for about 20 years and to include the temperatures of 15 °C and 25 °C, which are of practical interest on building sites. The effects of the climatic conditions on the construction site (season, humidity, temperature) on the hydration process of the screeds were also examined. In each case, not just one representative of the various binder systems but at least two different screeds from different manufacturers were included. In combination with the results of the microstructure investigations (including mercury intrusion porosimetry), it is shown why cement-bound and rapid-cement-bound screeds behave completely differently from calcium-sulfate-bound screeds. This different behaviour is also one of the reasons why the moisture content of screeds cannot be assessed with the KRL method. For this reason, a comparison of the material moisture measurements by the "KRL method" with the "CM method", which is customary in the trade and has proven itself in practice for decades, follows.
Observer-based self sensing for digital (on–off) single-coil solenoid valves is investigated. Self sensing refers to the case where merely the driving signals used to energize the actuator (voltage and coil current) are available to obtain estimates of both the position and velocity. A novel observer approach for estimating the position and velocity from the driving signals is presented, where the dynamics of the mechanical subsystem can be neglected in the model. Both the effect of eddy currents and saturation effects are taken into account in the observer model. Practical experimental results are shown and the new method is compared with a full-order sliding mode observer.
The method of signal injection is investigated for position estimation of proportional solenoid valves. A simple observer is proposed to estimate a position-dependent parameter, the eddy current resistance, from which the position is calculated analytically. To this end, the relationship between position and impedance in the case of sinusoidal excitation is accurately described by means of classical electrodynamics. The observer approach is compared with a standard identification method and evaluated by practical experiments on an off-the-shelf proportional solenoid valve.
Flatness-based feed-forward control of solenoid actuators is considered. For precise motion planning and accurate steering of conventional solenoids, eddy currents cannot be neglected. The system of ordinary differential equations including eddy currents, that describes the nonlinear dynamics of such actuators, is not differentially flat. Thus, a distributed parameter approach based on a diffusion equation is considered, that enables the parametrization of the eddy current by the armature position and its time derivatives. In order to design the feedforward control, the distributed parameter model of the eddy current subsystem is combined with a typical nonlinear lumped parameter model for the electrical and mechanical subsystems of the solenoid. The control design and its application are illustrated by numerical and practical results for an industrial solenoid actuator.
Knowing the position of the spool in a solenoid valve without using costly position sensors is of considerable interest in many industrial applications. In this paper, the problem of position estimation based on state observers for fast-switching solenoids, using only simple voltage and current measurements, is investigated. Due to the short spool traveling time in fast-switching valves, convergence of the observer errors has to be achieved very fast. Moreover, the observer has to be robust against modeling uncertainties and parameter variations. Therefore, different state observer approaches are investigated and compared to each other with regard to possible uncertainties. The investigation covers a high-gain observer approach, a combined high-gain sliding-mode observer approach, both based on extended linearization, and a nonlinear sliding-mode observer based on equivalent output injection. The results are discussed by means of numerical simulations for all approaches, and finally physical experiments on a valve mock-up are thoroughly discussed for the nonlinear sliding-mode observer.
A semilinear distributed parameter approach for solenoid valve control including saturation effects
(2015)
In this paper a semilinear parabolic PDE for the control of solenoid valves is presented. The distributed parameter model of the cylinder becomes nonlinear through the inclusion of saturation effects due to the material's B/H-curve. A flatness-based solution of the semilinear PDE is shown, as well as a convergence proof of its series solution. Numerical simulation results demonstrate the applicability of the approach, and differences between the linear and the nonlinear case are discussed. The major contribution of this paper is the inclusion of saturation effects into the linear diffusion equation governing the magnetic field, and the development of a flatness-based solution for the resulting semilinear PDE as an extension of the previous works [1] and [2].
At the University of Applied Sciences Konstanz, Germany, a modern electronically controlled dynamometer and several cars are available for tests. Numerous studies have been carried out, and the latest results will be presented. The paper is intended to explain different tests under load. One focus is the driving cycle WLTC (Worldwide harmonized Light vehicles Test Cycle) and the requirements for the proper conduct of investigations with this driving cycle. Two- and three-wheelers are of great importance for mobility in various Asian countries. But also in other countries, this segment is very important for so-called first- or last-mile vehicles. Because of this, a short explanation of the driving cycle WMTC (Worldwide harmonized Motorcycle Emissions Certification/Test Procedure) is given. The various possibilities for the operation of the dynamometer and for carrying out various experiments are shown.
Other important figures that can be determined on a dynamometer are the wheel power, the power losses and possibly the engine performance. With the brake-specific torque, the traction force at the propelled wheels and thus the maximum acceleration or maximum gradeability of a car can be determined.
The load-dependent slippage can also be measured on the dynamometer. The dynamic wheel radius of the driven wheels has a significant influence on the slippage. Because the tires heat up during the tests, the tire pressure increases. A rise in tire temperature, tire pressure, and wheel speed results in an increase of the dynamic wheel radius and thus of the slippage. Equations for the determination of the dynamic wheel radius are presented.
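The slip quantities discussed above follow the common textbook relation between wheel circumferential speed and vehicle speed; the formula and all numbers below are illustrative, since the paper's own equations are not reproduced here:

```python
import math

def slip_ratio(wheel_speed_rpm, r_dyn_m, vehicle_speed_mps):
    """Driving slip of a driven wheel: (v_wheel - v_vehicle) / v_wheel."""
    omega = wheel_speed_rpm * 2.0 * math.pi / 60.0  # angular speed in rad/s
    v_wheel = omega * r_dyn_m                       # circumferential speed
    return (v_wheel - vehicle_speed_mps) / v_wheel

# A larger dynamic radius at the same wheel speed increases the slip:
s_cold = slip_ratio(800.0, 0.310, 25.0)
s_warm = slip_ratio(800.0, 0.315, 25.0)  # warmer tire, larger r_dyn
```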
Several possibilities for tests under load on a chassis dynamometer are presented. Consumption measurements according to standard driving cycles such as the New European Driving Cycle (NEDC) and the Worldwide harmonized Light vehicles Test Procedure/Cycle (WLTP/WLTC) require special attention to compliance with the regulations. The rotational masses of inertia and the velocity-dependent load have to match the required values. Load tests also allow the determination of the maximum acceleration in the current gear and of the slippage of the driven wheels.
The aim of the paper is to present the simulation of the sweeping process based on a mathematical model that includes the drag force, the lift force, the sideway force, and gravity. At the beginning, a short history of street sweepers is presented, together with some considerations about the sweeping process and its parameters. Based on the developed model, simulations of the trajectory of a spherical pebble are performed in Matlab. The obtained results are presented in graphical form.
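Such a force model can be illustrated by a minimal trajectory integration. Here only gravity and drag are included, lift and sideway force are omitted, the integration is plain explicit Euler rather than the paper's Matlab simulation, and all parameters are invented:

```python
import numpy as np

# Minimal pebble-trajectory sketch: explicit Euler integration of gravity
# plus a quadratic drag force; all parameter values are illustrative.
g = 9.81        # gravitational acceleration, m/s^2
m = 2e-3        # pebble mass, kg
c_drag = 1e-4   # lumped drag coefficient, kg/m
dt = 1e-3       # time step, s

pos = np.array([0.0, 0.0])
vel = np.array([3.0, 2.0])   # launch velocity imparted by the brush, m/s
path = [pos.copy()]
while pos[1] >= 0.0:         # integrate until the pebble lands
    speed = np.linalg.norm(vel)
    acc = np.array([0.0, -g]) - (c_drag / m) * speed * vel
    vel = vel + dt * acc
    pos = pos + dt * vel
    path.append(pos.copy())
```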
This paper presents a new likelihood-based partitioning method of the measurement set for the extended object probability hypothesis density (PHD) filter framework. Recent work has mostly relied on heuristic partitioning methods that cluster the measurement data based on a distance measure between the single measurements. This can lead to poor filter performance if the tracked extended objects are closely spaced. The proposed method, called Stochastic Partitioning (StP), is based on sampling methods and was inspired by former work of Granström et al. Here, the StP method is applied to a Gaussian inverse Wishart (GIW) PHD filter and compared to a second filter implementation that uses the heuristic Distance Partitioning (DP) method. The performance is evaluated in Monte Carlo simulations of a scenario where two objects approach each other. It is shown that the sampling-based StP method leads to an improved filter performance compared to DP.
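The heuristic distance-based clustering that StP is compared against can be sketched as follows: measurements end up in the same cell whenever a chain of mutual distances below a threshold connects them. This union-find sketch uses an illustrative threshold and is not the filter implementation itself:

```python
import numpy as np

# Distance-partitioning sketch: group measurements whose mutual distance
# (possibly via intermediate points) stays below a threshold.
def distance_partition(points, threshold):
    n = len(points)
    parent = list(range(n))          # union-find forest

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]   # path halving
            i = parent[i]
        return i

    for i in range(n):
        for j in range(i + 1, n):
            if np.linalg.norm(points[i] - points[j]) < threshold:
                parent[find(i)] = find(j)   # merge the two cells

    cells = {}
    for i in range(n):
        cells.setdefault(find(i), []).append(i)
    return list(cells.values())

pts = np.array([[0.0, 0.0], [0.4, 0.0], [5.0, 5.0]])
print(distance_partition(pts, threshold=1.0))  # → [[0, 1], [2]]
```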
Twenty-first century infrastructure needs to respond to changing demographics and become climate neutral, resilient and economically affordable, while remaining a driver for development and shared prosperity. However, the infrastructure sector remains one of the least innovative and digitalised, plagued by delays, cost overruns and benefit shortfalls (Cantarelli et al., 2008; Flyvbjerg, 2007; Flyvbjerg et al., 2003; Flyvbjerg et al., 2004). The root cause is the prevailing fragmentation of the infrastructure sector (Fellows and Liu, 2012). To help overcome these challenges, integration of the value chain is needed. This could be achieved through a use-case-based creation of federated ecosystems connecting open and trusted data spaces and advanced services applied to infrastructure projects. Such digital platforms enable full-lifecycle participation and responsible governance guided by a shared infrastructure vision. Digital federation enables secure and sovereign data exchange and thus collaboration across the silos within the infrastructure sector, between industries, and within and between countries. Such an approach to infrastructure technology policy would not rely on technological solutionism but proposes the development of open and trusted data alliances. Federated data spaces provide access to the emerging data economy, especially for SMEs, and can foster the innovation of new digital services. Such responsible digital governance can help make the infrastructure sector more resilient, efficient and aligned with the realisation of ambitious decarbonisation and environmental protection targets. The European Union and the United States have already developed architectures for sovereign and secure data exchange.
Classification of point clouds by different types of geometric primitives is an essential part of the reconstruction process of CAD geometry. We use support vector machines (SVM) to label patches in point clouds with the class labels tori, ellipsoids, spheres, cones, cylinders or planes. For the classification, features based on different geometric properties like point normals, angles, and principal curvatures are used. These geometric features are estimated in the local neighborhood of a point of the point cloud. Computing these geometric features for a random subset of the point cloud yields a feature distribution. Different features are combined to achieve the best classification results. To minimize the time-consuming training phase of SVMs, the geometric features are first evaluated using linear discriminant analysis (LDA).
LDA and SVM are machine learning approaches that require an initial training phase to allow for a subsequent automatic classification of a new data set. For the training phase, point clouds are generated using a simulation of a laser scanning device. Additional noise based on a laser scanner error model is added to the point clouds. The resulting LDA and SVM classifiers are then used to classify geometric primitives in simulated and real laser-scanned point clouds.
Compared to other approaches, where all known features are used for classification, we explicitly compare novel against known geometric features to prove their effectiveness.
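A toy version of the LDA-then-SVM pipeline can be sketched with synthetic two-dimensional "geometric features" standing in for the curvature- and normal-based descriptors; the data, class meanings and scores below are purely illustrative:

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.svm import SVC

# Synthetic feature vectors: e.g. near-zero curvature statistics for planar
# patches vs. two equal nonzero curvatures for spherical patches (made up).
rng = np.random.default_rng(1)
X_plane = rng.normal([0.0, 0.0], 0.1, size=(100, 2))
X_sphere = rng.normal([1.0, 1.0], 0.1, size=(100, 2))
X = np.vstack([X_plane, X_sphere])
y = np.array([0] * 100 + [1] * 100)

# Fast LDA screening of the feature set before training the costlier SVM
lda_score = LinearDiscriminantAnalysis().fit(X, y).score(X, y)
svm = SVC(kernel="rbf").fit(X, y)
```

Features whose LDA separability is poor can be dropped before the expensive SVM training, which is the role LDA plays in the described pipeline.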
As part of the course "Nachhaltigkeit im industriellen Umfeld" (Sustainability in an Industrial Environment) in the master's programme in Environmental and Process Engineering of the universities of applied sciences Konstanz and Ravensburg-Weingarten, a student conference took place in December 2016. Working individually or in teams of two, the students developed
conference contributions on the following topics:
- Highlights from the field of energy generation and embodied energy
- Aspects of the circular economy
- Ecosystems, their stress and their preservation
- Specific economic sectors and sustainability
The results of the student conference on "Sustainability in an
Industrial Environment" are presented in this publication.
Ballistocardiography is a technique that measures the heart rate from the mechanical vibrations of the body caused by the heart's movement. In this work, a novel noninvasive device placed under the mattress of a bed estimates the heart rate using ballistocardiography. Different algorithms for heart rate estimation have been developed.
Personalized remote healthcare monitoring is in continuous development due to the technological improvements of sensors and wearable electronic systems. A state of the art of research on wearable sensors for healthcare applications is presented in this work. Furthermore, a state of the art of wearable devices available on the market for health and sport monitoring, including chest and wrist bands and smartwatches, is presented. Many activity trackers are commercially available; prices are continuously decreasing and performance is improving, but commercial devices do not provide raw data and are therefore not useful for research purposes.
We present a 3d-laser-scan simulation in virtual reality for creating synthetic scans of CAD models. Consisting of the virtual reality head-mounted display Oculus Rift and the motion controller Razer Hydra, our system can be used like common hand-held 3d laser scanners. It supports scanning of triangular meshes as well as of b-spline tensor product surfaces, based on high-performance ray-casting algorithms. While point clouds from existing scanning simulations lack the man-made structure, our approach overcomes this problem by imitating real scanning scenarios. Calculation speed, interactivity and the resulting realistic point clouds are the benefits of this system.
The digital transformation of business processes and the integration of IT systems lead to opportunities and risks for small and medium-sized enterprises (SMEs), risks that can result in a lack of IT Governance, Risk and Compliance (GRC). The purpose of this paper is to present the Design and Evaluation phases of creating an artefact to reduce these risks. For this, the Design Science Research approach based on Hevner is used. The artefact is developed by selecting relevant existing frameworks and identifying SME-specific competencies. The method enables IT-GRC managers to transfer or adapt the frameworks to an SME organizational structure. The results from ten interviews and three further feedback loops showed that the method can be applied in practice and that a tailoring of established frameworks can take place. Contrary to the previous basic orientation of the research, this paper focuses on the concretization of approaches.
The digital transformation of business processes and the integration of IT systems lead to opportunities and risks for small and medium-sized enterprises (SMEs), risks that can result in a lack of IT Governance, Risk and Compliance (IT-GRC). The purpose of this paper is to present the current state of the research project. For this, the Design Science Research approach based on Hevner is used. Building on the phases of Problem Identification and Objectives, this paper deals with the development of an artefact and thus presents the draft of the Design phase. The artefact is developed by selecting relevant existing frameworks and standards and identifying SME-specific conditions.
An IT-GRC approach in SME
(2022)
The digital transformation of business processes and the integration of IT systems lead to opportunities and risks for small and medium-sized enterprises (SMEs), risks that can result in a lack of IT compliance. The purpose of this research-in-progress paper is to present the current state of an IT-Governance-Risk-Compliance (IT-GRC) research project. First, the results of an already conducted literature review are discussed, combined with qualitative interviews (expert survey) of persons close to IT compliance. In the context of this paper, a first design approach is developed by selecting relevant existing frameworks and standards and identifying SME-specific conditions. The first design is intended to contribute to a further artefact conception of tailoring approaches and standards and to the creation of a guidance.
This policy brief presents the possibilities of using big data analytics for safe, decarbonised and climate-resilient infrastructure. The policy brief focuses on current constraints and limitations to applying big data analytics to the infrastructure ecosystem and presents several examples and best practices for different infrastructure sectors and at different policy levels (national, municipal) to highlight recommendations and policy requirements needed for deep digital transformation and sustainable solutions in infrastructure planning and delivery.
Reconstruction of hand-held laser scanner data is used in industry primarily for reverse engineering. Traditionally, scanning and reconstruction are separate steps, and the operator of the laser scanner has no feedback from the reconstruction results. On-line reconstruction of the CAD geometry allows for such immediate feedback.
We propose a method for on-line segmentation and reconstruction of CAD geometry from a stream of point data based on means that are updated on-line. These means are combined to define complex local geometric properties, e.g., radii and center points of spherical regions. Using means of local scores, planar, cylindrical, and spherical segments are detected and robustly extended with region growing. For the on-line computation of the means we use so-called accumulated means, which allow for on-line insertion and removal of values and merging of means. Our results show that this approach can be performed on-line and is robust to noise. We demonstrate that our method reconstructs spherical, cylindrical, and planar segments on real scan data containing typical errors caused by hand-held laser scanners.
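The accumulated-means idea can be sketched as a running sum and count that support on-line insertion, removal and merging; the class name and API below are ours, not the paper's:

```python
# Sketch of an "accumulated mean": a running mean that supports on-line
# insertion and removal of values as well as merging of two means.
class AccumulatedMean:
    def __init__(self, total=0.0, count=0):
        self.total = total
        self.count = count

    def insert(self, value):
        self.total += value
        self.count += 1

    def remove(self, value):
        self.total -= value
        self.count -= 1

    def merge(self, other):
        # Merging only needs the accumulated sums and counts
        return AccumulatedMean(self.total + other.total,
                               self.count + other.count)

    @property
    def mean(self):
        return self.total / self.count

a = AccumulatedMean()
for v in (1.0, 2.0, 3.0):
    a.insert(v)
a.remove(3.0)
b = AccumulatedMean()
b.insert(6.0)
print(a.mean, a.merge(b).mean)  # → 1.5 3.0
```

Because only sums and counts are stored, updating a segment's geometric properties as points stream in stays constant-time per point.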
Using multi-camera matching techniques for 3d reconstruction, there is usually a trade-off between the quality of the computed depth map and the speed of the computations. Whereas high-quality matching methods take several seconds to several minutes to compute a depth map for one set of images, real-time methods achieve only low-quality results. In this paper we present a multi-camera matching method that runs in real-time and yields high-resolution depth maps. Our method is based on a novel multi-level combination of normalized cross correlation, deformed matching windows based on the multi-level depth map information, and sub-pixel-precise disparity maps. The whole process is implemented completely on the GPU. With this approach we can process four 0.7-megapixel images in 129 milliseconds into a full-resolution 3d depth map. Our technique is tailored to the recognition of non-technical shapes, because our target application is face recognition.
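The normalized cross correlation at the core of such matching can be stated in a few lines; this CPU sketch in Python only shows the correlation measure itself, not the GPU multi-level scheme or the window deformation:

```python
import numpy as np

# Normalized cross correlation of two equally sized matching windows:
# subtract the means, then divide the inner product by the norms, which
# makes the score invariant to gain and bias changes between cameras.
def ncc(a, b):
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom)

patch = np.array([[1.0, 2.0], [3.0, 4.0]])
print(ncc(patch, 2.0 * patch + 5.0))  # → 1.0 (invariant to gain and bias)
```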
Ulrich Finsterwalder
(2016)