When using multi-camera matching techniques for 3D reconstruction, there is usually a trade-off between the quality of the computed depth map and the speed of the computation. Whereas high-quality matching methods take several seconds to several minutes to compute a depth map for one set of images, real-time methods achieve only low-quality results. In this paper we present a multi-camera matching method that runs in real time and yields high-resolution depth maps. Our method is based on a novel multi-level combination of normalized cross correlation, matching windows deformed according to the multi-level depth map information, and sub-pixel-precise disparity maps. The whole process is implemented completely on the GPU. With this approach we can process four 0.7-megapixel images into a full-resolution 3D depth map in 129 milliseconds. Our technique is tailored to the recognition of non-technical shapes, because our target application is face recognition.
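The core similarity measure named in the abstract is normalized cross correlation. The following is a minimal, illustrative sketch of plain NCC between two image windows — not the authors' multi-level, deformed-window GPU variant:

```python
import math

def ncc(window_a, window_b):
    """Normalized cross correlation between two equally sized image
    windows, given as flat lists of pixel intensities. Returns a score
    in [-1, 1]; 1 means the windows match up to an affine brightness
    change, which is what makes NCC robust for stereo matching."""
    n = len(window_a)
    mean_a = sum(window_a) / n
    mean_b = sum(window_b) / n
    da = [a - mean_a for a in window_a]
    db = [b - mean_b for b in window_b]
    num = sum(x * y for x, y in zip(da, db))
    den = math.sqrt(sum(x * x for x in da) * sum(y * y for y in db))
    return num / den if den else 0.0
```

In a matching pipeline, this score would be evaluated for candidate disparities and the best-scoring disparity kept per pixel.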
In 3D extended object tracking (EOT), well-established models exist for tracking the object extent using various shape priors. With these models, however, a single update has to be performed for every measurement, leading to a high computational runtime for high-resolution sensors. In this paper, we address this problem by using various model-independent downsampling schemes, based on distance heuristics and random sampling, as pre-processing before the update. We investigate the methods in a simulated and a real-world tracking scenario using two different measurement models with measurements gathered from a LiDAR sensor. We found that there is huge potential for speeding up 3D EOT by dropping up to 95% of the measurements in our investigated scenarios when using random sampling. Since random sampling can, however, also result in a subset that does not represent the total set well, leading to poor tracking performance, there is still a high demand for further research.
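Of the downsampling schemes mentioned, random sampling is the simplest. A minimal sketch of such a model-independent pre-processing step (the function name and keep-fraction interface are assumptions for illustration):

```python
import random

def downsample(measurements, keep_fraction, seed=None):
    """Model-independent random downsampling: keep only a fraction of
    the measurements before the (expensive) per-measurement EOT update.
    A fixed seed makes the subset reproducible for experiments."""
    rng = random.Random(seed)
    k = max(1, round(len(measurements) * keep_fraction))
    return rng.sample(measurements, k)
```

Dropping 95% of the points corresponds to `keep_fraction=0.05`; as the abstract notes, an unlucky subset may represent the full point set poorly.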
In recent years, algorithms for 3D shape tracking using radial functions in spherical coordinates, represented with different methods, have been proposed. However, we have seen that in many dynamic scenarios mainly measurements from the lateral surface of the target can be expected, and only few measurements from the top and bottom parts, leading to an error-prone shape estimate in the top and bottom regions when a representation in spherical coordinates is used. We therefore propose to represent the shape of the target using a radial function in cylindrical coordinates, as this only represents regions of the lateral surface, and no information from the top or bottom parts is needed. In this paper, we use a Fourier-Chebyshev double series for 3D shape representation, since a mixture of Fourier and Chebyshev series is a suitable basis for expanding a radial function in cylindrical coordinates. We investigate the method in a simulated and a real-world maritime scenario with a CAD model of the target boat as a reference. We have found that shape representation in cylindrical coordinates has decisive advantages compared to shape representation in spherical coordinates and should preferably be used if no prior knowledge of the measurement distribution on the surface of the target is available.
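To make the representation concrete: a sketch of evaluating a truncated Fourier-Chebyshev double series as a radial function in cylindrical coordinates. The exact truncation and coefficient layout are assumptions; the paper's estimator would infer the coefficients, which are simply given here:

```python
import math

def radial_function(phi, z, coeffs):
    """Evaluate a truncated Fourier-Chebyshev double series
    r(phi, z) = sum_{m,n} [a_mn cos(m phi) + b_mn sin(m phi)] T_n(z),
    where phi is the azimuth, z the cylinder height scaled to [-1, 1],
    and T_n(z) = cos(n arccos z) is the Chebyshev polynomial of the
    first kind. `coeffs` maps (m, n) -> (a_mn, b_mn)."""
    r = 0.0
    for (m, n), (a, b) in coeffs.items():
        t_n = math.cos(n * math.acos(z))
        r += (a * math.cos(m * phi) + b * math.sin(m * phi)) * t_n
    return r
```

With only the (0, 0) coefficient set, the function describes a circular cylinder of constant radius — a natural starting prior for a boat-like lateral surface.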
Digitization and sustainability are the two big topics of our current time. As the usage of digital products like IoT devices continues to grow, it affects the energy consumption caused by the Internet. At the same time, more and more companies feel the need to become carbon neutral and sustainable. Determining the environmental impact of an IoT device is challenging, as both the production of the hardware components and the electricity consumption of the Internet must be considered, since the Internet is the primary communication medium of an IoT device. Estimating the electricity consumption of the Internet itself is a complex task. We performed a life cycle assessment (LCA) to determine the environmental impact of an intelligent smoke detector sold in Germany, taking its whole life cycle from cradle to grave into account. We applied the impact assessment method ReCiPe 2016 Midpoint and compared its results with ILCD 2011 Midpoint+ to check the robustness of our results. The LCA results showed that electricity consumption during the use phase is the main contributor to environmental impacts. This contribution is caused by the mining of coal, which is part of the German electricity mix. Consequently, the smoke detector mainly contributes to the impact categories of freshwater and marine ecotoxicity, but only marginally to global warming.
Sleep is extremely important for physical and mental health. Although polysomnography is an established approach in sleep analysis, it is quite intrusive and expensive. Consequently, developing a non-invasive and non-intrusive home sleep monitoring system that has minimal influence on patients and can reliably and accurately measure cardiorespiratory parameters is of great interest. The aim of this study is to validate a non-invasive and unobtrusive cardiorespiratory parameter monitoring system based on an accelerometer sensor. This system includes a special holder to install the system under the bed mattress. An additional aim is to determine the optimum relative system position (in relation to the subject) at which the most accurate and precise values of the measured parameters can be achieved. The data were collected from 23 subjects (13 males and 10 females). The obtained ballistocardiogram signal was sequentially processed using a sixth-order Butterworth bandpass filter and a moving average filter. As a result, an average error (compared to reference values) of 2.24 beats per minute for heart rate and 1.52 breaths per minute for respiratory rate was achieved, regardless of the subject’s sleep position. For males and females, the errors were 2.28 bpm and 2.19 bpm for heart rate and 1.41 rpm and 1.30 rpm for respiratory rate. We determined that placing the sensor and system at chest level is the preferred configuration for cardiorespiratory measurement. Further studies of the system’s performance in larger groups of subjects are required, despite the promising results of the current tests in healthy subjects.
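The second stage of the signal chain described above is a moving average filter. A minimal pure-Python sketch of that stage (the sixth-order Butterworth bandpass stage, typically done with a DSP library, is omitted here; window length is an assumption to be tuned to the BCG sampling rate):

```python
def moving_average(signal, window):
    """Moving-average smoothing with a running sum, so each output
    sample costs O(1) instead of re-summing the whole window.
    Returns len(signal) - window + 1 smoothed samples."""
    if window < 1 or window > len(signal):
        raise ValueError("window must be in [1, len(signal)]")
    out = []
    acc = sum(signal[:window])
    out.append(acc / window)
    for i in range(window, len(signal)):
        acc += signal[i] - signal[i - window]
        out.append(acc / window)
    return out
```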
As fish farming is becoming more and more important worldwide, this ongoing project aims at the simulation and test-based analysis of highly stressed wire contacts, such as those found in off-shore fish farm cages, in order to make them more reliable. The quasi-static tensile test of a wire mesh provides data for the construction of a finite element model to better understand the behavior of the high-strength stainless steel from which the cages are made. Fatigue tests provide new insights that are used to adjust the finite element model in order to predict the probability of possible damage caused by heavy mechanical loads such as waves, storms, and predators (e.g. sharks).
This policy brief presents the possibilities of using big data analytics for safe, decarbonised and climate-resilient infrastructure. It focuses on current constraints and limitations to applying big data analytics to the infrastructure ecosystem, and presents several examples and best practices for different infrastructure sectors and policy levels (national, municipal) to highlight the recommendations and policy requirements needed for deep digital transformation and sustainable solutions in infrastructure planning and delivery.
In tourism, energy demands are particularly high. Tourism facilities such as hotels require large amounts of electric and heating or cooling energy. However, their supply is usually still based on fossil energies. This research approach analyses the potential of promoting renewable energies in Black Forest tourism. It focuses on a combined and hence highly efficient production of both electric and thermal energy by biogas plants on the one hand, and its provision to local tourism facilities via short-distance networks on the other. Based on surveys and qualitative empiricism, and considering regional resource availability as well as socio-economic aspects, it examines the strengths, weaknesses, opportunities and threats that can arise from such a cooperation.
Probabilistic Short-Term Low-Voltage Load Forecasting using Bernstein-Polynomial Normalizing Flows
(2021)
The transition to a fully renewable energy grid requires better forecasting of demand at the low-voltage level. However, high fluctuations and increasing electrification cause huge forecast errors with traditional point estimates. Probabilistic load forecasts take future uncertainties into account and thus enable various applications in low-carbon energy systems. We propose an approach for flexible conditional density forecasting of short-term load based on Bernstein polynomial normalizing flows, where a neural network controls the parameters of the flow. In an empirical study with 363 smart meter customers, our density predictions compare favorably against Gaussian and Gaussian mixture densities and also outperform a non-parametric approach based on the pinball loss for 24h-ahead load forecasting, for two different neural network architectures.
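The building block of the proposed flow is the Bernstein polynomial. A minimal sketch of its evaluation — in the actual method a neural network predicts the coefficients (constrained to be increasing so the transformation is monotone), whereas here they are just a plain list:

```python
from math import comb

def bernstein_poly(theta, z):
    """Evaluate a Bernstein polynomial of degree M = len(theta) - 1
    at z in [0, 1]:  sum_k theta_k * C(M, k) * z^k * (1 - z)^(M - k).
    With increasing theta this defines a monotone map, usable as the
    transformation in a normalizing flow."""
    m = len(theta) - 1
    return sum(
        t * comb(m, k) * z**k * (1 - z) ** (m - k)
        for k, t in enumerate(theta)
    )
```

Note the convenient endpoint property: the polynomial equals `theta[0]` at `z = 0` and `theta[-1]` at `z = 1`, so the coefficients directly bound the output range.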
In this paper, a novel measurement model based on spherical double Fourier series (DFS) for estimating the 3D shape of a target concurrently with its kinematic state is introduced. Here, the shape is represented as a star-convex radial function, decomposed as spherical DFS. In comparison to ordinary DFS, spherical DFS do not suffer from ambiguities at the poles. Details will be given in the paper. The shape representation is integrated into a Bayesian state estimator framework via a measurement equation. As range sensors only generate measurements from the target side facing the sensor, the shape representation is modified to enable application of shape symmetries during the estimation process. The model is analyzed in simulations and compared to a shape estimation procedure using spherical harmonics. Finally, shape estimation using spherical and ordinary DFS is compared to analyze the effect of the pole problem in extended object tracking (EOT) scenarios.
We have analyzed a pool of 37,839 articles published in 4,404 business-related journals in the entrepreneurship research field using a novel literature review approach that is based on machine learning and text data mining. Most papers have been published in the journals ‘Small Business Economics’, ‘International Journal of Entrepreneurship and Small Business’, and ‘Sustainability’ (Switzerland), while the sum of citations is highest in the ‘Journal of Business Venturing’, ‘Entrepreneurship Theory and Practice’, and ‘Small Business Economics’. We derived 29 overarching themes based on 52 identified clusters. The social entrepreneurship, development, innovation, capital, and economy clusters are the largest among those with high thematic clarity. The most discussed clusters, measured by the average number of citations per assigned paper, are research, orientation, capital, gender, and growth. The clusters with the highest average growth in publications per year are social entrepreneurship, innovation, development, entrepreneurship education, and (business) models. Measured by the average yearly citation rate per paper, the thematic cluster ‘research’, mostly containing literature studies, received the most attention. The MLR allows for the inclusion of a significantly higher number of publications compared to traditional reviews, thus providing a comprehensive, descriptive overview of the whole research field.
Deep neural networks have become a veritable alternative to classic speaker recognition and clustering methods in recent years. However, while the speech signal clearly is a time series, and despite the body of literature on the benefits of prosodic (suprasegmental) features, identifying voices has usually not been approached with sequence learning methods. Only recently has a recurrent neural network (RNN) been successfully applied to this task, while the use of convolutional neural networks (CNNs), which, unlike RNNs, are not able to capture arbitrary time dependencies, still prevails. In this paper, we show the effectiveness of RNNs for speaker recognition by improving state-of-the-art speaker clustering performance and robustness on the classic TIMIT benchmark. We provide arguments why RNNs are superior by experimentally showing a “sweet spot” of the segment length for successfully capturing prosodic information that has been theoretically predicted in previous work.
We propose a novel end-to-end neural network architecture that, once trained, directly outputs a probabilistic clustering of a batch of input examples in one pass. It estimates a distribution over the number of clusters k, and, for each 1≤k≤kmax, a distribution over the individual cluster assignment for each data point. The network is trained in advance in a supervised fashion on separate data to learn grouping by any perceptual similarity criterion based on pairwise labels (same/different group). It can then be applied to different data containing different groups. We demonstrate promising performance on high-dimensional data like images (COIL-100) and speech (TIMIT). We call this “learning to cluster” and show its conceptual difference to deep metric learning, semi-supervised clustering, and other related approaches, while having the advantage of performing learnable clustering fully end-to-end.
CO2 compensation measures, in particular the compensation of flights, are becoming more and more popular. Carbon offsetting refers to donation-financed climate protection projects that save greenhouse gases in order to compensate for emissions previously caused elsewhere.
CO2 abatement costs are often low in developing countries, which is why most offset projects are implemented there. However, this does not mean that the holiday resort and the project country are related to each other in any way.
By linking carbon offset projects with the destination country, tourists can get an impression of the co-financed project. If such projects are realized in cooperation with the hotel, the hotel operator gains a new tourist attraction and can demonstrate its commitment to climate protection in a PR-effective way.
Fatigue and drowsiness are responsible for a significant percentage of road traffic accidents. There are several approaches to monitoring the driver’s drowsiness, ranging from the driver’s steering behavior to analysis of the driver, e.g. eye tracking, blinking, yawning or the electrocardiogram (ECG). This paper describes the development of a low-cost ECG sensor to derive heart rate variability (HRV) data for drowsiness detection. The work includes the hardware and the software design. The hardware has been implemented on a printed circuit board (PCB) designed so that the board can be used as an extension shield for an Arduino. The PCB contains a double, inverted ECG channel including low-pass filtering and provides two analog outputs to the Arduino, which combines them and performs the analog-to-digital conversion. The digital ECG signal is transferred to an NVIDIA embedded PC where the processing takes place, including QRS-complex, heart rate and HRV detection as well as visualization features. The resulting compact sensor provides good results in the extraction of the main ECG parameters. The sensor is being used in a larger framework, where facial-recognition-based drowsiness detection is combined with ECG-based detection to improve the recognition rate under unfavorable light or occlusion conditions.
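Once the QRS complexes have been detected, heart rate follows directly from the RR intervals. A minimal sketch of that final step (the QRS detection itself, done on the embedded PC, is out of scope here; the function name is an assumption):

```python
def heart_rate_from_peaks(peak_times_s):
    """Given the times (in seconds) of detected R peaks, compute the
    mean heart rate in beats per minute from the RR intervals. The
    RR interval series itself is the raw material for HRV measures."""
    if len(peak_times_s) < 2:
        raise ValueError("need at least two R peaks")
    rr = [b - a for a, b in zip(peak_times_s, peak_times_s[1:])]
    mean_rr = sum(rr) / len(rr)
    return 60.0 / mean_rr
```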
In the reverse engineering process, one has to classify parts of point clouds as the correct type of geometric primitive. Features based on different geometric properties, such as point relations, normals, and curvature information, can be used to train classifiers like Support Vector Machines (SVMs). These geometric features are estimated in the local neighborhood of a point of the point cloud. The multitude of different features makes an in-depth comparison necessary. In this work we evaluate 23 features for the classification of geometric primitives in point clouds. Their performance is evaluated on SVMs used to classify geometric primitives in simulated and real laser-scanned point clouds. We also introduce a normalization of point cloud density to improve classification generalization.
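The features in question are computed per point from its local neighborhood. As an illustrative example (not one of the paper's 23 features specifically), a simple density-sensitive local feature is the mean distance to the k nearest neighbors:

```python
import math

def mean_knn_distance(points, query, k):
    """Mean Euclidean distance from `query` to its k nearest
    neighbours in `points` — a simple local-neighbourhood feature.
    Because it scales with sampling density, features like this
    motivate the density normalization mentioned in the abstract.
    Brute force O(n log n) per query; a k-d tree would be used
    for real point clouds."""
    dists = sorted(math.dist(query, p) for p in points if p != query)
    return sum(dists[:k]) / k
```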
InBetween
(2017)