Refine
Year of publication
Document Type
- Conference Proceeding (642)
- Article (425)
- Other Publications (143)
- Part of a Book (141)
- Working Paper (128)
- Book (118)
- Report (115)
- Journal (Complete Issue of a Journal) (85)
- Master's Thesis (76)
- Doctoral Thesis (58)
Language
- German (1112)
- English (881)
- Multiple languages (8)
Keywords
Institute
- Fakultät Architektur und Gestaltung (41)
- Fakultät Bauingenieurwesen (104)
- Fakultät Elektrotechnik und Informationstechnik (33)
- Fakultät Informatik (121)
- Fakultät Maschinenbau (60)
- Fakultät Wirtschafts-, Kultur- und Rechtswissenschaften (106)
- Institut für Angewandte Forschung - IAF (114)
- Institut für Naturwissenschaften und Mathematik - INM (3)
- Institut für Optische Systeme - IOS (39)
- Institut für Strategische Innovation und Technologiemanagement - IST (60)
R concretes, i.e. concretes with a proportion of recycled aggregates, are standardized normal concretes approved for use in Germany up to strength class C30/37. In view of their good technical properties and ecological advantages, the article presents possible applications in the field of concrete products and precast concrete elements. Read part 1 of the paper.
Digital cameras are subject to physical, electronic, and optical effects that result in errors and noise in the image. These effects include, for example, a temperature-dependent dark current, read noise, optical vignetting, and different sensitivities of individual pixels. The task of a radiometric calibration is to reduce these errors in the image and thus improve the quality of the overall application. In this work we present an algorithm for radiometric calibration based on Gaussian processes, a regression method widely used in machine learning that is particularly useful in our context. Gaussian process regression is used to learn a temperature- and exposure-time-dependent mapping from observed gray-scale values to true light intensities for each pixel. Regression models based on the characteristics of single pixels suffer from excessively high runtime and are thus unsuitable for many practical applications. In contrast, a single regression model for an entire image with high spatial resolution leads to a low-quality radiometric calibration, which also limits its practical use. The proposed algorithm is predicated on a partitioning of the pixels such that each pixel partition can be represented by a single regression model without loss of quality. Partitioning is done by extracting features from the characteristic of each pixel and using them for lexicographic sorting. Splitting the sorted data into partitions of equal size yields the final partitions, each of which is represented by its partition center. An individual Gaussian process regression and model selection is done for each partition. Calibration is performed by interpolating the gray-scale value of each pixel with the regression model of the respective partition. An experimental comparison of the proposed approach with classical flat-field calibration shows a consistently higher reconstruction quality for the same overall number of calibration frames.
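As an illustration of the partitioning step only (a sketch with invented toy feature values, not the paper's implementation), pixels can be grouped by lexicographically sorting their per-pixel feature vectors and splitting the sorted order into equally sized partitions:

```python
import numpy as np

def partition_pixels(features, n_partitions):
    """Partition pixels by lexicographically sorting their feature
    vectors and splitting the sorted order into equally sized groups.
    features: (n_pixels, n_features) array; returns a list of index arrays."""
    # np.lexsort treats the LAST key as primary, so reverse the columns
    order = np.lexsort(features.T[::-1])
    return np.array_split(order, n_partitions)

# toy example: 8 pixels with 2 hypothetical features each
feats = np.array([[3, 1], [1, 2], [3, 0], [1, 1],
                  [2, 5], [2, 4], [1, 0], [3, 2]], dtype=float)
parts = partition_pixels(feats, 4)
```

Each resulting partition would then be represented by its center and receive its own Gaussian process regression model.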
Digital cameras are used in a large variety of scientific and industrial applications. For most applications the acquired data should represent the real light intensity per pixel as accurately as possible. However, digital cameras are subject to different sources of noise which distort the resulting image, including photon noise, fixed-pattern noise, and read noise. The aim of radiometric calibration is to improve the quality of the resulting images by reducing the influence of these types of noise on the measured data. In this paper, a new approach to the radiometric calibration of digital cameras using sparse Gaussian process regression is presented. Gaussian process regression is a kernel-based supervised machine learning technique. It is used to learn the response of a camera system from a set of training images in order to allow the calibration of new images. Compared with the standard Gaussian process method or flat-field correction, our sparse approach allows for faster calibration and higher reconstruction quality.
The RALV project (rasche Analyse der Lichtstärkeverteilung, i.e. rapid analysis of luminous intensity distributions) aimed to investigate to what extent luminous intensity distributions can be measured quickly using CCD camera images of scatter patterns on a screen. To this end, a sample is illuminated by a directed light source (laser). The scattered light falls onto a screen, where a brightness distribution forms; this distribution is photographed from the rear side. From the geometry of the setup and the measured brightness distribution, the luminous intensity distribution, and thus the scattering characteristic, can be computed. Through a suitable choice of screen materials, a well-designed camera setup, and software developed for this purpose, the RALV project succeeded in demonstrating the feasibility of this method. The main work packages of the project were: finding suitable components (light sources, screen material, camera); writing the required software for control, evaluation, and visualization; and comparing the RALV method with standard methods (planar goniometer). At the end of the project, the data required to build an industrially deployable measuring device are available. The RALV measuring method proved to have its strengths for materials with a specular preferred direction. As side results of the project, a planar goniometer with a 3-axis sample mount was built, and the weathering chamber of the FH Konstanz was extended for photometric investigations.
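The conversion from measured screen brightness to luminous intensity can be sketched for an idealized geometry (point-like scatter source, flat screen perpendicular to the optical axis; the actual project setup may differ):

```python
import numpy as np

def intensity_from_screen(E, d, x, y):
    """Estimate luminous intensity I(theta) from illuminance E measured at
    screen position (x, y), with the sample at distance d from a flat screen.
    For a point-like source, E = I * cos^3(theta) / d^2 with
    tan(theta) = sqrt(x^2 + y^2) / d, hence I = E * d^2 / cos^3(theta).
    Simplified geometry for illustration only."""
    r2 = x**2 + y**2
    cos_theta = d / np.sqrt(d**2 + r2)
    return E * d**2 / cos_theta**3
```

On the optical axis (x = y = 0) this reduces to the familiar inverse-square relation I = E * d^2.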
RC-Beton im Hochbau
(2017)
The performance and reliability of non-volatile NAND flash memories deteriorate as the number of program/erase cycles grows. Reliability also suffers from cell-to-cell interference, long data retention times, and read disturb. These processes affect the read threshold voltages: the aging of the cells causes voltage shifts, which lead to high bit error rates (BER) with fixed pre-defined read thresholds. This work proposes two methods that aim at minimizing the BER by adjusting the read thresholds. Both methods utilize the number of errors detected in the codeword of an error correction code. It is demonstrated that the observed number of errors is a good measure of the voltage shifts and can be utilized for the initial calibration of the read thresholds. The second approach is a gradual channel estimation method that exploits the asymmetrical probabilities of one-to-zero and zero-to-one errors caused by threshold calibration errors. Both methods are investigated using the mutual information between the optimal read voltage and the measured error values. Numerical results obtained from flash measurements show that these methods reduce the BER of NAND flash memories significantly.
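A minimal toy model (not the proposed methods themselves, and with invented distribution parameters) illustrates the underlying idea of choosing the read threshold that minimizes the observed error count:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy channel: cell voltages for stored 1s and 0s, drifted after wear
v1 = rng.normal(1.0, 0.35, 5000)   # cells storing '1' (low state)
v0 = rng.normal(2.6, 0.35, 5000)   # cells storing '0' (high state)

def errors_at(threshold):
    """Bit errors if cells below the threshold read as '1', above as '0'."""
    return np.sum(v1 >= threshold) + np.sum(v0 < threshold)

# Calibrate: sweep candidate read thresholds, keep the one with fewest errors
candidates = np.linspace(0.5, 3.0, 101)
best = min(candidates, key=errors_at)
```

In practice the error counts come from the ECC decoder rather than from known data, but the selection principle is the same.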
Non-volatile NAND flash memories store information as an electrical charge. Different read reference voltages are applied to read the data. However, the threshold voltage distributions vary due to aging effects such as program/erase cycling and data retention time. It is necessary to adapt the read reference voltages to different life-cycle conditions to minimize the error probability during readout. In the past, methods based on pilot data or on high-resolution threshold voltage histograms were proposed to estimate the changes in the voltage distributions. In this work, we propose a machine learning approach with neural networks to estimate the read reference voltages. The proposed method utilizes sparse histogram data of the threshold voltage distributions. For reading the information from triple-level cell (TLC) memories, several read reference voltages are applied in sequence. We consider two histogram resolutions: the simplest histogram consists of the zero-and-one ratios of the hard-decision read operation, whereas a higher resolution is obtained by considering the quantization levels for soft-input decoding. This approach does not require pilot data for the voltage adaptation. Furthermore, only a few measurements at extreme points of the threshold voltage distributions are required as training data. Measurements under different conditions verify the proposed approach: the resulting neural networks perform well under other life-cycle conditions.
The introduction of multi-level cell (MLC) and triple-level cell (TLC) technologies reduced the reliability of flash memories significantly compared with single-level cell (SLC) flash. The reliability of the flash memory suffers from various error causes: program/erase cycles, read disturb, and cell-to-cell interference impact the threshold voltages. With pre-defined fixed read thresholds, a voltage shift increases the bit error rate (BER). This work proposes a read threshold calibration method that aims at minimizing the BER by adapting the read voltages. The adaptation of the read thresholds is based on the number of errors observed in the codeword protecting a small amount of metadata. Simulations based on flash measurements demonstrate that this method can significantly reduce the BER of TLC memories.
Continuous range queries are a common means of handling mobile clients in high-density areas. Most existing approaches focus on settings in which the range queries for location-based services are mostly static while the mobile clients in the ranges move. We focus on a category called Dynamic Real-Time Range Queries (DRRQ), assuming that both the clients requested by the query and the inquirers are mobile. In consequence, the query parameters and the query results change continuously. This leads to two requirements: the ability to deal with an arbitrarily high number of mobile nodes (scalability) and the real-time delivery of range query results. In this paper we present Adaptive Quad Streaming (AQS), a highly decentralized solution for the requirements of DRRQs. AQS approximates the query results in favor of controlled real-time delivery and guaranteed scalability. While prior work commonly optimizes data structures on servers, AQS relies on a highly distributed cell structure, without server-side data structures, that automatically adapts to changing client distributions. Instead of the commonly used request-response approach, we apply a lightweight streaming method in which no bidirectional communication and no storage or maintenance of queries are required at all.
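The adaptive cell structure can be sketched as a quadtree-style subdivision that refines wherever the client density is high (a hypothetical minimal model; AQS's actual cell management and streaming protocol differ):

```python
def quad_cells(points, bounds, capacity=4):
    """Recursively split a rectangular cell into four quadrants until each
    cell holds at most `capacity` clients. Returns leaf cell bounds.
    points: list of (x, y); bounds: (x0, y0, x1, y1)."""
    x0, y0, x1, y1 = bounds
    inside = [(x, y) for (x, y) in points if x0 <= x < x1 and y0 <= y < y1]
    if len(inside) <= capacity:
        return [bounds]                      # sparse enough: keep as one cell
    mx, my = (x0 + x1) / 2, (y0 + y1) / 2    # split into four quadrants
    cells = []
    for b in [(x0, y0, mx, my), (mx, y0, x1, my),
              (x0, my, mx, y1), (mx, my, x1, y1)]:
        cells += quad_cells(inside, b, capacity)
    return cells

# clients clustered along a diagonal force finer cells there
pts = [(0.1 * i, 0.1 * i) for i in range(10)]
cells = quad_cells(pts, (0.0, 0.0, 1.0, 1.0))
```

Dense regions end up covered by many small cells while empty regions remain single coarse cells, which is the property that keeps per-cell load bounded.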
This contribution first gives an introductory overview of the legal framework specific to marketing controlling. Since data collection, processing, and analysis are always the core functional features of marketing controlling, this overview is followed by a discussion centered on the legal foundations of data protection.
In this paper, rectangular matrices whose minors of a given order have the same strict sign are considered and sufficient conditions for their recognition are presented. The results are extended to matrices whose minors of a given order have the same sign or are allowed to vanish. A matrix A is called oscillatory if all its minors are nonnegative and there exists a positive integer k such that A^k has all its minors positive. As a generalization, a new type of matrices, called oscillatory of a specific order, is introduced and some of their properties are investigated.
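As a small illustration of the oscillatory property (a standard textbook example, not taken from the paper): the symmetric matrix

```latex
A = \begin{pmatrix} 2 & 1 \\ 1 & 2 \end{pmatrix},
\qquad \det A = 3 > 0,
```

has all its minors (the four entries and the determinant) positive, so A is totally positive and hence oscillatory already with k = 1.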
This document presents an algorithm for non-obtrusive recognition of sleep/wake states using signals derived from ECG, respiration, and body movement captured while lying in bed. Multinomial logistic regression was chosen as the mathematical core of the data analytics. Derived parameters of the three signals serve as the input to the proposed method. The overall accuracy achieved is 84% for wake/sleep stages, with a Cohen's kappa value of 0.46. The presented algorithm is intended to support experts in analyzing sleep quality in more detail. The results confirm the potential of this method and disclose several ways for its improvement.
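Cohen's kappa, the agreement measure reported above, corrects raw accuracy for agreement expected by chance. A minimal reference implementation (generic formula, not the paper's evaluation code):

```python
import numpy as np

def cohens_kappa(y_true, y_pred):
    """Cohen's kappa: (p_o - p_e) / (1 - p_e), where p_o is the observed
    agreement and p_e the agreement expected from the label marginals."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    labels = np.unique(np.concatenate([y_true, y_pred]))
    po = np.mean(y_true == y_pred)                       # observed agreement
    pe = sum(np.mean(y_true == c) * np.mean(y_pred == c) for c in labels)
    return (po - pe) / (1 - pe)
```

A kappa of 0.46 with 84% accuracy thus indicates moderate agreement beyond what the wake/sleep class imbalance alone would produce.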
The recovery of our body and brain from fatigue directly depends on the quality of sleep, which can be determined from the results of a sleep study. The classification of sleep stages is the first step of this study and includes the measurement of vital data and their further processing. The non-invasive sleep analysis system is based on a hardware sensor network of 24 pressure sensors providing sleep phase detection. The pressure sensors are connected to an energy-efficient microcontroller via a system-wide bus. A significant difference between this system and other approaches is the innovative way in which the sensors are placed under the mattress. This feature facilitates the continuous use of the system without any noticeable influence on the sleeping person. The system was tested by conducting experiments that recorded the sleep of various healthy young people. Results indicate the potential to capture respiratory rate and body movement.
Recognizing Human Activity of Daily Living Using a Flexible Wearable for 3D Spine Pose Tracking
(2023)
The World Health Organization recognizes physical activity as an influencing domain on quality of life. Monitoring, evaluating, and supervising it with wearable devices can contribute to the early detection and progress assessment of diseases such as Alzheimer's, to rehabilitation and exercises in telehealth, and to the detection of abrupt events such as a fall. In this work, we use a non-invasive and non-intrusive flexible wearable device for 3D spine pose measurement to monitor and classify physical activity. We develop a comprehensive protocol that consists of 10 indoor, 4 outdoor, and 8 transition-state activities in the three categories static, dynamic, and transition in order to evaluate the applicability of the flexible wearable device to human activity recognition. We implement and compare the performance of three neural networks: a long short-term memory (LSTM) network, a convolutional neural network (CNN), and a hybrid model (CNN-LSTM). For ground truth, we use accelerometer and strip data. The LSTM reached an overall classification accuracy of 98% for all activities. The CNN model with accelerometer data delivered better performance for lying down (100%), static (standing = 82%, sitting = 75%), and dynamic (walking = 100%, running = 100%) positions. Data fusion improved the outputs for standing (92%) and sitting (94%), while the LSTM with the strip data yielded a better performance in bending-related activities (bending forward = 49%, bending backward = 88%, bending right = 92%, and bending left = 100%); the combination of data fusion and principal component analysis further strengthened the output (bending forward = 100%, bending backward = 89%, bending right = 100%, and bending left = 100%). Moreover, the LSTM model detected the first transition state, which is similar to a fall, with an accuracy of 84%. The results show that the wearable device can be used in a daily routine for activity monitoring, recognition, and exercise supervision, but still needs further improvement for fall detection.
We consider the problem of increasing the informative value of electrocardiographic (ECG) surveys using data from multichannel electrocardiographic leads, which include both the recorded electrocardiographic signals and the coordinates of the electrodes placed on the surface of the human torso. We are interested in reconstructing the surface distribution of the equivalent sources during the cardiac cycle at relatively low hardware cost. In our work, we propose to reconstruct the equivalent electrical sources by numerical methods, based on the integral connection between the density of electrical sources and the potential in a conductive medium. We consider heart surface source maps (HSSM), i.e., maps of the distributions of equivalent electrical sources on the heart surface, presenting the source distributions in the form of a simple or double electrical layer. The dynamics of the heart's electrical activity are indicated by the space-time mapping of the equivalent electrical sources in the HSSM.
An inter- and transdisciplinary concept has been developed, focusing on the scaling of industrial circular construction using innovative compacted mineral mixtures (CMM) derived from various soil types (sand, silt, clay) and recycled mineral waste. The concept aims to accelerate the systemic transformation of the construction industry towards carbon neutrality by promoting the large-scale adoption and automation of CMM-based construction materials, which incorporate natural mineral components and recycled aggregates or industrial by-products. In close collaboration with international and domestic stakeholders in the construction sector, the concept explores the integration of various CMM-based construction methods for producing wall elements in conventional building construction. Leveraging a digital urban mining platform, the concept aims to standardize the production process and enable mass-scale production. The ultimate goal is to fully harness the potential of automated CMM-based wall elements as a fast, competitive, emission-free, and recyclable alternative to traditional masonry and concrete construction techniques. To achieve this objective, the concept draws upon the latest advances in soil mechanics, rheology, and automation and incorporates open-source digital platform technologies to enhance data accessibility, processing, and knowledge acquisition. This will bolster confidence in CMM-based technologies and facilitate their widespread adoption. The extraordinary transfer potential of this approach necessitates both basic and applied research. As such, the proposed transformative, inter- and transdisciplinary concept will be conducted and synthesized using a comprehensive, holistic, and transfer-oriented methodology.
Error correction coding for optical communication and storage requires high-rate codes that enable high data throughput and low residual errors. Recently, different concatenated coding schemes were proposed that are based on binary BCH codes with low error-correcting capabilities. In this work, low-complexity hard- and soft-input decoding methods for such codes are investigated. We propose three concepts to reduce the complexity of the decoder. For the algebraic decoding we demonstrate that Peterson's algorithm can be more efficient than the Berlekamp-Massey algorithm for single-, double-, and triple-error-correcting BCH codes. We propose an inversion-less version of Peterson's algorithm and a corresponding decoding architecture. Furthermore, we propose a decoding approach that combines algebraic hard-input decoding with soft-input bit-flipping decoding. An acceptance criterion is utilized to determine the reliability of the estimated codewords. For many received codewords this criterion indicates that the hard-decoding result is sufficiently reliable, and the costly soft-input decoding can be omitted. To reduce the memory size for the soft values, we propose a bit-flipping decoder that stores only the positions and soft values of a small number of code symbols. This method significantly reduces the memory requirements and has little adverse effect on the decoding performance.
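For the single-error-correcting case (t = 1), Peterson's approach degenerates to reading the error location directly off the first syndrome, with no iterative algorithm at all. A minimal sketch for a BCH(15,11) code over GF(2^4) (generic textbook construction, not the paper's inversion-less architecture):

```python
# GF(2^4) arithmetic via exp/log tables (primitive polynomial x^4 + x + 1)
EXP = [0] * 30
v = 1
for i in range(15):
    EXP[i] = EXP[i + 15] = v
    v <<= 1
    if v & 0x10:
        v ^= 0x13          # reduce modulo x^4 + x + 1
LOG = {EXP[i]: i for i in range(15)}

def decode_single_error(received):
    """Peterson decoding for t = 1: the syndrome S1 = sum of alpha^i over
    set bit positions i directly locates a single error; no
    Berlekamp-Massey iteration is needed. received: list of 15 bits."""
    s1 = 0
    for i, bit in enumerate(received):
        if bit:
            s1 ^= EXP[i]
    if s1 == 0:
        return received            # no error detected
    corrected = list(received)
    corrected[LOG[s1]] ^= 1        # flip the located bit
    return corrected
```

For t = 2 and t = 3 Peterson's method still yields small closed-form expressions in the syndromes, which is why it can beat Berlekamp-Massey in hardware for these cases.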
Particularly for manufactured products subject to aesthetic evaluation, the industrial manufacturing process must be monitored and visual defects detected. For this purpose, more and more computer-vision-integrated inspection systems are being used. In optical inspection based on cameras or range scanners, typically only a few examples are known before novel examples are inspected. Consequently, no large data set of non-defective and defective examples can be used to train a classifier, and methods that work with limited or weak supervision must be applied. For such scenarios, I propose new data-efficient machine learning approaches based on one-class learning that reduce the need for supervision in industrial computer vision tasks. The developed novelty detection model automatically extracts features from the input images and is trained only on available non-defective reference data. On top of the feature extractor, a one-class classifier based on recent developments in deep learning is placed. I evaluate the novelty detector in an industrial inspection scenario and on state-of-the-art benchmarks from the machine learning community. In the second part of this work, the model is improved by using a small number of novel defective examples, and hence another source of supervision is incorporated. The targeted real-world inspection unit is based on a camera array and a flashing-light illumination, allowing inline capturing of multichannel images at a high rate. Optionally, the integration of range data, such as laser or lidar signals, is possible by using the developed targetless data fusion method.
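The one-class idea of training only on non-defective references can be illustrated with a much simpler baseline than the deep model described above: fit a Gaussian to the feature vectors of good parts and flag samples with a large Mahalanobis distance as novel (a generic sketch, not the thesis model):

```python
import numpy as np

class GaussianNoveltyDetector:
    """Minimal one-class baseline: model non-defective feature vectors as
    a single Gaussian and flag large Mahalanobis distances as novelties."""

    def fit(self, X, quantile=0.99):
        self.mu = X.mean(axis=0)
        self.cov_inv = np.linalg.inv(np.cov(X, rowvar=False))
        # threshold from the training data itself: tolerate 99% of good parts
        self.threshold = np.quantile(self._dist(X), quantile)
        return self

    def _dist(self, X):
        diff = X - self.mu
        return np.sqrt(np.einsum('ij,jk,ik->i', diff, self.cov_inv, diff))

    def predict(self, X):
        return self._dist(X) > self.threshold   # True = novelty/defect
```

The deep one-class classifier in the work replaces both the hand-set feature space and the Gaussian assumption, but the train-on-good-only principle is the same.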
Due to the great importance of coal, nuclear, and hydroelectric power plants in Baden-Württemberg, the energy industry and water use are closely linked. In dry periods, low river water levels can lead to conflicts between the various water users, e.g. cooling water use (Figure 1), irrigation, and the use of the Neckar as a shipping route. Since the dry summer of 2003, awareness of the relevance of competing water uses has been growing, including cooling water use, irrigation for agriculture, use of waterways for the transport of bulk goods, and nature conservation concerns.
Totally nonnegative matrices, i.e., matrices having all their minors nonnegative, and matrix intervals with respect to the checkerboard partial order are considered. It is proven that if the two bound matrices of such a matrix interval are totally nonnegative and satisfy certain conditions, then all matrices from this interval are also totally nonnegative and satisfy the same conditions.
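For reference (the standard definition, not spelled out in the abstract), the checkerboard partial order compares two matrices entrywise with alternating signs:

```latex
A \preceq^{*} B \;\iff\; (-1)^{i+j}\,(b_{ij} - a_{ij}) \ge 0
\quad \text{for all } i, j,
```

and the matrix interval is the set of all C with A ⪯* C ⪯* B.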
Reliability Assessment of an Unscented Kalman Filter by Using Ellipsoidal Enclosure Techniques
(2022)
The Unscented Kalman Filter (UKF) is widely used for the state, disturbance, and parameter estimation of nonlinear dynamic systems, for which both process and measurement uncertainties are represented in a probabilistic form. Although the UKF can often be shown to be more reliable for nonlinear processes than the linearization-based Extended Kalman Filter (EKF) due to the enhanced approximation capabilities of its underlying probability distribution, it is not a priori obvious whether its strategy for selecting sigma points is sufficiently accurate to handle nonlinearities in the system dynamics and output equations. Such inaccuracies may arise for sufficiently strong nonlinearities in combination with large state, disturbance, and parameter covariances. Then, computationally more demanding approaches such as particle filters or the representation of (multi-modal) probability densities with the help of (Gaussian) mixture representations are possible ways to resolve this issue. To detect cases in a systematic manner that are not reliably handled by a standard EKF or UKF, this paper proposes the computation of outer bounds for state domains that are compatible with a certain percentage of confidence under the assumption of normally distributed states with the help of a set-based ellipsoidal calculus. The practical applicability of this approach is demonstrated for the estimation of state variables and parameters for the nonlinear dynamics of an unmanned surface vessel (USV).
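The sigma-point strategy whose accuracy is in question can be stated compactly. A minimal sketch of the standard unscented transform (generic textbook form with the usual alpha/beta/kappa scaling; the paper's variant and the ellipsoidal enclosure itself are not shown):

```python
import numpy as np

def sigma_points(mu, P, alpha=1e-1, beta=2.0, kappa=0.0):
    """Standard UKF sigma points and weights for mean mu, covariance P.
    Returns (points, mean weights, covariance weights)."""
    n = len(mu)
    lam = alpha**2 * (n + kappa) - n
    S = np.linalg.cholesky((n + lam) * P)          # matrix square root
    pts = [mu] + [mu + S[:, i] for i in range(n)] \
               + [mu - S[:, i] for i in range(n)]
    wm = np.full(2 * n + 1, 1 / (2 * (n + lam)))
    wc = wm.copy()
    wm[0] = lam / (n + lam)
    wc[0] = wm[0] + (1 - alpha**2 + beta)
    return np.array(pts), wm, wc
```

By construction the weighted mean and covariance of the 2n+1 points reproduce mu and P exactly; the approximation error only appears after the points are propagated through a nonlinear map, which is precisely what the set-based bounds are meant to monitor.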
This paper builds upon the widely used resource-based approach to explaining the survival of new technology-based firms (NTBFs). However, instead of looking at the NTBF's initial resource configuration, a process-oriented perspective is taken by focusing on the entrepreneur's ability to transform resources in response to triggers resulting from market interactions. Transaction relations reflect these interactions and are thus operationalized with a suggested method for measuring the status of venture emergence (VE) applicable to early-stage NTBFs. An NTBF's value network maturity is reflected in the number and strength of its transaction relations in the four market dimensions: customers, investors, partners, and human resources. Business plans of NTBFs represent the artifact that contains these data in the form of transaction relation descriptions. Using content analysis, a multi-step combined human and computer coding process has been developed to annotate and classify transaction relations from business plans in order to empirically determine an NTBF's status of VE. Results of the business plan analysis suggest that the level of transaction relations allows conclusions to be drawn about the VE status. Moreover, applying the developed process, a first analysis of a business plan coding test shows that the transaction-relation-based VE status relates significantly to NTBF survival capability.
Drawing on a rich body of multimethod field research, this book examines the ways in which Indonesian and Philippine religious actors have fostered conflict resolution and under what conditions these efforts have been met with success or limited success.
The book addresses two central questions: In what ways, and to what extent, have post-conflict peacebuilding activities of Christian churches contributed to conflict transformation in Mindanao (Philippines) and Maluku (Indonesia)? And to what extent have these church-based efforts been affected by specific economic, political, or social contexts? Based on extensive fieldwork, the study operates with a nested, multi-dimensional, and multi-layered methodological concept which combines qualitative and quantitative methods. Major findings are that church-based peace activities do matter, that they have higher approval rates than state projects, and that they have fostered interreligious understanding.
Through innovative analysis, this book fills a lacuna in the study of ethno-religious conflicts. Informed by the novel Comparative Area Studies (CAS) approach, this book is strictly comparative, includes in-case and cross-case comparisons, and bridges disciplinary research with Area Studies. It will be of interest to academics in the fields of conflict and peacebuilding studies, interreligious dialogue, Southeast Asian Studies, and Asian Politics.
A summary presentation of the results of an empirical study on peace and post-conflict work by church actors in Indonesia (Maluku) and the Philippines (Mindanao). Based on the study, the authors draw conclusions about the practical significance of church peace projects in the aftermath of conflicts.
RELOAD
(2015)
Talk given at the doctoral colloquium of the Kooperatives Promotionskolleg of the HTWG, 9 July 2015.
During my research sabbatical I worked on three different topics, namely orthogonal polynomials in geometric modeling, re-parametrized univariate subdivision curves, and the reconstruction of 3D fish models and other zoological artifacts. In the subsequent sections, I describe my activities in these fields. The sections are meant to present an overview of my research activities, leaving out the technical details.
Section 1 is on orthogonal polynomials and other related generating systems for systems of smooth functions.
In Section 2, I discuss the application of various re-parametrization schemes to interpolatory subdivision algorithms for the generation of space curves.
Section 3 is concerned with my research at the University of Queensland, Brisbane, in collaboration with Dr. Ulrike Siebeck from the School of Biomedical Sciences, on fish behavior and, in particular, the reconstruction of 3D fish models.
In the last Section 4, I describe the effects this research will have on my subsequent teaching at the University of Applied Sciences Konstanz (HTWG).
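As background to Section 2: the classical 4-point interpolatory subdivision scheme is the prototype of the algorithms whose re-parametrizations are discussed there. A minimal refinement step for a closed polygon (the generic Dyn-Levin-Gregory scheme with tension w = 1/16, not the report's re-parametrized variants):

```python
import numpy as np

def four_point_step(pts):
    """One refinement step of the 4-point interpolatory subdivision
    scheme on a closed polygon: every old vertex is kept, and between
    each pair of neighbors a new point is inserted from a cubic-like
    four-point stencil. pts: (n, d) array; returns a (2n, d) array."""
    n = len(pts)
    new = []
    for i in range(n):
        p0, p1, p2, p3 = (pts[(i - 1) % n], pts[i],
                          pts[(i + 1) % n], pts[(i + 2) % n])
        new.append(p1)                                   # keep old point
        new.append((9 * (p1 + p2) - (p0 + p3)) / 16)     # inserted point
    return np.array(new)
```

Repeating the step converges to a smooth curve that interpolates all original vertices; re-parametrization schemes modify where the new points are placed along the curve.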