The magneto-mechanical behavior of magnetic shape memory (MSM) materials has been investigated by several research groups by means of different simulation and modeling approaches. The aim of this paper is to simulate actuators driven by MSM alloys and to understand the behavior of the MSM element during actuation, which shall lead to increased actuator performance. It is shown that internal and external stresses should be taken into consideration when using numerical computation tools for magnetic fields in an efficient way.
The binary asymmetric channel (BAC) is a model for the error characterization of multi-level cell (MLC) flash memories. This contribution presents a joint channel and source coding approach that improves the reliability of MLC flash memories. The objective of the data compression algorithm is to reduce the amount of user data such that the redundancy of the error correction coding can be increased in order to improve the reliability of the data storage system. Moreover, data compression can be utilized to exploit the asymmetry of the channel and thereby reduce the error probability. With MLC flash memories, data compression has to be performed on block level, considering short data blocks. We present a coding scheme suitable for blocks of 1 kilobyte of data.
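For reference, a standard formalization of the BAC (the concrete channel parameters of MLC flash are not restated in the abstract): with transmitted bit $X$ and received bit $Y$, the channel is described by two unequal crossover probabilities,

$$P(Y=1 \mid X=0) = p_0, \qquad P(Y=0 \mid X=1) = p_1, \qquad p_0 \neq p_1 .$$

Exploiting the asymmetry, as proposed above, can then be read as shaping the compressed data so that the symbol with the larger crossover probability occurs less often.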
Multi-object tracking filters require a birth density to detect new objects from measurement data. If the initial positions of new objects are unknown, it may be useful to choose an adaptive birth density. In this paper, a circular birth density is proposed, which is placed like a band around the surveillance area, allowing for 360° coverage. The birth density is described in polar coordinates and considers all point-symmetric quantities such as the radius, radial velocity and tangential velocity of objects entering the surveillance area. Since these quantities are assumed to be unknown and may vary between targets, detected trajectories, and in particular their initial states, are used to estimate the distribution of initial states. The adapted birth density is approximated as a Gaussian mixture so that it can be used by filters operating in Cartesian coordinates.
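A minimal sketch of such a circular birth density as a Gaussian mixture in Cartesian coordinates (function name, component count and variance values are illustrative assumptions, not taken from the paper):

```python
import numpy as np

def circular_birth_gmm(radius, v_radial, v_tangential,
                       n_components=16, pos_var=25.0, vel_var=4.0):
    """Approximate a circular birth density as a Gaussian mixture in
    Cartesian coordinates; state is [x, y, vx, vy]."""
    components = []
    for theta in np.linspace(0.0, 2.0 * np.pi, n_components, endpoint=False):
        c, s = np.cos(theta), np.sin(theta)
        mean = np.array([radius * c,                         # position on the band
                         radius * s,
                         -v_radial * c - v_tangential * s,   # inward radial plus
                         -v_radial * s + v_tangential * c])  # tangential velocity
        cov = np.diag([pos_var, pos_var, vel_var, vel_var])
        components.append((1.0 / n_components, mean, cov))
    return components   # list of (weight, mean, covariance) triples
```

In the paper the radial and tangential quantities are estimated from detected trajectories; here they are fixed inputs for illustration.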
The main aim of the research presented in this manuscript is to compare the results of objective and subjective measurement of sleep quality for older adults (65+) in the home environment. A total of 73 nights was evaluated in this study. A device placed under the mattress was used to obtain the objective measurement data, and a common question on perceived sleep quality was asked to collect the subjective sleep quality level. The achieved results confirm the correlation between objective and subjective measurement of sleep quality, with an average standard deviation of 2 out of 10 possible quality points.
Digitalization is one of the most frequently discussed topics in industry. New technologies, platform concepts and integrated data models enable disruptive business models and drive changes in organization, processes, and tools. The goal is to make a company more efficient, productive and ultimately profitable. However, many companies are facing the challenge of how to approach digital transformation in a structured way and realize these potential benefits. What they realize is that Product Lifecycle Management plays a key role in digitalization initiatives, as object, structure and process management along the life cycle is a foundation for many digitalization use cases. The introduced maturity model for assessing a firm's capabilities along the product lifecycle has been used almost two hundred times. It allows a company to compare its performance with an industry-specific benchmark to reveal individual strengths and weaknesses. Furthermore, an empirical study produced multidimensional correlation coefficients, which identify dependencies between business model characteristics and the maturity level of capabilities.
One major realm of Condition Based Maintenance is finding features that reflect the current health state of the asset or component under observation. Most of the existing approaches are accompanied by high computational costs during the different feature processing phases, making them infeasible in real-world scenarios. In this paper, a feature generation method is evaluated that compensates for two problems: (1) storing and handling large amounts of data and (2) computational complexity. Both problems arise, e.g., when electromagnetic solenoids are artificially aged and health indicators have to be extracted, or when multiple identical solenoids have to be monitored. To overcome these problems, Compressed Sensing (CS), a research field that keeps emerging into new applications, is employed. CS is a data compression technique that allows original signal reconstruction with far fewer samples than Shannon-Nyquist dictates, provided some criteria are met. By applying this method to the measured solenoid coil current, raw data vectors can be reduced to a much smaller set of samples that still contains enough information for proper reconstruction. The obtained CS vector is also assumed to contain enough relevant information about solenoid degradation and faults, allowing CS samples to be used as input to fault detection or remaining useful life estimation routines. The paper gives results demonstrating compression and reconstruction of coil current measurements and outlines the application of CS samples as condition monitoring data by determining deterioration- and fault-related features. Nevertheless, some unresolved issues remain regarding information loss during the compression stage, the design of the compression method itself and its influence on diagnostic/prognostic methods.
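A compact sketch of the compression-and-reconstruction idea on a synthetic, DCT-sparse surrogate for a coil-current segment (all sizes and the OMP recovery are illustrative assumptions; the paper's measurement design may differ):

```python
import numpy as np
from scipy.fft import idct

rng = np.random.default_rng(0)
n, m, k = 256, 64, 8                 # signal length, measurements, sparsity

# Synthetic surrogate for a coil-current segment: sparse in the DCT basis.
coeffs = np.zeros(n)
coeffs[rng.choice(n, k, replace=False)] = rng.normal(size=k)
Psi = idct(np.eye(n), axis=0, norm='ortho')   # DCT synthesis basis (columns)
signal = Psi @ coeffs

Phi = rng.normal(size=(m, n)) / np.sqrt(m)    # random measurement matrix
y = Phi @ signal                              # m compressed samples, m << n

# Orthogonal Matching Pursuit over the combined dictionary A = Phi @ Psi.
A = Phi @ Psi
residual, support = y.copy(), []
for _ in range(k):
    support.append(int(np.argmax(np.abs(A.T @ residual))))
    sol, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
    residual = y - A[:, support] @ sol

recon = Psi[:, support] @ sol                 # reconstructed signal
print("relative error:", np.linalg.norm(recon - signal) / np.linalg.norm(signal))
```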
Research credits corporate entrepreneurship (CE) with enabling established companies to create new types of innovation. Scholars have focused on the organizational design of CE activities, proposing specific organizational units. These semi-autonomous units create a tense management situation between the core organization and its CE activities. Management and organization research considers control a key managerial function to address this. However, control has received limited research attention regarding CE units, leaving design issues for appropriate control of CE units unanswered. In this study, we link management control and CE to illustrate how control is understood in the context of CE. For this, we scanned the CE literature to identify underlying attributes and characteristics that allow control for CE to be specified. In a first round, we identified 11 attributes that describe control for CE activities and derived future research paths.
In many industrial applications, a workpiece is continuously fed through a heating zone in order to reach a desired temperature that yields specific material properties. Many examples of such distributed parameter systems exist in heavy industry, and such processes can also be found in furniture production. In this paper, a real-time capable model for a heating process with application to industrial furniture production is developed. As the model is intended to be used in a Model Predictive Control (MPC) application, the main focus is on achieving minimum computational runtime while maintaining sufficient accuracy. Thus, the governing Partial Differential Equation (PDE) is discretized using finite differences on a grid specifically tailored to this application. The grid is optimized to yield acceptable accuracy with a minimum number of grid nodes, so that a relatively low-order model is obtained. Subsequently, an explicit Runge-Kutta ODE (Ordinary Differential Equation) solver of fourth order is compared to the Crank-Nicolson integration scheme presented in Weiss et al. (2022) in terms of runtime and accuracy. Finally, the unknown thermal parameters of the process are estimated using real-world measurement data obtained from an experimental setup. The final model yields acceptable accuracy and at the same time shows promising computation times, which enables its use in an MPC controller.
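The following sketch illustrates the general pattern of finite-difference discretization plus a classical fourth-order Runge-Kutta step on a uniform grid; the paper's tailored, optimized grid and its parameter values are not reproduced here (all numbers below are placeholders):

```python
import numpy as np

# Illustrative 1D heating model on a uniform grid (all values placeholders):
#   dT/dt = alpha * d2T/dx2 - v * dT/dx + q(x)
# with the workpiece fed through a heating zone at speed v.
alpha, v, dx, dt, n = 1e-4, 0.01, 0.01, 0.01, 50
x = np.arange(n) * dx
q = 5.0 * np.exp(-((x - 0.25) / 0.05) ** 2)    # heat input of the nozzle zone

def rhs(T):
    dT = np.zeros_like(T)                      # boundary nodes held fixed
    dT[1:-1] = (alpha * (T[2:] - 2.0 * T[1:-1] + T[:-2]) / dx**2
                - v * (T[2:] - T[:-2]) / (2.0 * dx) + q[1:-1])
    return dT

def rk4_step(T):
    k1 = rhs(T)
    k2 = rhs(T + 0.5 * dt * k1)
    k3 = rhs(T + 0.5 * dt * k2)
    k4 = rhs(T + dt * k3)
    return T + dt / 6.0 * (k1 + 2.0 * k2 + 2.0 * k3 + k4)

T = np.full(n, 20.0)                           # initial temperature profile
for _ in range(1000):
    T = rk4_step(T)
```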
This paper describes an early lumping approach for generating a mathematical model of the heating process of a moving dual-layer substrate. The heat is supplied by convection and nonlinearly distributed over the whole considered spatial extent of the substrate. Using CFD simulations as a reference, two different modelling approaches have been investigated in order to find the most suitable model type. It is shown that, due to the possibility of using the transition matrix for time discretization, an equivalent circuit model achieves superior results compared to the Crank-Nicolson method. In order to maintain a constant sampling time for the envisioned control strategies, the effect of variable speed is transformed into a system description where the state vector has constant length but a variable number of non-zero entries. The handling of the variable transport speed during the heating process is considered the main contribution of this work. The result is a model suitable for use in future control strategies.
Online-based business models, such as shopping platforms, have added new possibilities for consumers over the last two decades. Aside from basic differences to other distribution channels, customer reviews on such platforms have become a powerful tool, providing consumers with an additional source of transparency. Related research has, for the most part, been labelled under the term electronic word-of-mouth (eWOM). An approach providing a theoretical basis for this phenomenon is presented here. It is mainly based on work in the field of consumer culture theory (CCT) and on the concept of co-creation. The work of several authors in these streams of research is used to construct a culturally informed resource-based theory, as advocated by Arnould & Thompson and Algesheimer & Gurâu.
This contribution presents a data compression scheme for applications in non-volatile flash memories. The objective of the data compression algorithm is to reduce the amount of user data such that the redundancy of the error correction coding can be increased in order to improve the reliability of the data storage system. The data compression is performed on block level, considering data blocks of 1 kilobyte. We present an encoder architecture that has low memory requirements and provides fast data encoding.
This work proposes a decoder implementation for high-rate generalized concatenated (GC) codes. The proposed codes are well suited for error correction in flash memories for high reliability data storage. The GC codes are constructed from inner extended binary Bose-Chaudhuri-Hocquenghem (BCH) codes and outer Reed-Solomon (RS) codes. The extended BCH codes enable high-rate GC codes. Moreover, the decoder can take advantage of soft information. For the first three levels of inner codes we propose an optional Chase soft decoder. In this work, the code construction is explained and a decoder architecture is presented. Furthermore, area and throughput results are discussed.
This paper presents the implementation of deep learning methods for sleep stage detection using three signals that can be measured non-invasively: a heartbeat signal, a respiratory signal, and a movement signal. Since the signals are measurements taken over time, the problem is treated as time-series classification. The deep learning methods chosen to solve the problem are a convolutional neural network and a long short-term memory network. The input data is structured as a time-series sequence of the mentioned signals representing a 30-second epoch, the standard interval for sleep analysis. The records used belong to 23 subjects overall, divided into two subsets: records from 18 subjects were used for training and records from 5 subjects for testing. For detecting four sleep stages, REM (Rapid Eye Movement), Wake, Light sleep (Stage 1 and Stage 2), and Deep sleep (Stage 3 and Stage 4), the accuracy of the model is 55% and the F1 score is 44%. For five stages, REM, Stage 1, Stage 2, Deep sleep (Stage 3 and 4), and Wake, the model achieves an accuracy of 40% and an F1 score of 37%.
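A minimal sketch of a CNN plus LSTM classifier of the kind described (layer sizes, the assumed 100 Hz sampling rate and hence the 3000-sample epoch length are illustrative assumptions, not the paper's architecture):

```python
import tensorflow as tf

# Input: one 30 s epoch of (heartbeat, respiratory, movement) signals,
# assumed sampled at 100 Hz -> 3000 samples x 3 channels.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(3000, 3)),
    tf.keras.layers.Conv1D(32, kernel_size=7, activation='relu'),
    tf.keras.layers.MaxPooling1D(4),
    tf.keras.layers.Conv1D(64, kernel_size=5, activation='relu'),
    tf.keras.layers.MaxPooling1D(4),
    tf.keras.layers.LSTM(64),
    tf.keras.layers.Dense(4, activation='softmax'),  # REM / Wake / Light / Deep
])
model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])
```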
Modeling a suitable birth density is a challenge when using Bernoulli filters such as the Labeled Multi-Bernoulli (LMB) filter. The birth density of newborn targets is unknown in most applications, but must be given as a prior to the filter. Usually the birth density stays unchanged or is designed based on the measurements from previous time steps.
In this paper, we assume that the true initial state of new objects is normally distributed. The expected value and covariance of the underlying density are unknown parameters. Using the estimated multi-object state of the LMB and the Rauch-Tung-Striebel (RTS) recursion, these parameters are recursively estimated and adapted after a target is detected.
The main contribution of this paper is an algorithm to estimate the parameters of the birth density and its integration into the LMB framework. Monte Carlo simulations are used to evaluate the detection driven adaptive birth density in two scenarios. The approach can also be applied to filters that are able to estimate trajectories.
Error correction coding (ECC) for optical communication and persistent storage systems requires high-rate codes that enable high data throughput and low residual errors. Recently, different concatenated coding schemes were proposed that are based on binary Bose-Chaudhuri-Hocquenghem (BCH) codes with low error correcting capabilities. Commonly, hardware implementations for BCH decoding are based on the Berlekamp-Massey algorithm (BMA). However, for single, double, and triple error correcting BCH codes, Peterson's algorithm can be more efficient than the BMA. The known hardware architectures of Peterson's algorithm require Galois field inversion, which dominates the hardware complexity and limits the decoding speed. This work proposes an inversion-less version of Peterson's algorithm. Moreover, a decoding architecture is presented that is faster than decoders that employ inversion or the fully parallel BMA, at a comparable circuit size.
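For context, a sketch of the textbook identity that makes an inversion-less variant possible (the paper's architecture is not reproduced here): for a double-error-correcting BCH code with syndromes $S_1, S_3$ over $\mathrm{GF}(2^m)$ and $S_1 \neq 0$, Peterson's method gives the error-locator polynomial

$$\Lambda(x) = 1 + S_1\,x + \frac{S_1^3 + S_3}{S_1}\,x^2 .$$

Scaling by the nonzero constant $S_1$ leaves the roots, and hence the error locations, unchanged, so a decoder can instead search the roots of

$$\tilde{\Lambda}(x) = S_1 + S_1^2\,x + \left(S_1^3 + S_3\right)x^2 ,$$

which requires no Galois field inversion.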
The digital transformation of business processes and the integration of IT systems lead to opportunities and risks for small and medium-sized enterprises (SMEs), including risks that can result in a lack of IT Governance, Risk and Compliance (IT-GRC). The purpose of this paper is to present the current state of the research project, which follows the Design Science Research approach based on Hevner. Building on the Problem Identification and Objectives phases, this paper deals with the development of an artefact and thus presents the draft of the Design phase. The artefact will be developed by selecting relevant existing frameworks and standards and identifying SME-specific conditions.
Twenty-first century infrastructure needs to respond to changing demographics and become climate neutral, resilient and economically affordable, while remaining a driver for development and shared prosperity. However, the infrastructure sector remains one of the least innovative and digitalised, plagued by delays, cost overruns and benefit shortfalls (Cantarelli et al., 2008; Flyvbjerg, 2007; Flyvbjerg et al., 2003; Flyvbjerg et al., 2004). The root cause is the prevailing fragmentation of the infrastructure sector (Fellows and Liu, 2012). To help overcome these challenges, integration of the value chain is needed. This could be achieved through a use-case-based creation of federated ecosystems connecting open and trusted data spaces and advanced services applied to infrastructure projects. Such digital platforms enable full-lifecycle participation and responsible governance guided by a shared infrastructure vision. Digital federation enables secure and sovereign data exchange and thus collaboration across the silos within the infrastructure sector and between industries as well as within and between countries. Such an approach to infrastructure technology policy would not rely on technological solutionism but proposes the development of open and trusted data alliances. Federated data spaces provide access to the emerging data economy, especially for SMEs, and can foster the innovation of new digital services. Such responsible digital governance can help make the infrastructure sector more resilient, efficient and aligned with the realisation of ambitious decarbonisation and environmental protection targets. The European Union and the United States have already developed architectures for sovereign and secure data exchange.
Observer-based self sensing for digital (on–off) single-coil solenoid valves is investigated. Self sensing refers to the case where merely the driving signals used to energize the actuator (voltage and coil current) are available to obtain estimates of both the position and velocity. A novel observer approach for estimating the position and velocity from the driving signals is presented, where the dynamics of the mechanical subsystem can be neglected in the model. Both the effect of eddy currents and saturation effects are taken into account in the observer model. Practical experimental results are shown and the new method is compared with a full-order sliding mode observer.
Cardiovascular diseases are directly or indirectly responsible for up to 38.5% of all deaths in Germany and thus represent the most frequent cause of death. At present, heart diseases are mainly discovered by chance during routine visits to the doctor or when acute symptoms occur. However, there is no practical method to proactively detect diseases or abnormalities of the heart in the daily environment and to take preventive measures for the person concerned. Long-term ECG devices, as currently used by physicians, are simply too expensive, impractical, and not widely available for everyday use. This work aims to develop an ECG device suitable for everyday use that can be worn directly on the body. For this purpose, an already existing hardware platform is analyzed, and the corresponding potential for improvement is identified. A precise picture of the existing data quality is obtained by metrological examination, and corresponding requirements are defined. Based on the identified optimization potential, a new ECG device is developed. The revised ECG device is characterized by a high integration density and combines all components except the battery and the ECG electrodes directly on one board. The compact design allows the device to be attached directly to the chest. An integrated microcontroller allows digital signal processing without the need for an additional computer. Central features of the evaluation are peak detection for R-peaks and calculation of the current heart rate based on the RR interval. To ensure the validity of the detected R-peaks, a model of the anatomical conditions is used; thus, unrealistic RR intervals can be excluded. The wireless interface allows continuous transmission of the calculated heart rate. Following the development of hardware and software, the results are verified, and appropriate conclusions about the data quality are drawn. As a result, a very compact and wearable ECG device with different wireless technologies, data storage, and evaluation of RR intervals was developed. Some tests yielded runtimes of up to 24 hours with wireless LAN activated and streaming.
We propose and apply a requirements engineering approach that focuses on security and privacy properties and takes into account various stakeholder interests. The proposed methodology facilitates the integration of security and privacy by design into the requirements engineering process. Thus, specific, detailed security and privacy requirements can be implemented from the very beginning of a software project. The method is applied to an exemplary application scenario in the logistics industry. The approach includes the application of threat and risk rating methodologies, a technique to derive technical requirements from legal texts, as well as a matching process to avoid duplication and accumulate all essential requirements.
This paper presents a modeling approach for an industrial heating process in which a stripe-shaped workpiece is heated to a specific temperature by applying hot air through a nozzle. The workpiece is moving through the heating zone and is considered to be of infinite length. The speed of the substrate varies over time. The derived model is supposed to be computationally cheap to enable its use in a model-based control setting. We start by formulating the governing PDE and the corresponding boundary conditions. The PDE is then discretized on a spatial grid using finite differences, and two different integration schemes, explicit and implicit, are derived. The two models are evaluated in terms of computational effort and accuracy. It turns out that the implicit approach is favorable for the regarded process. We optimize the grid of the model to achieve a low number of grid nodes while maintaining sufficient accuracy. Finally, the thermodynamic parameters are optimized in order to fit the model's output to real-world data obtained from experiments.
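A minimal sketch of the implicit (Crank-Nicolson) scheme favored above, applied to a generic semi-discretized heat equation (uniform grid, parameter values and the source term are illustrative assumptions):

```python
import numpy as np

# Semi-discretized heat equation dT/dt = A T + q with tridiagonal A from
# second-order finite differences (uniform grid; all values placeholders).
alpha, dx, dt, n = 1e-4, 0.01, 0.1, 50
r = alpha / dx**2
A = r * (-2.0 * np.eye(n) + np.eye(n, k=1) + np.eye(n, k=-1))
q = np.zeros(n)
q[20:30] = 5.0                         # heat input in the nozzle region

# Crank-Nicolson: (I - dt/2 A) T_next = (I + dt/2 A) T + dt q.
M_left = np.eye(n) - 0.5 * dt * A
M_right = np.eye(n) + 0.5 * dt * A
M_inv = np.linalg.inv(M_left)          # factor once, reuse in every step

T = np.full(n, 20.0)
for _ in range(200):
    T = M_inv @ (M_right @ T + dt * q)
```

Being unconditionally stable, the implicit scheme tolerates the larger time step used here, which is one reason such schemes can be favorable for slow thermal processes.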
A semilinear distributed parameter approach for solenoid valve control including saturation effects
(2015)
In this paper a semilinear parabolic PDE for the control of solenoid valves is presented. The distributed parameter model of the cylinder becomes nonlinear through the inclusion of saturation effects due to the material's B/H-curve. A flatness-based solution of the semilinear PDE is shown, as well as a convergence proof for its series solution. Numerical simulation results demonstrate the applicability of the approach, and differences between the linear and the nonlinear case are discussed. The major contribution of this paper is the inclusion of saturation effects into the linear diffusion equation governing the magnetic field, and the development of a flatness-based solution for the resulting semilinear PDE as an extension of the previous works [1] and [2].
Sleep quality and, in general, behavior in bed can be assessed using sleep state analysis. The results can help a subject regulate sleep and recognize different sleeping disorders. In this work, a sensor grid for pressure and movement detection supporting sleep phase analysis is proposed. In comparison to the leading standard measuring system, polysomnography (PSG), the system proposed in this project is a non-invasive sleep monitoring device. For continuous analysis or home use, PSG or wearable actigraphy devices tend to be uncomfortable, and they are also very expensive. The system presented in this work classifies respiration and body movement with only one type of sensor and in a non-invasive way. The sensor used is a low-cost pressure sensor that can be used for commercial purposes. The system was tested in an experiment that recorded the sleep process of a subject. These recordings showed the potential for classification of breathing rate and body movements. Although previous research shows the use of pressure sensors in recognizing posture and breathing, they have mostly been positioned between the mattress and the bedsheet. This project, however, shows an innovative way to position the sensors under the mattress.
Creating cages that enclose a 3D-model of some sort is part of many preprocessing pipelines in computational geometry. Creating a cage of preferably lower resolution than the original model is of special interest when performing an operation on the original model directly might be too costly. The desired operation can be applied to the cage first and then transferred to the enclosed model. With this paper the authors present a short survey of recent and well-known methods for cage computation.
The authors would like to give the reader an insight into common methods and their differences.
Generalized concatenated (GC) codes with soft-input decoding were recently proposed for error correction in flash memories. This work proposes a soft-input decoder for GC codes that is based on a low-complexity bit-flipping procedure. This bit-flipping decoder uses a fixed number of test patterns and an algebraic decoder for soft-input decoding. An acceptance criterion for the final candidate codeword is proposed. Combined with error and erasure decoding of the outer Reed-Solomon codes, this bit-flipping decoder can improve the decoding performance and reduce the decoding complexity compared to the previously proposed sequential decoding. The bit-flipping decoder achieves a decoding performance similar to a maximum likelihood decoder for the inner codes.
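A sketch of the general bit-flipping idea with a fixed number of test patterns (a Chase-style illustration under assumed interfaces; `algebraic_decode` is a hypothetical stand-in for the inner-code hard-input decoder, and the paper's acceptance criterion is not reproduced):

```python
import itertools
import numpy as np

def bit_flip_decode(llr, algebraic_decode, n_flip=3):
    """Chase-style bit-flipping sketch: flip subsets of the least reliable
    bits, run the algebraic hard-input decoder on each test pattern, and
    keep the candidate codeword that correlates best with the LLRs."""
    hard = (llr < 0).astype(int)                  # hard decisions from LLRs
    weak = np.argsort(np.abs(llr))[:n_flip]       # least reliable positions
    best, best_metric = None, -np.inf
    for flips in itertools.product((0, 1), repeat=n_flip):  # 2^n_flip patterns
        pattern = hard.copy()
        pattern[weak] ^= np.asarray(flips)
        cand = algebraic_decode(pattern)          # codeword or None on failure
        if cand is None:
            continue
        metric = np.sum((1 - 2 * cand) * llr)     # correlation (0 -> +1, 1 -> -1)
        if metric > best_metric:
            best, best_metric = cand, metric
    return best
```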
The development of automatic solutions for the detection of physiological events of interest is booming. Improvements in the collection and storage of large amounts of healthcare data allow faster and more efficient access to these data. As a consequence, the development of artificial intelligence models for the detection and monitoring of a large number of pathologies is becoming increasingly common in the medical field. In particular, the development of deep learning models for detecting obstructive sleep apnea (OSA) events is at the forefront. Numerous scientific studies focus on the architecture of the models and the results these models can provide in terms of OSA classification and Apnea-Hypopnea-Index (AHI) calculation. However, little focus is put on other highly relevant aspects that are crucial for the training and performance of the models, among them the set of physiological signals used and the preprocessing tasks applied prior to model training. This paper covers the essential requirements that must be considered before training a deep learning model for obstructive sleep apnea detection and reviews solutions that currently exist in the scientific literature by analyzing the preprocessing tasks performed prior to training.
We present a 3d-laser-scan simulation in virtual reality for creating synthetic scans of CAD models. Consisting of the virtual reality head-mounted display Oculus Rift and the motion controller Razer Hydra, our system can be used like common hand-held 3d laser scanners. It supports scanning of triangular meshes as well as b-spline tensor product surfaces based on high-performance ray-casting algorithms. While point clouds of existing scanning simulations lack the man-made structure, our approach overcomes this problem by imitating real scanning scenarios. Calculation speed, interactivity and the resulting realistic point clouds are the benefits of this system.
Nowadays, inexpensive memory space promotes an accelerating growth of stored image data. To exploit the data using supervised machine or deep learning, it needs to be labeled. Manually labeling the vast amount of data is time-consuming and expensive, especially if human experts with specific domain knowledge are indispensable. Active learning addresses this shortcoming by querying the user for the labels of the most informative images first. One way to obtain this 'informativeness' is to use uncertainty sampling as a query strategy, where the system queries those images it is most uncertain about how to classify. In this paper, we present a web-based active learning framework that helps accelerate the labeling process. After manually labeling some images, the user receives recommendations of further candidates that could potentially be labeled equally (bulk image folder shift). We aim to explore the most efficient 'uncertainty' measure to improve the quality of the recommendations such that all images are sorted with a minimum number of user interactions (clicks). We conducted experiments using a manually labeled reference dataset to evaluate different combinations of classifiers and uncertainty measures. The results clearly show the effectiveness of uncertainty sampling with bulk image shift recommendations (our novel method), which can reduce the number of required clicks to only around 20% of those needed for manual labeling.
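For illustration, the three textbook uncertainty-sampling scores commonly used in such frameworks (which of them the experiments above compared is not restated here):

```python
import numpy as np

def uncertainty(probs, measure="entropy"):
    """Uncertainty-sampling scores for predicted class probabilities of
    shape (n_samples, n_classes); higher score = more uncertain."""
    if measure == "least_confidence":
        return 1.0 - probs.max(axis=1)
    if measure == "margin":
        top2 = np.sort(probs, axis=1)[:, -2:]
        return 1.0 - (top2[:, 1] - top2[:, 0])    # small margin = uncertain
    if measure == "entropy":
        return -np.sum(probs * np.log(probs + 1e-12), axis=1)
    raise ValueError(f"unknown measure: {measure}")

# Usage: query the labels of the most uncertain images first, e.g.
#   scores = uncertainty(classifier.predict_proba(X_unlabeled), "margin")
#   query_indices = np.argsort(scores)[::-1][:batch_size]
```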
Sleep is an essential part of human existence; we spend approximately a third of our lives in this state. Sleep disorders are common conditions that can affect many aspects of life. They are diagnosed in special laboratories with a polysomnography system, a costly procedure requiring much effort from the patient. Several systems have been proposed to address this situation by performing the examination and analysis at the patient's home, using sensors to detect physiological signals that are automatically analysed by algorithms. This work aims to evaluate the use of a contactless respiratory recording system based on an accelerometer sensor for sleep apnea detection. For this purpose, an installation mounted under the bed mattress records the oscillations caused by chest movements during breathing. The presented processing algorithm filters the obtained signals and determines the presence of apnea events. The performance of the developed system and apnea event detection algorithm (average accuracy, specificity and sensitivity of 94.6%, 95.3%, and 93.7%, respectively) confirms the suitability of the proposed method and system for further ambulatory and in-home use.
When using multi-camera matching techniques for 3d reconstruction, there is usually a trade-off between the quality of the computed depth map and the speed of the computations. Whereas high-quality matching methods take several seconds to several minutes to compute a depth map for one set of images, real-time methods achieve only low-quality results. In this paper we present a multi-camera matching method that runs in real-time and yields high-resolution depth maps. Our method is based on a novel multi-level combination of normalized cross correlation, matching windows deformed based on the multi-level depth map information, and sub-pixel precise disparity maps. The whole process is implemented completely on the GPU. With this approach we can process four 0.7-megapixel images into a full-resolution 3d depth map in 129 milliseconds. Our technique is tailored to the recognition of non-technical shapes, because our target application is face recognition.
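For reference, the standard zero-mean normalized cross correlation between two matching windows $W_1$ and $W_2$ over pixel index $i$ (the multi-level window deformation itself is not shown):

$$\mathrm{NCC}(W_1, W_2) = \frac{\sum_i \left(W_1(i) - \bar{W}_1\right)\left(W_2(i) - \bar{W}_2\right)}{\sqrt{\sum_i \left(W_1(i) - \bar{W}_1\right)^2}\,\sqrt{\sum_i \left(W_2(i) - \bar{W}_2\right)^2}} ,$$

where $\bar{W}_1, \bar{W}_2$ are the window means; the normalization makes the score robust to local brightness and contrast differences between cameras.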
The investigation of stress requires distinguishing between stress caused by physical activity and stress caused by psychosocial factors. The behaviour of the heart in response to stress and physical activity is very similar if the set of monitored parameters is reduced to one. Currently, the differentiation remains difficult, and methods that only use the heart rate are not able to differentiate between stress and physical activity without additional sensor data. The approach focuses on methods that generate signals providing characteristics useful for detecting stress, physical activity, inactivity and relaxation.
When designing drying processes for sensitive biological foodstuffs like fruit or vegetables, energy and time efficiency as well as product quality are gaining more and more importance. All of these are greatly influenced by the drying parameters (e.g. air temperature, air velocity and dew point temperature) of the process. In the sterilization of food products, the cooking value is widely used as a cross-link between these parameters. In a similar way, the so-called cumulated thermal load (CTL) was introduced for drying processes. This was possible because most quality changes mainly depend on drying air temperature and drying time. In a first approach, the CTL was therefore defined as the time integral of the surface temperature of the agricultural product. When conducting experiments with mangoes and pineapples, however, it was found that the CTL in this form had to be adjusted to a more practical definition. The definition of the CTL was therefore improved, and the behaviour of the adjusted CTL (CTLad) was investigated in the drying of pineapples and mangoes. On the basis of these experiments and the previous work on the cooking value, it was found that the CTLad requires further optimization to allow comparing a great variety of different products as well as different quality parameters.
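From the first-approach definition quoted above (time integral of the surface temperature), the CTL over a drying time $t_d$ can be written as

$$\mathrm{CTL} = \int_0^{t_d} T_s(t)\,\mathrm{d}t ,$$

with product surface temperature $T_s$. The adjusted form $\mathrm{CTL}_{ad}$ is not specified in the abstract and is therefore not reproduced here.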
An approach for adaptive position-dependent friction estimation for linear electromagnetic actuators with altered characteristics is proposed in this paper. The objective is to obtain a friction model that can be used to describe different stages of aging of magnetic actuators. It is compared to a classical Stribeck friction model in terms of model fit, sensitivity, and parameter correlation. The identifiability of the parameters in the friction model is of special interest, since the model is supposed to be used for diagnostic and prognostic purposes. A method based on the Fisher information matrix is employed to analyze the quality of the model structure and the parameter estimates.
The Lempel-Ziv-Welch (LZW) algorithm is an important dictionary-based data compression approach that is used in many communication and storage systems. The parallel dictionary LZW (PDLZW) algorithm speeds up the LZW encoding by using multiple dictionaries. The PDLZW algorithm applies different dictionaries to store strings of different lengths, where each dictionary stores only strings of the same length. This simplifies the parallel search in the dictionaries for hardware implementations. The compression gain of the PDLZW depends on the partitioning of the address space, i.e. on the sizes of the parallel dictionaries. However, there is no universal partitioning that is optimal for all data sources. This work proposes an address space partitioning technique that optimizes the compression rate of the PDLZW using a Markov model for the data. Numerical results for address spaces with 512, 1024, and 2048 entries demonstrate that the proposed partitioning improves the performance of the PDLZW compared with the original proposal.
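A minimal software sketch of the PDLZW idea (the partition sizes below sum to a 512-entry address space but are arbitrary, not the Markov-optimized ones from the paper; hardware searches the dictionaries in parallel, the sketch sequentially):

```python
def pdlzw_encode(data, max_len=4, dict_sizes=(256, 128, 96, 32)):
    """PDLZW sketch: dictionary i stores only strings of length i + 1 and
    occupies a contiguous range of the global address space."""
    base = [sum(dict_sizes[:i]) for i in range(len(dict_sizes))]
    dicts = [{} for _ in dict_sizes]
    dicts[0] = {bytes([b]): b for b in range(256)}   # single bytes, fixed
    out, pos = [], 0
    while pos < len(data):
        # Longest-match search (done in parallel in hardware).
        for length in range(min(max_len, len(data) - pos), 0, -1):
            s = bytes(data[pos:pos + length])
            if s in dicts[length - 1]:
                out.append(base[length - 1] + dicts[length - 1][s])
                break
        # Insert the match extended by one byte into the next dictionary.
        if (length < max_len and pos + length < len(data)
                and len(dicts[length]) < dict_sizes[length]):
            ext = bytes(data[pos:pos + length + 1])
            dicts[length].setdefault(ext, len(dicts[length]))
        pos += length
    return out

codes = pdlzw_encode(b"abracadabra abracadabra")   # addresses in range(512)
```

The partitioning question addressed by the paper is precisely how to choose `dict_sizes` for a given source statistic.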
Advanced approaches for analysis and form finding of membrane structures with finite elements
(2018)
Part I deals with material modelling of woven fabric membranes. Due to their structure of crossed yarns embedded in coating, woven fabric membranes are characterised by highly nonlinear stress-strain behaviour. In order to determine an accurate structural response of membrane structures, a suitable description of the material behaviour is required. A linear elastic orthotropic model approach, which is current practice, allows only a relatively coarse approximation of the material behaviour. The present work focuses on two different material approaches. A first approach becomes evident when focusing on the meso-scale: the inhomogeneous yet periodic structure of woven fabrics motivates microstructural modelling. An established microstructural model is considered and enhanced with regard to the coating stiffness. Secondly, an anisotropic hyperelastic material model for woven fabric membranes is considered. By performing inverse parameter identification, fits of the two material models w.r.t. measured data from a common biaxial test are shown. The results of the inversely parametrised material models are compared and discussed.
Part II presents an extended approach for a simultaneous form finding and cutting patterning computation of membrane structures. The approach is formulated as an optimisation problem in which the geometries of both the equilibrium and the cutting patterning configuration are initially unknown. The design objectives are minimum deviations from prescribed stresses in warp and fill direction along with minimum shear deformation. The equilibrium equations are introduced into the optimisation problem as constraints. Additional design criteria can be formulated (for the geometry of seam lines etc.). Similar to the motivation for the Updated Reference Strategy [4], the described problem is singular in the tangent plane: in both the equilibrium and the cutting patterning configuration, finite element nodes can move without changing stresses. Therefore, several approaches are presented to stabilise the algorithm. The overall result of the computation is a stressed equilibrium geometry and an unstressed cutting patterning geometry. The interaction of both configurations is described in Total Lagrangian formulation.
The microstructural model, which is the focus of Part I, is applied. Based on this approach, information about fibre orientation as well as the ending of fibres at cutting edges is available. As a result, more accurate results can be computed compared to the simpler approaches commonly used in practice.
Because process and product innovations are usually no longer sufficient to establish a company in the market or to generate a competitive advantage, Business Model Innovation is considered a powerful tool, especially for start-ups for which innovation is at the core of their business. Due to the complexity of this process, frameworks should help entrepreneurs with executing Business Model Innovation. However, theory and practice diverge. The aim of this paper is to identify the needs of a start-up regarding Business Model Innovation frameworks, underlining the importance of Business Model Innovation for start-ups as well as the relevance of a supporting framework. The research results aim to contribute to an ideal process for Business Model Innovation when applied to start-ups.
Acoustic Echo Cancellation (AEC) plays a crucial role in speech communication devices to enable full-duplex communication. AEC algorithms have been studied extensively in the literature. However, device-specific details like microphone or loudspeaker configurations are often neglected, despite their impact on echo attenuation and near-end speech quality. In this work, we propose a method to investigate different loudspeaker-microphone configurations with respect to their contribution to the overall AEC performance. A generic AEC system consisting of an adaptive filter and a Wiener post filter is used for a fair comparison between different setups. We propose the near-end-to-residual-echo ratio (NRER) and the attenuation-of-near-end (AON) as quality measures for full-duplex AEC performance.
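The abstract does not restate the definitions of the proposed measures; one plausible formalization, given purely as an assumption: with near-end speech $s(n)$, residual echo $e(n)$ and the processed near-end component $\hat{s}(n)$ at the AEC output,

$$\mathrm{NRER} = 10 \log_{10} \frac{\sum_n s^2(n)}{\sum_n e^2(n)}, \qquad \mathrm{AON} = 10 \log_{10} \frac{\sum_n s^2(n)}{\sum_n \hat{s}^2(n)} ,$$

so that a high NRER indicates strong echo suppression while a low AON indicates that the near-end speech itself is left largely untouched.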
Successful project management (PM), as one of the most important key competences in the western-oriented working world, is mainly influenced by experience and social skills. As a direct impact on PM training, the degree of practice and reality is crucial for the application of lessons learned in a challenging everyday work life. This work presents a recursive approach that adapts well-known principles of PM itself for PM training. Over three years, we have developed a concept and an integrated software system that support our PM university courses. Stepwise, it transfers theoretical PM knowledge into realistic project phases by automatically adjusting to the individual learning progress. Our study reveals predictors such as degrees of collaboration or weekend work as vital aspects in the PM training progress. The chosen granularity of project phases with variances in different dimensions makes our model a canonical incarnation of seamless learning.
The business plan is one of the artifacts most frequently available to innovation intermediaries from technology-based ventures' presentations in their early stages [1]–[4]. Agreement on the evaluations of venturing projects based on business plans depends highly on the individual perspective of the readers [5], [6]. One reason is that little empirical proof exists for descriptions in business plans that suggest survival of early-stage technology ventures [7]–[9]. We identified descriptions of transaction relations [10]–[13] as an anchor linking the snapshot model business plan to business reality [13]. In the early stage, surviving ventures build transaction relations to human resources, financial resources, and suppliers on the input side, and to customers on the output side of the business, towards a stronger ego-centric value network [10]–[13]. We conceptualized a multidimensional measurement instrument that evaluates the maturity of these ego-centric value networks based on the transaction relations of different strength levels described in business plans of early-stage technology ventures [13]. In this paper, the research design and the instrument are purified to achieve high agreement in the evaluation of business plans [14]–[16]. As a result, we present an overall research design that can reach acceptable quality for quantitative research. The paper thus contributes to the literature on business analysis in the early stage of technology-based ventures and to the research technique of content analysis.
In this work, we investigate a hybrid decoding approach that combines algebraic hard-input decoding of binary block codes with soft-input decoding. In particular, an acceptance criterion is proposed which determines the reliability of a candidate codeword. For many received codewords the stopping criterion indicates that the hard-decoding result is sufficiently reliable, and the costly soft-input decoding can be omitted. The proposed acceptance criterion significantly reduces the decoding complexity. For simulations we combine the algebraic hard-input decoding with ordered statistics decoding, which enables near maximum likelihood soft-input decoding for codes of small to medium block lengths.
This work proposes an efficient hardware implementation of sequential stack decoding of binary block codes. The decoder can be applied for soft-input decoding of generalized concatenated (GC) codes. The GC codes are constructed from inner nested binary Bose-Chaudhuri-Hocquenghem (BCH) codes and outer Reed-Solomon (RS) codes. In order to enable soft-input decoding for the inner BCH block codes, a sequential stack decoding algorithm is used.
Many resource-constrained systems still rely on symmetric cryptography for verification and authentication. Asymmetric cryptographic systems provide higher security levels but are computationally very intensive. Hence, embedded systems can benefit from hardware assistance, i.e., coprocessors optimized for the required public key operations. In this work, we propose an elliptic curve cryptographic coprocessor design for resource-constrained systems. Many such coprocessor designs consider only special (Solinas) prime fields, which enable a low-complexity modulo arithmetic. Other implementations support arbitrary prime curves using the Montgomery reduction, but typically require more time for the point multiplication. We present a coprocessor design that has low area requirements and enables a trade-off between performance and flexibility. The point multiplication can be performed either using fast arithmetic based on Solinas primes or using a slower, but flexible, Montgomery modular arithmetic.
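To illustrate why Solinas primes admit low-complexity modular arithmetic, a sketch of the well-known fast reduction for the NIST P-192 prime $p = 2^{192} - 2^{64} - 1$ (the coprocessor above is explicitly not tied to one specific curve):

```python
# Fast reduction for the Solinas prime p = 2^192 - 2^64 - 1 (NIST P-192),
# exploiting 2^192 = 2^64 + 1 (mod p): only shifts, adds and a few
# conditional subtractions are needed; no division, no inversion.
P192 = 2**192 - 2**64 - 1
M64 = (1 << 64) - 1

def solinas_reduce_p192(c):
    """Reduce c < 2^384 (e.g. a 192x192-bit product) modulo P192."""
    w = [(c >> (64 * i)) & M64 for i in range(6)]   # 64-bit words c0..c5
    s1 = w[0] | (w[1] << 64) | (w[2] << 128)
    s2 = w[3] | (w[3] << 64)
    s3 = (w[4] << 64) | (w[4] << 128)
    s4 = w[5] | (w[5] << 64) | (w[5] << 128)
    r = s1 + s2 + s3 + s4
    while r >= P192:                                # at most a few subtractions
        r -= P192
    return r

assert solinas_reduce_p192(12345 * P192 + 678) == 678
```

A generic Montgomery reduction supports arbitrary prime moduli at the cost of extra multiplications per step, which is the performance/flexibility trade-off described above.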
Business models (BM) describe the logic of a firm regarding how it creates, delivers and captures value. Business model innovation (BMI) is essential for organisations to keep a competitive advantage. However, the existence of barriers to BMI can impact the success of corporate strategic alignment. Previous research has examined the internal barriers to business model innovation; however, there is a lack of research on the potential external barriers that could inhibit business model innovation. Drawing on an in-depth case study in a German medium-sized engineering company in the equestrian sports industry, we explore both internal and external barriers to business model innovation. BMI is defined as any change in one or more of the nine building blocks of the Business Model Canvas: customer segments, value propositions, channels, customer relationships, revenue streams, key resources, key activities, key partners, and cost structure [1]. Our results show that barriers to business model innovation can be overcome by the deployment of organisational learning mechanisms and the development of an open network capability.
In the digital age, information technology (IT) is a strategic asset for organizations. As a result, IT costs are rising, and cost-effective management of IT is crucial. Nevertheless, organizations still face major challenges, and former studies lack comprehensiveness and depth. The goal of this paper is to generate a deep and holistic view of current management challenges of IT costs. In 15 expert interviews, we identify 23 challenges divided into 7 categories. The main challenges are to ensure transparency of IT cost information, to demonstrate the business impact of IT, and to change the mindset regarding the value of IT; overcoming them requires attention to their interactions. Hence, this paper leads to a better understanding of the issues that IT cost management (ITCM) faces in the digital age and builds a base for future research.
The paper investigates an innovative actuator combination based on the magnetic shape memory technology. The actuator is composed of an electromagnet, which is activated to produce motion, and a magnetic shape memory element, which is used passively to yield multistability, i.e. the possibility of holding a position without input power. Based on the experimental open-loop frequency characterization of the actuator, a position controller is developed and tested in several experiments.
An IT-GRC approach in SME
(2022)
The digital transformation of business processes and the integration of IT systems lead to opportunities and risks for small and medium-sized enterprises (SMEs), including risks that can result in a lack of IT compliance. The purpose of this research-in-progress paper is to present the current state of an IT-Governance-Risk-Compliance (IT-GRC) research project. First, the results of an already conducted literature review are discussed, combined with qualitative interviews (expert survey) of persons close to IT compliance. In the context of this paper, a first design approach is developed by selecting relevant existing frameworks and standards and identifying SME-specific conditions. The first design is intended to contribute to a further artefact conception of tailoring approaches and standards and to the creation of a guidance.
Regional economies clearly benefit from thriving entrepreneurial ecosystems. However, ecosystems are not yet entirely gender-inclusive and therefore are not tapping their full potential. This is most critical with respect to technology-based entrepreneurship which features the largest gender imbalance. Despite the considerably growing amount of literature in the two research fields of female entrepreneurship and entrepreneurial ecosystems, the intersection of the two areas has not yet been outlined. We depict the state of knowledge with a structured review of the literature highlighting bibliometric information, methods used, and the main topics addressed in current articles. From there, recommendations for future research are derived.
The clinical standard procedure and reference for sleep measurement and for classifying the individual sleep stages is polysomnography (PSG). Alternative approaches to this elaborate procedure could offer several advantages if the measurements were carried out in a more comfortable way. The main objective of this research study is to develop an algorithm for the automatic classification of sleep stages that uses only movement and respiratory signals.
This paper describes the effectiveness and efficiency of Virtual Reality training during a commissioning process. To this end, 500 picking orders with more than 2000 part-picking operations were conducted and analyzed with 30 test persons in the Modellfabrik Bodensee. The study points out the advantages and disadvantages of virtual training in comparison to real execution of a picking process with and without any training.
This work is a comparative study of survey tools intended to help developers select a suitable tool for application in an AAL environment. The first step was to identify the basic functionality required of survey tools used for AAL technologies and to compare these tools by their functionality and assignments. The comparison was derived from the data obtained, previous literature studies and further technical data. A list of requirements was compiled and ordered in terms of relevance to the target application domain. With the help of an integrated assessment method, a generalized estimate value was calculated, and the result is explained. Finally, the planned application of this tool in a running project is described.
Present demographic change and a growing population of elderly people lead to new medical needs. Meeting these with state-of-the-art technology is consequently a rapidly growing market. This work is therefore aimed at taking modern concepts of mobile and sensor technology and putting them into a medical context. By measuring a user's vital signs with sensors whose data are processed on an Android smartphone, the target system is able to determine the current health state of the user and to visualize the gathered information. The system also includes a weather forecasting functionality that alerts the user to potentially dangerous future meteorological events. All information is collected centrally and distributed to users based on their location. Furthermore, the system can correlate the client-side measurement of vital signs with a server-side weather history. This enables personalized forecasting for each user individually. Finally, a portable and affordable application was developed that continuously monitors the health status via multiple vital sensors, all united on a common smartphone.
Today, many scientific works use deep learning algorithms and time series to detect physiological events of interest. In sleep medicine, this is particularly relevant for detecting sleep apnea, specifically obstructive sleep apnea events. Deep learning algorithms with different architectures are used to achieve decent results in accuracy, sensitivity, etc. Although there are models that can reliably determine apnea and hypopnea events, another essential aspect to consider is the explainability of these models, i.e., why a model makes a particular decision. Another critical factor is how these deep learning models determine the severity of obstructive sleep apnea in patients based on the apnea-hypopnea index (AHI). Deep learning models trained with two approaches for AHI determination are presented in this work. The approaches differ in the data format fed to the models: full time series and window-based time series.
The first part of this work shows the development and application of a new material system using high strength duplex stainless steel wires as net material with environmentally compatible antifouling properties for off-shore fish farm cages. Current net materials made from textiles (polyamide) shall be partially replaced by high strength duplex stainless steel in order to obtain a more environmentally compatible system that withstands the more severe mechanical loads (waves, storms, predators such as sharks and seals). A new antifouling strategy shall resolve current issues by reducing ecological damage (e.g. from copper disposal), lowering maintenance costs (e.g. for cleaning) and improving durability.
High strength steel wires are also widely used in geological protection systems, for example in rockfall protection or slope stabilisation. Normally, hot-dip galvanised carbon steel is used in this case. But in highly corrosive environments, such as coastal areas, volcanic areas or mines, other solutions with high corrosion resistance and sufficient mechanical properties are necessary. Protection systems made of high strength duplex stainless steel wires enable a significantly longer service life of the protection systems and therefore a higher level of security.
Text produced by entrepreneurs represents a data source in entrepreneurship research on venture performance and fund-raising success. Manual text coding of single variables is increasingly assisted or replaced by computer-aided text analysis. Yet, for the development of prediction models with several variables, such dictionary-based text analysis methods are less suitable. Natural language processing techniques are an alternative; however, their implementation is more complex and requires substantial programming skills. More work is required to understand how text analytics can advance entrepreneurship research. This study hence experiments with different artificial intelligence methods rooted in natural language processing and deep learning. It uses 766 business plans to train a model for the automated measurement of transaction relations, a construct which is an indicator for new technology-based firm survival. Empirical findings show that the accuracy of construct measurement can be significantly increased with automated methods and improves with larger amounts of training data. Language complexity sets limits to the precision of automated construct measurement, though. We therefore recommend a hybrid approach: making use of the inherent advantages of combining automated with human coding until the amount of training data is sufficiently large to substitute the human coding completely. The study provides insights into the applicability of different text analytics methods in entrepreneurship research and points to future research potential.
Domain-specific modelling is increasingly adopted in the software development industry. While open source metamodels like Ecore have a wide impact, they still have some problems. The independent storage of nodes (classes) and edges (references) is currently only possible with complex, specific solutions. Furthermore, the developed models are stored in the Extensible Markup Language (XML) data format, which leads to scaling problems with large models. In this paper we describe an approach that solves the problem of independent classes and references in metamodels, and we store the models in the JavaScript Object Notation (JSON) data format to support high scalability. First results of our tests show that the developed approach works and that classes and references can be defined independently. In addition, our approach reduces the number of characters per model by a factor of approximately two compared to Ecore. The entire project is made available as open source under the name MoDiGen. This paper focuses on the description of the metamodel definition in terms of scaling.
Today, many resource-constrained systems, such as embedded systems, still rely on symmetric cryptography for authentication and digital signatures. Asymmetric cryptography provides a higher security level, but software implementations of public-key algorithms on small embedded systems are extremely slow. Hence, such embedded systems require hardware assistance, i.e. crypto coprocessors optimized for public key operations. Many such coprocessor designs aim at high computational performance. In this work, an area-efficient elliptic curve cryptography (ECC) coprocessor is presented for applications in small embedded systems where high-performance coprocessors are too costly. We propose a simple control unit with a small instruction set that supports different ECC point multiplication (PM) algorithms. The control unit reduces the logic and the number of registers compared with other implementations of ECC point multiplication.
We present an alternative approach to grid management in low-voltage grids based on artificial intelligence. The developed decision support system is built on an artificial neural network (ANN). Due to the fast reaction time of our system, real-time grid management becomes possible. Remotely controllable switches and tap changers in transformer stations are used to actively manage the grid infrastructure. The algorithm can support distribution system operators in keeping the grid in a safe state at all times. Its functionality is demonstrated in a case study using a virtual test grid. The ANN achieves a prediction rate of around 90% for the different grid management strategies. By considering the four most likely solutions proposed by the ANN, the prediction rate increases to 98.8%, at the cost of a 0.1-second increase in the model's running time.
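The reported jump from roughly 90% to 98.8% corresponds to a top-k evaluation: a prediction counts as correct if the true strategy is among the k most likely ANN outputs. A small sketch with made-up softmax outputs (the numbers of strategies and samples are illustrative assumptions):

    import numpy as np

    probs = np.array([          # softmax outputs: 3 samples, 5 strategies
        [0.5, 0.2, 0.1, 0.1, 0.1],
        [0.1, 0.15, 0.6, 0.1, 0.05],
        [0.25, 0.3, 0.2, 0.15, 0.1],
    ])
    true = np.array([1, 2, 0])  # placeholder ground-truth strategies

    def top_k_accuracy(probs, true, k):
        topk = np.argsort(probs, axis=1)[:, -k:]  # k most likely classes
        return np.mean([t in row for t, row in zip(true, topk)])

    print(top_k_accuracy(probs, true, 1))  # strict (top-1) rate
    print(top_k_accuracy(probs, true, 4))  # relaxed (top-4) rate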
Assessment Literacy
(2015)
The influence of sleep on human health is enormous, and sleep disorders can accordingly have a negative impact on it. To avoid this, they should be identified and treated in time. For this purpose, objective (device-based) or subjective (based on perceived values) measurement methods are used for sleep analysis. The aim of this work is to find out whether the two methods are interchangeable and can provide reliable results. To this end, a study was conducted with people aged over 65 years (a total of 154 night-time recordings) in which both measurement methods were compared. Sleep questionnaires and electronic sleep-assessment devices placed under the mattress were applied to achieve the study aims. The obtained results indicate a correlation between both measurement methods for sleep characteristics such as total sleep time, total time in bed, and sleep efficiency. However, there are also significant differences in absolute values between the two measurement approaches for some subjects and nights, which leads us to conclude that substitution is more plausible for long-term monitoring, where trends matter more than the absolute values of individual nights.
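A minimal sketch of the comparison step, assuming paired per-night values of one sleep characteristic from both methods (the numbers below are placeholders, not study data):

    import numpy as np
    from scipy.stats import pearsonr

    # Total sleep time in minutes per night: device vs. questionnaire.
    objective = np.array([412, 388, 440, 365, 402])
    subjective = np.array([420, 370, 450, 380, 395])

    r, p_value = pearsonr(objective, subjective)
    print(f"r = {r:.2f}, p = {p_value:.3f}")        # strength of agreement
    print(np.mean(np.abs(objective - subjective)))  # mean absolute gap

A high correlation combined with large per-night absolute gaps would support exactly the conclusion drawn here: trends are exchangeable, single-night values are not.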
Home health applications have evolved over the last few decades. Assistive systems such as a data platform connected to health devices allow health-related data to be transmitted automatically to a database. However, significant challenges remain concerning intermodular communication. Central among them is interoperability: the ability of devices to communicate and share data with each other. A major goal of this project was to extend an existing data platform (COMES®) and establish working interoperability by connecting assistive devices with differing approaches. We describe this process for a sleep monitoring device and a physical exercise device. Furthermore, we tested this setup and the implementation with the data platform in both a laboratory and an in-home setting with 11 elderly participants. The platform modification was realized, and the relevant changes were made so that the incoming data could be processed by the data platform and visually displayed in real time. Data was recorded by the respective device and transmitted to the data server with minor disruptions. Our observations affirm that difficulties and data loss become far more likely with increasing technical complexity, in the event of an unstable internet connection, or when the device setup requires (elderly) subjects to take specific steps for proper functioning. We emphasize the importance of testing and evaluating home health technologies under real-life conditions.
In this contribution, a machine learning method for sleep stage recognition is developed. Common methods of sleep analysis are based on polysomnography (PSG). The presented approach relies on signals that can be measured entirely non-invasively in a home environment. Movement, heartbeat, and respiration signals are comparatively easy to acquire, but recognizing sleep stages from them is more difficult. The signals are structured as time series and divided into epochs. The performance of the machine learning approach is compared against polysomnography and evaluated.
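A minimal sketch of the epoch structuring described above (the 30-second epoch length, 10 Hz sampling rate, and the two summary features are assumptions for illustration, not the paper's actual preprocessing):

    import numpy as np

    fs = 10                                   # sampling rate in Hz (assumed)
    epoch_len = 30 * fs                       # 30-second epochs (assumed)
    signal = np.random.randn(8 * 3600 * fs)   # placeholder: one night of data

    n_epochs = len(signal) // epoch_len
    epochs = signal[: n_epochs * epoch_len].reshape(n_epochs, epoch_len)
    features = np.stack([epochs.mean(axis=1), epochs.std(axis=1)], axis=1)
    print(epochs.shape, features.shape)       # (960, 300) (960, 2)

Each epoch row would then receive a sleep stage label for training a classifier.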
Martensitic stainless steels are widely used, for example for blades, knives, and cutters. These materials show their best corrosion resistance in the hardened condition. To improve the mechanical properties and increase durability, tempering is normally applied; however, tempering also reduces the hardness and ultimately the corrosion resistance. Austempering is mainly used for low-alloyed steels and offers a good compromise between durability, hardness, and corrosion resistance. For martensitic stainless steels, austempering has normally not been considered because of the very long tempering times.
This work presents first results on the austempering of several standard martensitic stainless steels and its influence on corrosion resistance. For reference, hardened as well as hardened-and-tempered specimens were also investigated. The corrosion resistance was examined by electrochemical methods.
The trajectory tracking problem for a fully actuated, full-scale surface vessel is addressed in this paper by designing a backstepping controller with multivariable integral action, taking the thruster allocation problem into account. The performance and robustness of this controller are evaluated in simulation under environmental disturbance forces and modeling mismatch, using a docking maneuver as the reference trajectory. Furthermore, the backstepping controller is compared with a nonlinear position PID controller with flatness-based feedforward.
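As a hedged sketch of the model class such a design typically builds on (Fossen-type notation; the concrete matrices and the exact integral augmentation of the paper are not reproduced here):

    \begin{align}
      \dot{\eta} &= R(\psi)\,\nu, \\
      M\dot{\nu} + C(\nu)\,\nu + D\,\nu &= \tau + \tau_{\mathrm{env}},
    \end{align}

where $\eta = (x, y, \psi)^\top$ is the pose, $\nu$ the body-fixed velocity, $\tau$ the commanded forces and moments, and $\tau_{\mathrm{env}}$ the environmental disturbances; an integral state $\zeta = \int_0^t (\eta - \eta_d)\,\mathrm{d}s$ augments the backstepping error dynamics so that constant disturbances are rejected.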
Bauhäusler am Bodensee
(2016)
Under certain contact conditions between wheel and rail, self-excited vibrations can arise that lead to anti-phase rotational motion of the wheel discs and high torsional moments in the wheelset axle. Until now, determining the maximum torsional moment has required elaborate test runs, since no methods were known that permit a conservative calculation of the torsional moment [1]. In recent years, the following three calculation methods have been investigated in depth in order to compute the maximum dynamic torsional moment:
- Simulations of complex multi-body systems (MBS)
- Systems of differential equations solved numerically
- Analytical calculation by reduction to a minimal model
In this publication, these calculation methods are presented in more detail, and a comparison of the calculated and measured results highlights their capabilities as well as their limitations.
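As a hedged illustration of the minimal model mentioned above, a two-disc torsional oscillator (parameters and the self-excitation mechanism of the actual wheelset model are not reproduced here):

    \begin{align}
      J_1 \ddot{\varphi}_1 &= -c\,(\varphi_1 - \varphi_2) - d\,(\dot{\varphi}_1 - \dot{\varphi}_2) + M_1, \\
      J_2 \ddot{\varphi}_2 &= \hphantom{-}c\,(\varphi_1 - \varphi_2) + d\,(\dot{\varphi}_1 - \dot{\varphi}_2) + M_2,
    \end{align}

where $J_i$ are the wheel-disc inertias, $c$ and $d$ the torsional stiffness and damping of the axle, and $M_i$ the creep-force moments at the wheel-rail contacts; the torsional moment in the axle is $M_T = c\,(\varphi_1 - \varphi_2)$, whose maximum is the quantity of interest.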
Beyond the SDG compass
(2015)
This policy brief presents the possibilities of using big data analytics for safe, decarbonised and climate-resilient infrastructure. The policy brief focuses on current constraints and limitations to applying big data analytics to the infrastructure ecosystem and presents several examples and best practices for different infrastructure sectors and at different policy levels (national, municipal) to highlight recommendations and policy requirements needed for deep digital transformation and sustainable solutions in infrastructure planning and delivery.
It is well known that signal constellations which are based on a hexagonal grid, so-called Eisenstein constellations, exhibit a performance gain over conventional QAM ones. This benefit is realized by a packing and shaping gain of the Eisenstein (hexagonal) integers in comparison to the Gaussian (complex) integers. Such constellations are especially relevant in transmission schemes that utilize lattice structures, e.g., in MIMO communications. However, for coded modulation, the straightforward approach is to combine Eisenstein constellations with ternary channel codes. In this paper, a multilevel-coding approach is proposed where encoding and multistage decoding can directly be performed with state-of-the-art binary channel codes. An associated mapping and a binary set partitioning are derived. The performance of the proposed approach is contrasted to classical multilevel coding over QAM constellations. To this end, both the single-user AWGN scenario and the (multiuser) MIMO broadcast scenario using lattice-reduction-aided preequalization are considered. Results obtained from numerical simulations with LDPC codes complement the theoretical aspects.
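For intuition, a small sketch that enumerates Eisenstein integers a + b*omega with omega = exp(2*pi*i/3), i.e. points of the hexagonal lattice from which such constellations are drawn (the constellation size and the shaping rule are illustrative assumptions, not the paper's construction):

    import numpy as np

    omega = np.exp(2j * np.pi / 3)  # primitive cube root of unity
    candidates = [a + b * omega for a in range(-4, 5) for b in range(-4, 5)]
    constellation = [z for z in candidates if abs(z) <= 3]  # circular shaping
    print(len(constellation))

The hexagonal packing of these points, compared with the square grid of Gaussian integers underlying QAM, is the source of the packing and shaping gains mentioned above.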
Stress is recognized as a predominant health problem with growing treatment costs. The approach presented here aims to detect stress using a lightweight, mobile, inexpensive, and easy-to-use system. The results show that stress can be detected even when a person's natural bio-vital data lies outside the typical range. The system enables storage of the measured data while maintaining communication channels for online and post-processing.
The reliability of flash memories suffers from various error causes. Program/erase cycles, read disturb, and cell-to-cell interference impact the threshold voltages and cause bit errors during the read process. Hence, error correction is required to ensure reliable data storage. In this work, we investigate the bit labeling of triple-level cell (TLC) memories. This labeling determines the page capacities and the latency of the read process. The page capacity defines the redundancy that is required for error correction coding. Typically, Gray codes are used to encode the cell state such that the labels of adjacent states differ in a single digit. These Gray codes minimize the latency for random-access reads but cannot balance the page capacities. Based on measured voltage distributions, we investigate the page capacities and propose a labeling that provides better rate balancing than Gray labeling.
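For reference, a 3-bit reflected Gray code over the eight TLC threshold-voltage states, illustrating the property that adjacent states differ in exactly one bit (the balanced, non-Gray labeling proposed in the paper is a different assignment and is not reproduced here):

    def gray(n):
        return n ^ (n >> 1)  # reflected binary Gray code

    for state in range(8):   # eight TLC states, lowest to highest voltage
        print(state, format(gray(state), "03b"))

With such a labeling, each page (bit position) changes at a different number of state boundaries and thus requires a different number of threshold comparisons, which is why the per-page capacities end up unbalanced.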