The magneto-mechanical behavior of magnetic shape memory (MSM) materials has been investigated by several research groups using different simulation and modeling approaches. The aim of this paper is to simulate actuators driven by MSM alloys and to understand the behavior of the MSM element during actuation, which should lead to increased actuator performance. It is shown that internal and external stresses should be taken into consideration, using numerical computation tools for magnetic fields in an efficient way.
The binary asymmetric channel (BAC) is a model for the error characterization of multi-level cell (MLC) flash memories. This contribution presents a joint channel and source coding approach for improving the reliability of MLC flash memories. The objective of the data compression algorithm is to reduce the amount of user data such that the redundancy of the error correction coding can be increased in order to improve the reliability of the data storage system. Moreover, data compression can be utilized to exploit the asymmetry of the channel in order to reduce the error probability. With MLC flash memories, data compression has to be performed at block level on short data blocks. We present a coding scheme suitable for blocks of 1 kilobyte of data.
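The channel asymmetry exploited here can be illustrated with a small simulation. The following sketch is illustrative only (the error probabilities, block size, and function names are assumptions, not taken from the paper): on a BAC where 1s flip far more often than 0s, a codeword distribution with fewer 1s suffers fewer errors.

```python
import numpy as np

rng = np.random.default_rng(0)

def bac_transmit(bits, p01, p10, rng):
    """Flip each bit with the error probability belonging to its value."""
    flips = np.where(bits == 0,
                     rng.random(bits.size) < p01,   # 0 received as 1
                     rng.random(bits.size) < p10)   # 1 received as 0
    return bits ^ flips.astype(bits.dtype)

p01, p10 = 0.001, 0.05                 # asymmetric: 1s are far more fragile
n = 100_000
dense = rng.integers(0, 2, n)                        # ~50% ones
sparse = (rng.random(n) < 0.1).astype(np.int64)      # ~10% ones

err_dense = np.count_nonzero(bac_transmit(dense, p01, p10, rng) != dense)
err_sparse = np.count_nonzero(bac_transmit(sparse, p01, p10, rng) != sparse)
# Fewer transmitted 1s -> fewer raw bit errors on this channel.
```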
Multi-object tracking filters require a birth density to detect new objects from measurement data. If the initial positions of new objects are unknown, it may be useful to choose an adaptive birth density. In this paper, a circular birth density is proposed, which is placed like a band around the surveillance area. This allows for 360° coverage. The birth density is described in polar coordinates and considers all point-symmetric quantities such as radius, radial velocity and tangential velocity of objects entering the surveillance area. Since it is assumed that these quantities are unknown and may vary between different targets, detected trajectories, and in particular their initial states, are used to estimate the distribution of initial states. The adapted birth density is approximated as a Gaussian mixture, so that it can be used for filters operating on Cartesian coordinates.
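The ring-shaped placement described above can be sketched as a Gaussian mixture in Cartesian coordinates. This is a simplified illustration, not the paper's exact construction: component means are spaced uniformly on a circle of radius R around the surveillance area, each with an inward-pointing velocity derived from an assumed radial speed.

```python
import numpy as np

def ring_birth_mixture(R, v_radial, n_components):
    """Gaussian-mixture means on a ring, states [x, y, vx, vy]."""
    phis = np.linspace(0.0, 2.0 * np.pi, n_components, endpoint=False)
    means = np.empty((n_components, 4))
    means[:, 0] = R * np.cos(phis)
    means[:, 1] = R * np.sin(phis)
    # Negative radial velocity: newborn objects move into the area.
    means[:, 2] = -v_radial * np.cos(phis)
    means[:, 3] = -v_radial * np.sin(phis)
    weights = np.full(n_components, 1.0 / n_components)
    return weights, means

weights, means = ring_birth_mixture(R=100.0, v_radial=5.0, n_components=16)
```

Each component can then be given a covariance reflecting the spread of the estimated initial states, so the mixture is directly usable by filters operating in Cartesian coordinates.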
This work proposes a lossless data compression algorithm for short data blocks. The proposed compression scheme combines a modified move-to-front algorithm with Huffman coding. This algorithm is applicable in storage systems where the data compression is performed on block level with short block sizes, in particular, in non-volatile memories. For block sizes in the range of 1 kB, it provides a compression gain comparable to the Lempel–Ziv–Welch algorithm. Moreover, encoder and decoder architectures are proposed that have low memory requirements and provide fast data encoding and decoding.
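The two stages can be sketched as follows. This is a minimal illustration using a plain move-to-front transform and a textbook Huffman code-length construction; the paper's modified MTF algorithm and hardware architectures are not reproduced here.

```python
import heapq
from collections import Counter
from itertools import count

def move_to_front(data: bytes) -> bytes:
    """MTF transform: repeated symbols map to small index values."""
    table = list(range(256))
    out = bytearray()
    for b in data:
        i = table.index(b)
        out.append(i)
        table.insert(0, table.pop(i))   # move the symbol to the front
    return bytes(out)

def huffman_code_lengths(data: bytes) -> dict:
    """Return {symbol: codeword length} for an optimal prefix code."""
    freq = Counter(data)
    if len(freq) == 1:                  # degenerate single-symbol input
        return {next(iter(freq)): 1}
    tiebreak = count()                  # avoids comparing dicts in the heap
    heap = [(f, next(tiebreak), {sym: 0}) for sym, f in freq.items()]
    heapq.heapify(heap)
    while len(heap) > 1:
        fa, _, a = heapq.heappop(heap)
        fb, _, b = heapq.heappop(heap)
        merged = {s: d + 1 for s, d in {**a, **b}.items()}
        heapq.heappush(heap, (fa + fb, next(tiebreak), merged))
    return heap[0][2]

block = b"abracadabra" * 90             # ~1 kB of repetitive data
mtf = move_to_front(block)
lengths = huffman_code_lengths(mtf)
cnt = Counter(mtf)
bits = sum(lengths[s] * c for s, c in cnt.items())
# MTF skews the symbol distribution toward small indices, which the
# Huffman stage then encodes with short codewords.
```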
The main aim of the research presented in this manuscript is to compare the results of objective and subjective measurements of sleep quality for older adults (65+) in the home environment. A total of 73 nights was evaluated in this study. A device placed under the mattress was used to obtain the objective measurement data, and a common question on perceived sleep quality was asked to collect the subjective sleep quality level. The results confirm a correlation between the objective and subjective measurements of sleep quality, with an average standard deviation of 2 out of 10 possible quality points.
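The comparison amounts to correlating two per-night scores. The sketch below uses synthetic data (the study's 73 nights of measurements are not public; scores, noise level, and the 0–10 scale here are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
# 73 synthetic nights: a device-derived score and a noisy self-report,
# both on a 0..10 quality scale.
objective = rng.uniform(3, 9, size=73)
subjective = np.clip(objective + rng.normal(0, 2, 73), 0, 10)

r = np.corrcoef(objective, subjective)[0, 1]          # Pearson correlation
mean_abs_dev = np.abs(objective - subjective).mean()  # typical disagreement
```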
Digitalization is one of the most frequently discussed topics in industry. New technologies, platform concepts, and integrated data models enable disruptive business models and drive changes in organization, processes, and tools. The goal is to make a company more efficient, productive, and ultimately profitable. However, many companies face the challenge of how to approach digital transformation in a structured way and realize these potential benefits. What they come to realize is that Product Lifecycle Management plays a key role in digitalization initiatives, as object, structure, and process management along the life cycle is a foundation for many digitalization use cases. The introduced maturity model for assessing a firm's capabilities along the product lifecycle has been used almost two hundred times. It allows a company to compare its performance with an industry-specific benchmark to reveal individual strengths and weaknesses. Furthermore, an empirical study produced multidimensional correlation coefficients, which identify dependencies between business model characteristics and the maturity level of capabilities.
One major realm of Condition Based Maintenance is finding features that reflect the current health state of the asset or component under observation. Most of the existing approaches are accompanied by high computational costs during the different feature processing phases, making them infeasible in a real-world scenario. In this paper, a feature generation method is evaluated that compensates for two problems: (1) storing and handling large amounts of data and (2) computational complexity. Both problems arise, e.g., when electromagnetic solenoids are artificially aged and health indicators have to be extracted, or when multiple identical solenoids have to be monitored. To overcome them, Compressed Sensing (CS), a research field that is constantly finding new applications, is employed. CS is a data compression technique that allows reconstruction of the original signal from far fewer samples than Shannon-Nyquist dictates, when certain criteria are met. By applying this method to the measured solenoid coil current, raw data vectors can be reduced to a far smaller set of samples that still contains enough information for proper reconstruction. The obtained CS vector is also assumed to contain enough relevant information about solenoid degradation and faults, allowing CS samples to be used as input to fault detection or remaining useful life estimation routines. The paper gives some results demonstrating compression and reconstruction of coil current measurements and outlines the application of CS samples as condition monitoring data by determining deterioration- and fault-related features. Nevertheless, some unresolved issues remain regarding information loss during the compression stage, the design of the compression method itself, and its influence on diagnostic/prognostic methods.
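The compress-then-reconstruct idea can be demonstrated on a toy signal. Everything below is illustrative (signal length, a Gaussian measurement matrix, and a generic orthogonal matching pursuit reconstruction; the paper compresses measured coil currents, not synthetic sparse vectors):

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, k = 256, 64, 5              # signal length, measurements, sparsity

# k-sparse test signal with coefficients bounded away from zero
x = np.zeros(n)
support = rng.choice(n, size=k, replace=False)
x[support] = rng.uniform(1.0, 2.0, k) * rng.choice([-1.0, 1.0], k)

Phi = rng.normal(0.0, 1.0 / np.sqrt(m), (m, n))   # measurement matrix
y = Phi @ x                                       # m << n compressed samples

def omp(Phi, y, k):
    """Orthogonal matching pursuit: greedy sparse reconstruction."""
    residual, idx = y.copy(), []
    for _ in range(k):
        idx.append(int(np.argmax(np.abs(Phi.T @ residual))))
        coef, *_ = np.linalg.lstsq(Phi[:, idx], y, rcond=None)
        residual = y - Phi[:, idx] @ coef
    x_hat = np.zeros(Phi.shape[1])
    x_hat[idx] = coef
    return x_hat

x_hat = omp(Phi, y, k)
err = np.linalg.norm(x - x_hat) / np.linalg.norm(x)
# 64 random samples suffice to recover the 256-sample sparse signal.
```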
A conceptual framework for indigenous ecotourism projects – a case study in Wayanad, Kerala, India
(2020)
This paper analyses indigenous ecotourism in the Indian district of Wayanad, Kerala, using a conceptual framework based on a PATA 2015 study on indigenous tourism that includes the criteria: human rights, participation, business, and ecology. Detailed indicator sets for each criterion are applied to a case study of the Priyadarshini Tea Environs with a qualitative research approach addressing stakeholders from the public sector, non-governmental organisations, academia, tour operators, and communities including Adivasi and non-Adivasi. In-depth interviews were supported by participant and non-participant observations. The authors adapted this framework to the needs of the case study and consider that this modified version is a useful tool for academics and practitioners wishing to evaluate and develop indigenous ecotourism projects. The results show that the Adivasi involved in the Priyadarshini Tea Environs project benefit from indigenous ecotourism, but they could profit more if they had more involvement in and control of the whole tourism value chain.
Research credits corporate entrepreneurship (CE) with enabling established companies to create new types of innovation. Scholars have focused on the organizational design of CE activities, proposing specific organizational units. These semi-autonomous units create a tense management situation between the core organization and its CE activities. Management and organization research considers control a key managerial function that can help here. However, control has received limited research attention regarding CE units, leaving design issues for appropriate control of CE units unanswered. In this study, we link management control and CE to illustrate how control is understood in the context of CE. For this, we scanned the CE literature to identify underlying attributes and characteristics that allow specifying control for CE. In a first round, we identified 11 attributes that describe control for CE activities, and from these we derive future research paths.
In many industrial applications a workpiece is continuously fed through a heating zone in order to reach a desired temperature and obtain specific material properties. Many examples of such distributed parameter systems exist in heavy industry, and such processes can also be found in furniture production. In this paper, a real-time capable model of a heating process with application to industrial furniture production is developed. As the model is intended to be used in a Model Predictive Control (MPC) application, the main focus is on achieving minimum computational runtime while maintaining sufficient accuracy. Thus, the governing Partial Differential Equation (PDE) is discretized using finite differences on a grid specifically tailored to this application. The grid is optimized to yield acceptable accuracy with a minimum number of grid nodes, such that a relatively low-order model is obtained. Subsequently, an explicit fourth-order Runge-Kutta ODE (Ordinary Differential Equation) solver is compared to the Crank-Nicolson integration scheme presented in Weiss et al. (2022) in terms of runtime and accuracy. Finally, the unknown thermal parameters of the process are estimated using real-world measurement data obtained from an experimental setup. The final model yields acceptable accuracy and at the same time shows promising computation time, which enables its use in an MPC controller.
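The modelling recipe (spatial finite differences plus an explicit RK4 time integrator) can be sketched for the 1-D heat equation. Grid size, diffusivity, and boundary handling below are illustrative assumptions, not the paper's tailored grid:

```python
import numpy as np

a, L, N = 1e-4, 1.0, 21                 # diffusivity, length, grid nodes
dx = L / (N - 1)
dt = 0.2 * dx * dx / a                  # step size within the stability limit

def rhs(u):
    """Central finite differences for u_t = a * u_xx, Dirichlet boundaries."""
    du = np.zeros_like(u)
    du[1:-1] = a * (u[2:] - 2.0 * u[1:-1] + u[:-2]) / dx**2
    return du                           # boundary nodes stay fixed

def rk4_step(u, dt):
    """One classical fourth-order Runge-Kutta step."""
    k1 = rhs(u)
    k2 = rhs(u + 0.5 * dt * k1)
    k3 = rhs(u + 0.5 * dt * k2)
    k4 = rhs(u + dt * k3)
    return u + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

u = np.zeros(N)
u[0] = u[-1] = 100.0                    # heated boundaries
for _ in range(2000):                   # integrate past one diffusion time
    u = rk4_step(u, dt)
# The interior relaxes toward the boundary temperature.
```

The low order (here 21 states) is what makes such a model cheap enough for repeated evaluation inside an MPC loop.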
This paper describes an early lumping approach for generating a mathematical model of the heating process of a moving dual-layer substrate. The heat is supplied by convection and nonlinearly distributed over the whole considered spatial extent of the substrate. Using CFD simulations as a reference, two different modelling approaches have been investigated in order to find the most suitable model type. It is shown that, due to the possibility of using the transition matrix for time discretization, an equivalent circuit model achieves superior results compared to the Crank-Nicolson method. In order to maintain a constant sampling time for the envisioned control strategies, the effect of variable speed is transformed into a system description where the state vector has constant length but a variable number of non-zero entries. The handling of the variable transport speed during the heating process is considered the main contribution of this work. The result is a model suitable for use in future control strategies.
Online-based business models, such as shopping platforms, have added new possibilities for consumers over the last two decades. Aside from basic differences to other distribution channels, customer reviews on such platforms have become a powerful tool, which gives consumers an additional source of transparency. Related research has, for the most part, been labelled under the term electronic word-of-mouth (eWOM). An approach providing a theoretical basis for this phenomenon is developed here. The approach is mainly based on work in the field of consumer culture theory (CCT) and on the concept of co-creation. The work of several authors in these streams of research is used to construct a culturally informed resource-based theory, as advocated by Arnould & Thompson and Algesheimer & Gurâu.
This contribution presents a data compression scheme for applications in non-volatile flash memories. The objective of the data compression algorithm is to reduce the amount of user data such that the redundancy of the error correction coding can be increased in order to improve the reliability of the data storage system. The data compression is performed on block level, considering data blocks of 1 kilobyte. We present an encoder architecture that has low memory requirements and provides fast data encoding.
Large-scale quantum computers threaten the security of today's public-key cryptography. The McEliece cryptosystem is one of the most promising candidates for post-quantum cryptography. However, the McEliece system has the drawback of large key sizes for the public key. Similar to other public-key cryptosystems, the McEliece system has a comparably high computational complexity. Embedded devices often lack the required computational resources to compute those systems with sufficiently low latency. Hence, those systems require hardware acceleration. Lately, a generalized concatenated code construction was proposed together with a restrictive channel model, which allows for much smaller public keys for comparable security levels. In this work, we propose a hardware decoder suitable for a McEliece system based on these generalized concatenated codes. The results show that those systems are suitable for resource-constrained embedded devices.
This work proposes a decoder implementation for high-rate generalized concatenated (GC) codes. The proposed codes are well suited for error correction in flash memories for high reliability data storage. The GC codes are constructed from inner extended binary Bose-Chaudhuri-Hocquenghem (BCH) codes and outer Reed-Solomon (RS) codes. The extended BCH codes enable high-rate GC codes. Moreover, the decoder can take advantage of soft information. For the first three levels of inner codes we propose an optional Chase soft decoder. In this work, the code construction is explained and a decoder architecture is presented. Furthermore, area and throughput results are discussed.
This paper presents the implementation of deep learning methods for sleep stage detection using three signals that can be measured in a non-invasive way: a heartbeat signal, a respiratory signal, and a movement signal. Since the signals are measurements taken over time, the problem is treated as time-series data classification. The deep learning methods chosen to solve the problem are a convolutional neural network and a long short-term memory network. The input data is structured as a time-series sequence of the mentioned signals representing a 30-second epoch, which is the standard interval for sleep analysis. The records used belong to 23 subjects in total, divided into two subsets: records from 18 subjects were used for training and records from 5 subjects for testing. For detecting four sleep stages — REM (Rapid Eye Movement), Wake, Light sleep (Stage 1 and Stage 2), and Deep sleep (Stage 3 and Stage 4) — the accuracy of the model is 55% and the F1 score is 44%. For five stages — REM, Stage 1, Stage 2, Deep sleep (Stage 3 and 4), and Wake — the model gives an accuracy of 40% and an F1 score of 37%.
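The input structuring step described above amounts to cutting the three synchronized channels into 30-second windows. The sketch below is illustrative (the common sampling rate and the toy data are assumptions; the paper does not state them here):

```python
import numpy as np

fs = 10                                    # assumed common sampling rate, Hz
epoch_len = 30 * fs                        # samples per 30-s epoch

rng = np.random.default_rng(0)
heartbeat = rng.normal(size=9000)          # 15 min of toy data per channel
respiration = rng.normal(size=9000)
movement = rng.normal(size=9000)

signals = np.stack([heartbeat, respiration, movement], axis=-1)  # (T, 3)
n_epochs = signals.shape[0] // epoch_len
epochs = signals[: n_epochs * epoch_len].reshape(n_epochs, epoch_len, 3)
# -> one (300, 3) sequence per epoch, ready for a CNN/LSTM classifier
```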
Modeling a suitable birth density is a challenge when using Bernoulli filters such as the Labeled Multi-Bernoulli (LMB) filter. The birth density of newborn targets is unknown in most applications, but must be given as a prior to the filter. Usually the birth density stays unchanged or is designed based on the measurements from previous time steps.
In this paper, we assume that the true initial state of new objects is normally distributed. The expected value and covariance of the underlying density are unknown parameters. Using the estimated multi-object state of the LMB and the Rauch-Tung-Striebel (RTS) recursion, these parameters are recursively estimated and adapted after a target is detected.
The main contribution of this paper is an algorithm to estimate the parameters of the birth density and its integration into the LMB framework. Monte Carlo simulations are used to evaluate the detection driven adaptive birth density in two scenarios. The approach can also be applied to filters that are able to estimate trajectories.
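The recursive parameter estimation can be sketched as a running mean and covariance over the smoothed initial states of detected targets. This is a simplified stand-in (a Welford-style update on synthetic states; the paper embeds the estimate in the LMB filter and RTS recursion, which are not reproduced here):

```python
import numpy as np

class BirthDensityEstimator:
    """Running mean/covariance of estimated initial states of new targets."""
    def __init__(self, dim):
        self.n = 0
        self.mean = np.zeros(dim)
        self.M2 = np.zeros((dim, dim))   # sum of outer-product deviations

    def update(self, x0):
        """Fold one estimated initial state into the parameters."""
        self.n += 1
        delta = x0 - self.mean
        self.mean += delta / self.n
        self.M2 += np.outer(delta, x0 - self.mean)  # Welford-style update

    @property
    def cov(self):
        return self.M2 / (self.n - 1) if self.n > 1 else None

# Feed synthetic "smoothed initial states" scattered around a true mean.
est = BirthDensityEstimator(dim=2)
rng = np.random.default_rng(0)
true_mean = np.array([10.0, -1.0])
for _ in range(500):
    est.update(true_mean + rng.normal(0.0, 0.5, 2))
# est.mean and est.cov now parameterize the adapted birth density.
```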
A flight-like absolute optical frequency reference based on iodine for laser systems at 1064 nm
(2017)
We present an absolute optical frequency reference based on precision spectroscopy of hyperfine transitions in molecular iodine 127I2 for laser systems operating at 1064 nm. A quasi-monolithic spectroscopy setup was developed, integrated, and tested with respect to potential deployment in space missions that require frequency stable laser systems. We report on environmental tests of the setup and its frequency stability and reproducibility before and after each test. Furthermore, we report on the first measurements of the frequency stability of the iodine reference with an unsaturated absorption cell which will greatly simplify its application in space missions. Our frequency reference fulfills the requirements on the frequency stability for planned space missions such as LISA or NGGM.