Institut für Systemdynamik - ISD
The trajectory tracking problem for a fully-actuated full-scale surface vessel is addressed in this paper by designing a backstepping controller with multivariable integral action that takes the thruster allocation problem into account. The performance and robustness of this controller are evaluated in simulation, considering environmental disturbance forces and modeling mismatch, with a docking maneuver as the reference trajectory. Furthermore, the backstepping controller is compared with a nonlinear position PID controller with flatness-based feedforward.
The code-based McEliece cryptosystem is a promising candidate for post-quantum cryptography. The sender encodes a message, using a public scrambled generator matrix, and adds a random error vector. In this work, we consider q-ary codes and restrict the Lee weight of the added error symbols. This leads to an increased error correction capability and a larger work factor for information-set decoding attacks. In particular, we consider codes over an extension field and use the one-Lee error channel, which restricts the error values to Lee weight one. For this channel model, generalized concatenated codes can achieve high error correction capabilities. We discuss the decoding of those codes and the possible gain for decoding beyond the guaranteed error correction capability.
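To make the restricted-error model concrete, here is a minimal Python sketch of the Lee weight over Z_q; the alphabet size and the example vector are illustrative, not taken from the paper:

```python
# Sketch: Lee weight of symbols and vectors over Z_q (illustration only).

def lee_weight(symbol: int, q: int) -> int:
    """Lee weight of a symbol in Z_q: its distance to 0 on the cyclic group."""
    s = symbol % q
    return min(s, q - s)

def vector_lee_weight(vec, q):
    return sum(lee_weight(s, q) for s in vec)

# In the one-Lee error channel, every error symbol has Lee weight at most one,
# i.e., each nonzero entry is 1 or q-1.
q = 7
e = [0, 1, 0, 0, 6, 0]
assert all(lee_weight(s, q) <= 1 for s in e)
print(vector_lee_weight(e, q))  # -> 2
```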
In this letter, we present an approach to building a new generalized multistream spatial modulation system (GMSM), where the information is conveyed by the two active antennas with signal indices and using all possible active antenna combinations. The signal constellations associated with these antennas may have different sizes. In addition, four-dimensional hybrid frequency-phase modulated signals are utilized in GMSM. Examples of GMSM systems are given and computer simulation results are presented for transmission over Rayleigh and deep Nakagami-m flat-fading channels when maximum-likelihood detection is used. The presented results indicate a significant improvement of characteristics compared to the best-known similar systems.
Reed-Muller (RM) codes have recently regained some interest in the context of low latency communications and due to their relation to polar codes. RM codes can be constructed based on the Plotkin construction. In this work, we consider concatenated codes based on the Plotkin construction, where extended Bose-Chaudhuri-Hocquenghem (BCH) codes are used as component codes. This leads to improved code parameters compared to RM codes. Moreover, this construction is more flexible concerning the attainable code rates. Additionally, new soft-input decoding algorithms are proposed that exploit the recursive structure of the concatenation and the cyclic structure of the component codes. First, we consider the decoding of the cyclic component codes and propose a low complexity hybrid ordered statistics decoding algorithm. Next, this algorithm is applied to list decoding of the Plotkin construction. The proposed list decoding approach achieves near-maximum-likelihood performance for codes with medium lengths. The performance is comparable to state-of-the-art decoders, whereas the complexity is reduced.
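For reference, the underlying Plotkin (u, u+v) construction can be sketched in a few lines of Python; the component words below are toy examples, not the extended BCH codewords used in the paper:

```python
import numpy as np

def plotkin(u: np.ndarray, v: np.ndarray) -> np.ndarray:
    """(u | u+v) construction over GF(2): combines codewords u (from C1) and
    v (from C2) of equal length n into a codeword of length 2n. If C1 is an
    (n, k1, d1) code and C2 an (n, k2, d2) code, the result is a
    (2n, k1 + k2, min(2*d1, d2)) code."""
    return np.concatenate([u, (u + v) % 2])

# Toy length-4 component codewords:
u = np.array([1, 0, 1, 1])
v = np.array([0, 1, 1, 0])
print(plotkin(u, v))  # -> [1 0 1 1 1 1 0 1]
```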
Large-scale quantum computers threaten the security of today's public-key cryptography. The McEliece cryptosystem is one of the most promising candidates for post-quantum cryptography. However, the McEliece system has the drawback of large key sizes for the public key. Similar to other public-key cryptosystems, the McEliece system has a comparably high computational complexity. Embedded devices often lack the required computational resources to compute those systems with sufficiently low latency. Hence, those systems require hardware acceleration. Lately, a generalized concatenated code construction was proposed together with a restrictive channel model, which allows for much smaller public keys for comparable security levels. In this work, we propose a hardware decoder suitable for a McEliece system based on these generalized concatenated codes. The results show that those systems are suitable for resource-constrained embedded devices.
Automotive computing applications like AI databases, ADAS, and advanced infotainment systems have a huge need for persistent memory. This trend requires NAND flash memories designed for extreme automotive environments. However, the error probability of NAND flash memories has increased in recent years due to higher memory density and production tolerances. Hence, strong error correction coding is needed to meet automotive storage requirements. Many errors can be corrected by soft decoding algorithms. However, soft decoding is very resource-intensive and should be avoided when possible. NAND flash memories are organized in pages, and the error correction codes are usually encoded page-wise to reduce the latency of random reads. This page-wise encoding does not reach the maximum achievable capacity. Reading soft information increases the channel capacity but at the cost of higher latency and power consumption. In this work, we consider cell-wise encoding, which also increases the capacity compared to page-wise encoding. We analyze the cell-wise processing of data in triple-level cell (TLC) NAND flash and show the performance gain when using Low-Density Parity-Check (LDPC) codes. In addition, we investigate a coding approach with page-wise encoding and cell-wise reading.
Large persistent memory is crucial for many applications in embedded systems and automotive computing like AI databases, ADAS, and cutting-edge infotainment systems. Such applications require reliable NAND flash memories made for harsh automotive conditions. However, due to high memory densities and production tolerances, the error probability of NAND flash memories has risen. As the number of program/erase cycles and the data retention times increase, non-volatile NAND flash memories' performance and dependability suffer. The read reference voltages of the flash cells vary due to these aging processes. In this work, we consider the issue of reference voltage adaption. The considered estimation procedure uses shallow neural networks to estimate the read reference voltages for different life-cycle conditions with the help of histogram measurements. We demonstrate that the training data for the neural networks can be enhanced by using shifted histograms, i.e., a training of the neural networks is possible based on a few measurements of some extreme points used as training data. The trained neural networks generalize well for other life-cycle conditions.
In many industrial applications, a workpiece is continuously fed through a heating zone to reach a desired temperature and obtain specific material properties. Many examples of such distributed parameter systems exist in heavy industry, and such processes can also be found in furniture production. In this paper, a real-time capable model for a heating process with application to industrial furniture production is derived. As the model is intended for use in a Model Predictive Control (MPC) application, the main focus is on achieving minimum computational runtime while maintaining sufficient accuracy. Thus, the governing Partial Differential Equation (PDE) is discretized using finite differences on a grid specifically tailored to this application. The grid is optimized to yield acceptable accuracy with a minimum number of grid nodes, so that a relatively low-order model is obtained. Subsequently, an explicit fourth-order Runge-Kutta ODE (Ordinary Differential Equation) solver is compared to the Crank-Nicolson integration scheme presented in Weiss et al. (2022) in terms of runtime and accuracy. Finally, the unknown thermal parameters of the process are estimated using real-world measurement data obtained from an experimental setup. The final model yields acceptable accuracy while at the same time showing promising computation time, which enables its use in an MPC controller.
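As a rough illustration of the modeling approach, the following sketch integrates a central finite-difference discretization of a 1D heat equation with a convective heating term using the classic fourth-order Runge-Kutta scheme; all parameters and the exact PDE terms are hypothetical stand-ins, not the paper's model:

```python
import numpy as np

# Hypothetical parameters: diffusivity, heat transfer coefficient, air temperature.
alpha, h_conv, T_air = 1e-5, 0.5, 450.0
n, dx, dt = 50, 0.01, 0.05
T = np.full(n, 293.0)                     # initial workpiece temperature [K]

def rhs(T):
    """Spatial discretization: diffusion (central differences) + convection."""
    dT = np.zeros_like(T)
    dT[1:-1] = alpha * (T[2:] - 2 * T[1:-1] + T[:-2]) / dx**2
    dT += h_conv * (T_air - T)            # convective heating along the strip
    return dT

def rk4_step(T, dt):
    """One explicit fourth-order Runge-Kutta step."""
    k1 = rhs(T)
    k2 = rhs(T + dt / 2 * k1)
    k3 = rhs(T + dt / 2 * k2)
    k4 = rhs(T + dt * k3)
    return T + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

for _ in range(200):
    T = rk4_step(T, dt)
print(T.max())
```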
The trajectory tracking problem for a full-scale fully-actuated surface vessel is addressed in this paper. A nonlinear model predictive control (NMPC) scheme was designed to track a reference trajectory, considering state and input constraints as well as environmental disturbances, which were assumed to be constant over the prediction horizon. The controller was tested by performing docking maneuvers with the full-scale research vessel of the University of Applied Sciences Konstanz on the Rhine river in Germany. The experimental results were compared with simulations to validate the NMPC controller.
This paper presents a modeling approach for an industrial heating process in which a stripe-shaped workpiece is heated up to a specific temperature by applying hot air through a nozzle. The workpiece moves through the heating zone and is considered to be of infinite length. The speed of the substrate varies over time. The derived model is supposed to be computationally cheap to enable its use in a model-based control setting. We start by formulating the governing PDE and the corresponding boundary conditions. The PDE is then discretized on a spatial grid using finite differences, and two different integration schemes, explicit and implicit, are derived. The two models are evaluated in terms of computational effort and accuracy. It turns out that the implicit approach is favorable for the regarded process. We optimize the grid of the model to achieve a low number of grid nodes while maintaining sufficient accuracy. Finally, the thermodynamic parameters are optimized in order to fit the model's output to real-world data obtained from experiments.
Code-based cryptosystems are promising candidates for post-quantum cryptography. Recently, generalized concatenated codes over Gaussian and Eisenstein integers were proposed for those systems. For a channel model with errors of restricted weight, those q-ary codes lead to high error correction capabilities. Hence, these codes achieve high work factors for information set decoding attacks. In this work, we adapt this concept to codes for the weight-one error channel, i.e., a binary channel model where at most one bit-error occurs in each block of m bits. We also propose a low complexity decoding algorithm for the proposed codes. Compared to codes over Gaussian and Eisenstein integers, these codes achieve higher minimum Hamming distances for the dual codes of the inner component codes. This property increases the work factor for a structural attack on concatenated codes leading to higher overall security. For comparable security, the key size for the proposed code construction is significantly smaller than for the classic McEliece scheme based on Goppa codes.
Nowadays, most digital modulation schemes are based on conventional signal constellations that have no algebraic group, ring, or field properties, e.g., square quadrature-amplitude modulation constellations. Signal constellations with algebraic structure can enhance the system performance. For instance, multidimensional signal constellations based on dense lattices can achieve performance gains due to the dense packing. The algebraic structure enables low-complexity decoding and detection schemes. In this work, signal constellations with algebraic properties and their application in spatial modulation transmission schemes are investigated. Several design approaches for two- and four-dimensional signal constellations based on Gaussian, Eisenstein, and Hurwitz integers are shown. Detection algorithms with reduced complexity are proposed. It is shown that the proposed Eisenstein and Hurwitz constellations combined with the proposed suboptimal detection can outperform conventional two-dimensional constellations with ML detection.
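As an illustration of such algebraic constellations, the following sketch builds a Gaussian-integer constellation as the residues modulo pi with |pi|^2 = p prime, mapping each residue to its energy-minimal representative; the modulus is an arbitrary example, not one from the paper:

```python
# Sketch: Gaussian-integer constellation from residues modulo pi = 3 + 2j,
# where p = |pi|^2 = 13 is prime (values chosen for illustration).

def mod_gauss(x: complex, pi: complex) -> complex:
    """Reduce x modulo pi by rounding the quotient to a Gaussian integer,
    which yields the representative of smallest energy."""
    q = x * pi.conjugate() / abs(pi) ** 2
    q_rounded = complex(round(q.real), round(q.imag))
    return x - q_rounded * pi

p, pi = 13, 3 + 2j
constellation = {mod_gauss(k, pi) for k in range(p)}
print(sorted(constellation, key=abs))  # 13 low-energy complex signal points
```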
Virtual measurement models (VMM) can be used to generate artificial measurements and emulate complex sensor models such as Lidar. The input of the VMM is an estimate and the output is the set of measurements this estimate would cause. A Kalman filter with extension estimation based on random matrices is used to filter the mean and covariance of the real measurements. If these match the mean and covariance of the artificial measurements, then the given estimate is appropriate. The optimal input of the VMM is found using an adaptation algorithm. In this paper, the VMM approach is extended to multi-extended object tracking, where objects can be occluded and are only partially visible. The occlusion can be compensated if the extension estimation is performed for all objects together. The VMM then receives an estimate of the multi-object state as input and outputs the measurements that this multi-object state would cause.
With the high resolution of modern sensors such as multilayer LiDARs, estimating the 3D shape in an extended object tracking procedure is possible. In recent years, 3D shapes have been estimated in spherical coordinates using Gaussian processes, spherical double Fourier series or spherical harmonics. However, observations have shown that in many scenarios only a few measurements are obtained from top or bottom surfaces, leading to error-prone estimates in spherical coordinates. Therefore, in this paper we propose to estimate the shape in cylindrical coordinates instead, applying harmonic functions. Specifically, we derive an expansion for 3D shapes in cylindrical coordinates by solving a boundary value problem for the Laplace equation. This shape representation is then integrated in a plain greedy association model and compared to shape estimation procedures in spherical coordinates. Since the shape representation is only integrated in a basic estimator, the results are preliminary and a detailed discussion for future work is presented at the end of the paper.
Feature-Based Proposal Density Optimization for Nonlinear Model Predictive Path Integral Control
(2022)
This paper presents a novel feature-based sampling strategy for nonlinear Model Predictive Path Integral (MPPI) control. In MPPI control, the optimal control is calculated by solving a stochastic optimal control problem online using the weighted inference of stochastic trajectories. While the algorithm can be excellently parallelized, the closed-loop performance depends on the information quality of the drawn samples. Because these samples are drawn using a proposal density, its quality is crucial for the solver and thus the controller performance. In classical MPPI control, the explored state-space is strongly constrained by assumptions that refer to the control value variance, which are necessary for transforming the Hamilton-Jacobi-Bellman (HJB) equation into a linear second-order partial differential equation. To achieve excellent performance even with discontinuous cost functions, in this novel approach, knowledge-based features are used to determine the proposal density and thus the region of state-space for exploration. This paper addresses the question of how the performance of the MPPI algorithm can be improved using a feature-based mixture of base densities. Further, the developed algorithm is applied to an autonomous vessel that follows a track and concurrently avoids collisions using an emergency braking feature.
This paper presents a systematic comparison of different advanced approaches for motion prediction of vessels in docking scenarios. To this end, a conventional nonlinear gray-box model, its extension to a hybrid model using an additional regression neural network (RNN), and a black-box model based solely on an RNN are compared. The optimal hyperparameters are found by grid search. The training and validation data for the different models is collected in full-scale experiments using the solar research vessel Solgenia. The performances of the different prediction models are compared in full-scale scenarios. These models can improve advanced control strategies, e.g., nonlinear model predictive control (NMPC) or reinforcement learning (RL). This paper explores the question of what the advantages and disadvantages of the different presented prediction approaches are and how they can be used to improve the docking behavior of a vessel.
The growing error rates of triple-level cell (TLC) and quadruple-level cell (QLC) NAND flash memories have led to the application of error correction coding with soft-input decoding techniques in flash-based storage systems. Typically, flash memory is organized in pages where the individual bits per cell are assigned to different pages and different codewords of the error-correcting code. This page-wise encoding minimizes the read latency with hard-input decoding. To increase the decoding capability, soft-input decoding eventually becomes necessary as the cells age. This soft-decoding requires multiple read operations. Hence, the soft-read operations reduce the achievable throughput and increase the read latency and power consumption. In this work, we investigate a different encoding and decoding approach that improves the error correction performance without increasing the number of reference voltages. We consider TLC and QLC flash memories where all bits are jointly encoded using a Gray labeling. This cell-wise encoding improves the achievable channel capacity compared with independent page-wise encoding. Errors with cell-wise read operations typically result in a single erroneous bit per cell. We present a coding approach based on generalized concatenated codes that utilizes this property.
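The single-erroneous-bit property of cell-wise reads follows from the Gray labeling, as this short sketch with the standard binary-reflected Gray code illustrates (the paper's actual labeling may differ):

```python
# Sketch: Gray labeling of the 8 levels of a TLC cell. With a Gray labeling,
# a read error to an adjacent voltage level corrupts exactly one of the
# three bits, which is the property the concatenated coding scheme exploits.

def gray(level: int) -> int:
    """Binary-reflected Gray code of a cell level."""
    return level ^ (level >> 1)

for level in range(8):
    print(level, format(gray(level), '03b'))

# Adjacent levels differ in exactly one bit:
assert all(bin(gray(l) ^ gray(l + 1)).count('1') == 1 for l in range(7))
```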
In this paper, a novel feature-based sampling strategy for nonlinear Model Predictive Path Integral (MPPI) control is presented. Using the MPPI approach, the optimal feedback control is calculated by solving a stochastic optimal control problem (OCP) online by evaluating the weighted inference of sampled stochastic trajectories. While the MPPI algorithm can be excellently parallelized, the closed-loop performance strongly depends on the information quality of the sampled trajectories. To draw samples, a proposal density is used. The solver’s and thus the controller’s performance is of high quality if the sampled trajectories drawn from this proposal density are located in low-cost regions of state-space. In classical MPPI control, the explored state-space is strongly constrained by assumptions that refer to the control value’s covariance matrix, which are necessary for transforming the stochastic Hamilton–Jacobi–Bellman (HJB) equation into a linear second-order partial differential equation. To achieve excellent performance even with discontinuous cost functions, in this novel approach, knowledge-based features are introduced to constitute the proposal density and thus the low-cost region of state-space for exploration. This paper addresses the question of how the performance of the MPPI algorithm can be improved using a feature-based mixture of base densities. Furthermore, the developed algorithm is applied to an autonomous vessel that follows a track and concurrently avoids collisions using an emergency braking feature. Therefore, the presented feature-based MPPI algorithm is applied and analyzed in both simulation and full-scale experiments.
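For orientation, a generic MPPI update step can be sketched as follows; the notation, sampling parameters, and the plain Gaussian proposal are textbook defaults, not the feature-based proposal developed in the paper (dynamics, cost, and x0 are user-supplied placeholders):

```python
import numpy as np

def mppi_step(u_nom, dynamics, cost, x0, K=256, T=30, sigma=0.5, lam=1.0):
    """One generic MPPI iteration: sample K perturbed control trajectories,
    roll out the dynamics, evaluate the cost, and average the perturbations
    with softmax-like path-integral weights."""
    eps = sigma * np.random.randn(K, T)            # sampled control perturbations
    costs = np.empty(K)
    for k in range(K):
        x, c = x0, 0.0
        for t in range(T):
            x = dynamics(x, u_nom[t] + eps[k, t])  # rollout under the proposal
            c += cost(x)
        costs[k] = c
    w = np.exp(-(costs - costs.min()) / lam)       # path-integral weights
    w /= w.sum()
    return u_nom + w @ eps                         # weighted control update
```

The feature-based approach of the paper replaces the single Gaussian perturbation `eps` with a mixture of base densities selected by knowledge-based features.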
Docking Control of a Fully-Actuated Autonomous Vessel using Model Predictive Path Integral Control
(2022)
This paper presents the docking control of an autonomous vessel using the nonlinear Model Predictive Path Integral (MPPI) approach. This algorithm is based on a path integral over stochastic trajectories and can be parallelized easily. The controller parameters are tuned offline using knowledge of the system and simulations, including a nonlinear state and disturbance observer. The cost function implicitly contains information regarding the surroundings of the docking position. This approach allows continuous optimization of the trajectory with respect to the system state, disturbance state, and actuator dynamics. The control strategy has been tested in full-scale experiments using the solar research vessel Solgenia. The investigated MPPI controller has demonstrated excellent performance in both simulation and real-world experiments. This paper addresses the question of how the MPPI algorithm can be applied to dock a fully-actuated vessel and what benefits its application achieves.
This paper presents the swing-up and stabilization control of a Furuta pendulum using the recently published nonlinear Model Predictive Path Integral (MPPI) approach. This algorithm is based on a path integral over stochastic trajectories and can be parallelized easily. The controller parameters are tuned offline using the nonlinear system dynamics and simulations. State and input constraints are taken into account in the cost function. The presented approach sequentially computes a control sequence that solves this optimal control problem online. The control strategy has been tested in full-scale experiments using a pendulum prototype. The investigated MPPI controller has demonstrated excellent performance in simulation for the swing-up and stabilization task. In order to also achieve outstanding performance in a real-world experiment using a controller with limited computing power, a linear-quadratic regulator (LQR) is designed for the stabilization task. In this paper, the determination of the controller parameters for the MPPI algorithm is described in detail. Finally, the advantages of nonlinear MPPI control are discussed.
In this paper, approximating the shape of a sailing boat using elliptic cones is investigated. Measurements are assumed to be gathered from the target's surface, recorded by 3D scanning devices such as multilayer LiDAR sensors. Different models for estimating the sailing boat's extent are presented and evaluated in simulated and real-world scenarios. In particular, the measurement source association problem is addressed in the models. Simulated investigations are conducted with a static and a moving elliptic cone. The real-world scenario was recorded with a Velodyne Alpha Prime (VLP-128) mounted on a ferry on Lake Constance. The paper concludes with the extent estimation of a single sailing boat from LiDAR data using various measurement models.
Reliability Assessment of an Unscented Kalman Filter by Using Ellipsoidal Enclosure Techniques
(2022)
The Unscented Kalman Filter (UKF) is widely used for the state, disturbance, and parameter estimation of nonlinear dynamic systems, for which both process and measurement uncertainties are represented in a probabilistic form. Although the UKF can often be shown to be more reliable for nonlinear processes than the linearization-based Extended Kalman Filter (EKF) due to the enhanced approximation capabilities of its underlying probability distribution, it is not a priori obvious whether its strategy for selecting sigma points is sufficiently accurate to handle nonlinearities in the system dynamics and output equations. Such inaccuracies may arise for sufficiently strong nonlinearities in combination with large state, disturbance, and parameter covariances. Then, computationally more demanding approaches such as particle filters or the representation of (multi-modal) probability densities with the help of (Gaussian) mixture representations are possible ways to resolve this issue. To detect cases in a systematic manner that are not reliably handled by a standard EKF or UKF, this paper proposes the computation of outer bounds for state domains that are compatible with a certain percentage of confidence under the assumption of normally distributed states with the help of a set-based ellipsoidal calculus. The practical applicability of this approach is demonstrated for the estimation of state variables and parameters for the nonlinear dynamics of an unmanned surface vessel (USV).
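For reference, the standard scaled sigma-point selection that the paper scrutinizes looks roughly as follows; this is the generic textbook form with default scaling parameters, not the authors' implementation:

```python
import numpy as np

def sigma_points(mean, cov, alpha=1e-3, kappa=0.0):
    """Standard scaled sigma-point set (2n+1 points) for the UKF."""
    n = mean.size
    lam = alpha ** 2 * (n + kappa) - n
    S = np.linalg.cholesky((n + lam) * cov)   # matrix square root
    pts = [mean]
    for i in range(n):
        pts.append(mean + S[:, i])
        pts.append(mean - S[:, i])
    w = np.full(2 * n + 1, 1.0 / (2 * (n + lam)))
    w[0] = lam / (n + lam)                     # can be negative for small alpha
    return np.array(pts), w

pts, w = sigma_points(np.zeros(2), np.eye(2))
print(pts.shape, w.sum())   # (5, 2) and weights summing to 1
```

The paper's point is that propagating these few deterministically chosen points through strongly nonlinear dynamics may approximate the posterior poorly, which the proposed ellipsoidal enclosures detect.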
Experimental Validation of Ellipsoidal Techniques for State Estimation in Marine Applications
(2022)
A reliable quantification of the worst-case influence of model uncertainty and external disturbances is crucial for the localization of vessels in marine applications. This is especially true if uncertain GPS-based position measurements are used to update predicted vessel locations that are obtained from the evaluation of a ship’s state equation. To reflect real-life working conditions, these state equations need to account for uncertainty in the system model, such as imperfect actuation and external disturbances due to effects such as wind and currents. As an application scenario, the GPS-based localization of autonomous DDboat robots is considered in this paper. Using experimental data, the efficiency of an ellipsoidal approach, which exploits a bounded-error representation of disturbances and uncertainties, is demonstrated.
Multi-object tracking filters require a birth density to detect new objects from measurement data. If the initial positions of new objects are unknown, it may be useful to choose an adaptive birth density. In this paper, a circular birth density is proposed, which is placed like a band around the surveillance area. This allows for 360° coverage. The birth density is described in polar coordinates and considers all point-symmetric quantities such as radius, radial velocity and tangential velocity of objects entering the surveillance area. Since it is assumed that these quantities are unknown and may vary between different targets, detected trajectories, and in particular their initial states, are used to estimate the distribution of initial states. The adapted birth density is approximated as a Gaussian mixture, so that it can be used for filters operating on Cartesian coordinates.
Extended Target Tracking With a Lidar Sensor Using Random Matrices and a Virtual Measurement Model
(2022)
Random matrices are widely used to estimate the extent of an elliptically contoured object. Usually, it is assumed that the measurements follow a normal distribution, with its standard deviation being proportional to the object’s extent. However, the random matrix approach can filter the center of gravity and the covariance matrix of measurements independently of the measurement model. This work considers the whole chain from data acquisition to the linear Kalman Filter with extension estimation as a reference plant. The input is the (unknown) ground truth (position and extent). The output is the filtered center of gravity and the filtered covariance matrix of the measurement distribution. A virtual measurement model emulates the behavior of the reference plant. The input of the virtual measurement model is adapted using the proposed algorithm until the output parameters of the virtual measurement model match the result of the reference plant. After the adaptation, the input to the virtual measurement model is considered an estimation for position and extent. The main contribution of this paper is the reference model concept and an adaptation algorithm to optimize the input of the virtual measurement model.
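The adaptation idea can be caricatured with a simple stand-in for the virtual measurement model: sample an ellipse contour, compare the sample moments with the target moments, and nudge the input until they match. This is a conceptual sketch only; the paper's VMM emulates a Lidar sensor, and the adaptation algorithm differs:

```python
import numpy as np

def vmm(center, axes, n=500):
    """Stand-in virtual measurement model: sample points on an ellipse
    contour and return the sample mean and covariance."""
    t = np.random.uniform(0.0, 2.0 * np.pi, n)
    pts = center + np.c_[axes[0] * np.cos(t), axes[1] * np.sin(t)]
    return pts.mean(axis=0), np.cov(pts.T)

def adapt(target_mean, target_cov, steps=200, lr=0.2):
    """Adapt the VMM input until its output moments match the target moments.
    For contour sampling, variance per axis is axis**2 / 2."""
    center, axes = np.zeros(2), np.ones(2)
    for _ in range(steps):
        m, C = vmm(center, axes)
        center += lr * (target_mean - m)
        axes += lr * (np.sqrt(2.0 * np.diag(target_cov)) - np.sqrt(2.0 * np.diag(C)))
    return center, axes

print(adapt(np.array([5.0, 3.0]), np.diag([2.0, 0.5])))
```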
Chapter 2 of this thesis describes the theoretical foundations of optimal control and the different methods of the path integral framework for controller synthesis. In addition, an extension of the stochastic NMPC is presented that adapts the controller to the actual system dynamics. Furthermore, a method is developed and described that greatly increases the efficiency of the algorithm.
Chapter 3 shows how path integral control is used to swing up a Furuta pendulum.
In Chapter 4, the algorithms are applied to different problems in the context of a research boat. Among other things, it is shown how a path integral control algorithm can be used to autonomously dock the research vessel Solgenia at the pier of the HTWG Konstanz.
Finally, Chapter 5 draws conclusions from the results, puts them into context, and gives an outlook on possible future work.
This paper describes the development of a control system for an industrial heating application. In this process, a moving substrate passes through a heating zone with variable speed. Heat is applied to the substrate by hot air, with the air flow rate being the manipulated variable. The aim is to control the substrate's temperature at a specific location after passing the heating zone. First, a model is derived for a point attached to the moving substrate. This is modified to reflect the temperature of the moving substrate at the specified location. In order to regulate the temperature, a nonlinear model predictive control approach is applied, using an implicit Euler scheme to integrate the model and an augmented gradient-based optimization approach. The performance of the controller has been validated both by simulations and by experiments on the physical plant. The respective results are presented in this paper.
Trajectory Tracking of a Fully-actuated Surface Vessel using Nonlinear Model Predictive Control
(2021)
The trajectory tracking problem for a full-scale fully-actuated surface vessel is addressed in this paper. The unknown hydrodynamic and propulsion parameters of the vessel’s dynamic model were identified using an experimental maneuver-based identification process. Then, a nonlinear model predictive control (NMPC) scheme is designed, and the controller’s performance is assessed through the variation of NMPC parameters and constraint tightening for tracking a curved trajectory.
In this paper, a systematic comparison of three different advanced control strategies for the automated docking of a vessel is presented. The controllers are automatically tuned offline by applying an optimization process using simulations of the whole system, including the trajectory planner and the state and disturbance observer. Then, performance and robustness are investigated using Monte Carlo simulations with varying model parameters and disturbances. The control strategies have also been tested in full-scale experiments using the solar research vessel Solgenia. All investigated control strategies have demonstrated very good performance in both simulation and real-world experiments. Videos are available at https://www.htwg-konstanz.de/forschung-und-transfer/institute-und-labore/isd/regelungstechnik/videos/
In multi-extended object tracking, parameters (e.g., extent) and trajectory are often determined independently. In this paper, we propose a joint parameter and trajectory (JPT) state and its integration into the Bayesian framework. This allows processing measurements that contain information about parameters and states. Examples of such measurements are bounding boxes given by an image processing algorithm. It is shown that this approach can consider correlations between states and parameters. In this paper, we present the JPT Bernoulli filter. Since parameters and state elements are considered in the weighting of the measurement data assignment hypotheses, the performance is higher than with the conventional Bernoulli filter. The JPT approach can also be used for other Bayes filters.
List decoding for concatenated codes based on the Plotkin construction with BCH component codes
(2021)
Reed-Muller codes are a popular code family based on the Plotkin construction. Recently, these codes have regained some interest due to their close relation to polar codes and their low-complexity decoding. We consider a similar code family, i.e., the Plotkin concatenation with binary BCH component codes. This construction is more flexible regarding the attainable code parameters. In this work, we consider a list-based decoding algorithm for the Plotkin concatenation with BCH component codes. The proposed list decoding leads to a significant coding gain with only a small increase in computational complexity. Simulation results demonstrate that the Plotkin concatenation with the proposed decoding achieves near maximum likelihood decoding performance. This coding scheme can outperform polar codes for moderate code lengths.
The encoding of antenna patterns with generalized spatial modulation, as well as other index modulation techniques, requires w-out-of-n encoding, where all binary vectors of length n have the same weight w. This constant-weight property cannot be obtained by conventional linear coding schemes. In this work, we propose a new class of constant-weight codes that result from the concatenation of convolutional codes with constant-weight block codes. These constant-weight convolutional codes are nonlinear binary trellis codes that can be decoded with the Viterbi algorithm. Some constructed constant-weight convolutional codes are optimum free distance codes. Simulation results demonstrate that the decoding performance with Viterbi decoding is close to the performance of the best-known linear codes. Similarly, simulation results for spatial modulation with simple on-off keying show a significant coding gain with the proposed coded index modulation scheme.
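For context, a w-out-of-n mapping itself can be realized with enumerative (combinadic) coding, as in this sketch; this is a generic technique for illustration, not the convolutional construction proposed in the paper:

```python
from math import comb

def index_to_constant_weight(idx: int, n: int, w: int):
    """Map an integer index in [0, C(n, w)) to a binary vector of
    length n and constant weight w (enumerative coding)."""
    vec = []
    for pos in range(n, 0, -1):
        if w == 0:
            vec.append(0)
            continue
        c = comb(pos - 1, w)        # number of words with a 0 in this position
        if idx < c:
            vec.append(0)
        else:
            vec.append(1)
            idx -= c
            w -= 1
    return vec

# All 6 = C(4, 2) weight-2 vectors of length 4:
print([index_to_constant_weight(i, 4, 2) for i in range(6)])
```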
Acoustic Echo Cancellation (AEC) plays a crucial role in speech communication devices to enable full-duplex communication. AEC algorithms have been studied extensively in the literature. However, device-specific details like microphone or loudspeaker configurations are often neglected, despite their impact on echo attenuation and near-end speech quality. In this work, we propose a method to investigate different loudspeaker-microphone configurations with respect to their contribution to the overall AEC performance. A generic AEC system consisting of an adaptive filter and a Wiener post filter is used for a fair comparison between different setups. We propose the near-end-to-residual-echo ratio (NRER) and the attenuation-of-near-end (AON) as quality measures for the full-duplex AEC performance.
Large-scale quantum computers threaten today's public-key cryptosystems. The code-based McEliece and Niederreiter cryptosystems are among the most promising candidates for post-quantum cryptography. Recently, a new class of q-ary product codes over Gaussian integers together with an efficient decoding algorithm were proposed for the McEliece cryptosystems. It was shown that these codes achieve a higher work factor for information-set decoding attacks than maximum distance separable (MDS) codes with comparable length and dimension. In this work, we adapt this q-ary product code construction to codes over Eisenstein integers. We propose a new syndrome decoding method which is applicable for Niederreiter cryptosystems. The code parameters and work factors for information-set decoding are comparable to codes over Gaussian integers. Hence, the new construction is not favorable for the McEliece system. Nevertheless, it is beneficial for the Niederreiter system, where it achieves larger message lengths. While the Niederreiter and McEliece systems have the same level of security, the Niederreiter system can be advantageous for some applications, e.g., it enables digital signatures. The proposed coding scheme is interesting for lightweight Niederreiter cryptosystems and embedded security due to the short code lengths and low decoding complexity.
Code-based cryptography is a promising candidate for post-quantum public-key encryption. The classic McEliece system uses binary Goppa codes, which are known for their good error correction capability. However, the key generation and decoding procedures of the classic McEliece system have a high computational complexity. Recently, q-ary concatenated codes over Gaussian integers were proposed for the McEliece cryptosystem together with the one-Mannheim error channel, where the error values are limited to Mannheim weight one. For this channel, concatenated codes over Gaussian integers achieve a higher error correction capability than maximum distance separable (MDS) codes with bounded minimum distance decoding. This improves the work factor regarding decoding attacks based on information-set decoding. This work proposes an improved construction for codes over Gaussian integers. These generalized concatenated codes extend the rate region where the work factor is beneficial compared to MDS codes. They allow for shorter public keys at the same level of security as the classic Goppa codes. Such codes are beneficial for lightweight code-based cryptosystems.
Algorithms and Architectures for Cryptography and Source Coding in Non-Volatile Flash Memories
(2021)
In this work, algorithms and architectures for cryptography and source coding are developed, which are suitable for many resource-constrained embedded systems such as non-volatile flash memories. A new concept for elliptic curve cryptography is presented, which uses an arithmetic over Gaussian integers. Gaussian integers are a subset of the complex numbers with integers as real and imaginary parts. Ordinary modular arithmetic over Gaussian integers is computationally expensive. To reduce the complexity, a new arithmetic based on the Montgomery reduction is presented. For the elliptic curve point multiplication, this arithmetic over Gaussian integers improves the computational efficiency, increases the resistance against side channel attacks, and reduces the memory requirements. Furthermore, an efficient variant of the Lempel-Ziv-Welch (LZW) algorithm for universal lossless data compression is investigated. Instead of one LZW dictionary, this algorithm applies several dictionaries to speed up the encoding process. Two dictionary partitioning techniques are introduced that improve the compression rate and reduce the memory size of this parallel dictionary LZW algorithm.
Error correction coding for optical communication and storage requires high rate codes that enable high data throughput and low residual errors. Recently, different concatenated coding schemes were proposed that are based on binary BCH codes with low error correcting capabilities. In this work, low-complexity hard- and soft-input decoding methods for such codes are investigated. We propose three concepts to reduce the complexity of the decoder. For the algebraic decoding we demonstrate that Peterson's algorithm can be more efficient than the Berlekamp-Massey algorithm for single, double, and triple error correcting BCH codes. We propose an inversion-less version of Peterson's algorithm and a corresponding decoding architecture. Furthermore, we propose a decoding approach that combines algebraic hard-input decoding with soft-input bit-flipping decoding. An acceptance criterion is utilized to determine the reliability of the estimated codewords. For many received codewords the stopping criterion indicates that the hard-decoding result is sufficiently reliable, and the costly soft-input decoding can be omitted. To reduce the memory size for the soft-values, we propose a bit-flipping decoder that stores only the positions and soft-values of a small number of code symbols. This method significantly reduces the memory requirements and has little adverse effect on the decoding performance.
Four-Dimensional Hurwitz Signal Constellations, Set Partitioning, Detection, and Multilevel Coding
(2021)
The Hurwitz lattice provides the densest four-dimensional packing. This fact has motivated research on four-dimensional Hurwitz signal constellations for optical and wireless communications. This work presents a new algebraic construction of finite sets of Hurwitz integers that is inherently accompanied by a respective modulo operation. These signal constellations are investigated for transmission over the additive white Gaussian noise (AWGN) channel. It is shown that these signal constellations have a better constellation figure of merit and hence a better asymptotic performance over an AWGN channel when compared with conventional signal constellations with algebraic structure, e.g., two-dimensional Gaussian-integer constellations or four-dimensional Lipschitz-integer constellations. We introduce two concepts for set partitioning of the Hurwitz integers. The first method is useful to reduce the computational complexity of the symbol detection. This suboptimum detection approach achieves near-maximum-likelihood performance. In the second case, the partitioning exploits the algebraic structure of the Hurwitz signal constellations. We partition the Hurwitz integers into additive subgroups in a manner that the minimum Euclidean distance of each subgroup is larger than in the original set. This enables multilevel code constructions for the new signal constellations.
The performance and reliability of non-volatile NAND flash memories deteriorate as the number of program/erase cycles grows. The reliability also suffers from cell-to-cell interference, long data retention times, and read disturb. These processes affect the read threshold voltages. The aging of the cells causes voltage shifts, which lead to high bit error rates (BER) with fixed pre-defined read thresholds. This work proposes two methods that aim at minimizing the BER by adjusting the read thresholds. Both methods utilize the number of errors detected in the codeword of an error correction code. It is demonstrated that the observed number of errors is a good measure of the voltage shifts and can be utilized for the initial calibration of the read thresholds. The second approach is a gradual channel estimation method that utilizes the asymmetric error probabilities for the one-to-zero and zero-to-one errors caused by threshold calibration errors. Both methods are investigated using the mutual information between the optimal read voltage and the measured error values.
Numerical results obtained from flash measurements show that these methods reduce the BER of NAND flash memories significantly.
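The structure of such an error-count-driven calibration can be sketched as follows; `read_page` and `ecc_error_count` are hypothetical placeholders for the flash interface and the error correction unit, not an actual API:

```python
def calibrate_threshold(offsets, read_page, ecc_error_count):
    """Try each read-voltage offset and keep the one whose codeword
    shows the fewest errors detected by the ECC decoder."""
    best_offset, best_errors = None, float('inf')
    for off in offsets:
        raw = read_page(offset=off)        # hard read at a shifted threshold
        errors = ecc_error_count(raw)      # errors found by the ECC decoder
        if errors < best_errors:
            best_offset, best_errors = off, errors
    return best_offset

# Usage sketch: calibrate_threshold(range(-6, 7), read_page, ecc_error_count)
```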
The McEliece cryptosystem is a promising candidate for post-quantum public-key encryption. In this work, we propose q-ary codes over Gaussian integers for the McEliece system and a new channel model. With this one Mannheim error channel, errors are limited to weight one. We investigate the channel capacity of this channel and discuss its relation to the McEliece system. The proposed codes are based on a simple product code construction and have a low complexity decoding algorithm. For the one Mannheim error channel, these codes achieve a higher error correction capability than maximum distance separable codes with bounded minimum distance decoding. This improves the work factor regarding decoding attacks based on information-set decoding.
Generalized Concatenated Codes over Gaussian and Eisenstein Integers for Code-Based Cryptography
(2021)
The code-based McEliece and Niederreiter cryptosystems are promising candidates for post-quantum public-key encryption. Recently, q-ary concatenated codes over Gaussian integers were proposed for the McEliece cryptosystem together with the one-Mannheim error channel, where the error values are limited to Mannheim weight one. Due to the limited error values, the codes over Gaussian integers achieve a higher error correction capability than maximum distance separable (MDS) codes with bounded minimum distance decoding. This higher error correction capability improves the work factor regarding decoding attacks based on information-set decoding. The codes also enable a low complexity decoding algorithm for decoding beyond the guaranteed error correction capability. In this work, we extend this coding scheme to codes over Eisenstein integers. These codes have advantages for the Niederreiter system. Additionally, we propose an improved code construction based on generalized concatenated codes. These codes extend the rate region where the work factor is beneficial compared to MDS codes. Moreover, generalized concatenated codes are more robust against structural attacks than ordinary concatenated codes.
In this paper, a novel measurement model based on spherical double Fourier series (DFS) for estimating the 3D shape of a target concurrently with its kinematic state is introduced. Here, the shape is represented as a star-convex radial function, decomposed as spherical DFS. In comparison to ordinary DFS, spherical DFS do not suffer from ambiguities at the poles. Details will be given in the paper. The shape representation is integrated into a Bayesian state estimator framework via a measurement equation. As range sensors only generate measurements from the target side facing the sensor, the shape representation is modified to enable application of shape symmetries during the estimation process. The model is analyzed in simulations and compared to a shape estimation procedure using spherical harmonics. Finally, shape estimation using spherical and ordinary DFS is compared to analyze the effect of the pole problem in extended object tracking (EOT) scenarios.
A nonlinear mathematical model for the dynamics of permanent magnet synchronous machines with interior magnets is discussed. The model of the current dynamics captures saturation and dependency on the rotor angle. Based on the model, a flatness-based field-oriented closed-loop controller and a feed-forward compensation of torque ripples are derived. Effectiveness and robustness of the proposed algorithms are demonstrated by simulation results.
This paper proposes a novel transmission scheme for generalized multistream spatial modulation. This new approach uses codes over Gaussian or Eisenstein integers that correct one Mannheim error as multidimensional signal constellations. These codes enable a suboptimal decoding strategy with near maximum likelihood performance for transmission over the additive white Gaussian noise channel. In this contribution, this decoding algorithm is generalized to the detection for generalized multistream spatial modulation. The proposed method can outperform conventional generalized multistream spatial modulation with respect to decoding performance, detection complexity, and spectral efficiency.
Soft-input decoding of concatenated codes based on the Plotkin construction and BCH component codes
(2020)
Low latency communication requires soft-input decoding of binary block codes with small to medium block lengths.
In this work, we consider generalized multiple concatenated (GMC) codes based on the Plotkin construction. These codes are similar to Reed-Muller (RM) codes. In contrast to RM codes, BCH codes are employed as component codes. This leads to improved code parameters. Moreover, a decoding algorithm is proposed that exploits the recursive structure of the concatenation. This algorithm enables efficient soft-input decoding of binary block codes with small to medium lengths. The proposed codes and their decoding achieve significant performance gains compared with RM codes and recursive GMC decoding.
The reliability of flash memories suffers from various error causes. Program/erase cycles, read disturb, and cell to cell interference impact the threshold voltages and cause bit errors during the read process. Hence, error correction is required to ensure reliable data storage. In this work, we investigate the bit-labeling of triple level cell (TLC) memories. This labeling determines the page capacities and the latency of the read process. The page capacity defines the redundancy that is required for error correction coding. Typically, Gray codes are used to encode the cell state such that the codes of adjacent states differ in a single digit. These Gray codes minimize the latency for random access reads but cannot balance the page capacities. Based on measured voltage distributions, we investigate the page capacities and propose a labeling that provides a better rate balancing than Gray labeling.
Side Channel Attack Resistance of the Elliptic Curve Point Multiplication using Eisenstein Integers
(2020)
Asymmetric cryptography empowers secure key exchange and digital signatures for message authentication. Nevertheless, consumer electronics and embedded systems often rely on symmetric cryptosystems because asymmetric cryptosystems are computationally intensive. Besides, implementations of cryptosystems are prone to side-channel attacks (SCA). Consequently, the secure and efficient implementation of asymmetric cryptography on resource-constrained systems is demanding. In this work, elliptic curve cryptography is considered. A new concept for an SCA resistant calculation of the elliptic curve point multiplication over Eisenstein integers is presented and an efficient arithmetic over Eisenstein integers is proposed. Representing the key by Eisenstein integer expansions is beneficial to reduce the computational complexity and the memory requirements of an SCA protected implementation.
In this article, we give the construction of new four-dimensional signal constellations in the Euclidean space, which represent a certain combination of binary frequency-shift keying (BFSK) and M-ary amplitude-phase-shift keying (MAPSK). A description of such signals and the formulas for calculating the minimum squared Euclidean distance are presented. We have developed an analytic construction method for even and odd values of M; hence, no computer search and no heuristic methods are required. The new optimized BFSK-MAPSK (M = 5, 6, ..., 16) signal constructions are built for modulation indexes h = 0.1, 0.15, ..., 0.5, and their parameters are given. The results of computer simulations are also provided. Based on the obtained results, we conclude that BFSK-MAPSK systems outperform similar four-dimensional systems both in terms of minimum squared Euclidean distance and simulated symbol error rate.
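The figure of merit used here, the minimum squared Euclidean distance, is straightforward to evaluate by brute force, as the following sketch shows; the constellation is a random placeholder, not a BFSK-MAPSK set:

```python
import numpy as np
from itertools import combinations

def msed(points: np.ndarray) -> float:
    """Minimum squared Euclidean distance over all point pairs."""
    return min(np.sum((p - q) ** 2) for p, q in combinations(points, 2))

points = np.random.randn(8, 4)   # 8 random four-dimensional signal points
print(msed(points))
```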
This work presents a new concept to implement the elliptic curve point multiplication (PM). This computation is based on a new modular arithmetic over Gaussian integer fields. Gaussian integers are a subset of the complex numbers such that the real and imaginary parts are integers. Since Gaussian integer fields are isomorphic to prime fields, this arithmetic is suitable for many elliptic curves. Representing the key by a Gaussian integer expansion is beneficial to reduce the computational complexity and the memory requirements of secure hardware implementations, which are robust against attacks. Furthermore, an area-efficient coprocessor design is proposed with an arithmetic unit that enables Montgomery modular arithmetic over Gaussian integers. The proposed architecture and the new arithmetic provide high flexibility, i.e., binary and non-binary key expansions as well as protected and unprotected PM calculations are supported. The proposed coprocessor is a competitive solution for a compact ECC processor suitable for applications in small embedded systems.
Modeling a suitable birth density is a challenge when using Bernoulli filters such as the Labeled Multi-Bernoulli (LMB) filter. The birth density of newborn targets is unknown in most applications, but must be given as a prior to the filter. Usually the birth density stays unchanged or is designed based on the measurements from previous time steps.
In this paper, we assume that the true initial state of new objects is normally distributed. The expected value and covariance of the underlying density are unknown parameters. Using the estimated multi-object state of the LMB and the Rauch-Tung-Striebel (RTS) recursion, these parameters are recursively estimated and adapted after a target is detected.
The main contribution of this paper is an algorithm to estimate the parameters of the birth density and its integration into the LMB framework. Monte Carlo simulations are used to evaluate the detection driven adaptive birth density in two scenarios. The approach can also be applied to filters that are able to estimate trajectories.
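One way to realize such a recursive parameter estimate is Welford-style moment tracking over the (smoothed) initial states of confirmed tracks, sketched below; this is a generic stand-in for illustration, not the paper's exact RTS-based update:

```python
import numpy as np

class BirthDensityEstimator:
    """Running mean and covariance of initial states of detected tracks,
    used to adapt the birth density parameters."""

    def __init__(self, dim):
        self.n, self.mean = 0, np.zeros(dim)
        self.scatter = np.zeros((dim, dim))

    def update(self, x0):
        """Incorporate the estimated initial state x0 of a new track
        (Welford's recursive moment update)."""
        self.n += 1
        delta = x0 - self.mean
        self.mean += delta / self.n
        self.scatter += np.outer(delta, x0 - self.mean)

    @property
    def cov(self):
        return self.scatter / max(self.n - 1, 1)

est = BirthDensityEstimator(dim=4)
for x0 in np.random.randn(20, 4):
    est.update(x0)
print(est.mean, est.cov.shape)
```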
The Montgomery multiplication is an efficient method for modular arithmetic. Typically, it is used for modular arithmetic over integer rings to prevent the expensive inversion for the modulo reduction. In this work, we consider modular arithmetic over rings of Gaussian integers. Gaussian integers are a subset of the complex numbers such that the real and imaginary parts are integers. In many cases, Gaussian integer rings are isomorphic to ordinary integer rings. We demonstrate that the concept of the Montgomery multiplication can be extended to Gaussian integers. Due to the independent calculation of the real and imaginary parts, the computational complexity of the multiplication is reduced compared with ordinary integer modular arithmetic. This concept is suitable for coding applications as well as for asymmetric key cryptographic systems, such as elliptic curve cryptography or the Rivest-Shamir-Adleman system.
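For reference, the classic integer Montgomery reduction (REDC) that this work generalizes looks as follows; the moduli are toy values, and the componentwise extension to Gaussian integers is the paper's contribution, not shown here:

```python
def montgomery_redc(t: int, n: int, r: int, n_prime: int) -> int:
    """Compute t * r^{-1} mod n without dividing by n.
    Requires r a power of two with gcd(r, n) = 1 and n_prime = -n^{-1} mod r."""
    m = ((t % r) * n_prime) % r
    u = (t + m * n) // r           # exact division: the low bits cancel
    return u - n if u >= n else u

n, r = 17, 32                      # modulus and Montgomery radix (toy values)
n_prime = (-pow(n, -1, r)) % r     # precomputed constant
a, b = 7, 11
a_m, b_m = (a * r) % n, (b * r) % n                # to the Montgomery domain
prod = montgomery_redc(a_m * b_m, n, r, n_prime)   # = a*b*r mod n
print(montgomery_redc(prod, n, r, n_prime) == (a * b) % n)  # True
```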
In this work, we investigate a hybrid decoding approach that combines algebraic hard-input decoding of binary block codes with soft-input decoding. In particular, an acceptance criterion is proposed which determines the reliability of a candidate codeword. For many received codewords the stopping criterion indicates that the hard-decoding result is sufficiently reliable, and the costly soft-input decoding can be omitted. The proposed acceptance criterion significantly reduces the decoding complexity. For simulations we combine the algebraic hard-input decoding with ordered statistics decoding, which enables near maximum likelihood soft-input decoding for codes of small to medium block lengths.
Multi-dimensional spatial modulation is a multiple-input/multiple-output wireless transmission technique that uses only a few active antennas simultaneously. The computational complexity of the optimal maximum-likelihood (ML) detector at the receiver increases rapidly as more transmit antennas or larger modulation orders are employed. ML detection may be infeasible for higher bit rates. Many suboptimal detection algorithms for spatial modulation use two-stage detection schemes, where the set of active antennas is detected in the first stage and the transmitted symbols in the second stage. Typically, these detection schemes use the ML strategy for the symbol detection. In this work, we consider a suboptimal detection algorithm for the second detection stage. This approach combines equalization and list decoding. We propose an algorithm for multi-dimensional signal constellations with a reduced search space in the second detection stage through set partitioning. In particular, we derive a set partitioning from the properties of Hurwitz integers. Simulation results demonstrate that the new algorithm achieves near-ML performance. It significantly reduces the complexity when compared with conventional two-stage detection schemes. Multi-dimensional constellations in combination with suboptimal detection can even outperform conventional signal constellations in combination with ML detection.
Spatial modulation is a low-complexity multiple-input/multiple-output transmission technique. The recently proposed spatial permutation modulation (SPM) extends the concept of spatial modulation. It is a coding approach, where the symbols are dispersed in space and time. In the original proposal of SPM, short repetition codes and permutation codes were used to construct a space-time code. In this paper, we propose a similar coding scheme that combines permutation codes with codes over Gaussian integers. Short codes over Gaussian integers have good distance properties. Furthermore, the code alphabet can directly be applied as signal constellation, hence no mapping is required. Simulation results demonstrate that the proposed coding approach outperforms SPM with repetition codes.
Many resource-constrained systems still rely on symmetric cryptography for verification and authentication. Asymmetric cryptographic systems provide higher security levels but are computationally intensive. Hence, embedded systems can benefit from hardware assistance, i.e., coprocessors optimized for the required public key operations. In this work, we propose an elliptic curve cryptographic coprocessor design for resource-constrained systems. Many such coprocessor designs consider only special (Solinas) prime fields, which enable a low-complexity modulo arithmetic. Other implementations support arbitrary prime curves using the Montgomery reduction. These implementations typically require more time for the point multiplication. We present a coprocessor design that has low area requirements and enables a trade-off between performance and flexibility. The point multiplication can be performed either using a fast arithmetic based on Solinas primes or using a slower, but flexible Montgomery modular arithmetic.
Side Channel Attack Resistance of the Elliptic Curve Point Multiplication using Gaussian Integers
(2020)
Elliptic curve cryptography is a cornerstone of embedded security. However, hardware implementations of the elliptic curve point multiplication are prone to side channel attacks. In this work, we present a new key expansion algorithm which improves the resistance against timing and simple power analysis attacks. Furthermore, we consider a new concept for calculating the point multiplication, where the points of the curve are represented as Gaussian integers. Gaussian integers are a subset of the complex numbers, such that the real and imaginary parts are integers. Since Gaussian integer fields are isomorphic to prime fields, this concept is suitable for many elliptic curves. Representing the key by a Gaussian integer expansion is beneficial to reduce the computational complexity and the memory requirements of a secure hardware implementation.
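To make the Gaussian integer representation concrete, here is a minimal sketch, assuming a prime p = a^2 + b^2 with p congruent to 1 mod 4 (e.g. p = 13 with modulus 3 + 2i), of the modulo reduction that maps a prime field onto a set of Gaussian integers; the paper's key expansion itself is not reproduced here.

```python
def gauss_mod(z: complex, mod: complex) -> complex:
    """Reduce a Gaussian integer z modulo mod by rounding the quotient
    to the nearest Gaussian integer (Voronoi cell around the origin)."""
    n = mod.real ** 2 + mod.imag ** 2      # norm of the modulus
    w = z * mod.conjugate()
    q = complex(round(w.real / n), round(w.imag / n))
    return z - q * mod

# The residues of k = 0..12 modulo 3 + 2i form a Gaussian integer
# constellation isomorphic to the prime field GF(13)
residues = [gauss_mod(complex(k, 0), 3 + 2j) for k in range(13)]
```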
Ein Beitrag zum Beobachterentwurf und zur sensorlosen Folgeregelung translatorischer Magnetaktoren
(2020)
Flatness-based feedforward control of solenoid actuators is considered. For precise motion planning and accurate steering of conventional solenoids, eddy currents cannot be neglected. The system of ordinary differential equations including eddy currents, which describes the nonlinear dynamics of such actuators, is not differentially flat. Thus, a distributed parameter approach based on a diffusion equation is considered, which enables the parametrization of the eddy current by the armature position and its time derivatives. In order to design the feedforward control, the distributed parameter model of the eddy current subsystem is combined with a typical nonlinear lumped parameter model for the electrical and mechanical subsystems of the solenoid. The control design and its application are illustrated by numerical and practical results for an industrial solenoid actuator.
NAND flash memory is widely used for data storage due to low power consumption, high throughput, short random access latency, and high density. The storage density of the NAND flash memory devices increases from one generation to the next, albeit at the expense of storage reliability.
Our objective in this dissertation is to improve the reliability of the NAND flash memory at a low hardware implementation cost. We investigate the error characteristic, i.e., the various noise sources of the NAND flash memory. Based on the error behavior at different aging stages, we develop offset calibration techniques that minimize the bit error rate (BER).
Furthermore, we introduce data compression to reduce the write amplification effect and to support the error correction code (ECC) unit. In the first scenario, the numerical results show that data compression can reduce the wear-out by minimizing the amount of data that is written to the flash. In the ECC scenario, the compression gain is used to improve the ECC capability. Based on the first scenario, the write amplification effect can be halved for the considered target flash and data model. By combining ECC and data compression, the NAND flash memory lifetime improves threefold compared with uncompressed data for the same data model.
In order to improve the data reliability of the NAND flash memory, we investigate different ECC schemes based on concatenated codes like product codes, half-product codes, and generalized concatenated codes (GCC). We propose a construction for high-rate GCC for hard-input decoding. ECC based on soft-input decoding can significantly improve the reliability of NAND flash memories. Therefore, we propose a low-complexity soft-input decoding algorithm for high-rate GCC.
Extracting suitable features from acquired data to accurately depict the current health state of a system is crucial in data-driven condition monitoring and prediction. Usually, analogue sensor data is sampled at rates far exceeding the Nyquist rate, containing substantial amounts of redundancy and noise, which imposes high computational loads due to the subsequent and necessary feature processing chain (generation, dimensionality reduction, rating, and selection). To overcome these problems, Compressed Sensing can be used to sample directly to a compressed space, provided the signal at hand and the employed compression/measurement system meet certain criteria. Theory states that during this compression step enough information is conserved, such that a reconstruction of the original signal is possible with high probability. The proposed approach, however, does not rely on reconstructed data for condition monitoring purposes, but directly uses the compressed signal representation as feature vector. It is hence assumed that enough information for condition monitoring purposes is conveyed by the compression. To fuse the compressed coefficients into one health index that can be used as input for remaining useful life prediction algorithms and is limited to a reasonable range between 0 and 1, a logistic regression approach is used. Run-to-failure data of three translational electromagnetic actuators is used to demonstrate the health index generation procedure. A comparison to the time-domain ground truth signals obtained from Nyquist-sampled coil current measurements shows reasonable agreement. That is, underlying wear-out phenomena can be reproduced by the proposed approach, enabling further investigation of the application of prognostic methods.
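A minimal sketch of this pipeline, with synthetic stand-in data instead of the actual coil-current measurements: a random Gaussian matrix compresses each record, and a logistic model maps the compressed coefficients to a health index between 0 and 1. All signal parameters below are illustrative assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n, m = 2048, 64                                   # raw samples per record, compressed size
Phi = rng.standard_normal((m, n)) / np.sqrt(m)    # random Gaussian measurement matrix

def record(wear):
    # synthetic stand-in for a coil-current record; noise grows with wear
    return np.sin(np.linspace(0, 40 * np.pi, n)) + wear * rng.standard_normal(n)

# Compressed feature vectors for 50 healthy and 50 worn-out records
X = np.array([Phi @ record(w) for w in np.r_[np.full(50, 0.1), np.full(50, 1.0)]])
y = np.r_[np.zeros(50), np.ones(50)]              # 0 = healthy, 1 = worn out

model = LogisticRegression(max_iter=1000).fit(X, y)
# Predicted probability of the worn-out class serves as the health index
health_index = model.predict_proba((Phi @ record(0.5)).reshape(1, -1))[0, 1]
```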
This paper presents a new likelihood-based partitioning method of the measurement set for the extended object probability hypothesis density (PHD) filter framework. Recent work has mostly relied on heuristic partitioning methods that cluster the measurement data based on a distance measure between the single measurements. This can lead to poor filter performance if the tracked extended objects are closely spaced. The proposed method, called Stochastic Partitioning (StP), is based on sampling methods and was inspired by earlier work of Granström et al. In this work, the StP method is applied to a Gaussian inverse Wishart (GIW) PHD filter and compared to a second filter implementation that uses the heuristic Distance Partitioning (DP) method. The performance is evaluated in Monte Carlo simulations in a scenario where two objects approach each other. It is shown that the sampling-based StP method leads to an improved filter performance compared to DP.
The introduction of multi-level cell (MLC) and triple-level cell (TLC) technologies reduced the reliability of flash memories significantly compared with single-level cell (SLC) flash. The reliability of the flash memory suffers from various error causes. Program/erase cycles, read disturb, and cell-to-cell interference impact the threshold voltages. With predefined fixed read thresholds, a voltage shift increases the bit error rate (BER). This work proposes a read threshold calibration method that aims at minimizing the BER by adapting the read voltages. The adaptation of the read thresholds is based on the number of errors observed in the codeword protecting a small amount of metadata. Simulations based on flash measurements demonstrate that this method can significantly reduce the BER of TLC memories.
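The following sketch illustrates the calibration idea under stated assumptions: read_page(v) and count_errors(page) stand in for the device interface and for the error count obtained from the metadata codeword; the paper's actual adaptation rule may differ.

```python
def calibrate_threshold(read_page, count_errors, v0, step=0.05, trials=5):
    """Sweep the read voltage around the default v0 and keep the setting
    that minimizes the error count observed in the metadata codeword.
    read_page(v) -> page and count_errors(page) -> int are assumed interfaces."""
    best_v, best_e = v0, count_errors(read_page(v0))
    for k in range(1, trials + 1):
        for v in (v0 - k * step, v0 + k * step):    # widen the search symmetrically
            e = count_errors(read_page(v))
            if e < best_e:
                best_v, best_e = v, e
    return best_v
```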
The Lempel-Ziv-Welch (LZW) algorithm is an important dictionary-based data compression approach that is used in many communication and storage systems. The parallel dictionary LZW (PDLZW) algorithm speeds up the LZW encoding by using multiple dictionaries. The PDLZW algorithm applies different dictionaries to store strings of different lengths, where each dictionary stores only strings of the same length. This simplifies the parallel search in the dictionaries for hardware implementations. The compression gain of the PDLZW depends on the partitioning of the address space, i.e. on the sizes of the parallel dictionaries. However, there is no universal partitioning that is optimal for all data sources. This work proposes an address space partitioning technique that optimizes the compression rate of the PDLZW using a Markov model for the data. Numerical results for address spaces with 512, 1024, and 2048 entries demonstrate that the proposed partitioning improves the performance of the PDLZW compared with the original proposal.
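As a hedged sketch of the PDLZW principle (not the paper's optimized Markov-model partitioning), the encoder below keeps one dictionary per string length; sizes[k] is the capacity of the dictionary holding strings of length k + 2, so the address space partitioning is exactly the choice of these sizes.

```python
def pdlzw_encode(data: bytes, sizes):
    """Simplified PDLZW encoder: dictionary 0 is the byte alphabet,
    dicts[k] stores only strings of length k + 2 (parallel search)."""
    dicts = [dict() for _ in sizes]            # dicts[k]: length-(k+2) string -> index
    base = [256]                               # address offsets per dictionary
    for s in sizes:
        base.append(base[-1] + s)
    out, p = [], 0
    while p < len(data):
        L, addr = 1, data[p]                   # fallback: single byte address
        for k in range(len(sizes)):            # search all dictionaries in parallel
            cand = data[p:p + k + 2]
            if len(cand) == k + 2 and cand in dicts[k]:
                L, addr = k + 2, base[k] + dicts[k][cand]
        out.append(addr)
        nxt = data[p:p + L + 1]                # learn matched string plus one byte
        k = L - 1                              # dictionary for length L + 1
        if len(nxt) == L + 1 and k < len(sizes) and len(dicts[k]) < sizes[k] and nxt not in dicts[k]:
            dicts[k][nxt] = len(dicts[k])
        p += L
    return out

addresses = pdlzw_encode(b"abababab", sizes=[4, 4, 4])
```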
This work proposes a suboptimal detection algorithm for generalized multistream spatial modulation. Many suboptimal detection algorithms for spatial modulation use two-stage detection schemes where the set of active antennas is detected in the first stage and the transmitted symbols in the second stage. For multistream spatial modulation with large signal constellations the second detection step typically dominates the detection complexity. With the proposed detection scheme, the modified Gaussian approximation method is used for detecting the antenna pattern. In order to reduce the complexity for detecting the signal points, we propose a combined equalization and list decoding approach. Simulation results demonstrate that the new algorithm achieves near-maximum-likelihood performance with small list sizes. It significantly reduces the complexity when compared with conventional two-stage detection schemes.
This work introduces new signal constellations based on Eisenstein integers, i.e., the hexagonal lattice. These sets of Eisenstein integers have a cardinality which is an integer power of three. They are proposed as signal constellations for representation in the equivalent complex baseband model, especially for applications like physical-layer network coding or MIMO transmission where the constellation is required to be a subset of a lattice. It is shown that these constellations form additive groups where the addition over the complex plane corresponds to the addition with carry over ternary Galois fields. A ternary set partitioning is derived that enables multilevel coding based on ternary error-correcting codes. In the subsets, this partitioning achieves a gain of 4.77 dB, which results from an increased minimum squared Euclidean distance of the signal points. Furthermore, the constellation-constrained capacities over the AWGN channel and the related level capacities in case of ternary multilevel coding are investigated. Simulation results for multilevel coding based on ternary LDPC codes are presented which show that a performance close to the constellation-constrained capacities can be achieved.
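For illustration, one simple way to enumerate a hexagonal set of 3^k Eisenstein integers is sketched below; note that the paper constructs specific sets with an additive group structure, which this naive nearest-to-origin selection does not guarantee.

```python
import cmath

OMEGA = cmath.exp(2j * cmath.pi / 3)   # primitive cube root of unity

def eisenstein_set(k, radius=6):
    """Pick the 3**k Eisenstein integers a + b*omega closest to the origin
    (ties broken by angle) as a hexagonal signal constellation."""
    pts = [a + b * OMEGA
           for a in range(-radius, radius + 1)
           for b in range(-radius, radius + 1)]
    pts.sort(key=lambda z: (round(abs(z), 9), cmath.phase(z)))
    return pts[:3 ** k]

const9 = eisenstein_set(2)   # 9-point hexagonal constellation
```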
The computational complexity of the optimal maximum likelihood (ML) detector for spatial modulation increases rapidly as more transmit antennas or larger modulation orders are employed. Hence, ML detection may be infeasible for higher bit rates. This work proposes an improved suboptimal detection algorithm based on the Gaussian approximation method. It is demonstrated that the new method is closely related to the previously published signal vector based detection and the modified maximum ratio combiner, but can improve the detection performance compared to these methods. Furthermore, the performance of different signal constellations with suboptimal detection is investigated. Simulation results indicate that the performance loss compared to ML detection depends heavily on the signal constellation, where the recently proposed Eisenstein integer constellations are beneficial compared to classical QAM or PSK constellations.
Error correction coding (ECC) for optical communication and persistent storage systems requires high-rate codes that enable high data throughput and low residual errors. Recently, different concatenated coding schemes were proposed that are based on binary Bose-Chaudhuri-Hocquenghem (BCH) codes that have low error correcting capabilities. Commonly, hardware implementations for BCH decoding are based on the Berlekamp-Massey algorithm (BMA). However, for single, double, and triple error correcting BCH codes, Peterson's algorithm can be more efficient than the BMA. The known hardware architectures of Peterson's algorithm require Galois field inversion. This inversion dominates the hardware complexity and limits the decoding speed. This work proposes an inversion-less version of Peterson's algorithm. Moreover, a decoding architecture is presented that is faster than decoders that employ inversion or the fully parallel BMA at a comparable circuit size.
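A minimal sketch of the inversion-free idea for a double-error-correcting binary BCH code over GF(2^4): multiplying the usual Peterson error-locator polynomial by S1 removes the division without changing its roots. The field uses the primitive polynomial x^4 + x + 1; this is an illustration, not the paper's hardware architecture.

```python
# GF(2^4) log/antilog tables, primitive polynomial x^4 + x + 1
EXP, LOG = [0] * 30, [0] * 16
x = 1
for i in range(15):
    EXP[i], LOG[x] = x, i
    x <<= 1
    if x & 0x10:
        x ^= 0b10011
for i in range(15, 30):
    EXP[i] = EXP[i - 15]          # wrap-around avoids a modulo in gmul

def gmul(a, b):
    return 0 if a == 0 or b == 0 else EXP[LOG[a] + LOG[b]]

def locator_no_inversion(S1, S3):
    """Scaled locator S1 + S1^2 x + (S3 + S1^3) x^2 for t = 2:
    same roots as the Peterson solution, but no field inversion."""
    return S1, gmul(S1, S1), S3 ^ gmul(S1, gmul(S1, S1))

def chien_search(c0, c1, c2):
    # roots over the nonzero field elements; the error positions
    # follow from the exponents of the roots
    return [a for a in range(1, 16)
            if c0 ^ gmul(c1, a) ^ gmul(c2, gmul(a, a)) == 0]
```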
It is well known that signal constellations which are based on a hexagonal grid, so-called Eisenstein constellations, exhibit a performance gain over conventional QAM ones. This benefit is realized by a packing and shaping gain of the Eisenstein (hexagonal) integers in comparison to the Gaussian (complex) integers. Such constellations are especially relevant in transmission schemes that utilize lattice structures, e.g., in MIMO communications. However, for coded modulation, the straightforward approach is to combine Eisenstein constellations with ternary channel codes. In this paper, a multilevel-coding approach is proposed where encoding and multistage decoding can directly be performed with state-of-the-art binary channel codes. An associated mapping and a binary set partitioning are derived. The performance of the proposed approach is contrasted to classical multilevel coding over QAM constellations. To this end, both the single-user AWGN scenario and the (multiuser) MIMO broadcast scenario using lattice-reduction-aided preequalization are considered. Results obtained from numerical simulations with LDPC codes complement the theoretical aspects.
The Lempel–Ziv–Welch (LZW) algorithm is an important dictionary-based data compression approach that is used in many communication and storage systems. The parallel dictionary LZW (PDLZW) algorithm speeds up the LZW encoding by using multiple dictionaries. This simplifies the parallel search in the dictionaries. However, the compression gain of the PDLZW depends on the partitioning of the address space, i.e. on the sizes of the parallel dictionaries. This work proposes an address space partitioning technique that optimises the compression rate of the PDLZW. Numerical results for address spaces with 512, 1024, and 2048 entries demonstrate that the proposed address partitioning improves the performance of the PDLZW compared with the original proposal. These address space sizes are suitable for flash storage systems. Moreover, the PDLZW has relatively high memory requirements which dominate the costs of a hardware implementation. This work proposes a recursive dictionary structure and a word partitioning technique that significantly reduce the memory size of the parallel dictionaries.
Flash memories are non-volatile memory devices. The rapid development of flash technologies leads to higher storage density, but also to higher error rates. This dissertation considers this reliability problem of flash memories and investigates suitable error correction codes, e.g. BCH codes and concatenated codes. First, the flash cells, their functionality, and their error characteristics are explained. Next, the mathematics of the employed algebraic codes are discussed. Subsequently, generalized concatenated codes (GCC) are presented. Compared to the commonly used BCH codes, concatenated codes promise higher code rates and lower implementation complexity. This complexity reduction is achieved by dividing a long code into smaller components, which require smaller Galois field sizes. The algebraic decoding algorithms enable an analytical determination of the block error rate. Thus, it is possible to guarantee very low residual error rates for flash memories. Besides the complexity reduction, generalized concatenated codes can exploit soft information. Such soft decoding is not practicable for long BCH codes. In this dissertation, two soft decoding methods for GCC are presented and analyzed. These methods are based on Chase decoding and the stack algorithm. The latter method explicitly uses the generalized concatenated code structure, where the component codes are nested subcodes. This property supports the complexity reduction. Moreover, the two-dimensional structure of GCC enables the correction of error patterns with statistical dependencies. One chapter of the thesis demonstrates how the concatenated codes can be used to correct two-dimensional cluster errors. For this purpose, a two-dimensional interleaver is designed with the help of Gaussian integers. This design achieves the correction of cluster errors with the best possible radius. Large parts of this work are dedicated to the question of how the decoding algorithms can be implemented in hardware. These hardware architectures, their throughput, and their logic size are presented for long BCH codes and generalized concatenated codes. The results show that generalized concatenated codes are suitable for error correction in flash memories, especially for three-dimensional NAND memory systems used in industrial applications, where low residual error rates must be guaranteed.
In this paper, the problem of controlling the dissolved oxygen level (DO) during an aerobic fermentation is considered. The proposed approach deals with three major difficulties: the nonlinear dynamics of the DO, the poor accuracy of the empirical models for the oxygen consumption rate, and the fact that only sampled measurements are available on-line. A nonlinear integral high-gain control law including a continuous-discrete time observer is designed to keep the DO in the neighborhood of a set point value without any knowledge of the dissolved oxygen consumption rate. The local stability of the control algorithm is proved using Lyapunov tools. The performance of the control scheme is first analyzed in simulation and then experimentally evaluated during a successful fermentation of the bacteria Pseudomonas putida mt-2 over a period of three days.
Error correction coding based on soft-input decoding can significantly improve the reliability of flash memories. Such soft-input decoding algorithms require reliability information about the state of the memory cell. This work proposes a channel model for soft-input decoding that considers the asymmetric error characteristic of multi-level cell (MLC) and triple-level cell (TLC) memories. Based on this model, an estimation method for the channel state information is devised which avoids additional pilot data for channel estimation. Furthermore, the proposed method supports page-wise read operations.
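As a small sketch of the kind of reliability information such a decoder consumes, the log-likelihood ratio of a binary asymmetric channel with crossover probabilities p01 = P(read 1 | wrote 0) and p10 = P(read 0 | wrote 1) can be computed as follows; equiprobable inputs are assumed, and the paper's model additionally distinguishes cell states.

```python
import math

def bac_llr(y: int, p01: float, p10: float) -> float:
    """LLR log(P(y|0)/P(y|1)) of the transmitted bit for a binary
    asymmetric channel, assuming equiprobable input bits."""
    if y == 0:
        return math.log((1 - p01) / p10)
    return math.log(p01 / (1 - p10))

# Asymmetry makes a read 1 much less trustworthy than a read 0 here
llr0 = bac_llr(0, p01=1e-3, p10=1e-2)
llr1 = bac_llr(1, p01=1e-3, p10=1e-2)
```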
In the field of autonomously driving vehicles, environment perception covering dynamic objects like other road users is essential. In particular, detecting other vehicles in road traffic using sensor data is of utmost importance. As the sensor data and the applied system model for the objects of interest are corrupted by noise, a filter algorithm must be used to track moving objects. With LIDAR sensors, one object gives rise to more than one measurement per time step and is therefore called an extended object. This makes it possible to jointly estimate the object's position as well as its orientation, extension, and shape. Estimating an arbitrarily shaped object comes with a higher computational effort than estimating the shape of an object that can be approximated by a basic geometric shape like an ellipse or a rectangle. In the case of a vehicle, a rectangular shape is an accurate assumption.
A recently developed approach models the contour of a vehicle as a periodic B-spline function. This representation is an easy-to-use tool, as the contour can be specified by a few basis points in Cartesian coordinates. Rotating, scaling, and moving the contour are also easy to handle with a spline contour. This contour model can be used to develop a measurement model for extended objects, which can be integrated into a tracking filter. Another approach to modeling the shape of a vehicle is the so-called bounding box, which represents the shape as a rectangle.
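A minimal sketch of this contour representation using SciPy's periodic spline fitting; the basis points below are hypothetical and merely outline a car-sized shape.

```python
import numpy as np
from scipy.interpolate import splprep, splev

# Hypothetical basis points roughly outlining a 4.5 m x 1.8 m vehicle contour
pts = np.array([[0.0, 0.0], [2.2, -0.2], [4.5, 0.0], [4.7, 0.9],
                [4.5, 1.8], [2.2, 2.0], [0.0, 1.8], [-0.2, 0.9]])
pts = np.vstack([pts, pts[:1]])                  # close the contour

# Fit a periodic cubic B-spline through the points and sample it densely
tck, _ = splprep(pts.T, s=0, per=True)
cx, cy = splev(np.linspace(0, 1, 200), tck)

# Rotating, scaling and translating act directly on the sampled contour
theta, t = 0.3, np.array([[10.0], [5.0]])
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
contour = R @ np.vstack([cx, cy]) + t
```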
In this thesis, the basics of single-, multi- and extended-object tracking, as well as the basics of B-spline functions, are addressed. Afterwards, the spline measurement model is established in detail and integrated into an extended Kalman filter to track a single extended object. An implementation of the resulting algorithm is compared with the rectangular shape estimator, whose implementation is provided. The comparison is carried out using long-term considerations with Monte Carlo simulations and by analyzing the results of a single run; to this end, both algorithms are applied to the same measurements. The measurements are generated by an artificial LIDAR sensor in a simulation environment.
In a real-world tracking scenario, several extended objects may have to be detected, along with measurements that do not originate from a real object, called clutter measurements. The sudden appearance and disappearance of an object is also possible. A filter framework investigated in recent years that can handle tracking multiple objects in a cluttered environment is the random finite set based approach. The idea of random finite sets and their use in a tracking filter is reviewed in this thesis. Afterwards, the spline measurement model is included in a multi extended object tracking framework. An implementation of the resulting filter is investigated in a long-term consideration using Monte Carlo simulations and by analyzing the results of a single run. The multi extended object filter is also applied to artificial LIDAR measurements generated in a simulation environment.
The comparison of the spline-based and rectangle-based extended object trackers shows a more stable performance of the spline extended object tracker. Some problems that have to be addressed in future work are also discussed. The investigation of the resulting multi extended object tracker shows a successful integration of the spline measurement model into a multi extended object tracking framework. Here as well, some problems remain that have to be solved in future work.
This paper describes an early lumping approach for generating a mathematical model of the heating process of a moving dual-layer substrate. The heat is supplied by convection and nonlinearly distributed over the whole considered spatial extent of the substrate. Using CFD simulations as a reference, two different modelling approaches have been investigated in order to find the most suitable model type. It is shown that, due to the possibility of using the transition matrix for time discretization, an equivalent circuit model achieves superior results when compared to the Crank-Nicolson method. In order to maintain a constant sampling time for the envisioned control strategies, the effect of variable speed is transformed into a system description where the state vector has constant length but a variable number of non-zero entries. The handling of the variable transport speed during the heating process is considered the main contribution of this work. The result is a model suitable for use in future control strategies.
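The transition-matrix discretization mentioned above can be sketched as follows: for a linear state-space model, the exact zero-order-hold discrete-time matrices follow from a single matrix exponential of an augmented system matrix. The A and B values below are placeholders, not the substrate model.

```python
import numpy as np
from scipy.linalg import expm

def discretize_zoh(A, B, T):
    """Exact zero-order-hold discretization x[k+1] = Ad x[k] + Bd u[k]
    via the matrix exponential of the augmented matrix [[A, B], [0, 0]]."""
    n, m = A.shape[0], B.shape[1]
    M = np.zeros((n + m, n + m))
    M[:n, :n], M[:n, n:] = A, B
    Md = expm(M * T)
    return Md[:n, :n], Md[:n, n:]     # Ad, Bd

# Two-node RC-style thermal equivalent circuit as a stand-in example
A = np.array([[-2.0, 1.0], [1.0, -1.5]])
B = np.array([[1.0], [0.0]])
Ad, Bd = discretize_zoh(A, B, T=0.1)
```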
This paper focuses on the multivariable control of a drawing tower process. The nature of the process, together with the differences in measurement noise levels that affect the variables to be controlled, motivated the development of a new MPC algorithm. An extension of a multivariable predictive control algorithm with separated prediction horizons is proposed. The obtained experimental results show the usefulness of the proposed algorithm.
Comparison and Identifiability Analysis of Friction Models for the Dither Motion of a Solenoid
(2018)
In this paper, the mechanical subsystem of a proportional solenoid excited by a dither signal is considered. The objective is to find a suitable friction model that reflects the characteristic mechanical properties of the dynamic system. Several different friction models from the literature are compared. The friction models are evaluated with respect to their accuracy as well as their practical identifiability, the latter being quantified based on the Fisher information matrix.
A constructive nonlinear observer design for self-sensing of digital (ON/OFF) single-coil electromagnetic actuators is studied. Self-sensing in this context means that solely the available energizing signals, i.e., coil current and driving voltage, are used to estimate the position and velocity trajectories of the moving plunger. A nonlinear sliding mode observer is considered, where the stability of the reduced error dynamics is analyzed by the equivalent control method. No simplifications are made regarding magnetic saturation and eddy currents in the underlying dynamical model. The observer gains are constructed by taking into account some generic properties of the system's nonlinearities. Two possible choices of the observer gains are discussed. Furthermore, an observer-based tracking control scheme to achieve sensorless soft landing is considered and its closed-loop stability is studied. Experimental results for observer-based soft landing of a fast-switching solenoid valve under dry conditions are presented to demonstrate the usefulness of the approach.
A constructive method for the design of nonlinear observers is discussed. To formulate conditions for the construction of the observer gains, stability results for nonlinear singularly perturbed systems are utilised. The nonlinear observer is designed directly in the given coordinates, where the error dynamics between the plant and the observer becomes singularly perturbed by a high-gain part of the observer injection, and the information of the slow manifold is exploited to construct the observer gains of the reduced-order dynamics. This is in contrast to typical high-gain observer approaches, where the observer gains are chosen such that the nonlinearities are dominated by a linear system. It will be demonstrated that the considered approach is particularly suited for self-sensing electromechanical systems. Two variants of the proposed observer design are illustrated for a nonlinear electromagnetic actuator, where the mechanical quantities, i.e. the position and the velocity, are not measured.
Autonomously moving systems require very detailed information about their environment and potentially colliding objects. Thus, the systems are equipped with high-resolution sensors. These sensors typically generate more than one detection per object per time step. This results in additional complexity for the target tracking algorithm, since standard tracking filters assume that an object generates at most one detection per time step. This requires new methods for data association and system state filtering.
As new data association methods, this thesis proposes two different extensions of the Joint Integrated Probabilistic Data Association (JIPDA) filter that assign more than one detection to each track.
The first method is a generalization of the JIPDA that assigns a variable number of measurements to each track based on predefined statistical models; it is called Multi Detection - Joint Integrated Probabilistic Data Association (MD-JIPDA).
Since this scheme suffers from an exponential increase in the number of association hypotheses, a new approximation scheme is also presented. The second method is an extension for the special case where the number and locations of measurements are known a priori. In preparation for this method, a new notation and computation scheme for the standard Joint Integrated Probabilistic Data Association is outlined, which also enables the derivation of a new fast approximation scheme called balanced permanent-JIPDA.
For state filtering, two different concepts are applied as well: the Random Matrix framework and the Measurement Generating Points. For the Random Matrix framework, an alternative prediction method is first proposed to account for kinematic state changes in the extension state prediction as well. Second, various update methods are investigated to address the polar-to-Cartesian noise transformation problem. The filtering concepts are combined with the new MD-JIPDA and their characteristics are analyzed in various Monte Carlo simulations.
If an object can be modeled by a finite number of fixed Measurement Generating Points (MGP), a proposal to track such objects via a JIPDA filter is also made. In this context, a fast track-to-track fusion algorithm is proposed as well and compared against the MGP-JIPDA.
The proposed algorithms are evaluated in two applications where scanning is done using radar sensors only. The first application is a typical automotive scenario, where a passenger car is equipped with six radar sensors to cover its complete environment.
In this application, the locations of the measurements on an object can be considered stationary, and the object can be assumed to have a rectangular shape. Thus, the MGP-based algorithms are applied here. The filters are evaluated by tracking vehicles, especially on nearside lanes.
The second application covers the tracking of vessels on inland waters. Here, two different kinds of radar systems are applied, but for both sensors a uniform distribution of the measurements over the target's extent can be assumed. Furthermore, the assumption that the targets have an elliptical shape holds, and so the Random Matrix framework in combination with the MD-JIPDA is evaluated.
Exemplary test scenarios also illustrate the performance of this tracking algorithm.
Digital signatures for verifying the integrity of data, for example of software updates, are becoming increasingly important. In the field of embedded systems, symmetric encryption schemes are currently still predominantly used to compute an authentication code because of their low complexity. Asymmetric cryptosystems are computationally more demanding but offer more security, because the key used for authentication does not have to be kept secret. Asymmetric signature schemes are typically computed in two stages. The key is not applied directly to the data but to their hash value, which is computed beforehand using a hash function. To use these schemes in embedded systems, the hash function must provide a sufficiently high data throughput. This contribution presents an efficient hardware implementation of the SHA-256 hash function.
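A small sketch of the two-stage structure described above: the hash function condenses the payload, and only the digest enters the expensive private-key operation. The signing step itself is abstracted away here; only the hashing stage is shown.

```python
import hashlib

def digest_for_signing(update_image: bytes) -> bytes:
    """First stage of a two-stage signature: hash the (potentially large)
    payload so the asymmetric operation only processes 32 bytes."""
    return hashlib.sha256(update_image).digest()

h = digest_for_signing(b"firmware update payload ...")
# Second stage (not shown): apply the private-key operation, e.g. an
# ECDSA signature, to h; verification recomputes h and checks the signature.
```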
Embodiments are generally related to the field of channel and source coding of data to be sent over a channel, such as a communication link or a data memory. Some specific embodiments are related to a method of encoding data for transmission over a channel, a corresponding decoding method, a coding device for performing one or both of these methods and a computer program comprising instructions to cause said coding device to perform one or both of said methods.
This work proposes a construction for low-density parity-check (LDPC) codes over finite Gaussian integer fields. Furthermore, a new channel model for codes over Gaussian integers is introduced and its channel capacity is derived. This channel can be considered as a first order approximation of the additive white Gaussian noise channel with hard decision detection where only errors to nearest neighbors in the signal constellation are considered. For this channel, the proposed LDPC codes can be decoded with a simple non-probabilistic iterative decoding algorithm similar to Gallager's decoding algorithm A.
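As a hedged illustration of such low-complexity iterative decoding, the binary bit-flipping sketch below flips the bits participating in the largest number of unsatisfied checks; the paper's algorithm operates symbol-wise over Gaussian integer fields, which this binary toy version does not capture.

```python
import numpy as np

def bit_flip_decode(H, y, max_iter=50):
    """Hard-decision bit-flipping decoding for a binary parity-check matrix H."""
    x = y.copy()
    for _ in range(max_iter):
        s = H @ x % 2                    # syndrome: unsatisfied checks
        if not s.any():
            return x, True
        unsat = H.T @ s                  # failed-check count per bit
        x[unsat == unsat.max()] ^= 1     # flip the most suspicious bits
    return x, False

# (7,4) Hamming code, all-zero codeword with a single bit error
H = np.array([[1, 1, 0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1]])
decoded, ok = bit_flip_decode(H, np.array([1, 0, 0, 0, 0, 0, 0]))
```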
Generalized concatenated (GC) codes with soft-input decoding were recently proposed for error correction in flash memories. This work proposes a soft-input decoder for GC codes that is based on a low-complexity bit-flipping procedure. This bit-flipping decoder uses a fixed number of test patterns and an algebraic decoder for soft-input decoding. An acceptance criterion for the final candidate codeword is proposed. Combined with error and erasure decoding of the outer Reed-Solomon codes, this bit-flipping decoder can improve the decoding performance and reduce the decoding complexity compared to the previously proposed sequential decoding. The bit-flipping decoder achieves a decoding performance similar to a maximum likelihood decoder for the inner codes.
Error correction coding based on soft-input decoding can significantly improve the reliability of non-volatile flash memories. This work proposes a soft-input decoder for generalized concatenated (GC) codes. GC codes are well suited for error correction in flash memories for high reliability data storage. We propose GC codes constructed from inner extended binary Bose-Chaudhuri-Hocquenghem (BCH) codes and outer Reed-Solomon codes. The extended BCH codes enable an efficient hard-input decoding. Furthermore, a low-complexity soft-input decoding method is proposed. This bit-flipping decoder uses a fixed number of test patterns and an algebraic decoder for soft-decoding. An acceptance criterion for the final candidate codeword is proposed. Combined with error and erasure decoding of the outer Reed-Solomon codes, this acceptance criterion can improve the decoding performance and reduce the decoding complexity. The presented simulation results show that the proposed bit-flipping decoder in combination with outer error and erasure decoding can outperform maximum likelihood decoding of the inner codes.
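The test-pattern principle can be sketched as follows; this is a hedged, simplified version in which algebraic_decode is an assumed interface returning a codeword or None, and the real decoder additionally feeds erasure information to the outer Reed-Solomon stage.

```python
import itertools
import numpy as np

def test_pattern_decode(llr, algebraic_decode, n_flip=3):
    """Soft-input decoding with a fixed set of test patterns: flip subsets
    of the least reliable bits, run the algebraic hard decoder on each
    pattern, and keep the candidate best matching the channel values."""
    hard = (llr < 0).astype(int)
    weak = np.argsort(np.abs(llr))[:n_flip]          # least reliable positions
    best, best_corr = None, -np.inf
    for pattern in itertools.product([0, 1], repeat=n_flip):
        trial = hard.copy()
        trial[weak] ^= np.array(pattern)
        cand = algebraic_decode(trial)               # assumed interface
        if cand is None:
            continue                                 # decoding failure
        corr = np.sum((1 - 2 * cand) * llr)          # correlation metric
        if corr > best_corr:
            best, best_corr = cand, corr
    return best
```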
The Burrows–Wheeler transformation (BWT) is a reversible block sorting transform that is an integral part of many data compression algorithms. This work proposes a memory-efficient pipelined decoder for the BWT. In particular, the authors consider the limited context order BWT, which has low memory requirements and enables fast encoding. However, the decoding of the limited context order BWT is typically much slower than the encoding. The proposed decoder pipeline provides a fast inverse BWT by splitting the decoding into several processing stages which are executed in parallel.
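For reference, the core of the inverse transform for the full-context BWT is a single pass over a precomputed index permutation; the limited context order variant in the paper restricts the sort to a bounded context, and the proposed pipeline splits this loop into parallel stages.

```python
def inverse_bwt(last_column: str, idx: int) -> str:
    """Invert the BWT given the last column and the row index of the
    original string among the sorted rotations."""
    n = len(last_column)
    # stable sort gives the mapping from first-column rows to last-column rows
    order = sorted(range(n), key=lambda i: last_column[i])
    out, i = [], idx
    for _ in range(n):
        i = order[i]
        out.append(last_column[i])
    return ''.join(out)

assert inverse_bwt("annb$aa", 4) == "banana$"
```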
The introduction of multiple-level cell (MLC) and triple-level cell (TLC) technologies reduced the reliability of flash memories significantly compared with single-level cell flash. With MLC and TLC flash cells, the error probability varies between the different states. Hence, asymmetric models are required to characterize the flash channel, e.g., the binary asymmetric channel (BAC). This contribution presents a combined channel and source coding approach improving the reliability of MLC and TLC flash memories. With flash memories, data compression has to be performed at block level on short data blocks. We present a coding scheme suitable for blocks of 1 kB of data. The objective of the data compression algorithm is to reduce the amount of user data such that the redundancy of the error correction coding can be increased in order to improve the reliability of the data storage system. Moreover, data compression can be utilized to exploit the asymmetry of the channel in order to reduce the error probability. With redundant data, the proposed combined coding scheme results in a significant improvement of the program/erase cycling endurance and the data retention time of flash memories.
Generalised concatenated (GC) codes are well suited for error correction in flash memories for high-reliability data storage. The GC codes are constructed from inner extended binary Bose–Chaudhuri–Hocquenghem (BCH) codes and outer Reed–Solomon codes. The extended BCH codes enable high-rate GC codes and low-complexity soft input decoding. This work proposes a decoder architecture for high-rate GC codes. For such codes, outer error and erasure decoding are mandatory. A pipelined decoder architecture is proposed that achieves a high data throughput with hard input decoding. In addition, a low-complexity soft input decoder is proposed. This soft decoding approach combines a bit-flipping strategy with algebraic decoding. The decoder components for the hard input decoding can be utilised which reduces the overhead for the soft input decoding. Nevertheless, the soft input decoding achieves a significant coding gain compared with hard input decoding.
Simon Grimm examines new multi-microphone signal processing strategies that aim to achieve noise reduction and dereverberation. To this end, narrow-band signal enhancement approaches are combined with broad-band processing in terms of directivity-based beamforming. Previously introduced formulations of the multichannel Wiener filter rely on the second order statistics of the speech and noise signals. The author analyses how additional knowledge about the location of a speaker as well as the microphone arrangement can be used to achieve further noise reduction and dereverberation.
This work studies a wind noise reduction approach for communication applications in a car environment. An endfire array consisting of two microphones is considered as a substitute for an ordinary cardioid microphone capsule of the same size. Using the decomposition of the multichannel Wiener filter (MWF), a suitable beamformer and a single-channel post filter are derived. Due to the known array geometry and the location of the speech source, assumptions about the signal properties can be made to simplify the MWF beamformer and to estimate the speech and noise power spectral densities required for the post filter. Even for closely spaced microphones, the different signal properties at the microphones can be exploited to achieve a significant reduction of wind noise. The proposed beamformer approach results in an improved speech signal regarding the signal-to-noise-ratio and keeps the linear speech distortion low. The derived post filter shows equal performance compared to known approaches but reduces the effort for noise estimation.
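A per-frequency-bin sketch of this decomposition under stated assumptions (known steering vector from the array geometry and speech source location; all numbers below are placeholders): the MWF splits into an MVDR beamformer and a single-channel Wiener post filter.

```python
import numpy as np

def mwf_split(Rn, d, speech_psd, noise_psd):
    """One frequency bin of the decomposed multichannel Wiener filter:
    an MVDR beamformer followed by a single-channel Wiener post filter."""
    Rn_inv_d = np.linalg.solve(Rn, d)
    w_mvdr = Rn_inv_d / (d.conj() @ Rn_inv_d)        # distortionless beamformer
    xi = speech_psd / noise_psd                      # a-priori SNR at the output
    gain = xi / (1 + xi)                             # Wiener post-filter gain
    return w_mvdr, gain

# Two-microphone endfire array, one bin: steering vector for an assumed delay
f, tau = 1000.0, 5e-5                                # frequency (Hz) and TDOA (s)
d = np.array([1.0, np.exp(-2j * np.pi * f * tau)])
Rn = np.array([[1.0, 0.3], [0.3, 1.0]], dtype=complex)
w, g = mwf_split(Rn, d, speech_psd=2.0, noise_psd=0.5)
```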
Lernfabrik
(2016)
The introduction of cyber-physical systems in manufacturing will profoundly change working conditions and processes as well as business models. In practice, a growing gap between large enterprises and small and medium-sized enterprises (SMEs) can be observed. The learning factory (Lernfabrik) presented here is intended to bridge exactly this gap: it offers companies a platform for experimentation, provides training opportunities for students and employees, and offers consulting services. For its realization, an integrated, open, and standardized automation concept is presented that covers individual devices, entire production lines, and higher-level automation systems, and that also provides a community and supports the implementation of new business models.