Institut für Systemdynamik - ISD
This paper focuses on the multivariable control of a drawing tower process. The nature of the process, together with the differences in measurement noise levels that affect the controlled variables, motivated the development of a new MPC algorithm. An extension of a multivariable predictive control algorithm with separated prediction horizons is proposed. The obtained experimental results show the usefulness of the proposed algorithm.
Simon Grimm examines new multi-microphone signal processing strategies that aim to achieve noise reduction and dereverberation. To this end, narrow-band signal enhancement approaches are combined with broad-band processing in terms of directivity-based beamforming. Previously introduced formulations of the multichannel Wiener filter rely on the second-order statistics of the speech and noise signals. The author analyses how additional knowledge about the location of a speaker as well as the microphone arrangement can be used to achieve further noise reduction and dereverberation.
This work studies a wind noise reduction approach for communication applications in a car environment. An endfire array consisting of two microphones is considered as a substitute for an ordinary cardioid microphone capsule of the same size. Using the decomposition of the multichannel Wiener filter (MWF), a suitable beamformer and a single-channel post filter are derived. Due to the known array geometry and the location of the speech source, assumptions about the signal properties can be made to simplify the MWF beamformer and to estimate the speech and noise power spectral densities required for the post filter. Even for closely spaced microphones, the different signal properties at the microphones can be exploited to achieve a significant reduction of wind noise. The proposed beamformer approach results in an improved speech signal regarding the signal-to-noise ratio and keeps the linear speech distortion low. The derived post filter shows equal performance compared to known approaches but reduces the effort for noise estimation.
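The single-channel post filter described above applies a per-frequency gain derived from speech and noise power spectral densities. A minimal sketch of such a Wiener gain rule follows; the gain floor is an assumption for illustration, not a parameter from the paper:

```python
import numpy as np

def wiener_postfilter_gain(speech_psd, noise_psd, gain_floor=0.1):
    """Per-frequency Wiener gain G = S / (S + N), clipped to a floor
    to limit speech distortion in noise-dominated bins."""
    gain = speech_psd / (speech_psd + noise_psd)
    return np.maximum(gain, gain_floor)

# Bins where speech dominates keep a gain near 1;
# bins with no speech are attenuated toward the floor.
g = wiener_postfilter_gain(np.array([10.0, 0.0]), np.array([1.0, 1.0]))
```

The estimation of the two PSDs from the array signals is the part that the paper's known-geometry assumptions simplify.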
Acoustic Echo Cancellation (AEC) plays a crucial role in speech communication devices to enable full-duplex communication. AEC algorithms have been studied extensively in the literature. However, device specific details like microphone or loudspeaker configurations are often neglected, despite their impact on the echo attenuation or near-end speech quality. In this work, we propose a method to investigate different loudspeaker-microphone configurations with respect to their contribution to the overall AEC performance. A generic AEC system consisting of an adaptive filter and a Wiener post filter is used for a fair comparison between different setups. We propose the near-end-to-residual-echo ratio (NRER) and the attenuation-of-near-end (AON) as quality measures for the full-duplex AEC performance.
Multi-object tracking filters require a birth density to detect new objects from measurement data. If the initial positions of new objects are unknown, it may be useful to choose an adaptive birth density. In this paper, a circular birth density is proposed, which is placed like a band around the surveillance area. This allows for 360° coverage. The birth density is described in polar coordinates and considers all point-symmetric quantities such as radius, radial velocity and tangential velocity of objects entering the surveillance area. Since it is assumed that these quantities are unknown and may vary between different targets, detected trajectories, and in particular their initial states, are used to estimate the distribution of initial states. The adapted birth density is approximated as a Gaussian mixture, so that it can be used for filters operating on Cartesian coordinates.
Modeling a suitable birth density is a challenge when using Bernoulli filters such as the Labeled Multi-Bernoulli (LMB) filter. The birth density of newborn targets is unknown in most applications, but must be given as a prior to the filter. Usually the birth density stays unchanged or is designed based on the measurements from previous time steps.
In this paper, we assume that the true initial state of new objects is normally distributed. The expected value and covariance of the underlying density are unknown parameters. Using the estimated multi-object state of the LMB and the Rauch-Tung-Striebel (RTS) recursion, these parameters are recursively estimated and adapted after a target is detected.
The main contribution of this paper is an algorithm to estimate the parameters of the birth density and its integration into the LMB framework. Monte Carlo simulations are used to evaluate the detection driven adaptive birth density in two scenarios. The approach can also be applied to filters that are able to estimate trajectories.
Random matrices are used to filter the center of gravity (CoG) and the covariance matrix of measurements. However, these quantities do not always correspond directly to the position and the extent of the object, e.g. when a lidar sensor is used. In this paper, we propose a Gaussian process regression model (GPRM) to predict the position and extension of the object from the filtered CoG and covariance matrix of the measurements. Training data for the GPRM are generated by a sampling method and a virtual measurement model (VMM). The VMM is a function that generates artificial measurements using ray tracing and allows us to obtain the CoG and covariance matrix that any object would cause. This enables the GPRM to be trained without real data but still be applied to real data due to the precise modeling in the VMM. The results show an accurate extension estimation as long as the reality behaves like the modeling and e.g. lidar measurements only occur on the side facing the sensor.
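The two statistics that the random matrix approach filters, the center of gravity and the covariance (scatter) matrix of a measurement set, can be computed as follows (a minimal illustration, not the filter itself):

```python
import numpy as np

def measurement_moments(Z):
    """Center of gravity and covariance matrix of a set of 2-D
    measurements Z with shape (n, 2) -- the two quantities tracked
    by the random matrix filter."""
    cog = Z.mean(axis=0)
    centered = Z - cog
    cov = centered.T @ centered / len(Z)
    return cog, cov

# Four measurements on the corners of a square:
cog, cov = measurement_moments(
    np.array([[0.0, 0.0], [2.0, 0.0], [0.0, 2.0], [2.0, 2.0]]))
```

For lidar, these moments describe only the visible side of the object, which is exactly why the paper maps them to position and extent via the GPRM instead of using them directly.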
Virtual measurement models (VMM) can be used to generate artificial measurements and emulate complex sensor models such as Lidar. The input of the VMM is an estimation and the output is the set of measurements this estimation would cause. A Kalman filter with extension estimation based on random matrices is used to filter mean and covariance of the real measurements. If these match the mean and covariance of the artificial measurements, then the given estimation is appropriate. The optimal input of the VMM is found using an adaptation algorithm. In this paper, the VMM approach is expanded for multi-extended object tracking where objects can be occluded and are only partially visible. The occlusion can be compensated if the extension estimation is performed for all objects together. The VMM now receives an estimation of the multi-object state as input and outputs the measurements that this multi-object state would cause.
In multi-extended object tracking, parameters (e.g., extent) and trajectory are often determined independently. In this paper, we propose a joint parameter and trajectory (JPT) state and its integration into the Bayesian framework. This allows processing measurements that contain information about parameters and states. Examples of such measurements are bounding boxes given by an image processing algorithm. It is shown that this approach can consider correlations between states and parameters. In this paper, we present the JPT Bernoulli filter. Since parameters and state elements are considered in the weighting of the measurement data assignment hypotheses, the performance is higher than with the conventional Bernoulli filter. The JPT approach can also be used for other Bayes filters.
The random matrix approach is a robust algorithm to filter the mean and covariance matrix of noisy observations of a dynamic object. Afterward, virtual measurement models can be used to iteratively find the extent parameters of an object that would cause the same statistical moments within their measurements. In previous work, this was limited to elliptical targets and only contour measurements. In this paper, we introduce the parallel use of an elliptical, triangular and rectangular-shaped virtual measurement model and a shape classification that selects the model that fits best to the measurements. The measurement likelihood is modeled either via ray tracing, a uniform or normal spatial distribution over the object's extent, or as a combination of those. The results show that the extent estimation works precisely and that the classification accuracy highly depends on the measurement noise.
Extended Target Tracking With a Lidar Sensor Using Random Matrices and a Virtual Measurement Model
(2022)
Random matrices are widely used to estimate the extent of an elliptically contoured object. Usually, it is assumed that the measurements follow a normal distribution, with its standard deviation being proportional to the object’s extent. However, the random matrix approach can filter the center of gravity and the covariance matrix of measurements independently of the measurement model. This work considers the whole chain from data acquisition to the linear Kalman Filter with extension estimation as a reference plant. The input is the (unknown) ground truth (position and extent). The output is the filtered center of gravity and the filtered covariance matrix of the measurement distribution. A virtual measurement model emulates the behavior of the reference plant. The input of the virtual measurement model is adapted using the proposed algorithm until the output parameters of the virtual measurement model match the result of the reference plant. After the adaptation, the input to the virtual measurement model is considered an estimation for position and extent. The main contribution of this paper is the reference model concept and an adaptation algorithm to optimize the input of the virtual measurement model.
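The adaptation idea above — adjust the virtual measurement model's input until its output moments match those of the reference plant — can be sketched in one dimension. The toy VMM (an object of half-length `a` causing uniform measurements with variance `a²/3`) and the fixed-point update are illustrative assumptions; the paper's adaptation algorithm is more elaborate:

```python
def adapt_vmm_input(reference_cov, vmm, x0, steps=50, lr=0.5):
    """Adapt the scalar extent input of a virtual measurement model `vmm`
    until the variance it predicts matches the filtered reference
    variance.  Simple fixed-point iteration on the mismatch."""
    x = x0
    for _ in range(steps):
        err = reference_cov - vmm(x)
        x = x + lr * err  # move the extent in the direction of the mismatch
    return x

# Toy VMM: half-length a causes measurement variance a**2 / 3
# (uniform spatial distribution over the extent).
vmm = lambda a: a * a / 3.0
a_hat = adapt_vmm_input(reference_cov=3.0, vmm=vmm, x0=1.0)
# a_hat approaches 3, since 3**2 / 3 == 3.
```

After convergence, the VMM input is taken as the estimate of the object's extent, exactly as in the reference-model concept of the paper.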
Chapter 2 of this thesis describes the theoretical foundations of optimal control and the different methods of the path integral framework for controller synthesis. In addition, an approach for extending the stochastic NMPC is presented so that it adapts to the actual system dynamics. Furthermore, a method is developed and described that greatly increases the efficiency of the algorithm.
Chapter 3 shows how path integral control is used to swing up a Furuta pendulum.
In Chapter 4, the algorithms are applied to various problems in the context of a research boat. Among other things, it is shown how a path integral control algorithm can be used to autonomously dock the research vessel Solgenia at the jetty of HTWG Konstanz.
Finally, Chapter 5 draws conclusions from the results, puts them into context, and gives an outlook on possible future work.
Feature-Based Proposal Density Optimization for Nonlinear Model Predictive Path Integral Control
(2022)
This paper presents a novel feature-based sampling strategy for nonlinear Model Predictive Path Integral (MPPI) control. In MPPI control, the optimal control is calculated by solving a stochastic optimal control problem online using the weighted inference of stochastic trajectories. While the algorithm can be excellently parallelized, the closed-loop performance depends on the information quality of the drawn samples. Because these samples are drawn using a proposal density, its quality is crucial for the solver and thus the controller performance. In classical MPPI control, the explored state-space is strongly constrained by assumptions that refer to the control value variance, which are necessary for transforming the Hamilton-Jacobi-Bellman (HJB) equation into a linear second-order partial differential equation. To achieve excellent performance even with discontinuous cost functions, in this novel approach, knowledge-based features are used to determine the proposal density and thus the region of state-space for exploration. This paper addresses the question of how the performance of the MPPI algorithm can be improved using a feature-based mixture of base densities. Further, the developed algorithm is applied on an autonomous vessel that follows a track and concurrently avoids collisions using an emergency braking feature.
This paper presents a systematic comparison of different advanced approaches for motion prediction of vessels for docking scenarios. For this purpose, a conventional nonlinear gray-box model, its extension to a hybrid model using an additional regression neural network (RNN), and a black-box model based solely on an RNN are compared. The optimal hyperparameters are found by grid search. The training and validation data for the different models are collected in full-scale experiments using the solar research vessel Solgenia. The performances of the different prediction models are compared in full-scale scenarios. These can improve advanced control strategies, e.g., nonlinear model predictive control (NMPC) or reinforcement learning (RL). This paper explores the question of what the advantages and disadvantages of the different presented prediction approaches are and how they can be used to improve the docking behavior of a vessel.
In this paper, a novel feature-based sampling strategy for nonlinear Model Predictive Path Integral (MPPI) control is presented. Using the MPPI approach, the optimal feedback control is calculated by solving a stochastic optimal control problem (OCP) online by evaluating the weighted inference of sampled stochastic trajectories. While the MPPI algorithm can be excellently parallelized, the closed-loop performance strongly depends on the information quality of the sampled trajectories. To draw samples, a proposal density is used. The solver's, and thus the controller's, performance is of high quality if the sampled trajectories drawn from this proposal density are located in low-cost regions of state-space. In classical MPPI control, the explored state-space is strongly constrained by assumptions that refer to the control value's covariance matrix, which are necessary for transforming the stochastic Hamilton–Jacobi–Bellman (HJB) equation into a linear second-order partial differential equation. To achieve excellent performance even with discontinuous cost functions, in this novel approach, knowledge-based features are introduced to constitute the proposal density and thus the low-cost region of state-space for exploration. This paper addresses the question of how the performance of the MPPI algorithm can be improved using a feature-based mixture of base densities. Furthermore, the developed algorithm is applied to an autonomous vessel that follows a track and concurrently avoids collisions using an emergency braking feature. The presented feature-based MPPI algorithm is applied and analyzed in both simulation and full-scale experiments.
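The "weighted inference of sampled stochastic trajectories" at the core of MPPI reduces, per optimization step, to an exponentially weighted average over trajectory costs. A minimal sketch of the standard weighting rule (the temperature λ is a tuning parameter; this is the generic MPPI update, not the paper's feature-based extension):

```python
import numpy as np

def mppi_weights(costs, lam=1.0):
    """Information-theoretic MPPI weighting: a trajectory with low cost
    S_i gets weight w_i proportional to exp(-S_i / lam); the weights
    sum to 1 and are used to average the sampled control sequences."""
    shifted = costs - costs.min()   # subtract the minimum for numerical stability
    w = np.exp(-shifted / lam)
    return w / w.sum()

w = mppi_weights(np.array([1.0, 2.0, 10.0]))
# The lowest-cost trajectory dominates the weighted control average.
```

The feature-based proposal density in the paper changes where the samples come from; the weighting of the drawn samples remains of this form.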
This paper presents the swinging up and stabilization control of a Furuta pendulum using the recently published nonlinear Model Predictive Path Integral (MPPI) approach. This algorithm is based on a path integral over stochastic trajectories and can be parallelized easily. The controller parameters are tuned offline regarding the nonlinear system dynamics and simulations. Constraints in terms of state and input are taken into account in the cost function. The presented approach sequentially computes an optimal control sequence that minimizes this optimal control problem online. The control strategy has been tested in full-scale experiments using a pendulum prototype. The investigated MPPI controller has demonstrated excellent performance in simulation for the swinging up and stabilizing task. In order to also achieve outstanding performance in a real-world experiment using a controller with limited computing power, a linear quadratic controller (LQR) is designed for the stabilization task. In this paper, the determination of the controller parameters for the MPPI algorithm is described in detail. Further, a discussion treats the advantages of the nonlinear MPPI control.
Docking Control of a Fully-Actuated Autonomous Vessel using Model Predictive Path Integral Control
(2022)
This paper presents the docking control of an autonomous vessel using the nonlinear Model Predictive Path Integral (MPPI) approach. This algorithm is based on a path integral over stochastic trajectories and can be parallelized easily. The controller parameters are tuned offline using knowledge of the system and simulations, including a nonlinear state and disturbance observer. The cost function implicitly contains information regarding the surroundings of the docking position. This approach allows continuous optimization of the trajectory with respect to the system state, disturbance state and actuator dynamics. The control strategy has been tested in full-scale experiments using the solar research vessel Solgenia. The investigated MPPI controller has demonstrated excellent performance in both simulation and real-world experiments. This paper addresses the question of how the MPPI algorithm can be applied to dock a fully-actuated vessel and what benefits its application achieves.
This paper compares novel methods to efficiently include input constraints using the nonlinear Model Predictive Path Integral (MPPI) approach. The MPPI algorithm solves stochastic optimal control problems and is based on sampled trajectories. MPPI results from the physical path integral framework. Sample-based algorithms are characterized by the fact that they can be computed in parallel and offer the possibility to handle discontinuous dynamics and cost functions. However, using standard MPPI, the input costs in the Lagrange term have to be chosen quadratic. This fact is unfavorable for various real applications. Further, in standard nonlinear model predictive control (NMPC) approaches, hard box constraints on the control input trajectory can be treated directly. In this contribution, novel architectures based on integrator action are compared. The investigated input-constrained MPPI controllers were tested on an autonomous self-balancing vehicle. Both simulation and real-world experiments are presented. This paper addresses the question of how the MPPI algorithm can be further developed to consider input box constraints. Videos of the self-balancing vehicle are available at: https://tinyurl.com/mvn8j7vf
Recently published nonlinear model-based control approaches achieve impressive performance in complex real-world applications. However, due to model-plant mismatches and unforeseen disturbances, the model-based controller's performance is limited in full-scale applications. In most applications, low-level control loops mitigate the model-plant mismatch and the sensitivity to disturbances. But what is the influence of these low-level control loops? In this paper, we present the model predictive path integral (MPPI) control of a self-balancing vehicle and investigate the influence of subordinate control loops on closed-loop performance. For this purpose, simulation and full-scale experiments are performed and analyzed. Subordinate control loops empower the MPPI controller because they dampen the influence of disturbances and thus improve the model's accuracy. This is the basis for the successful application of model-based control approaches in real-world systems. All in all, a model is used to design a low-level controller, then its closed-loop behavior is determined, and this model is used within the superimposed MPPI control loop: modeling for control and vice versa.
This thesis presents the development of two different state-feedback controllers to solve the trajectory tracking problem, where the vessel needs to reach and follow a time-varying reference trajectory. This motion problem was addressed for a real-scaled fully-actuated surface vessel whose dynamic model had unknown hydrodynamic and propulsion parameters, which were identified by applying an experimental maneuver-based identification process. This dynamic model was then used to develop the controllers. The first one was the backstepping controller, which was designed with a local exponential stability proof. For the NMPC, the controller was developed to minimize the tracking error, considering the thrusters' constraints. Moreover, both controllers considered the thruster allocation problem and counteracted environmental disturbance forces such as current, waves and wind. The effectiveness of these approaches was verified in simulation using Matlab/Simulink and GRAMPC (in the case of the NMPC), and in experimental scenarios, where they were applied to the vessel, performing docking maneuvers at the Rhine River in Constance (Germany).
The trajectory tracking problem for a fully-actuated real-scaled surface vessel is addressed in this paper by designing a backstepping controller with a multivariable integral action, considering the thruster allocation problem. The performance and robustness of this controller are evaluated in simulation, taking into account environmental disturbance forces and modeling mismatch, using a docking maneuver as a reference trajectory. Furthermore, a comparison between the backstepping controller and a nonlinear position PID control with flatness-based feedforward is also analyzed.
The trajectory tracking problem for a real-scaled fully-actuated surface vessel is addressed in this paper. A nonlinear model predictive control (NMPC) scheme was designed to track a reference trajectory, considering state and input constraints, and environmental disturbances, which were assumed to be constant over the prediction horizon. The controller was tested by performing docking maneuvers using the real-scaled research vessel from the University of Applied Sciences Konstanz at the Rhine river in Germany. A comparison between the experimental results and the simulated ones was analyzed to validate the NMPC controller.
Trajectory Tracking of a Fully-actuated Surface Vessel using Nonlinear Model Predictive Control
(2021)
The trajectory tracking problem for a fully-actuated real-scaled surface vessel is addressed in this paper. The unknown hydrodynamic and propulsion parameters of the vessel’s dynamic model were identified using an experimental maneuver-based identification process. Then, a nonlinear model predictive control (NMPC) scheme is designed and the controller’s performance is assessed through the variation of NMPC parameters and constraints tightening for tracking a curved trajectory.
Extracting suitable features from acquired data to accurately depict the current health state of a system is crucial in data-driven condition monitoring and prediction. Usually, analogue sensor data is sampled at rates far exceeding the Nyquist rate, containing substantial amounts of redundancy and noise and imposing high computational loads due to the subsequent and necessary feature processing chain (generation, dimensionality reduction, rating and selection). To overcome these problems, Compressed Sensing can be used to sample directly into a compressed space, provided the signal at hand and the employed compression/measurement system meet certain criteria. Theory states that during this compression step enough information is conserved, such that a reconstruction of the original signal is possible with high probability. The proposed approach, however, does not rely on reconstructed data for condition monitoring purposes, but directly uses the compressed signal representation as feature vector. It is hence assumed that enough information is conveyed by the compression for condition monitoring purposes. To fuse the compressed coefficients into one health index that can be used as input for remaining useful life prediction algorithms and is limited to a reasonable range between 1 and 0, a logistic regression approach is used. Run-to-failure data of three translational electromagnetic actuators is used to demonstrate the health index generation procedure. A comparison to the time-domain ground truth signals obtained from Nyquist-sampled coil current measurements shows reasonable agreement, i.e. the underlying wear-out phenomena can be reproduced by the proposed approach, enabling further investigation of the application of prognostic methods.
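The pipeline above — compress the raw signal with a random measurement matrix, then fuse the coefficients into a bounded health index via logistic regression — can be sketched as follows. The Gaussian measurement matrix and the already-trained logistic weights are illustrative assumptions:

```python
import numpy as np

def compressed_features(signal, m, seed=0):
    """Compressed Sensing front end: project the Nyquist-sampled signal
    onto m random Gaussian measurement vectors (m << len(signal)),
    yielding the compressed coefficients used directly as features."""
    rng = np.random.default_rng(seed)
    Phi = rng.standard_normal((m, len(signal))) / np.sqrt(m)
    return Phi @ signal

def health_index(features, weights, bias):
    """Logistic regression fusion of the compressed coefficients into a
    single health index in (0, 1); weights and bias are assumed to be
    trained on run-to-failure data."""
    return 1.0 / (1.0 + np.exp(-(features @ weights + bias)))

z = compressed_features(np.sin(np.linspace(0.0, 10.0, 1000)), m=32)
# 1000 time-domain samples reduced to 32 compressed coefficients.
```

No reconstruction step appears anywhere: the compressed coefficients themselves carry the condition information.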
Lernfabrik
(2016)
The introduction of cyber-physical systems in manufacturing will profoundly change working conditions and processes as well as business models. In practice, a growing gap between large companies and SMEs can be observed. The learning factory presented here is intended to bridge exactly this gap: it offers companies a platform for experimentation, provides opportunities for training students and employees, and offers consulting services. For its implementation, an integrated, open, and standardized automation concept is presented that covers individual devices, entire production lines, and higher-level automation systems, and that also provides a community and serves to implement new business models.
NAND flash memory is widely used for data storage due to low power consumption, high throughput, short random access latency, and high density. The storage density of the NAND flash memory devices increases from one generation to the next, albeit at the expense of storage reliability.
Our objective in this dissertation is to improve the reliability of the NAND flash memory with low hardware implementation cost. We investigate the error characteristic, i.e. the various noises of the NAND flash memory. Based on the error behavior at different life-aging stages, we develop offset calibration techniques that minimize the bit error rate (BER).
Furthermore, we introduce data compression to reduce the write amplification effect and support the error correction codes (ECC) unit. In the first scenario, the numerical results show that data compression can reduce the wear-out by minimizing the amount of data that is written to the flash. In the ECC scenario, the compression gain is used to improve the ECC capability. Based on the first scenario, the write amplification effect can be halved for the considered target flash and data model. By combining ECC and data compression, the NAND flash memory lifetime improves threefold compared with uncompressed data for the same data model.
In order to improve the data reliability of the NAND flash memory, we investigate different ECC schemes based on concatenated codes like product codes, half-product codes, and generalized concatenated codes (GCC). We propose a construction for high-rate GCC for hard-input decoding. ECC based on soft-input decoding can significantly improve the reliability of NAND flash memories. Therefore, we propose a low-complexity soft-input decoding algorithm for high-rate GCC.
Error correction coding based on soft-input decoding can significantly improve the reliability of non-volatile flash memories. This work proposes a soft-input decoder for generalized concatenated (GC) codes. GC codes are well suited for error correction in flash memories for high reliability data storage. We propose GC codes constructed from inner extended binary Bose-Chaudhuri-Hocquenghem (BCH) codes and outer Reed-Solomon codes. The extended BCH codes enable an efficient hard-input decoding. Furthermore, a low-complexity soft-input decoding method is proposed. This bit-flipping decoder uses a fixed number of test patterns and an algebraic decoder for soft-decoding. An acceptance criterion for the final candidate codeword is proposed. Combined with error and erasure decoding of the outer Reed-Solomon codes, this acceptance criterion can improve the decoding performance and reduce the decoding complexity. The presented simulation results show that the proposed bit-flipping decoder in combination with outer error and erasure decoding can outperform maximum likelihood decoding of the inner codes.
The introduction of multi-level cell (MLC) and triple-level cell (TLC) technologies reduced the reliability of flash memories significantly compared with single-level cell (SLC) flash. The reliability of the flash memory suffers from various error causes. Program/erase cycles, read disturb, and cell-to-cell interference impact the threshold voltages. With pre-defined fixed read thresholds, a voltage shift increases the bit error rate (BER). This work proposes a read threshold calibration method that aims at minimizing the BER by adapting the read voltages. The adaptation of the read thresholds is based on the number of errors observed in the codeword protecting a small amount of meta-data. Simulations based on flash measurements demonstrate that this method can significantly reduce the BER of TLC memories.
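The calibration idea above — use the error count of the meta-data codeword as feedback for the read voltage — can be reduced to a one-line selection rule. This is a deliberately simplified stand-in for the proposed adaptation, with made-up voltages and counts:

```python
def calibrate_threshold(candidate_voltages, error_counts):
    """Pick the read voltage whose meta-data codeword showed the fewest
    bit errors.  A real controller would sweep or track the threshold
    incrementally instead of reading at every candidate voltage."""
    best_errors, best_voltage = min(zip(error_counts, candidate_voltages))
    return best_voltage

# Hypothetical voltages and observed error counts after threshold shift:
v = calibrate_threshold([1.00, 1.05, 1.10], [34, 12, 27])
```

The key point is that the feedback signal is cheap: only the small meta-data codeword has to be decoded to steer the thresholds for the whole page.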
Experimental Validation of Ellipsoidal Techniques for State Estimation in Marine Applications
(2022)
A reliable quantification of the worst-case influence of model uncertainty and external disturbances is crucial for the localization of vessels in marine applications. This is especially true if uncertain GPS-based position measurements are used to update predicted vessel locations that are obtained from the evaluation of a ship’s state equation. To reflect real-life working conditions, these state equations need to account for uncertainty in the system model, such as imperfect actuation and external disturbances due to effects such as wind and currents. As an application scenario, the GPS-based localization of autonomous DDboat robots is considered in this paper. Using experimental data, the efficiency of an ellipsoidal approach, which exploits a bounded-error representation of disturbances and uncertainties, is demonstrated.
Reliability Assessment of an Unscented Kalman Filter by Using Ellipsoidal Enclosure Techniques
(2022)
The Unscented Kalman Filter (UKF) is widely used for the state, disturbance, and parameter estimation of nonlinear dynamic systems, for which both process and measurement uncertainties are represented in a probabilistic form. Although the UKF can often be shown to be more reliable for nonlinear processes than the linearization-based Extended Kalman Filter (EKF) due to the enhanced approximation capabilities of its underlying probability distribution, it is not a priori obvious whether its strategy for selecting sigma points is sufficiently accurate to handle nonlinearities in the system dynamics and output equations. Such inaccuracies may arise for sufficiently strong nonlinearities in combination with large state, disturbance, and parameter covariances. Then, computationally more demanding approaches such as particle filters or the representation of (multi-modal) probability densities with the help of (Gaussian) mixture representations are possible ways to resolve this issue. To detect cases in a systematic manner that are not reliably handled by a standard EKF or UKF, this paper proposes the computation of outer bounds for state domains that are compatible with a certain percentage of confidence under the assumption of normally distributed states with the help of a set-based ellipsoidal calculus. The practical applicability of this approach is demonstrated for the estimation of state variables and parameters for the nonlinear dynamics of an unmanned surface vessel (USV).
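The UKF's "strategy for selecting sigma points" questioned above is the standard unscented transform: 2n + 1 points placed at the mean and symmetrically along the columns of a scaled matrix square root of the covariance. A minimal sketch (the scaling parameter κ is one common convention among several):

```python
import numpy as np

def sigma_points(mean, cov, kappa=0.0):
    """Standard unscented transform sigma points for an n-dimensional
    Gaussian state: the mean plus/minus the columns of the Cholesky
    factor of (n + kappa) * cov, giving 2n + 1 points."""
    n = len(mean)
    L = np.linalg.cholesky((n + kappa) * cov)
    pts = [mean]
    for i in range(n):
        pts.append(mean + L[:, i])
        pts.append(mean - L[:, i])
    return np.array(pts)

pts = sigma_points(np.zeros(2), np.eye(2), kappa=1.0)
# 5 sigma points whose sample mean reproduces the state mean exactly.
```

The ellipsoidal enclosures proposed in the paper bound the state domains that these finitely many points may fail to represent when the nonlinearities are strong.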
Nowadays, most digital modulation schemes are based on conventional signal constellations that have no algebraic group, ring, or field properties, e.g. square quadrature-amplitude modulation constellations. Signal constellations with algebraic structure can enhance the system performance. For instance, multidimensional signal constellations based on dense lattices can achieve performance gains due to the dense packing. The algebraic structure enables low-complexity decoding and detection schemes. In this work, signal constellations with algebraic properties and their application in spatial modulation transmission schemes are investigated. Several design approaches of two- and four-dimensional signal constellations based on Gaussian, Eisenstein, and Hurwitz integers are shown. Detection algorithms with reduced complexity are proposed. It is shown that the proposed Eisenstein and Hurwitz constellations combined with the proposed suboptimal detection can outperform conventional two-dimensional constellations with ML detection.
This work proposes a construction for low-density parity-check (LDPC) codes over finite Gaussian integer fields. Furthermore, a new channel model for codes over Gaussian integers is introduced and its channel capacity is derived. This channel can be considered a first-order approximation of the additive white Gaussian noise channel with hard-decision detection, where only errors to nearest neighbors in the signal constellation are considered. For this channel, the proposed LDPC codes can be decoded with a simple non-probabilistic iterative decoding algorithm similar to Gallager's decoding algorithm A.
This paper proposes a novel transmission scheme for generalized multistream spatial modulation. This new approach uses Mannheim error-correcting codes over Gaussian or Eisenstein integers as multidimensional signal constellations. These codes enable a suboptimal decoding strategy with near maximum likelihood performance for transmission over the additive white Gaussian noise channel. In this contribution, this decoding algorithm is generalized to detection in generalized multistream spatial modulation. The proposed method can outperform conventional generalized multistream spatial modulation with respect to decoding performance, detection complexity, and spectral efficiency.
Spatial modulation is a low-complexity multiple-input/multiple-output transmission technique. The recently proposed spatial permutation modulation (SPM) extends the concept of spatial modulation. It is a coding approach where the symbols are dispersed in space and time. In the original proposal of SPM, short repetition codes and permutation codes were used to construct a space-time code. In this paper, we propose a similar coding scheme that combines permutation codes with codes over Gaussian integers. Short codes over Gaussian integers have good distance properties. Furthermore, the code alphabet can directly be applied as signal constellation, hence no mapping is required. Simulation results demonstrate that the proposed coding approach outperforms SPM with repetition codes.
Four-Dimensional Hurwitz Signal Constellations, Set Partitioning, Detection, and Multilevel Coding
(2021)
The Hurwitz lattice provides the densest four-dimensional packing. This fact has motivated research on four-dimensional Hurwitz signal constellations for optical and wireless communications. This work presents a new algebraic construction of finite sets of Hurwitz integers that is inherently accompanied by a respective modulo operation. These signal constellations are investigated for transmission over the additive white Gaussian noise (AWGN) channel. It is shown that these signal constellations have a better constellation figure of merit and hence a better asymptotic performance over an AWGN channel when compared with conventional signal constellations with algebraic structure, e.g., two-dimensional Gaussian-integer constellations or four-dimensional Lipschitz-integer constellations. We introduce two concepts for set partitioning of the Hurwitz integers. The first method is useful to reduce the computational complexity of the symbol detection. This suboptimum detection approach achieves near-maximum-likelihood performance. In the second case, the partitioning exploits the algebraic structure of the Hurwitz signal constellations. We partition the Hurwitz integers into additive subgroups such that the minimum Euclidean distance of each subgroup is larger than in the original set. This enables multilevel code constructions for the new signal constellations.
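The algebraic structure mentioned above can be illustrated by enumerating the units of the Hurwitz integers: quaternions whose four components are either all integers or all half-integers, exactly 24 of which have norm 1. A small sketch under these textbook definitions (not code from the paper):

```python
from itertools import product

def hurwitz_units():
    # units of the Hurwitz integers: the 8 Lipschitz-type elements
    # +/-1, +/-i, +/-j, +/-k and the 16 half-integer elements
    # (+/-1 +/- i +/- j +/- k) / 2
    units = []
    for pos in range(4):
        for sign in (1.0, -1.0):
            q = [0.0, 0.0, 0.0, 0.0]
            q[pos] = sign
            units.append(tuple(q))
    for signs in product((0.5, -0.5), repeat=4):
        units.append(signs)
    return units
```

Every unit has quaternion norm a^2 + b^2 + c^2 + d^2 = 1, the minimum nonzero norm in the lattice; the count of 24 matches the kissing number of the densest four-dimensional packing.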
Multi-dimensional spatial modulation is a multiple-input/multiple-output wireless transmission technique that uses only a few active antennas simultaneously. The computational complexity of the optimal maximum-likelihood (ML) detector at the receiver increases rapidly as more transmit antennas or larger modulation orders are employed. ML detection may be infeasible for higher bit rates. Many suboptimal detection algorithms for spatial modulation use two-stage detection schemes where the set of active antennas is detected in the first stage and the transmitted symbols in the second stage. Typically, these detection schemes use the ML strategy for the symbol detection. In this work, we consider a suboptimal detection algorithm for the second detection stage. This approach combines equalization and list decoding. We propose an algorithm for multi-dimensional signal constellations with a reduced search space in the second detection stage through set partitioning. In particular, we derive a set partitioning from the properties of Hurwitz integers. Simulation results demonstrate that the new algorithm achieves near-ML performance. It significantly reduces the complexity when compared with conventional two-stage detection schemes. Multi-dimensional constellations in combination with suboptimal detection can even outperform conventional signal constellations in combination with ML detection.
This work proposes a suboptimal detection algorithm for generalized multistream spatial modulation. Many suboptimal detection algorithms for spatial modulation use two-stage detection schemes where the set of active antennas is detected in the first stage and the transmitted symbols in the second stage. For multistream spatial modulation with large signal constellations the second detection step typically dominates the detection complexity. With the proposed detection scheme, the modified Gaussian approximation method is used for detecting the antenna pattern. In order to reduce the complexity for detecting the signal points, we propose a combined equalization and list decoding approach. Simulation results demonstrate that the new algorithm achieves near-maximum-likelihood performance with small list sizes. It significantly reduces the complexity when compared with conventional two-stage detection schemes.
Algorithms and Architectures for Cryptography and Source Coding in Non-Volatile Flash Memories
(2021)
In this work, algorithms and architectures for cryptography and source coding are developed, which are suitable for many resource-constrained embedded systems such as non-volatile flash memories. A new concept for elliptic curve cryptography is presented, which uses an arithmetic over Gaussian integers. Gaussian integers are a subset of the complex numbers with integers as real and imaginary parts. Ordinary modular arithmetic over Gaussian integers is computationally expensive. To reduce the complexity, a new arithmetic based on the Montgomery reduction is presented. For the elliptic curve point multiplication, this arithmetic over Gaussian integers improves the computational efficiency and the resistance against side channel attacks, and reduces the memory requirements. Furthermore, an efficient variant of the Lempel-Ziv-Welch (LZW) algorithm for universal lossless data compression is investigated. Instead of one LZW dictionary, this algorithm applies several dictionaries to speed up the encoding process. Two dictionary partitioning techniques are introduced that improve the compression rate and reduce the memory size of this parallel dictionary LZW algorithm.
In this work, we investigate a hybrid decoding approach that combines algebraic hard-input decoding of binary block codes with soft-input decoding. In particular, an acceptance criterion is proposed which determines the reliability of a candidate codeword. For many received codewords, the acceptance criterion indicates that the hard-decoding result is sufficiently reliable, and the costly soft-input decoding can be omitted. The proposed acceptance criterion significantly reduces the decoding complexity. In simulations, we combine the algebraic hard-input decoding with ordered statistics decoding, which enables near-maximum-likelihood soft-input decoding for codes of small to medium block length.
Modular arithmetic over integers is required for many cryptography systems. Montgomery reduction is an efficient algorithm for the modulo reduction after a multiplication. Typically, Montgomery reduction is used for rings of ordinary integers. In contrast, we investigate the modular reduction over rings of Gaussian integers. Gaussian integers are complex numbers where the real and imaginary parts are integers. Rings over Gaussian integers are isomorphic to ordinary integer rings. In this work, we show that Montgomery reduction can be applied to Gaussian integer rings. Two algorithms for the precision reduction are presented. We demonstrate that the proposed Montgomery reduction enables an efficient Gaussian integer arithmetic that is suitable for elliptic curve cryptography. In particular, we consider the elliptic curve point multiplication according to the randomized initial point method, which is protected against side-channel attacks. The implementation of this protected point multiplication is significantly faster than comparable algorithms over ordinary prime fields.
The Montgomery multiplication is an efficient method for modular arithmetic. Typically, it is used for modular arithmetic over integer rings to prevent the expensive inversion for the modulo reduction. In this work, we consider modular arithmetic over rings of Gaussian integers. Gaussian integers are a subset of the complex numbers such that the real and imaginary parts are integers. In many cases Gaussian integer rings are isomorphic to ordinary integer rings. We demonstrate that the concept of the Montgomery multiplication can be extended to Gaussian integers. Due to the independent calculation of the real and imaginary parts, the computational complexity of the multiplication is reduced compared with ordinary integer modular arithmetic. This concept is suitable for coding applications as well as for asymmetric key cryptographic systems, such as elliptic curve cryptography or the Rivest-Shamir-Adleman system.
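For reference, the classical Montgomery reduction over ordinary integers, which the work above extends to Gaussian integers, follows the precompute-multiply-shift pattern below (a standard REDC sketch with toy parameters; real implementations use machine-word-sized R):

```python
def montgomery_reduce(T, N, R, N_prime):
    # REDC: computes T * R^{-1} mod N without dividing by N;
    # requires R a power of two, gcd(N, R) = 1, N_prime = -N^{-1} mod R
    m = (T * N_prime) % R        # cheap: R is a power of two
    t = (T + m * N) // R         # exact division by R
    return t - N if t >= N else t

# toy parameters
N = 97                           # odd modulus
R = 1 << 8                       # R = 2^8 > N
N_prime = (-pow(N, -1, R)) % R   # precomputed constant

a, b = 55, 23
aR, bR = (a * R) % N, (b * R) % N                # enter the Montgomery domain
abR = montgomery_reduce(aR * bR, N, R, N_prime)  # = a*b*R mod N
ab = montgomery_reduce(abR, N, R, N_prime)       # leave the Montgomery domain
```

The abstracts above report that this structure carries over to Gaussian integer rings, with the real and imaginary parts computed independently.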
The Lempel-Ziv-Welch (LZW) algorithm is an important dictionary-based data compression approach that is used in many communication and storage systems. The parallel dictionary LZW (PDLZW) algorithm speeds up the LZW encoding by using multiple dictionaries. The PDLZW algorithm applies different dictionaries to store strings of different lengths, where each dictionary stores only strings of the same length. This simplifies the parallel search in the dictionaries for hardware implementations. The compression gain of the PDLZW depends on the partitioning of the address space, i.e. on the sizes of the parallel dictionaries. However, there is no universal partitioning that is optimal for all data sources. This work proposes an address space partitioning technique that optimizes the compression rate of the PDLZW using a Markov model for the data. Numerical results for address spaces with 512, 1024, and 2048 entries demonstrate that the proposed partitioning improves the performance of the PDLZW compared with the original proposal.
The Lempel–Ziv–Welch (LZW) algorithm is an important dictionary-based data compression approach that is used in many communication and storage systems. The parallel dictionary LZW (PDLZW) algorithm speeds up the LZW encoding by using multiple dictionaries. This simplifies the parallel search in the dictionaries. However, the compression gain of the PDLZW depends on the partitioning of the address space, i.e. on the sizes of the parallel dictionaries. This work proposes an address space partitioning technique that optimises the compression rate of the PDLZW. Numerical results for address spaces with 512, 1024, and 2048 entries demonstrate that the proposed address partitioning improves the performance of the PDLZW compared with the original proposal. These address space sizes are suitable for flash storage systems. Moreover, the PDLZW has relatively high memory requirements, which dominate the costs of a hardware implementation. This work proposes a recursive dictionary structure and a word partitioning technique that significantly reduce the memory size of the parallel dictionaries.
The Burrows–Wheeler transformation (BWT) is a reversible block sorting transform that is an integral part of many data compression algorithms. This work proposes a memory-efficient pipelined decoder for the BWT. In particular, the authors consider the limited context order BWT, which has low memory requirements and enables fast encoding. However, the decoding of the limited context order BWT is typically much slower than the encoding. The proposed decoder pipeline provides a fast inverse BWT by splitting the decoding into several processing stages which are executed in parallel.
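For reference, the full-context-order BWT and its naive inversion can be sketched as follows (the paper's pipelined decoder targets the limited context order variant, which this sketch does not implement):

```python
def bwt(s):
    # sort all rotations of s + sentinel and keep the last column
    s = s + "\0"
    rotations = sorted(s[i:] + s[:i] for i in range(len(s)))
    return "".join(r[-1] for r in rotations)

def inverse_bwt(last):
    # naive inversion: rebuild the sorted rotation table one
    # column at a time, then pick the row ending in the sentinel
    table = [""] * len(last)
    for _ in range(len(last)):
        table = sorted(last[i] + table[i] for i in range(len(last)))
    row = next(r for r in table if r.endswith("\0"))
    return row[:-1]
```

The last column clusters equal characters, which is what makes the transform useful as a front end for compression; the naive inversion shown here is far slower than encoding, which motivates the pipelined decoder.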
This work presents a new concept to implement the elliptic curve point multiplication (PM). This computation is based on a new modular arithmetic over Gaussian integer fields. Gaussian integers are a subset of the complex numbers such that the real and imaginary parts are integers. Since Gaussian integer fields are isomorphic to prime fields, this arithmetic is suitable for many elliptic curves. Representing the key by a Gaussian integer expansion is beneficial to reduce the computational complexity and the memory requirements of secure hardware implementations, which are robust against attacks. Furthermore, an area-efficient coprocessor design is proposed with an arithmetic unit that enables Montgomery modular arithmetic over Gaussian integers. The proposed architecture and the new arithmetic provide high flexibility, i.e., binary and non-binary key expansions as well as protected and unprotected PM calculations are supported. The proposed coprocessor is a competitive solution for a compact ECC processor suitable for applications in small embedded systems.
Side Channel Attack Resistance of the Elliptic Curve Point Multiplication using Gaussian Integers
(2020)
Elliptic curve cryptography is a cornerstone of embedded security. However, hardware implementations of the elliptic curve point multiplication are prone to side channel attacks. In this work, we present a new key expansion algorithm which improves the resistance against timing and simple power analysis attacks. Furthermore, we consider a new concept for calculating the point multiplication, where the points of the curve are represented as Gaussian integers. Gaussian integers are a subset of the complex numbers, such that the real and imaginary parts are integers. Since Gaussian integer fields are isomorphic to prime fields, this concept is suitable for many elliptic curves. Representing the key by a Gaussian integer expansion is beneficial to reduce the computational complexity and the memory requirements of a secure hardware implementation.
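The resistance against timing and simple power analysis rests on executing the same operation sequence for every key bit. The Montgomery ladder illustrates this idea, shown here for modular exponentiation; the identical ladder structure applies to elliptic curve point multiplication, with point addition and doubling in place of multiplication and squaring (an illustrative sketch, not the paper's key expansion algorithm):

```python
def montgomery_ladder_pow(base, exponent, modulus):
    # Montgomery ladder: one multiplication and one squaring per
    # key bit, regardless of the bit value -> uniform operation
    # sequence, which hides the key from timing/SPA observation
    r0, r1 = 1, base % modulus
    for bit in bin(exponent)[2:]:      # scan key bits MSB first
        if bit == '1':
            r0 = (r0 * r1) % modulus
            r1 = (r1 * r1) % modulus
        else:
            r1 = (r0 * r1) % modulus
            r0 = (r0 * r0) % modulus
    return r0
```

The invariant r1 = r0 * base holds after every iteration, so both branches perform the same two operations and differ only in which register is updated.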
Digital signatures for verifying the integrity of data, for example of software updates, are becoming increasingly important. In the field of embedded systems, symmetric encryption schemes are currently still predominantly used to compute an authentication code, owing to their low complexity. Asymmetric cryptosystems are computationally more expensive but offer more security, because the key used for authentication does not have to be kept secret. Asymmetric signature schemes are typically computed in two stages: the key is not applied directly to the data but to their hash value, which is computed beforehand by means of a hash function. To use these schemes in embedded systems, the hash function must provide a sufficiently high data throughput. This contribution presents an efficient hardware implementation of the SHA-256 hash function.
Autonomous moving systems require very detailed information about their environment and potential colliding objects. Thus, such systems are equipped with high-resolution sensors. These sensors typically generate more than one detection per object per time step. This adds complexity to the target tracking algorithm, since standard tracking filters assume that an object generates at most one detection per time step. This requires new methods for data association and system state filtering.
As new data association methods, this thesis proposes two different extensions of the Joint Integrated Probabilistic Data Association (JIPDA) filter that assign more than one detection to a track.
The first method is a generalization of the JIPDA that assigns a variable number of measurements to each track based on predefined statistical models; it is called Multi Detection - Joint Integrated Probabilistic Data Association (MD-JIPDA).
Since this scheme suffers from an exponential increase in the number of association hypotheses, a new approximation scheme is also presented. The second method is an extension for the special case in which the number and locations of measurements are known a priori. In preparation for this method, a new notation and computation scheme for the standard Joint Integrated Probabilistic Data Association is outlined, which also enables the derivation of a new fast approximation scheme called balanced permanent-JIPDA.
For state filtering, two different concepts are applied as well: the Random Matrix Framework and Measurement Generating Points. For the Random Matrix Framework, an alternative prediction method is first proposed to account for kinematic state changes in the extension state prediction as well. Secondly, various update methods are investigated to account for the polar-to-Cartesian noise transformation problem. The filtering concepts are combined with the new MD-JIPDA, and their characteristics are analyzed in various Monte Carlo simulations.
In the case where an object can be modeled by a finite number of fixed Measurement Generating Points (MGP), an approach to track these objects via a JIPDA filter is also proposed. In this context, a fast Track-to-Track fusion algorithm is proposed as well and compared against the MGP-JIPDA.
The proposed algorithms are evaluated in two applications where scanning is done using radar sensors only. The first application is a typical automotive scenario, where a passenger car is equipped with six radar sensors to cover its complete environment.
In this application, the locations of the measurements on an object can be considered stationary, and the object can be assumed to have a rectangular shape. Thus, the MGP-based algorithms are applied here. The filters are evaluated by tracking vehicles, in particular on nearside lanes.
The second application covers the tracking of vessels on inland waters. Here, two different kinds of radar systems are applied, but for both sensors a uniform distribution of the measurements over the target's extent can be assumed. Furthermore, the assumption that the targets have an elliptical shape holds, and so the Random Matrix Framework in combination with the MD-JIPDA is evaluated.
Exemplary test scenarios also illustrate the performance of this tracking algorithm.
Flash memories are non-volatile memory devices. The rapid development of flash technologies leads to higher storage density, but also to higher error rates. This dissertation considers this reliability problem of flash memories and investigates suitable error correction codes, e.g. BCH codes and concatenated codes. First, the flash cells, their functionality, and their error characteristics are explained. Next, the mathematics of the employed algebraic codes are discussed. Subsequently, generalized concatenated codes (GCC) are presented. Compared to the commonly used BCH codes, concatenated codes promise higher code rates and lower implementation complexity. This complexity reduction is achieved by dividing a long code into smaller components, which require smaller Galois field sizes. The algebraic decoding algorithms enable an analytical determination of the block error rate. Thus, it is possible to guarantee very low residual error rates for flash memories. Besides the complexity reduction, generalized concatenated codes can exploit soft information. Such soft decoding is not practicable for long BCH codes. In this dissertation, two soft decoding methods for GCC are presented and analyzed. These methods are based on Chase decoding and the stack algorithm. The latter method explicitly uses the generalized concatenated code structure, where the component codes are nested subcodes. This property supports the complexity reduction. Moreover, the two-dimensional structure of GCC enables the correction of error patterns with statistical dependencies. One chapter of the thesis demonstrates how the concatenated codes can be used to correct two-dimensional cluster errors. To this end, a two-dimensional interleaver is designed with the help of Gaussian integers. This design achieves the correction of cluster errors with the best possible radius. Large parts of this work are dedicated to the question of how the decoding algorithms can be implemented in hardware.
The corresponding hardware architectures, together with their throughput and logic size, are presented for long BCH codes and generalized concatenated codes. The results show that generalized concatenated codes are suitable for error correction in flash memories, especially for three-dimensional NAND memory systems used in industrial applications, where low residual error rates must be guaranteed.