Institut für Systemdynamik - ISD
This thesis presents the development of two state-feedback controllers that solve the trajectory tracking problem, in which a vessel must reach and follow a time-varying reference trajectory. The problem was addressed for a full-scale, fully actuated surface vessel whose dynamic model contained unknown hydrodynamic and propulsion parameters; these were identified through an experimental maneuver-based identification process, and the resulting model was used to design the controllers. The first is a backstepping controller, designed with a local exponential stability proof. The second is a nonlinear model predictive controller (NMPC) that minimizes the tracking error while respecting the thrusters' constraints. Both controllers handle the thrust allocation problem and counteract environmental disturbance forces such as current, waves, and wind. The effectiveness of both approaches was verified in simulation using Matlab/Simulink and GRAMPC (for the NMPC) and in experiments in which the controllers were applied to the vessel to perform docking maneuvers on the Rhine River in Constance, Germany.
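The backstepping idea can be illustrated on a much simpler plant than a vessel model. The sketch below is a minimal example under illustrative assumptions: a double-integrator plant, a sinusoidal reference, and arbitrary gains, none of which come from the thesis. A virtual control stabilizes the position error, and the actual control law follows from a Lyapunov argument:

```python
import math

def backstepping_step(x1, x2, t, k1=2.0, k2=2.0):
    # Hypothetical reference trajectory and its derivatives (a sinusoid)
    x1d, dx1d, ddx1d = math.sin(t), math.cos(t), -math.sin(t)
    e1 = x1 - x1d
    alpha = dx1d - k1 * e1             # virtual control for the velocity state
    e2 = x2 - alpha
    dalpha = ddx1d - k1 * (x2 - dx1d)  # time derivative of alpha
    u = dalpha - e1 - k2 * e2          # Lyapunov-based control law
    return u

def simulate(T=20.0, dt=1e-3):
    # Euler-integrate the double integrator x1' = x2, x2' = u
    x1, x2, t = 0.5, 0.0, 0.0          # start off the reference
    while t < T:
        u = backstepping_step(x1, x2, t)
        x1 += dt * x2
        x2 += dt * u
        t += dt
    return abs(x1 - math.sin(t))       # final tracking error
```

With these gains the closed-loop error dynamics are exponentially stable, so the tracking error has decayed to near zero by the end of the simulation.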
Nowadays, most digital modulation schemes are based on conventional signal constellations that have no algebraic group, ring, or field properties, e.g. square quadrature-amplitude modulation constellations. Signal constellations with algebraic structure can enhance system performance. For instance, multidimensional signal constellations based on dense lattices can achieve performance gains due to the dense packing. The algebraic structure also enables low-complexity decoding and detection schemes. In this work, signal constellations with algebraic properties and their application in spatial modulation transmission schemes are investigated. Several design approaches for two- and four-dimensional signal constellations based on Gaussian, Eisenstein, and Hurwitz integers are shown, and detection algorithms with reduced complexity are proposed. It is shown that the proposed Eisenstein and Hurwitz constellations combined with the proposed suboptimal detection can outperform conventional two-dimensional constellations with ML detection.
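The packing advantage of the Eisenstein (hexagonal) lattice over the Gaussian (square) lattice can be checked numerically. The sketch below is illustrative only: the constellation size of 16 and the brute-force enumeration radius are arbitrary choices, not the thesis's designs. It picks the lowest-energy lattice points at equal minimum distance and compares the average symbol energy:

```python
import math

def lowest_energy_points(basis2, M=16, R=6):
    # Enumerate lattice points a*1 + b*v for small a, b, where v = basis2
    # (given as its real and imaginary parts); keep the M lowest-energy points.
    pts = []
    for a in range(-R, R + 1):
        for b in range(-R, R + 1):
            x = a + b * basis2[0]
            y = b * basis2[1]
            pts.append((x * x + y * y, x, y))
    pts.sort()  # sort by squared norm (energy), ties broken deterministically
    return pts[:M]

def mean_energy(pts):
    return sum(p[0] for p in pts) / len(pts)

# Gaussian integers: square lattice with basis (1, i)
square_pts = lowest_energy_points((0.0, 1.0))
# Eisenstein integers: hexagonal lattice, basis (1, omega), omega = -1/2 + i*sqrt(3)/2
hex_pts = lowest_energy_points((-0.5, math.sqrt(3) / 2))
```

Both lattices have minimum distance 1, yet the hexagonal constellation achieves a lower average energy for the same number of points, which is the packing gain mentioned above.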
Algorithms and Architectures for Cryptography and Source Coding in Non-Volatile Flash Memories
(2021)
In this work, algorithms and architectures for cryptography and source coding are developed that are suitable for many resource-constrained embedded systems such as non-volatile flash memories. A new concept for elliptic curve cryptography is presented that uses an arithmetic over Gaussian integers. Gaussian integers are a subset of the complex numbers with integers as real and imaginary parts. Ordinary modular arithmetic over Gaussian integers is computationally expensive. To reduce the complexity, a new arithmetic based on the Montgomery reduction is presented. For the elliptic curve point multiplication, this arithmetic over Gaussian integers improves computational efficiency, increases resistance against side-channel attacks, and reduces memory requirements. Furthermore, an efficient variant of the Lempel-Ziv-Welch (LZW) algorithm for universal lossless data compression is investigated. Instead of one LZW dictionary, this algorithm applies several dictionaries to speed up the encoding process. Two dictionary partitioning techniques are introduced that improve the compression rate and reduce the memory size of this parallel dictionary LZW algorithm.
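The thesis's Montgomery arithmetic is defined over Gaussian integers; as a baseline, the classical Montgomery reduction over ordinary integers can be sketched as follows. The modulus and bit width here are toy values for illustration, not parameters from the work:

```python
def montgomery_params(N, k):
    # R = 2**k must exceed the odd modulus N and is automatically coprime to it.
    R = 1 << k
    N_inv = pow(N, -1, R)       # modular inverse of N mod R (Python 3.8+)
    N_prime = (-N_inv) % R      # N' such that N * N' == -1 (mod R)
    return R, N_prime

def redc(T, N, k, N_prime):
    # Montgomery reduction: returns T * R^{-1} mod N using only shifts and
    # masks instead of a division by N (valid for 0 <= T < N * R).
    R_mask = (1 << k) - 1
    m = ((T & R_mask) * N_prime) & R_mask
    t = (T + m * N) >> k        # exact: T + m*N is divisible by R
    return t - N if t >= N else t

def mont_mul(a, b, N, k, N_prime):
    # For a, b in Montgomery form (x*R mod N), the result is a*b*R^{-1},
    # i.e. the product in Montgomery form.
    return redc(a * b, N, k, N_prime)
```

Repeated point operations in elliptic curve scalar multiplication amortize the one-time cost of converting operands into Montgomery form, which is why the reduction pays off there.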
Ein Beitrag zum Beobachterentwurf und zur sensorlosen Folgeregelung translatorischer Magnetaktoren (A Contribution to Observer Design and Sensorless Tracking Control of Translational Magnetic Actuators)
(2020)
NAND flash memory is widely used for data storage due to low power consumption, high throughput, short random access latency, and high density. The storage density of the NAND flash memory devices increases from one generation to the next, albeit at the expense of storage reliability.
Our objective in this dissertation is to improve the reliability of NAND flash memory at a low hardware implementation cost. We investigate the error characteristics, i.e. the various noise sources of the NAND flash memory. Based on the error behavior at different stages of the device lifetime, we develop offset calibration techniques that minimize the bit error rate (BER).
Furthermore, we introduce data compression to reduce the write amplification effect and to support the error correction code (ECC) unit. In the first scenario, the numerical results show that data compression can reduce the wear-out by minimizing the amount of data written to the flash. In the ECC scenario, the compression gain is used to improve the ECC capability. In the first scenario, the write amplification effect can be halved for the considered target flash and data model. By combining ECC and data compression, the NAND flash memory lifetime improves threefold compared with uncompressed data for the same data model.
In order to improve the data reliability of the NAND flash memory, we investigate different ECC schemes based on concatenated codes like product codes, half-product codes, and generalized concatenated codes (GCC). We propose a construction for high-rate GCC for hard-input decoding. ECC based on soft-input decoding can significantly improve the reliability of NAND flash memories. Therefore, we propose a low-complexity soft-input decoding algorithm for high-rate GCC.
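The compression gain referred to above comes from a dictionary-based scheme. As a point of reference, the classic single-dictionary LZW algorithm (the textbook version, not the parallel-dictionary variant developed in the related work) can be sketched as:

```python
def lzw_compress(data: bytes):
    # Classic LZW: grow a phrase dictionary while emitting integer codes.
    dictionary = {bytes([i]): i for i in range(256)}
    phrase, out = b"", []
    for byte in data:
        candidate = phrase + bytes([byte])
        if candidate in dictionary:
            phrase = candidate          # extend the current phrase
        else:
            out.append(dictionary[phrase])
            dictionary[candidate] = len(dictionary)  # learn the new phrase
            phrase = bytes([byte])
    if phrase:
        out.append(dictionary[phrase])
    return out

def lzw_decompress(codes):
    # Rebuild the dictionary symmetrically to invert lzw_compress.
    dictionary = {i: bytes([i]) for i in range(256)}
    prev = dictionary[codes[0]]
    out = [prev]
    for code in codes[1:]:
        entry = dictionary.get(code, prev + prev[:1])  # handles the KwKwK case
        out.append(entry)
        dictionary[len(dictionary)] = prev + entry[:1]
        prev = entry
    return b"".join(out)
```

On repetitive data the emitted code stream is much shorter than the input, which is exactly the slack that can be handed to the ECC unit or used to write fewer pages.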
Flash memories are non-volatile memory devices. The rapid development of flash technologies leads to higher storage density, but also to higher error rates. This dissertation considers this reliability problem of flash memories and investigates suitable error correction codes, e.g. BCH codes and concatenated codes. First, the flash cells, their functionality, and their error characteristics are explained. Next, the mathematics of the employed algebraic codes are discussed. Subsequently, generalized concatenated codes (GCC) are presented. Compared to the commonly used BCH codes, concatenated codes promise higher code rates and lower implementation complexity. This complexity reduction is achieved by dividing a long code into smaller components, which require smaller Galois field sizes. The algebraic decoding algorithms enable an analytical determination of the block error rate, which makes it possible to guarantee very low residual error rates for flash memories. Besides the complexity reduction, generalized concatenated codes can exploit soft information. Such soft decoding is not practicable for long BCH codes. In this dissertation, two soft-decoding methods for GCC are presented and analyzed, based on Chase decoding and the stack algorithm. The latter explicitly uses the generalized concatenated code structure, in which the component codes are nested subcodes; this property supports the complexity reduction. Moreover, the two-dimensional structure of GCC enables the correction of error patterns with statistical dependencies. One chapter of the thesis demonstrates how concatenated codes can be used to correct two-dimensional cluster errors. For this purpose, a two-dimensional interleaver is designed with the help of Gaussian integers, achieving the correction of cluster errors with the best possible radius. Large parts of this work are dedicated to the question of how the decoding algorithms can be implemented in hardware.
These hardware architectures, their throughput, and their logic size are presented for long BCH codes and generalized concatenated codes. The results show that generalized concatenated codes are suitable for error correction in flash memories, especially for three-dimensional NAND memory systems used in industrial applications, where low residual error rates must be guaranteed.
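The complexity reduction gained by splitting a long code into small two-dimensional components can be illustrated with the simplest concatenated scheme: a single-parity product code, which already corrects one bit error by intersecting a failing row with a failing column. The code below is a generic textbook construction, not the GCC design of the thesis:

```python
def encode(data, k=4):
    # Arrange k*k data bits in a square and append a parity bit to every row
    # and column (the corner is the parity of parities).
    g = [[data[i * k + j] for j in range(k)] for i in range(k)]
    for row in g:
        row.append(sum(row) % 2)
    g.append([sum(g[i][j] for i in range(k)) % 2 for j in range(k + 1)])
    return g

def correct_single_error(g):
    # A single flipped bit makes exactly one row and one column fail their
    # parity check; the intersection pinpoints the bit to flip back.
    bad_rows = [i for i, row in enumerate(g) if sum(row) % 2]
    bad_cols = [j for j in range(len(g[0])) if sum(row[j] for row in g) % 2]
    if len(bad_rows) == 1 and len(bad_cols) == 1:
        g[bad_rows[0]][bad_cols[0]] ^= 1
    return g
```

Each component decoder only ever works on a short row or column, which is the same structural idea that lets GCC decoders use small Galois fields.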
Autonomous moving systems require very detailed information about their environment and potential colliding objects. Thus, such systems are equipped with high-resolution sensors. These sensors can generate more than one detection per object per time step. This adds complexity for the target tracking algorithm, since standard tracking filters assume that an object generates at most one detection per time step. This requires new methods for data association and system state filtering.
For data association, this thesis proposes two extensions of the Joint Integrated Probabilistic Data Association (JIPDA) filter that assign more than one detection to each track.
The first method is a generalization of the JIPDA that assigns a variable number of measurements to each track based on predefined statistical models; it is called Multi Detection Joint Integrated Probabilistic Data Association (MD-JIPDA).
Since this scheme suffers from an exponential increase in the number of association hypotheses, a new approximation scheme is also presented. The second method is an extension for the special case in which the number and locations of measurements are known a priori. In preparation for this method, a new notation and computation scheme for the standard JIPDA is outlined, which also enables the derivation of a new fast approximation scheme called the balanced permanent-JIPDA.
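The association weights behind JIPDA-type filters involve matrix permanents, whose exact evaluation is what approximations such as the balanced permanent-JIPDA aim to avoid. As an illustrative baseline (not the thesis's algorithm), the exact permanent via Ryser's inclusion-exclusion formula can be sketched as:

```python
from itertools import combinations

def permanent(A):
    # Ryser's formula: perm(A) = (-1)^n * sum over nonempty column subsets S
    # of (-1)^|S| * prod_i (sum_{j in S} a_ij).  Cost is O(2^n * n^2),
    # which is why large association problems need approximations.
    n = len(A)
    total = 0
    for r in range(1, n + 1):
        for S in combinations(range(n), r):
            prod = 1
            for row in A:
                prod *= sum(row[j] for j in S)
            total += (-1) ** r * prod
    return (-1) ** n * total
```

Even this formula, a large improvement over summing all n! permutations, grows exponentially, which motivates polynomial-time approximation schemes for the association step.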
For state filtering, two different concepts are applied: the Random Matrix framework and Measurement Generating Points. For the Random Matrix framework, an alternative prediction method is first proposed that also accounts for kinematic state changes in the extension state prediction. Second, various update methods are investigated to address the polar-to-Cartesian noise transformation problem. The filtering concepts are combined with the new MD-JIPDA and their characteristics are analyzed in various Monte Carlo simulations.
In case an object can be modeled by a finite number of fixed Measurement Generating Points (MGP), an approach to track such objects via a JIPDA filter is also proposed. In this context, a fast track-to-track fusion algorithm is proposed as well and compared against the MGP-JIPDA.
The proposed algorithms are evaluated in two applications in which sensing is performed with radar sensors only. The first application is a typical automotive scenario, where a passenger car is equipped with six radar sensors to cover its complete environment.
In this application, the locations of the measurements on an object can be considered stationary, and the object can be assumed to have a rectangular shape. Thus, the MGP-based algorithms are applied here. The filters are evaluated by tracking vehicles, particularly on nearside lanes.
The second application covers the tracking of vessels on inland waters. Here, two different kinds of radar systems are applied, but for both sensors a uniform distribution of the measurements over the target's extent can be assumed. Further, the assumption that the targets have an elliptical shape holds, so the Random Matrix framework in combination with the MD-JIPDA is evaluated.
Exemplary test scenarios also illustrate the performance of this tracking algorithm.
Simon Grimm examines new multi-microphone signal processing strategies that aim to achieve noise reduction and dereverberation. To this end, narrow-band signal enhancement approaches are combined with broad-band processing in the form of directivity-based beamforming. Previously introduced formulations of the multichannel Wiener filter rely on the second-order statistics of the speech and noise signals. The author analyzes how additional knowledge about the location of a speaker as well as the microphone arrangement can be used to achieve further noise reduction and dereverberation.
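A minimal instance of directivity-based beamforming is the delay-and-sum beamformer for a uniform linear array, sketched below. The array geometry, narrowband signal model, and look directions are illustrative assumptions, not the specific microphone arrangement analyzed in the book:

```python
import cmath
import math

def steering_vector(theta, m_mics=4, spacing=0.5):
    # Narrowband steering vector of a uniform linear array; spacing is given
    # in wavelengths and theta is the direction of arrival in radians.
    return [cmath.exp(-2j * math.pi * spacing * m * math.sin(theta))
            for m in range(m_mics)]

def delay_and_sum(snapshots, theta):
    # Phase-align every channel to the look direction and average: a source
    # from theta adds coherently, while signals from other directions (and
    # spatially white noise) add incoherently and are attenuated.
    a = steering_vector(theta, len(snapshots[0]))
    return [sum(x_m * w.conjugate() for x_m, w in zip(x, a)) / len(a)
            for x in snapshots]
```

Knowing the speaker location (here, the angle theta) and the microphone arrangement (here, the array spacing) is precisely what fixes the steering vector, illustrating how that additional knowledge yields spatial noise reduction.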