000 Generalities, Computer Science, Information Science
Error correction coding (ECC) for optical communication and persistent storage systems requires high-rate codes that enable high data throughput and low residual errors. Recently, different concatenated coding schemes were proposed that are based on binary Bose-Chaudhuri-Hocquenghem (BCH) codes with low error-correcting capability. Commonly, hardware implementations for BCH decoding are based on the Berlekamp-Massey algorithm (BMA). However, for single-, double-, and triple-error-correcting BCH codes, Peterson's algorithm can be more efficient than the BMA. The known hardware architectures of Peterson's algorithm require Galois field inversion. This inversion dominates the hardware complexity and limits the decoding speed. This work proposes an inversion-less version of Peterson's algorithm. Moreover, a decoding architecture is presented that is faster than decoders that employ inversion or the fully parallel BMA, at a comparable circuit size.
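As a hedged illustration of the underlying idea, the following Python sketch contrasts Peterson's error-locator computation for a double-error-correcting binary BCH code, which needs one Galois field inversion, with an inversion-less variant that scales the locator polynomial by S1 and therefore keeps the same roots. The field GF(2^4), the code parameters, and all helper names are assumptions for the example, not taken from the paper.

```python
# Illustrative sketch: Peterson's algorithm for a double-error-correcting
# binary BCH code over GF(2^4), with and without Galois field inversion.
# The field, code parameters, and helper names are assumptions for the example.

# GF(2^4) generated by the primitive polynomial x^4 + x + 1.
EXP = [0] * 30
LOG = [0] * 16
x = 1
for i in range(15):
    EXP[i] = x
    LOG[x] = i
    x <<= 1
    if x & 0x10:
        x ^= 0x13          # reduce modulo x^4 + x + 1
for i in range(15, 30):
    EXP[i] = EXP[i - 15]


def gf_mul(a, b):
    if a == 0 or b == 0:
        return 0
    return EXP[LOG[a] + LOG[b]]


def gf_inv(a):
    return EXP[15 - LOG[a]]


def peterson_with_inversion(s1, s3):
    """sigma(x) = 1 + s1*x + ((s3 + s1^3)/s1)*x^2 -- requires one inversion."""
    s1_cubed = gf_mul(s1, gf_mul(s1, s1))
    sigma2 = gf_mul(s3 ^ s1_cubed, gf_inv(s1))
    return [1, s1, sigma2]


def peterson_inversion_less(s1, s3):
    """Scaled locator s1 + s1^2*x + (s3 + s1^3)*x^2 -- same roots, no inversion."""
    s1_sq = gf_mul(s1, s1)
    return [s1, s1_sq, s3 ^ gf_mul(s1_sq, s1)]


def roots(poly):
    """Chien-style search: evaluate the polynomial at every nonzero field element."""
    found = []
    for a in range(1, 16):
        val = 0
        for k, c in enumerate(poly):
            term = c
            for _ in range(k):
                term = gf_mul(term, a)
            val ^= term
        if val == 0:
            found.append(a)
    return found


# Example: two bit errors at positions 1 and 5 of a length-15 BCH codeword.
err = (1, 5)
s1 = EXP[err[0]] ^ EXP[err[1]]
s3 = EXP[(3 * err[0]) % 15] ^ EXP[(3 * err[1]) % 15]
r1 = roots(peterson_with_inversion(s1, s3))
r2 = roots(peterson_inversion_less(s1, s3))
assert r1 == r2                                   # same error locators, no inversion needed
print(sorted((15 - LOG[r]) % 15 for r in r1))     # -> [1, 5], the original error positions
```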
The computational complexity of the optimal maximum likelihood (ML) detector for spatial modulation increases rapidly as more transmit antennas or larger modulation orders are employed. Hence, ML detection may be infeasible for higher bit rates. This work proposes an improved suboptimal detection algorithm based on the Gaussian approximation method. It is demonstrated that the new method is closely related to the previously published signal-vector-based detection and the modified maximum ratio combiner, but it can improve the detection performance compared to these methods. Furthermore, the performance of different signal constellations with suboptimal detection is investigated. Simulation results indicate that the performance loss compared to ML detection depends heavily on the signal constellation, and the recently proposed Eisenstein integer constellations are beneficial compared to classical QAM or PSK constellations.
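The following minimal sketch illustrates why exhaustive ML detection for spatial modulation scales with the product of the number of transmit antennas and the modulation order: every (antenna index, symbol) pair is a hypothesis. The channel model, QPSK alphabet, and dimensions are assumptions for the example, not the system considered in the paper.

```python
# Minimal sketch: exhaustive maximum likelihood (ML) detection for spatial
# modulation. One transmit antenna is active per channel use and carries one
# symbol, so the detector searches over all (antenna index, symbol) pairs --
# N_t * M hypotheses, which is why ML complexity grows quickly.
import numpy as np

rng = np.random.default_rng(0)

N_t, N_r = 4, 4                       # transmit / receive antennas (assumed)
qpsk = np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]) / np.sqrt(2)

# Rayleigh-fading channel matrix (assumed model).
H = (rng.standard_normal((N_r, N_t)) + 1j * rng.standard_normal((N_r, N_t))) / np.sqrt(2)

# Transmit: antenna index 2 is active and sends qpsk[1].
antenna, symbol = 2, qpsk[1]
noise = 0.05 * (rng.standard_normal(N_r) + 1j * rng.standard_normal(N_r))
y = H[:, antenna] * symbol + noise

# ML detection: minimize ||y - h_i * s||^2 over all antenna/symbol hypotheses.
best = min(
    ((i, s) for i in range(N_t) for s in qpsk),
    key=lambda hyp: np.linalg.norm(y - H[:, hyp[0]] * hyp[1]) ** 2,
)
print(best[0], best[1])               # expected at this noise level: antenna 2, qpsk[1]
```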
The recovery of our body and brain from fatigue depends directly on the quality of sleep, which can be determined from the results of a sleep study. Sleep stage classification is the first step of such a study and involves the measurement of bio-vital data and their further processing. The non-invasive sleep analysis system is based on a hardware sensor network of 24 pressure sensors that enables sleep phase detection. The pressure sensors are connected to an energy-efficient microcontroller via a system-wide bus with address arbitration. A key difference of this system compared with other approaches is the innovative way of placing the sensors underneath the mattress. This property facilitates continuous use of the system without any perceptible impact on the familiar bed. The system was tested in experiments that recorded the sleep of several healthy young subjects. The first results indicate the potential to capture not only respiration rate and body movement but also heart rate.
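As a loose illustration only, the following sketch shows how a respiration rate could be estimated from such pressure data by picking the dominant spectral peak in a typical breathing band; the sampling rate, band limits, and synthetic signal are assumptions, not details from the publication.

```python
# Illustrative sketch: estimating respiration rate from the summed signal of
# the pressure sensors via the dominant FFT peak in the breathing band.
# Sampling rate, band limits, and the synthetic signal are assumptions.
import numpy as np

rng = np.random.default_rng(0)
fs = 10.0                                   # assumed sampling rate in Hz
t = np.arange(0, 120, 1 / fs)               # two minutes of data

# Synthetic stand-in for the summed 24-channel pressure signal:
# 0.25 Hz breathing component (15 breaths/min) plus drift and noise.
signal = np.sin(2 * np.pi * 0.25 * t) + 0.1 * t / t[-1] + 0.2 * rng.standard_normal(t.size)

spectrum = np.abs(np.fft.rfft(signal - signal.mean()))
freqs = np.fft.rfftfreq(t.size, d=1 / fs)

band = (freqs >= 0.1) & (freqs <= 0.5)      # typical adult breathing band
breath_hz = freqs[band][np.argmax(spectrum[band])]
print(f"estimated respiration rate: {breath_hz * 60:.1f} breaths/min")   # ~15
```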
Random matrices are used to filter the center of gravity (CoG) and the covariance matrix of measurements. However, these quantities do not always correspond directly to the position and the extent of the object, e.g. when a lidar sensor is used. In this paper, we propose a Gaussian process regression model (GPRM) to predict the position and extent of the object from the filtered CoG and covariance matrix of the measurements. Training data for the GPRM are generated by a sampling method and a virtual measurement model (VMM). The VMM is a function that generates artificial measurements using ray tracing and allows us to obtain the CoG and covariance matrix that any object would cause. This enables the GPRM to be trained without real data but still be applied to real data, owing to the precise modeling in the VMM. The results show accurate extent estimation as long as reality behaves like the model, e.g. when lidar measurements occur only on the side of the object facing the sensor.
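A minimal sketch of the regression step is given below, assuming rectangular objects, 2D lidar-like hits on the two visible sides, and scikit-learn's GaussianProcessRegressor as a stand-in for the GPRM; the sampling routine plays the role of a greatly simplified VMM and is not the paper's implementation.

```python
# Minimal sketch: learn to map the CoG and covariance of lidar-like
# measurements (which hit only the visible sides) to the true object extent.
# Rectangle model, sampling routine, and use of scikit-learn are assumptions.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

rng = np.random.default_rng(1)


def vmm_features(length, width):
    """Greatly simplified VMM: points on the two visible sides of a rectangle,
    reduced to the center of gravity and covariance of the measurements."""
    n = 40
    front = np.column_stack([rng.uniform(0, length, n), np.zeros(n)])           # facing side
    flank = np.column_stack([np.zeros(n // 2), rng.uniform(0, width, n // 2)])  # visible flank
    pts = np.vstack([front, flank]) + 0.02 * rng.standard_normal((n + n // 2, 2))
    cog, cov = pts.mean(axis=0), np.cov(pts.T)
    return np.concatenate([cog, [cov[0, 0], cov[1, 1], cov[0, 1]]])


# Training data is generated entirely from the VMM -- no real measurements needed.
sizes = rng.uniform(0.5, 5.0, size=(300, 2))                 # (length, width) pairs
X = np.array([vmm_features(length, width) for length, width in sizes])
gpr = GaussianProcessRegressor(normalize_y=True).fit(X, sizes)

# Apply the trained model to a new measurement of a 4.5 m x 1.8 m object.
print(gpr.predict(vmm_features(4.5, 1.8).reshape(1, -1)))    # predicted (length, width)
```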
This thesis addresses two problems that reports generated by vulnerability scanners impose on the vulnerability management process: (a) an overwhelming amount of data and (b) insufficient prioritization of the scan results.
To support the development of countermeasures and to allow a quantitative evaluation of their solutions, two metrics are proposed that capture effectiveness and efficiency. These metrics imply a focus on higher-severity vulnerabilities and can be applied to any simplification process for vulnerability scan results, provided it relies on a severity score and an estimated remediation time for each vulnerability.
A priority score is introduced that aims to improve on the widely used Common Vulnerability Scoring System (CVSS) base score of each vulnerability, depending on the vulnerability's ease of exploitation, its estimated probability of exploitation, and the probability of its existence.
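One plausible way to combine these factors is sketched below, assuming the CVSS base score is scaled by factors for attack complexity, the availability of a public exploit, and the OpenVAS quality of detection (QoD); the weights and field names are hypothetical and not those used in the thesis.

```python
# Hypothetical sketch of a priority score that adjusts the CVSS base score by
# ease of exploitation, estimated exploitation probability, and the probability
# that the finding actually exists (OpenVAS quality of detection, QoD).
# The weighting scheme and field names are illustrative assumptions.
def priority_score(cvss_base, attack_complexity, exploit_available, qod_percent):
    ease = {"LOW": 1.0, "HIGH": 0.8}[attack_complexity]   # easier exploits rank higher
    exploitation = 1.0 if exploit_available else 0.7      # public exploit known?
    existence = qod_percent / 100.0                       # confidence the finding is real
    return round(cvss_base * ease * exploitation * existence, 1)


# Example: CVSS 9.8 finding, low complexity, public exploit, 80 % QoD -> 7.8
print(priority_score(9.8, "LOW", True, 80))
```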
Patterns between vulnerabilities are discovered within the reports generated by the Open Vulnerability Assessment System (OpenVAS) scanner; they identify criteria by which vulnerabilities can be categorized from the standpoint of a remediation actor. These categories lay the groundwork for the final simplified report and consist of updates that need to be installed on a host, severe vulnerabilities, vulnerabilities that occur on multiple hosts, and vulnerabilities whose remediation is time-consuming. The highest potential time savings are found among frequently occurring vulnerabilities and minor and major suggested updates.
Processing of the results provided by the vulnerability scanner and creation of the report are realized as a Python script. The resulting reports are short and to the point and provide a top-down remediation process that should, in theory, minimize the institution's attack surface as quickly as possible. An evaluation of their practicality must follow, as the reports are yet to be introduced into the information security management lifecycle.
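A minimal sketch of the kind of grouping such a script might perform is shown below, assuming the scan results are already parsed into dictionaries with host, name, severity, solution type, and an estimated remediation time; the category rules and thresholds are illustrative assumptions rather than the thesis's exact implementation.

```python
# Minimal sketch of grouping parsed scan results into remediation-oriented
# categories (updates, severe findings, findings on many hosts, time-consuming
# fixes). Field names, thresholds, and rules are illustrative assumptions.
from collections import Counter


def categorize(findings, severe_threshold=7.0, multi_host_min=3, long_fix_hours=4):
    by_name = Counter(f["name"] for f in findings)
    report = {"updates": [], "severe": [], "multi_host": [], "time_consuming": []}
    for f in findings:
        if f.get("solution_type") == "VendorFix":          # patch/update available
            report["updates"].append(f)
        if f["severity"] >= severe_threshold:
            report["severe"].append(f)
        if by_name[f["name"]] >= multi_host_min:            # same finding on many hosts
            report["multi_host"].append(f)
        if f.get("remediation_hours", 0) >= long_fix_hours:
            report["time_consuming"].append(f)
    # Highest-severity items first within every category (top-down remediation).
    return {k: sorted(v, key=lambda f: -f["severity"]) for k, v in report.items()}
```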
In the capital goods industry, service is nowadays usually still performed manually and on site at the customer. This requires qualified service technicians who have the necessary product and process knowledge. For small and medium-sized enterprises (SMEs) in the capital goods industry, internationalization in particular poses a challenge, since qualified service technicians are a scarce resource. They have to be deployed as effectively and efficiently as possible. To this end, a solution was developed within the SerWiss project that enables SMEs to generate and structure service-relevant knowledge efficiently, to provide it at the point of service, and to market it within suitable business models. The article explains how this captured knowledge can be used as a customer-oriented value proposition and monetized in corresponding business models.
NAND flash memory is widely used for data storage due to its low power consumption, high throughput, short random access latency, and high density. The storage density of NAND flash memory devices increases from one generation to the next, albeit at the expense of storage reliability.
Our objective in this dissertation is to improve the reliability of NAND flash memory at a low hardware implementation cost. We investigate the error characteristics, i.e. the various noise effects of the NAND flash memory. Based on the error behavior at different life-aging stages, we develop offset calibration techniques that minimize the bit error rate (BER).
Furthermore, we introduce data compression to reduce the write amplification effect and to support the error correction code (ECC) unit. In the first scenario, the numerical results show that data compression can reduce wear-out by minimizing the amount of data that is written to the flash. In the ECC scenario, the compression gain is used to improve the ECC capability. Based on the first scenario, the write amplification effect can be halved for the considered target flash and data model. By combining ECC and data compression, the NAND flash memory lifetime improves threefold compared with uncompressed data for the same data model.
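As a back-of-the-envelope illustration of the ECC scenario, the following sketch estimates how many additional correctable errors the space freed by compression could buy, assuming a BCH-like code over GF(2^m) in which each extra correctable error costs roughly m parity bits; the page size, compression ratio, and field size are assumed values, not the dissertation's figures.

```python
# Back-of-the-envelope sketch: using the space freed by compression for extra
# ECC parity. For a BCH-like code over GF(2^m), each additional correctable
# error costs roughly m parity bits. All numbers here are assumed examples.
def extra_correctable_errors(page_bytes, compression_ratio, m=14):
    freed_bits = int(page_bytes * 8 * (1 - compression_ratio))
    return freed_bits // m


# A 2 KiB sector compressed to 90 % of its size frees enough bits for ~117 extra errors.
print(extra_correctable_errors(2048, 0.90))
```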
In order to improve the data reliability of the NAND flash memory, we investigate different ECC schemes based on concatenated codes like product codes, half-product codes, and generalized concatenated codes (GCC). We propose a construction for high-rate GCC for hard-input decoding. ECC based on soft-input decoding can significantly improve the reliability of NAND flash memories. Therefore, we propose a low-complexity soft-input decoding algorithm for high-rate GCC.
The introduction of multi-level cell (MLC) and triple-level cell (TLC) technologies reduced the reliability of flash memories significantly compared with single-level cell (SLC) flash. The reliability of flash memory suffers from various error causes. Program/erase cycles, read disturb, and cell-to-cell interference impact the threshold voltages. With pre-defined fixed read thresholds, a voltage shift increases the bit error rate (BER). This work proposes a read threshold calibration method that aims at minimizing the BER by adapting the read voltages. The adaptation of the read thresholds is based on the number of errors observed in the codeword protecting a small amount of meta-data. Simulations based on flash measurements demonstrate that this method can significantly reduce the BER of TLC memories.
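A minimal sketch of the calibration idea, assuming a Gaussian threshold-voltage model for two neighboring states and a small amount of meta-data with known content: candidate read thresholds are swept and the one that produces the fewest errors in the meta-data is kept. The voltage model and all parameters are assumptions for illustration.

```python
# Illustrative sketch of read threshold calibration: sweep candidate read
# voltages and keep the one that causes the fewest errors in a small protected
# meta-data codeword. The Gaussian cell-voltage model and all parameters are
# assumptions for the example, not measured flash characteristics.
import numpy as np

rng = np.random.default_rng(2)

# Meta-data bits with known (decodable) values, mapped to two voltage states
# whose means have drifted, e.g. due to retention or program/erase cycling.
meta_bits = rng.integers(0, 2, 2048)
means = np.where(meta_bits == 0, 1.2, 2.4) - 0.35      # both distributions shifted down
voltages = means + 0.25 * rng.standard_normal(meta_bits.size)


def errors_at(threshold):
    read_bits = (voltages >= threshold).astype(int)    # above threshold -> state "1"
    return np.count_nonzero(read_bits != meta_bits)


candidates = np.linspace(1.0, 2.5, 31)
best = min(candidates, key=errors_at)
print(f"default 1.80 V: {errors_at(1.80)} errors, calibrated {best:.2f} V: {errors_at(best)} errors")
```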