Code-based cryptography is a promising candidate for post-quantum public-key encryption. The classic McEliece system uses binary Goppa codes, which are known for their good error correction capability. However, the key generation and decoding procedures of the classic McEliece system have a high computational complexity. Recently, q-ary concatenated codes over Gaussian integers were proposed for the McEliece cryptosystem together with the one-Mannheim error channel, where the error values are limited to Mannheim weight one. For this channel, concatenated codes over Gaussian integers achieve a higher error correction capability than maximum distance separable (MDS) codes with bounded minimum distance decoding. This improves the work factor regarding decoding attacks based on information-set decoding. This work proposes an improved construction for codes over Gaussian integers. These generalized concatenated codes extend the rate region where the work factor is beneficial compared to MDS codes. They allow for shorter public keys at the same level of security as the classic Goppa codes. Such codes are beneficial for lightweight code-based cryptosystems.
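As an illustration of the one-Mannheim error channel mentioned in this abstract, the following Python sketch restricts error values to the Gaussian-integer units, which are exactly the elements of Mannheim weight one. The reduction routine and the example prime p = 13 are generic textbook choices, not taken from the cited construction.

```python
# Illustrative sketch (not the authors' implementation): the one-Mannheim
# error channel over a Gaussian-integer field, assuming p = a^2 + b^2 and
# pi = a + b*i, so that Z[i]/(pi) is isomorphic to GF(p).
import random

def mod_pi(z: complex, pi: complex) -> complex:
    """Reduce a Gaussian integer z modulo pi (Mannheim-style reduction)."""
    p = pi.real**2 + pi.imag**2          # norm of pi, a prime
    q = z * pi.conjugate() / p           # exact complex quotient
    q_rounded = complex(round(q.real), round(q.imag))
    return z - q_rounded * pi

# Units of Z[i]: the only non-zero error values of Mannheim weight one.
WEIGHT_ONE_ERRORS = [1, -1, 1j, -1j]

def one_mannheim_channel(codeword, pi, num_errors):
    """Add num_errors random Mannheim-weight-one errors to a codeword."""
    received = list(codeword)
    for pos in random.sample(range(len(received)), num_errors):
        received[pos] = mod_pi(received[pos] + random.choice(WEIGHT_ONE_ERRORS), pi)
    return received

# Example: p = 13 = 3^2 + 2^2, pi = 3 + 2i, a length-8 all-zero codeword.
print(one_mannheim_channel([0j] * 8, 3 + 2j, num_errors=3))
```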
Large-scale quantum computers threaten today's public-key cryptosystems. The code-based McEliece and Niederreiter cryptosystems are among the most promising candidates for post-quantum cryptography. Recently, a new class of q-ary product codes over Gaussian integers, together with an efficient decoding algorithm, was proposed for the McEliece cryptosystem. It was shown that these codes achieve a higher work factor for information-set decoding attacks than maximum distance separable (MDS) codes of comparable length and dimension. In this work, we adapt this q-ary product code construction to codes over Eisenstein integers. We propose a new syndrome decoding method which is applicable to Niederreiter cryptosystems. The code parameters and work factors for information-set decoding are comparable to codes over Gaussian integers. Hence, the new construction is not favorable for the McEliece system. Nevertheless, it is beneficial for the Niederreiter system, where it achieves larger message lengths. While the Niederreiter and McEliece systems offer the same level of security, the Niederreiter system can be advantageous for some applications, e.g., it enables digital signatures. The proposed coding scheme is interesting for lightweight Niederreiter cryptosystems and embedded security due to the short code lengths and low decoding complexity.
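For readers unfamiliar with the Niederreiter system, the sketch below shows its basic encryption step, where the ciphertext is the syndrome of a message-carrying low-weight error vector. The matrix and field size are toy values; the actual scrambled parity-check matrix and the proposed syndrome decoder are not reproduced here.

```python
# Minimal sketch of the Niederreiter encryption step, assuming a public
# scrambled parity-check matrix H over GF(q); names are illustrative only.
import numpy as np

def niederreiter_encrypt(H: np.ndarray, e: np.ndarray, q: int) -> np.ndarray:
    """Ciphertext = syndrome of the message-carrying error vector e (mod q)."""
    return (H @ e) % q

# Toy example over GF(5): a 2x4 parity-check matrix and an error vector of
# weight 2 carrying the message in its non-zero positions and values.
H = np.array([[1, 2, 0, 4],
              [0, 1, 3, 2]])
e = np.array([0, 1, 0, 4])
print(niederreiter_encrypt(H, e, q=5))   # the receiver syndrome-decodes this
```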
Side Channel Attack Resistance of the Elliptic Curve Point Multiplication using Gaussian Integers
(2020)
Elliptic curve cryptography is a cornerstone of embedded security. However, hardware implementations of the elliptic curve point multiplication are prone to side channel attacks. In this work, we present a new key expansion algorithm which improves the resistance against timing and simple power analysis attacks. Furthermore, we consider a new concept for calculating the point multiplication, where the points of the curve are represented as Gaussian integers. Gaussian integers are a subset of the complex numbers such that the real and imaginary parts are integers. Since Gaussian integer fields are isomorphic to prime fields, this concept is suitable for many elliptic curves. Representing the key by a Gaussian integer expansion is beneficial to reduce the computational complexity and the memory requirements of a secure hardware implementation.
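The isomorphism mentioned in the abstract can be illustrated with a small example: for a prime p = a^2 + b^2 and π = a + bi, reduction modulo π maps the residues 0, …, p-1 onto p distinct Gaussian integers that form a field isomorphic to GF(p). The sketch below uses p = 13 purely for illustration; it is not the key expansion algorithm of the paper.

```python
# Hedged illustration of the Gaussian-integer field: for p = a^2 + b^2 and
# pi = a + b*i, the residues modulo pi form a field isomorphic to GF(p).
def mod_pi(z: complex, pi: complex) -> complex:
    p = pi.real**2 + pi.imag**2
    q = z * pi.conjugate() / p
    return z - complex(round(q.real), round(q.imag)) * pi

p, pi = 13, 3 + 2j                      # 13 = 3^2 + 2^2
field = sorted({mod_pi(k, pi) for k in range(p)}, key=lambda z: (z.real, z.imag))
print(len(field), field)                # 13 distinct Gaussian-integer residues

# The ring operations carry over: (x*y) mod pi corresponds to (x*y) mod p.
x, y = mod_pi(5, pi), mod_pi(11, pi)
assert mod_pi(x * y, pi) == mod_pi((5 * 11) % p, pi)
```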
Many resource-constrained systems still rely on symmetric cryptography for verification and authentication. Asymmetric cryptographic systems provide higher security levels, but are very computationally intensive. Hence, embedded systems can benefit from hardware assistance, i.e., coprocessors optimized for the required public-key operations. In this work, we propose an elliptic curve cryptographic coprocessor design for resource-constrained systems. Many such coprocessor designs consider only special (Solinas) prime fields, which enable a low-complexity modular arithmetic. Other implementations support arbitrary prime curves using the Montgomery reduction. These implementations typically require more time for the point multiplication. We present a coprocessor design that has low area requirements and enables a trade-off between performance and flexibility. The point multiplication can be performed either using fast arithmetic based on Solinas primes or using a slower, but flexible, Montgomery modular arithmetic.
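The trade-off described above rests on the cost of the modular reduction. The following sketch shows the generic Montgomery reduction (REDC) that supports arbitrary prime moduli; the small parameters are illustrative only, whereas Solinas primes allow an even cheaper reduction exploiting their special form.

```python
# Sketch of the Montgomery reduction (REDC) used for arbitrary prime curves;
# the modulus and radix below are toy values for illustration.
def montgomery_redc(T: int, n: int, R: int, n_prime: int) -> int:
    """Compute T * R^{-1} mod n without a division by n (REDC)."""
    m = ((T % R) * n_prime) % R
    t = (T + m * n) // R                      # division by R is exact
    return t - n if t >= n else t

n = 101                                       # an arbitrary odd (here prime) modulus
R = 1 << 7                                    # R = 2^k > n, a power of two
n_prime = (-pow(n, -1, R)) % R                # n' = -n^{-1} mod R, precomputed once

# Montgomery multiplication of a and b: convert to Montgomery form, REDC back.
a, b = 57, 83
a_bar, b_bar = (a * R) % n, (b * R) % n
prod_bar = montgomery_redc(a_bar * b_bar, n, R, n_prime)
assert montgomery_redc(prod_bar, n, R, n_prime) == (a * b) % n
```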
Large-scale quantum computers threaten the security of today's public-key cryptography. The McEliece cryptosystem is one of the most promising candidates for post-quantum cryptography. However, the McEliece system has the drawback of large public keys. Similar to other public-key cryptosystems, the McEliece system has a comparably high computational complexity. Embedded devices often lack the computational resources required to run such systems with sufficiently low latency. Hence, these systems require hardware acceleration. Lately, a generalized concatenated code construction was proposed together with a restrictive channel model, which allows for much smaller public keys at comparable security levels. In this work, we propose a hardware decoder suitable for a McEliece system based on these generalized concatenated codes. The results show that such systems are suitable for resource-constrained embedded devices.
Reed-Muller (RM) codes have recently regained some interest in the context of low latency communications and due to their relation to polar codes. RM codes can be constructed using the Plotkin construction. In this work, we consider concatenated codes based on the Plotkin construction, where extended Bose-Chaudhuri-Hocquenghem (BCH) codes are used as component codes. This leads to improved code parameters compared to RM codes. Moreover, this construction is more flexible concerning the attainable code rates. Additionally, new soft-input decoding algorithms are proposed that exploit the recursive structure of the concatenation and the cyclic structure of the component codes. First, we consider the decoding of the cyclic component codes and propose a low-complexity hybrid ordered statistics decoding algorithm. Next, this algorithm is applied to list decoding of the Plotkin construction. The proposed list decoding approach achieves near-maximum-likelihood performance for codes of medium length. The performance is comparable to state-of-the-art decoders, whereas the complexity is reduced.
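A minimal sketch of the Plotkin |u|u+v| construction underlying both RM codes and the proposed concatenation is given below; the component codewords are toy vectors rather than actual extended BCH codewords.

```python
# Minimal sketch of the Plotkin |u|u+v| construction over GF(2): a codeword of
# the concatenated code is built from codewords u and v of two component codes
# of the same length. Recursive decoders exploit that (u) XOR (u+v) = v.
from typing import List

def plotkin_combine(u: List[int], v: List[int]) -> List[int]:
    """Return the |u|u+v| codeword (componentwise XOR for the right half)."""
    assert len(u) == len(v)
    return u + [(a ^ b) for a, b in zip(u, v)]

# Toy component codewords of length 4 (fixed example vectors, not BCH codewords).
u = [1, 0, 1, 1]
v = [0, 1, 1, 0]
print(plotkin_combine(u, v))             # length-8 codeword |u|u+v|
```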
The code-based McEliece cryptosystem is a promising candidate for post-quantum cryptography. The sender encodes a message using a public scrambled generator matrix and adds a random error vector. In this work, we consider q-ary codes and restrict the Lee weight of the added error symbols. This leads to an increased error correction capability and a larger work factor for information-set decoding attacks. In particular, we consider codes over an extension field and use the one-Lee error channel, which restricts the error values to Lee weight one. For this channel model, generalized concatenated codes can achieve high error correction capabilities. We discuss the decoding of these codes and the possible gain from decoding beyond the guaranteed error correction capability.
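The one-Lee error channel can be illustrated as follows: the Lee weight of a symbol x modulo q is min(x, q-x), so the only non-zero error values of Lee weight one are +1 and -1. The sketch below uses q = 7 purely for illustration.

```python
# Illustration of the one-Lee error channel over the integers modulo q.
import random

def lee_weight(x: int, q: int) -> int:
    """Lee weight of a symbol: distance to zero on the ring Z_q."""
    x %= q
    return min(x, q - x)

def one_lee_channel(codeword, q, num_errors):
    """Add errors restricted to Lee weight one at random positions."""
    received = list(codeword)
    for pos in random.sample(range(len(received)), num_errors):
        received[pos] = (received[pos] + random.choice([1, -1])) % q
    return received

q = 7
print([lee_weight(x, q) for x in range(q)])    # [0, 1, 2, 3, 3, 2, 1]
print(one_lee_channel([0] * 8, q, num_errors=2))
```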
Digital signatures for verifying the integrity of data, for example of software updates, are becoming increasingly important. In the field of embedded systems, symmetric encryption schemes are currently still predominantly used to compute an authentication code because of their low complexity. Asymmetric cryptosystems are computationally more demanding, but offer more security because the key used for authentication does not have to be kept secret. Asymmetric signature schemes are typically computed in two stages. The key is not applied directly to the data, but to its hash value, which is computed beforehand using a hash function. To use these schemes in embedded systems, the hash function must provide a sufficiently high data throughput. This contribution presents an efficient hardware implementation of the SHA-256 hash function.
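The two-stage signature flow described above can be sketched as follows; only the hashing step, which the presented hardware accelerates, is shown, and the asymmetric signing primitive is left abstract.

```python
# Sketch of the hash-then-sign flow: the private key is not applied to the
# data itself but to its SHA-256 digest. The signing primitive is omitted.
import hashlib

def digest_for_signing(data: bytes) -> bytes:
    """SHA-256 digest of the data, e.g. of a software update image."""
    return hashlib.sha256(data).digest()

update_image = b"firmware image contents ..."
h = digest_for_signing(update_image)
print(h.hex())        # this 32-byte value is what the asymmetric scheme signs
```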
Today, many resource-constrained systems, such as embedded systems, still rely on symmetric cryptography for authentication and digital signatures. Asymmetric cryptography provides a higher security level, but software implementations of public-key algorithms on small embedded systems are extremely slow. Hence, such embedded systems require hardware assistance, i.e., crypto coprocessors optimized for public-key operations. Many such coprocessor designs aim at high computational performance. In this work, an area-efficient elliptic curve cryptography (ECC) coprocessor is presented for applications in small embedded systems where high-performance coprocessors are too costly. We propose a simple control unit with a small instruction set that supports different ECC point multiplication (PM) algorithms. The control unit reduces the logic and the number of registers compared with other implementations of the ECC point multiplication.
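To make the accelerated operation concrete, the sketch below shows a plain double-and-add point multiplication in affine coordinates over a toy curve. The curve, field size, and coordinate system are illustrative; the presented coprocessor sequences such field operations through its small instruction set and supports further PM algorithms.

```python
# Toy sketch of an ECC point multiplication (PM) in affine coordinates over a
# small prime field; real coprocessors use large standardized curves and
# typically projective coordinates, so this only illustrates the operation.
P_MOD, A = 97, 2                         # illustrative curve y^2 = x^3 + 2x + 3 over GF(97)

def point_add(P, Q):
    if P is None: return Q               # None encodes the point at infinity
    if Q is None: return P
    (x1, y1), (x2, y2) = P, Q
    if x1 == x2 and (y1 + y2) % P_MOD == 0:
        return None                      # P + (-P) = point at infinity
    if P == Q:
        lam = (3 * x1 * x1 + A) * pow(2 * y1, -1, P_MOD) % P_MOD
    else:
        lam = (y2 - y1) * pow(x2 - x1, -1, P_MOD) % P_MOD
    x3 = (lam * lam - x1 - x2) % P_MOD
    return (x3, (lam * (x1 - x3) - y1) % P_MOD)

def point_mul(k, P):
    """Left-to-right double-and-add; one of several PM algorithms a small
    control unit could sequence from field-level instructions."""
    R = None
    for bit in bin(k)[2:]:
        R = point_add(R, R)
        if bit == '1':
            R = point_add(R, P)
    return R

# The base point must lie on the curve; (0, 10) works since 10^2 = 3 mod 97.
print(point_mul(29, (0, 10)))
```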
The introduction of multi-level cell (MLC) and triple-level cell (TLC) technologies reduced the reliability of flash memories significantly compared with single-level cell (SLC) flash. The reliability of the flash memory suffers from various error causes. Program/erase cycles, read disturb, and cell-to-cell interference impact the threshold voltages. With pre-defined, fixed read thresholds, a voltage shift increases the bit error rate (BER). This work proposes a read threshold calibration method that aims at minimizing the BER by adapting the read voltages. The adaptation of the read thresholds is based on the number of errors observed in the codeword protecting a small amount of meta-data. Simulations based on flash measurements demonstrate that this method can significantly reduce the BER of TLC memories.
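A possible reading of the proposed calibration, sketched under the assumption of a simple search over candidate read-voltage offsets: read the small metadata codeword at each candidate threshold and keep the one with the fewest observed errors. The flash-read and error-count functions are placeholders, not the interface of an actual controller.

```python
# Hedged sketch of read-threshold calibration based on the error count of a
# small metadata codeword; read_metadata_codeword() and count_errors() are
# placeholders for the flash interface and the ECC decoder.
import random

def calibrate_read_threshold(default_threshold, candidate_offsets,
                             read_metadata_codeword, count_errors):
    best_threshold, best_errors = default_threshold, None
    for offset in candidate_offsets:
        threshold = default_threshold + offset
        errors = count_errors(read_metadata_codeword(threshold))
        if best_errors is None or errors < best_errors:
            best_threshold, best_errors = threshold, errors
    return best_threshold     # use this threshold for subsequent page reads

# Toy usage with a simulated flash read whose "true" optimum lies at +2.
def fake_read(threshold):                 # placeholder flash read
    return threshold
def fake_count(raw):                      # placeholder ECC error count
    return abs(raw - 2) + random.randint(0, 1)
print(calibrate_read_threshold(0, range(-3, 4), fake_read, fake_count))
```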