The multichannel Wiener filter (MWF) is a well-established noise reduction technique for speech processing. Most commonly, the speech component in a selected reference microphone is estimated. The choice of this reference microphone influences the broadband output signal-to-noise ratio (SNR) as well as the speech distortion. Recently, a generalized formulation for the MWF (G-MWF) was proposed that uses a weighted sum of the individual transfer functions from the speaker to the microphones to form a better speech reference, resulting in an improved broadband output SNR. For the MWF, the influence of the phase reference is often neglected, because it has no impact on the narrow-band output SNR. The G-MWF allows an arbitrary choice of the phase reference, especially in the context of spatially distributed microphones.
In this work, we demonstrate that the phase reference determines the overall transfer function and hence has an impact on both the speech distortion and the broadband output SNR. We propose two speech references that achieve a better signal-to-reverberation ratio (SRR) and an improvement in the broadband output SNR. Both proposed references are based on the phase of a delay-and-sum beamformer. Hence, the time-difference-of-arrival (TDOA) of the speech source is required to align the signals. The different techniques are compared in terms of SRR and SNR performance.
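As an illustration of the delay-and-sum phase reference underlying the proposed speech references, the following minimal numpy sketch aligns the microphone signals in the frequency domain using assumed TDOAs and averages them. The sampling rate, TDOA values, and test signal are hypothetical and not taken from the paper.

```python
import numpy as np

def delay_and_sum(frames, tdoas, fs):
    """Frequency-domain delay-and-sum beamformer.

    frames : (M, N) array with one time-domain frame per microphone
    tdoas  : (M,) time differences of arrival in seconds
    fs     : sampling rate in Hz
    """
    M, N = frames.shape
    X = np.fft.rfft(frames, axis=1)                  # per-microphone spectra
    f = np.fft.rfftfreq(N, d=1.0 / fs)               # frequency bins in Hz
    # Compensate each microphone's propagation delay, then average the aligned spectra.
    aligned = X * np.exp(2j * np.pi * f[None, :] * tdoas[:, None])
    return np.fft.irfft(aligned.mean(axis=0), n=N)

# Hypothetical example: a 440 Hz tone arriving with different delays at three microphones.
fs, N = 16000, 1024
t = np.arange(N) / fs
tdoas = np.array([0.0, 2.5e-4, 5.0e-4])
frames = np.stack([np.sin(2 * np.pi * 440 * (t - tau)) for tau in tdoas])
y = delay_and_sum(frames, tdoas, fs)
```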
Acoustic Echo Cancellation (AEC) plays a crucial role in speech communication devices to enable full-duplex communication. AEC algorithms have been studied extensively in the literature. However, device-specific details such as microphone or loudspeaker configurations are often neglected, despite their impact on the echo attenuation or near-end speech quality. In this work, we propose a method to investigate different loudspeaker-microphone configurations with respect to their contribution to the overall AEC performance. A generic AEC system consisting of an adaptive filter and a Wiener post filter is used for a fair comparison between different setups. We propose the near-end-to-residual-echo ratio (NRER) and the attenuation-of-near-end (AON) as quality measures for the full-duplex AEC performance.
This paper studies suitable models for the identification of nonlinear acoustic systems. A cascaded structure of nonlinear filters is proposed that contains several parallel branches, consisting of polynomial functions followed by a linear filter for each order of nonlinearity. The second order of nonlinearity is additionally modelled with a parallel branch containing a Volterra filter. These branches are followed by a long linear FIR filter that is able to model the room acoustics. The model is applied to the identification of a tube power amplifier feeding a guitar loudspeaker cabinet in an acoustic room. The adaptive identification is performed by the normalized least mean square (NLMS) algorithm. Compared with a generalized polynomial Hammerstein (GPH) model, the proposed structure improves the accuracy in modelling the dedicated real-world system to a greater extent than increasing the order of nonlinearity in the GPH model does.
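Since the adaptive identification relies on the NLMS algorithm, a generic NLMS sketch for a single linear FIR branch is shown below; the cascaded nonlinear structure itself is not reproduced, and the filter length, step size, and signals are assumptions.

```python
import numpy as np

def nlms_identify(x, d, L=64, mu=0.5, eps=1e-8):
    """Identify a length-L FIR model from input x and observed output d via NLMS."""
    w = np.zeros(L)                         # adaptive filter coefficients
    e = np.zeros(len(x))                    # a priori error signal
    for n in range(L - 1, len(x)):
        u = x[n - L + 1:n + 1][::-1]        # x[n], x[n-1], ..., x[n-L+1]
        e[n] = d[n] - w @ u                 # error between observation and model output
        w += mu * e[n] * u / (u @ u + eps)  # normalized coefficient update
    return w, e

# Hypothetical example: identify a short random impulse response from white noise.
rng = np.random.default_rng(0)
h = rng.standard_normal(64) * np.exp(-np.arange(64) / 10)
x = rng.standard_normal(8000)
d = np.convolve(x, h)[:len(x)]
w, e = nlms_identify(x, d)
```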
This work studies a wind noise reduction approach for communication applications in a car environment. An endfire array consisting of two microphones is considered as a substitute for an ordinary cardioid microphone capsule of the same size. Using the decomposition of the multichannel Wiener filter (MWF), a suitable beamformer and a single-channel post filter are derived. Due to the known array geometry and the location of the speech source, assumptions about the signal properties can be made to simplify the MWF beamformer and to estimate the speech and noise power spectral densities required for the post filter. Even for closely spaced microphones, the different signal properties at the microphones can be exploited to achieve a significant reduction of wind noise. The proposed beamformer approach results in an improved speech signal regarding the signal-to-noise ratio and keeps the linear speech distortion low. The derived post filter shows equal performance compared to known approaches but reduces the effort for noise estimation.
This work proposes an efficient hardware implementation of sequential stack decoding of binary block codes. The decoder can be applied for soft input decoding of generalized concatenated (GC) codes. The GC codes are constructed from inner nested binary Bose-Chaudhuri-Hocquenghem (BCH) codes and outer Reed-Solomon (RS) codes. In order to enable soft input decoding for the inner BCH block codes, a sequential stack decoding algorithm is used.
The McEliece cryptosystem is a promising candidate for post-quantum public-key encryption. In this work, we propose q-ary codes over Gaussian integers for the McEliece system and a new channel model. In this channel, the one Mannheim error channel, errors are limited to Mannheim weight one. We investigate the capacity of this channel and discuss its relation to the McEliece system. The proposed codes are based on a simple product code construction and have a low-complexity decoding algorithm. For the one Mannheim error channel, these codes achieve a higher error correction capability than maximum distance separable codes with bounded minimum distance decoding. This improves the work factor regarding decoding attacks based on information-set decoding.
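To make the channel model concrete, the sketch below shows the standard modulo reduction for Gaussian integers and the resulting Mannheim weight (a weight-one error is one of ±1, ±i after reduction). The modulus is a hypothetical Gaussian prime, and the sketch does not reproduce the paper's product-code construction.

```python
def gaussian_mod(z, p):
    """Reduce the Gaussian integer z modulo p by rounding the quotient
    to the closest Gaussian integer."""
    q = z * p.conjugate() / abs(p) ** 2
    q = complex(round(q.real), round(q.imag))
    return z - q * p

def mannheim_weight(z, p):
    """Mannheim weight: |a| + |b| of the reduced representative a + bi."""
    r = gaussian_mod(z, p)
    return abs(r.real) + abs(r.imag)

# Hypothetical Gaussian prime p = 2 + i (norm 5); the weight-one errors are ±1 and ±i.
p = complex(2, 1)
for e in (1, -1, 1j, -1j):
    assert mannheim_weight(e, p) == 1
```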
Method and device for error correction coding based on high-rate generalized concatenated codes
(2017)
Error correction coding is particularly suitable for applications in non-volatile flash memories. We describe a method for error correction encoding of data to be stored in a memory device, a corresponding method for decoding a codeword matrix resulting from the encoding method, a coding device, and a computer program for performing the methods on the coding device, using a new construction for high-rate generalized concatenated (GC) codes. The codes, which are well suited for error correction in flash memories for high reliability data storage, are constructed from inner nested binary Bose-Chaudhuri-Hocquenghem (BCH) codes and outer codes, preferably Reed-Solomon (RS) codes. For the inner codes, extended BCH codes are used, where only single parity-check codes are applied in the first level of the GC code. This enables high-rate codes.
A soft input decoding method and a decoder for generalized concatenated (GC) codes. The GC codes are constructed from inner nested block codes, such as binary Bose-Chaudhuri-Hocquenghem, BCH, codes and outer codes, such as Reed-Solomon, RS, codes. In order to enable soft input decoding for the inner block codes, a sequential stack decoding algorithm is used. Ordinary stack decoding of binary block codes requires the complete trellis of the code. In one aspect, the present invention applies instead a representation of the block codes based on the trellises of supercodes in order to reduce the memory requirements for the representation of the inner codes. This enables an efficient hardware implementation. In another aspect, there is provided a soft input decoding method and device employing a sequential stack decoding algorithm in combination with list-of-two decoding which is particularly well suited for applications that require very low residual error rates.
This letter introduces signal constellations based on multiplicative groups of Eisenstein integers, i.e., hexagonal lattices. These sets of Eisenstein integers are proposed as signal constellations for generalized spatial modulation. The algebraic properties of the new constellations are investigated and a set partitioning technique is developed. This technique can be used to design coded modulation schemes over hexagonal lattices.
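For illustration, the following sketch generates a small patch of Eisenstein integers a + bω with ω = exp(2πi/3) and verifies the hexagonal minimum distance of 1. The patch size is arbitrary, and the multiplicative-group construction and set partitioning from the letter are not reproduced.

```python
import numpy as np

omega = np.exp(2j * np.pi / 3)            # primitive cube root of unity

def eisenstein_patch(r):
    """Eisenstein integers a + b*omega with |a|, |b| <= r (hexagonal lattice points)."""
    return np.array([a + b * omega
                     for a in range(-r, r + 1)
                     for b in range(-r, r + 1)])

pts = eisenstein_patch(2)
d = np.abs(pts[:, None] - pts[None, :])   # pairwise Euclidean distances
print(d[d > 0].min())                     # minimum distance of the hexagonal lattice: 1.0
```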
Codes over quotient rings of Lipschitz integers have recently attracted some attention. This work investigates the performance of Lipschitz integer constellations for transmission over the AWGN channel by means of the constellation figure of merit. A construction of sets of Lipschitz integers that leads to a better constellation figure of merit compared to ordinary Lipschitz integer constellations is presented. In particular, it is demonstrated that the concept of set partitioning can be applied to quotient rings of Lipschitz integers where the number of elements is not a prime number. It is shown that it is always possible to partition such quotient rings into additive subgroups in a manner that the minimum Euclidean distance of each subgroup is strictly larger than in the original set. The resulting signal constellations have a better performance for transmission over an additive white Gaussian noise channel compared to Gaussian integer constellations and to ordinary Lipschitz integer constellations. In addition, we present multilevel code constructions for the new signal constellations.
Codes over quotient rings of Lipschitz integers have recently attracted some attention. This work investigates the performance of Lipschitz integer constellations for transmission over the AWGN channel by means of the constellation figure of merit. A construction of sets of Lipschitz integers is presented that leads to a better constellation figure of merit compared to ordinary Lipschitz integer constellations. In particular, it is demonstrated that the concept of set partitioning can be applied to quotient rings of Lipschitz integers where the number of elements is not a prime number. It is shown that it is always possible to partition such quotient rings into additive subgroups in a manner that the minimum Euclidean distance of each subgroup is strictly larger than in the original set. The resulting signal constellations have a better performance for transmission over an additive white Gaussian noise channel compared to Gaussian integer constellations and to ordinary Lipschitz integer constellations.
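Both of the preceding abstracts evaluate constellations by the constellation figure of merit. A minimal sketch of one common normalization (squared minimum distance divided by average symbol energy) is given below for complex two-dimensional constellations; the papers' exact normalization for four-dimensional Lipschitz integer constellations may differ.

```python
import numpy as np

def constellation_figure_of_merit(points):
    """Squared minimum Euclidean distance divided by the average symbol energy
    (one common normalization; the papers' exact definition may differ)."""
    pts = np.asarray(points, dtype=complex)
    d2 = np.abs(pts[:, None] - pts[None, :]) ** 2
    dmin2 = d2[d2 > 0].min()
    return dmin2 / np.mean(np.abs(pts) ** 2)

# Example: 4-QAM has d_min^2 = 4 and average energy 2, giving a figure of merit of 2.
qam4 = np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j])
print(constellation_figure_of_merit(qam4))   # 2.0
```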
The computational complexity of the optimal maximum likelihood (ML) detector for spatial modulation increases rapidly as more transmit antennas or larger modulation orders are employed. Hence, ML detection may be infeasible for higher bit rates. This work proposes an improved suboptimal detection algorithm based on the Gaussian approximation method. It is demonstrated that the new method is closely related to the previously published signal vector based detection and the modified maximum ratio combiner, but can improve the detection performance compared to these methods. Furthermore, the performance of different signal constellations with suboptimal detection is investigated. Simulation results indicate that the performance loss compared to ML detection depends heavily on the signal constellation, where the recently proposed Eisenstein integer constellations are beneficial compared to classical QAM or PSK constellations.
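For reference, the exhaustive maximum likelihood detector whose complexity motivates the suboptimal methods can be sketched as follows. The antenna count, constellation, and channel realization are hypothetical, and the Gaussian-approximation detector itself is not reproduced.

```python
import numpy as np

def sm_ml_detect(y, H, constellation):
    """Exhaustive ML detection for spatial modulation: one transmit antenna is
    active and sends one symbol, so every (antenna, symbol) pair is tested."""
    best, best_metric = None, np.inf
    for i in range(H.shape[1]):                      # candidate active antenna
        for s in constellation:                      # candidate modulation symbol
            metric = np.linalg.norm(y - H[:, i] * s) ** 2
            if metric < best_metric:
                best, best_metric = (i, s), metric
    return best

# Hypothetical 4x4 Rayleigh channel with a QPSK symbol sent on antenna 2.
rng = np.random.default_rng(3)
H = (rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))) / np.sqrt(2)
qpsk = np.array([1 + 1j, -1 + 1j, -1 - 1j, 1 - 1j]) / np.sqrt(2)
y = H[:, 2] * qpsk[1] + 0.05 * (rng.standard_normal(4) + 1j * rng.standard_normal(4))
print(sm_ml_detect(y, H, qpsk))          # expected: (2, qpsk[1])
```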
This letter proposes two contributions to improve the performance of transmission with generalized multistream spatial modulation (SM). In particular, a modified suboptimal detection algorithm based on the Gaussian approximation method is proposed. The proposed modifications reduce the complexity of the Gaussian approximation method and improve the performance for high signal-to-noise ratios. Furthermore, this letter introduces signal constellations based on Hurwitz integers, i.e., a 4-D lattice. Simulation results demonstrate that these signal constellations are beneficial for generalized SM with two active antennas.
The introduction of multi-level cell (MLC) and triple-level cell (TLC) technologies reduced the reliability of flash memories significantly compared with single-level cell flash. With MLC and TLC flash cells, the error probability varies for the different states. Hence, asymmetric models are required to characterize the flash channel, e.g., the binary asymmetric channel (BAC). This contribution presents a combined channel and source coding approach improving the reliability of MLC and TLC flash memories. With flash memories, data compression has to be performed at block level, considering short data blocks. We present a coding scheme suitable for blocks of 1 kB of data. The objective of the data compression algorithm is to reduce the amount of user data such that the redundancy of the error correction coding can be increased in order to improve the reliability of the data storage system. Moreover, data compression can be utilized to exploit the asymmetry of the channel to reduce the error probability. With redundant data, the proposed combined coding scheme results in a significant improvement of the program/erase cycling endurance and the data retention time of flash memories.
Generalized concatenated (GC) codes with soft-input decoding were recently proposed for error correction in flash memories. This work proposes a soft-input decoder for GC codes that is based on a low-complexity bit-flipping procedure. This bit-flipping decoder uses a fixed number of test patterns and an algebraic decoder for soft-input decoding. An acceptance criterion for the final candidate codeword is proposed. Combined with error and erasure decoding of the outer Reed-Solomon codes, this bit-flipping decoder can improve the decoding performance and reduce the decoding complexity compared to the previously proposed sequential decoding. The bit-flipping decoder achieves a decoding performance similar to a maximum likelihood decoder for the inner codes.
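The following self-contained sketch illustrates the bit-flipping idea with a fixed set of test patterns and a correlation-based acceptance metric, using a (7,4) Hamming code as a stand-in for the inner algebraic decoder; it is not the GC decoder from the paper.

```python
import numpy as np
from itertools import combinations

# Parity-check matrix of the (7,4) Hamming code; column j is the binary
# representation of j+1, so the syndrome value directly names the error position.
H = np.array([[(j + 1) >> k & 1 for j in range(7)] for k in range(3)])

def hamming_decode(hard):
    """Single-error algebraic decoder for the (7,4) Hamming code."""
    synd = H @ hard % 2
    pos = int(synd @ [1, 2, 4])            # 0 means no error detected
    c = hard.copy()
    if pos:
        c[pos - 1] ^= 1
    return c

def bit_flip_decode(r, n_flip=2):
    """Chase-like bit-flipping decoding with a fixed set of test patterns.

    r : (7,) received soft values (positive ~ bit 0, negative ~ bit 1).
    Flips subsets of the n_flip least reliable positions, decodes each test
    pattern algebraically, and keeps the candidate with the best correlation.
    The best correlation can also serve as an acceptance criterion.
    """
    hard = (r < 0).astype(int)
    unreliable = np.argsort(np.abs(r))[:n_flip]
    best, best_corr = None, -np.inf
    for k in range(n_flip + 1):
        for idx in combinations(unreliable, k):
            test = hard.copy()
            test[list(idx)] ^= 1
            cand = hamming_decode(test)
            corr = np.sum((1 - 2 * cand) * r)   # correlation with the soft values
            if corr > best_corr:
                best, best_corr = cand, corr
    return best, best_corr

# Hypothetical noisy reception of the all-zero codeword (bipolar +1 plus noise).
r = np.array([0.9, 0.2, -0.1, 1.1, 0.8, -0.3, 0.7])
print(bit_flip_decode(r))
```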
The binary asymmetric channel (BAC) is a model for the error characterization of multi-level cell (MLC) flash memories. This contribution presents a joint channel and source coding approach improving the reliability of MLC flash memories. The objective of the data compression algorithm is to reduce the amount of user data such that the redundancy of the error correction coding can be increased in order to improve the reliability of the data storage system. Moreover, data compression can be utilized to exploit the asymmetry of the channel to reduce the error probability. With MLC flash memories data compression has to be performed on block level considering short data blocks. We present a coding scheme suitable for blocks of 1 kilobyte of data.
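A minimal simulation of the binary asymmetric channel that the flash-coding abstracts above build on is sketched below; the crossover probabilities are illustrative only.

```python
import numpy as np

def binary_asymmetric_channel(bits, p01, p10, rng=None):
    """Binary asymmetric channel: a stored 0 flips with probability p01,
    a stored 1 flips with probability p10."""
    rng = np.random.default_rng() if rng is None else rng
    bits = np.asarray(bits)
    flips = np.where(bits == 0,
                     rng.random(bits.shape) < p01,
                     rng.random(bits.shape) < p10)
    return bits ^ flips.astype(bits.dtype)

# Illustrative probabilities: with p01 >> p10, a data representation containing
# fewer zeros sees fewer errors, which is the asymmetry the compression stage can exploit.
rng = np.random.default_rng(7)
x = rng.integers(0, 2, 100_000)
y = binary_asymmetric_channel(x, p01=1e-2, p10=1e-3, rng=rng)
print(np.mean(x != y))                    # about (p01 + p10) / 2 for balanced data
```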
Error correction coding based on soft-input decoding can significantly improve the reliability of flash memories. Such soft-input decoding algorithms require reliability information about the state of the memory cell. This work proposes a channel model for soft-input decoding that considers the asymmetric error characteristic of multi-level cell (MLC) and triple-level cell (TLC) memories. Based on this model, an estimation method for the channel state information is devised which avoids additional pilot data for channel estimation. Furthermore, the proposed method supports page-wise read operations.
Error correction coding (ECC) for optical communication and persistent storage systems requires high-rate codes that enable high data throughput and low residual errors. Recently, different concatenated coding schemes were proposed that are based on binary Bose-Chaudhuri-Hocquenghem (BCH) codes with low error correcting capabilities. Commonly, hardware implementations for BCH decoding are based on the Berlekamp-Massey algorithm (BMA). However, for single, double, and triple error correcting BCH codes, Peterson's algorithm can be more efficient than the BMA. The known hardware architectures of Peterson's algorithm require Galois field inversion. This inversion dominates the hardware complexity and limits the decoding speed. This work proposes an inversion-less version of Peterson's algorithm. Moreover, a decoding architecture is presented that is faster than decoders that employ inversion or the fully parallel BMA at a comparable circuit size.
This work proposes a lossless data compression algorithm for short data blocks. The proposed compression scheme combines a modified move-to-front algorithm with Huffman coding. This algorithm is applicable in storage systems where the data compression is performed on block level with short block sizes, in particular, in non-volatile memories. For block sizes in the range of 1 kB, it provides a compression gain comparable to the Lempel–Ziv–Welch algorithm. Moreover, encoder and decoder architectures are proposed that have low memory requirements and provide fast data encoding and decoding.
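A compact sketch of the move-to-front plus Huffman pipeline is given below. It uses the plain (unmodified) MTF transform and only computes Huffman code lengths, so it illustrates the principle rather than the paper's modified algorithm or its hardware architecture; the input data is made up.

```python
import heapq
from collections import Counter
from itertools import count

def mtf_encode(data):
    """Move-to-front: recently seen byte values get small output indices."""
    alphabet = list(range(256))
    out = []
    for b in data:
        i = alphabet.index(b)
        out.append(i)
        alphabet.insert(0, alphabet.pop(i))
    return out

def huffman_lengths(symbols):
    """Huffman code lengths (bits per symbol) for the given symbol stream."""
    freq = Counter(symbols)
    if len(freq) == 1:
        return {next(iter(freq)): 1}
    tie = count()                                   # tie-breaker for equal weights
    heap = [(f, next(tie), {s: 0}) for s, f in freq.items()]
    heapq.heapify(heap)
    while len(heap) > 1:
        f1, _, a = heapq.heappop(heap)
        f2, _, b = heapq.heappop(heap)
        merged = {s: l + 1 for s, l in {**a, **b}.items()}
        heapq.heappush(heap, (f1 + f2, next(tie), merged))
    return heap[0][2]

data = b"abracadabra abracadabra abracadabra"
idx = mtf_encode(data)
lengths = huffman_lengths(idx)
bits = sum(lengths[s] for s in idx)
print(f"{bits / 8:.1f} bytes after MTF+Huffman vs {len(data)} bytes raw")
```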
Embodiments are generally related to the field of channel and source coding of data to be sent over a channel, such as a communication link or a data memory. Some specific embodiments are related to a method of encoding data for transmission over a channel, a corresponding decoding method, a coding device for performing one or both of these methods and a computer program comprising instructions to cause said coding device to perform one or both of said methods.
In this paper, we propose a method to determine the active speaker for each time-frequency point in the noisy signals of a microphone array. The detection is based on a statistical model where the speech signals as well as the noise signals are assumed to be multivariate Gaussian random variables in the Fourier domain. Based on this model, we derive a maximum-likelihood detector for the active speaker. The decision is based on the a posteriori signal-to-noise ratio (SNR) of a speaker-dependent max-SNR beamformer.
This contribution presents a data compression scheme for applications in non-volatile flash memories. The objective of the data compression algorithm is to reduce the amount of user data such that the redundancy of the error correction coding can be increased in order to improve the reliability of the data storage system. The data compression is performed on block level considering data blocks of 1 kilobyte. We present an encoder architecture that has low memory requirements and provides fast data encoding.
Error correction coding for optical communication and storage requires high-rate codes that enable high data throughput and low residual errors. Recently, different concatenated coding schemes were proposed that are based on binary BCH codes with low error correcting capabilities. In this work, low-complexity hard- and soft-input decoding methods for such codes are investigated. We propose three concepts to reduce the complexity of the decoder. For the algebraic decoding we demonstrate that Peterson's algorithm can be more efficient than the Berlekamp-Massey algorithm for single, double, and triple error correcting BCH codes. We propose an inversion-less version of Peterson's algorithm and a corresponding decoding architecture. Furthermore, we propose a decoding approach that combines algebraic hard-input decoding with soft-input bit-flipping decoding. An acceptance criterion is utilized to determine the reliability of the estimated codewords. For many received codewords, this criterion indicates that the hard-decoding result is sufficiently reliable, and the costly soft-input decoding can be omitted. To reduce the memory size for the soft values, we propose a bit-flipping decoder that stores only the positions and soft values of a small number of code symbols. This method significantly reduces the memory requirements and has little adverse effect on the decoding performance.
The growing error rates of triple-level cell (TLC) and quadruple-level cell (QLC) NAND flash memories have led to the application of error correction coding with soft-input decoding techniques in flash-based storage systems. Typically, flash memory is organized in pages, where the individual bits per cell are assigned to different pages and different codewords of the error-correcting code. This page-wise encoding minimizes the read latency with hard-input decoding. As the cells age, however, soft-input decoding is eventually required to increase the decoding capability. This soft decoding requires multiple read operations. Hence, the soft-read operations reduce the achievable throughput and increase the read latency and power consumption. In this work, we investigate a different encoding and decoding approach that improves the error correction performance without increasing the number of reference voltages. We consider TLC and QLC flash memories where all bits are jointly encoded using a Gray labeling. This cell-wise encoding improves the achievable channel capacity compared with independent page-wise encoding. Errors with cell-wise read operations typically result in a single erroneous bit per cell. We present a coding approach based on generalized concatenated codes that utilizes this property.
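The Gray labeling referred to above can be illustrated directly: adjacent threshold-voltage levels differ in exactly one bit, so a read error to a neighbouring level corrupts a single bit per cell. The sketch below checks this for the eight TLC levels.

```python
def gray(level):
    """Binary-reflected Gray labeling of a cell level."""
    return level ^ (level >> 1)

labels = [gray(level) for level in range(8)]      # 8 levels = 3 bits per TLC cell
print([f"{g:03b}" for g in labels])
# Neighbouring levels differ in exactly one bit position.
for a, b in zip(labels, labels[1:]):
    assert bin(a ^ b).count("1") == 1
```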
Soft-input decoding of concatenated codes based on the Plotkin construction and BCH component codes
(2020)
Low-latency communication requires soft-input decoding of binary block codes with small to medium block lengths.
In this work, we consider generalized multiple concatenated (GMC) codes based on the Plotkin construction. These codes are similar to Reed-Muller (RM) codes. In contrast to RM codes, BCH codes are employed as component codes. This leads to improved code parameters. Moreover, a decoding algorithm is proposed that exploits the recursive structure of the concatenation. This algorithm enables efficient soft-input decoding of binary block codes with small to medium lengths. The proposed codes and their decoding achieve significant performance gains compared with RM codes and recursive GMC decoding.
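A minimal sketch of the underlying Plotkin (u | u+v) construction follows; the component codewords here are arbitrary binary vectors rather than the BCH codewords used in the paper.

```python
import numpy as np

def plotkin(u, v):
    """Plotkin construction: maps component codewords u and v of equal length n
    to the length-2n codeword (u | u+v)."""
    u, v = np.asarray(u) % 2, np.asarray(v) % 2
    return np.concatenate([u, (u + v) % 2])

u = np.array([1, 0, 1, 1])        # placeholder for the first component codeword
v = np.array([0, 1, 1, 0])        # placeholder for the second component codeword
print(plotkin(u, v))              # [1 0 1 1 1 1 0 1]
# Given a received pair (r1, r2), v can be estimated from r1 + r2 first and u
# afterwards, which is the recursive structure exploited by the GMC decoder.
```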
Large persistent memory is crucial for many applications in embedded systems and automotive computing, such as AI databases, ADAS, and cutting-edge infotainment systems. Such applications require reliable NAND flash memories made for harsh automotive conditions. However, due to high memory densities and production tolerances, the error probability of NAND flash memories has risen. As the number of program/erase cycles and the data retention time increase, the performance and dependability of non-volatile NAND flash memories suffer. The read reference voltages of the flash cells vary due to these aging processes. In this work, we consider the issue of read reference voltage adaptation. The considered estimation procedure uses shallow neural networks to estimate the read reference voltages for different life-cycle conditions with the help of histogram measurements. We demonstrate that the training data for the neural networks can be enhanced by using shifted histograms, i.e., the neural networks can be trained based on a few measurements of some extreme points used as training data. The trained neural networks generalize well to other life-cycle conditions.
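The shifted-histogram augmentation can be sketched as follows: a measured histogram and its known optimum read voltage generate additional training pairs by shifting whole bins and moving the target voltage accordingly. The histogram values, bin width, and target voltage below are hypothetical.

```python
import numpy as np

def shift_hist(hist, s):
    """Shift a histogram by s bins, padding with zeros (no wrap-around)."""
    out = np.zeros_like(hist)
    if s >= 0:
        out[s:] = hist[:len(hist) - s]
    else:
        out[:s] = hist[-s:]
    return out

def augment_with_shifts(hist, target_voltage, bin_width, max_shift=5):
    """Create additional (histogram, read-voltage) training pairs from one measurement."""
    return [(shift_hist(hist, s), target_voltage + s * bin_width)
            for s in range(-max_shift, max_shift + 1)]

# Hypothetical sparse histogram (counts per voltage bin) from one extreme-point
# measurement together with its known optimal read reference voltage.
hist = np.array([5, 40, 120, 60, 10, 0, 2, 80, 150, 30])
training_pairs = augment_with_shifts(hist, target_voltage=2.35, bin_width=0.05)
```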
Automotive computing applications like AI databases, ADAS, and advanced infotainment systems have a huge need for persistent memory. This trend requires NAND flash memories designed for extreme automotive environments. However, the error probability of NAND flash memories has increased in recent years due to higher memory density and production tolerances. Hence, strong error correction coding is needed to meet automotive storage requirements. Many errors can be corrected by soft decoding algorithms. However, soft decoding is very resource-intensive and should be avoided when possible. NAND flash memories are organized in pages, and the error correction codes are usually encoded page-wise to reduce the latency of random reads. This page-wise encoding does not reach the maximum achievable capacity. Reading soft information increases the channel capacity but at the cost of higher latency and power consumption. In this work, we consider cell-wise encoding, which also increases the capacity compared to page-wise encoding. We analyze the cell-wise processing of data in triple-level cell (TLC) NAND flash and show the performance gain when using Low-Density Parity-Check (LDPC) codes. In addition, we investigate a coding approach with page-wise encoding and cell-wise reading.
Non-volatile NAND flash memories store information as an electrical charge. Different read reference voltages are applied to read the data. However, the threshold voltage distributions vary due to aging effects like program erase cycling and data retention time. It is necessary to adapt the read reference voltages for different life-cycle conditions to minimize the error probability during readout. In the past, methods based on pilot data or high-resolution threshold voltage histograms were proposed to estimate the changes in voltage distributions. In this work, we propose a machine learning approach with neural networks to estimate the read reference voltages. The proposed method utilizes sparse histogram data for the threshold voltage distributions. For reading the information from triple-level cell (TLC) memories, several read reference voltages are applied in sequence. We consider two histogram resolutions. The simplest histogram consists of the zero-and-one ratios for the hard decision read operation, whereas a higher resolution is obtained by considering the quantization levels for soft-input decoding. This approach does not require pilot data for the voltage adaptation. Furthermore, only a few measurements of extreme points of the threshold voltage distributions are required as training data. Measurements with different conditions verify the proposed approach. The resulting neural networks perform well under other life-cycle conditions.
Reliability is a crucial aspect of non-volatile NAND flash memories, and it is essential to thoroughly analyze the channel to prevent errors and ensure accurate readout. Estimating the read reference voltages (RRVs) is a significant challenge due to the multitude of physical effects involved. This raises the question of which features are useful and necessary for the RRV estimation. Various possible features require specialized hardware or specific readout techniques to be usable. In contrast, we consider sparse histograms based on the decision thresholds for hard-input and soft-input decoding. These offer a distinct advantage as they are derived directly from the raw readout data without the need for decoding. This paper focuses on the information-theoretic study of different features, especially on the exploration of the mutual information (MI) between the feature vector and the RRV. In particular, we investigate the dependency of the MI on the resolution of the histograms. With respect to the RRV estimation, sparse histograms provide sufficient information for near-optimum estimation.
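The mutual information between a discretized feature and the RRV can be computed directly from a joint count table; a minimal sketch with made-up counts is shown below.

```python
import numpy as np

def mutual_information(joint_counts):
    """Mutual information (bits) of two discrete variables from a joint count table."""
    p = np.asarray(joint_counts, dtype=float)
    p /= p.sum()                                   # joint probabilities
    px = p.sum(axis=1, keepdims=True)              # marginal of the row variable
    py = p.sum(axis=0, keepdims=True)              # marginal of the column variable
    mask = p > 0
    return float(np.sum(p[mask] * np.log2(p[mask] / (px @ py)[mask])))

# Toy joint count table: rows index a quantized histogram feature,
# columns a quantized read reference voltage.
counts = np.array([[30,  5,  1],
                   [ 6, 40,  8],
                   [ 1,  7, 22]])
print(mutual_information(counts))
```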
Spatial modulation (SM) is a low-complexity multiple-input/multiple-output transmission technique that combines index modulation and quadrature amplitude modulation for wireless communications. In this work, we consider the problem of link adaptation for generalized spatial modulation (GSM) systems that use multiple active transmit antennas simultaneously. Link adaptation algorithms require a real-time estimation of the link quality of the time-variant communication channels, e.g., by means of estimating the mutual information. However, determining the mutual information of SM is challenging because no closed-form expressions have been found so far. Recently, multilayer feedforward neural networks were applied to compute the achievable rate of an index modulation link. However, only a small SM system with two transmit and two receive antennas was considered. In this work, we consider a similar approach but investigate larger GSM systems with multiple active antennas. We analyze the portions of mutual information related to antenna selection and the IQ modulation processes, which depend on the GSM variant and the signal constellation.
The encoding of antenna patterns with generalized spatial modulation, as well as with other index modulation techniques, requires w-out-of-n encoding, where all binary vectors of length n have the same weight w. This constant-weight property cannot be obtained by conventional linear coding schemes. In this work, we propose a new class of constant-weight codes that result from the concatenation of convolutional codes with constant-weight block codes. These constant-weight convolutional codes are nonlinear binary trellis codes that can be decoded with the Viterbi algorithm. Some constructed constant-weight convolutional codes are optimum free distance codes. Simulation results demonstrate that the decoding performance with Viterbi decoding is close to the performance of the best-known linear codes. Similarly, simulation results for spatial modulation with simple on-off keying show a significant coding gain with the proposed coded index modulation scheme.
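As an illustration of the w-out-of-n constraint itself (not of the proposed constant-weight convolutional codes), the enumerative mapping below converts an integer index into a length-n binary vector of constant weight w using the combinatorial number system; the parameters n and w are arbitrary.

```python
from math import comb

def index_to_constant_weight(index, n, w):
    """Map an integer 0 <= index < C(n, w) to a length-n binary vector of weight w
    via the combinatorial number system (enumerative encoding)."""
    assert 0 <= index < comb(n, w)
    vec = [0] * n
    for pos in range(n):
        if w == 0:
            break
        remaining = n - pos - 1
        c = comb(remaining, w)           # number of weight-w suffixes starting with a 0
        if index < c:
            continue                     # place a 0, keep the index
        vec[pos] = 1                     # place a 1, consume c indices
        index -= c
        w -= 1
    return vec

n, w = 8, 3
vectors = [index_to_constant_weight(i, n, w) for i in range(comb(n, w))]
assert all(sum(v) == w for v in vectors)
print(vectors[0], vectors[-1])           # first and last weight-3 vectors of length 8
```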
List decoding for concatenated codes based on the Plotkin construction with BCH component codes
(2021)
Reed-Muller codes are a popular code family based on the Plotkin construction. Recently, these codes have regained some interest due to their close relation to polar codes and their low-complexity decoding. We consider a similar code family, i.e., the Plotkin concatenation with binary BCH component codes. This construction is more flexible regarding the attainable code parameters. In this work, we consider a list-based decoding algorithm for the Plotkin concatenation with BCH component codes. The proposed list decoding leads to a significant coding gain with only a small increase in computational complexity. Simulation results demonstrate that the Plotkin concatenation with the proposed decoding achieves near maximum likelihood decoding performance. This coding scheme can outperform polar codes for moderate code lengths.
Reed-Muller (RM) codes have recently regained some interest in the context of low latency communications and due to their relation to polar codes. RM codes can be constructed based on the Plotkin construction. In this work, we consider concatenated codes based on the Plotkin construction, where extended Bose-Chaudhuri-Hocquenghem (BCH) codes are used as component codes. This leads to improved code parameters compared to RM codes. Moreover, this construction is more flexible concerning the attainable code rates. Additionally, new soft-input decoding algorithms are proposed that exploit the recursive structure of the concatenation and the cyclic structure of the component codes. First, we consider the decoding of the cyclic component codes and propose a low complexity hybrid ordered statistics decoding algorithm. Next, this algorithm is applied to list decoding of the Plotkin construction. The proposed list decoding approach achieves near-maximum-likelihood performance for codes with medium lengths. The performance is comparable to state-of-the-art decoders, whereas the complexity is reduced.
This work investigates data compression algorithms for applications in non-volatile flash memories. The main goal of the data compression is to minimize the amount of user data such that the redundancy of the error correction coding can be increased and the reliability of the error correction can be improved. A compression algorithm is proposed that combines a modified move-to-front algorithm with Huffman coding. The proposed data compression algorithm has low complexity, but provides a compression gain comparable to the Lempel-Ziv-Welch algorithm.