Despite the many new tools and methodologies adopted in the road infrastructure sector, the performance of road infrastructure projects is not consistently improving. Given that the volume of projects undertaken is forecast to increase every year, this is a substantial issue for the sector. This work therefore focuses on the principles of Blockchain Technology, the road infrastructure sector, and information exchange, with the aim of using the advantages of Blockchain Technology to help overcome the various challenges along the life cycle of road infrastructure projects.
Within the scope of this thesis, two studies were conducted. First, focus groups were used to explore where the road infrastructure sector stands in terms of Industry 4.0 and to gain a better understanding of whether and where the principles of Blockchain Technology can be used when managing projects in the sector. Second, semi-structured interviews were administered with experts from the road infrastructure sector and experts in Blockchain Technology to better understand the interrelation between these two areas. Based on the outcomes of the two studies, technology barriers and enablers were explored with the aim of improving information exchange within the road infrastructure sector.
The two studies revealed strong interrelations between the principles of Blockchain Technology, project management within the road infrastructure sector, and information exchange. These interrelations are complex and diverse, but overall it can be concluded that adopting the principles of Blockchain Technology in the field of information exchange improves the management of road infrastructure projects. Based on the two studies, a theoretical framework was developed.
In summary, this research showed that trust is an important factor that forms the foundation for communication and for ensuring proper information exchange. Within the scope of this thesis, it was demonstrated that the principles of Blockchain Technology can be used to increase transparency, traceability, and immutability in the information exchange along the life cycle of road infrastructure projects.
This thesis presents the development of two different state-feedback controllers to solve the trajectory tracking problem, in which the vessel must reach and follow a time-varying reference trajectory. The motion problem was addressed for a full-scale, fully actuated surface vessel whose dynamic model had unknown hydrodynamic and propulsion parameters; these were identified through an experimental maneuver-based identification process. The resulting dynamic model was then used to develop the controllers. The first was a backstepping controller, designed with a local exponential stability proof. The second, a nonlinear model predictive controller (NMPC), was developed to minimize the tracking error while respecting the thrusters' constraints. Both controllers handled the thruster allocation problem and counteracted environmental disturbance forces such as current, waves, and wind. The effectiveness of these approaches was verified in simulation using Matlab/Simulink and GRAMPC (in the case of the NMPC), as well as in experiments, where the controllers were applied to the vessel performing docking maneuvers on the Rhine River in Constance, Germany.
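To make the tracking objective concrete, the following minimal kinematic sketch computes the pose error of a 3-DOF surface vessel in the body frame (illustrative names and model, not the thesis' controller); controllers of both families drive such an error toward zero.

```python
import numpy as np

def rotation(psi):
    """Rotation matrix from body frame to earth frame for heading psi."""
    return np.array([[np.cos(psi), -np.sin(psi), 0.0],
                     [np.sin(psi),  np.cos(psi), 0.0],
                     [0.0,          0.0,         1.0]])

def tracking_error(eta, eta_ref):
    """Error of pose eta = [x, y, psi] w.r.t. the reference, in body frame."""
    e = eta - eta_ref
    e[2] = (e[2] + np.pi) % (2.0 * np.pi) - np.pi  # wrap heading to (-pi, pi]
    return rotation(eta[2]).T @ e
```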
Public-key cryptographic algorithms are an essential part of today's cyber security, since they are required for key exchange protocols, digital signatures, and authentication. However, large-scale quantum computers threaten the security of the most widely used public-key cryptosystems. Hence, the National Institute of Standards and Technology (NIST) is currently running a standardization process for post-quantum secure public-key cryptography. One type of such systems is based on the NP-complete problem of decoding random linear codes and is therefore called code-based cryptography.

The best-known code-based cryptographic system is the McEliece system, proposed in 1978 by Robert McEliece. It uses a scrambled generator matrix as the public key, and the original generator matrix together with the scrambling as the private key. To encrypt a message, it is encoded with the public code and a random but correctable error vector is added. Only the legitimate receiver can correct the errors and decrypt the message using the private-key generator matrix. The original proposal of the McEliece system was based on binary Goppa codes, which are also being considered for standardization. While these codes seem to be a secure choice, their public keys are extremely large, limiting the practicality of such systems. Many other code families have been proposed for the McEliece system, but many of them are considered insecure because attacks exist that use the known code structure to recover the private key.

The security of code-based cryptosystems depends mainly on the number of errors added by the sender, which is limited by the error correction capability of the code. Hence, to obtain high security with relatively short codes, one needs a high error correction capability. Maximum distance separable (MDS) codes were therefore proposed for these systems, since they are optimal for the Hamming distance. To increase the error correction capability further, we propose q-ary codes over different metrics. Many code families have a higher minimum distance in some metric other than the Hamming metric, leading to an increased error correction capability over that metric. To exploit this, one must restrict not only the number of errors but also their values. In this work, we propose the weight-one error channel, which restricts the error values to weight one and can be applied to different metrics. In addition, we propose several concatenated code constructions that make use of this restriction of error values. For each of these constructions, we discuss the usability in code-based cryptography and compare them to other state-of-the-art code-based cryptosystems. The proposed code constructions show that restricting the error values allows for significantly smaller public keys in code-based cryptographic systems. Furthermore, the use of concatenated code constructions allows for low-complexity decoding and therefore an efficient cryptosystem.
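The encryption step described above can be illustrated with a short sketch (an insecure toy example with a made-up public matrix; key generation and decoding are omitted):

```python
import numpy as np

# Toy McEliece-style encryption: encode the message with the public
# (scrambled) generator matrix and add t random bit errors.
def encrypt(msg, G_pub, t, rng):
    c = msg @ G_pub % 2                                 # encode in public code
    err_pos = rng.choice(len(c), size=t, replace=False)
    c[err_pos] ^= 1                                     # add weight-t error
    return c

rng = np.random.default_rng(0)
G_pub = np.array([[1, 0, 0, 1, 1],                      # hypothetical 3x5 key
                  [0, 1, 0, 1, 0],
                  [0, 0, 1, 0, 1]])
ciphertext = encrypt(np.array([1, 0, 1]), G_pub, t=1, rng=rng)
```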
Particularly for manufactured products subject to aesthetic evaluation, the industrial manufacturing process must be monitored and visual defects detected. For this purpose, computer-vision-integrated inspection systems are increasingly being used. In optical inspection based on cameras or range scanners, typically only a few examples are known before novel examples are inspected. Consequently, no large data set of non-defective and defective examples is available to train a classifier, and methods that work with limited or weak supervision must be applied. For such scenarios, I propose new data-efficient machine learning approaches based on one-class learning that reduce the need for supervision in industrial computer vision tasks. The developed novelty detection model automatically extracts features from the input images and is trained only on the available non-defective reference data. On top of the feature extractor, a one-class classifier based on recent developments in deep learning is placed. I evaluate the novelty detector in an industrial inspection scenario and on state-of-the-art benchmarks from the machine learning community. In the second part of this work, the model is improved by using a small number of novel defective examples, thereby incorporating another source of supervision. The targeted real-world inspection unit is based on a camera array and flashing-light illumination, allowing inline capturing of multichannel images at a high rate. Optionally, range data such as laser or Lidar signals can be integrated using the developed targetless data fusion method.
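A minimal sketch of this one-class setup, with synthetic stand-in vectors in place of learned deep features and a classic one-class SVM in place of the deep one-class classifier:

```python
import numpy as np
from sklearn.svm import OneClassSVM

# Train only on non-defective reference data, then flag novelties.
rng = np.random.default_rng(0)
ok_features = rng.normal(0.0, 1.0, size=(200, 64))    # non-defective refs
test_features = rng.normal(0.8, 1.5, size=(10, 64))   # possibly defective

clf = OneClassSVM(nu=0.05, kernel="rbf", gamma="scale").fit(ok_features)
labels = clf.predict(test_features)                   # +1 normal, -1 novelty
```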
Nowadays, most digital modulation schemes are based on conventional signal constellations that have no algebraic group, ring, or field properties, e.g. square quadrature-amplitude modulation constellations. Signal constellations with algebraic structure can enhance system performance. For instance, multidimensional signal constellations based on dense lattices can achieve performance gains due to the dense packing. Moreover, the algebraic structure enables low-complexity decoding and detection schemes. In this work, signal constellations with algebraic properties and their application in spatial modulation transmission schemes are investigated. Several design approaches for two- and four-dimensional signal constellations based on Gaussian, Eisenstein, and Hurwitz integers are shown, and detection algorithms with reduced complexity are proposed. It is shown that the proposed Eisenstein and Hurwitz constellations combined with the proposed suboptimal detection can outperform conventional two-dimensional constellations with ML detection.
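For illustration, a tiny Gaussian-integer constellation (complex numbers with integer real and imaginary parts) with minimum-distance detection over an AWGN channel might look as follows; the parameters are illustrative, not the constructions from this work:

```python
import numpy as np

points = np.array([a + 1j * b for a in range(-1, 2) for b in range(-1, 2)])

def detect(y):
    """Return the constellation point closest to the received symbol y."""
    return points[np.argmin(np.abs(points - y))]

rng = np.random.default_rng(1)
x = rng.choice(points)
y = x + rng.normal(scale=0.1) + 1j * rng.normal(scale=0.1)
print(x, detect(y))    # with mild noise the detected point matches x
```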
The influence of sleep on human life, including its physiological, psychological, and mental aspects, is remarkable. It is therefore essential to apply appropriate therapy in the case of sleep disorders. For this, however, the irregularities must first be recognised, preferably in a way that is convenient for the person concerned. This dissertation, structured as a composition of research articles, presents the development of mathematically based algorithmic principles for a sleep analysis system. The particular focus is on the classification of sleep stages with a minimal set of physiological parameters. In addition, aspects of using the sleep analysis system as part of more complex healthcare systems are explored. The design of hardware for non-obtrusive measurement of relevant physiological parameters and the use of such systems to detect other sleep disorders, such as sleep apnoea, are also addressed. Based on the investigations carried out, multinomial logistic regression was selected as the basis for development. By following a methodical procedure, the number of physiological parameters necessary for the classification of sleep stages was successively reduced to two: respiratory and movement signals, which can be measured contactlessly. A prototype implementation of the developed algorithms was built to validate the proposed method, and an evaluation of 19,324 sleep epochs was carried out. The results, with an achieved accuracy of 73% in the classification of Wake/NREM/REM stages and a Cohen's kappa of 0.44, outperform the state of the art and demonstrate the appropriateness of the selected approach. In the future, this method could enable convenient, cost-effective, and accurate sleep analysis, allowing sleep disorders to be detected at an early stage so that therapy can be initiated as soon as possible, thus improving the general population's health and quality of life.
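A minimal sketch of this classification setup: multinomial logistic regression on per-epoch features, evaluated with Cohen's kappa. The features here are synthetic placeholders for the real respiratory and movement features, so the resulting scores are meaningless:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import cohen_kappa_score

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 6))          # e.g. rates/variances per epoch
y = rng.integers(0, 3, size=1000)       # 0 = Wake, 1 = NREM, 2 = REM

clf = LogisticRegression(max_iter=1000).fit(X[:800], y[:800])
pred = clf.predict(X[800:])
print(cohen_kappa_score(y[800:], pred))
```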
Algorithms and Architectures for Cryptography and Source Coding in Non-Volatile Flash Memories
(2021)
In this work, algorithms and architectures for cryptography and source coding are developed that are suitable for many resource-constrained embedded systems such as non-volatile flash memories. A new concept for elliptic curve cryptography is presented that uses arithmetic over Gaussian integers. Gaussian integers are a subset of the complex numbers with integers as real and imaginary parts. Ordinary modular arithmetic over Gaussian integers is computationally expensive. To reduce the complexity, a new arithmetic based on the Montgomery reduction is presented. For the elliptic curve point multiplication, this arithmetic over Gaussian integers improves computational efficiency, increases resistance against side-channel attacks, and reduces memory requirements. Furthermore, an efficient variant of the Lempel-Ziv-Welch (LZW) algorithm for universal lossless data compression is investigated. Instead of one LZW dictionary, this algorithm applies several dictionaries to speed up the encoding process. Two dictionary partitioning techniques are introduced that improve the compression rate and reduce the memory size of this parallel-dictionary LZW algorithm.
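The parallel-dictionary idea can be sketched as follows: a toy encoder that partitions dictionaries by the leading byte of each phrase, so every lookup works on a smaller table. The partitioning rule here is illustrative; the thesis introduces two different techniques.

```python
def pdlzw_encode(data: bytes):
    dicts = {b: {bytes([b]): 0} for b in range(256)}
    out, i = [], 0
    while i < len(data):
        d = dicts[data[i]]                     # dictionary for leading byte
        j = i + 1
        while j <= len(data) and data[i:j] in d:
            j += 1                             # extend to longest known phrase
        phrase = data[i:j - 1]
        out.append((data[i], d[phrase]))       # (dictionary id, phrase index)
        if j <= len(data):
            d[data[i:j]] = len(d)              # add new phrase to dictionary
        i += len(phrase)
    return out

print(pdlzw_encode(b"abababab"))
```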
The main goal of this work was to experimentally characterize the hot-air drying process of agricultural products (potato, carrot, tomato) and to verify it against numerical solutions for a single-layer and an industrial-scale dryer using Comsol Multiphysics® 5.3.
The effects of the input parameters of the single-layer dryer on quality attributes were examined. Two drying strategies were applied in the batch dryer to examine these effects. The constant-input-parameters strategy was designed using a central composite design and optimized by response surface methodology (RSM). The second strategy was applied for further optimization of the selected region by using a square-wave profile of the air temperature and relative humidity. For the numerical model of the single-layer dryer, unsteady-state partial differential equations were solved by means of the finite element method coupled with an arbitrary Lagrangian-Eulerian (ALE) formulation. For the batch dryer, mechanistic mathematical models of coupled heat and mass transfer were developed and solved, treating the product as a porous moist solid.
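At the core of such single-layer models is typically a Fickian moisture transport equation; a generic form (quoted here only for orientation, since the full model of this work couples heat and mass transfer) is

```latex
\frac{\partial M}{\partial t} = \nabla \cdot \left( D_{\mathrm{eff}} \, \nabla M \right),
```

where M is the local moisture content and D_eff the effective moisture diffusivity.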
With this work, the process of convective drying of agricultural products could be optimized. Furthermore, important knowledge about the basic mechanisms of the drying process was gained and implemented in the numerical models.
In today's volatile market environments, companies must be able to innovate continuously. In this context, innovation does not only refer to the development of new products or business models but often also affects the entire organization, which has to transform its structures, processes, and ways of working.

Corporate entrepreneurship (CE) programs are often used by established companies to address these innovation and transformation challenges. In general, they are understood as formalized entrepreneurial activities to (1) support internal corporate ventures or (2) work with external startups. The organizational design and value creation of CE programs exhibit a high degree of heterogeneity. On the one hand, this heterogeneity makes CE programs a valuable management tool that can be used for many purposes. On the other hand, it can be seen as a reason for the challenges that companies currently experience in using and managing CE programs effectively.

By systematically analyzing 54 cases in established companies in Germany, Switzerland, and Austria, this study contributes to a better understanding of the heterogeneity of CE programs. The taxonomic approach provides clearly defined types of CE programs, distinguished according to their organizational design and the outputs they generate.
Pascal Laube presents machine learning approaches for three key problems of reverse engineering of defective structured surfaces: parametrization of curves and surfaces, geometric primitive classification and inpainting of high-resolution textures. The proposed methods aim to improve the reconstruction quality while further automating the process. The contributions demonstrate that machine learning can be a viable part of the CAD reverse engineering pipeline.
In this thesis, the recognition problem and the properties of eigenvalues and eigenvectors of matrices that are strictly sign-regular of a given order, i.e., matrices whose minors of a given order all have the same strict sign, are considered. The results are extended to matrices that are sign-regular of a given order, i.e., matrices whose minors of a given order have the same sign or are allowed to vanish. As a generalization, a new type of matrix, called oscillatory of a specific order, is introduced, and the properties of this type are investigated. Some applications to dynamic systems are also given.
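The defining property can be illustrated with a short check (a brute-force sketch using floating-point determinants, fine for small matrices):

```python
import numpy as np
from itertools import combinations

# A is strictly sign-regular of order k if all k-by-k minors of A
# share the same strict sign.
def strictly_sign_regular(A, k):
    n, m = A.shape
    minors = [np.linalg.det(A[np.ix_(rows, cols)])
              for rows in combinations(range(n), k)
              for cols in combinations(range(m), k)]
    return all(d > 0 for d in minors) or all(d < 0 for d in minors)

A = np.array([[1.0, 1.0], [1.0, 2.0]])       # totally positive example
print(strictly_sign_regular(A, 1), strictly_sign_regular(A, 2))
```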
NAND flash memory is widely used for data storage due to its low power consumption, high throughput, short random access latency, and high density. The storage density of NAND flash memory devices increases from one generation to the next, albeit at the expense of storage reliability.
Our objective in this dissertation is to improve the reliability of NAND flash memory at a low hardware implementation cost. We investigate the error characteristics, i.e. the various noise sources of the NAND flash memory. Based on the error behavior at different life-aging stages, we develop offset calibration techniques that minimize the bit error rate (BER).
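The idea behind such calibration can be illustrated with a toy example that sweeps a read threshold between two assumed Gaussian threshold-voltage distributions and keeps the value with the lowest empirical BER (all numbers illustrative, not measured flash data):

```python
import numpy as np

rng = np.random.default_rng(0)
v0 = rng.normal(1.0, 0.30, 50_000)     # cells programmed to state 0
v1 = rng.normal(2.0, 0.35, 50_000)     # cells programmed to state 1

candidates = np.linspace(1.0, 2.0, 201)
ber = [0.5 * np.mean(v0 > t) + 0.5 * np.mean(v1 < t) for t in candidates]
best_threshold = candidates[int(np.argmin(ber))]
print(best_threshold, min(ber))
```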
Furthermore, we introduce data compression to reduce the write amplification effect and to support the error correction code (ECC) unit. In the first scenario, the numerical results show that data compression can reduce wear-out by minimizing the amount of data written to the flash. In the ECC scenario, the compression gain is used to improve the ECC capability. In the first scenario, the write amplification effect can be halved for the considered target flash and data model. By combining ECC and data compression, the NAND flash memory lifetime improves threefold compared with uncompressed data for the same data model.
To improve the data reliability of NAND flash memory, we investigate different ECC schemes based on concatenated codes such as product codes, half-product codes, and generalized concatenated codes (GCC). We propose a construction of high-rate GCC for hard-input decoding. ECC based on soft-input decoding can significantly improve the reliability of NAND flash memories; therefore, we also propose a low-complexity soft-input decoding algorithm for high-rate GCC.
Flash memories are non-volatile memory devices. The rapid development of flash technologies leads to higher storage density, but also to higher error rates. This dissertation considers this reliability problem of flash memories and investigates suitable error correction codes, e.g. BCH codes and concatenated codes. First, the flash cells, their functionality, and their error characteristics are explained. Next, the mathematics of the employed algebraic codes are discussed. Subsequently, generalized concatenated codes (GCC) are presented. Compared to the commonly used BCH codes, concatenated codes promise higher code rates and lower implementation complexity. This complexity reduction is achieved by dividing a long code into smaller components, which require smaller Galois field sizes. The algebraic decoding algorithms enable an analytical determination of the block error rate, which makes it possible to guarantee very low residual error rates for flash memories.

Besides the complexity reduction, generalized concatenated codes can exploit soft information. Such soft decoding is not practicable for long BCH codes. In this dissertation, two soft decoding methods for GCC are presented and analyzed. These methods are based on Chase decoding and the stack algorithm. The latter explicitly uses the generalized concatenated code structure, in which the component codes are nested subcodes; this property supports the complexity reduction. Moreover, the two-dimensional structure of GCC enables the correction of error patterns with statistical dependencies. One chapter of the thesis demonstrates how the concatenated codes can be used to correct two-dimensional cluster errors. To this end, a two-dimensional interleaver is designed with the help of Gaussian integers. This design achieves the correction of cluster errors with the best possible radius.

Large parts of this work are dedicated to the question of how the decoding algorithms can be implemented in hardware. These hardware architectures, their throughput, and their logic size are presented for long BCH codes and generalized concatenated codes. The results show that generalized concatenated codes are suitable for error correction in flash memories, especially for three-dimensional NAND memory systems used in industrial applications, where low residual error rates must be guaranteed.
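As an illustration of the Chase-style soft decoding mentioned above, the following sketch wraps a hard-decision syndrome decoder for the small Hamming(7,4) code, flips the least reliable bit positions, and keeps the candidate codeword that best matches the received soft values. This is a generic Chase-2 sketch, not the thesis' GCC decoder:

```python
import numpy as np
from itertools import product

# Parity-check matrix of Hamming(7,4): column j is the binary
# representation of j (row 0 = least significant bit).
H = np.array([[1, 0, 1, 0, 1, 0, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])

def hard_decode(r):
    """Correct at most one bit error via syndrome decoding."""
    s = H @ r % 2
    pos = int("".join(map(str, s[::-1])), 2)   # syndrome = error position
    c = r.copy()
    if pos:
        c[pos - 1] ^= 1
    return c

def chase_decode(llr, p=2):
    """Try all flip patterns on the p least reliable bits, keep the best."""
    r = (llr < 0).astype(int)                  # hard decisions from LLRs
    weak = np.argsort(np.abs(llr))[:p]         # least reliable positions
    best, best_metric = None, -np.inf
    for flips in product([0, 1], repeat=p):
        cand = r.copy()
        cand[weak] ^= np.array(flips)
        c = hard_decode(cand)
        metric = np.sum((1 - 2 * c) * llr)     # correlation with soft values
        if metric > best_metric:
            best, best_metric = c, metric
    return best
```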
According to the World Food Organization, nearly half of all root and tuber crops worldwide are not consumed but are lost to inappropriate storage and post-harvest losses. In developing countries such as Ethiopia, potatoes are traditionally stored in potato clamps rather than dried, and so far dried potatoes have not been converted into usable foods.
The aim of the present work is to convert potatoes, as perishable roots and tubers, into stable products by hot-air drying. Hot-air dryers are economical to operate in industrialized countries; in Africa, they are affordable only for larger industrial companies. In regions with a tropical climate, however, the use of solar tunnel dryers is worthwhile. These are a good choice for farms and small industries, and wherever electrical energy is difficult or impossible to obtain.
In the first part of the work, the drying process of potatoes was investigated, in particular with regard to the change of thermal, mechanical, and chemical quality parameters. An evaluation of the literature showed that potatoes are not subject to quality changes if the water activity is below a value of 0.2. In order to determine the water content associated with this value at storage temperature, the known equations for the sorption equilibrium were evaluated and verified with own experimental investigations. This determined the end point of the drying process.
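One widely used sorption-equilibrium relation of this kind is the GAB isotherm, quoted here only as a generic example of the evaluated equations:

```latex
M(a_w) = \frac{M_0 \, C \, K \, a_w}{(1 - K a_w)\,(1 - K a_w + C K a_w)},
```

where a_w is the water activity, M_0 the monolayer moisture content, and C and K fitted constants.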
The subsequent experimental investigations showed a process-dependent change of quality criteria such as color, shrinkage, and mechanical properties, as well as of the content of value-determining substances such as vitamin C and starch. The differences in the course and magnitude of the quality changes were attributed to the glass transition that takes place during the drying process. For the determination of the glass transition temperature, a new, simple method based on the measurement of mechanical properties was developed. Knowledge of the glass transition temperature allowed the drying process to be optimized: depending on the expected quality changes, drying could be carried out in the rubbery or in the glassy region. Thus, all information was available to produce high-quality dried potatoes in an industrial process.
Since production of potato products should also be possible in less industrialized regions without a sufficient supply of electrical energy, potatoes were additionally dried with a solar tunnel dryer. Examination of the quality properties mentioned above confirmed the process-dependent quality changes.
Finally, the dried product was ground, and the resulting flour was used to replace wheat flour in bread baking. An evaluation of the finished bread by a panel showed that the acceptance of the bread made with the new recipe was high, also with regard to baking volume, taste, texture, and color.
This work shows that drying can transform potatoes into a well-accepted, storable, and easily transportable product. The risk of losses or degradation is minimized, and production is possible at the industrial as well as the farm level. If the influence of the glass transition is taken into account, the quality of the product can be optimized.