The expansion of a given multivariate polynomial into Bernstein polynomials is considered. Matrix methods are provided for calculating the Bernstein expansion of the product of two polynomials and the Bernstein expansion of a polynomial from the expansion of one of its partial derivatives; both methods also permit symbolic computation.
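For illustration (this is the standard univariate product formula, not the paper's matrix formulation itself): if $p=\sum_{i=0}^{m} a_i B_i^m$ and $q=\sum_{j=0}^{n} b_j B_j^n$ with $B_i^n(x)=\binom{n}{i}x^i(1-x)^{n-i}$, then

$$pq=\sum_{k=0}^{m+n} c_k B_k^{m+n},\qquad c_k=\sum_{\substack{i+j=k\\ 0\le i\le m,\; 0\le j\le n}}\frac{\binom{m}{i}\binom{n}{j}}{\binom{m+n}{k}}\,a_i b_j.$$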
Let $A = [a_{ij}]$ be a real symmetric matrix. If $f\colon(0,\infty)\to[0,\infty)$ is a Bernstein function, a sufficient condition is presented for the matrix $[f(a_{ij})]$ to have only one positive eigenvalue. Using this result, new results for symmetric matrices with exactly one positive eigenvalue, e.g., properties of their Hadamard powers, are derived.
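For reference, the standard definition assumed here: $f\colon(0,\infty)\to[0,\infty)$ is a Bernstein function if it is infinitely differentiable and

$$(-1)^{n-1} f^{(n)}(x) \ge 0 \quad\text{for all } n\ge 1,\ x>0,$$

e.g., $f(x)=\sqrt{x}$ or $f(x)=\log(1+x)$.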
In this thesis, the recognition problem and the properties of the eigenvalues and eigenvectors of matrices which are strictly sign-regular of a given order, i.e., matrices whose minors of a given order all have the same strict sign, are considered. The results are extended to matrices which are sign-regular of a given order, i.e., matrices whose minors of a given order have the same sign or are allowed to vanish. As a generalization, a new type of matrix, called oscillatory of a specific order, is introduced, and the properties of this type are investigated. Some applications to dynamic systems are also given.
Totally nonnegative matrices, i.e., matrices having all their minors nonnegative, and matrix intervals with respect to the checkerboard partial order are considered. It is proven that if the two bound matrices of such a matrix interval are totally nonnegative and satisfy certain conditions, then all matrices from this interval are also totally nonnegative and satisfy the same conditions.
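For reference, the checkerboard partial order used here: for $n\times n$ matrices $A=[a_{ij}]$ and $B=[b_{ij}]$,

$$A \le^{*} B \iff (-1)^{i+j} a_{ij} \le (-1)^{i+j} b_{ij} \quad\text{for all } i,j,$$

and the matrix interval $[A,B]$ consists of all matrices lying between $A$ and $B$ in this signed, entrywise sense.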
For a long time, the use of intermediate products in production has been growing more rapidly in most countries than domestic production, a strong indication of increasing interdependency in production. The main purpose of input-output analysis is to study the interdependency of industries in an economy; the term interindustry analysis is often used as well. The exchange of intermediate products is therefore a key issue of input-output analysis. We use input–output data for this study that the author prepared for the new ‘Handbook on Supply, Use and Input–Output Tables with Extensions and Applications’ of the United Nations. The supply, use and input–output tables contain separate valuation matrices for trade margins, transport margins, value added tax, other taxes on products and subsidies on products. For the study, two input–output models were developed to evaluate the impact of fuel subsidy and taxation reform on output, gross domestic product, inflation and trade. Six scenarios covering different aspects of the reform are discussed.
Multi-dimensional spatial modulation is a multiple-input/multiple-output wireless transmission technique that uses only a few active antennas simultaneously. The computational complexity of the optimal maximum-likelihood (ML) detector at the receiver increases rapidly as more transmit antennas or larger modulation orders are employed, and ML detection may be infeasible for higher bit rates. Many suboptimal detection algorithms for spatial modulation use two-stage detection schemes, where the set of active antennas is detected in the first stage and the transmitted symbols in the second stage. Typically, these schemes use the ML strategy for the symbol detection. In this work, we consider a suboptimal detection algorithm for the second detection stage that combines equalization and list decoding. We propose an algorithm for multi-dimensional signal constellations with a reduced search space in the second detection stage through set partitioning. In particular, we derive a set partitioning from the properties of Hurwitz integers. Simulation results demonstrate that the new algorithm achieves near-ML performance while significantly reducing the complexity compared with conventional two-stage detection schemes. Multi-dimensional constellations in combination with suboptimal detection can even outperform conventional signal constellations in combination with ML detection.
Many resource-constrained systems still rely on symmetric cryptography for verification and authentication. Asymmetric cryptographic systems provide higher security levels, but are very computationally intensive. Hence, embedded systems can benefit from hardware assistance, i.e., coprocessors optimized for the required public-key operations. In this work, we propose an elliptic curve cryptographic coprocessor design for resource-constrained systems. Many such coprocessor designs consider only special (Solinas) prime fields, which enable a low-complexity modulo arithmetic. Other implementations support arbitrary prime curves using the Montgomery reduction, but typically require more time for the point multiplication. We present a coprocessor design that has low area requirements and enables a trade-off between performance and flexibility. The point multiplication can be performed either with fast arithmetic based on Solinas primes or with a slower but flexible Montgomery modular arithmetic.
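As a hedged illustration of why Solinas primes admit low-complexity modular arithmetic (a classic textbook example, not this coprocessor's implementation): for the NIST prime $p = 2^{192} - 2^{64} - 1$, a 384-bit product is reduced with a few word-wise additions using $2^{192} \equiv 2^{64} + 1 \pmod{p}$.

```python
# Sketch: fast reduction modulo the Solinas prime p = 2^192 - 2^64 - 1
# (NIST P-192). Illustrative only; a coprocessor operates on hardware
# words, not Python big integers.
P192 = 2**192 - 2**64 - 1

def reduce_p192(c):
    """Reduce c < p^2 using 2^192 = 2^64 + 1 (mod p)."""
    mask = 2**64 - 1
    w = [(c >> (64 * i)) & mask for i in range(6)]    # six 64-bit words
    s1 = w[0] | (w[1] << 64) | (w[2] << 128)          # (c2, c1, c0)
    s2 = w[3] | (w[3] << 64)                          # ( 0, c3, c3)
    s3 = (w[4] << 64) | (w[4] << 128)                 # (c4, c4,  0)
    s4 = w[5] | (w[5] << 64) | (w[5] << 128)          # (c5, c5, c5)
    t = s1 + s2 + s3 + s4
    while t >= P192:                                  # at most a few subtractions
        t -= P192
    return t

assert reduce_p192((P192 - 5) * (P192 - 7)) == ((P192 - 5) * (P192 - 7)) % P192
```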
In this work, we investigate a hybrid decoding approach that combines algebraic hard-input decoding of binary block codes with soft-input decoding. In particular, an acceptance criterion is proposed which determines the reliability of a candidate codeword. For many received words this criterion indicates that the hard-decoding result is sufficiently reliable, so the costly soft-input decoding can be omitted; this significantly reduces the overall decoding complexity. In simulations, we combine the algebraic hard-input decoding with ordered statistics decoding, which enables near-maximum-likelihood soft-input decoding for codes of small to medium block lengths.
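One plausible form of such a criterion, as a hedged sketch (the paper's exact metric and threshold are not reproduced here): sum the channel reliabilities |LLR| over the positions the algebraic decoder flipped, and accept the candidate if this sum is small.

```python
import numpy as np

def accept_candidate(llr, candidate, threshold):
    """Hedged sketch of a reliability-based acceptance test.

    llr       : channel LLRs, positive values favor bit 0
    candidate : bits returned by algebraic hard-input decoding
    Accept if the flipped positions carry little channel reliability;
    otherwise fall back to costly soft-input (e.g., OSD) decoding.
    """
    hard = (llr < 0).astype(int)        # hard decisions from the channel
    flipped = hard != candidate
    return np.abs(llr[flipped]).sum() <= threshold
```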
Ballistocardiography is a technique that measures the heart rate from the mechanical vibrations of the body caused by the heart's movement. In this work, a novel noninvasive device placed under the mattress of a bed estimates the heart rate using ballistocardiography. Different algorithms for heart rate estimation have been developed.
NAND flash memory is widely used for data storage due to low power consumption, high throughput, short random access latency, and high density. The storage density of the NAND flash memory devices increases from one generation to the next, albeit at the expense of storage reliability.
Our objective in this dissertation is to improve the reliability of NAND flash memory at a low hardware implementation cost. We investigate the error characteristics, i.e., the various noise sources of the NAND flash memory. Based on the error behavior at different stages of the device's life, we develop offset calibration techniques that minimize the bit error rate (BER).
Furthermore, we introduce data compression to reduce the write amplification effect and to support the error correction coding (ECC) unit. In the first scenario, the numerical results show that data compression can reduce the wear-out by minimizing the amount of data written to the flash. In the ECC scenario, the compression gain is used to improve the ECC capability. In the first scenario, the write amplification effect can be halved for the considered target flash and data model. By combining ECC and data compression, the NAND flash memory lifetime improves threefold compared with uncompressed data for the same data model.
In order to improve the data reliability of the NAND flash memory, we investigate different ECC schemes based on concatenated codes like product codes, half-product codes, and generalized concatenated codes (GCC). We propose a construction for high-rate GCC for hard-input decoding. ECC based on soft-input decoding can significantly improve the reliability of NAND flash memories. Therefore, we propose a low-complexity soft-input decoding algorithm for high-rate GCC.
This document presents a new, complete standalone system for the recognition of sleep apnea using signals from pressure sensors placed under the mattress. The hardware part of the system is tuned to filter and amplify the signal; the software part performs more accurate signal filtering and identifies apnea events. The overall accuracy of recognizing apnea occurrences is 91%, with an average recognition delay of about 15 seconds, which confirms the suitability of the proposed method for future employment. The main aim of the presented approach is to support the healthcare system with a cost-efficient tool for the recognition of sleep apnea in the home environment.
Which testing formats are most appropriate for assessing academic language skills? Christian Krekeler addresses this question for informal language assessment in the classroom. In the first part of the chapter, three formats frequently used in standardised assessment (discrete items, isolated tests of writing, and integrated tests) are investigated, and their usefulness for classroom assessment, where diagnosis, achievement and learning are important test functions, is discussed. Because academic language use is cognitively demanding, it is argued that the assessment of academic language skills should include authentic and complex tasks as well as subject-specific content. The resulting tensions between complex, authentic and context-sensitive tasks and the requirements of language assessment methodology are discussed in the second part of the chapter.
This paper examines the corporate organisational aspects of the implementation of Industry 4.0. Industry 4.0 builds on new technologies and appears as a disruptive innovation to manufacturing firms. Although we have a good understanding of the technical components, the implementation of the management and organisational aspects of Industry 4.0 is under-researched. It is challenging to find qualitative empirical evidence which provides comprehensive insights into real implementation cases. Based on a case study in a German high-value manufacturing firm, we explore the corporate organisation and implementation of Industry 4.0. Using the framework of Complex Adaptive Systems (CAS), we identified three key factors which facilitate the implementation of Industry 4.0, namely 1) organisational structure changes, such as the foundation of a central department for digital transformation, 2) the appointment of a Chief Digital Officer as a personnel change, and 3) a corporate opening-up towards cooperation with partners as a cultural change. We furthermore found that Lean Management is an important enabler that ensures readiness for the adoption of Industry 4.0.
Uncertainty about the future requires companies to create discontinuous innovations. Established companies, however, struggle to do so, whereas independent startups seem to cope better with this. Consequently, established companies set up entrepreneurial initiatives to make use of startups' benefits. Amongst other things, this has led to great interest in so-called corporate entrepreneurship (CE) programs and to the development and characterization of several different forms. Yet the processes by which these programs achieve their objectives are still rather ineffective. Considering the actions performed in preparation for and during CE programs could be one approach to improving this, but such considerations are still absent today. Furthermore, the increasing use of several CE programs in parallel seems to bear the potential for synergies and thus a more efficient use of resources. Aiming to provide insights into both issues, this study analyzes the actions of CE programs by looking at interviews with managers of seven corporate incubator and accelerator programs of five established German tech companies.
Digitalization is one of the most frequently discussed topics in industry. New technologies, platform concepts and integrated data models enable disruptive business models and drive changes in organization, processes, and tools. The goal is to make a company more efficient, productive and ultimately profitable. However, many companies face the challenge of how to approach digital transformation in a structured way and realize these potential benefits. What they realize is that Product Lifecycle Management plays a key role in digitalization initiatives, as object, structure and process management along the life cycle is a foundation for many digitalization use cases. The introduced maturity model for assessing a firm’s capabilities along the product lifecycle has been used almost two hundred times. It allows a company to compare its performance with an industry-specific benchmark to reveal individual strengths and weaknesses. Furthermore, an empirical study produced multidimensional correlation coefficients, which identify dependencies between business model characteristics and the maturity level of capabilities.
Globalization has increased the number of road trips and vehicles. The result has been an intensification of traffic accidents, which are becoming one of the most important causes of death worldwide. Traffic accidents are often due to human error, the probability of which increases when the cognitive ability of the driver decreases. Cognitive capacity is closely related to the driver’s mental state, as well as other external factors such as the CO2 concentration inside the vehicle. The objective of this work is to analyze how these elements affect driving. We have conducted an experiment with 50 drivers who have driven for 25 min using a driving simulator. These drivers completed a survey at the start and end of the experiment to obtain information about their mental state. In addition, during the test, their stress level was monitored using biometric sensors and the state of the environment (temperature, humidity and CO2 level) was recorded. The results of the experiment show that the initial level of stress and tiredness of the driver can have a strong impact on stress, driving behavior and fatigue produced by the driving test. Other elements such as sadness and the conditions of the interior of the vehicle also cause impaired driving and affect compliance with traffic regulations.
In previous studies, we used a method for detecting stress that was based exclusively on heart rate and ECG to differentiate between situations such as mental stress, physical activity, relaxation, and rest. As a response of the heart to these situations, we observed different behavior in the Root Mean Square of Successive Differences (RMSSD) of heartbeat intervals. This study aims to analyze virtual reality, delivered via a virtual reality headset, as an effective stressor for future work. The RMSSD is an important marker for the parasympathetic influence on the heart and can provide information about stress. For these measurements, the RR intervals were collected using a chest strap; no additional sensors were used for the analysis. We conducted experiments with ten subjects who drove a simulator for 25 minutes using monitors and 25 minutes using a virtual reality headset. Before starting and after finishing each simulation, the subjects completed a survey describing their mental state. The experimental results show that driving with a virtual reality headset has some influence on the heart rate and RMSSD, but it does not significantly increase the stress of driving.
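The RMSSD statistic referred to above is computed directly from consecutive RR intervals; a minimal sketch (variable names are illustrative):

```python
import numpy as np

def rmssd(rr_intervals_ms):
    """Root Mean Square of Successive Differences of RR intervals (ms)."""
    diffs = np.diff(np.asarray(rr_intervals_ms, dtype=float))
    return np.sqrt(np.mean(diffs ** 2))

print(rmssd([812, 805, 830, 790, 815]))  # toy RR series from a chest strap
```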
In modern fruit processing technology, non-destructive quality measuring techniques are sought for determining and controlling changes in the optical, structural, and chemical properties of the products. In this context, changes inside the product can be measured during processing. Especially for industrial use, fast, precise, but robust methods are particularly important to obtain high-quality products. In this work, a newly developed multi-spectral imaging system was implemented and adapted for drying processes. Furthermore, it was investigated whether the system could be used to link changes in the surface spectral reflectance during mango drying with changes in moisture content and contents of chemical components. This was achieved by recovering the spectral reflectance from multi-spectral image data and comparing the spectral changes with changes of the total soluble solids (TSS), pH value and the relative moisture content $x_{wb}$ of the products. In a first step, the camera was modified for use in drying; then the changes in the spectra and quality criteria during mango drying were measured. For this, mango slices were dried at air temperatures of 40–80 °C and relative air humidities of 5%–30%. Samples were analyzed and pictures were taken with the multi-spectral imaging system. The quality criteria were then predicted from the spectral data. It could be shown that the newly developed multi-spectral imaging system can be used for quality control in fruit drying. There are strong indications as well that it can be employed for the prediction of chemical quality criteria of mangoes during drying. This way, quality changes can be monitored inline during the process using only one single measuring device.
What drives entrepreneurial action to create a lasting impact? New ventures that aim to have an impact beyond their financial performance face additional challenges: achieving economic sustainability while at the same time addressing social or environmental issues. Little is known about how these new hybrid organizations, aiming for multiple impact dimensions, manage to be congruent with their blended values. A dataset of 4,125 early-stage ventures is used to gain insights into how blended values are converted into financial, social and environmental impacts, giving shape to different types of hybrid organizations. Our findings suggest that new hybrid organizations may opt to sacrifice financial impact to achieve social impact, yet this is not the case when they aim to generate environmental or sustainable impact. The tensions and sacrifices related to holding blended values are therefore not homogeneous across all types of new hybrid organizations.
Three-dimensional ship localization with only one camera is a challenging task due to the loss of depth information caused by perspective projection. In this paper, we propose a method to measure distances based on the assumption that ships lie on a flat surface. This assumption makes it possible to recover depth from a single image using the principle of inverse perspective. For the 3D ship detection task, we use a hybrid approach that combines image detection with a convolutional neural network, camera geometry and inverse perspective. Furthermore, a novel calculation of object height is introduced. Experiments show that the monocular distance computation compares well with a Velodyne lidar. Due to its robustness, this could be an easy-to-use baseline method for detection tasks in navigation systems.
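A minimal sketch of the flat-surface (inverse-perspective) distance computation, assuming a camera whose optical axis is parallel to the water surface; the function and parameter names are illustrative, not the paper's API:

```python
def ground_plane_distance(v_contact, v_horizon, focal_px, cam_height_m):
    """Depth of a point on a flat surface from a single image.

    v_contact    : pixel row where the ship meets the water
    v_horizon    : pixel row of the horizon
    focal_px     : focal length in pixels
    cam_height_m : camera height above the surface in meters
    """
    dv = v_contact - v_horizon
    if dv <= 0:
        raise ValueError("contact point must lie below the horizon")
    return focal_px * cam_height_m / dv
```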
Mapping of tree seedlings is useful for tasks ranging from monitoring natural succession and regeneration to effective silvicultural management. Development of methods that are both accurate and cost-effective is especially important considering the dramatic increase in tree planting that is required globally to mitigate the impacts of climate change. The combination of high-resolution imagery from unmanned aerial vehicles and object detection by convolutional neural networks (CNNs) is one promising approach. However, unbiased assessments of these models and methods to integrate them into geospatial workflows are lacking. In this study, we present a method for rapid, large-scale mapping of young conifer seedlings using CNNs applied to RGB orthomosaic imagery. Importantly, we provide an unbiased assessment of model performance by using two well-characterised trial sites together containing over 30,000 seedlings to assemble datasets with a high level of completeness. Our results showed CNN-based models trained on two sites detected seedlings with sensitivities of 99.5% and 98.8%. False positives due to tall weeds at one site and naturally regenerating seedlings of the same species led to slightly lower precision of 98.5% and 96.7%. A model trained on examples from both sites had 99.4% sensitivity and precision of 97%, showing applicability across sites. Additional testing showed that the CNN model was able to detect 68.7% of obscured seedlings missed during the initial annotation of the imagery but present in the field data. Finally, we demonstrate the potential to use a form of weakly supervised training and a tile-based processing chain to enhance the accuracy and efficiency of CNNs applied to large, high-resolution orthomosaics.
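The tile-based processing chain mentioned above can be pictured as a sliding window over the orthomosaic; a hedged sketch (tile size and overlap are assumptions, not the study's values):

```python
def tile_grid(width, height, tile=1024, overlap=128):
    """Yield (x, y, w, h) tiles covering a large orthomosaic with overlap,
    so detections near tile borders are not missed."""
    step = tile - overlap
    for y in range(0, max(height - overlap, 1), step):
        for x in range(0, max(width - overlap, 1), step):
            yield x, y, min(tile, width - x), min(tile, height - y)
```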
This paper presents the goals, service design approach, and results of the project “Accessible Tourism around Lake Constance”, which is currently run by different universities, industrial partners and selected hotels in Switzerland, Germany and Austria. In the first phase, interviews with persons with disabilities and elderly persons were conducted to identify the barriers and pains faced by tourists who want to spend their holidays in the region of Lake Constance, as well as possible assistive technologies that help to overcome these barriers. The analysis of the interviews shows that one third of the pains and barriers are due to missing, insufficient, wrong or inaccessible information about the accessibility of the accommodation, surroundings, and points of interest during the planning phase of the holidays. Digital assistive technologies hence play a major role in bridging this information gap. In the second phase, so-called Hotel Living Labs (HLL) were established where the identified assistive technologies can be evaluated. Based on these HLLs, an overall service for accessible holidays has been designed and developed. In the last phase, this service was implemented based on the HLLs as well as the identified assistive technologies, and it is currently being field-tested with tourists with disabilities from the three participating countries.
This paper investigates the influence of modelling the basement storey on the shear force and the deformations of reinforced concrete shear walls restrained in the basement. The shear force distribution of the wall and the displacement at the top of the wall are compared with the corresponding values determined with the simplified cantilever model, in which the wall is fully fixed at the level of the basement ceiling slab and the transfer of the internal forces within the basement is not considered further. To account for the basement, a pinned support provided by the basement ceiling slab and the base slab is considered first, which leads to an unrealistically large wall shear force in the basement. A refined model with partial fixity in the base slab and flexible support by the basement ceiling slab is then examined. Recommended values, applicable in practice, are given for the constants of the rotational spring at the level of the base slab and of the horizontal translational spring at the level of the basement ceiling slab. Particular attention is paid to the question of how accounting for the basement influences the seismic design of the shear walls. Both the system stiffness, on which the equivalent seismic loads depend, and the achievable displacement ductility, on which the possible behaviour factor q depends, are considered.
Location-aware mobile devices are becoming increasingly popular, and GPS sensors are built into nearly every portable unit with computational capabilities. At the same time, the emergence of location-aware virtual services and ideas calls for new, efficient spatial real-time queries. Communication latency in highly decentralized mobile environments, together with the need for scalability in high-density systems with immense client counts, leads to major challenges. In this paper we describe a decentralized architecture for continuous range queries in settings in which both the requested and the requesting clients are mobile. While prior works commonly use a request-response approach, we provide a stream-based adaptive grid solution that handles arbitrarily high client counts and improves communication latency to meet given hard real-time constraints.
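As a hedged sketch of the grid idea (class and parameter names are illustrative; the paper's adaptive, stream-based architecture is more elaborate): clients are hashed into grid cells, and a range query only inspects the cells its radius can overlap.

```python
from collections import defaultdict

class GridIndex:
    """Minimal uniform grid for candidate lookup in range queries."""

    def __init__(self, cell_size=0.01):
        self.cell_size = cell_size
        self.cells = defaultdict(set)

    def _key(self, x, y):
        return (int(x // self.cell_size), int(y // self.cell_size))

    def move(self, client_id, new_pos, old_pos=None):
        if old_pos is not None:
            self.cells[self._key(*old_pos)].discard(client_id)
        self.cells[self._key(*new_pos)].add(client_id)

    def range_candidates(self, x, y, radius):
        reach = int(radius // self.cell_size) + 1
        ci, cj = self._key(x, y)
        hits = set()
        for i in range(ci - reach, ci + reach + 1):
            for j in range(cj - reach, cj + reach + 1):
                hits |= self.cells.get((i, j), set())
        return hits  # candidates only; exact distance filtering follows
```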
The Nibelungenbrücke in Worms
(2020)
The transformation to Industry 4.0, generally seen as a solution to increasing market challenges, is forcing companies to radically change their way of thinking and to be open to new forms of cooperation. In this context, opening up the innovation process is widely seen as a necessity to meet these challenges, especially for small and medium enterprises (SMEs). The aim of the study therefore is to analyze how cooperation can be characterized today, how this character has changed since the establishment of the term Industry 4.0 at the Hanover Fair in 2011, and which cooperation strategies have proven successful. The analysis consists of a quantitative secondary data analysis that includes country-specific data from 35 European countries for 2010 and 2016, collected by the European Commission and the OECD. The research, focusing on the secondary sector, shows that multinational enterprises (MNEs) still tend to cooperate more than SMEs, with a slight overall trend towards protectionism. Nevertheless, there is a clear tendency towards the opening-up of SMEs. In this regard, universities, competitors and suppliers in particular have become increasingly attractive cooperation partners for SMEs.
At present, the majority of the proposed Deep Learning (DL) methods provide point predictions without quantifying the model's uncertainty. However, a quantification of the reliability of automated image analysis is essential, in particular in medicine, where physicians rely on the results for making critical treatment decisions. In this work, we provide an entire framework to diagnose ischemic stroke patients incorporating Bayesian uncertainty into the analysis procedure. We present a Bayesian Convolutional Neural Network (CNN) yielding a probability for a stroke lesion on 2D Magnetic Resonance (MR) images with corresponding uncertainty information about the reliability of the prediction. For patient-level diagnoses, different aggregation methods are proposed and evaluated, which combine the individual image-level predictions. These methods take advantage of the uncertainty in the image predictions and report model uncertainty at the patient level. In a cohort of 511 patients, our Bayesian CNN achieved an accuracy of 95.33% at the image level, a significant improvement of 2% over a non-Bayesian counterpart. The best patient aggregation method yielded an accuracy of 95.89%. Integrating uncertainty information about image predictions into the aggregation models resulted in higher uncertainty measures for false patient classifications, which made it possible to filter out critical patient diagnoses that should be examined more closely by a medical doctor. We therefore recommend using Bayesian approaches not only for improved image-level prediction and uncertainty estimation but also for the detection of uncertain aggregations at the patient level.
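A hedged sketch of one possible uncertainty-aware aggregation (not the paper's exact method): average the image-level probabilities and flag a patient for manual review when predictive uncertainty or disagreement across images is high.

```python
import numpy as np

def aggregate_patient(image_probs, image_vars, review_threshold=0.05):
    """Combine per-image stroke probabilities into a patient-level call."""
    p = float(np.mean(image_probs))
    # total uncertainty: mean predictive variance + disagreement across images
    uncertainty = float(np.mean(image_vars) + np.var(image_probs))
    return int(p >= 0.5), p, uncertainty > review_threshold
```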
Probabilistic Deep Learning
(2020)
Probabilistic Deep Learning is a hands-on guide to the principles that support neural networks. Learn to improve network performance with the right distribution for different data types, and discover Bayesian variants that can state their own uncertainty to increase accuracy. This book provides easy-to-apply code and uses popular frameworks to keep you focused on practical applications.
Deep neural networks (DNNs) are known for their high prediction performance, especially in perceptual tasks such as object recognition or autonomous driving. Still, DNNs are prone to yield unreliable predictions when encountering completely new situations without indicating their uncertainty. Bayesian variants of DNNs (BDNNs), such as MC dropout BDNNs, do provide uncertainty measures. However, BDNNs are slow during test time because they rely on a sampling approach. Here we present a single shot MC dropout approximation that preserves the advantages of BDNNs without being slower than a DNN. Our approach is to analytically approximate for each layer in a fully connected network the expected value and the variance of the MC dropout signal. We evaluate our approach on different benchmark datasets and a simulated toy example. We demonstrate that our single shot MC dropout approximation resembles the point estimate and the uncertainty estimate of the predictive distribution that is achieved with an MC approach, while being fast enough for real-time deployments of BDNNs.
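A minimal moment-propagation sketch in the spirit of this approach, assuming inverted dropout and a Gaussian approximation at the ReLU (the formulas are standard; this is not the authors' code):

```python
import numpy as np
from scipy.stats import norm

def dropout_moments(mean, var, p_drop):
    """Mean/variance after inverted dropout with drop probability p_drop."""
    q = 1.0 - p_drop                              # keep probability
    return mean, (var + (1.0 - q) * mean**2) / q

def dense_moments(mean, var, W, b):
    """Propagate moments through a linear layer (inputs assumed independent)."""
    return W @ mean + b, (W**2) @ var

def relu_moments(mean, var):
    """Moments of ReLU(x) for x ~ N(mean, var), applied elementwise."""
    s = np.sqrt(var) + 1e-12
    a = mean / s
    m = mean * norm.cdf(a) + s * norm.pdf(a)
    v = (mean**2 + var) * norm.cdf(a) + mean * s * norm.pdf(a) - m**2
    return m, np.maximum(v, 0.0)
```

Chaining these three functions layer by layer yields a predictive mean and variance in a single forward pass, instead of averaging many sampled dropout masks.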
Sustainable technologies are being increasingly used in various areas of human life. While they have a multitude of benefits, they are especially useful in health monitoring, particularly for certain groups of people such as the elderly. However, several issues still need to be addressed before their use becomes widespread. This work aims to clarify the aspects that are of great importance for increasing the acceptance of this type of technology among the elderly. In addition, we aim to clarify whether the technologies that are already available can ensure acceptable accuracy and whether they could replace some of the manual approaches that are currently being used. To address these questions, a two-week study with people 65 years of age and over was conducted, and the results were evaluated. It was demonstrated that simplicity of use and automatic functioning play a crucial role. It was also concluded that technology cannot yet completely replace traditional methods such as questionnaires in some areas. Although the technologies that were tested were classified as being “easy to use”, the elderly participants in the current study indicated that they were not sure they would use these technologies regularly in the long term because, among other issues, the added value is not always clear. Therefore, awareness-raising must take place in parallel with the development of technologies and services.
Generalized Concatenated Codes over Gaussian and Eisenstein Integers for Code-Based Cryptography
(2021)
The code-based McEliece and Niederreiter cryptosystems are promising candidates for post-quantum public-key encryption. Recently, q-ary concatenated codes over Gaussian integers were proposed for the McEliece cryptosystem together with the one-Mannheim error channel, where the error values are limited to Mannheim weight one. Due to the limited error values, the codes over Gaussian integers achieve a higher error correction capability than maximum distance separable (MDS) codes with bounded minimum distance decoding. This higher error correction capability improves the work factor regarding decoding attacks based on information-set decoding. The codes also enable a low-complexity decoding algorithm for decoding beyond the guaranteed error correction capability. In this work, we extend this coding scheme to codes over Eisenstein integers. These codes have advantages for the Niederreiter system. Additionally, we propose an improved code construction based on generalized concatenated codes. These codes extend the rate region where the work factor is beneficial compared to MDS codes. Moreover, generalized concatenated codes are more robust against structural attacks than ordinary concatenated codes.
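For reference, the two rings in question: the Gaussian integers $\mathbb{Z}[i]=\{a+bi : a,b\in\mathbb{Z}\}$ and the Eisenstein integers $\mathbb{Z}[\omega]=\{a+b\omega : a,b\in\mathbb{Z}\}$ with $\omega=e^{2\pi i/3}=(-1+i\sqrt{3})/2$. Both are Euclidean domains, which is what makes the residue-ring code constructions above possible.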
The goal of this thesis was to design a subordinate city bus network for the city of Konstanz, building on an existing draft of a primary network. Combining the two yielded an overall bus network.
In a first step, a broad analysis of the current situation was carried out, including an analysis of the primary bus network. The combined results set the framework for the design. This was followed by the step-by-step design of the individual lines of the subordinate network. In the final design step, the lines were coordinated with one another at important transfer nodes and parking areas.
The analysis yielded various requirements for the design of a new bus network; the connection to the railway and to the newly emerging mobility centre deserve particular emphasis. The design resulted in eleven bus lines that can be assigned to the subordinate network. Together with the five lines of the primary draft, they form a complete city bus network. The routes of the lines were coordinated with one another so that a broad network of direct connections could emerge. Coordination at transfer nodes created sensible transfer relations, extending the direct connections into comprehensive links between the districts and relevant destinations within the city.
At a theoretical level, this thesis shows what a new bus concept for the city of Konstanz could look like. The concept should be examined for actual feasibility in a separate study.
Optimization approaches for improving the descriptions of works in public construction projects
(2021)
The Elbphilharmonie, BER, the bishop's residence in Limburg, Stuttgart 21, the old Elbe Tunnel in Hamburg: all well-known German construction projects of the last 20 years, and all with one thing in common: exorbitant cost explosions and construction delays.
This thesis shows how such cost explosions also arise from ambiguities and errors in the descriptions of works, and what enormous potential clear and exhaustive descriptions of works hold for setting the course for a smooth and successful project before construction even begins.
Many countries offer state credit guarantees to support credit-constrained exporters. The policy instrument is commonly justified by governments as a means to mitigating adverse outcomes of financial market frictions for exporting firms. Accumulated returns to the German state credit guarantee scheme deriving from risk-compensating premia have outweighed accumulated losses over the past 60 years. Why do private financial agents not step in and provide insurance given that the state-run program yields positive returns? We argue that costs of risk diversification, liquidity management, and coordination among creditors limit the ability of private financial agents to offer comparable insurance products. Moreover, we suggest that the government’s greater effectiveness in recovering claims in foreign countries endows the state with a cost advantage in dealing with the risks involved in large export projects. We test these hypotheses using monthly firm-level data combined with official transaction-level data on covered exports of German firms and find suggestive evidence that positive effects on trade are due to mitigated financial constraints: State credit guarantees benefit firms that are dependent on external finance, if the value at risk which they seek to cover is large, and at times when refinancing conditions on the private financial market are tight.
Modular arithmetic over integers is required for many cryptography systems. Montgomery reduction is an efficient algorithm for the modulo reduction after a multiplication. Typically, Montgomery reduction is used for rings of ordinary integers. In contrast, we investigate the modular reduction over rings of Gaussian integers. Gaussian integers are complex numbers where the real and imaginary parts are integers. Rings over Gaussian integers are isomorphic to ordinary integer rings. In this work, we show that Montgomery reduction can be applied to Gaussian integer rings. Two algorithms for the precision reduction are presented. We demonstrate that the proposed Montgomery reduction enables an efficient Gaussian integer arithmetic that is suitable for elliptic curve cryptography. In particular, we consider the elliptic curve point multiplication according to the randomized initial point method, which is protected against side-channel attacks. The implementation of this protected point multiplication is significantly faster than comparable algorithms over ordinary prime fields.
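For orientation, a minimal sketch of the classic integer Montgomery reduction (REDC) that, as we read the abstract, the paper transfers to Gaussian integer rings; Python's big integers stand in for hardware words:

```python
def montgomery_setup(n, k):
    """Precompute for odd modulus n and R = 2^k > n."""
    R = 1 << k
    n_prime = (-pow(n, -1, R)) % R    # n * n_prime = -1 (mod R)
    return R, n_prime

def redc(T, n, k, n_prime):
    """Return T * R^{-1} mod n for 0 <= T < R * n, without dividing by n."""
    mask = (1 << k) - 1
    m = ((T & mask) * n_prime) & mask
    t = (T + m * n) >> k
    return t - n if t >= n else t

# Toy usage: work in Montgomery form a_bar = a*R mod n.
n, k = 101, 8
R, n_prime = montgomery_setup(n, k)
a_bar, b_bar = (17 * R) % n, (23 * R) % n
prod_bar = redc(a_bar * b_bar, n, k, n_prime)        # = 17*23*R mod n
assert redc(prod_bar, n, k, n_prime) == (17 * 23) % n
```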
In data-driven learning (DDL), learners are guided to discover linguistic patterns with the help of corpus tools and to run their own corpus queries. The use of DDL is illustrated with the example of a teaching unit for a business German course. It becomes clear which opportunities corpus-linguistic methods offer, but also which problems can arise with DDL. Corpus-linguistic analyses can be helpful above all for planning courses in languages for specific purposes: needs analysis, the selection of materials, the identification of typical vocabulary and frequent patterns, and the creation of exercise materials. The practical example, which can be transferred to other contexts, illustrates how corpus-linguistic methods and DDL affect lesson planning and delivery: language is treated as a body of data; the focus is on linguistic patterns; and questions of correctness and appropriateness are addressed.
The McEliece cryptosystem is a promising candidate for post-quantum public-key encryption. In this work, we propose q-ary codes over Gaussian integers for the McEliece system together with a new channel model, the one Mannheim error channel, where errors are limited to Mannheim weight one. We investigate the capacity of this channel and discuss its relation to the McEliece system. The proposed codes are based on a simple product code construction and have a low-complexity decoding algorithm. For the one Mannheim error channel, these codes achieve a higher error correction capability than maximum distance separable codes with bounded minimum distance decoding. This improves the work factor regarding decoding attacks based on information-set decoding.
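As we read the abstract, the one Mannheim error channel perturbs a transmitted Gaussian integer symbol $x$ to $y = x + e$, where $e = 0$ with probability $1-p$ and otherwise $e$ has Mannheim weight one, i.e.

$$e \in \{\,1,\ -1,\ i,\ -i\,\},$$

so every error moves the symbol to one of its four Mannheim neighbours.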
To capture changes in product properties during the drying of foods, non-destructive quality measurement techniques are required that can determine changes inside the product. Especially in industrial use, fast, precise, and at the same time robust methods are particularly important for obtaining high-quality products.
In this work, a novel multispectral camera system was used for optical quality measurement in order to link changes in the spectral surface reflectance during mango and pineapple drying with changes in product moisture as well as mechanical and chemical properties. This link was established using machine learning.
In a first step, a new camera concept, a multispectral area-scan camera with four lenses and mounted filters, was developed and specifically adapted for use in fruit drying. Subsequently, the changes in the spectra and the quality criteria during drying were measured. For this purpose, mango and pineapple slices were dried in a single-layer dryer at air temperatures between 40 °C and 80 °C and relative air humidities from 5% to 30%. Throughout the drying process, the moisture content of the samples was measured and images were taken with the multispectral area-scan camera. A software filter was developed to analyse only selected regions of interest in the images. Using machine-learning algorithms, the product moisture could be predicted very accurately at any point in time from the spectral and process data (coefficient of determination R² of 0.98 to 0.99). The combination of the multispectral area-scan camera principle and machine learning was then tested in another drying process and with further quality criteria. For this, mango slices were dried in a cabinet dryer and their product properties were predicted from the spectral and process data. For the prediction of the colour values Δa*, Δb*, and ΔE00 as well as the total soluble solids content of the rehydration water, coefficients of determination R² between 0.56 and 0.94 were achieved.
It was shown that the combination of the newly developed multispectral camera system and machine learning can be used to predict the product moisture and other quality criteria of the products. In this way, quality changes can be monitored inline during the process with only a few measuring devices.
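A hedged sketch of the prediction step (file names, model choice, and data shapes are illustrative; the thesis does not specify its code here):

```python
# Predicting product moisture from multispectral + process features.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split

X = np.load("spectra_and_process.npy")   # reflectance bands + T, RH per sample
y = np.load("moisture.npy")              # relative moisture content x_wb

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X_tr, y_tr)
print("R2:", r2_score(y_te, model.predict(X_te)))
```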
This contribution first gives an introductory overview of the legal framework specific to marketing controlling. Since data collection, processing, and analysis are always the main functional features of marketing controlling, the legal foundations of data protection form the focus of the subsequent discussion.