Cyberspace: a world at war. Our privacy, freedom of speech, and with them the very foundations of democracy are under attack. In the virtual world, frontiers are not set by nations or states; they are set by those who control the flows of information. And control is what everybody wants.
The Five Eyes are watching, storing, and evaluating every transmission. Internet corporations compete for our data and decide if, when, and how we gain access to that data and to their purportedly free services. Search engines control what information we are allowed - or want - to consume. Network access providers and carriers are fighting for control of larger networks and for better ways to shape the traffic. Interest groups and copyright holders struggle to limit access to specific content. Network operators try to keep their networks and their data safe from outside - or inside - adversaries.
And users? Many of them just don’t care. Trust in concepts and techniques is implicit. Those who do care try to take back control of the Internet through privacy-preserving techniques.
This leads to an arms race between those who try to classify the traffic and those who try to obfuscate it. But good or bad is in the eye of the beholder, and one may find oneself fighting on both sides.
Network Traffic Classification is an important tool for network security. It allows identification of malicious traffic and possible intruders, and can also optimize network usage. Network Traffic Obfuscation is required to protect transmissions of important data from unauthorized observers and to keep the information private. However, with security and privacy both crumbling under the grip of legal and illegal black-hat crackers, we dare say that contemporary traffic classification and obfuscation techniques are fundamentally flawed. The underlying concepts cannot keep up with technological evolution; their implementations are insufficient and inefficient, and require too many resources.
We provide (1) a unified view on the apparently opposed fields of traffic classification and obfuscation, their deficiencies and limitations, and how they can be improved. We show that (2) using multiple classification techniques, each optimized for a specific task, reduces overall resource requirements and subsequently increases classification speed. (3) Classification based on application domain behavior yields more accurate information than trying to identify communication protocols. (4) Current approaches to identifying signatures in packet content are slow and require large amounts of space or memory; enhanced methods reduce these requirements and allow faster matching. (5) Simple, easy-to-implement obfuscation techniques allow circumvention of even sophisticated contemporary classification systems. (6) Trust and privacy can be increased by reducing communication to the required minimum and limiting it to known and trustworthy communication partners.
Our techniques improve both security and privacy and can be applied efficiently on a large scale. It is but a small step in taking back the Web.
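Claim (2) above, using multiple classifiers optimized for specific tasks, can be illustrated with a minimal sketch. The ports, byte signatures, and function names below are purely illustrative assumptions, not taken from the thesis:

```python
# Hypothetical two-stage traffic classifier: a cheap port-based
# pre-filter resolves the easy cases, and only the remaining flows are
# passed to a more expensive payload-signature stage.

WELL_KNOWN_PORTS = {80: "http", 443: "tls", 53: "dns", 22: "ssh"}

SIGNATURES = {                 # toy byte signatures for stage two
    b"GET ": "http",
    b"\x16\x03": "tls",        # TLS record header
    b"SSH-": "ssh",
}

def classify_fast(dst_port):
    """Stage 1: O(1) port lookup; returns None if inconclusive."""
    return WELL_KNOWN_PORTS.get(dst_port)

def classify_payload(payload):
    """Stage 2: slower signature matching on the first payload bytes."""
    for sig, label in SIGNATURES.items():
        if payload.startswith(sig):
            return label
    return "unknown"

def classify(dst_port, payload):
    return classify_fast(dst_port) or classify_payload(payload)

label = classify(8080, b"GET /index.html HTTP/1.1")
```

In such a design most flows never reach the slow stage, which is the resource argument behind claim (2).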
The steadily increasing digitalization of communication and interaction enables ever more flexible and faster capture and execution of activities in business processes. Technological and organizational drivers such as cloud computing and Industry 4.0 enable increasingly complex cross-organizational business processes. The effective and efficient involvement of all people concerned (e.g. IT experts, end users) is a decisive success factor: only if all process participants know the current business processes can their adequate execution be ensured. The necessary balance between flexibility and stability is insufficiently guaranteed by traditional business process management (BPM) methods. Both current research and application-oriented studies point to the insufficient integration of all participants, their lack of understanding, and the low acceptance of BPM. This dissertation, written within the applied research project "BPM@Cloud", develops a new method for agile business process management based on everyday-language (colloquial, domain-specific) modeling of business processes. The method comprises three components (procedure, modeling language, software tool), ensuring holistic support in the implementation of BPM projects. By adapting and extending agile concepts from software development, a procedure for the iterative, incremental, and empirical management of business processes is described. Furthermore, a modeling language for business processes is developed that supports the intuitive, everyday-language capture of business processes.
In addition, the implementation of a software prototype enables the direct collection of feedback during the execution of business processes. The three complementary components - procedure, language, and software prototype - form a novel basis for improved capture, enrichment, execution, and optimization of business processes.
Sustainable economic activity and, in particular, more sustainable consumption have long been recognized as central challenges of the 21st century. Companies and consumers alike are called upon to enable each other in processes of joint value creation. The potential of these processes lies not only in a prudent moderation of the actors, but in a visible upgrading of the many sustainable consumption options. One way to achieve this is to make sustainability recognizable and tangible as a quality across a broad spectrum of consumer goods. A sustainability declaration can accomplish far more than just another visual label: companies can recognize the cultural contexts of consumers and act in a targeted manner. The many possibilities of digital media can be helpful here. The present work proposes to do this always with a view to the lived reality on both the supplier and the consumer side.
Public-key cryptographic algorithms are an essential part of today's cyber security, since they are required for key exchange protocols, digital signatures, and authentication. However, large-scale quantum computers threaten the security of the most widely used public-key cryptosystems. Hence, the National Institute of Standards and Technology (NIST) is currently running a standardization process for post-quantum secure public-key cryptography. One type of such systems is based on the NP-complete problem of decoding random linear codes and is therefore called code-based cryptography. The best-known code-based cryptographic system is the McEliece system, proposed in 1978 by Robert McEliece. It uses a scrambled generator matrix as the public key, and the original generator matrix together with the scrambling as the private key. To encrypt a message, it is encoded with the public code and a random but correctable error vector is added. Only the legitimate receiver can correct the errors and decrypt the message using the knowledge of the private generator matrix. The original proposal of the McEliece system was based on binary Goppa codes, which are also considered for standardization. While these codes seem to be a secure choice, the public keys are extremely large, limiting the practicality of such systems. Many different code families have been proposed for the McEliece system, but many of them are considered insecure because attacks exist that use the known code structure to recover the private key. The security of code-based cryptosystems mainly depends on the number of errors added by the sender, which is limited by the error correction capability of the code. Hence, to obtain high security with relatively short codes, one needs a high error correction capability. Therefore, maximum distance separable (MDS) codes were proposed for these systems, since they are optimal with respect to the Hamming distance.
To increase the error correction capability, we propose q-ary codes over different metrics. Many code families have a higher minimum distance in some metric other than the Hamming metric, leading to an increased error correction capability in that metric. To make use of this, one needs to restrict not only the number of errors but also their values. In this work, we propose the weight-one error channel, which restricts the error values to weight one and can be applied to different metrics. In addition, we propose several concatenated code constructions that make use of this restriction of the error values. For each of these constructions we discuss the usability in code-based cryptography and compare them to other state-of-the-art code-based cryptosystems. The proposed code constructions show that restricting the error values allows for significantly smaller public keys in code-based cryptographic systems. Furthermore, the use of concatenated code constructions allows for low-complexity decoding and therefore an efficient cryptosystem.
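The McEliece mechanics described above can be sketched at toy scale. The following is a minimal illustration only: it uses the tiny Hamming(7,4) code (which corrects a single bit error) rather than the long Goppa or concatenated codes discussed here, and all matrices are chosen for convenience:

```python
import random

# Toy McEliece over GF(2): public key G' = S*G*P, ciphertext
# c = m*G' + e with a random weight-1 error e. Hamming(7,4) is used
# purely for illustration; real systems need far longer codes.

G = [[1,0,0,0,1,1,0],        # systematic generator matrix [I|A]
     [0,1,0,0,1,0,1],
     [0,0,1,0,0,1,1],
     [0,0,0,1,1,1,1]]
H = [[1,1,0,1,1,0,0],        # parity-check matrix [A^T|I]
     [1,0,1,1,0,1,0],
     [0,1,1,1,0,0,1]]
S = [[1,1,0,0],              # invertible scrambling matrix, chosen
     [0,1,0,0],              # so that S is its own inverse over GF(2)
     [0,0,1,1],
     [0,0,0,1]]
sigma = [2,5,0,6,3,1,4]      # secret coordinate permutation P

def vec_mat(v, M):
    """Row vector times matrix over GF(2)."""
    return [sum(a*b for a, b in zip(v, col)) % 2 for col in zip(*M)]

def mat_mul(A, B):
    return [vec_mat(row, B) for row in A]

SG = mat_mul(S, G)
G_pub = [[row[sigma[j]] for j in range(7)] for row in SG]  # public key

def encrypt(m):
    c = vec_mat(m, G_pub)
    c[random.randrange(7)] ^= 1          # add a random weight-1 error
    return c

def decrypt(c):
    x = [0]*7
    for j in range(7):                   # undo the permutation P
        x[sigma[j]] = c[j]
    s = [sum(H[r][j]*x[j] for j in range(7)) % 2 for r in range(3)]
    if any(s):                           # syndrome = column of H at
        for j in range(7):               # the error position
            if [H[r][j] for r in range(3)] == s:
                x[j] ^= 1
                break
    return vec_mat(x[:4], S)             # m = (mS)*S^-1, S^-1 = S here

m = [1, 0, 1, 1]
assert decrypt(encrypt(m)) == m
```

The weight-one restriction of the error vector is exactly what the syndrome decoder exploits; the thesis's contribution is to impose analogous value restrictions in other metrics to pack more correctable errors into shorter codes.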
According to the World Food Organization, nearly half of all root and tuber crops worldwide are not consumed, but are lost to inappropriate storage and post-harvest handling. In developing countries such as Ethiopia, potatoes have not been dried, but are traditionally stored in potato clamps. So far, dried potatoes have not been converted into usable foods.
The aim of the present work is to convert potatoes - perishable roots and tubers - into stable products by hot-air drying. Hot-air dryers are economical to operate in industrialized countries; in Africa, however, they are an option only for larger industrial companies. In regions with a tropical climate, the use of solar tunnel dryers is worthwhile instead. These are a good choice for farming and small industries, and wherever electrical energy is difficult or impossible to obtain.
In the first part of the work, the drying process of potatoes was investigated, in particular with regard to the change of thermal, mechanical and chemical quality parameters. An evaluation of the literature showed that potatoes are not subject to quality changes if the water activity is below a value of 0.2. To determine the water content associated with this value at storage temperature, the known equations for the sorption equilibrium were evaluated and verified with our own experimental investigations. This determined the end point of the drying process.
The subsequent experimental investigations showed a process-dependent change of quality criteria such as color, shrinkage, and mechanical properties, as well as of the content of value-determining substances such as vitamin C and starch. The differences in the course and magnitude of the quality changes were attributed to the glass transition that takes place during the drying process. For the determination of the glass transition temperature, a new, simple method based on the measurement of mechanical properties was developed. Knowledge of the glass transition temperature made it possible to optimize the drying process, which could be carried out in the rubbery or glassy region depending on the expected quality changes. Thus, all information was available to produce high-quality dried potatoes in an industrial process.
Since the production of potato products in less industrialized regions without a sufficient supply of electrical energy should also be covered, potatoes were additionally dried with a solar tunnel dryer. Examination of the quality properties mentioned above confirmed the process-dependent quality changes.
Finally, the dried product was ground, and the resulting flour was used to replace part of the wheat flour in baking bread. An evaluation of the finished bread by a panel showed that the acceptance of the bread made from the new recipe was high, also with regard to baking volume, taste, texture and color.
This work shows that, by drying, potatoes can be transformed into a well-accepted, storable and easily transportable product, minimizing the risk of losses or degradation. The product can be made on an industrial scale as well as at farm level. If the influence of the glass transition is taken into account, the quality of the product can be optimized.
Despite the urgent need for sustainable and independent energy generation, and although the share of photovoltaically generated electricity is already rising, the adoption of building-integrated photovoltaics (BIPV) is stagnating. Numerous "lighthouse" projects demonstrate the great aesthetic potential of solar-active building components, and yet architects in particular repeatedly cite design reservations alongside perceived restrictions of their planning freedom.
So far, the discussion of PV components has focused primarily on their technical and constructional integration. To contribute to the development of visually convincing results that prevent photovoltaic components from being perceived as foreign objects on a building, the present work derives generally applicable criteria for architectural quality from aesthetic theories of architecture and transfers them to the field of BIPV design.
The work conveys the fundamentals of BIPV system technology required for understanding, presents available components, and identifies the different actors and goals involved in BIPV design. The special functional and technical requirements that PV components pose as "active" components are also taken into account and differentiated with regard to their inhibiting or synergetic interactions.
In a project study, the above criteria are applied to 13 best-practice examples of recent winners of the "Architekturpreis Gebäudeintegrierte Solartechnik" awarded by the Solarenergieförderverein Bayern e. V. (SeV), which are presented comparatively in the form of profiles.
The result is the synthesis of a catalog of criteria, serving as an orientation, planning, and communication tool, in which all findings are systematically compiled.
In addition, a brief excursus addresses interfaces to economic aspects that were excluded from the main investigation but are relevant in practice.
In this thesis, a new framework has been proposed, designed and developed for creating efficient and cost-effective logistics chains for long items within the building industry. The building industry handles many long items such as pipes and profiles. The handling of these long items is complicated because they are bulky, unstable and heavy, so handling them manually is neither cost-effective nor efficient. Existing planning frameworks ignore the special requirements of such goods, which is why many additional manual handling steps are currently required for long items. It is therefore important to develop a new framework for creating efficient and cost-effective logistics chains for long items. To propose such a framework, expert interviews were conducted to gain a full understanding of the customer requirements; experts from all stages of the building industry supply chain were interviewed. The data collected from these interviews was analysed, and the findings about customer requirements served as valuable inputs for the proposed framework. To gain full knowledge of current practices, all existing planning frameworks were analysed and evaluated using a SWOT analysis: their strengths, weaknesses, opportunities and threats were comparatively assessed, and the findings were used for proposing, designing and developing the new framework. Considerable effort went into the implementation stage, where six key parameters for a successful implementation were identified.
They are:
• Improvement process with employees
• Control of the improvements
• Gifts/money for the improvements and additional work
• KAIZEN workshops
• Motivation of the employees for improvements
• Presentation of the results
Among these six parameters, KAIZEN workshops were found to be a very effective way of creating an efficient and cost-effective logistics chain for long items. The new framework can be used for the planning of logistics that handle long items and commercial goods, and for planning all kinds of in-house logistics processes from incoming goods, storage, picking and delivery combination areas through to the outgoing goods area. The achievements of this project are: (1) the new framework for creating efficient and cost-effective logistics chains for long items, (2) the data collection and evaluation in the preliminary planning, (3) the decision for one planning variant already at the end of the structure planning, (4) the analysis and evaluation of customer requirements, (5) the consideration and implementation of the customer requirements in the new framework, (6) the creation of figures and tables as a planning guideline, (7) the research and further development of Minomi with regard to long items, (8) the research on the information flow, (9) the classification of the improvements and the improvement handling during implementation, and (10) the identification of key parameters for a successful implementation of the planning framework. The framework has been evaluated both theoretically and through a case study of planning a logistics system for handling long items and commercial goods. It was found to be theoretically sound and practically valuable, and it can be applied to creating logistics systems for long items, especially in the building industry.
Flash memories are non-volatile memory devices. The rapid development of flash technologies leads to higher storage density, but also to higher error rates. This dissertation considers this reliability problem of flash memories and investigates suitable error correction codes, e.g. BCH codes and concatenated codes. First, the flash cells, their functionality and their error characteristics are explained. Next, the mathematics of the employed algebraic codes is discussed. Subsequently, generalized concatenated codes (GCC) are presented. Compared to the commonly used BCH codes, concatenated codes promise higher code rates and lower implementation complexity. This complexity reduction is achieved by dividing a long code into smaller components, which require smaller Galois field sizes. The algebraic decoding algorithms enable an analytical determination of the block error rate, so very low residual error rates can be guaranteed for flash memories. Besides the complexity reduction, generalized concatenated codes can exploit soft information. Such soft decoding is not practicable for long BCH codes. In this dissertation, two soft decoding methods for GCC are presented and analyzed. These methods are based on Chase decoding and on the stack algorithm. The latter explicitly uses the generalized concatenated code structure, in which the component codes are nested subcodes; this property supports the complexity reduction. Moreover, the two-dimensional structure of GCC enables the correction of error patterns with statistical dependencies. One chapter of the thesis demonstrates how the concatenated codes can be used to correct two-dimensional cluster errors. For this purpose, a two-dimensional interleaver is designed with the help of Gaussian integers; this design achieves the correction of cluster errors with the best possible radius. Large parts of this work are dedicated to the question of how the decoding algorithms can be implemented in hardware.
These hardware architectures, their throughput and their logic size are presented for long BCH codes and generalized concatenated codes. The results show that generalized concatenated codes are suitable for error correction in flash memories, especially for three-dimensional NAND memory systems used in industrial applications, where low residual error rates must be guaranteed.
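The Chase idea mentioned above, trying hard decoding on several test patterns obtained by flipping the least reliable bits, can be sketched generically. The example below uses the small Hamming(7,4) code as a stand-in component code; the actual component codes, parameters, and metric in the dissertation differ:

```python
import itertools

# Generic Chase-style soft-input decoding sketch. Soft inputs r[i] are
# noisy antipodal values: bit 0 -> +1, bit 1 -> -1.

H = [[1,1,0,1,1,0,0],        # parity-check matrix of Hamming(7,4)
     [1,0,1,1,0,1,0],
     [0,1,1,1,0,0,1]]

def hard_decode(y):
    """Classic syndrome decoder: corrects at most one bit error."""
    y = y[:]
    s = [sum(H[r][j]*y[j] for j in range(7)) % 2 for r in range(3)]
    if any(s):                       # syndrome matches a column of H
        for j in range(7):
            if [H[r][j] for r in range(3)] == s:
                y[j] ^= 1
                break
    return y

def chase_decode(r, t=2):
    hard = [0 if x >= 0 else 1 for x in r]
    # flip patterns over the t least reliable positions
    least = sorted(range(7), key=lambda i: abs(r[i]))[:t]
    best, best_metric = None, float("-inf")
    for flips in itertools.chain.from_iterable(
            itertools.combinations(least, k) for k in range(t + 1)):
        y = hard[:]
        for i in flips:
            y[i] ^= 1
        cand = hard_decode(y)
        # correlation between candidate codeword and soft input
        metric = sum((1 - 2*b) * x for b, x in zip(cand, r))
        if metric > best_metric:
            best, best_metric = cand, metric
    return best

# Two hard-decision errors, but both at the least reliable positions:
r = [-0.9, 0.8, 0.2, 0.7, -1.1, -0.3, -0.8]   # sent: [1,0,1,0,1,0,1]
```

On this input the hard decoder alone fails (two errors exceed its capability), while the Chase search recovers the transmitted codeword, which is the gain soft decoding offers for the component codes.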
In today's volatile market environments, companies must be able to continuously innovate. In this context, innovation does not only refer to the development of new products or business models but often also affects the entire organization, which has to transform its structures, processes, and ways of working. Corporate entrepreneurship (CE) programs are often used by established companies to address these innovation and transformation challenges. In general, they are understood as formalized entrepreneurial activities to (1) support internal corporate ventures or (2) work with external startups. The organizational design and value creation of CE programs exhibit a high degree of heterogeneity. On the one hand, this heterogeneity makes CE programs a valuable management tool that can be used for many purposes. On the other hand, it can be seen as a reason for the current challenges that companies experience in effectively using and managing CE programs. By systematically analyzing 54 different cases in established companies in Germany, Switzerland, and Austria, this study contributes to a better understanding of the heterogeneity of CE programs. The taxonomic approach provides clearly defined types of CE programs that are distinguished according to their organizational design and the outputs they generate.
Integrität in Unternehmen (Integrity in Companies)
(2018)
Companies bear the responsibility to observe a multitude of values in their business, first and foremost integrity. The book answers the question of what integrity means for companies and how corporate action with integrity can be achieved. The author develops a theoretically founded and practically applicable approach to corporate integrity and provides orientation on how it can be implemented through a variety of measures within integrity management. Classical compliance approaches are complemented by a value-oriented perspective so that companies can exercise their own specific responsibility.
Autonomous moving systems require very detailed information about their environment and about potentially colliding objects. Thus, the systems are equipped with high-resolution sensors. These sensors typically generate more than one detection per object per time step. This adds complexity for the target tracking algorithm, since standard tracking filters assume that each object generates at most one detection. New methods for data association and system state filtering are therefore required.
As new data association methods, this thesis proposes two different extensions of the Joint Integrated Probabilistic Data Association (JIPDA) filter that can assign more than one detection to a track.
The first method is a generalization of the JIPDA that assigns a variable number of measurements to each track based on predefined statistical models; it is called Multi Detection - Joint Integrated Probabilistic Data Association (MD-JIPDA).
Since this scheme suffers from an exponential increase of association hypotheses, a new approximation scheme is also presented. The second method is an extension for the special case in which the number and locations of the measurements are known a priori. In preparation for this method, a new notation and computation scheme for the standard JIPDA is outlined, which also enables the derivation of a new fast approximation scheme called balanced permanent-JIPDA.
For state filtering, two different concepts are applied as well: the Random Matrix framework and Measurement Generating Points. For the Random Matrix framework, an alternative prediction method is first proposed that also accounts for kinematic state changes in the prediction of the extension state. Second, various update methods are investigated to address the polar-to-Cartesian noise transformation problem. The filtering concepts are combined with the new MD-JIPDA and their characteristics are analyzed in various Monte Carlo simulations.
If an object can be modeled by a finite number of fixed Measurement Generating Points (MGP), a method to track such objects with a JIPDA filter is also proposed. In this context, a fast track-to-track fusion algorithm is proposed as well and compared against the MGP-JIPDA.
The proposed algorithms are evaluated in two applications, both using radar sensors only. The first is a typical automotive scenario, in which a passenger car is equipped with six radar sensors to cover its complete environment.
In this application, the locations of the measurements on an object can be considered stationary, and the objects can be assumed to have a rectangular shape. Thus, the MGP-based algorithms are applied here. The filters are evaluated in particular by tracking vehicles on nearside lanes.
The second application covers the tracking of vessels on inland waters. Here, two different kinds of radar systems are used, but for both sensors a uniform distribution of the measurements over the target's extent can be assumed. Furthermore, the assumption that the targets have an elliptical shape holds, so the Random Matrix framework in combination with the MD-JIPDA is evaluated.
Exemplary test scenarios also illustrate the performance of this tracking algorithm.
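The probabilistic data association idea underlying the filters above can be illustrated with a much-simplified, single-target, one-dimensional sketch. This is not the thesis's MD-JIPDA (no track existence probabilities, no multiple tracks, no multi-detection hypotheses); the weight formula and all parameter values are simplified and illustrative:

```python
import math

# Simplified single-target probabilistic data association: each gated
# detection gets a weight proportional to its Gaussian likelihood, plus
# one hypothesis that the target was missed and all detections are
# clutter. JIPDA extends this to multiple tracks and track existence.

def pda_weights(z_pred, var, detections, p_d=0.9, clutter_density=0.1):
    """1-D example: predicted measurement z_pred with variance var."""
    def likelihood(z):
        return math.exp(-(z - z_pred)**2 / (2*var)) / math.sqrt(2*math.pi*var)
    scores = [1.0 - p_d]                  # "missed / all clutter" hypothesis
    scores += [p_d * likelihood(z) / clutter_density for z in detections]
    total = sum(scores)
    return [s / total for s in scores]    # beta_0, beta_1, beta_2, ...

w = pda_weights(z_pred=0.0, var=1.0, detections=[0.2, 3.5])
```

Here the detection near the prediction receives almost all of the weight, while the distant one is effectively treated as clutter; the state update is then a weighted combination of the per-detection updates.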
Koordination des Wissenstransfers in Service-Netzwerken transnationaler Investitionsgüterhersteller (Coordination of Knowledge Transfer in the Service Networks of Transnational Capital Goods Manufacturers)
(2017)
Algorithms and Architectures for Cryptography and Source Coding in Non-Volatile Flash Memories
(2021)
In this work, algorithms and architectures for cryptography and source coding are developed that are suitable for many resource-constrained embedded systems such as non-volatile flash memories. A new concept for elliptic curve cryptography is presented that uses an arithmetic over Gaussian integers. Gaussian integers are a subset of the complex numbers with integers as real and imaginary parts. Ordinary modular arithmetic over Gaussian integers is computationally expensive. To reduce the complexity, a new arithmetic based on the Montgomery reduction is presented. For the elliptic curve point multiplication, this arithmetic over Gaussian integers improves the computational efficiency and the resistance against side-channel attacks, and reduces the memory requirements. Furthermore, an efficient variant of the Lempel-Ziv-Welch (LZW) algorithm for universal lossless data compression is investigated. Instead of one LZW dictionary, this algorithm applies several dictionaries to speed up the encoding process. Two dictionary partitioning techniques are introduced that improve the compression rate and reduce the memory size of this parallel dictionary LZW algorithm.
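The parallel-dictionary idea can be sketched as follows: splitting the LZW dictionary by phrase length means each lookup probes only one small table, and in hardware the tables can be searched in parallel. The partitioning scheme, sizes, and code assignment below are illustrative assumptions, not the exact construction from the thesis:

```python
# Sketch of LZW compression with the dictionary partitioned by phrase
# length. Length-1 phrases are the 256 single bytes and need no stored
# table; each longer length gets its own sub-dictionary.

MAX_LEN = 4                      # longest stored phrase (illustrative)

def pdlzw_compress(data: bytes):
    dicts = {k: {} for k in range(2, MAX_LEN + 1)}
    next_code = 256
    out, i = [], 0
    while i < len(data):
        # longest-match search: each length probes its own dictionary,
        # which is what enables parallel lookups in hardware
        match_len, code = 1, data[i]
        for k in range(min(MAX_LEN, len(data) - i), 1, -1):
            phrase = data[i:i+k]
            if phrase in dicts[k]:
                match_len, code = k, dicts[k][phrase]
                break
        out.append(code)
        # insert the match extended by one symbol into the next table
        new = data[i:i+match_len+1]
        if 2 <= len(new) <= MAX_LEN and new not in dicts[len(new)]:
            dicts[len(new)][new] = next_code
            next_code += 1
        i += match_len
    return out

codes = pdlzw_compress(b"abababab")
```

Compared with a single LZW dictionary, the critical path of one encoding step shrinks from one search over all stored phrases to a set of independent fixed-length searches, at the cost of capping the maximum phrase length.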