Refine
Document Type
- Doctoral Thesis (52)
Language
- English (30)
- German (20)
- Multiple languages (2)
Has Fulltext
- no (52)
Keywords
- Agrarprodukt (1)
- Apfel (1)
- Autonomous vessels (1)
- Autonomy (1)
- BIPV (1)
- Backstepping control (1)
- Bahnplanung (1)
- Beobachterentwurf (1)
- Bernstein Basis (1)
- Biomedical signals (1)
Institute
- Fakultät Bauingenieurwesen (1)
- Fakultät Elektrotechnik und Informationstechnik (1)
- Fakultät Informatik (3)
- Fakultät Maschinenbau (2)
- Fakultät Wirtschafts-, Kultur- und Rechtswissenschaften (3)
- Institut für Optische Systeme - IOS (3)
- Institut für Strategische Innovation und Technologiemanagement - IST (3)
- Institut für Systemdynamik - ISD (8)
- Konstanz Institut für Corporate Governance - KICG (1)
Also published as a doctoral dissertation at Carl von Ossietzky Universität Oldenburg, 2015, under the title: Werteorientiertes Management von Chancen und Risiken in der kommunalen Energieversorgung. Eine Untersuchung der Herausforderungen und Handlungsmöglichkeiten von Stadtwerken in der Energiewende aus ökonomischer, moralischer und kultureller Sicht.
Taking the approach of values-oriented corporate management, the author explores the challenges and options for action of (municipal) energy suppliers in the course of the German energy transition (Energiewende) since 2011. She shows that municipal energy suppliers, positioned between the private sector and the public sector, occupy a special role that gives rise to particular opportunities with regard to the new challenges of the energy transition. With the help of values-oriented management, assets such as decentrality and a head start in consumer trust can be recognized and leveraged.
This research project examines how individual entrepreneurs change in their use of the decision logics of effectuation and causation, in particular during the venture-creation process. Based on a qualitative case-study design, both within-case and cross-case analyses show how entrepreneurs initially hold on to the effectual logic and move toward hybrid forms of logic as the founding process unfolds. The contribution to research points in three directions: First, the work closes gaps in effectuation research by specifying the decision-making of company founders at the individual level. Second, the focus on the individual founding phases allows a better understanding of the tendencies to change, including their causal relationships (when, how, and why); in particular, it provides insights into the salience of shifts from one phase to the next. Third, by illuminating the various subdimensions of effectuation and causation and their increasingly hybrid use over the course of the founding process, it advances the understanding of transformative processes.
Despite the many new tools and methodologies adopted in the road infrastructure sector, the performance of road infrastructure projects is not consistently improving. Considering that the volume of projects undertaken is forecast to increase every year, this is a substantial issue for the sector. This work therefore focuses on the principles of Blockchain Technology, the road infrastructure sector, and information exchange, with the aim of using the advantages of Blockchain Technology to help overcome the various challenges along the life cycle of road infrastructure projects.
Within the scope of this work, two studies were conducted. First, focus groups were used to explore where the road infrastructure sector stands in terms of Industry 4.0 and to gain a better understanding of whether and where the principles of Blockchain Technology can be used when managing projects in the sector. Second, semi-structured interviews were administered with experts from the road infrastructure sector and experts in Blockchain Technology to better understand the interrelation between these two areas. Based on the outcome of the two studies, technology barriers and enablers were explored with the aim of improving information exchange within the road infrastructure sector.
The two studies revealed strong interrelations between the principles of Blockchain Technology, project management within the road infrastructure sector, and information exchange. These interrelations are complex and diverse, but overall it can be concluded that adopting the principles of Blockchain Technology in the field of information exchange improves the management of road infrastructure projects. Based on the two studies, a theoretical framework was developed.
In summary, this research showed that trust is an important factor that forms the foundation for communication and proper information exchange. Within the scope of this thesis, it was demonstrated that the principles of Blockchain Technology can be used to increase the transparency, traceability, and immutability of information exchange over the life cycle of road infrastructure projects.
Investigation and presentation of the quality changes of agricultural products during drying
(2019)
The aim of this work was to identify optimal drying processes for various agricultural products. To this end, the quality criteria of fresh and dried agricultural products were analyzed, and the changes caused by the different drying parameters, such as air velocity, dew-point temperature, drying temperature, and drying time, were presented. A literature review examined the factors behind post-harvest losses and their magnitude in industrialized, emerging, and developing countries. In addition, the agricultural products and their quality-determining constituents are introduced, and the extraction and analysis methods are presented and explained: high-performance liquid chromatography and ion-exclusion chromatography, as well as UV/Vis spectroscopy and polarimetry. Furthermore, during the drying processes, images were taken at defined intervals with the dryer's integrated camera and examined with specially developed software with regard to color change and shrinkage of the products. The experimental results were compiled and verified using statistical software. New diagrams, so-called damage diagrams, were introduced; these diagrams make it possible to identify optimal drying processes. A drying temperature of about 60 °C proved optimal for chilies, about 64 °C to 74 °C for potatoes, about 43 °C for pineapples, and about 60 °C for mangoes. Dew-point temperatures below about 12 °C or above about 27 °C for chilies, about 30 °C for potatoes, about 14 °C for pineapples, and about 20 °C for mangoes were likewise optimal. An air velocity of around 1.2 m/s was found to be optimal (potatoes: about 1.2 m/s; pineapples: about 1.2 m/s; mangoes: about 0.9 m/s). The results showed that, for each of the four products, the drying temperature had the largest effect on the degradation of the quality-determining properties; for example, ascorbic acid, total sugar content, and organic acids degraded more strongly with increasing drying temperature. In the future, in addition to the optimal drying conditions, it should be taken into account that the size, shape, and texture of the samples have a decisive influence on stationary drying processes. It is also conceivable to employ non-stationary drying processes, in which the quality-reducing enzymes are first inactivated at high temperatures and drying then proceeds at lower temperatures and thus lower thermal load. Care should also be taken that products are not over-dried, so that in the future drying is carried out only to just below the maximum residual moisture content and not, as in this work, to constant weight.
In today's volatile market environments, companies must be able to innovate continuously. In this context, innovation does not only refer to the development of new products or business models but often also affects the entire organization, which has to transform its structures, processes, and ways of working. Corporate entrepreneurship (CE) programs are often used by established companies to address these innovation and transformation challenges. In general, they are understood as formalized entrepreneurial activities to (1) support internal corporate ventures or (2) work with external startups. The organizational design and value creation of CE programs exhibit a high degree of heterogeneity. On the one hand, this heterogeneity makes CE programs a valuable management tool that can be used for many purposes. On the other hand, it can be seen as a reason for the challenges that companies currently experience in effectively using and managing CE programs. By systematically analyzing 54 cases in established companies in Germany, Switzerland, and Austria, this study contributes to a better understanding of the heterogeneity of CE programs. The taxonomic approach provides clearly defined types of CE programs, distinguished according to their organizational design and the outputs they generate.
Public-key cryptographic algorithms are an essential part of today's cyber security, since they are required for key exchange protocols, digital signatures, and authentication. However, large-scale quantum computers threaten the security of the most widely used public-key cryptosystems. Hence, the National Institute of Standards and Technology (NIST) is currently running a standardization process for post-quantum secure public-key cryptography. One type of such system is based on the NP-complete problem of decoding random linear codes and is therefore called code-based cryptography. The best-known code-based cryptosystem is the McEliece system, proposed in 1978 by Robert McEliece. It uses a scrambled generator matrix as the public key and the original generator matrix, together with the scrambling, as the private key. To encrypt a message, it is encoded in the public code and a random but correctable error vector is added. Only the legitimate receiver can correct the errors and decrypt the message using the knowledge of the private-key generator matrix. The original proposal of the McEliece system was based on binary Goppa codes, which are also considered for standardization. While these codes seem to be a secure choice, the public keys are extremely large, limiting the practicality of such systems. Many other code families have been proposed for the McEliece system, but many of them are considered insecure because attacks exist that use the known code structure to recover the private key. The security of code-based cryptosystems mainly depends on the number of errors added by the sender, which is limited by the error correction capability of the code. Hence, in order to obtain high security for relatively short codes, one needs a high error correction capability. Maximum distance separable (MDS) codes were therefore proposed for these systems, since they are optimal for the Hamming distance. In order to increase the error correction capability, we propose q-ary codes over different metrics. Many code families have a higher minimum distance in some other metric than in the Hamming metric, leading to increased error correction capability over that metric. To make use of this, one needs to restrict not only the number of errors but also their values. In this work, we propose the weight-one error channel, which restricts the error values to weight one and can be applied for different metrics. In addition, we propose some concatenated code constructions that make use of this restriction of error values. For each of these constructions, we discuss the usability in code-based cryptography and compare them to other state-of-the-art code-based cryptosystems. The proposed code constructions show that restricting the error values allows significantly smaller public keys for code-based cryptographic systems. Furthermore, the use of concatenated code constructions allows low-complexity decoding and therefore an efficient cryptosystem.
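The McEliece principle described above can be made concrete with a toy sketch (a [7,4] Hamming code instead of a Goppa code, and parameters far too small to be secure); the scrambler S below is chosen so that it is its own inverse mod 2, purely for brevity:

```python
import numpy as np

# Secret code: [7,4] Hamming code in standard form G = [I | A].
A = np.array([[1,1,0],[1,0,1],[0,1,1],[1,1,1]])
G = np.hstack([np.eye(4, dtype=int), A])          # generator matrix (4x7)
H = np.hstack([A.T, np.eye(3, dtype=int)])        # parity-check matrix (3x7)

# Secret scrambler S (this particular S satisfies S @ S = I mod 2)
S = np.array([[1,0,0,0],[1,1,0,0],[0,0,1,0],[0,0,1,1]])
perm = np.random.permutation(7)                   # secret permutation P

# Public key: scrambled and column-permuted generator matrix
G_pub = ((S @ G) % 2)[:, perm]

def encrypt(m):
    e = np.zeros(7, dtype=int)
    e[np.random.randint(7)] = 1                   # random, correctable weight-1 error
    return (m @ G_pub + e) % 2

def decrypt(c):
    r = np.empty(7, dtype=int)
    r[perm] = c                                   # undo the permutation
    s = (H @ r) % 2                               # syndrome decoding
    if s.any():
        j = np.flatnonzero((H.T == s).all(axis=1))[0]
        r[j] ^= 1                                 # correct the single error
    return (r[:4] @ S) % 2                        # m = (mS) S^-1, and S^-1 = S here

m = np.array([1, 0, 1, 1])
assert (decrypt(encrypt(m)) == m).all()
```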
Stainless steels owe their ability to form a passive layer only a few nanometers thick to their chemical composition. The corrosion resistance achieved, however, is a system property influenced by a multitude of additional factors. The present work focuses on the influence that grinding-based surface finishing of two selected metastable austenitic alloys has on their corrosion behavior.
Although both alloys possess comparable resistance according to their chemical composition, it can be shown that different surface finishing and different degrees of cold working can produce a strong variation in corrosion behavior. It can also be demonstrated that the character and number of local surface defects are responsible for this, with the abrasive grain material used playing a particularly important role.
Despite the urgent need for sustainable and independent energy generation, and despite the already rising share of photovoltaic electricity, the adoption of building-integrated photovoltaics (BIPV) is faltering. Numerous "lighthouse" projects demonstrate the great aesthetic potential of solar-active building components, and yet architects in particular keep citing design reservations alongside perceived restrictions of their planning freedom.
So far, the discussion of PV building components has concentrated on their technical and constructional integration. To contribute to the development of visually convincing results that prevent photovoltaic components from being perceived as foreign bodies on a building, the present work derives generally applicable criteria for architectural quality of effect from aesthetic theories of architecture and transfers them to the field of BIPV design.
The necessary fundamentals of BIPV system technology are conveyed, and the available components as well as the different actors and goals in BIPV design are presented. The special functional and technical requirements posed by PV components as "active" building elements are also taken into account and differentiated with respect to their inhibiting or synergetic interactions.
In a project study, the above criteria are applied to 13 best-practice examples, recent winners of the "Architekturpreis Gebäudeintegrierte Solartechnik" awarded by the Solarenergieförderverein Bayern e. V. (SeV), which are presented comparatively in the form of profiles.
The result is the synthesis of a catalogue of criteria that systematically compiles all findings and serves as a tool for orientation, planning, and communication.
A brief excursus additionally addresses interfaces to economic aspects that were excluded from the main investigation but are relevant in practice.
In the design of drying processes for sensitive biological goods, product quality plays an increasingly important role. Although the influence of the drying parameters on the drying kinetics of apples has been the subject of many studies, the effects on product quality have so far remained largely unknown. The aim of the present work was to investigate this issue and to develop suitable process strategies for improving the quality of the resulting product. In a first step, extensive stationary baseline experiments were carried out, which showed that an air temperature in the higher range, an air velocity as high as possible, and a low dew-point temperature lead to the shortest drying time combined with good optical quality. These quality changes were assessed using a newly introduced reference quantity, the cumulative thermal load, represented by the time integral over the surface temperature, which decisively improves the comparability of the experimental results. In a second step, the results of the single-layer experiments were used to set up a numerical simulation model that accounts for both the relevant transport processes and the deformation of the drying product. The simulation model and the experimental data formed the basis for the subsequent development of process strategies for the convective drying of apples that improve the resulting product quality, represented by product color and shape, while being as energy-efficient as possible. In a further step, the transferability to the industrial scale was investigated, and the corresponding process strategies were successfully implemented on a newly developed, low-cost drying plant. The goal of improved product quality was achieved with different non-stationary drying schemes, both on the single-layer dryer and at the larger scale. The numerical simulation model also showed high accuracy in predicting the non-stationary drying process and was moreover able to reliably predict the drying behavior at the industrial scale.
Nowadays, most digital modulation schemes are based on conventional signal constellations that have no algebraic group, ring, or field properties, e.g., square quadrature-amplitude modulation constellations. Signal constellations with algebraic structure can enhance system performance. For instance, multidimensional signal constellations based on dense lattices can achieve performance gains due to the dense packing, and the algebraic structure enables low-complexity decoding and detection schemes. In this work, signal constellations with algebraic properties and their application in spatial modulation transmission schemes are investigated. Several design approaches for two- and four-dimensional signal constellations based on Gaussian, Eisenstein, and Hurwitz integers are shown, and detection algorithms with reduced complexity are proposed. It is shown that the proposed Eisenstein and Hurwitz constellations combined with the proposed suboptimal detection can outperform conventional two-dimensional constellations with ML detection.
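As a rough illustration of the idea (not the exact constructions of the thesis), a Gaussian-integer constellation can be obtained by taking the residues modulo a Gaussian integer pi whose norm is a prime p with p mod 4 = 1; the resulting p points form a field and cluster around the origin:

```python
# Sketch: Gaussian-integer constellation as residues modulo pi = a + bj,
# in the spirit of the classic construction of codes over Gaussian integers.
# Here pi = 3 + 2j with norm p = 13, giving a 13-point 2-D constellation.
pi = 3 + 2j
p = int(abs(pi) ** 2)                      # norm of pi (must be prime, p % 4 == 1)

def gmod(x, pi):
    """Modulo reduction in the Gaussian integers: remainder of x / pi."""
    q = x * pi.conjugate() / (abs(pi) ** 2)
    q_rounded = complex(round(q.real), round(q.imag))
    return x - q_rounded * pi

constellation = sorted({gmod(k, pi) for k in range(p)}, key=abs)
print(len(constellation), constellation)   # 13 low-energy points around the origin
```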
Particularly for manufactured products subject to aesthetic evaluation, the industrial manufacturing process must be monitored and visual defects detected. For this purpose, more and more computer-vision-integrated inspection systems are being used. In optical inspection based on cameras or range scanners, typically only a few examples are known before novel examples are inspected. Consequently, no large data set of non-defective and defective examples can be used to train a classifier, and methods that work with limited or weak supervision must be applied. For such scenarios, I propose new data-efficient machine learning approaches based on one-class learning that reduce the need for supervision in industrial computer vision tasks. The developed novelty detection model automatically extracts features from the input images and is trained only on available non-defective reference data. On top of the feature extractor, a one-class classifier based on recent developments in deep learning is placed. I evaluate the novelty detector in an industrial inspection scenario and on state-of-the-art benchmarks from the machine learning community. In the second part of this work, the model is improved by using a small number of novel defective examples, incorporating another source of supervision. The targeted real-world inspection unit is based on a camera array and flashing-light illumination, allowing inline capturing of multichannel images at a high rate. Optionally, the integration of range data, such as laser or lidar signals, is possible by using the developed targetless data fusion method.
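The general pattern (a fixed feature extractor with a one-class classifier on top, trained only on non-defective data) can be sketched as follows; the simple histogram features and the One-Class SVM are stand-ins for the deep feature extractor and deep one-class model developed in the thesis, and the images are synthetic:

```python
import numpy as np
from sklearn.svm import OneClassSVM

def extract_features(images):
    """Stand-in feature extractor: per-image intensity histograms."""
    return np.stack([np.histogram(img, bins=32, range=(0.0, 1.0), density=True)[0]
                     for img in images])

# Train only on non-defective reference images (one-class learning).
rng = np.random.default_rng(0)
ok_images = rng.uniform(0.4, 0.6, size=(200, 64, 64))      # synthetic "good" parts
defect_images = rng.uniform(0.0, 1.0, size=(20, 64, 64))   # synthetic "defects"

model = OneClassSVM(nu=0.05, kernel="rbf", gamma="scale")
model.fit(extract_features(ok_images))

# +1 = consistent with the reference data, -1 = novelty (potential defect)
print(model.predict(extract_features(ok_images[:5])))
print(model.predict(extract_features(defect_images[:5])))
```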
Path planning and collision avoidance for safe autonomous vessel navigation in dynamic environments
(2017)
The so-called "More Electrical Aircraft" (MEA) aims at higher efficiency and lower weight. A main topic here is the use of electrical instead of hydraulic, pneumatic, and mechanical systems. The necessary power electronic devices have intermediate DC links, which are typically supplied by a three-phase system with active B6 and passive B12 rectifiers. A possible alternative is the B6 diode bridge in combination with an active power filter (APF). Due to the parallel arrangement, the APF offers a higher power density and is able to compensate for harmonics from several devices. The use of the diode bridge rectifier alone is not permitted due to the highly distorted phase current. The following investigations deal with the development of an active power filter for a three-phase supply with variable frequency from 360 to 800 Hz. All relevant components, such as inductors, EMC filters, power modules, and the DC-link capacitor, are designed. A particular focus is put on the customized power module with SiC MOSFETs and SiC diodes, which is characterized electrically and thermally. The maximum supply-frequency slope of 50 Hz/ms requires a highly dynamic and robust control algorithm. Furthermore, the content of the 5th and 7th harmonics must be reduced to less than 2 %, which demands high accuracy. To cope with both requirements, a two-stage filter algorithm is developed and implemented on two independent signal processors. Simulations and laboratory experiments confirm the performance and robustness of the control algorithm. This work comprehensively presents the design of aerospace rectifiers; the results were published in conference papers and patents.
According to the Food and Agriculture Organization (FAO), nearly half of all root and tuber crops worldwide are not consumed but are lost to inappropriate storage and post-harvest losses. In developing countries such as Ethiopia, potatoes are traditionally stored in potato clamps rather than dried, and so far dried potatoes have not been converted into usable foods.
The aim of the present work is to convert potatoes, perishable roots and tubers, into stable products by hot-air drying. Hot-air dryers are economical to operate in industrialized countries; in Africa, they are affordable only for larger industrial companies. In regions with a tropical climate, however, the use of solar tunnel dryers is worthwhile. These are a good choice for farms and small industries and wherever electrical energy is difficult or impossible to obtain.
In the first part of the work, the drying process of potatoes was investigated, in particular with regard to the change of thermal, mechanical, and chemical quality parameters. An evaluation of the literature found that potatoes are not subject to quality changes if the water activity is below a value of 0.2. In order to determine the water content associated with this value at storage temperature, the known equations for the sorption equilibrium were evaluated and verified with our own experimental investigations. This determined the end point of the drying process.
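For illustration, one widely used sorption-equilibrium model is the GAB equation; the parameter values below are illustrative placeholders, not those determined in the thesis:

```python
# GAB sorption isotherm: equilibrium moisture content X as a function of
# water activity a_w, with monolayer moisture X_m and constants C, K.
def gab(a_w, X_m=0.06, C=10.0, K=0.8):   # illustrative parameters, not fitted values
    return X_m * C * K * a_w / ((1 - K * a_w) * (1 - K * a_w + C * K * a_w))

# Water content corresponding to the critical water activity of 0.2,
# i.e. the end point of the drying process discussed above.
print(f"X(a_w = 0.2) = {gab(0.2):.4f} kg water / kg dry matter")
```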
The subsequent experimental investigations showed a process-dependent change of quality criteria such as color, shrinkage, and mechanical properties, as well as of the content of value-determining substances such as vitamin C and starch. The differences in the course and magnitude of the quality changes were attributed to the glass transition that takes place during the drying process. For the determination of the glass transition temperature, a new, simple method based on the measurement of mechanical properties was developed. Knowledge of the glass transition temperature allowed the drying process to be optimized: it could be carried out in the rubbery or the glassy region, depending on the expected quality changes. Thus, all information was available to produce high-quality dried potatoes in an industrial process.
Since the production of potato products should also be possible in less industrialized regions without a sufficient supply of electrical energy, potatoes were dried with a solar tunnel dryer. Examination of the quality properties mentioned above confirmed the process-dependent quality changes.
Finally, the dried product was ground, and the flour thus produced was used to replace wheat flour in baking bread. An evaluation of the finished bread by a panel showed that the acceptance of the bread made from the new recipe was high, also with regard to baking volume, taste, texture, and color.
This work shows that drying can transform potatoes into a well-accepted, storable, and easily transportable product. The risk of losses or degradation is minimized, and production is possible at the industrial as well as the farm level. If the influence of the glass transition is taken into account, the quality of the product can be optimized.
The main goal of this work was to experimentally characterize the hot-air drying process of agricultural products (potato, carrot, tomato) and to verify it with numerical solutions for a single-layer and an industrial-scale dryer using Comsol Multiphysics® 5.3.
The effects of the input parameters of the single-layer dryer on quality attributes were examined. Two drying strategies were applied on the batch dryer to examine these effects. The constant-input-parameter strategy was designed using a central composite design formulation and optimized by Response Surface Methodology (RSM). The second strategy further optimized the selected region by using a square-wave profile of the air temperature and relative humidity. For the numerical treatment of the single-layer dryer, unsteady-state partial differential equations were solved by means of the Finite Element Method coupled to the Arbitrary Lagrangian-Eulerian (ALE) formulation. For the batch dryer, mechanistic mathematical models of coupled heat and mass transfer were developed and solved, treating the product as a porous moist solid.
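At a far simpler level than the FEM/ALE model of the thesis, the empirical Page model illustrates how drying-curve parameters can be identified from measured data; the data and parameters here are synthetic and scipy stands in for the tooling actually used:

```python
import numpy as np
from scipy.optimize import curve_fit

def page_model(t, k, n):
    """Page thin-layer drying model: moisture ratio MR(t) = exp(-k * t**n)."""
    return np.exp(-k * t ** n)

# Synthetic "measured" drying curve (moisture ratio over time in minutes).
t = np.linspace(0, 300, 31)
mr_measured = page_model(t, k=0.004, n=1.3) + np.random.normal(0, 0.01, t.size)

(k_fit, n_fit), _ = curve_fit(page_model, t, mr_measured, p0=(0.01, 1.0))
print(f"fitted k = {k_fit:.4f}, n = {n_fit:.2f}")
```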
With this work, the process of convective drying of agricultural products could be optimized. Furthermore, important knowledge about the basic mechanisms of the drying process was gained and implemented in the numerical models.
This thesis presents the development of two different state-feedback controllers to solve the trajectory tracking problem, in which the vessel needs to reach and follow a time-varying reference trajectory. The motion problem was addressed for a full-scale, fully actuated surface vessel whose dynamic model had unknown hydrodynamic and propulsion parameters; these were identified by an experimental maneuver-based identification process, and the resulting model was used to develop the controllers. The first was a backstepping controller, designed with a local exponential stability proof. The second, a nonlinear model predictive controller (NMPC), was developed to minimize the tracking error while considering the thrusters' constraints. Moreover, both controllers considered the thrust allocation problem and counteracted environmental disturbance forces such as current, waves, and wind. The effectiveness of these approaches was verified in simulation using Matlab/Simulink and GRAMPC (in the case of the NMPC) and in experimental scenarios, where they were applied to the vessel performing docking maneuvers on the Rhine River in Constance (Germany).
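The receding-horizon idea behind the NMPC can be sketched for a much simpler system than the vessel model; below, a 1-D double integrator tracks a time-varying reference under an input (thrust-like) constraint, with scipy standing in for GRAMPC:

```python
import numpy as np
from scipy.optimize import minimize

dt, N = 0.1, 15                       # sampling time, prediction horizon
u_max = 1.0                           # input ("thruster") constraint

def rollout(x0, u_seq):
    """Simulate a 1-D double integrator, state x = [position, velocity]."""
    x, traj = np.array(x0, dtype=float), []
    for u in u_seq:
        x = x + dt * np.array([x[1], u])
        traj.append(x.copy())
    return np.array(traj)

def cost(u_seq, x0, ref):
    traj = rollout(x0, u_seq)
    return np.sum((traj[:, 0] - ref) ** 2) + 1e-3 * np.sum(u_seq ** 2)

x = np.array([0.0, 0.0])
for step in range(50):                # receding-horizon loop
    ref = np.sin(0.1 * dt * (step + 1 + np.arange(N)))   # reference over the horizon
    res = minimize(cost, np.zeros(N), args=(x, ref),
                   bounds=[(-u_max, u_max)] * N, method="L-BFGS-B")
    u0 = res.x[0]                     # apply only the first optimized input
    x = x + dt * np.array([x[1], u0])
print(f"final position {x[0]:.3f} vs reference {np.sin(0.1 * dt * 50):.3f}")
```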
Cyberspace: a world at war. Our privacy, freedom of speech, and with them the very foundations of democracy are under attack. In the virtual world, frontiers are not set by nations or states; they are set by those who control the flows of information. And control is what everybody wants.
The Five Eyes are watching, storing, and evaluating every transmission. Internet corporations compete for our data and decide if, when, and how we gain access to that data and to their supposedly free services. Search engines control what information we are allowed - or want - to consume. Network access providers and carriers are fighting for control of larger networks and for better ways to shape the traffic. Interest groups and copyright holders struggle to limit access to specific content. Network operators try to keep their networks and their data safe from outside - or inside - adversaries.
And users? Many of them just don’t care. Trust in concepts and techniques is implicit. Those who do care try to take back control of the Internet through privacy-preserving techniques.
This leads to an arms race between those who try to classify the traffic and those who try to obfuscate it. But good or bad lies in the eye of the beholder, and one may find oneself fighting on both sides.
Network traffic classification is an important tool for network security. It allows identification of malicious traffic and possible intruders, and can also optimize network usage. Network traffic obfuscation is required to protect transmissions of important data from unauthorized observers, to keep the information private. However, with security and privacy both crumbling under the grip of legal and illegal black-hat crackers, we dare say that contemporary traffic classification and obfuscation techniques are fundamentally flawed. The underlying concepts cannot keep up with technological evolution; their implementations are insufficient, inefficient, and require too many resources.
We provide (1) a unified view on the apparently opposed fields of traffic classification and obfuscation, their deficiencies and limitations, and how they can be improved. We show that (2) using multiple classification techniques optimized for specific tasks improves overall resource requirements and subsequently increases classification speed. (3) Classification based on application domain behavior leads to more accurate information than trying to identify communication protocols. (4) Current approaches to identify signatures in packet content are slow and require much space or memory; enhanced methods reduce these requirements and allow faster matching. (5) Simple and easy-to-implement obfuscation techniques allow circumvention of even sophisticated contemporary classification systems. (6) Trust and privacy can be increased by reducing communication to a required minimum and limiting it to known and trustworthy communication partners.
Our techniques improve both security and privacy and can be applied efficiently on a large scale. It is but a small step in taking back the Web.
Sustainable economic activity, and more sustainable consumption in particular, have long been recognized as central challenges of the 21st century. Companies and consumers are equally called upon to enable one another in processes of joint value creation. The potential of these processes, however, lies not only in a prudent moderation of the actors but in a visible upgrading of the many sustainable consumption options. One way to achieve this is to make sustainability recognizable and tangible as a quality of a broad spectrum of consumer goods. A sustainability declaration can accomplish far more than just another visual label: companies can recognize consumers' cultural contexts and act in a targeted way, and the manifold possibilities of digital media can be helpful here. The present work proposes doing all this with a constant view to the lived reality on both the supplier and the consumer side.
Autonomous moving systems require very detailed information about their environment and potential colliding objects, and are therefore equipped with high-resolution sensors. These sensors can generate more than one detection per object per time step. This adds complexity for the target tracking algorithm, since standard tracking filters assume that an object generates at most one detection per time step. New methods for data association and system state filtering are therefore required.
As new data association methods, this thesis proposes two different extensions of the Joint Integrated Probabilistic Data Association (JIPDA) filter that assign more than one detection to a track.
The first method is a generalization of the JIPDA that assigns a variable number of measurements to each track based on predefined statistical models; it is called Multi-Detection Joint Integrated Probabilistic Data Association (MD-JIPDA).
Since this scheme suffers from an exponential increase in association hypotheses, a new approximation scheme is also presented. The second method is an extension for the special case in which the number and locations of measurements are known a priori. In preparation for this method, a new notation and computation scheme for the standard Joint Integrated Probabilistic Data Association is outlined, which also enables the derivation of a new fast approximation scheme called the balanced permanent-JIPDA.
For state filtering, two different concepts are applied: the Random Matrix framework and Measurement Generating Points. For the Random Matrix framework, an alternative prediction method is first proposed to account for kinematic state changes in the extension state prediction as well. Second, various update methods are investigated to account for the polar-to-Cartesian noise transformation problem. The filtering concepts are combined with the new MD-JIPDA and their characteristics analyzed in various Monte Carlo simulations.
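The single-target probabilistic data association (PDA) update, the building block that JIPDA generalizes, can be sketched as follows; the clutter and detection parameters are illustrative, and the gating and track-existence probabilities of the full JIPDA are omitted:

```python
import numpy as np

def pda_update(x, P, zs, H, R, P_D=0.9, clutter_density=0.1):
    """PDA: update state x, covariance P with several candidate measurements zs."""
    S = H @ P @ H.T + R                                # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)                     # Kalman gain
    nus = [z - H @ x for z in zs]                      # innovations
    norm = 1 / np.sqrt(np.linalg.det(2 * np.pi * S))
    ls = [P_D * norm * np.exp(-0.5 * nu @ np.linalg.inv(S) @ nu) for nu in nus]
    b0 = (1 - P_D) * clutter_density                   # "all clutter" hypothesis
    betas = np.array([b0] + ls); betas /= betas.sum()  # association probabilities
    nu_c = sum(b * nu for b, nu in zip(betas[1:], nus))          # combined innovation
    x_new = x + K @ nu_c
    spread = sum(b * np.outer(nu, nu) for b, nu in zip(betas[1:], nus)) \
             - np.outer(nu_c, nu_c)                    # spread-of-innovations term
    P_c = P - K @ S @ K.T                              # standard Kalman update
    P_new = betas[0] * P + (1 - betas[0]) * P_c + K @ spread @ K.T
    return x_new, P_new

x, P = np.zeros(2), np.eye(2)
H, R = np.eye(2), 0.5 * np.eye(2)
zs = [np.array([0.3, -0.2]), np.array([2.5, 2.0])]     # one plausible, one clutter-like
print(pda_update(x, P, zs, H, R)[0])
```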
For objects that can be modeled by a finite number of fixed Measurement Generating Points (MGPs), a method to track these objects via a JIPDA filter is also proposed. In this context, a fast track-to-track fusion algorithm is proposed as well and compared against the MGP-JIPDA.
The proposed algorithms are evaluated in two applications in which only radar sensors are used. The first application is a typical automotive scenario, in which a passenger car is equipped with six radar sensors to cover its complete environment.
In this application, the locations of the measurements on an object can be considered stationary, and the object can be assumed to have a rectangular shape. Thus, the MGP-based algorithms are applied here. The filters are evaluated by tracking vehicles, especially on nearside lanes.
The second application covers the tracking of vessels on inland waters. Here, two different kinds of radar systems are applied, but for both sensors a uniform distribution of the measurements over the target's extent can be assumed. Further, the assumption that the targets have an elliptical shape holds, so the Random Matrix framework in combination with the MD-JIPDA is evaluated.
Exemplary test scenarios also illustrate the performance of this tracking algorithm.
In this thesis, the recognition problem and the properties of eigenvalues and eigenvectors of matrices that are strictly sign-regular of a given order, i.e., matrices whose minors of a given order have the same strict sign, are considered. The results are extended to matrices that are sign-regular of a given order, i.e., matrices whose minors of a given order have the same sign or are allowed to vanish. As a generalization, a new type of matrix, called oscillatory of a specific order, is introduced, and the properties of this type are investigated. Also, some applications to dynamic systems are given.
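For illustration, strict sign-regularity of a given order k can be checked directly from the definition by enumerating all k x k minors; this brute-force sketch (not the recognition algorithms of the thesis) is only feasible for small matrices:

```python
import numpy as np
from itertools import combinations

def strictly_sign_regular_of_order(A, k):
    """Check whether all k x k minors of A have the same strict sign."""
    m, n = A.shape
    minors = [np.linalg.det(A[np.ix_(rows, cols)])
              for rows in combinations(range(m), k)
              for cols in combinations(range(n), k)]
    if any(abs(d) < 1e-12 for d in minors):   # a vanishing minor rules out strictness
        return False
    return all(d > 0 for d in minors) or all(d < 0 for d in minors)

A = np.array([[1.0, 2.0, 4.0],
              [2.0, 5.0, 11.0],
              [4.0, 11.0, 25.0]])
for k in (1, 2, 3):
    print(k, strictly_sign_regular_of_order(A, k))   # True, True, False (det A = 0)
```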
Pascal Laube presents machine learning approaches for three key problems of reverse engineering of defective structured surfaces: parametrization of curves and surfaces, geometric primitive classification and inpainting of high-resolution textures. The proposed methods aim to improve the reconstruction quality while further automating the process. The contributions demonstrate that machine learning can be a viable part of the CAD reverse engineering pipeline.
Coordination of knowledge transfer in the service networks of transnational capital goods manufacturers
(2017)
IT compliance in small and medium-sized enterprises (SMEs)
(2023)
Integrity in companies
(2018)
Companies bear responsibility for observing a multitude of values in their business, above all integrity. The book answers the question of what integrity means for companies and how integrity in corporate conduct can be achieved. The author develops a theoretically grounded and practically applicable approach to corporate integrity and provides orientation on how it can be implemented through a wide range of measures within integrity management. Classic compliance approaches are complemented by a values-oriented perspective so that companies can live up to their own specific responsibility.
The development of electromobility as an alternative form of transportation has for some time been a topic studied intensively, not only regionally but worldwide and from the most varied perspectives (technology, environment, economy, energy transition, etc.). The possible positive effect on the environment and on the transition to non-fossil energy sources plays a central role for politics and research in promoting this technology. The present work examines electromobility in Lake Constance tourism. Its aim is to present the potentials for integrating electromobility into Lake Constance tourism. To this end, electromobility in Lake Constance tourism is postulated, understood, and examined as an innovative form of mobility. The diffusion of the innovation is examined against the background of heterogeneous groups of actors in the border triangle of Germany, Austria, and Switzerland (D-A-CH).
A contribution to observer design and sensorless tracking control of translational magnetic actuators
(2020)
Efficient privacy-preserving configurationless service discovery supporting multi-link networks
(2017)
Data is the pollution problem of the information age, and protecting privacy is the environmental challenge — this quotation from Bruce Schneier laconically illustrates the importance of protecting privacy. Protecting privacy — as well as protecting our planet — is fundamental for humankind. Privacy is a basic human right, stated in the 12th article of the United Nations' Universal Declaration of Human Rights. The necessity to protect human rights is unquestionable. Nothing has ever threatened privacy on a scale comparable to today's interconnected computers. Ranging from small sensors through smartphones and notebooks to large compute clusters, they collect, generate, and evaluate vast amounts of data. Often, this data is distributed via the network, rendering it accessible not only to addressees but also — if not properly secured — to malevolent parties. Like a toxic gas, this data billows through networks and suffocates privacy. This thesis takes on the challenge of protecting privacy in the area of configurationless service discovery. Configurationless service discovery is a basis for user-friendly applications. It brings great benefits, allowing configurationless network setup for various kinds of applications, e.g., for communicating, sharing documents and collaborating, or using infrastructure devices like printers. However, while today's various protocols provide some means of privacy protection, typical configurationless service discovery solutions do not even consider privacy. As configurationless service discovery solutions are ubiquitous and run on almost every smart device, their privacy problems affect almost everyone. The quotation aligns very well with configurationless service discovery. Typically, configurationless service discovery solutions realize configurationlessness by using cleartext multicast messages, literally polluting the local network and suffocating privacy. Messages containing private cleartext data are sent to everyone, even if they are only relevant for a few users. The typical means of mitigating the network pollution caused by multicast per se, regardless of the privacy aspects, is confining multicast messages to a single network link or to the access network of a WiFi access point; institutions often even completely deactivate multicast. While this mitigates the privacy problem, it also strongly scales configurationless service discovery down, either confining it or rendering it completely unusable. In this thesis, we provide an efficient configurationless service discovery framework that protects the users' privacy. It further reduces the network pollution by reducing the number of necessary multicast messages and offers a mode of operation that is completely independent of multicast. By introducing a multicast-independent mode of operation, we also address the problem of the limited range in which services are discoverable. Our framework comprises components for device pairing, privacy-preserving service discovery, and multi-link scaling. These components are independent and, while usable in a completely separated way, are meant to be used as an integrated framework, as they work seamlessly together. Based on our device pairing and privacy-preserving service discovery components, we published IETF Internet drafts specifying a privacy extension for DNS service discovery over multicast DNS, a widely used protocol stack for configurationless service discovery.
As our drafts have already been adopted by the dnssd working group, they are likely to become standards.
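One common pattern in such privacy extensions is to replace the cleartext service-instance name with a per-pairing pseudonym that only a paired peer can recognize; the sketch below uses an HMAC over a rotating nonce and is purely illustrative, not the drafts' actual wire format:

```python
import hmac, hashlib, os, time

def pairing_key():
    """Shared secret established once during device pairing (sketch)."""
    return os.urandom(32)

def private_instance_name(key, nonce):
    """Pseudonym announced instead of the cleartext service-instance name."""
    return hmac.new(key, nonce, hashlib.sha256).hexdigest()[:16]

# Announcer side: publish a fresh nonce and the keyed pseudonym.
key = pairing_key()                                   # known to both paired devices
nonce = int(time.time() // 300).to_bytes(8, "big")    # rotates every 5 minutes
announcement = (nonce, private_instance_name(key, nonce))

# Paired peer: recompute the pseudonym for each pairing key it holds.
nonce_rx, name_rx = announcement
match = hmac.compare_digest(name_rx, private_instance_name(key, nonce_rx))
print("recognized paired service:", match)   # unpaired observers learn nothing
```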
Simon Grimm examines new multi-microphone signal processing strategies that aim to achieve noise reduction and dereverberation. To this end, narrow-band signal enhancement approaches are combined with broadband processing in the form of directivity-based beamforming. Previously introduced formulations of the multichannel Wiener filter rely on the second-order statistics of the speech and noise signals. The author analyzes how additional knowledge about the location of a speaker as well as the microphone arrangement can be used to achieve further noise reduction and dereverberation.
The influence of sleep on human life, including physiological, psychological, and mental aspects, is remarkable. It is therefore essential to apply appropriate therapy in the case of sleep disorders. For this, however, the irregularities must first be recognized, preferably in a way that is convenient for the person concerned. This dissertation, structured as a composition of research articles, presents the development of mathematically based algorithmic principles for a sleep analysis system. The particular focus is on the classification of sleep stages with a minimal set of physiological parameters. In addition, the aspects of using the sleep analysis system as part of more complex healthcare systems are explored. The design of hardware for non-obtrusive measurement of relevant physiological parameters and the use of such systems to detect other sleep disorders, such as sleep apnoea, are also addressed. Multinomial logistic regression was selected as the basis for development as a result of the investigations carried out. Following a methodical procedure, the number of physiological parameters necessary for the classification of sleep stages was successively reduced to two: respiratory and movement signals. These signals can be measured in a contactless way. A prototype implementation of the developed algorithms was used to validate the proposed method, and an evaluation of 19,324 sleep epochs was carried out. The results, with an accuracy of 73 % in the classification of Wake/NREM/REM stages and a Cohen's kappa of 0.44, outperform the state of the art and demonstrate the appropriateness of the selected approach. In the future, this method could enable convenient, cost-effective, and accurate sleep analysis, leading to the detection of sleep disorders at an early stage so that therapy can be initiated as soon as possible, thus improving the general population's health status and quality of life.
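A minimal sketch of the classification core, assuming synthetic respiratory and movement features rather than the thesis's real recordings:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import cohen_kappa_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n = 3000
stages = rng.integers(0, 3, n)                 # 0 = Wake, 1 = NREM, 2 = REM

# Two features per epoch: respiratory variability and movement intensity,
# drawn from stage-dependent distributions (purely synthetic stand-ins).
means = np.array([[1.0, 1.0], [0.2, 0.1], [0.8, 0.2]])   # per-stage feature means
X = means[stages] + rng.normal(0, 0.3, (n, 2))
y = stages

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = LogisticRegression()                     # multinomial softmax for 3 classes
clf.fit(X_tr, y_tr)
pred = clf.predict(X_te)
print("accuracy:", (pred == y_te).mean())
print("Cohen's kappa:", cohen_kappa_score(y_te, pred))
```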
Nowadays, many corporations continuously need to renew their business portfolio strategically in anticipation of changes in the business environment (e.g., technological change). The ongoing boom in founding international start-ups suggests that small entrepreneurial teams are an effective means to develop new businesses. Corporations should be able to benefit from this form of self-organized innovation when entering novel business domains for strategic renewal. However, corporations that establish small entrepreneurial teams (corporate ventures) face two obstacles. First, corporate ventures often fail for reasons that are not well explored. Second, it remains unclear how partial successes may be turned into large successes. Although the key success factors remain ambiguous, there is little hope that corporate ventures will be successful without effective management. Since an empirical model for corporate venture management has not existed so far, this thesis formulates and answers the following problem statement: How can corporate management effectively manage corporate ventures? Building on qualitative and quantitative research methodologies, a model for effective corporate venture management is developed and tested statistically in the German IT consulting industry. The research results reveal some of the essential management principles through which corporate management can systematically increase corporate venture success.
The notion of the "Nature of the Firm" has continuously changed and evolved over recent decades. Overall, business enterprises face growing demands to assume societal responsibility. This is evident in the "2030 Agenda for Sustainable Development", in which the United Nations names business organizations as central actors for achieving the Sustainable Development Goals. It reflects the conviction that certain societal problems can be successfully addressed through novel market-oriented and moral-economic solution models.
This book examines the associated phenomenon of "Corporate Social Entrepreneurship". It is concerned neither with philanthropic activities nor with reducing the negative external effects of existing value chains. Rather, it is about generating positive (external) effects, that is, societal and private value creation, through the development and market introduction of innovative goods, services, and business models that address the societal problems named above.
For many firms, Corporate Social Entrepreneurship represents a change in the practice of economic action and a major organizational challenge that promises significant strategic advantages. These lie not primarily in the contribution to the financial bottom line but in product and business-model innovations as well as in the creation and opening up of new markets.
The present work follows the assumption that the emergence of novel processes of economic organization requires questions about the form of economic organization, that is, about the nature, purpose, and societal relation of the firm, to be posed anew. Against this background, the work focuses on analyzing and describing Corporate Social Entrepreneurship as a process of economic organization and draws conclusions from this for the firm as a form of economic organization. The work is interdisciplinary in orientation and aims to contribute to extending and concretizing governance economics and governance ethics.
Flash memories are non-volatile memory devices. The rapid development of flash technologies leads to higher storage density, but also to higher error rates. This dissertation considers this reliability problem of flash memories and investigates suitable error correction codes, e.g., BCH codes and concatenated codes. First, the flash cells, their functionality, and their error characteristics are explained. Next, the mathematics of the employed algebraic codes is discussed. Subsequently, generalized concatenated codes (GCC) are presented. Compared to the commonly used BCH codes, concatenated codes promise higher code rates and lower implementation complexity. This complexity reduction is achieved by dividing a long code into smaller components, which require smaller Galois field sizes. The algebraic decoding algorithms enable an analytical determination of the block error rate, so very low residual error rates can be guaranteed for flash memories. Besides the complexity reduction, generalized concatenated codes can exploit soft information. Such soft decoding is not practical for long BCH codes. In this dissertation, two soft decoding methods for GCC are presented and analyzed. These methods are based on Chase decoding and the stack algorithm. The latter method explicitly uses the generalized concatenated code structure, in which the component codes are nested subcodes; this property supports the complexity reduction. Moreover, the two-dimensional structure of GCC enables the correction of error patterns with statistical dependencies. One chapter of the thesis demonstrates how the concatenated codes can be used to correct two-dimensional cluster errors. For this purpose, a two-dimensional interleaver is designed with the help of Gaussian integers; this design achieves the correction of cluster errors with the best possible radius. Large parts of this work are dedicated to the question of how the decoding algorithms can be implemented in hardware. The hardware architectures, their throughput, and their logic size are presented for long BCH codes and generalized concatenated codes. The results show that generalized concatenated codes are suitable for error correction in flash memories, especially for three-dimensional NAND memory systems used in industrial applications, where low residual error rates must be guaranteed.
NAND flash memory is widely used for data storage due to low power consumption, high throughput, short random access latency, and high density. The storage density of the NAND flash memory devices increases from one generation to the next, albeit at the expense of storage reliability.
Our objective in this dissertation is to improve the reliability of NAND flash memory at a low hardware implementation cost. We investigate the error characteristics, i.e., the various noise sources of the NAND flash memory. Based on the error behavior at different life-aging stages, we develop offset calibration techniques that minimize the bit error rate (BER).
Furthermore, we introduce data compression to reduce the write amplification effect and to support the error correction code (ECC) unit. In the first scenario, the numerical results show that data compression can reduce wear-out by minimizing the amount of data written to the flash. In the ECC scenario, the compression gain is used to improve the ECC capability. Based on the first scenario, the write amplification effect can be halved for the considered target flash and data model. By combining the ECC and data compression, the NAND flash memory lifetime improves threefold compared with uncompressed data for the same data model.
In order to improve the data reliability of the NAND flash memory, we investigate different ECC schemes based on concatenated codes like product codes, half-product codes, and generalized concatenated codes (GCC). We propose a construction for high-rate GCC for hard-input decoding. ECC based on soft-input decoding can significantly improve the reliability of NAND flash memories. Therefore, we propose a low-complexity soft-input decoding algorithm for high-rate GCC.
This thesis considers bounding functions for multivariate polynomials and rational functions over boxes and simplices. It also considers the synthesis of polynomial Lyapunov functions for establishing the stability of control systems. Bounding the range of functions is an important issue in many areas of mathematics and its applications, such as global optimization, computer-aided geometric design, and robust control.
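The core idea, enclosing the range of a polynomial by the extrema of its Bernstein coefficients, can be sketched for the univariate case on [0, 1]; the polynomial below is only an example:

```python
from math import comb

def bernstein_bounds(a):
    """Range enclosure of p(x) = sum a[j] * x**j on [0, 1] via Bernstein coefficients.
    b_i = sum_{j<=i} (C(i, j) / C(n, j)) * a[j]; then min b_i <= p(x) <= max b_i."""
    n = len(a) - 1
    b = [sum(comb(i, j) / comb(n, j) * a[j] for j in range(i + 1))
         for i in range(n + 1)]
    return min(b), max(b)

# Example: p(x) = x^2 - x, whose true range on [0, 1] is [-0.25, 0].
lo, hi = bernstein_bounds([0.0, -1.0, 1.0])
print(lo, hi)   # enclosure [-0.5, 0.0] contains the true range
```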
Algorithms and Architectures for Cryptography and Source Coding in Non-Volatile Flash Memories
(2021)
In this work, algorithms and architectures for cryptography and source coding are developed that are suitable for many resource-constrained embedded systems such as non-volatile flash memories. A new concept for elliptic curve cryptography is presented that uses an arithmetic over Gaussian integers. Gaussian integers are a subset of the complex numbers with integers as real and imaginary parts. Ordinary modular arithmetic over Gaussian integers is computationally expensive; to reduce the complexity, a new arithmetic based on the Montgomery reduction is presented. For the elliptic curve point multiplication, this arithmetic over Gaussian integers improves the computational efficiency and the resistance against side-channel attacks, and it reduces the memory requirements. Furthermore, an efficient variant of the Lempel-Ziv-Welch (LZW) algorithm for universal lossless data compression is investigated. Instead of one LZW dictionary, this algorithm applies several dictionaries to speed up the encoding process. Two dictionary partitioning techniques are introduced that improve the compression rate and reduce the memory size of this parallel-dictionary LZW algorithm.
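A minimal single-dictionary LZW encoder shows the baseline that the parallel-dictionary variant accelerates; the partitioning into several dictionaries is not shown here:

```python
def lzw_encode(data: bytes):
    """Classic LZW: emit dictionary indices for the longest known prefixes."""
    dictionary = {bytes([i]): i for i in range(256)}   # initial single-byte entries
    w, out = b"", []
    for byte in data:
        wc = w + bytes([byte])
        if wc in dictionary:
            w = wc                                     # extend the current match
        else:
            out.append(dictionary[w])                  # emit code for longest match
            dictionary[wc] = len(dictionary)           # learn the new phrase
            w = bytes([byte])
    if w:
        out.append(dictionary[w])
    return out

codes = lzw_encode(b"abababababab")
print(codes)   # repeated phrases compress to fewer, reusable codes
```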