Methodology
(2019)
Chapter three introduces the methodology of the study and the research design. Its nested, multi-layered methodological concept constitutes one of the major innovations of the book and enables us to map and measure peacebuilding activities of Christian church actors in the conflicts of Mindanao and Maluku (Ambon). The methodological concept draws from Katzenstein’s “analytic eclecticism,” which transcends the rigidity of existing research schools and seeks to fuse elements of several approaches into a new research agenda. The nested, multi-layered analysis of this book seamlessly combines process tracing, discourse analysis, statistical analysis of primary data generated in two field surveys, Qualitative Comparative Analysis (QCA), and statistical regression analysis.
This article describes a research project that investigates individual entrepreneurial founders and their tendency to shift decision-making logics, especially across the phases of the venture creation process. Prior studies found that founding teams take a hybrid perspective on strategic decision-making: they not only combine causation (planning-based) and effectuation (flexible) logics but also shift and re-shift between these logics over time. Because founders' social identity shapes early structuring processes, this article argues for eliminating in-group influences of multi-founder ventures and focusing on individuals in order to make specific assessments of logic shifts and re-shifts. An extensive literature review, a pre-selection test and a qualitative case study design form the empirical body of the paper. Accordingly, this study applies a qualitative process research approach to investigate shifts in the decision-making logics of individual founders during new venture creation over time.
The General Data Protection Regulation (GDPR) provides in Art. 82 GDPR a claim for compensation of material and immaterial damages, but without stating how the amount of damages is to be determined. The article critically examines the court practice to date and shows which criteria can be used, especially in data leakage cases, to determine the amount of damages. From the perspective of data subjects, the question of effective legal enforcement arises; from the perspective of companies, the question of prevention and defense against claims.
Flooded Edge Gateways
(2019)
The increasing number of internet-connected devices, particularly in the context of the IoT, generates ever larger amounts of data. Processing and analyzing this continuously growing volume of data in real time on cloud platforms can no longer be guaranteed. Edge computing approaches decentralize parts of the data analysis logic towards the data sources in order to control the data transfer rate to the cloud through pre-processing with predefined quality-of-service parameters. In this paper, we present a solution for preventing overloaded gateways by optimizing the transfer of IoT data through a combination of Complex Event Processing and Machine Learning. The presented solution is based entirely on open-source technologies and can therefore also be used by smaller companies.
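The pre-processing idea can be illustrated with a minimal sketch (not the paper's CEP/ML pipeline, just a hedged stand-in): a windowed aggregation at the gateway forwards one summary value per window instead of every raw reading, which caps the transfer rate to the cloud.

```python
from statistics import mean

def preaggregate(stream, window=10):
    """Window-based pre-processing at the edge: forward one summary
    value per `window` raw readings instead of every reading, reducing
    the transfer rate to the cloud by a factor of `window`.
    Illustrative sketch only; the paper combines CEP and ML."""
    batch = []
    for value in stream:
        batch.append(value)
        if len(batch) == window:
            yield mean(batch)  # summary sent to the cloud
            batch.clear()

readings = range(100)               # hypothetical sensor stream
summaries = list(preaggregate(readings, window=10))
print(len(summaries))               # 10 values forwarded instead of 100
```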
The reliable supply of energy is an essential prerequisite for the economic success of a country. Questions of sustainability and the replacement of import dependencies raise new tasks that require new approaches. This contribution provides an overview of dependencies using the example of German electrical power grids integrating renewable energies. Aspects of energy trading and grid stability are related to each other; stock exchange trading, grid codes and the volatility of the primary energy sources used are discussed.
Engineering and management
(2019)
Generative Design Software - How does digitalization change the professional profile of architects?
(2019)
Abstract, poster and presentation
The grid optimization tool presented here can suggest to the distribution system operator, in real time, a solution for controlling its equipment in the event of a grid fault. This allows the existing grid to be used optimally, and cost-intensive grid expansion in the medium- and low-voltage grid can be reduced or even avoided. The grid optimizer is based on an artificial neural network (ANN). To train the ANN, fault cases were generated based on real generation and load profiles from the CoSSMic project [1]. For each fault case, an optimized grid topology was determined from all possible and reasonable grid configurations by means of load flow calculations. By varying the tap changers of the transformers and the positions of all installed switches in the grid, it was calculated how the current flow must be routed so that none of the equipment exceeds its permissible load limits. For a virtual test grid, a trained ANN identified the optimal solution for the respective fault case in 90 percent of cases. By applying the N-best method, the prediction probability was increased to nearly 99 percent.
We present an alternative approach to grid management in low voltage grids by the use of artificial intelligence. The developed decision support system is based on an artificial neural network (ANN). Due to the fast reaction time of our system, real time grid management will be possible. Remote controllable switches and tap changers in transformer stations are used to actively manage the grid infrastructure. The algorithm can support the distribution system operators to keep the grid in a safe state at any time. Its functionality is demonstrated by a case study using a virtual test grid. The ANN achieves a prediction rate of around 90% for the different grid management strategies. By considering the four most likely solutions proposed by the ANN, the prediction rate increases to 98.8%, with a 0.1 second increase in the running time of the model.
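The N-best selection described above amounts to taking the n highest-scoring grid-management strategies from the ANN's output distribution; a minimal sketch (the score values are invented for illustration):

```python
import numpy as np

def n_best(scores, n=4):
    """Indices of the n highest-scoring grid-management strategies,
    best first ('N-best' selection over the ANN output)."""
    return list(map(int, np.argsort(scores)[::-1][:n]))

# hypothetical softmax output over five candidate strategies
probs = np.array([0.04, 0.60, 0.10, 0.20, 0.06])
print(n_best(probs, 4))  # -> [1, 3, 2, 4]
```

Considering several candidates instead of only the single best one is what lifts the prediction rate from around 90% to 98.8% in the study.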
We present an innovative decision support system (DSS) for distribution system operators (DSO) based on an artificial neural network (ANN). A trained ANN has the ability to recognize problem patterns and to propose solutions that can be implemented directly in real-time grid management. The basic functionality of this ANN-based optimizer has been demonstrated by means of a simple virtual electrical grid. For this grid, the trained ANN predicted the solution minimizing the total line power dissipation in 98 percent of the cases considered. In 99 percent of the cases, a valid solution in compliance with the specified operating conditions was found. First ANN tests on a more realistic grid, calibrated with household load measurements, revealed a prediction rate between 88 and 90 percent depending on the optimization criteria. This approach promises a faster, more cost-efficient and potentially more secure method to support distribution system operators in grid management.
Summary of the 8th Workshop on Metallization and Interconnection for Crystalline Silicon Solar Cells
(2019)
This article gives a summary of the 8th Metallization and Interconnection workshop and attempts to place each contribution in the appropriate context. The field of metallization and interconnection continues to progress at a very fast pace. Several printing techniques can now achieve linewidths below 20 μm. Screen printing is more than ever the dominating metallization technology in the industry, with finger widths of 45 μm in routine mass production and values below 20 μm in the lab. Plating technology is also being improved, particularly through the development of lower cost patterning techniques. Interconnection technology is changing fast, with introduction in mass production of multiwire and shingled cells technologies. New models and characterization techniques are being introduced to study and understand in detail these new interconnection technologies.
Presentation and abstract
When a country grants preferential tariffs to another, either reciprocally in a free trade agreement (FTA) or unilaterally, rules of origin (RoOs) are defined to determine whether a product is eligible for preferential treatment. RoOs exist to prevent exports from third countries from entering through the member with the lowest tariff (trade deflection). However, RoOs distort exporters' sourcing decisions and burden them with red tape. Using a global data set, we show that, for 86% of all bilateral product-level comparisons within FTAs, trade deflection is not profitable because external tariffs are rather similar and transportation costs are non-negligible; in the case of unilateral trade preferences extended by rich countries to poor ones, that ratio is a striking 98%. The pervasive and unconditional use of RoOs is, therefore, hard to rationalize.
Fast and reliable acquisition of truth data for document analysis using cyclic suggest algorithms
(2019)
In document analysis, the availability of ground truth data plays a crucial role in the success of a project. This is even more true with the rise of new deep learning methods, which heavily rely on the availability of training data. But even for traditional, hand-crafted algorithms that are not trained on data, reliable test data is important for the improvement and evaluation of the methods. Because ground truth acquisition is expensive and time-consuming, semi-automatic methods are introduced which make use of suggestions coming from document analysis systems. The interaction between the human operator and the automatic analysis algorithms is the key to speeding up the process while improving the quality of the data. The final confirmation of the data always remains with the human operator. This paper demonstrates a use case for the acquisition of truth data in a mail processing system. It shows why a new, extended view on truth data is necessary in the development and engineering of such systems. An overview of the tool and the data handling is given, the advantages in the workflow are shown, and consequences for the construction of analysis algorithms are discussed. It can be shown that the interplay between suggest algorithms and the human operator leads to very fast truth data capturing. The surprising finding is that if multiple suggest algorithms circularly depend on data, they are especially effective in terms of speed and accuracy.
The first part of this work shows the development and application of a new material system using high-strength duplex stainless steel wires as net material with environmentally compatible antifouling properties for off-shore fish farm cages. Current net materials made from textiles (polyamide) shall be partially replaced by high-strength duplex stainless steel in order to obtain a more environmentally compatible system that withstands the more severe mechanical loads (waves, storms, predators such as sharks and seals). A new antifouling strategy shall resolve current issues by reducing ecological damage (e.g. from copper disposal), lowering maintenance costs (e.g. for cleaning) and improving durability.
High-strength steel wires are also widely used in geological protection systems, for example for rockfall protection or slope stabilisation. Normally, hot-dip galvanised carbon steel is used in this case. But in highly corrosive environments such as coastal areas, volcanic areas or mines, other solutions with a high corrosion resistance and sufficient mechanical properties are necessary. Protection systems made of high-strength duplex stainless steel wires enable a significantly longer service life of the protection systems and therefore a higher level of security.
Pitting susceptibility of metastable austenitic stainless steels as a function of surface conditions
(2019)
The influence of surface roughness and local defects on the pitting susceptibility of type 304 (UNS S30400) and type 301 (UNS S30100) stainless steels in chloride solution was investigated. Because the mechanical properties can be regarded as decisive for the achievable surface quality, different properties of the base material were obtained by cold rolling the metastable austenites before the surfaces were finished. For this purpose, the surfaces were treated with different grinding parameters to generate different surface conditions and different defects. Different standardised surface finishes were used as a reference.
By using and comparing different methods for the characterization of surface roughness and surface texture, it is possible to find a relationship between the quantity and characteristics of local defects on the one hand and pitting susceptibility on the other. For the machining parameters used, a ranking of the factors influencing the achieved corrosion resistance could be determined.
The automated application of software-based solutions for estimating the pitting susceptibility of machined surfaces and components will be discussed using concrete examples.
Pitting susceptibility of metastable austenitic stainless steels as a function of surface conditions
(2019)
Technical presentation at the 10th International European Stainless Steel Conference and 6th European Duplex Stainless Steel Conference (ESSC & DUPLEX 2019), 30.09.–02.10.2019, Vienna, Austria
Technical presentation at the Swiss Tribology Symposium, 12.11.2019, Hightechzentrum Aargau, Brugg, Switzerland
Mechanical properties after tensile testing were calculated and experimentally determined for Tempcore-processed rebar for the bar core, the bar surface and the whole bar cross-section. On the basis of experiments and simulation it was shown that the deformation in the core and surface areas of a bar is equal, and that the influence of the structural parameters in the core area is therefore principally decisive for the initiation of neck formation in the surface area. The results showed that the resistance to fracture of the martensitic surface layer has a rather smaller effect on the bar properties overall than previous investigations suggested. It is concluded that improving the quality of the core structure can help to lower the brittleness of the whole bar. It was also shown that the techniques used provide good agreement between the obtained results and experimental data. Therefore, the additivity rule for structural components can be used successfully to determine the parameters of the whole bar, taking into account the thickness of the surface layer, which can easily be measured using a hardness sensor. This will considerably simplify practical quality control of products.
Jahresbericht 2019
(2019)
This thesis deals with the problem of tracking multiple extended objects. This tracking problem occurs, for instance, when a car equipped with sensors drives on the road and detects multiple other cars in front of it. When the setup between the sensor and the other cars is such that multiple measurements are created by each single car, the cars are called extended objects. This occurs in real-world scenarios, mainly with the use of high-resolution sensors in near-field applications. In such a near-field scenario, a single object occupies several resolution cells of the sensor, so that multiple measurements are generated per scan. The measurements are additionally superimposed by the sensor's noise. Besides the object-generated measurements, false alarms occur that are not caused by any object, and in some sensor scans single objects may be missed, so that they do not generate any measurements.
To handle these scenarios, object tracking filters are needed that process the sensor measurements in order to obtain a stable and accurate estimate of the objects in each sensor scan. The scope of this thesis is to implement such a tracking filter for extended objects, i.e. a filter that estimates their positions and extents. In this context, the topic of measurement partitioning arises, a pre-processing of the measurement data. Partitioning puts the measurements that were likely generated by one object into one cluster, also called a cell. The obtained cells are then processed by the tracking filter in the estimation process. The partitioning of the measurement data is crucial for the performance of the tracking filter, because insufficient partitioning leads to poor tracking performance, i.e. inaccurate object estimates.
In this thesis, a Gaussian inverse Wishart Probability Hypothesis Density (GIW-PHD) filter was implemented to handle the multiple extended object tracking problem. Within this filter framework, the set of objects is modelled as a Random Finite Set (RFS) and the objects' extents as random matrices (RM). The partitioning methods used to cluster the measurement data are existing ones as well as a new approach based on likelihood sampling. The applied classical heuristic methods are Distance Partitioning (DP) and Sub-Partitioning (SP), whereas the proposed likelihood-based approach is called Stochastic Partitioning (StP). The latter was developed in this thesis based on the Stochastic Optimisation approach by Granström et al. An implementation, including the StP method and its integration into the filter framework, is provided within this thesis.
The implementations using the different partitioning methods were tested on simulated random multi-object scenarios and in a fixed parallel tracking scenario using Monte Carlo methods. Furthermore, a runtime analysis provides insight into the computational effort of the different partitioning methods. The results show that the StP method outperforms the classical partitioning methods in scenarios where the objects move spatially close to each other: the filter using StP performs more stably and delivers more accurate estimates. However, this advantage comes at a higher computational cost compared to the classical heuristic partitioning methods.
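Distance Partitioning, the simplest of the methods mentioned above, groups measurements that are connected by pairwise distances below a gating threshold; a minimal sketch of this criterion (the threshold value is invented for illustration):

```python
import numpy as np

def distance_partition(Z, threshold):
    """Cluster measurements (rows of Z): two measurements end up in the
    same cell if a chain of pairwise distances below `threshold`
    connects them (the Distance Partitioning criterion)."""
    n = len(Z)
    parent = list(range(n))  # union-find forest over measurement indices

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path compression
            i = parent[i]
        return i

    for i in range(n):
        for j in range(i + 1, n):
            if np.linalg.norm(Z[i] - Z[j]) < threshold:
                parent[find(i)] = find(j)  # merge the two cells

    cells = {}
    for i in range(n):
        cells.setdefault(find(i), []).append(i)
    return list(cells.values())

# two close measurements and one far-away one -> two cells
Z = np.array([[0.0, 0.0], [0.5, 0.0], [10.0, 10.0]])
print(distance_partition(Z, threshold=1.0))  # [[0, 1], [2]]
```

In the GIW-PHD setting each resulting cell is handled as the measurement set of one hypothesized extended object.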
The goal of the presented project is to develop the concept of home eHealth centers for barrier-free and cross-border telemedicine. AAL technologies are already on the market, but there is still a gap to close before they can be used for ordinary patient needs. The general idea needs to be accompanied by new services, which should be brought together in order to provide full service coverage for the users. Sleep disorders and stress were chosen as predominant conditions for a detailed study within this project because of their widespread influence in the population. The scientific study of available home devices for sleep analysis provided the information necessary to select appropriate devices. The first choice for the project implementation is the EMFIT QS+ device. This equipment provides part of a complete system that a home telemedical hospital can offer, at an adequate level of precision and communication with internal and/or external health services.
First-year students in the technical degree programmes of the universities of applied sciences have very heterogeneous prior knowledge, not only in mathematics but also in physics. Although these subjects are of great importance for a fundamental understanding of technical processes, education in these areas cannot start from zero, given the limited time slots available for them during the course of study. For mathematics, the cosh working group therefore compiled a catalogue of minimum requirements, published in 2014. It describes the knowledge and skills that first-year students need to successfully take up a WiMINT degree programme (economics, mathematics, computer science, natural sciences, technology) at a university of applied sciences. In the meantime, a working group of physicists at universities in Baden-Württemberg has formed with the goal of compiling an analogous catalogue of minimum requirements for physics. Here, the current state of this work is presented.
A physics lab setup has been developed for engineering students in their first year at university. The so-called LabTeamCoaching helps to improve general lab skills, such as preparing an experiment, writing documentation, using graphs and drawing conclusions. By using a flipped classroom approach, students become more involved than in our former physics labs, where classical methods were applied. This approach will be described and an overview of our 10 years of experience with this method will be given.
One important skill for engineers is the ability to optimize their experiments. On the job, they will often spend far more time designing and improving an experimental setup than running the actual experiment itself. Is it possible to teach this complex task in physics labs? A method for reaching this goal is proposed; an example is given and discussed.
The goal of this master's thesis was to characterize the moisture properties of screeds under different climates using sorption isotherms. The few literature references on sorption isotherms of mineral screeds essentially relate to calcium sulphate screeds and standardized cement screeds (without distinguishing the cement type: Portland cements, blast furnace cements, i.e. CEM I, CEM II, CEM III etc.) and usually to only one air temperature (20 °C). The aim of the thesis was to additionally investigate the ternary rapid cements that have been common on the market for about 20 years, and to include the temperatures of 15 °C and 25 °C, which are of practical interest in construction. The effects of the climatic conditions on the construction site (season, humidity, temperature) on the hydration process of the screeds were also investigated. In each case, not just one representative of the various binder systems but at least two different screeds from different manufacturers were included. In combination with the results of the microstructural investigations (including mercury intrusion porosimetry), it is demonstrated why cement-bound and rapid-cement-bound screeds behave completely differently from calcium-sulphate-bound screeds. This different behaviour is also one of the reasons why the moisture content of screeds cannot be assessed with the KRL method. For this reason, a comparison of the "KRL method" for material moisture measurement with the "CM method", which is customary in the trade and has proven itself in practice for decades, concludes the thesis.
In this paper, multivariate polynomials in the Bernstein basis over a box (tensorial Bernstein representation) are considered. A new matrix method for the computation of the polynomial coefficients with respect to the Bernstein basis, the so-called Bernstein coefficients, is presented and compared with existing methods. Also matrix methods for the calculation of the Bernstein coefficients over subboxes generated by subdivision of the original box are proposed. All the methods solely use matrix operations such as multiplication, transposition and reshaping; some of them rely on the bidiagonal factorization of the lower triangular Pascal matrix or the factorization of this matrix by a Toeplitz matrix. In the case that the coefficients of the polynomial are due to uncertainties and can be represented in the form of intervals it is shown that the developed methods can be extended to compute the set of the Bernstein coefficients of all members of the polynomial family.
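For the univariate case over [0, 1], the conversion to Bernstein coefficients is a single multiplication with a lower-triangular matrix; a minimal sketch using the textbook formula b_j = Σ_i (C(j,i)/C(n,i)) a_i (not the paper's factorized or tensorial variants):

```python
import numpy as np
from math import comb

def bernstein_coeffs(a):
    """Bernstein coefficients on [0, 1] of p(x) = sum_i a[i] * x**i.

    Builds the lower-triangular conversion matrix
    L[j, i] = C(j, i) / C(n, i) for i <= j, so that b = L @ a.
    """
    n = len(a) - 1
    L = np.zeros((n + 1, n + 1))
    for j in range(n + 1):
        for i in range(j + 1):
            L[j, i] = comb(j, i) / comb(n, i)
    return L @ np.asarray(a, dtype=float)

# p(x) = x written with degree n = 2: monomial coefficients [0, 1, 0]
b = bernstein_coeffs([0.0, 1.0, 0.0])
print(b)  # [0.  0.5 1. ]
```

The enclosure property min(b) ≤ p(x) ≤ max(b) on [0, 1] is what makes these coefficients useful for range bounding; the paper's matrix methods generalize this computation to boxes, subboxes and interval coefficients.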
The occurrence of wheelset torsional vibrations cannot be entirely prevented, owing to the increasing utilization of adhesion in the wheel-rail contact. While analytical investigations show that exceeding a certain damping value completely avoids the occurrence of wheelset torsional vibrations, simulations show that the vibration amplitude of the dynamic torsional moment is limited at higher sliding velocities, as a consequence of the decreasing magnitude of the adhesion gradient and a certain amount of drivetrain damping. In contrast to the analytical formula (17), a maximum dynamic torsional moment can thus be calculated.
A holistic consideration of the overall system, consisting of the adhesion between wheel and rail, the wheelset including its bearings, and the drivetrain, allows various factors influencing wheelset torsional vibrations to be investigated. The associated calculation models can no longer be solved analytically because of the number of masses and degrees of freedom. By using multibody dynamics software, the influencing factors can be identified and their influence on the torsional tendency of the system and on the dynamic torsional moment can be quantified.
We have introduced in this paper new variants of two methods for projecting Supply and Use Tables that are based on a distance minimisation approach (SUT-RAS) and the Leontief model (SUT-EURO). We have also compared them under similar and comparable exogenous information, i.e.: with and without exogenous industry output, and with explicit consideration of taxes less subsidies on products. We have conducted an empirical assessment of all of these methods against a set of annual tables between 2000 and 2005 for Austria, Belgium, Spain and Italy. From the empirical assessment, we obtained three main conclusions: (a) the use of extra information (i.e. industry output) generally improves projected estimates in both methods; (b) whenever industry output is available, the SUT-RAS method should be used and otherwise the SUT-EURO should be used instead; and (c) the total industry output is best estimated by the SUT-EURO method when this is not available.
This paper analyses international cooperation in research and development on alternative energy production. To this end, patents of this technological domain registered at the European Patent Office from 1997 until 2016 are analysed. International cooperation is considered to take place when patents involve co-assignment or co-inventorship spanning two or more different countries. Generally, international R&D cooperation in alternative energy production tends to increase over time. In total, 2234 co-patents from 87 countries are identified. The cooperative relationships between countries are examined through social network analysis. The most significant states in the network are the United States of America and Germany. Innovative clusters and strong partnerships are identified. The alternative energy technologies that involve international cooperation most extensively are harnessing energy from man-made waste, solar energy and bio-fuels. The paper clarifies which countries are cooperating with each other and for what purpose. The findings can be used for establishing R&D strategies in the domain of alternative energy production.
The Lempel-Ziv-Welch (LZW) algorithm is an important dictionary-based data compression approach that is used in many communication and storage systems. The parallel dictionary LZW (PDLZW) algorithm speeds up the LZW encoding by using multiple dictionaries. The PDLZW algorithm applies different dictionaries to store strings of different lengths, where each dictionary stores only strings of the same length. This simplifies the parallel search in the dictionaries for hardware implementations. The compression gain of the PDLZW depends on the partitioning of the address space, i.e. on the sizes of the parallel dictionaries. However, there is no universal partitioning that is optimal for all data sources. This work proposes an address space partitioning technique that optimizes the compression rate of the PDLZW using a Markov model for the data. Numerical results for address spaces with 512, 1024, and 2048 entries demonstrate that the proposed partitioning improves the performance of the PDLZW compared with the original proposal.
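As background, plain LZW with a single dictionary can be sketched as follows; the PDLZW discussed above splits exactly this dictionary's address space into per-length dictionaries searched in parallel (this sketch is generic LZW, not the proposed partitioning):

```python
def lzw_encode(data: bytes):
    """Plain single-dictionary LZW encoder.

    New entries are the longest matched string extended by one byte;
    the PDLZW stores such strings in separate dictionaries per string
    length, which is what makes the hardware search parallelizable."""
    dictionary = {bytes([i]): i for i in range(256)}  # all single bytes
    w = b""
    out = []
    for byte in data:
        wc = w + bytes([byte])
        if wc in dictionary:
            w = wc                       # keep extending the match
        else:
            out.append(dictionary[w])    # emit code for longest match
            dictionary[wc] = len(dictionary)  # next free address
            w = bytes([byte])
    if w:
        out.append(dictionary[w])
    return out

print(lzw_encode(b"ababab"))  # [97, 98, 256, 256]
```

How the code address space (e.g. 512, 1024 or 2048 entries) is split among the parallel dictionaries is precisely the partitioning the paper optimizes with a Markov model of the source.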
Fatigue and drowsiness are responsible for a significant percentage of road traffic accidents. There are several approaches to monitoring a driver's drowsiness, ranging from the driver's steering behavior to analysis of the driver, e.g. eye tracking, blinking, yawning or electrocardiogram (ECG). This paper describes the development of a low-cost ECG sensor to derive heart rate variability (HRV) data for drowsiness detection. The work includes the hardware and the software design. The hardware has been implemented on a printed circuit board (PCB) designed so that the board can be used as an extension shield for an Arduino. The PCB contains a double, inverted ECG channel including low-pass filtering and provides two analog outputs to the Arduino, which combines them and performs the analog-to-digital conversion. The digital ECG signal is transferred to an NVidia embedded PC where the processing takes place, including QRS-complex, heart rate and HRV detection as well as visualization features. The resulting compact sensor provides good results in the extraction of the main ECG parameters. The sensor is being used in a larger framework, where facial-recognition-based drowsiness detection is combined with ECG-based detection to improve the recognition rate under unfavorable light or occlusion conditions.
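The paper does not state which HRV measure is derived from the detected QRS complexes, so purely as an illustrative assumption, one standard time-domain measure (RMSSD over the RR intervals between successive beats) can be sketched:

```python
import math

def rmssd(rr_ms):
    """Root mean square of successive differences of RR intervals (ms),
    a common time-domain HRV measure; low HRV is one indicator used in
    drowsiness studies. The interval values below are made up."""
    diffs = [b - a for a, b in zip(rr_ms, rr_ms[1:])]
    return math.sqrt(sum(d * d for d in diffs) / len(diffs))

print(rmssd([800, 810, 790, 805]))  # RR intervals from a hypothetical QRS detector
```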
Error correction coding (ECC) for optical communication and persistent storage systems requires high-rate codes that enable high data throughput and low residual errors. Recently, different concatenated coding schemes were proposed that are based on binary Bose-Chaudhuri-Hocquenghem (BCH) codes with low error correcting capabilities. Commonly, hardware implementations for BCH decoding are based on the Berlekamp-Massey algorithm (BMA). However, for single, double, and triple error correcting BCH codes, Peterson's algorithm can be more efficient than the BMA. The known hardware architectures of Peterson's algorithm require Galois field inversion. This inversion dominates the hardware complexity and limits the decoding speed. This work proposes an inversion-less version of Peterson's algorithm. Moreover, a decoding architecture is presented that is faster than decoders that employ inversion or the fully parallel BMA, at a comparable circuit size.
This work proposes a suboptimal detection algorithm for generalized multistream spatial modulation. Many suboptimal detection algorithms for spatial modulation use two-stage detection schemes where the set of active antennas is detected in the first stage and the transmitted symbols in the second stage. For multistream spatial modulation with large signal constellations the second detection step typically dominates the detection complexity. With the proposed detection scheme, the modified Gaussian approximation method is used for detecting the antenna pattern. In order to reduce the complexity for detecting the signal points, we propose a combined equalization and list decoding approach. Simulation results demonstrate that the new algorithm achieves near-maximum-likelihood performance with small list sizes. It significantly reduces the complexity when compared with conventional two-stage detection schemes.
The introduction of multi-level cell (MLC) and triple-level cell (TLC) technologies reduced the reliability of flash memories significantly compared with single-level cell (SLC) flash. The reliability of flash memory suffers from various error causes: program/erase cycles, read disturb, and cell-to-cell interference all affect the threshold voltages. With pre-defined fixed read thresholds, such a voltage shift increases the bit error rate (BER). This work proposes a read threshold calibration method that aims at minimizing the BER by adapting the read voltages. The adaptation of the read thresholds is based on the number of errors observed in the codeword protecting a small amount of metadata. Simulations based on flash measurements demonstrate that this method can significantly reduce the BER of TLC memories.
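The calibration idea can be sketched as a simple search: re-read the small metadata codeword at several candidate read-voltage offsets and keep the offset with the fewest observed errors. The callback interface and the error profile below are hypothetical illustrations, not taken from the paper:

```python
def calibrate_threshold(candidate_offsets, count_errors):
    """Pick the read-voltage offset that minimizes the number of bit
    errors observed when decoding the small metadata codeword.

    candidate_offsets: iterable of voltage offsets to try (e.g. in mV).
    count_errors: callback that re-reads the metadata codeword at a
                  given offset and returns the observed error count.
    """
    best_offset, best_errors = None, None
    for off in candidate_offsets:
        errs = count_errors(off)
        if best_errors is None or errs < best_errors:
            best_offset, best_errors = off, errs
    return best_offset, best_errors

# Hypothetical error profile: errors grow with the distance from an
# (unknown) optimal read-voltage offset at +40 mV.
profile = {0: 12, 20: 7, 40: 2, 60: 5, 80: 11}
off, errs = calibrate_threshold(profile, profile.get)
```

Because only the metadata codeword is re-read, the search is cheap compared with re-reading the whole page at every candidate voltage.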
If the process contains a delay (dead time), the Nyquist criterion is well suited to deriving a PI or PID tuning rule, because the delay is taken into account without approximation. The speed of the closed loop is tuned naturally via the crossover frequency, and the goals of robustness and performance are translated into the phase margin.
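For a concrete first-order-plus-dead-time example, the crossover frequency and phase margin can be computed numerically; the dead time L contributes the exact phase -Lω, with no approximation. The process and tuning values below are purely illustrative, not a rule from this work:

```python
import math

# Process G(s) = K*exp(-L*s)/(tau*s + 1) with PI controller
# C(s) = Kp*(1 + 1/(Ti*s)).  Hypothetical values; Ti = tau cancels
# the process pole, a common PI tuning choice.
K, tau, L = 1.0, 5.0, 1.0
Kp, Ti = 2.0, 5.0

def magnitude(w):
    c = Kp * math.sqrt(1.0 + 1.0 / (Ti * w) ** 2)
    g = K / math.sqrt(1.0 + (tau * w) ** 2)
    return c * g

def phase_deg(w):
    c = -math.degrees(math.atan(1.0 / (Ti * w)))          # PI phase
    g = -math.degrees(L * w) - math.degrees(math.atan(tau * w))  # process
    return c + g

# Bisection for the crossover frequency where |C(jw)G(jw)| = 1
lo, hi = 1e-6, 100.0
for _ in range(200):
    mid = 0.5 * (lo + hi)
    if magnitude(mid) > 1.0:
        lo = mid
    else:
        hi = mid
w_c = 0.5 * (lo + hi)
phase_margin = 180.0 + phase_deg(w_c)
```

With these values the loop gain is Kp·K·e^(-Ls)/(Ti·s), so the crossover lies at ω_c = Kp·K/Ti = 0.4 rad/s and the phase margin is about 67°.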
Flash memories are non-volatile memory devices. The rapid development of flash technologies leads to higher storage density, but also to higher error rates. This dissertation addresses this reliability problem of flash memories and investigates suitable error correction codes, e.g. BCH codes and concatenated codes. First, the flash cells, their functionality, and their error characteristics are explained. Next, the mathematics of the employed algebraic codes are discussed. Subsequently, generalized concatenated codes (GCC) are presented. Compared with the commonly used BCH codes, concatenated codes promise higher code rates and lower implementation complexity. This complexity reduction is achieved by dividing a long code into smaller components, which require smaller Galois field sizes. The algebraic decoding algorithms enable an analytical determination of the block error rate, so that very low residual error rates can be guaranteed for flash memories. Besides the complexity reduction, generalized concatenated codes can exploit soft information. Such soft decoding is not practicable for long BCH codes. In this dissertation, two soft decoding methods for GCC are presented and analyzed. These methods are based on Chase decoding and the stack algorithm. The latter explicitly uses the generalized concatenated code structure, in which the component codes are nested subcodes; this property supports the complexity reduction. Moreover, the two-dimensional structure of GCC enables the correction of error patterns with statistical dependencies. One chapter of the thesis demonstrates how the concatenated codes can be used to correct two-dimensional cluster errors. For this purpose, a two-dimensional interleaver is designed with the help of Gaussian integers. This design achieves the correction of cluster errors with the best possible radius. Large parts of this work are dedicated to the question of how the decoding algorithms can be implemented in hardware.
These hardware architectures, their throughput, and their logic size are presented for long BCH codes and generalized concatenated codes. The results show that generalized concatenated codes are suitable for error correction in flash memories, especially for three-dimensional NAND memory systems used in industrial applications, where low residual error rates must be guaranteed.
Autism spectrum disorders (ASD) in children are often diagnosed too late, and managing this chronic condition is difficult. The presented approach allows children to be treated in their familiar home environment and attempts to work out the relationships between sleep and behavior. The insights gained are intended to improve the patients' quality of life and to support the parents. The necessary infrastructural support is provided by medical professionals, who can rely on a web-based service that accompanies all processes (diagnostics, data acquisition and recording, training, etc.). The anonymized data are stored centrally in a diagnostic system and can thus be used for future treatment strategies. The comprehensive solution builds on central elements of smart homes and AAL.
Owing to its scenic attractiveness, the Lake Constance region is one of the most visited destinations in German-speaking countries. Scenic attractiveness and so-called landscape stereotypes also play a decisive role in tourism marketing: tour operators reproduce supra-individual landscape concepts and establish mental geographies that ultimately influence the choice of destinations. A growing trend in tourism is the emergence of creative narratives in tourism marketing and of tourism offers induced by creative companies. By means of a discourse-analytical investigation, whose theoretical and conceptual frame of reference is the hegemony and discourse theory of Laclau and Mouffe (1985), recurring landscape stereotypes are identified in tourist promotional material for the destination Bodensee. Based on these results, as well as on expert interviews with regional tourism stakeholders, the creative-economy potential for regional tourism marketing is discussed. The investigation shows that this potential is currently not being exhausted. At the same time, creative tourism can help a rural region such as Lake Constance to position itself as an alternative to city tourism, while also addressing the lucrative 60-plus target group.
Arbeitsrecht für Dummies
(2019)
Compliance im Personalwesen
(2019)
A company's success depends not only on qualified employees but, to a large extent, also on motivated, reliable employees of integrity. Many compliance risks arise from misconduct by a company's own staff. Such risks can easily be minimized by not hiring or promoting, in the first place, people who have committed criminal offences in the past or whose reliability and integrity are in doubt. However, the situation is not always so clear-cut. It is therefore important for companies to take compliance into account and to integrate it into human resources management and HR processes.
This paper presents the integration of a spline-based extension model into a probability hypothesis density (PHD) filter for extended targets. Using this filter, the position and extension of each object as well as the number of present objects can be estimated jointly. To this end, the spline extension model and the PHD filter are combined in a Gaussian mixture (GM) implementation. Simulation results using artificial laser measurements are used to evaluate the performance of the presented filter. Finally, the results are illustrated and discussed.
Corporate entrepreneurship (CE) is experiencing continuously increasing interest from scholars and practitioners. One reason for this seems to be rooted in the organizational structures of established companies, which are cumbersome for implementing organizational agility and for developing radical innovations. In view of advancing digitalization, however, exactly this is required in order to be successful in the long term. CE is a promising managerial tool that offers a wide range of options to pursue the creation of new businesses and to support companies' transformation in order to adapt to changes in the environment. Even though CE offers a broad range of opportunities, its effective management is a challenge. One reason for this is the ambiguity concerning the differences between the various CE forms and the objectives that can be achieved by them. This study, which is based on 13 in-depth interviews in eight high-tech companies, contributes to a better understanding of CE by offering a first harmonized set of CE objectives that is suitable for comparing and differentiating across the different forms. In addition, three CE types, offering a new perspective on how to differentiate CE forms, are identified and yield implications for a more effective management.
Business coaching is believed to effectively improve the survival and success chances of new technology-based firms (NTBFs). However, little empirical evidence on the support measure's effectiveness is available. Therefore, a pragmatic two-armed randomized controlled trial (RCT) to test the effect of tactical business coaching on NTBF survival capabilities was designed and, for the most part, carried out. However, due to a lower-than-expected sample size and great attrition between groups, the RCT reveals deviations from the trial design that impede a thorough data assessment. Based on the data given, a first analysis does not reveal significant differences in survival capability between the two groups. Thus, to provide guidance for future RCTs in business contexts, lessons learned are drawn about how to deal with trickle samples and with experiment constellations in which third parties carry out the intervention.
Corporate entrepreneurship (CE) has become the new paradigm for organizations to cope with the accelerated development of innovations. Established organizations in particular therefore increasingly implement CE activities, even in combination. Scholars point out that a coordinated portfolio of CE activities could yield synergies and thus higher value for the organization, and they call for more scientific examination. This literature review aims to improve the understanding of the combined and coordinated use of CE activities as well as of the resulting synergies. The results show that only very few studies have addressed a combination and/or coordination of CE activities with respect to the creation of additional value, and none with empirical analyses. Nevertheless, five categories of direct and indirect synergies could be derived. Discussing the results as well as the heterogeneous use of terminology and concepts, this paper concludes with a research agenda for future analyses.
The organizational capability to adapt to fast and radical changes of market parameters has become a prerequisite for companies' long-term survival. In this context, organizational ambidexterity has gained much attention in research and practice. It is the capability to develop new businesses (exploration) while simultaneously optimizing the existing core businesses (exploitation). Established companies face several challenges in achieving this capability, as the underlying learning modes of exploration and exploitation are mutually incompatible. One way to address these challenges is to separate the exploration-oriented part from the core organization. Corporate venturing has been widely recognized as one tool to create these dual structures for developing new businesses based on discontinuous innovation. Recently, new corporate venturing forms have emerged in practice. This growing number of different forms has led to new applications of corporate venturing that go beyond the pure development of new businesses, toward supporting the entrepreneurial transformation of companies. This study aims to answer how different corporate venturing forms contribute to the strategic renewal of established companies. For this purpose, qualitative research methods are used to analyze data from 17 interviews conducted in two German high-tech companies. The study provides empirical evidence in the field of corporate venturing by uncovering new insights about the different transformational effects of corporate venturing initiatives on the core organization. It further reveals that corporate venturing forms can be classified into two categories according to their respective level of entrepreneurship and frequency of execution. Both categories exhibit different transformational effects and can be understood as complementary to each other.
Entrepreneurial employees
(2019)
Volatile markets and accelerating innovation cycles progressively force established companies to adopt alternative innovation strategies such as entrepreneurship. Due to the key role entrepreneurial employees play in strengthening a company's abilities for innovation and change, various concepts have emerged, such as corporate entrepreneurship and intrapreneurship. While the extant literature has increasingly examined specific issues of entrepreneurial employees, an overall view is still lacking. The purpose of this paper is therefore to structure current research on entrepreneurial employees by conducting a broad systematic literature review. The resulting research streams contribute to a clearer justification of future research and are a first step towards a comprehensive research view of intrapreneurship.
设计城市
(2019)
The Lempel–Ziv–Welch (LZW) algorithm is an important dictionary-based data compression approach that is used in many communication and storage systems. The parallel dictionary LZW (PDLZW) algorithm speeds up the LZW encoding by using multiple dictionaries, which simplifies the parallel search in the dictionaries. However, the compression gain of the PDLZW depends on the partitioning of the address space, i.e. on the sizes of the parallel dictionaries. This work proposes an address space partitioning technique that optimises the compression rate of the PDLZW. Numerical results for address spaces with 512, 1024, and 2048 entries demonstrate that the proposed address partitioning improves the performance of the PDLZW compared with the original proposal. These address space sizes are suitable for flash storage systems. Moreover, the PDLZW has relatively high memory requirements, which dominate the costs of a hardware implementation. This work therefore also proposes a recursive dictionary structure and a word partitioning technique that significantly reduce the memory size of the parallel dictionaries.
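For reference, the baseline LZW scheme that PDLZW parallelizes can be sketched in a few lines; the single growing dictionary shown here is what PDLZW splits into several smaller, length-indexed dictionaries searched in parallel (the partitioning itself is the subject of the paper and is not reproduced here):

```python
def lzw_encode(data: bytes):
    """Textbook LZW encoder with a single growing dictionary."""
    dictionary = {bytes([i]): i for i in range(256)}
    next_code = 256
    w = b""
    out = []
    for byte in data:
        wc = w + bytes([byte])
        if wc in dictionary:
            w = wc  # extend the current phrase
        else:
            out.append(dictionary[w])      # emit code for longest match
            dictionary[wc] = next_code     # learn the new phrase
            next_code += 1
            w = bytes([byte])
    if w:
        out.append(dictionary[w])
    return out

codes = lzw_encode(b"abababab")
```

In a PDLZW, the lookup `wc in dictionary` becomes one lookup per phrase-length dictionary, all of which can be performed concurrently in hardware.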
A summary presentation of the results of an empirical study on peace and post-conflict work by church actors in Indonesia (Maluku) and the Philippines (Mindanao). Based on the study, the authors draw conclusions about the practical significance of church peace projects after conflicts.
The business model canvas (BMC) and the lean start-up manifesto (LSM) have been changing both entrepreneurial education and, on the practical side, the mindset for setting up innovative ventures since the burst of the dot-com bubble. However, few empirical insights exist on the business model implementation patterns that distinguish digital from non-digital innovative ventures. Connecting practical management tools to network theory as well as to the theory of organizational learning, this paper investigates evolution patterns of digital and non-digital business models from the deal flow of an innovation intermediary. For this purpose, a multi-dimensional quantitative content analysis research design is applied to the business plans of 242 ventures. The measured strength of transaction relations to customers, suppliers, people, and financiers is combined with performance indicators of the sampled ventures. The results indicate that, in order to succeed, digital ventures iterate their business on the market early and search for investment afterwards. Conversely, non-digital ventures already need financial investment in the early stages to develop a product ready to be tested on the market. In both groups, we found strong evidence that specific evolutionary patterns relate to higher rates of success.
The Burrows–Wheeler transformation (BWT) is a reversible block sorting transform that is an integral part of many data compression algorithms. This work proposes a memory-efficient pipelined decoder for the BWT. In particular, the authors consider the limited context order BWT, which has low memory requirements and enables fast encoding. However, decoding the limited context order BWT is typically much slower than encoding. The proposed decoder pipeline provides a fast inverse BWT by splitting the decoding into several processing stages that are executed in parallel.
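A minimal full-context BWT round trip illustrates the transform and the last-to-first decoding step that a pipelined decoder parallelizes (this textbook version sorts complete rotations; the limited context order variant sorts only bounded prefixes):

```python
def bwt_encode(s: bytes):
    """Full-context BWT via cyclic-rotation sorting (illustration only)."""
    n = len(s)
    rotations = sorted(range(n), key=lambda i: s[i:] + s[:i])
    last = bytes(s[(i - 1) % n] for i in rotations)  # last column
    idx = rotations.index(0)  # row holding the original string
    return last, idx

def bwt_decode(last: bytes, idx: int):
    """Invert the BWT with the standard last-to-first mapping."""
    n = len(last)
    # Stable sort of positions by symbol yields the first-column order
    order = sorted(range(n), key=lambda i: last[i])
    out = bytearray()
    i = idx
    for _ in range(n):
        i = order[i]
        out.append(last[i])
    return bytes(out)

last, idx = bwt_encode(b"banana")
```

The chain of `i = order[i]` lookups is inherently sequential, which is why a naive inverse BWT is slow and why splitting it into parallel pipeline stages pays off.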
Untersuchung und Darstellung der Qualitätsveränderung von Agrarprodukten während der Trocknung
(2019)
The aim of this work was to find optimal drying processes for various agricultural products. To this end, the quality criteria of fresh and dried agricultural products were analyzed, and the changes caused by the different drying parameters, such as air velocity, dew-point temperature, drying temperature, and drying time, were presented. A literature review examined the factors behind post-harvest losses and their extent in industrialized, emerging, and developing countries. In addition, the agricultural products and their quality-determining constituents are introduced. The extraction and analysis methods are also presented and explained: high-performance liquid chromatography and ion-exclusion chromatography, as well as UV/Vis spectroscopy and polarimetry. Furthermore, during the drying processes, images were taken at defined intervals with the dryer's integrated camera and analyzed with specially developed software with regard to the color change and shrinkage of the products. The experimental results were compiled and verified using statistics software. New diagrams, so-called damage diagrams, were introduced; with their help, optimal drying processes can be identified. A drying temperature of ~60 °C proved optimal for chilies, ~64 °C to 74 °C for potatoes, ~43 °C for pineapples, and ~60 °C for mangoes. Dew-point temperatures of ~<12 °C / >27 °C for chilies, ~30 °C for potatoes, ~14 °C for pineapples, and ~20 °C for mangoes were also optimal. An air velocity of around 1.2 m/s (potatoes: ~1.2 m/s; pineapples: ~1.2 m/s; mangoes: ~0.9 m/s) was found to be optimal. The results showed that, for each of the four agricultural products, the drying temperature had the greatest effect on the degradation of the quality-determining properties.
For example, ascorbic acid, the total sugar content, and the organic acids degraded more strongly with increasing drying temperature. In the future, in addition to the optimal drying conditions, it should also be considered that the size, shape, and texture of the samples have a decisive influence on stationary drying processes. It is also conceivable to use non-stationary drying processes, in which the quality-reducing enzymes are first inactivated at high temperatures and the product is then dried at lower temperatures and thus under lower thermal stress. Care should also be taken that products are not over-dried, so that in the future drying proceeds only to just below the maximum residual moisture content rather than to constant weight, as in this work.
This paper summarizes the trends in metallization and interconnection technology as seen by the participants of the 8th Metallization and Interconnection Workshop. In a questionnaire, participants were asked to share their views on the future development of metallization technology, the kind of metal used for front-side metallization, and the future development of interconnection technology. The continuous improvement of screen-printing technology is reflected in its high expected percentage share, decreasing from 88% in three years to still 70% in ten years. In the participants' view, the dominating front-side metal will be silver, with an expected share of nearly 70% in 2029. Regarding interconnection technologies, the workshop experts expect new technologies to gain significant technology shares faster: whereas soldering on busbars is expected to dominate in three years with a share of 71%, it is expected to drop to 35% in ten years. Multiwire and shingling technologies are seen as having the highest potential, with expected shares of 33% (multiwire) and 16% (shingling) in ten years.
Some 165 global experts and specialists from industry and academic institutes met at the 8th Metallization & Interconnection Workshop (MIW2019), which took place from 13 to 14 May 2019 in Konstanz, Germany. Participants from 19 countries debated the results of 28 oral and 11 poster presentations.
All presentations are available on www.metallizationworkshop.info as PDF documents. As in previous editions, ample room was available for discussions and networking during the two-day program, which included panel and marketplace discussions as well as social events (reception, workshop dinner).
These proceedings contain: a summary of the oral and poster presentations, the results of the survey conducted during the workshop, and peer-reviewed papers based on workshop contributions.
This PhD investigation lies at the intersection of architecture, textile design, and interaction design and speculates about sustainable forms of future living, focussing on bionic principles to create alternative lightweight building structures with textiles and digital fabrication techniques. It follows an interdisciplinary, practice-based design approach informed by radical case studies on soft architectures from the 1960s to 80s, such as Archigram, Buckminster Fuller, Cedric Price, or Yona Friedman, and by critical theory on new materialism (D. Haraway 1997, K. Barad 1998, J. Bennett 2007) as well as sociological and philosophical (B. Latour 2005, G. Deleuze and F. Guattari 1987) and phenomenological thinkers (L. Malafouris 2005, J. Rancière 2004, B. Massumi 2002, N. Bourriaud 2002, M. Merleau-Ponty 1963). On this basis, the research investigates the cultural and social rootedness (Verortung) of novel materials and technologies, exploring in-between, prosthetic relations between the body and the environment.
Rheumatoid arthritis is an autoimmune disease that causes chronic inflammation of synovial joints, often resulting in irreversible structural damage. The activity of the disease is evaluated by clinical examinations, laboratory tests, and patient self-assessment. The long-term course of the disease is assessed with radiographs of hands and feet. The evaluation of the X-ray images performed by trained medical staff requires several minutes per patient. We demonstrate that deep convolutional neural networks can be leveraged for a fully automated, fast, and reproducible scoring of X-ray images of patients with rheumatoid arthritis. A comparison of the predictions of different human experts and our deep learning system shows that there is no significant difference in the performance of human experts and our deep learning model.
This work presents a new concept for implementing the elliptic curve point multiplication (PM). The computation is based on a new modular arithmetic over Gaussian integer fields. Gaussian integers are a subset of the complex numbers whose real and imaginary parts are integers. Since Gaussian integer fields are isomorphic to prime fields, this arithmetic is suitable for many elliptic curves. Representing the key by a Gaussian integer expansion is beneficial for reducing the computational complexity and the memory requirements of secure hardware implementations that are robust against attacks. Furthermore, an area-efficient coprocessor design is proposed with an arithmetic unit that enables Montgomery modular arithmetic over Gaussian integers. The proposed architecture and the new arithmetic provide high flexibility, i.e., binary and non-binary key expansions as well as protected and unprotected PM calculations are supported. The proposed coprocessor is a competitive solution for a compact ECC processor suitable for applications in small embedded systems.
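The underlying arithmetic can be sketched as follows: modulo a Gaussian prime π = a + bi with norm p = a² + b², the Gaussian integers form a field with p elements, isomorphic to GF(p). The reduction step divides by π and rounds the quotient to the nearest Gaussian integer. The modulus below is a small illustrative example, not a parameter from the paper:

```python
# Modular arithmetic over Gaussian integers (illustrative sketch).
# A Gaussian integer is represented as an (re, im) tuple of ints.

def g_mul(x, y):
    """(a+bi)(c+di) with integer components."""
    return (x[0] * y[0] - x[1] * y[1], x[0] * y[1] + x[1] * y[0])

def g_mod(z, pi):
    """Reduce z modulo pi by rounding z * conj(pi) / N(pi)."""
    n = pi[0] * pi[0] + pi[1] * pi[1]       # norm N(pi) = a^2 + b^2
    t = g_mul(z, (pi[0], -pi[1]))           # z * conj(pi)
    q = (round(t[0] / n), round(t[1] / n))  # nearest Gaussian integer
    qp = g_mul(q, pi)
    return (z[0] - qp[0], z[1] - qp[1])     # remainder of minimal norm

pi = (2, 1)  # norm 5, so arithmetic mod pi mirrors GF(5)
r = g_mod(g_mul((1, 1), (1, 1)), pi)  # (1+i)^2 = 2i reduced mod 2+i
```

Here 2i ≡ 1 (mod 2 + i), since 2i − 1 = i·(2 + i). The rounding-based reduction keeps the components of all intermediate values small, which is what makes the representation attractive for compact hardware.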
The project task was to modernize the current laboratory experiment so that communication between the experimental setup and the laboratory computer no longer takes place via converter cards, as before, but via EtherCAT and TwinCAT 3.
Installing TwinCAT 3 with its associated extensions and required programs turns out to be very extensive and difficult, as the installation instructions show. There were also many sources of error that were not immediately apparent, such as updating the current MATLAB version. Once the installation is complete, the communication between MATLAB and TwinCAT can be implemented relatively easily.
In the project work, the communication was first verified with several tests and optimizations were made; for example, the travel limit was adjusted. Difficulties arose in operation via MATLAB and when MATLAB crashed: when MATLAB is stopped or crashes, the last value sent is still applied to TwinCAT 3, so the actuator would continue to move. This very dangerous situation would be a serious disadvantage compared with the old communication via a converter card. To guarantee a safe stop, the MATLAB status is monitored via a new TcCOM object using a toggle bit; if the value of the bit no longer changes, the system stops safely.
To enable a comparison with the previous master experiment, the plant was examined with the new communication and a suitable controller was designed for it.
The evaluation of the impulse response and of the spectrum analysis showed identical results when comparing the interfaces, so the experiments of the laboratory course can be carried out without restrictions. Contrary to the predictions of the Beckhoff experts, the controller design yielded very good results, and the communication over the interface showed no problems.
However, limitations appeared in the selectable sampling time, since a sampling time below 2 ms is not possible. A lower sampling time can be set, but the evaluation shows that the interface has problems with sampling times below 2 ms: the computation time becomes significantly longer, and the larger number of measurement points cannot be processed correctly. A controller cannot be implemented under these conditions.
The project work was thus completed successfully, and apart from the laborious installation, the Beckhoff extensions are very reliable and easy to use. The first preliminary investigations were positive, so switching the interface on further laboratory computers can also be considered.