When designing drying processes for sensitive biological foodstuffs such as fruit or vegetables, energy and time efficiency as well as product quality are gaining ever more importance. All of these are greatly influenced by the drying parameters of the process (e.g. air temperature, air velocity and dew point temperature). In the sterilization of food products, the cooking value is widely used as a cross-link between such parameters. In a similar way, the so-called cumulated thermal load (CTL) was introduced for drying processes. This was possible because most quality changes depend mainly on drying air temperature and drying time. In a first approach, the CTL was therefore defined as the time integral of the surface temperature of the agricultural product. When conducting experiments with mangoes and pineapples, however, it was found that the CTL in its original form had to be adjusted to a more practical one. The definition of the CTL was therefore improved, and the behaviour of the adjusted CTL (CTLad) was investigated during the drying of pineapples and mangoes. On the basis of these experiments and the earlier work on the cooking value, it was found that the CTLad requires further optimization before a great variety of products as well as different quality parameters can be compared.
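The abstract defines the CTL as the time integral of the product's surface temperature. As a minimal sketch of that definition, the integral can be approximated numerically, here with the trapezoidal rule; the sample times and temperatures are invented for illustration.

```python
def cumulated_thermal_load(times_h, surface_temps_c):
    """Approximate CTL = integral of T_surface over time (trapezoidal rule).

    times_h: sample times in hours; surface_temps_c: temperatures in deg C.
    Returns the CTL in deg C * h.
    """
    ctl = 0.0
    for i in range(1, len(times_h)):
        dt = times_h[i] - times_h[i - 1]
        # Average the two endpoint temperatures over each interval.
        ctl += 0.5 * (surface_temps_c[i] + surface_temps_c[i - 1]) * dt
    return ctl

# Constant 60 deg C surface temperature over 8 hours -> 480 deg C * h.
print(cumulated_thermal_load([0, 2, 4, 6, 8], [60, 60, 60, 60, 60]))
```

With non-uniform sampling the same function still applies, since each interval is weighted by its own duration.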
Flooded Edge Gateways
(2019)
The increasing number of internet-connected devices, in particular in the context of the IoT, generates ever-growing amounts of data. Processing and analysing such a continuously growing volume of data in real time on cloud platforms can no longer be guaranteed. Edge computing approaches decentralize parts of the data analysis logic towards the data sources in order to control the data transfer rate to the cloud through pre-processing with predefined quality-of-service parameters. In this paper, we present a solution for preventing overloaded gateways that optimizes the transfer of IoT data through a combination of complex event processing and machine learning. The presented solution is based entirely on open-source technologies and can therefore also be used by smaller companies.
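The gateway pre-processing idea above can be sketched with a toy event filter: the gateway forwards a reading to the cloud only when a simple complex-event pattern fires, here a moving-average threshold. The window size, threshold, and readings are invented stand-ins, not the paper's actual CEP rules.

```python
from collections import deque

def make_gateway_filter(window=3, threshold=5.0):
    """Forward a reading only when the moving average exceeds the threshold."""
    buf = deque(maxlen=window)

    def should_forward(value):
        buf.append(value)
        # Only decide once a full window of readings is available.
        return len(buf) == window and sum(buf) / window > threshold

    return should_forward

f = make_gateway_filter()
readings = [1.0, 2.0, 3.0, 9.0, 9.0, 9.0]
forwarded = [r for r in readings if f(r)]
print(forwarded)  # only the readings after the moving average crosses 5.0
```

In the paper's setting, a learned model would additionally adapt such rules so that the outgoing data rate stays within the gateway's quality-of-service limits.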
With the increased deployment of biometric authentication systems, some security concerns have also arisen. In particular, presentation attacks directed at the capture device pose a severe threat. In order to prevent them, liveness features such as blood flow can be utilised to develop presentation attack detection (PAD) mechanisms. In this context, laser speckle contrast imaging (LSCI) is a technology widely used in biomedical applications to visualise blood flow. We therefore propose a fingerprint PAD method based on textural information extracted from pre-processed LSCI images. Subsequently, a support vector machine is used for classification. In experiments conducted on a database comprising 32 different artefacts, the results show that the proposed approach correctly classifies all bona fide samples. However, the LSCI technology experiences difficulties with thin and transparent overlay attacks.
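The decision stage of such a PAD pipeline can be illustrated with a toy classifier. The paper uses an SVM on LSCI texture features; to keep this sketch dependency-free, a nearest-centroid classifier stands in for the SVM, and the feature values are synthetic (assuming, plausibly but not from the source, that bona fide samples show stronger speckle variation due to blood flow).

```python
def train_centroids(features, labels):
    """Compute per-class mean feature vectors (bona fide vs. attack)."""
    sums, counts = {}, {}
    for x, y in zip(features, labels):
        s = sums.setdefault(y, [0.0] * len(x))
        for i, v in enumerate(x):
            s[i] += v
        counts[y] = counts.get(y, 0) + 1
    return {y: [v / counts[y] for v in s] for y, s in sums.items()}

def classify(x, centroids):
    """Assign x to the class with the nearest centroid (squared L2)."""
    def dist2(c):
        return sum((a - b) ** 2 for a, b in zip(x, c))
    return min(centroids, key=lambda y: dist2(centroids[y]))

# Synthetic two-dimensional texture features.
train_x = [[0.9, 0.8], [0.85, 0.9], [0.1, 0.2], [0.15, 0.1]]
train_y = ["bona_fide", "bona_fide", "attack", "attack"]
model = train_centroids(train_x, train_y)
print(classify([0.88, 0.85], model))
```

An actual implementation would replace the centroid rule with a trained SVM and the synthetic vectors with texture descriptors extracted from the pre-processed LSCI frames.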
In Art. 82, the EU General Data Protection Regulation (GDPR/DSGVO) provides for a claim to compensation for material and non-material damage, but says nothing about how the amount of damages is to be determined. This article critically reviews court practice to date and shows which criteria can be used, specifically in data leakage cases, to determine the amount of damages. From the perspective of data subjects, the question of effective legal enforcement arises; from the perspective of companies, that of prevention and defence against claims.
Online-based business models, such as shopping platforms, have opened up new possibilities for consumers over the last two decades. Apart from basic differences to other distribution channels, customer reviews on such platforms have become a powerful tool, giving consumers an additional source of transparency. Related research has, for the most part, been labelled under the term electronic word-of-mouth (eWOM). An approach providing a theoretical basis for this phenomenon is presented here. The approach is mainly based on work in the field of consumer culture theory (CCT) and on the concept of co-creation. The work of several authors in these streams of research is used to construct a culturally informed resource-based theory, as advocated by Arnould & Thompson and Algesheimer & Gurâu.
This article describes a research project that investigates individual entrepreneurial founders with respect to their tendency to shift decision-making logics, especially during the respective phases of the venture creation process. Prior studies found that founding teams show a hybrid perspective on strategic decision-making: they not only combine causation (planning-based) and effectuation (flexible) logics but also show logic shifts and re-shifts over time. Because founders' social identity shapes early structuring processes, this article argues for eliminating the in-group influences of multi-founder ventures and focusing on individuals in order to make specific assessments of logic shifts and re-shifts. An extensive literature review, a pre-selection test and a qualitative case study design form the empirical body of the paper. The study thus applies a qualitative, process-research approach to investigate shifts in the decision-making logics of individual founders in new venture creation over time.
The aim of this Master's thesis was to characterize the moisture properties of screeds under different climates with the aid of sorption isotherms. The few literature references on sorption isotherms of mineral screeds essentially cover calcium sulphate screeds and standardized cement screeds (without distinguishing the type of cement: Portland cements, blast furnace cements, i.e. CEM I, CEM II, CEM III, etc.), and as a rule only one air temperature (20 °C). The thesis additionally set out to examine the ternary rapid cements that have been on the market for about 20 years and to include the temperatures of 15 °C and 25 °C, which are of practical interest in construction. The effects of the climatic conditions on site (season, humidity, temperature) on the hydration process of the screeds were also investigated. For each binder system, not just one representative but at least two different screeds from different manufacturers were included. In combination with the results of the microstructural investigations (including mercury intrusion porosimetry), it is shown why cement-bound and rapid-cement-bound screeds behave completely differently from calcium sulphate-bound screeds. This different behaviour is also one of the reasons why the moisture content of screeds cannot be assessed with the KRL method. The thesis therefore concludes with a comparison of the material moisture measurements of the "KRL method" with the "CM method", which is customary in the trade and has proven itself in practice for decades.
Methodology
(2019)
Chapter three introduces the methodology of the study and the research design. Its nested, multi-layered methodological concept constitutes one of the major innovations of the book and enables us to map and measure peacebuilding activities of Christian church actors in the conflicts of Mindanao and Maluku (Ambon). The methodological concept draws from Katzenstein’s “analytic eclecticism,” which transcends the rigidity of existing research schools and seeks to fuse elements of several approaches into a new research agenda. The nested, multi-layered analysis of this book seamlessly combines process tracing, discourse analysis, statistical analysis of primary data generated in two field surveys, Qualitative Comparative Analysis (QCA), and statistical regression analysis.
This PhD investigation lies at the intersection of architecture, textile design and interaction design and speculates about sustainable forms of future living, focusing on bionic principles to create alternative lightweight building structures with textiles and digital fabrication techniques. In an interdisciplinary, practice-based design approach, informed by radical case studies on soft architecture from the 1960s to 80s by Archigram, Buckminster Fuller, Cedric Price and Yona Friedman, by critical theory on new materialism (D. Haraway 1997, K. Barad 1998, J. Bennett 2007), and by sociological and philosophical (B. Latour 2005, G. Deleuze & F. Guattari 1987) as well as phenomenological thinkers (L. Malafouris 2005, J. Rancière 2004, B. Massumi 2002, N. Bourriaud 2002, M. Merleau-Ponty 1963), this research investigates the cultural and social rootedness (Verortung) of novel materials and technologies, exploring in-between, prosthetic relations between the body and the environment.
Generative Design Software - How does digitalization change the professional profile of architects?
(2019)
Abstract, poster and presentation
A contribution to observer design and sensorless tracking control of translational magnetic actuators
(2020)
Investigation and presentation of the quality changes of agricultural products during drying
(2019)
The aim of this work was to find optimal drying processes for various agricultural products. For this purpose, the quality criteria of fresh and dried agricultural products were analysed, and the changes caused by the different drying parameters, such as air velocity, dew point temperature, drying temperature and drying time, were presented. A literature review examined the causes and extent of post-harvest losses in industrialized as well as emerging and developing countries. In addition, the agricultural products and their quality-determining constituents are presented. The extraction and analysis methods are also described and explained: high-performance liquid chromatography and ion exclusion chromatography, as well as UV/Vis spectroscopy and polarimetry. Furthermore, during the drying processes, images were taken at defined time intervals with the dryer's integrated camera and examined with specially developed software with regard to colour change and shrinkage of the agricultural products. The experimental results were compiled and verified using statistics software. New diagrams, so-called damage diagrams, were introduced; with their help, optimal drying processes can be identified. A drying temperature of about 60 °C proved optimal for chillies, about 64 °C to 74 °C for potatoes, about 43 °C for pineapples and about 60 °C for mangoes. Optimal dew point temperatures were below about 12 °C or above about 27 °C for chillies, about 30 °C for potatoes, about 14 °C for pineapples and about 20 °C for mangoes. An air velocity of around 1.2 m/s was found to be optimal (potatoes: ~1.2 m/s; pineapples: ~1.2 m/s; mangoes: ~0.9 m/s). The results showed that for each of the four agricultural products the drying temperature had the greatest effect on the degradation of the quality-determining properties. For example, ascorbic acid, total sugar content and the organic acids degraded more strongly with increasing drying temperature. In future, besides the optimal drying conditions, it should also be taken into account that the size, shape and texture of the samples have a decisive influence on stationary drying processes. It is also conceivable to employ non-stationary drying processes, in which the quality-reducing enzymes are first inactivated at high temperatures and drying is then continued at lower temperatures and thus lower thermal load. Care should further be taken not to over-dry products, so that in future drying is carried out only to just below the maximum residual moisture content and not, as in this work, to constant weight.
Engineering and management
(2019)
The reliable supply of energy is an essential prerequisite for the economic success of a country. Questions of sustainability and the reduction of import dependencies pose new tasks that require new approaches. This contribution provides an overview of such dependencies using the example of German electrical power grids integrating renewable energies. Aspects of energy trading and grid stability are linked; stock exchange trading, grid codes and the volatility of the primary energies used are discussed.
Presentation and abstract
Some 165 global experts and specialists from industry and academic institutes met at the 8th Metallization & Interconnection Workshop (MIW2019) that took place from 13 to 14 May 2019 in Konstanz, Germany. Participants from 19 countries debated results of 28 oral and 11 poster presentations.
All presentations are available on www.metallizationworkshop.info as pdf documents. As in previous editions, plenty of room was available for discussions and networking during the two-day program, which included panel and market-place discussions as well as social events (reception, workshop dinner).
These proceedings contain: a summary of the oral and poster presentations, the results of the survey conducted during the workshop, and peer-reviewed papers based on workshop contributions.
This paper summarizes the trends in metallization and interconnection technology as seen by the participants of the 8th Metallization and Interconnection Workshop. In a questionnaire, participants were asked to share their view on the future development of metallization technology, the kind of metal used for front side metallization, and the future development of interconnection technology. Confidence in the continuous improvement of screen-printing technology is reflected in its high expected share, decreasing from 88% in three years to still 70% in ten years. In the view of the participants, the dominant front side metal will be silver, with an expected share of nearly 70% in 2029. Regarding interconnection technologies, the experts expect new technologies to gain significant shares faster: whereas soldering on busbars is expected to dominate in three years with a share of 71%, it is expected to drop to 35% in ten years. Multiwire and shingling technologies are seen to have the highest potential, with expected shares of 33% (multiwire) and 16% (shingling) in ten years.
Summary of the 8th Workshop on Metallization and Interconnection for Crystalline Silicon Solar Cells
(2019)
This article gives a summary of the 8th Metallization and Interconnection workshop and attempts to place each contribution in the appropriate context. The field of metallization and interconnection continues to progress at a very fast pace. Several printing techniques can now achieve linewidths below 20 μm. Screen printing is more than ever the dominating metallization technology in the industry, with finger widths of 45 μm in routine mass production and values below 20 μm in the lab. Plating technology is also being improved, particularly through the development of lower cost patterning techniques. Interconnection technology is changing fast, with introduction in mass production of multiwire and shingled cells technologies. New models and characterization techniques are being introduced to study and understand in detail these new interconnection technologies.
We present an innovative decision support system (DSS) for distribution system operators (DSOs) based on an artificial neural network (ANN). A trained ANN has the ability to recognize problem patterns and to propose solutions that can be implemented directly in real-time grid management. The principal functionality of this ANN-based optimizer has been demonstrated on a simple virtual electrical grid. For this grid, the trained ANN predicted the solution minimizing the total line power dissipation in 98 percent of the cases considered; in 99 percent of the cases, a valid solution in compliance with the specified operating conditions was found. First ANN tests on a more realistic grid, calibrated with household load measurements, revealed a prediction rate between 88 and 90 percent depending on the optimization criteria. This approach promises a faster, more cost-efficient and potentially more secure method of supporting distribution system operators in grid management.
We present an alternative approach to grid management in low voltage grids by the use of artificial intelligence. The developed decision support system is based on an artificial neural network (ANN). Due to the fast reaction time of our system, real time grid management will be possible. Remote controllable switches and tap changers in transformer stations are used to actively manage the grid infrastructure. The algorithm can support the distribution system operators to keep the grid in a safe state at any time. Its functionality is demonstrated by a case study using a virtual test grid. The ANN achieves a prediction rate of around 90% for the different grid management strategies. By considering the four most likely solutions proposed by the ANN, the prediction rate increases to 98.8%, with a 0.1 second increase in the running time of the model.
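The N-best idea mentioned above, where considering the four most likely solutions raises the prediction rate, can be sketched as follows: instead of accepting only the ANN's single top prediction, the k most probable grid configurations are checked against a validity criterion (e.g. a load-flow check) and the best valid one is applied. The configuration names, scores, and validity results below are invented for illustration.

```python
def n_best(scores, is_valid, k=4):
    """Return the highest-scoring valid configuration among the top k."""
    ranked = sorted(scores, key=scores.get, reverse=True)
    for config in ranked[:k]:
        if is_valid(config):
            return config
    return None  # none of the top-k candidates passed the check

# Hypothetical ANN output: probabilities per switch/tap configuration.
scores = {"config_a": 0.52, "config_b": 0.30, "config_c": 0.12, "config_d": 0.06}
# Suppose a load-flow check rejects the top candidate.
valid = {"config_a": False, "config_b": True, "config_c": True, "config_d": True}
print(n_best(scores, valid.get))  # falls back to the second-best candidate
```

The extra cost is only the validity checks on a handful of candidates, which matches the abstract's observation that the running time grows by a fraction of a second.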
The grid optimization tool presented here can propose a solution for controlling the grid operator's equipment in real time in the event of a fault in the grid. This allows the existing grid to be used optimally, and cost-intensive grid expansion in the medium- and low-voltage grid can be reduced or even avoided. The grid optimizer is based on an artificial neural network (ANN). To train the ANN, fault cases were generated based on real generation and load profiles from the CoSSMic project [1]. For each fault case, an optimized grid topology was determined from all possible and sensible grid configurations by means of load flow calculations. By varying the tap changers of the transformers and the positions of all installed switches in the grid, it was calculated how the current flow must be directed so that none of the equipment exceeds its permissible load limits. For a virtual test grid, a trained ANN identified the optimal solution of the respective fault case in 90 percent of the cases. By applying the N-best method, the prediction probability was increased to almost 99 percent.
First-year students in the technical degree programmes of the universities of applied sciences have very heterogeneous prior knowledge not only in mathematics but also in physics. Although these subjects are of great importance for a fundamental understanding of technical processes, the limited time available during the course of study means that instruction in these areas cannot start from zero. For mathematics, the cosh working group therefore compiled a catalogue of minimum requirements, published in 2014. It describes the knowledge and skills that first-year students need to successfully begin a WiMINT degree programme (economics, mathematics, computer science, natural sciences, technology) at a university of applied sciences. A working group of physicists at universities of applied sciences in Baden-Württemberg has since been formed with the aim of creating an analogous catalogue of minimum requirements for physics. The current state of this work is presented here.
A physics lab setup has been developed for engineering students in their first year at university. The so-called LabTeamCoaching helps to improve general lab skills, such as preparing an experiment, writing documentation, using graphs and drawing conclusions. With a flipped-classroom approach, students become more engaged than in our former physics labs, where we applied classical methods. This approach is described, and an overview of our 10 years of experience with the method is given.
One important skill for engineers is the ability to optimize their experiments. On the job, they will often spend far more time designing and improving an experimental setup than running the actual experiment itself. Can this complex task be taught in physics labs? A method for reaching this goal is proposed, and an example is given and discussed.
Fast and reliable acquisition of truth data for document analysis using cyclic suggest algorithms
(2019)
In document analysis, the availability of ground truth data plays a crucial role in the success of a project. This is even more true with the rise of new deep learning methods, which rely heavily on the availability of training data. But even for traditional, hand-crafted algorithms that are not trained on data, reliable test data is important for the improvement and evaluation of the methods. Because ground truth acquisition is expensive and time-consuming, semi-automatic methods are introduced that make use of suggestions coming from document analysis systems. The interaction between the human operator and the automatic analysis algorithms is the key to speeding up the process while improving the quality of the data; the final confirmation of the data always remains with the human operator. This paper demonstrates a use case for the acquisition of truth data in a mail processing system. It shows why a new, extended view on truth data is necessary in the development and engineering of such systems. An overview of the tool and the data handling is given, the advantages in the workflow are shown, and consequences for the construction of analysis algorithms are discussed. It can be shown that the interplay between suggest algorithms and the human operator leads to very fast truth data capturing. The surprising finding is that when multiple suggest algorithms circularly depend on data, they are especially effective in terms of speed and accuracy.
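The cyclic dependency between suggest algorithms can be sketched with a toy loop: each suggester proposes a value for one truth-data field, and every confirmed field is fed back as context so the remaining suggesters become more accurate on the next pass. The address fields and suggester rules here are invented stand-ins, not the paper's actual mail-processing components.

```python
def acquire_truth(fields, suggesters, confirm):
    """Iteratively fill fields, re-running suggesters on the grown context."""
    truth = {}
    while len(truth) < len(fields):
        progress = False
        for field in fields:
            if field in truth:
                continue
            suggestion = suggesters[field](truth)  # uses confirmed context
            if suggestion is not None and confirm(field, suggestion):
                truth[field] = suggestion
                progress = True
        if not progress:  # nothing confirmable left; stop to avoid looping
            break
    return truth

# Toy mail-address example: the city can only be suggested once the
# ZIP code has been confirmed, so a second pass is needed.
suggesters = {
    "zip": lambda ctx: "78462",
    "city": lambda ctx: "Konstanz" if ctx.get("zip") == "78462" else None,
}
result = acquire_truth(["city", "zip"], suggesters, lambda f, s: True)
print(result)
```

In the real system the `confirm` callback is the human operator's keystroke or click, so each cycle of suggestions reduces the manual typing to a confirmation.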
The WITg goes digital
(2019)
Pascal Laube presents machine learning approaches for three key problems of reverse engineering of defective structured surfaces: parametrization of curves and surfaces, geometric primitive classification and inpainting of high-resolution textures. The proposed methods aim to improve the reconstruction quality while further automating the process. The contributions demonstrate that machine learning can be a viable part of the CAD reverse engineering pipeline.
Deep neural networks have become a veritable alternative to classic speaker recognition and clustering methods in recent years. However, while the speech signal clearly is a time series, and despite the body of literature on the benefits of prosodic (suprasegmental) features, identifying voices has usually not been approached with sequence learning methods. Only recently has a recurrent neural network (RNN) been successfully applied to this task, while the use of convolutional neural networks (CNNs) (that are not able to capture arbitrary time dependencies, unlike RNNs) still prevails. In this paper, we show the effectiveness of RNNs for speaker recognition by improving state of the art speaker clustering performance and robustness on the classic TIMIT benchmark. We provide arguments why RNNs are superior by experimentally showing a “sweet spot” of the segment length for successfully capturing prosodic information that has been theoretically predicted in previous work.
We propose a novel end-to-end neural network architecture that, once trained, directly outputs a probabilistic clustering of a batch of input examples in one pass. It estimates a distribution over the number of clusters k, and for each 1≤k≤kmax, a distribution over the individual cluster assignment for each data point. The network is trained in advance in a supervised fashion on separate data to learn grouping by any perceptual similarity criterion based on pairwise labels (same/different group). It can then be applied to different data containing different groups. We demonstrate promising performance on high-dimensional data like images (COIL-100) and speech (TIMIT). We call this "learning to cluster" and show its conceptual difference to deep metric learning, semi-supervised clustering and other related approaches, while having the advantage of performing learnable clustering fully end-to-end.
Ceramics are often used in high-temperature applications; the thermomechanical behaviour and heat resistance of ceramic and refractory materials are therefore important. This material behaviour is described by the thermal stress resistance. Established material tests to determine thermal shock behaviour are complex. The potential of application-related material testing in combination with simulations is described below.