Hot isostatic pressing (HIP) allows the production of components with complex geometry. Generally, a high quality of the components is achieved due to the well-managed composition of the metal powder and the isotropic properties. If a duplex stainless steel is produced, a heat treatment after the HIP process is necessary to remove precipitates such as carbides, nitrides, and intermetallic phases. In a new process, the sintering step is to be combined with the heat treatment; in this case, a high cooling rate is necessary to avoid precipitates in duplex stainless steels. In this work, the influence of the HIP temperature and the wall thickness on corrosion resistance, microstructure, and impact strength was investigated. The results should help to optimize process parameters such as temperature and cooling rate. For the investigation, two HIP temperatures were tested in a classical HIP process step with a defined cooling rate; no additional heat treatment was conducted. The specimens were cut from different sectors of the HIP block. To assess the corrosion resistance, the critical pitting temperature was determined with an electrochemical method according to EN ISO 17864. An impact test was used to determine the impact transition temperature, and metallographic investigations show the microstructure in the different sectors of the HIP block.
The development of home health systems can provide continuous and user-friendly monitoring of key health parameters. This project aims to create a concept for such a system, implement it on a test basis, and evaluate it. Three health areas were selected for this purpose:
Sleep, Stress, and Rehabilitation. Appropriate devices were installed in the homes of test subjects and used by them for two weeks. In addition, relevant questionnaires were completed to obtain a complete picture. Finally, the implemented system was evaluated, and the results of the conducted study showed that home health systems have great potential. However, some points need to be considered to increase the usability of the system and the motivation of the users; among other factors, ease of use of the equipment is of particular importance.
One important skill for engineers is the ability to optimize their experiments. On the job, they will often spend far more time designing and improving an experimental setup than running the actual experiment itself. Is it possible to teach this complex task in physics labs? A method for reaching this goal is proposed, and an example is given and discussed.
Corporate venturing has gained much attention due to the challenges and changes brought about by discontinuous innovations, which seem to be promoted by digitalization. In this context, open innovation has become a promising tool for established companies to strengthen their innovation capabilities. While the external opening of the innovation process has received much attention, the internal opening lacks investigation. In particular, new organizational forms, such as Internal Corporate Accelerators, have not been investigated sufficiently. This study, which is based on 13 interviews from two German tech companies, contributes to a better understanding of this new form of corporate venturing and its effects on organizational renewal.
Strategic renewal and the development of new types of innovation pose special challenges to established small and medium-sized companies. The paper at hand aims to answer the questions of what the underlying mechanisms of these challenges are and which approaches might help to counteract them properly. This case study investigates the strategic renewal process and its corresponding interventions in a high-tech SME over a four-year period. We analyse the findings in relation to existing frameworks for dynamic capabilities and strategic learning and provide new recommendations for practice and future research.
This paper studies suitable models for the identification of nonlinear acoustic systems. A cascaded structure of nonlinear filters is proposed that contains several parallel branches, each consisting of a polynomial function followed by a linear filter for one order of nonlinearity. The second order of nonlinearity is additionally modelled with a parallel branch containing a Volterra filter. These are followed by a long linear FIR filter that is able to model the room acoustics. The model is applied to the identification of a tube power amplifier feeding a guitar loudspeaker cabinet in an acoustic room. The adaptive identification is performed with the normalized least mean square (NLMS) algorithm. Compared with a generalized polynomial Hammerstein (GPH) model, the accuracy in modelling the dedicated real-world system can be improved to a greater extent than by increasing the order of nonlinearity in the GPH model.
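As a minimal sketch of the adaptation scheme mentioned above, the following identifies a plain FIR system with the NLMS algorithm; the 4-tap system, step size, and signal length are illustrative assumptions, not the cascaded nonlinear model from the paper.

```python
import numpy as np

def nlms_identify(x, d, num_taps, mu=0.5, eps=1e-8):
    """Identify an FIR system from input x and desired output d using NLMS."""
    w = np.zeros(num_taps)
    for n in range(num_taps - 1, len(x)):
        u = x[n - num_taps + 1:n + 1][::-1]   # newest sample first
        e = d[n] - w @ u                      # a-priori error
        w += mu * e * u / (eps + u @ u)       # normalized gradient step
    return w

# Identify a known (hypothetical) 4-tap FIR system from white-noise excitation
rng = np.random.default_rng(0)
h = np.array([0.9, -0.4, 0.2, 0.1])
x = rng.standard_normal(5000)
d = np.convolve(x, h)[:len(x)]
w = nlms_identify(x, d, num_taps=4)
```

In a noise-free setting like this, the estimated taps converge to the true system response.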
Autismus-Spektrum-Störungen (ASD) bei Kindern werden häufig zu spät diagnostiziert und die Begleitung der chronischen Krankheit gestaltet sich schwierig. Der vorgestellte Ansatz erlaubt die Behandlung der Kinder in dem bekannten häuslichen Umfeld und versucht die Beziehungen zwischen Schlaf und Verhalten herauszuarbeiten. Die gewonnenen Erkenntnisse sollen die Lebensqualität der Patienten verbessern und den Eltern Hilfestellung geben. Die notwendige infrastrukturelle Unterstützung wird durch medizinisches Fachpersonal geleistet, das auf einen web-basierten Service zurückgreifen kann, der sämtliche Prozesse (Diagnostik, Datenerfassung, -aufzeichnung und Training etc.) begleitet. Die anonymisierten Daten werden in einem Diagnosesystem zentral abgelegt und können so für zukünftige Behandlungsstrategien nutzbar sein. Die umfassende Lösung setzt auf zentrale Elemente von Smart-Homes und AAL auf.
I4 Production
(2017)
Identifikation von Schlaf- und Wachzuständen durch die Auswertung von Atem- und Bewegungssignalen
(2021)
Measuring cardiorespiratory parameters in sleep using non-contact sensors and the ballistocardiography technique has received much attention because the method is low-cost, unobtrusive, and non-invasive. Designing a system that is user-friendly, simple to use, easy to deploy, and less error-prone remains an open challenge due to the complex morphology of the signal. In this work, using four force-sensitive resistor sensors, we conducted a study with four sensor distributions in order to reduce the complexity of the system by identifying the region of interest for heartbeat and respiration measurement. The sensors are deployed under the mattress and attached to the bed frame without any interference with the subjects. The four distributions comprise two linear horizontal, one linear vertical, and one square arrangement, covering the region that influences cardiorespiratory activities. We recruited 4 subjects and acquired data in four regular sleeping positions, each for a duration of 80 seconds. The signal processing was performed using a discrete wavelet transform (bior3.9) with smoothing at level 4 as well as bandpass filtering. The results indicate a mean absolute error of 2.35 for respiration and 4.34 for heartbeat. They suggest that a triangle-shaped arrangement of three sensors is efficient for measuring heartbeat and respiration parameters in all four regular sleeping positions.
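The abstract estimates respiration and heartbeat rates from a combined bed-sensor signal. As a hedged illustration of the band-separation idea (not the wavelet pipeline used in the study), the following sketch recovers the two rates of a synthetic BCG-like signal by picking the dominant FFT peak in a respiration band and a heartbeat band; the sampling rate and signal model are assumptions.

```python
import numpy as np

def dominant_rate(signal, fs, f_lo, f_hi):
    """Estimate the dominant rate (cycles/min) within a frequency band via FFT."""
    spec = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    band = (freqs >= f_lo) & (freqs <= f_hi)
    return freqs[band][np.argmax(spec[band])] * 60.0

fs = 50.0                                   # assumed sampling rate in Hz
t = np.arange(0, 80, 1 / fs)                # 80 s recording, as in the study
resp = np.sin(2 * np.pi * 0.25 * t)         # 15 breaths/min component
heart = 0.3 * np.sin(2 * np.pi * 1.2 * t)   # 72 beats/min component
bcg = resp + heart                          # synthetic combined signal

rr = dominant_rate(bcg, fs, 0.1, 0.6)       # respiration band
hr = dominant_rate(bcg, fs, 0.8, 3.0)       # heartbeat band
```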
Deep neural networks have been successfully applied to problems such as image segmentation, image super-resolution, coloration, and image inpainting. In this work we propose the use of convolutional neural networks (CNN) for image inpainting of large regions in high-resolution textures. Due to limited computational resources, processing high-resolution images with neural networks is still an open problem. Existing methods separate the inpainting of global structure from the transfer of details, which leads to blurry results and a loss of global coherence in the detail transfer step. Based on advances in texture synthesis using CNNs, we propose patch-based image inpainting with a single network topology that is able to optimize for global as well as detail texture statistics. Our method is capable of filling large inpainting regions, often exceeding the quality of comparable methods for high-resolution images (2048x2048 px). For reference patch look-up we propose to use the same summary statistics that are used in the inpainting process.
The detection of anomalous or novel images given a training dataset of only clean reference data (inliers) is an important task in computer vision. We propose a new shallow approach that represents both inlier and outlier images as ensembles of patches, which allows us to effectively detect novelties as mean shifts between reference data and outliers with the Hotelling T2 test. Since a mean shift can only be detected when the outlier ensemble is sufficiently separate from the typical set of the inlier distribution, this typical set acts as a blind spot for novelty detection. We therefore minimize its estimated size as our selection rule for critical hyperparameters, such as the patch size. To showcase the capabilities of our approach, we compare results with classical and deep learning methods on the popular datasets MNIST and CIFAR-10, and demonstrate its real-world applicability in a large-scale industrial inspection scenario.
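A minimal sketch of the mean-shift test at the core of the approach, assuming a two-sample Hotelling T2 statistic on generic patch-feature ensembles; the feature extraction and typical-set criterion from the paper are not reproduced, and the data below are synthetic.

```python
import numpy as np

def hotelling_t2(X, Y, eps=1e-6):
    """Two-sample Hotelling T^2 statistic for a mean shift between ensembles X, Y."""
    n, m = len(X), len(Y)
    diff = X.mean(axis=0) - Y.mean(axis=0)
    # pooled covariance with a small ridge for numerical stability
    S = ((n - 1) * np.cov(X, rowvar=False) + (m - 1) * np.cov(Y, rowvar=False)) / (n + m - 2)
    S += eps * np.eye(S.shape[0])
    return (n * m) / (n + m) * diff @ np.linalg.solve(S, diff)

rng = np.random.default_rng(1)
inliers = rng.standard_normal((200, 5))          # reference patch features
outliers = rng.standard_normal((200, 5)) + 0.5   # mean-shifted novel ensemble
same = rng.standard_normal((200, 5))             # another inlier draw

t2_out = hotelling_t2(inliers, outliers)
t2_in = hotelling_t2(inliers, same)
```

The statistic is large for the shifted ensemble and small for a second inlier draw, which is what makes it usable as a novelty score.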
Digitale Signaturen zum Überprüfen der Integrität von Daten, beispielsweise von Software-Updates, gewinnen zunehmend an Bedeutung. Im Bereich der eingebetteten Systeme kommen derzeit wegen der geringen Komplexität noch überwiegend symmetri-sche Verschlüsselungsverfahren zur Berechnung eines Authentifizierungscodes zum Einsatz. Asym-metrische Kryptosysteme sind rechenaufwendiger, bieten aber mehr Sicherheit, weil der Schlüssel zur Authentifizierung nicht geheim gehalten werden muss. Asymmetrische Signaturverfahren werden typischerweise zweistufig berechnet. Der Schlüssel wird nicht direkt auf die Daten angewendet, sondern auf deren Hash-Wert, der mit Hilfe einer Hash-funktion zuvor berechnet wurde. Zum Einsatz dieser Verfahren in eingebetteten Systemen ist es erforder-lich, dass die Hashfunktion einen hinreichend gro-ßen Datendurchsatz ermöglicht. In diesem Beitrag wird eine effiziente Hardware-Implementierung der SHA-256 Hashfunktion vorgestellt.
In diesem Beitrag wird die Hardware-Implementierung eines Datenkompressionsverfahrens auf einem FPGA vorgestellt. Das Verfahren wurde speziell für Kompression kurzer Datenblöcke in Flash-Speichern entwickelt. Dabei werden Quelldaten mithilfe eines Encoders komprimiert und mit einem Decoder verlustlos dekomprimiert. Durch die Reduktion der Datenrate kann in Flash-Speichern die Übertragungsdauer zum Lesen und Schreiben reduziert werden. Ebenso ist eine Kompression von Nutzdaten sinnvoll, um zusätzliche Redundanzen für einen Fehlerschutz einfügen zu können, ohne den Gesamtspeicherplatzbedarf zu erhöhen.
Recent years have seen the proposal of several different gradient-based optimization methods for training artificial neural networks. Traditional methods include steepest descent with momentum; newer methods are based on per-parameter learning rates and on approximate Newton-step updates. This work contains the results of several experiments comparing different optimization methods. The experiments targeted offline handwriting recognition using hierarchical subsampling networks with recurrent LSTM layers. We present an overview of the optimization methods used, the results that were achieved, and a discussion of why the methods lead to different results.
The Industrial Internet of Things (IIoT) will leverage wireless network technologies to seamlessly integrate Cyber-Physical Systems into existing information systems. In this context, the 6TiSCH architecture, proposed by the IETF, represents the current leading standardization effort to enable timed and reliable data communication within IPv6 networks for industrial applications. In wireless networks, Link Quality Estimation (LQE) is a crucial task for selecting the best routes for data forwarding, regardless of unpredictable time-varying conditions. Although many solutions for LQE have been proposed in the literature, the majority of them are not designed specifically for 6TiSCH networks. In this paper, we analyze the performance of existing LQE strategies on 6TiSCH networks.
First, we run a set of simulations to measure the performance of one existing LQE strategy in 6TiSCH. Our simulations show that such a strategy can result in measurements of low accuracy due to the 6TiSCH default timeslot allocation strategy. Consequently, we propose an extension of the 6TiSCH Minimal Configuration that allocates specific timeslots for the transmission of probing messages to mitigate the problem. The proposed methodology is demonstrated to effectively reduce the LQE error.
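The paper does not spell out the evaluated LQE strategy in code; as an illustrative assumption, a common family of estimators smooths the packet reception ratio (PRR) measured per probing window with an exponentially weighted moving average:

```python
def ewma_lqe(prr_samples, alpha=0.6):
    """Smooth per-window packet-reception-ratio samples with an EWMA."""
    estimate = prr_samples[0]
    history = [estimate]
    for prr in prr_samples[1:]:
        estimate = alpha * estimate + (1 - alpha) * prr
        history.append(estimate)
    return history

# Hypothetical PRR measured per probing window: a deep fade, then recovery
samples = [1.0, 0.9, 0.2, 0.3, 0.8, 1.0]
est = ewma_lqe(samples)
```

The smoothing factor `alpha` trades agility against stability; with sparse probing slots (the problem addressed above), each sample carries more weight and estimation error grows.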
Improving the tribological properties of Stainless Steels by low-temperature surface hardening
(2022)
Corporate Entrepreneurship (CE) has become the new paradigm for organizations to cope with the accelerated development of innovations. Therefore, established organizations in particular increasingly implement CE activities, even in combination. Scholars point out that a coordinated portfolio of CE activities could yield synergies and thus higher value for the organization, and they call for more scientific examination. This literature review aims to improve the understanding of the combined and coordinated use of CE activities as well as of the resulting synergies. The results show that only very few studies have addressed a combination and/or coordination of CE activities with respect to the creation of additional value, though without empirical analyses. Nevertheless, five categories of direct and indirect synergies could be derived. Discussing the results as well as the heterogeneous use of terminology and concepts, this paper concludes with a research agenda for future analyses.
InBetween
(2017)
This PhD investigation lies at the intersection of Architecture, Textile Design, and Interaction Design and speculates about sustainable forms of future living, focussing on bionic principles to create alternative lightweight building structures with textiles and digital fabrication techniques. In an interdisciplinary, practice-based design approach, informed by radical case studies on soft architectures from the 1960s to 80s such as Archigram, Buckminster Fuller, Cedric Price, or Yona Friedman, by critical theory on new materialism (D. Haraway 1997, K. Barad 1998, J. Bennett 2007), and by sociological and philosophical (B. Latour 2005, G. Deleuze & F. Guattari 1987) as well as phenomenological thinkers (L. Malafouris 2005, J. Rancière 2004, B. Massumi 2002, N. Bourriaud 2002, M. Merleau-Ponty 1963), this research investigates the cultural and social rootedness (Verortung) of novel materials and technologies, exploring in-between prosthetic relations between the body and the environment.
Increasing robustness of handwriting recognition using character N-Gram decoding on large lexica
(2016)
Offline handwriting recognition systems often include a decoding step, that is, retrieving the most likely character sequence from the underlying machine learning algorithm. Decoding is sensitive to ranges of weakly predicted characters, caused e.g. by obstructions in the scanned document. We present a new algorithm for robust decoding of handwriting recognizer outputs using character n-grams. Multidimensional hierarchical subsampling artificial neural networks with Long Short-Term Memory cells have been successfully applied to offline handwriting recognition. Output activations from such networks, trained with Connectionist Temporal Classification, can be decoded with several different algorithms in order to retrieve the most likely literal string they represent. We present a new algorithm for decoding the network output while restricting the possible strings to a large lexicon. The index used for this work is an n-gram index; trigrams are used for the experimental comparisons. N-grams are extracted from the network output using a backtracking algorithm, and each n-gram is assigned a mean probability. The decoding result is obtained by intersecting the n-gram hit lists while calculating the total probability for each matched lexicon entry. We conclude with an experimental comparison of different decoding algorithms on a large lexicon.
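A toy sketch of the hit-list idea: an inverted trigram index over the lexicon, with each recognized trigram contributing its mean probability to the matching entries. The tiny lexicon, the probabilities, and the length normalization are illustrative assumptions, not the paper's exact scoring.

```python
from collections import defaultdict

def trigrams(word):
    return {word[i:i + 3] for i in range(len(word) - 2)}

def build_index(lexicon):
    """Inverted index: trigram -> set of lexicon words containing it."""
    index = defaultdict(set)
    for word in lexicon:
        for g in trigrams(word):
            index[g].add(word)
    return index

def decode(recognized_grams, index):
    """Score each matched lexicon entry by summed trigram probability."""
    scores = defaultdict(float)
    for gram, prob in recognized_grams:
        for word in index.get(gram, ()):
            scores[word] += prob
    # normalize by trigram count so long words are not favoured
    return max(scores, key=lambda w: scores[w] / len(trigrams(w)))

lexicon = ["handwriting", "handshake", "recognition"]
index = build_index(lexicon)
# hypothetical trigrams with mean probabilities from the network output
grams = [("han", 0.9), ("and", 0.8), ("wri", 0.7), ("tin", 0.6)]
best = decode(grams, index)
```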
Incremental one-class learning using regularized null-space training for industrial defect detection
(2024)
One-class incremental learning is a special case of class-incremental learning, where only a single novel class is incrementally added to an existing classifier instead of multiple classes. This case is relevant in industrial defect detection scenarios, where novel defects usually appear during operation. Existing rolled-out classifiers must be updated incrementally in this scenario with only a few novel examples. In addition, it is often required that the base classifier must not be altered due to approval and warranty restrictions. While simple finetuning often gives the best performance across old and new classes, it comes with the drawback of potentially losing performance on the base classes (catastrophic forgetting [1]). Simple prototype approaches [2] work without changing existing weights and perform very well when the classes are well separated, but fail dramatically when they are not. In theory, null-space training (NSCL) [3] should retain the base classifier entirely, as parameter updates are restricted to the null space of the network with respect to existing classes. However, as we show, this technique promotes overfitting in the case of one-class incremental learning. In our experiments, we found that unconstrained weight growth in null space is the underlying issue, leading us to propose a regularization term (R-NSCL) that penalizes the magnitude of amplification. The regularization term is added to the standard classification loss and stabilizes null-space training in the one-class scenario by counteracting overfitting. We test the method's capabilities on two industrial datasets, namely AITEX and MVTec, and compare the performance to state-of-the-art algorithms for class-incremental learning.
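A toy linear sketch of the null-space idea with a magnitude penalty: updates are projected onto the null space of a base-class feature matrix, so base-class outputs never change, while an added L2 term bounds weight growth. The matrix sizes, regularization strength, and update loop are illustrative assumptions, not the R-NSCL training procedure itself.

```python
import numpy as np

def null_space_projector(A, tol=1e-10):
    """Projector onto the null space of A (rows = base-class feature vectors)."""
    _, s, vt = np.linalg.svd(A)
    rank = int(np.sum(s > tol))
    null_basis = vt[rank:].T                 # columns span null(A)
    return null_basis @ null_basis.T

# Hypothetical base-class features (their outputs must stay fixed)
A = np.array([[1.0, 0.0, 0.0, 0.0],
              [0.0, 1.0, 0.0, 0.0]])
P = null_space_projector(A)

w = np.zeros(4)                              # incremental weight update
grad = np.array([0.5, -0.3, 0.8, 0.2])       # gradient from the novel class
lam = 0.1                                    # regularization strength (assumed)
for _ in range(100):
    # project the regularized gradient into null(A) before stepping
    w -= 0.1 * (P @ (grad + lam * w))
```

Because every step lies in null(A), the base-class responses A @ w remain exactly zero; the lam term keeps the null-space components from growing without bound, which is the overfitting issue the regularizer targets.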
Industry 4.0
(2017)
This paper compares the surface morphology of differently finished austenitic stainless steel AISI 316L, also in combination with low-temperature carburization. Milled and tumbled surfaces were analyzed in terms of corrosion resistance and surface morphology. The results of potentiodynamic measurements show that professional grinding operations with SiC and Al2O3 always lead to a better corrosion resistance of low-temperature carburized surfaces compared to the untreated reference in the acidified chloride solution used. The amount of plastic deformation during machining has a big influence on the corrosion resistance of vibratory-ground or tumbled surfaces and has to be kept low for austenitic stainless steels. Due to the high ductility, plastic deformation can lead to the formation of metastable pits that can act as initiation points of corrosion. The formation of metastable pits can be aggravated by low-temperature diffusion processes.
In biomechanics laboratories, the ground reaction force time histories of the footfall of persons are usually measured using a force plate. The accelerations of the floor in which the force plate is embedded have to be limited, as they may influence the accuracy of the force measurements. For the numerical simulation of vibrations induced by humans in biomechanical laboratories, loading scenarios are defined. They include continuous motions of persons (walking, running) as well as jumps, typical for biomechanical investigations on athletes. The modeling of floors has to take into account the influence of floor screed in the case of portable force plates. Criteria for the assessment of the measuring error provoked by floor vibrations are given. As an example, a floor designed to accommodate a force platform in a biomechanical laboratory of the University Hospital in Tübingen, Germany, has been investigated for footfall-induced vibrations. The numerical simulation by a finite element analysis has been validated by field measurements. As a result, the measuring error of the force plate installed in the laboratory is obtained for diverse scenarios.
Exposure to light has a great influence on human beings in their everyday life. Various lighting sources produce light that reaches the human eye and influences the rhythmic release of the hormone melatonin, a sleep-promoting factor.
Since the development of new technologies provides more control over illuminance, this work uses an IoT-based lighting system to set up dim and bright scenarios. A small study was performed on the influence of illuminance on sleep latency. The system consists of different light bulbs, sensors, and a central bridge, which are interconnected like a mesh network. In addition, a mobile app was developed that allows the lighting in various rooms to be adjusted. With the help of a ferroelectret sensor, as applied in sleep monitoring systems, a subject's sleep was monitored. The sensor is placed below the mattress and collects data, which is stored and processed in the cloud or in alternative locations.
The research was conducted on healthy young subjects after they had been exposed to the preconfigured illuminance for at least three hours before bedtime. The results indicate a correlation between sleep onset latency and exposure to different illuminance before bedtime. In the dimmed environment, the subjects fell asleep on average 28% faster than in the brighter environment.
Influence of Temperature on the Corrosion behaviour of Stainless Steels under Tribological Stress
(2018)
Poster presentation
Ingenieure auf die Bühne
(2018)
Healthy sleep is required for sufficient restoration of the human body and brain. Therefore, in the case of sleep disorders, appropriate therapy should be applied in a timely manner, which requires a prompt diagnosis. Traditionally, a sleep diary is part of the diagnosis and therapy monitoring for some sleep disorders, such as cognitive behaviour therapy for insomnia. To automate sleep monitoring and make it more comfortable for users, substituting a sleep diary with a smartwatch measurement could be considered. With the aim of providing accurate results, a study with a total of 30 night recordings was conducted. Objective sleep measurement with a Samsung Galaxy Watch 4 was compared with a subjective approach (sleep diary), evaluating four relevant sleep characteristics: time of falling asleep, wake-up time, sleep efficiency (SE), and total sleep time (TST). The analysis demonstrated that the median difference between the two measurement approaches was 7 minutes for the time of falling asleep and 3 minutes for the wake-up time, which allows substituting the subjective measurement with a smartwatch. The SE was determined with a median difference between the two measurement methods of 5.22%; this result also indicates that substitution is possible. Some single recordings showed a higher variance between the two approaches. Therefore, it can be concluded that substitution provides reliable results primarily in the case of long-term monitoring. The results of the evaluation of the TST measurement do not allow recommending a substitution of the measurement method.
The development of a new product can be accelerated by using an approach called crowdsourcing. Engineers compete and try their best to provide a solution to the product requirement submitted on the online crowdsourcing platform. The one who has submitted the best solution gets a financial reward. This approach has proven to be three times faster than the conventional one. However, the crowdsourcing process is usually not transparent to a new user, and the risk of executing a new project for developing a new product is not easy to calculate [1, 2]. We developed a method, InnoCrowd, to handle this problem, which a new user can apply during the planning of a new product development project. This system uses AI concepts to generate a knowledge base representing histories of successful product development projects. The system uses this knowledge to determine the qualitative and quantitative risks of a new project. This paper describes the new method, the InnoCrowd design, and the results of a validation experiment based on data from a current crowdsourcing platform. Finally, we compare InnoCrowd to related methods and systems in terms of design and benefits.
Innovation Labs
(2021)
Today's increasing pace of change and intense competition place demands on organizations to use a different approach to innovation, going beyond the incremental innovation that is typically developed within the core of the organization. As an option to escape the existing beliefs of the core organization, innovation labs are used to develop more discontinuous innovation. Despite the abundance of these so-called innovation labs in practice, researchers have devoted little effort to scrutinizing the concept and providing managers with a framework for exploiting this form of innovation. In this paper, we aim to perform an empirical investigation and to create a consensus around the concept of innovation labs. To do so, we conducted a multiple case study in large international organizations with a total of 31 interviews of an average length of 70 minutes. We offer a framework by identifying four innovation lab types and consider when each is most appropriate. Furthermore, we highlight the importance for managers and their organizations of aligning the strategic intent with the innovation lab type, as well as the interface between the innovation lab and the core business.
We present an innovative decision support system (DSS) for distribution system operators (DSOs) based on an artificial neural network (ANN). A trained ANN has the ability to recognize problem patterns and to propose solutions that can be implemented directly in real-time grid management. The principal functionality of this ANN-based optimizer has been demonstrated by means of a simple virtual electrical grid. For this grid, the trained ANN predicted the solution minimizing the total line power dissipation in 98 percent of the cases considered. In 99 percent of the cases, a valid solution in compliance with the specified operating conditions was found. First ANN tests on a more realistic grid, calibrated with household load measurements, revealed a prediction rate between 88 and 90 percent, depending on the optimization criteria. This approach promises a faster, more cost-efficient, and potentially more secure method to support distribution system operators in grid management.
Organizations deploy a plethora of information technology (IT) systems. Various types of enterprise systems (ES) may coexist with shadow IT systems (SITS) implemented by individual business units without the involvement of the IT department. The associated redundancy of SITS and ES suggests their integration. After integration, however, organizations may find it challenging to retain the flexibility and innovation that the development of SITS offers the business. In this study, we conduct a literature review on IT systems integration. This review and the specific characteristics of SITS then serve to define SITS integration and to derive guidelines for the integration decision, for the phases preceding and following integration, and for the integration process itself. SITS and ES integration can profit from existing knowledge of integration benefits and costs and of the available technologies. Our study offers IT decision makers an insight into the specifics of SITS integration and provides a basis for future SITS research.
A novel implant system for bone elongation is presented. With this technique, the body's own bone material, so-called callus, can be formed by gradual distraction of the tubular bones, thus achieving an extension of the femur and tibia. The driving principle of this fully implantable bone lengthening system is based on a shape memory element. During the surgical treatment, the intramedullary nail serves to stabilize the severed bone and enables the formation of new, endogenous bone material to lengthen the limbs or to bridge bone defects. The intramedullary nail is implanted into the medullary cavity and fixed at both ends with locking bolts. A receiver coil implanted under the skin receives the necessary energy twice a day through high-frequency energy transfer to activate the thermal phase transformation of the shape memory element. This gradually increases the bone gap by 0.5 mm each time and stimulates callus formation. Consequently, osteoblasts and osteocytes are formed in the area of the desired bone extension, and load-bearing bone material is formed. Three nail prototypes have already been tested for their functionality in a cadaver study in a German clinic. Currently, a redesign of this intelligent implant system is underway, focusing on a novel coil geometry, a monitoring sensor system and control technology, and a novel connection technology for the drive components. With this intelligent implant system, it will be possible for the first time to lengthen bones in a patient-friendly manner and to continuously monitor, document, and evaluate the entire lengthening process.
Sleep is a multi-dimensional influencing factor on physical health, cognitive function, emotional well-being, mental health, daily performance, and productivity. Barriers such as time consumption, invasiveness, and expense have caused a gradual shift in sleep monitoring from the traditional and standard in-lab approach, e.g., polysomnography (PSG), to unobtrusive and non-invasive in-home sleep monitoring, yet further improvement is required. Despite an increasing interest in fiber-optic methods for cardiorespiratory estimation, traditional mechanical sensors, consisting of force-sensitive resistors (FSR), lead zirconate titanate (PZT) piezoelectric elements, and accelerometers, still serve as the dominant approach. Part of their popularity lies in reduced system complexity, lower expense, easy maintenance, and user-friendliness. However, care must be taken regarding the performance of such sensors with respect to accuracy and calibration.
Besides energy efficiency, product quality is gaining importance in the design of drying processes for sensitive biological foodstuffs. The influence of drying parameters on the drying kinetics of apples has been extensively investigated; the information about effects on product quality available in the literature, however, is often contradictory. Furthermore, quality changes obtained by applying different drying parameters are usually hard to compare. As most quality changes can be expressed as zero-, first-, or second-order reactions and mainly depend on drying air temperature and drying time, it would be desirable to cross-check the results as a function thereof. This paper introduces a method of quality determination using a new reference value, the cumulated thermal load. It is defined as the time integral of the product surface temperature and improves the comparability of quality changes obtained by different experimental settings in the drying of apples and tomatoes. It could be shown that quality parameters such as color changes and shrinkage during apple drying and the content of temperature-sensitive acids in tomatoes vary linearly with the integral of product temperature over time.
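The cumulated thermal load defined above can be approximated from logged surface temperatures with the trapezoidal rule; the sampling interval and temperature profile below are hypothetical.

```python
import numpy as np

def cumulated_thermal_load(t, temp):
    """Trapezoidal time integral of the product surface temperature."""
    t = np.asarray(t, dtype=float)
    temp = np.asarray(temp, dtype=float)
    return float(np.sum((t[1:] - t[:-1]) * (temp[1:] + temp[:-1]) / 2.0))

# Hypothetical apple-drying run: surface temperature logged every 10 minutes
t_min = [0, 10, 20, 30, 40, 50, 60]
surface_c = [20, 45, 55, 60, 60, 60, 60]
ctl = cumulated_thermal_load(t_min, surface_c)   # in degC*min
```

Two drying runs with different parameter settings can then be compared on this single reference value instead of on incommensurable temperature/time pairs.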
Investigation of magnetic effects on austenitic stainless steels after low temperature carburization
(2018)
In recent decades, a steady increase in the volume of tourism has been a stable trend. To offer travel opportunities to all groups, it is necessary to also prepare offers for people in need of long-term care or people with disabilities. One way to improve accessibility could be digital technologies, which could help in planning as well as in carrying out trips. In the work presented, a study of barriers was first conducted, which after analysis led to selecting technologies for a test setup. The main focus was on a mobile app with travel information and 360° tours. The evaluation results showed that both technologies can increase accessibility, but some essential aspects (such as usability, completeness, relevance, etc.) need to be considered when implementing them.
The digital transformation of business processes and the integration of IT systems lead to opportunities and risks for small and medium-sized enterprises (SMEs), including risks that can result from a lack of IT Governance, Risk and Compliance (GRC). The purpose of this paper is to present the design and evaluation phases of creating an artefact to reduce these risks. For this, the Design Science Research approach based on Hevner is used. The artefact is developed by selecting relevant existing frameworks and identifying SME-specific competencies. The method enables IT GRC managers to transfer or adapt the frameworks to an SME organizational structure. The results from ten interviews and three further feedback loops showed that the method can be applied in practice and that a tailoring of established frameworks can take place. Contrary to the previous basic orientation of the research, this paper focuses on the concretization of approaches.
In multi-extended object tracking, parameters (e.g., extent) and trajectory are often determined independently. In this paper, we propose a joint parameter and trajectory (JPT) state and its integration into the Bayesian framework. This allows processing measurements that contain information about parameters and states. Examples of such measurements are bounding boxes given by an image processing algorithm. It is shown that this approach can consider correlations between states and parameters. In this paper, we present the JPT Bernoulli filter. Since parameters and state elements are considered in the weighting of the measurement data association hypotheses, the performance is higher than with the conventional Bernoulli filter. The JPT approach can also be used for other Bayes filters.
Klimawandel
(2017)
Sleep apnea is a frequently occurring sleep disorder with various effects on our everyday life; for example, daytime sleepiness has been reported in about 25% of patients with obstructive sleep apnea (OSA). The aim of this work is the development of a system that enables noninvasive detection of sleep apnea in the home environment.
Kreativwirtschaft Bodensee
(2017)
A physics lab setup has been developed for engineering students in their first year at university. The so-called LabTeamCoaching helps to improve general lab skills, such as preparing an experiment, writing documentation, using graphs, and drawing conclusions. By using a flipped classroom approach, students become more engaged than in our former physics labs, where we applied classical methods. This approach will be described and an overview of our 10 years of experience using this method will be given.
We are interested in computing a mini-batch-capable end-to-end algorithm to identify statistically independent components (ICA) in large-scale and high-dimensional datasets. Current algorithms typically rely on pre-whitened data and do not integrate the two procedures of whitening and ICA estimation. Our online approach estimates a whitening and a rotation matrix with stochastic gradient descent on centered or uncentered data. We show that this can be done efficiently by combining the Batch Karhunen-Loève Transformation [1] with Lie group techniques. Our algorithm is recursion-free and can be organized as a feed-forward neural network, which makes the use of GPU acceleration straightforward. Because of the very fast convergence of Batch KLT, the gradient descent in the Lie group of orthogonal matrices stabilizes quickly. The optimization is further enhanced by integrating ADAM [2], an improved stochastic gradient descent (SGD) technique from the field of deep learning. We test the scaling capabilities by computing the independent components of the well-known ImageNet challenge (144 GB). Due to its robustness with respect to batch and step size, our approach can be used as a drop-in replacement for standard ICA algorithms where memory is a limiting factor.
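The key property of gradient descent in the Lie group of orthogonal matrices is that the rotation estimate stays exactly orthogonal; a hedged sketch of one generic update step (not the paper's full algorithm, and the gradient here is a random stand-in, purely for illustration):

```python
import numpy as np

def so_n_step(R, G, lr=0.1):
    """One gradient step on SO(n): project G onto the Lie algebra of
    skew-symmetric matrices and map back with the matrix exponential,
    so the result stays exactly orthogonal."""
    A = 0.5 * (G - G.T)                      # skew-symmetric projection
    # matrix exponential of real skew-symmetric A via the Hermitian matrix 1j*A
    w, V = np.linalg.eigh(1j * A)
    expA = (V * np.exp(-1j * lr * w)) @ V.conj().T
    return expA.real @ R

R = np.eye(3)
G = np.random.default_rng(0).normal(size=(3, 3))  # stand-in gradient
R = so_n_step(R, G)
print(np.allclose(R @ R.T, np.eye(3)))  # True: orthogonality preserved
```

Because the update is multiplicative, no re-orthogonalization of the rotation matrix is needed between mini-batches.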
Leadership in a global world
(2017)
We propose a novel end-to-end neural network architecture that, once trained, directly outputs a probabilistic clustering of a batch of input examples in one pass. It estimates a distribution over the number of clusters k, and for each 1≤k≤kmax, a distribution over the individual cluster assignment for each data point. The network is trained in advance in a supervised fashion on separate data to learn grouping by any perceptual similarity criterion based on pairwise labels (same/different group). It can then be applied to different data containing different groups. We demonstrate promising performance on high-dimensional data like images (COIL-100) and speech (TIMIT). We call this “learning to cluster” and show its conceptual difference to deep metric learning, semi-supervised clustering and other related approaches while having the advantage of performing learnable clustering fully end-to-end.
Lernfabrik
(2016)
The introduction of cyber-physical systems in manufacturing will strongly change working conditions and processes as well as business models. In practice, a growing discrepancy between large companies and SMEs can be observed. It is precisely this discrepancy that the learning factory presented below is intended to bridge: it offers companies a platform for experimentation, creates opportunities for training students and employees, and provides consulting services. For its implementation, an integrated, open, and standardized automation concept is presented that spans individual devices, entire production lines, and higher-level automation systems, and that also provides a community and serves the implementation of new business models.
Digitization and sustainability are the two big topics of our time. As the usage of digital products like IoT devices continues to grow, it affects the energy consumption caused by the Internet. At the same time, more and more companies feel the need to become carbon neutral and sustainable. Determining the environmental impact of an IoT device is challenging, as both the production of the hardware components and the electricity consumption of the Internet must be considered, since the Internet is the primary communication medium of an IoT device. Estimating the electricity consumption of the Internet itself is a complex task. We performed a life cycle assessment (LCA) to determine the environmental impact of an intelligent smoke detector sold in Germany, taking its whole life cycle from cradle to grave into account. We applied the impact assessment method ReCiPe 2016 Midpoint and compared its results with ILCD 2011 Midpoint+ to check the robustness of our results. The LCA results showed that electricity consumption during the use phase is the main contributor to environmental impacts. This contribution is caused by the mining of coal, which is part of the German electricity mix. Consequently, the smoke detector mainly contributes to the impact categories of freshwater and marine ecotoxicity, but only marginally to global warming.
CO2 compensation measures, in particular the compensation of flights, are becoming more and more popular. Carbon offsetting is defined as donation-financed measures that save, through climate protection projects, greenhouse gases previously emitted elsewhere.
CO2 abatement costs are often low in developing countries, which is why most offset projects are implemented there. Nevertheless, this does not mean that the holiday destination and the project country are related to each other in any way.
By linking carbon offset projects with the destination country, tourists are able to get an impression of the co-financed project. If such projects are realized in cooperation with a hotel, the hotel operator obtains a new tourist attraction and can demonstrate its commitment to climate protection in a PR-effective way.
List decoding for concatenated codes based on the Plotkin construction with BCH component codes
(2021)
Reed-Muller codes are a popular code family based on the Plotkin construction. Recently, these codes have regained some interest due to their close relation to polar codes and their low-complexity decoding. We consider a similar code family, i.e., the Plotkin concatenation with binary BCH component codes. This construction is more flexible regarding the attainable code parameters. In this work, we consider a list-based decoding algorithm for the Plotkin concatenation with BCH component codes. The proposed list decoding leads to a significant coding gain with only a small increase in computational complexity. Simulation results demonstrate that the Plotkin concatenation with the proposed decoding achieves near maximum likelihood decoding performance. This coding scheme can outperform polar codes for moderate code lengths.
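The Plotkin (u, u+v) construction underlying this code family is simple to state; below is a minimal GF(2) sketch, where the vectors are illustrative toy codewords, not actual BCH codewords:

```python
def plotkin(u, v):
    """Plotkin construction over GF(2): concatenate u with u XOR v."""
    assert len(u) == len(v)
    return u + [a ^ b for a, b in zip(u, v)]

u = [1, 0, 1, 1]   # codeword of the first component code (illustrative)
v = [1, 1, 1, 1]   # codeword of the second component code (illustrative)
print(plotkin(u, v))  # [1, 0, 1, 1, 0, 1, 0, 0]
```

If the component codes have parameters (n, k1, d1) and (n, k2, d2), the concatenation has length 2n and minimum distance min(2·d1, d2), which is what makes the choice of BCH component codes flexible in terms of attainable parameters.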
Loch im Putz = alles neu?
(2016)
Market research institutes forecast a growing relevance of Low-Code Development Platforms (LCDPs) for organizations. Moreover, the rising number of scientific publications in recent years shows the increasing interest of the academic community. However, an overview of current research focuses and fruitful future research topics is missing. This paper conducts a first scientific literature review on LCDPs to close this gap. The socio-technical system (STS) model, which categorizes information systems into a social and a technical system, serves to analyze the identified 32 publications. Most of current research focuses on the technical system (technology or task). In contrast, only three publications explicitly target the social system (structure or people). Hence, this paper enables future research to address the identified research gaps. Additionally, practitioners gain awareness of technical and social aspects involved in the development, implementation, and application of LCDPs.
This work proposes a suboptimal detection algorithm for generalized multistream spatial modulation. Many suboptimal detection algorithms for spatial modulation use two-stage detection schemes where the set of active antennas is detected in the first stage and the transmitted symbols in the second stage. For multistream spatial modulation with large signal constellations the second detection step typically dominates the detection complexity. With the proposed detection scheme, the modified Gaussian approximation method is used for detecting the antenna pattern. In order to reduce the complexity for detecting the signal points, we propose a combined equalization and list decoding approach. Simulation results demonstrate that the new algorithm achieves near-maximum-likelihood performance with small list sizes. It significantly reduces the complexity when compared with conventional two-stage detection schemes.
Multi-dimensional spatial modulation is a multiple-input/multiple-output wireless transmission technique that uses only a few active antennas simultaneously. The computational complexity of the optimal maximum-likelihood (ML) detector at the receiver increases rapidly as more transmit antennas or larger modulation orders are employed. ML detection may be infeasible for higher bit rates. Many suboptimal detection algorithms for spatial modulation use two-stage detection schemes where the set of active antennas is detected in the first stage and the transmitted symbols in the second stage. Typically, these detection schemes use the ML strategy for the symbol detection. In this work, we consider a suboptimal detection algorithm for the second detection stage. This approach combines equalization and list decoding. We propose an algorithm for multi-dimensional signal constellations with a reduced search space in the second detection stage through set partitioning. In particular, we derive a set partitioning from the properties of Hurwitz integers. Simulation results demonstrate that the new algorithm achieves near-ML performance. It significantly reduces the complexity when compared with conventional two-stage detection schemes. Multi-dimensional constellations in combination with suboptimal detection can even outperform conventional signal constellations in combination with ML detection.
The computational complexity of the optimal maximum likelihood (ML) detector for spatial modulation increases rapidly as more transmit antennas or larger modulation orders are employed. Hence, ML detection may be infeasible for higher bit rates. This work proposes an improved suboptimal detection algorithm based on the Gaussian approximation method. It is demonstrated that the new method is closely related to the previously published signal vector based detection and the modified maximum ratio combiner, but can improve the detection performance compared to these methods. Furthermore, the performance of different signal constellations with suboptimal detection is investigated. Simulation results indicate that the performance loss compared to ML detection depends heavily on the signal constellation, where the recently proposed Eisenstein integer constellations are beneficial compared to classical QAM or PSK constellations.
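The optimal ML detection referred to in these abstracts is a joint search over the active antenna index and the transmitted symbol; a hedged sketch for single-stream spatial modulation, where the channel, constellation, and function name are illustrative assumptions:

```python
import numpy as np

def sm_ml_detect(y, H, constellation):
    """ML detection for single-stream spatial modulation:
    return the (antenna index, symbol) pair minimizing ||y - h_a * s||^2."""
    best = None
    for a in range(H.shape[1]):
        for s in constellation:
            metric = np.linalg.norm(y - H[:, a] * s) ** 2
            if best is None or metric < best[0]:
                best = (metric, a, s)
    return best[1], best[2]

rng = np.random.default_rng(1)
H = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))  # toy channel
qpsk = [1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]
y = H[:, 2] * qpsk[0]            # noiseless receive vector, antenna 2 active
print(sm_ml_detect(y, H, qpsk))  # (2, (1+1j))
```

The nested search costs on the order of (number of transmit antennas) × (constellation size) metric evaluations, which is exactly the complexity growth that motivates the suboptimal two-stage schemes discussed above.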
This work proposes a construction for low-density parity-check (LDPC) codes over finite Gaussian integer fields. Furthermore, a new channel model for codes over Gaussian integers is introduced and its channel capacity is derived. This channel can be considered as a first order approximation of the additive white Gaussian noise channel with hard decision detection where only errors to nearest neighbors in the signal constellation are considered. For this channel, the proposed LDPC codes can be decoded with a simple non-probabilistic iterative decoding algorithm similar to Gallager's decoding algorithm A.
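The flavor of such a simple non-probabilistic iterative decoder can be shown with a plain binary bit-flipping sketch in the spirit of Gallager's algorithm A; since the paper's codes are over finite Gaussian integer fields, this GF(2) example is only an analogy, and the small Hamming code below is an illustrative stand-in for an LDPC code:

```python
import numpy as np

def bit_flip_decode(H, y, max_iter=10):
    """Hard-decision iterative decoding: repeatedly flip the bit(s)
    involved in the largest number of unsatisfied parity checks."""
    y = y.copy()
    for _ in range(max_iter):
        syndrome = (H @ y) % 2
        if not syndrome.any():
            return y                      # all checks satisfied
        votes = H.T @ syndrome            # unsatisfied checks per bit
        y[votes == votes.max()] ^= 1      # flip the most-suspect bit(s)
    return y

# (7,4) Hamming code parity-check matrix as a small stand-in
H = np.array([[1, 0, 1, 0, 1, 0, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])
codeword = np.array([1, 1, 1, 0, 0, 0, 0])
received = codeword.copy(); received[4] ^= 1     # single bit error
print(bit_flip_decode(H, received))  # [1 1 1 0 0 0 0]
```

On sparse parity-check matrices this style of decoding needs only counting and XOR, which is what makes it attractive compared with probabilistic message passing.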
Algorithms for calculating the string edit distance are used, e.g., in information retrieval and document analysis systems or for the evaluation of text recognizers. Text recognition based on CTC-trained LSTM networks includes a decoding step to produce a string, possibly using a language model, and an evaluation using the string edit distance. The decoded string can further be used as a query for database search, e.g. in document retrieval. We propose to closely integrate dictionary search with text recognition in order to train both combined in a continuous fashion. This work shows that LSTM networks are capable of calculating the string edit distance while allowing for an exchangeable dictionary to separate the learned algorithm from the data. This could be a step towards integrating text recognition and dictionary search in one deep network.
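The string edit distance itself is the classic dynamic-programming recurrence; a compact reference sketch (illustrative, not the network-based computation proposed above):

```python
def edit_distance(a, b):
    """Levenshtein distance between strings a and b, O(len(a)*len(b))."""
    prev = list(range(len(b) + 1))            # distances for empty prefix of a
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                  # deletion
                            curr[j - 1] + 1,              # insertion
                            prev[j - 1] + (ca != cb)))    # substitution/match
        prev = curr
    return prev[-1]

print(edit_distance("recognition", "recogniton"))  # 1 (one missing 'i')
print(edit_distance("kitten", "sitting"))          # 3
```

This is the target quantity that the LSTM networks above are shown to compute.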
The evaluation of the effectiveness of different machine learning algorithms on a publicly available database of signals derived from wearable devices is presented, with the goal of optimizing human activity recognition and classification. Among the wide range of body signals, we chose two signals, namely photoplethysmographic (optically detected subcutaneous blood volume) and tri-axis acceleration signals, that are easy to acquire simultaneously using widespread commercial devices (e.g. smartwatches) as well as custom wearable wireless devices designed for sport, healthcare, or clinical purposes. To this end, two widely used algorithms (decision tree and k-nearest neighbor) were tested, and their performance was compared to two recent algorithms (particle Bernstein and a Monte Carlo-based regression), both in terms of accuracy and processing time. A data preprocessing phase was also considered to improve the performance of the machine learning procedures and to reduce the problem size; a detailed analysis of the compression strategy and results is also presented.
Magnetic effects on austenitic stainless steels, formed during low-temperature carburizing depending on the alloy composition, are discussed in this paper. Samples of different austenitic stainless steel alloys were subjected to multiple low-temperature carburization. Layer characterization with a light microscope and hardness profiles shows a growth of the layer thickness. The formation of an expanded austenite layer (lattice expansion) could be detected by X-ray diffraction (XRD). A Feritscope was used to determine the magnetizability, whereby not all austenitic alloys become magnetizable after treatment. Furthermore, test procedures were developed to visualize the magnetizability. For this purpose, magnetic force microscope measurements and investigations with ferrofluid were carried out, and a fir-tree-like ferromagnetic layer structure could be proven.
Technical presentation at the 10th International European Stainless Steel Conference and 6th European Duplex Stainless Steel Conference (ESSC & DUPLEX 2019), 30.09.–02.10.2019, Vienna, Austria
The use of deep learning models with medical data is becoming more widespread. However, although numerous models have shown high accuracy in medical tasks such as medical image recognition (e.g. radiographs), there are still many obstacles to seeing these models operate in a real healthcare environment. This article presents a series of basic requirements that must be taken into account when developing deep learning models for biomedical time series classification tasks, with the aim of facilitating the subsequent deployment of the models in healthcare. These requirements range from the correct collection of data to the existing techniques for a correct explanation of the results obtained by the models. This matters because one of the main reasons why deep learning models are not more widespread in healthcare settings is their lack of clarity in explaining decision making.
We have analyzed a pool of 37,839 articles published in 4,404 business-related journals in the entrepreneurship research field using a novel literature review approach that is based on machine learning and text data mining. Most papers have been published in the journals ‘Small Business Economics’, ‘International Journal of Entrepreneurship and Small Business’, and ‘Sustainability’ (Switzerland), while the sum of citations is highest in the ‘Journal of Business Venturing’, ‘Entrepreneurship Theory and Practice’, and ‘Small Business Economics’. We derived 29 overarching themes based on 52 identified clusters. The social entrepreneurship, development, innovation, capital, and economy clusters represent the largest ones among those with high thematic clarity. The most discussed clusters measured by the average number of citations per assigned paper are research, orientation, capital, gender, and growth. Clusters with the highest average growth in publications per year are social entrepreneurship, innovation, development, entrepreneurship education, and (business-) models. Measured by the average yearly citation rate per paper, the thematic cluster ‘research’, mostly containing literature studies, received most attention. The MLR allows for an inclusion of a significantly higher number of publications compared to traditional reviews thus providing a comprehensive, descriptive overview of the whole research field.
It is widely recognized that sustainability is a new challenge for many manufacturing companies. In this paper, we tackle this issue by presenting an approach that deals with material and substance compliance within Product Lifecycle Management in a complex value chain. Our analysis explains why, how and when sustainable manufacturing arises, and it identifies, quantifies and evaluates the environmental impact of a new product. We propose (I) a Life Cycle Assessment tool (LCA) and (II) a model to validate this approach and evaluate the risk of noncompliance in supply chain. Our LCA approach provides comprehensive information on environmental impacts of a product.
Product and material cycles run in parallel and intersect, which makes it challenging to integrate the material selection process across Product Lifecycle Management and to integrate LCA with PLM. We provide only a foundation; further research in systems engineering is necessary. LCA is sensitive to data quality: outsourcing production and problems in supplier cooperation can result in material mismatches (such as mismatched properties or compositions) in the production process, which may lead to misleading LCA results.
This paper also describes research challenges concerning risk-based due diligence.
In recent years, there has been a noticeable trend towards a general contractor strategy for plant engineering companies. Multiple disciplines and departments must be administered in a joint project. In the process, different work results are often managed in various systems without any associative relationship. A possible way to address this complexity is to implement a specifically tailored PLM strategy to gain a competitive advantage. Maturity models as well as methods to evaluate possible benefits constitute increasingly applied tools during this journey. Both methods have been theoretically described in previous publications. However, this paper provides insights into the practical application within the machinery industry. Therefore, a medium-sized German plant engineering company serves as an example for determining the scope and value of a multi-national overarching Product Lifecycle Management architecture as the central piece of a future digitalization strategy. The company’s current maturity levels for several digitalization capabilities are evaluated, prioritized and benchmarked against a set of similar companies. This allows suitable target states to be derived in terms of maturity levels as well as the technical specification of digitalization use cases. In order to provide sound data for cost justification, the resulting benefits are quantified.
This paper broadens the resource-based approach to explaining the survival of new technology-based firms (NTBFs) by focusing on the entrepreneur's ability to transform resources in response to triggers resulting from market interactions. Network theory is used to define a construct that allows determining the status of venture emergence (VE). The operationalization of the VE construct is built on the firm's value network maturity in the four market dimensions customer, investor, partner, and human resources. Business plans of NTBFs represent the artifact that contains this data in the form of transaction relation descriptions. Using content analysis, a multi-step combined human and computer coding process has been developed to empirically determine NTBFs' status of VE. Results of the business plan analysis suggest that the level of transaction relations allows conclusions to be drawn on the status of VE. Moreover, applying the developed process, a business plan coding test shows that the transaction-relation-based VE status relates significantly to NTBFs' survival capabilities.
Scenography is a universal discipline that uses space, light, graphics, sound, and digital media to create experiential spaces in which knowledge is conveyed and hidden connections are made intuitively accessible to visitors. Media-based communication strategies play a central role here. One of these strategies is information on demand: contextualizing information is not offered permanently, but only when the visitor retrieves it. This allows for an auratic presentation of objects as well as contextualization and the conveying of information. A further strategy is personalized devices, which additionally enable visitors to be addressed in a target-group-oriented way. Beyond that, VR and AR technologies offer the possibility of presenting complex scientific relationships in a way that appeals to audiences.