Refine
Year of publication
- 2018 (101)
Document Type
- Conference Proceeding (64)
- Article (24)
- Part of a Book (6)
- Doctoral Thesis (5)
- Patent (1)
- Working Paper (1)
Language
- English (101)
Keywords
- Actuators (1)
- Agenda 2030 (1)
- Anisotropic hyperelasticity (1)
- Antenna arrays (1)
- Application Integration (1)
- Automotive Industry (1)
- Basic prices (1)
- Bernstein Basis (1)
- Bernstein polynomial (2)
- Bio-vital data (1)
Institute
- Fakultät Architektur und Gestaltung (1)
- Fakultät Bauingenieurwesen (1)
- Fakultät Informatik (15)
- Fakultät Maschinenbau (3)
- Fakultät Wirtschafts-, Kultur- und Rechtswissenschaften (5)
- Institut für Angewandte Forschung - IAF (5)
- Institut für Optische Systeme - IOS (13)
- Institut für Strategische Innovation und Technologiemanagement - IST (4)
- Institut für Systemdynamik - ISD (14)
- Konstanz Institut für Corporate Governance - KICG (1)
This work studies a wind noise reduction approach for communication applications in a car environment. An endfire array consisting of two microphones is considered as a substitute for an ordinary cardioid microphone capsule of the same size. Using the decomposition of the multichannel Wiener filter (MWF), a suitable beamformer and a single-channel post filter are derived. Due to the known array geometry and the location of the speech source, assumptions about the signal properties can be made to simplify the MWF beamformer and to estimate the speech and noise power spectral densities required for the post filter. Even for closely spaced microphones, the different signal properties at the microphones can be exploited to achieve a significant reduction of wind noise. The proposed beamformer approach results in an improved speech signal regarding the signal-to-noise ratio and keeps the linear speech distortion low. The derived post filter shows equal performance compared to known approaches but reduces the effort for noise estimation.
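For reference, the decomposition mentioned above is the well-known factorization of the MWF into an MVDR beamformer followed by a single-channel Wiener post filter; in a common textbook notation (the symbols below are not taken from the paper itself) it reads:

```latex
\mathbf{w}_{\mathrm{MWF}}
  = \underbrace{\frac{\boldsymbol{\Phi}_n^{-1}\,\mathbf{d}}
                     {\mathbf{d}^{H}\boldsymbol{\Phi}_n^{-1}\mathbf{d}}}_{\text{MVDR beamformer}}
    \cdot
    \underbrace{\frac{\varphi_s}{\varphi_s + \varphi_{\tilde n}}}_{\text{Wiener post filter}},
  \qquad
  \varphi_{\tilde n} = \frac{1}{\mathbf{d}^{H}\boldsymbol{\Phi}_n^{-1}\mathbf{d}}
```

Here \(\boldsymbol{\Phi}_n\) is the noise PSD matrix, \(\mathbf{d}\) the steering vector toward the speech source, \(\varphi_s\) the speech PSD, and \(\varphi_{\tilde n}\) the residual noise PSD at the beamformer output. The known geometry and source location simplify the estimation of these quantities.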
Visualization-Assisted Development of Deep Learning Models in Offline Handwriting Recognition
(2018)
Deep learning is a field of machine learning that has been the focus of active research and successful applications in recent years. Offline handwriting recognition is one of the research fields and applications where deep neural networks have shown high accuracy. Deep learning models and their training pipeline involve a large number of hyper-parameters in data selection, transformation, network topology and the training process that are sometimes interdependent. This increases the overall difficulty and time necessary for building and training a model for a specific data set and task at hand. This work proposes a novel visualization-assisted workflow that guides the model developer through the hyper-parameter search in order to identify relevant parameters and modify them in a meaningful way. This decreases the overall time necessary for building and training a model. The contributions of this work are a workflow for hyper-parameter search in offline handwriting recognition and a heat-map-based visualization technique for deep neural networks in multi-line offline handwriting recognition. This work applies to offline handwriting recognition, but the general workflow can possibly be adapted to other tasks as well.
Although the Hospice Foundation in Constance knew they had a personnel
problem, they were unsure how to begin to fix it. In addition to difficulties in
finding and keeping employees, the Hospice Foundation’s employees were
often on sick leave, adding pressure on remaining staff. Twelve communication
design students in the master's program at the University of Applied
Sciences in Constance (HTWG Konstanz) conducted a study aimed at
identifying the causes for these problems and, more generally, understanding
how the employees work and feel. Even though the methods in this
study are well known, it presents an important prototype for designers and
design researchers because of its success in finding useful insights. It also
serves as a pre-design project briefing for both management and designers.
It demonstrates the usefulness of qualitative methods in providing a deeper
understanding of a complex situation and its usefulness as a strategic tool
and for defining a project’s focus and scope. Ideally, it also provides insights
into health care for the elderly.
A growing share of modern trade policy instruments is shaped by non-tariff barriers (NTBs). Based on a structural gravity equation and the recently updated Global Trade Alert database, we empirically investigate the effect of NTBs on imports. Our analysis reveals that the implementation of NTBs reduces imports of affected products by up to 12%. Their trade dampening effect is thus comparable to that of trade defence instruments such as anti-dumping duties. It is smaller for exporters that have a free trade agreement with the importing country. Different types of NTBs affect trade to a different extent. Finally, we investigate the effect of behind-the-border measures, showing that they significantly lower the importer’s market access.
Business units are increasingly able to fuel the transformation that digitalization demands of organizations. Thereby, they implement Shadow IT (SIT) to create flexible and innovative solutions. However, the individual implementation of SIT leads to high complexities and redundancies. Integration suggests itself to meet these challenges but can also eliminate the described benefits. In this emergent research, we develop propositions for a conceptual decision framework that balances the benefits and drawbacks of an integration of SIT, using a literature review as well as a multiple-case study. We thereby integrate the perspective of the overall organization as well as that of the specific business unit. We then pose six propositions regarding SIT integration that will serve to evaluate our conceptual framework in future research.
In 1970, B.A. Asner, Jr., proved that for a real quasi-stable polynomial, i.e., a polynomial whose zeros lie in the closed left half-plane of the complex plane, its finite Hurwitz matrix is totally nonnegative, i.e., all its minors are nonnegative, and that the converse statement is not true. In this work, we explain this phenomenon in detail, and provide necessary and sufficient conditions for a real polynomial to have a totally nonnegative finite Hurwitz matrix.
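The objects involved can be made concrete with a small sketch (helper names are hypothetical, not from the paper): building the finite Hurwitz matrix of a polynomial and brute-force checking that every minor is nonnegative, which is feasible only for low degrees.

```python
from itertools import combinations

def hurwitz_matrix(coeffs):
    """n x n finite Hurwitz matrix of p(z) = a0*z^n + ... + an, coeffs = [a0, ..., an]."""
    n = len(coeffs) - 1
    a = lambda k: coeffs[k] if 0 <= k <= n else 0
    return [[a(2 * (j + 1) - (i + 1)) for j in range(n)] for i in range(n)]

def det(m):
    """Exact determinant by cofactor expansion (fine for small matrices)."""
    if len(m) == 1:
        return m[0][0]
    return sum((-1) ** j * m[0][j] * det([row[:j] + row[j + 1:] for row in m[1:]])
               for j in range(len(m)))

def is_totally_nonnegative(m):
    """Check that every square minor of m is >= 0 (exponential; illustration only)."""
    n = len(m)
    for k in range(1, n + 1):
        for rows in combinations(range(n), k):
            for cols in combinations(range(n), k):
                if det([[m[i][j] for j in cols] for i in rows]) < 0:
                    return False
    return True
```

For example, p(z) = z² + 2z + 1 = (z+1)² is stable, and its Hurwitz matrix [[2, 0], [1, 1]] is indeed totally nonnegative, while p(z) = z² − z + 1, whose zeros lie in the right half-plane, fails the check already at the entry −1.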
New Technology-Based Firms (NTBFs) learn their business in the early stages of their life-cycle. As a central element of the entrepreneurial learning process, the business model describes the value-creation functions that are conceptualized in different stages of the NTBF's life-cycle. Transaction relations connect the model with the business reality and ideally mature in strength over time into a functioning value-network. This chapter describes the development of a research design that determines, extracts, and evaluates semantic constructs of this entrepreneurial learning from a convenience sample comprising three cohorts of business plans submitted to a business plan award between 2008 and 2010. The analysis shows empirical evidence for the survival and growth of those NTBFs that exhibit a balanced status of entrepreneurial learning in the maturity of the value-network that can be characterized as the early startup stage. The empirical findings of the network-theory-based business plan analysis allow for a better explanation of the performance in the entrepreneurial process that is discussed for NTBFs based on the theory of organizational learning.
The Role of Support-Activities for the successful Implementation of Internal Corporate Accelerators
(2018)
To get a better understanding of Cross Site Scripting vulnerabilities, we investigated 50 randomly selected CVE reports which are related to open source projects. The vulnerable and patched source code was manually reviewed to find out what kind of source code patterns were used. Source code pattern categories were found for sources, concatenations, sinks, html context and fixes. Our resulting categories are compared to categories from CWE. A source code sample which might have led developers to believe that the data was already sanitized is described in detail. For the different html context categories, the necessary Cross Site Scripting prevention mechanisms are described.
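The point about HTML contexts can be illustrated with Python's standard library (a generic sketch, not the mechanisms used in the surveyed projects): the escaping that suffices in element content differs from what a quoted attribute or a URL context needs.

```python
import html
from urllib.parse import quote

user_input = '<script>alert(1)</script>'

# Element content context: entity-encode the HTML metacharacters.
safe_text = html.escape(user_input)

# Quoted attribute context: quote=True additionally encodes " and '.
safe_attr = html.escape(user_input, quote=True)

# URL context (e.g. a query parameter): percent-encode instead.
safe_url_param = quote(user_input, safe='')

print(safe_text)  # &lt;script&gt;alert(1)&lt;/script&gt;
```

Unquoted attributes, JavaScript contexts and CSS contexts need stricter encoding still, which is exactly why per-context prevention mechanisms matter.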
We investigated 50 randomly selected buffer overflow vulnerabilities in Firefox. The source code of these vulnerabilities and the corresponding patches were manually reviewed and patterns were identified. Our main contribution are taxonomies of errors, sinks and fixes seen from a developer's point of view. The results are compared to the CWE taxonomy with an emphasis on vulnerability details. Additionally, some ideas are presented on how the taxonomy could be used to improve the software security education.
Error correction coding based on soft-input decoding can significantly improve the reliability of non-volatile flash memories. This work proposes a soft-input decoder for generalized concatenated (GC) codes. GC codes are well suited for error correction in flash memories for high reliability data storage. We propose GC codes constructed from inner extended binary Bose-Chaudhuri-Hocquenghem (BCH) codes and outer Reed-Solomon codes. The extended BCH codes enable an efficient hard-input decoding. Furthermore, a low-complexity soft-input decoding method is proposed. This bit-flipping decoder uses a fixed number of test patterns and an algebraic decoder for soft-decoding. An acceptance criterion for the final candidate codeword is proposed. Combined with error and erasure decoding of the outer Reed-Solomon codes, this acceptance criterion can improve the decoding performance and reduce the decoding complexity. The presented simulation results show that the proposed bit-flipping decoder in combination with outer error and erasure decoding can outperform maximum likelihood decoding of the inner codes.
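The bit-flipping idea, a fixed set of test patterns over the least reliable bits, each re-decoded algebraically, with the best candidate kept under a soft metric, can be sketched in Python. The Hamming(7,4) syndrome decoder below merely stands in for the extended BCH decoder of the paper, and the simple correlation metric for its acceptance criterion.

```python
from itertools import combinations

def hamming74_decode(bits):
    """Syndrome decoding for the (7,4) Hamming code; stand-in algebraic decoder.
    The parity-check column for position p (1-based) is the binary expansion of p."""
    syndrome = 0
    for pos, b in enumerate(bits, start=1):
        if b:
            syndrome ^= pos
    out = list(bits)
    if syndrome:                      # flip the single indicated position
        out[syndrome - 1] ^= 1
    return out

def chase_decode(llrs, num_flips=2):
    """Generate test patterns over the `num_flips` least reliable positions,
    decode each, and keep the candidate closest (in soft metric) to the input."""
    hard = [1 if l < 0 else 0 for l in llrs]
    weakest = sorted(range(len(llrs)), key=lambda i: abs(llrs[i]))[:num_flips]
    best, best_metric = None, float('inf')
    for k in range(num_flips + 1):
        for flips in combinations(weakest, k):
            test = list(hard)
            for i in flips:
                test[i] ^= 1
            cand = hamming74_decode(test)
            # Reliability lost where the candidate disagrees with the hard decision.
            metric = sum(abs(llrs[i]) for i in range(len(llrs)) if cand[i] != hard[i])
            if metric < best_metric:
                best, best_metric = cand, metric
    return best
```

With two low-reliability errors, which would make hard decoding alone miscorrect, the all-zero codeword is still recovered: `chase_decode([5, 5, -1, 5, 5, -0.8, 5])` returns the all-zero word.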
Generalised concatenated (GC) codes are well suited for error correction in flash memories for high-reliability data storage. The GC codes are constructed from inner extended binary Bose–Chaudhuri–Hocquenghem (BCH) codes and outer Reed–Solomon codes. The extended BCH codes enable high-rate GC codes and low-complexity soft input decoding. This work proposes a decoder architecture for high-rate GC codes. For such codes, outer error and erasure decoding are mandatory. A pipelined decoder architecture is proposed that achieves a high data throughput with hard input decoding. In addition, a low-complexity soft input decoder is proposed. This soft decoding approach combines a bit-flipping strategy with algebraic decoding. The decoder components for the hard input decoding can be utilised, which reduces the overhead for the soft input decoding. Nevertheless, the soft input decoding achieves a significant coding gain compared with hard input decoding.
This paper presents a bed system able to analyze a person's movement and breathing and to recognize the positions in which the subject lies on the bed during the night, without any additional physical contact. The measurements are performed with sensors placed between the mattress and the bed frame. An Intel Edison board was used as an endpoint serving as a communication node from the mesh network to an external service. Two nodes and the Intel Edison are attached to the bottom of the bed frame and are connected to the sensors. First test results have indicated the potential of the proposed approach for the recognition of sleep positions, with 83% of positions correctly recognized.
Research on Shadow IT is facing a conceptual dilemma in cases where previously “covert” systems developed by business entities are integrated in the organizational IT management. These systems become visible, are thus not “in the shadows” anymore, and subsequently do not fit to existing definitions of Shadow IT. Practice shows that some information systems share characteristics of Shadow IT but are created openly in alignment with the IT organization. This paper proposes the term “Business-managed IT” to describe “overt” information systems developed or managed by business entities and distinguishes it from Shadow IT by illustrating case vignettes. Accordingly, our contribution is to suggest a concept and its delineation against other concepts. In this way, IS researchers interested in IT originated from or maintained by business entities can construct theories with a wider scope of application that are at the same time more specific to practical problems. In addition, the terminology makes it possible to value potentially innovative developments by business entities more adequately.
The process of restoring our body and brain from fatigue depends directly on the quality of sleep, which can be determined from the results of a sleep study. Classification of sleep stages is the first step of such a study and includes the measurement of bio-vital data and its further processing.
In this work, the sleep analysis system is based on a hardware sensor net, namely a grid of 24 pressure sensors, supporting sleep phase recognition. In comparison to the leading standard, which is polysomnography, the proposed approach is a non-invasive system. It recognises respiration and body movement with only one type of low-cost pressure sensors forming a mesh architecture. Each node is implemented as a series of pressure sensors connected to a low-power, performant microcontroller. All nodes are connected via a system-wide bus with address arbitration. The embedded processor is the mesh network endpoint that enables network configuration, storing and pre-processing of the data, external data access and visualization.
The system was tested by executing experiments recording the sleep of different healthy young subjects. The results obtained have indicated the potential to detect breathing rate and body movement. A major difference of this system in comparison to other approaches is the innovative way of placing the sensors under the mattress. This characteristic facilitates continuous use of the system without any influence on the common sleep process.
SDG Voyager - A practical guide to align business excellence with Sustainable Development Goals
(2018)
By now, a vast number of international publications on the topic of “Agenda 2030” exist. Yet the question of how the CSR management of a company can make a concrete contribution to the SDGs remains unanswered to this day. Instead of unilaterally demanding the reporting of companies' sustainability activities, the SDG Voyager starts earlier in the process, with the intention of encouraging companies of all sizes to become familiar with the fields of action for corporate responsibility and to attend to these issues without feeling overwhelmed. Many companies will find that they are already making a big contribution to sustainable development in a number of fields. In other areas, however, there will still be an urgent need for action. The SDG Voyager aims to acquaint companies with these topics and to support them in fulfilling their responsibilities towards their stakeholders and society.
We identify 74 generic, reusable technical requirements based on the GDPR that can be applied to software products which process personal data. The requirements can be traced to corresponding articles and recitals of the GDPR and fulfill the key principles of lawfulness and transparency. Therefore, we present an approach to requirements engineering with regard to developing legally compliant software that satisfies the principles of privacy by design, privacy by default as well as security by design.
We consider the problem of increasing the informative value of electrocardiographic (ECG) surveys using data from multichannel electrocardiographic leads, that include both recorded electrocardiosignals and the coordinates of the electrodes placed on the surface of the human torso. In this area, we were interested in reconstruction of the surface distribution of the equivalent sources during the cardiac cycle at relatively low hardware cost. In our work, we propose to reconstruct the equivalent electrical sources by numerical methods, based on integral connection between the density of electrical sources and potential in a conductive medium. We consider maps of distributions of equivalent electric sources on the heart surface (HSSM), presenting source distributions in the form of a simple or double electrical layer. We indicate the dynamics of the heart electrical activity by the space-time mapping of equivalent electrical sources in HSSM.
In tourism, energy demands are particularly high. Tourism facilities such as hotels require large amounts of electric and heating or cooling energy. Their supply, however, is usually still based on fossil energies. This research approach analyses the potential of promoting renewable energies in Black Forest tourism. It focuses on a combined and hence highly efficient production of both electric and thermal energy by biogas plants on the one hand and its provision to local tourism facilities via short-distance networks on the other. Based on surveys and qualitative empiricism, and considering regional resource availability as well as socio-economic aspects, it examines strengths, weaknesses, opportunities and threats that can arise from such a cooperation.
In tourism, energy demands are particularly high. Tourism facilities such as hotels require large amounts of electric and heating / cooling energy while their supply is usually still based on fossil energies.
This research approach analyses the potential of promoting renewable energies in tourism. It focuses on a combined and hence highly efficient production of both electric and thermal energy by biogas plants on the one hand and its provision to local tourism facilities via short distance networks on the other. Considering regional resource availability as well as socio-economic aspects, it thus examines strengths, weaknesses, opportunities and threats that can arise from such a micro-cooperation. The research aim is to provide an actor-based, spatially transferable feasibility analysis.
The overall goal of this work is to detect and analyze a person's movement, breathing and heart rate during sleep in a common bed overnight without any additional physical contact. The measurement is performed with the help of sensors placed between the mattress and the frame. A two-stage pattern classification algorithm has been implemented that applies statistical analysis to recognize the position of patients. The system is implemented as a sensor network hosting several nodes and communication end-points to support quick and efficient classification. The overall tests show convincing results for the position recognition and a reasonable overlap in matching.
According to the World Food Organization, nearly half of all root and tuber crops worldwide are not consumed, but are lost due to inappropriate storage and post-harvest losses. In developing countries such as Ethiopia, potatoes have not been dried, but are traditionally stored in potato clamps. So far, dried potatoes have not been converted into usable foods.
The aim of the present work is to convert potatoes - perishable roots and tubers - into stable products by hot air drying. Hot air dryers are economical to operate in industrialized countries; in Africa, their use is reserved for larger industrial companies only. In regions with a tropical climate, however, the use of solar tunnel dryers is worthwhile. These are a good choice for farming and small industries and wherever electrical energy is difficult or impossible to obtain.
In the first part of the work, the drying process of potatoes was investigated, in particular with regard to the change of thermal, mechanical and chemical quality parameters. An evaluation of the literature found that potatoes are not subject to quality changes if the water activity is below a value of 0.2. In order to determine the water content associated with this value at storage temperature, the known equations for the sorption equilibrium were evaluated and verified with our own experimental investigations. This determined the end point of the drying process.
The subsequent experimental investigations showed a process-dependent change of quality criteria such as color, shrinkage and mechanical properties, as well as of the content of value-determining substances such as vitamin C and starch. The differences in the course and magnitude of the quality changes were attributed to the glass transition that takes place during the drying process. For the determination of the glass transition temperature, a new, simple method based on the measurement of mechanical properties was developed. Knowledge of the glass transition temperature allowed the drying process to be optimized: it could be carried out in the rubbery or glassy region, depending on the expected quality changes. Thus, all information was available to produce high-quality dried potatoes in an industrial process.
Since the production of potato products should also be possible in less industrialized regions without a sufficient supply of electrical energy, potatoes were additionally dried with a solar tunnel dryer. Examination of the quality properties mentioned above confirmed the process-dependent quality changes.
Finally, the dried product was ground, and the flour thus produced was used to replace wheat flour for baking bread. An evaluation of the finished bread by a panel showed that the acceptance of the bread according to the new recipe was high, also with regard to baking volume, taste, texture and color.
This work shows that, by drying, potatoes can be transformed into a well-accepted, storable and easily transportable product. The risk of losses or degradation is minimized. The product can be made on an industrial as well as on a farm level. If the influence of the glass transition is taken into account, it is possible to optimize the quality of the product.
Optical surface inspection: A novelty detection approach based on CNN-encoded texture features
(2018)
In inspection systems for textured surfaces, a reference texture is typically known before novel examples are inspected. Mostly, the reference is only available in a digital format. As a consequence, there is no dataset of defective examples available that could be used to train a classifier. We propose a texture model approach to novelty detection. The texture model uses features encoded by a convolutional neural network (CNN) trained on natural image data. The CNN activations represent the specific characteristics of the digital reference texture which are learned by a one-class classifier. We evaluate our novelty detector in a digital print inspection scenario. The inspection unit is based on a camera array and a flashing light illumination which allows for inline capturing of multichannel images at a high rate. In order to compare our results to manual inspection, we integrated our inspection unit into an industrial single-pass printing system.
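The one-class step can be illustrated independently of the CNN. The sketch below uses simple per-dimension statistics in place of CNN activations (the feature vectors are hypothetical), flagging a sample as novel when any feature deviates too far from the reference statistics; it is a toy stand-in for the one-class classifier of the paper, not its actual model.

```python
class ZScoreNoveltyDetector:
    """Toy one-class model: learn mean/std per feature dimension from
    reference ('good') samples only; flag large deviations as novel."""

    def __init__(self, threshold=3.0):
        self.threshold = threshold

    def fit(self, samples):
        dims = len(samples[0])
        n = len(samples)
        self.mean = [sum(s[d] for s in samples) / n for d in range(dims)]
        self.std = [
            (sum((s[d] - self.mean[d]) ** 2 for s in samples) / n) ** 0.5 or 1e-9
            for d in range(dims)
        ]
        return self

    def score(self, x):
        """Maximum absolute z-score over all feature dimensions."""
        return max(abs(x[d] - self.mean[d]) / self.std[d] for d in range(len(x)))

    def is_novel(self, x):
        return self.score(x) > self.threshold
```

The key property mirrors the paper's setting: the detector is trained on reference data only, so no defective examples are needed at training time.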
A constructive nonlinear observer design for self-sensing of digital (ON/OFF) single coil electromagnetic actuators is studied. Self-sensing in this context means that solely the available energizing signals, i.e., coil current and driving voltage, are used to estimate the position and velocity trajectories of the moving plunger. A nonlinear sliding mode observer is considered, where the stability of the reduced error dynamics is analyzed by the equivalent control method. No simplifications are made regarding magnetic saturation and eddy currents in the underlying dynamical model. The observer gains are constructed by taking into account some generic properties of the system's nonlinearities. Two possible choices of the observer gains are discussed. Furthermore, an observer-based tracking control scheme to achieve sensorless soft landing is considered and its closed-loop stability is studied. Experimental results for observer-based soft landing of a fast-switching solenoid valve under dry conditions are presented to demonstrate the usefulness of the approach.
Sleep studies can be used to assess sleep quality and general bed behaviors. These results can be helpful for regulating sleep and recognizing different human sleep disorders. In comparison to the leading standard measuring system, polysomnography (PSG), the system proposed in this work is a non-invasive sleep monitoring device. For continuous analysis or home use, PSG or wearable actigraphy devices tend to be uncomfortable. Besides, these methods not only decrease practicality due to the process of having to put them on, but they are also very expensive. The system proposed in this paper classifies respiration and body movement with only one type of sensor, and in a non-invasive way. The sensor used is a pressure sensor, which is low cost and can be used for commercial purposes. The system was tested by carrying out an experiment that recorded the sleep process of a subject. These recordings showed excellent results in the classification of breathing rate and body movements.
Product development and product manufacturing are entering a new era, namely an era where engineering tasks are executed in collaboration of all involved parties. Engineers and potential customers work together, mainly in a virtual world, on the design and realization of the product. We address this so-called “crowdsourcing” trend in the automotive industry, which lowers cost and accelerates the production of new cars. Current practice and prior studies fail to handle data management and collaboration aspects in sufficient detail. We propose a PLM-based crowdsourcing platform that applies best practices to the established approach and adapts it with new methods for handling specific requirements. Our work provides a basis for establishing an improved collaboration platform to support a Gig Economy in the automotive industry.
This paper focuses on the multivariable control of a drawing tower process. The nature of the process, together with the differences in measurement noise levels that affect the variables to be controlled, motivated the development of a new MPC algorithm. An extension of a multivariable predictive control algorithm with separated prediction horizons is proposed. The obtained experimental results show the usefulness of the proposed algorithm.
Autonomous moving systems require very detailed information about their environment and potential colliding objects. Thus, the systems are equipped with high-resolution sensors. These sensors have the property of generating more than one detection per object per time step. This results in additional complexity for the target tracking algorithm, since standard tracking filters assume that an object generates at most one detection per time step. This requires new methods for data association and system state filtering.
As new data association methods, this thesis proposes two different extensions of the Joint Integrated Probabilistic Data Association (JIPDA) filter that assign more than one detection to tracks.
The first method introduced is a generalization of the JIPDA that assigns a variable number of measurements to each track based on predefined statistical models; it will be called Multi Detection - Joint Integrated Probabilistic Data Association (MD-JIPDA).
Since this scheme suffers from an exponential increase of association hypotheses, a new approximation scheme is also presented. The second method is an extension for the special case when the number and locations of measurements are known a priori. In preparation for this method, a new notation and computation scheme for the standard JIPDA is outlined, which also enables the derivation of a new fast approximation scheme called balanced permanent-JIPDA.
For state filtering, two different concepts are applied as well: the Random Matrix Framework and Measurement Generating Points. For the Random Matrix Framework, an alternative prediction method is first proposed to account for kinematic state changes in the extension state prediction as well. Secondly, various update methods are investigated to account for the polar-to-Cartesian noise transformation problem. The filtering concepts are combined with the new MD-JIPDA and their characteristics are analyzed in various Monte Carlo simulations.
In case an object can be modeled by a finite number of fixed Measurement Generating Points (MGP), a proposal to track these objects via a JIPDA filter is also made. In this context, a fast Track-to-Track fusion algorithm is proposed as well and compared against the MGP-JIPDA.
The proposed algorithms are evaluated in two applications where scanning is done using radar sensors only. The first application is a typical automotive scenario, where a passenger car is equipped with six radar sensors to cover its complete environment.
In this application, the locations of the measurements on an object can be considered stationary, and the object can be assumed to have a rectangular shape. Thus, the MGP-based algorithms are applied here. The filters are evaluated by tracking especially vehicles on nearside lanes.
The second application covers the tracking of vessels on inland waters. Here, two different kinds of radar systems are applied, but for both sensors a uniform distribution of the measurements over the target's extent can be assumed. Further, the assumption that the targets have an elliptical shape holds, so the Random Matrix Framework in combination with the MD-JIPDA is evaluated.
Exemplary test scenarios also illustrate the performance of this tracking algorithm.
Offline handwriting recognition systems often use LSTM networks trained with line or word images. Multi-line text makes it necessary to use segmentation to explicitly obtain these images. Skewed, curved, overlapping or incorrectly written text, as well as noise, can lead to errors during segmentation of multi-line text and reduce the overall recognition capacity of the system. The last year has seen the introduction of deep learning methods capable of segmentation-free recognition of whole paragraphs. Our method uses Conditional Random Fields to represent text and align it with the network output to calculate a loss function for training. Experiments are promising and show that the technique is capable of training an LSTM multi-line text recognition system.
Embodiments are generally related to the field of channel and source coding of data to be sent over a channel, such as a communication link or a data memory. Some specific embodiments are related to a method of encoding data for transmission over a channel, a corresponding decoding method, a coding device for performing one or both of these methods and a computer program comprising instructions to cause said coding device to perform one or both of said methods.
It is widely recognized that sustainability is a new challenge for many manufacturing companies. In this paper, we tackle this issue by presenting an approach that deals with material and substance compliance within Product Lifecycle Management in a complex value chain. Our analysis explains why, how and when sustainable manufacturing arises, and it identifies, quantifies and evaluates the environmental impact of a new product. We propose (I) a Life Cycle Assessment tool (LCA) and (II) a model to validate this approach and evaluate the risk of noncompliance in supply chain. Our LCA approach provides comprehensive information on environmental impacts of a product.
Product and materials cycles run in parallel and intersect, making it challenging to integrate the material selection process across Product Lifecycle Management and to integrate LCA with PLM. We provide only a foundation; further research in systems engineering is necessary. LCA is sensitive to data quality. Outsourcing production and problems in supplier cooperation can result in material mismatches (such as property or composition mismatches) in the production process, which may make LCA results misleading. This paper also describes research challenges of risk-based due diligence.
Algorithms for calculating the string edit distance are used, e.g., in information retrieval and document analysis systems or for the evaluation of text recognizers. Text recognition based on CTC-trained LSTM networks includes a decoding step to produce a string, possibly using a language model, and evaluation using the string edit distance. The decoded string can further be used as a query for database search, e.g., in document retrieval. We propose to closely integrate dictionary search with text recognition and to train both combined in a continuous fashion. This work shows that LSTM networks are capable of calculating the string edit distance while allowing for an exchangeable dictionary that separates the learned algorithm from the data. This could be a step towards integrating text recognition and dictionary search in one deep network.
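The reference computation here is the classic dynamic-programming string edit distance (Wagner–Fischer); a minimal sketch of it, as opposed to the LSTM-based approximation the abstract describes, is:

```python
def edit_distance(a: str, b: str) -> int:
    # Wagner-Fischer dynamic programme with a rolling row:
    # prev[j] holds the distance between the processed prefix of a and b[:j].
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                  # deletion
                           cur[j - 1] + 1,               # insertion
                           prev[j - 1] + (ca != cb)))    # substitution
        prev = cur
    return prev[-1]
```

For example, `edit_distance("kitten", "sitting")` yields 3 (two substitutions and one insertion).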
This work proposes a construction for low-density parity-check (LDPC) codes over finite Gaussian integer fields. Furthermore, a new channel model for codes over Gaussian integers is introduced and its channel capacity is derived. This channel can be considered as a first order approximation of the additive white Gaussian noise channel with hard decision detection where only errors to nearest neighbors in the signal constellation are considered. For this channel, the proposed LDPC codes can be decoded with a simple non-probabilistic iterative decoding algorithm similar to Gallager's decoding algorithm A.
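Gallager's decoding algorithm A belongs to the family of hard-decision bit-flipping decoders. As a rough illustration of the underlying idea — on a binary toy code, not on the Gaussian-integer LDPC codes proposed in the work — one can repeatedly flip the bits that participate in the most unsatisfied parity checks:

```python
import numpy as np

def bit_flip_decode(H, r, max_iter=20):
    # Gallager-style hard-decision bit flipping: while the syndrome is
    # nonzero, flip the bits involved in the most unsatisfied checks.
    r = r.copy()
    for _ in range(max_iter):
        syndrome = H @ r % 2
        if not syndrome.any():
            break
        counts = H.T @ syndrome            # unsatisfied checks per bit
        r[counts == counts.max()] ^= 1
    return r

# toy example on the (7,4) Hamming code: columns of H are 1..7 in binary
H = np.array([[(j >> k) & 1 for j in range(1, 8)] for k in range(2, -1, -1)])
received = np.zeros(7, dtype=int)
received[6] = 1                            # single error in the all-zero codeword
decoded = bit_flip_decode(H, received)
```

The decoder recovers the all-zero codeword; the actual algorithm A additionally uses message passing between variable and check nodes rather than this global flip rule.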
Knot placement for curve approximation is a well-known yet open problem in geometric modeling. Selecting knot values that yield good approximations is a challenging task, based largely on heuristics and user experience. More advanced approaches range from parametric averaging to genetic algorithms.
In this paper, we propose to use Support Vector Machines (SVMs) to determine suitable knot vectors for B-spline curve approximation. The SVMs are trained to identify locations in a sequential point cloud where knot placement will improve the approximation error. After the training phase, the SVM assigns a so-called score to each location in the point set. This score is based on geometric and differential-geometric features of the points and measures the quality of each location as a knot in the subsequent approximation. From these scores, the final knot vector can be constructed by exploring the topography of the score vector, without the need for iteration or optimization in the approximation process. Knot vectors computed with our approach outperform state-of-the-art methods and yield tighter approximations.
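How a knot vector might be read off the score topography can be sketched as follows; the scores, the threshold, and the helper `knots_from_scores` are illustrative assumptions, not the paper's actual construction:

```python
def knots_from_scores(params, scores, threshold=0.5):
    # Hypothetical post-processing: place knots at interior local maxima
    # of the score vector that exceed a threshold.
    knots = []
    for i in range(1, len(scores) - 1):
        if (scores[i] >= threshold
                and scores[i] >= scores[i - 1]
                and scores[i] > scores[i + 1]):
            knots.append(params[i])
    return knots

# invented parameter values and SVM scores for six points
params = [0.0, 0.2, 0.4, 0.6, 0.8, 1.0]
scores = [0.1, 0.9, 0.2, 0.6, 0.7, 0.3]
knots = knots_from_scores(params, scores)
```

Here the two score peaks at parameters 0.2 and 0.8 are selected as knot locations.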
We propose a novel end-to-end neural network architecture that, once trained, directly outputs a probabilistic clustering of a batch of input examples in one pass. It estimates a distribution over the number of clusters k, and for each 1≤k≤kmax, a distribution over the individual cluster assignment for each data point. The network is trained in advance in a supervised fashion on separate data to learn grouping by any perceptual similarity criterion based on pairwise labels (same/different group). It can then be applied to different data containing different groups. We demonstrate promising performance on high-dimensional data like images (COIL-100) and speech (TIMIT). We call this “learning to cluster” and show its conceptual difference to deep metric learning, semi-supervised clustering and other related approaches while having the advantage of performing learnable clustering fully end-to-end.
Know when you don't know
(2018)
Deep convolutional neural networks show outstanding performance in image-based phenotype classification given that all existing phenotypes are presented during the training of the network. However, in real-world high-content screening (HCS) experiments, it is often impossible to know all phenotypes in advance. Moreover, novel phenotype discovery itself can be an HCS outcome of interest. This aspect of HCS is not yet covered by classical deep learning approaches. When presenting an image with a novel phenotype to a trained network, it fails to indicate a novelty discovery but assigns the image to a wrong phenotype. To tackle this problem and address the need for novelty detection, we use a recently developed Bayesian approach for deep neural networks called Monte Carlo (MC) dropout to define different uncertainty measures for each phenotype prediction. With real HCS data, we show that these uncertainty measures allow us to identify novel or unclear phenotypes. In addition, we also found that the MC dropout method results in a significant improvement of classification accuracy. The proposed procedure used in our HCS case study can be easily transferred to any existing network architecture and will be beneficial in terms of accuracy and novelty detection.
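The core of the MC dropout idea — keep dropout active at prediction time, average many stochastic softmax outputs, and use the predictive entropy as an uncertainty measure — can be sketched with a hypothetical linear classifier (the actual method applies the same recipe to a trained deep network):

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def mc_dropout_entropy(x, W, T=200, p_drop=0.5):
    # T stochastic forward passes with dropout kept ON at test time;
    # the entropy of the averaged softmax serves as the uncertainty measure.
    probs = []
    for _ in range(T):
        mask = (rng.random(x.shape) >= p_drop).astype(float)
        h = x * mask / (1.0 - p_drop)          # inverted-dropout scaling
        probs.append(softmax(h @ W))
    mean = np.stack(probs).mean(axis=0)
    return -(mean * np.log(mean + 1e-12)).sum()

# hypothetical linear "network": three features mapped to three phenotypes
W = 5.0 * np.eye(3)
ent_known = mc_dropout_entropy(np.array([1.0, 0.0, 0.0]), W)   # clear phenotype
ent_novel = mc_dropout_entropy(np.array([1.0, 1.0, 1.0]), W)   # ambiguous input
```

An input that matches a known phenotype yields a lower predictive entropy than an ambiguous one, which is the signal used to flag potential novelties.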
Investigation of magnetic effects on austenitic stainless steels after low temperature carburization
(2018)
This work investigates the magnetic effects that can occur in austenitic stainless steels after low-temperature carburisation, depending on the alloy. Samples of different alloys were prepared and subjected to multiple low-temperature carburisation treatments to obtain different treatment conditions for each alloy. The layer characterisation was carried out by light microscopy and by hardness profiles and shows that the layer develops with each additional treatment cycle. A lattice expansion could be detected in all treated samples by X-ray diffraction. Magnetisability was measured using Feritscope and SQUID measurements; not all alloys showed magnetisability after treatment. In addition to MFM measurements, experiments with ferrofluid were used to visualise the magnetic areas. These studies show that only about half of the formed layer becomes magnetisable and has a domain-like structure.
Influence of Temperature on the Corrosion behaviour of Stainless Steels under Tribological Stress
(2018)
Poster presentation
Deep neural networks have been successfully applied to problems such as image segmentation, image super-resolution, colorization and image inpainting. In this work we propose the use of convolutional neural networks (CNNs) for image inpainting of large regions in high-resolution textures. Due to limited computational resources, processing high-resolution images with neural networks is still an open problem. Existing methods separate inpainting of global structure from the transfer of details, which leads to blurry results and a loss of global coherence in the detail transfer step. Based on advances in texture synthesis using CNNs, we propose patch-based image inpainting with a single network topology that is able to optimize for global as well as detail texture statistics. Our method is capable of filling large inpainting regions, often exceeding the quality of comparable methods on high-resolution images (2048×2048 px). For reference patch look-up, we propose to use the same summary statistics that are used in the inpainting process.
Corporate venturing has gained much attention due to challenges and changes that occur because of discontinuous innovations, which seem to be promoted by digitalization. In this context, open innovation has become a promising tool for established companies to strengthen their innovation capabilities. While the external opening of the innovation process has gained much attention, the internal opening lacks investigation. In particular, new organizational forms such as Internal Corporate Accelerators have not been investigated sufficiently. This study, which is based on 13 interviews from two German tech companies, contributes to a better understanding of this new form of corporate venturing and the resulting effects on organizational renewal.
This letter proposes two contributions to improve the performance of transmission with generalized multistream spatial modulation (SM). In particular, a modified suboptimal detection algorithm based on the Gaussian approximation method is proposed. The proposed modifications reduce the complexity of the Gaussian approximation method and improve the performance for high signal-to-noise ratios. Furthermore, this letter introduces signal constellations based on Hurwitz integers, i.e., a 4-D lattice. Simulation results demonstrate that these signal constellations are beneficial for generalized SM with two active antennas.
Further applications of the Cauchon algorithm to rank determination and bidiagonal factorization
(2018)
For a class of matrices connected with Cauchon diagrams, Cauchon matrices, and the Cauchon algorithm, a method for determining the rank, and for checking a set of consecutive row (or column) vectors for linear independence is presented. Cauchon diagrams are also linked to the elementary bidiagonal factorization of a matrix and to certain types of rank conditions associated with submatrices called descending rank conditions.
Today’s markets are characterized by fast and radical changes, posing an essential challenge to established companies. Startups, in contrast, seem to be more capable of developing radical innovations to succeed in those volatile markets. Established companies have therefore started to experiment with various approaches to implement startup-like structures in their organizations. Internal corporate accelerators (ICAs) are a novel form of corporate venturing, aiming to foster bottom-up innovations through intrapreneurship. However, ICAs still lack empirical investigation. This work contributes to a deeper understanding of the interface between the ICA and the core organization and of the respective support activities (resource access and support services) that create an innovation-supportive work environment for the intrapreneurial team. The results of this qualitative study, comprising 12 interviews with ICA teams from two German high-tech companies, show that the resources provided by ICAs differ from the support activities of external accelerators. Furthermore, some resources have both supportive and obstructive potential for the intrapreneurial teams within the ICA.
With the increased deployment of biometric authentication systems, some security concerns have also arisen. In particular, presentation attacks directed at the capture device pose a severe threat. In order to prevent them, liveness features such as the blood flow can be utilised to develop presentation attack detection (PAD) mechanisms. In this context, laser speckle contrast imaging (LSCI) is a technology widely used in biomedical applications to visualise blood flow. We therefore propose a fingerprint PAD method based on textural information extracted from pre-processed LSCI images. Subsequently, a support vector machine is used for classification. In the experiments conducted on a database comprising 32 different artefacts, the results show that the proposed approach correctly classifies all bona fide presentations. However, the LSCI technology experiences difficulties with thin and transparent overlay attacks.
Due to their structure of crossed yarns embedded in coating, woven fabric membranes are characterised by a highly nonlinear stress-strain behaviour. In order to determine an accurate structural response of membrane structures, a suitable description of the material behaviour is required. Typical phenomenological material models like linear-elastic orthotropic models only allow a limited determination of the real material behaviour. A more accurate approach becomes evident by focusing on the meso-scale, which reveals an inhomogeneous however periodic structure of woven fabrics. The present work focuses on an established meso-scale model. The novelty of this work is an enhancement of this model with regard to the coating stiffness. By performing an inverse process of parameter identification using a state-of-the-art Levenberg-Marquardt algorithm, a close fit w.r.t. measured data from a common biaxial test is shown and compared to results applying established models. Subsequently, the enhanced meso-scale model is processed into a multi-scale model and is implemented as a material law into a finite element program. Within finite element analyses of an exemplary full scale membrane structure by using the implemented material model as well as by using established material models, the results are compared and discussed.
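The inverse parameter identification rests on the standard Levenberg-Marquardt update, which blends Gauss-Newton steps with gradient descent via an adaptive damping factor. A generic sketch on a hypothetical two-parameter model — not the actual meso-scale membrane model or biaxial-test data — is:

```python
import numpy as np

def levenberg_marquardt(residual, p0, n_iter=100, lam=1e-2):
    # Minimal Levenberg-Marquardt with a forward-difference Jacobian:
    # solve (J^T J + lam*I) step = -J^T r and adapt the damping lam.
    p = np.asarray(p0, dtype=float)
    for _ in range(n_iter):
        r = residual(p)
        J = np.empty((r.size, p.size))
        eps = 1e-7
        for j in range(p.size):
            dp = np.zeros_like(p)
            dp[j] = eps
            J[:, j] = (residual(p + dp) - r) / eps
        step = np.linalg.solve(J.T @ J + lam * np.eye(p.size), -J.T @ r)
        if np.sum(residual(p + step) ** 2) < np.sum(r ** 2):
            p = p + step     # accept: behave more like Gauss-Newton
            lam *= 0.5
        else:
            lam *= 2.0       # reject: behave more like gradient descent
    return p

# hypothetical two-parameter model y = a * exp(b * x) fitted to synthetic data
x = np.linspace(0.0, 1.0, 20)
y = 2.0 * np.exp(1.5 * x)
p_fit = levenberg_marquardt(lambda p: p[0] * np.exp(p[1] * x) - y, [1.0, 1.0])
```

In the paper's setting, the residual would be the mismatch between the meso-scale model response and the measured biaxial-test data, and the parameters would include the coating stiffness.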
E-mobility in Tourism
(2018)
This article examines chances for and obstacles to e-mobility in tourism at the cross-border region of Lake Constance, Germany. Using secondary internet research, a database of key e-mobility supply factors was generated and visualized utilizing a geographical information system. The results show that fragmentation in infrastructure and information due to the cross-border situation of the four-country region is the main obstacle for e-mobility in tourism in the Lake Constance region. Cooperation and coordination of the supply side of e-mobility in the Lake Constance region turned out to be weak. To improve the chances of e-mobility in cross-border tourism a more client-oriented approach regarding information, accessibility, and conditions of use is necessary.
Simon Grimm examines new multi-microphone signal processing strategies that aim to achieve noise reduction and dereverberation. Therefore, narrow-band signal enhancement approaches are combined with broad-band processing in terms of directivity based beamforming. Previously introduced formulations of the multichannel Wiener filter rely on the second order statistics of the speech and noise signals. The author analyses how additional knowledge about the location of a speaker as well as the microphone arrangement can be used to achieve further noise reduction and dereverberation.
The article opens with brief examples of the varied contexts in which professionals with expert knowledge of intercultural communication are commissioned by organisations to help improve organisational and individual performance.
The development needs that are evident in these contexts entail a broader repertoire of competencies than those generally reflected in the term intercultural communication skills, and so the next section elaborates on the concept of intercultural interaction competence (ICIC), reporting on the numerous sub-competencies which go to make up the ability to perform joint and purposeful activity effectively and appropriately across cultures.
The third section reports on approaches to developing the ICIC of members of organisations, discussing desirable framework conditions for the development intervention, its possible goals, the nature of the cognitive, affective and behavioural development outcomes which can be achieved, and the content, methods and tools available to the intercultural developer.
The article finishes with a brief consideration of the qualification profile of interculturalists engaged in this kind of work, pointing out that a multi-disciplinary background will enable interculturalists to meet a broad range of organisational and human resource development needs in the area of intercultural communication which go beyond the ‘mere’ development of communicative foreign-language skills.
When integrating task-based learning into LSP courses, assessment procedures have to reflect the communicative goals of such tasks. However, performances on authentic and complex tasks are often difficult to score, because they involve not only specific language ability but also field specific knowledge. Moreover, the emphasis of tasks on meaning suggests that the communicative outcome should be regarded as the main criterion of success. This paper focuses on task design and scoring. It is argued that tasks have to be adapted to assessment purposes. However, to avoid a misalignment between teaching/learning and assessment, they should comprise the main features of LSP tasks in the sense of task-based language learning.
In this paper we present a method using deep learning to compute parametrizations for B-spline curve approximation. Existing methods consider the computation of parametric values and a knot vector as separate problems. We propose to train interdependent deep neural networks to predict parametric values and knots. We show that it is possible to include B-spline curve approximation directly into the neural network architecture. The resulting parametrizations yield tight approximations and are able to outperform state-of-the-art methods.
Corporate venturing is one way for corporations to introduce strategic renewal into their business portfolios, which is imperative for ongoing success in innovation-driven industries. Prior research finds that corporate ventures should be separated from the mainstream business in loosely coupled sub-units, but scholars continue to discuss how loose or tight the ventures should be to balance exploration and exploitation. Hence, the antecedents for successful venture management are yet to be fully explored, and our study contributes to this effort. The study shows that corporate venture success is enhanced when corporate management grants job and strategic autonomy to the venture managers. This is further amplified when corporate management simultaneously imposes an exploitative policy that forces venture managers to prioritize extensions to and improvements of existing competences and product-market offerings.
Long-term sleep monitoring can be done primarily in the home environment. Good patient acceptance requires low user and installation barriers. The selection of parameters in this approach is significantly limited compared to a PSG session. The aim is a qualified selection of parameters, which on the one hand allow a sufficiently good classification of sleep phases and on the other hand can be detected by non-invasive methods.
Business units are increasingly able to fuel the transformation that digitalization demands of organizations. They can implement Shadow IT (SIT) without involving a central IT department to create flexible and innovative solutions. Self-reinforcing effects lead to an intertwinement of SIT with the organization. As a result, high complexities, redundancies, and sometimes even lock-ins occur. IT integration suggests itself to meet these challenges. However, it can also eliminate the benefits that SIT presents. To help organizations in this area of conflict, we conduct a literature review, including a systematic search and an analysis from a systemic viewpoint using path dependency and switching costs. Our resulting conceptual framework for SIT integration drawbacks classifies the drawbacks into three dimensions. The first dimension consists of switching costs that account for the financial, procedural, and emotional drawbacks and the drawbacks from a loss of SIT benefits. The second dimension includes organizational, technical, and level-spanning criteria. The third dimension classifies the drawbacks into the global level, the local level, and the interaction between them. We contribute to the scientific discussion by introducing a systemic viewpoint to the research on Shadow IT. Practitioners can use the presented criteria to collect evidence to reach an IT integration decision.
Industrial growth and a rapidly growing world population have large impacts on the global environment and allocation of material resources. Most changes in the environment are brought about by human activities and these activities result in a flow of materials. The flows of resources from the natural environment to the economy are a prerequisite of production while flows of residuals from the economy to the environment are the consequence of production and consumption. A full understanding of these processes requires a complete description of the physical dimension of the economy and its interaction with the environment.
Comparison and Identifiability Analysis of Friction Models for the Dither Motion of a Solenoid
(2018)
In this paper, the mechanical subsystem of a proportional solenoid excited by a dither signal is considered. The objective is to find a suitable friction model that reflects the characteristic mechanical properties of the dynamic system. Several different friction models from the literature are compared. The friction models are evaluated with respect to their accuracy as well as their practical identifiability, the latter being quantified based on the Fisher information matrix.
Deep neural networks have become a veritable alternative to classic speaker recognition and clustering methods in recent years. However, while the speech signal clearly is a time series, and despite the body of literature on the benefits of prosodic (suprasegmental) features, identifying voices has usually not been approached with sequence learning methods. Only recently has a recurrent neural network (RNN) been successfully applied to this task, while the use of convolutional neural networks (CNNs) (which, unlike RNNs, are not able to capture arbitrary time dependencies) still prevails. In this paper, we show the effectiveness of RNNs for speaker recognition by improving state-of-the-art speaker clustering performance and robustness on the classic TIMIT benchmark. We provide arguments why RNNs are superior by experimentally showing a “sweet spot” of the segment length for successfully capturing prosodic information that has been theoretically predicted in previous work.
Research on Shadow IT is facing a conceptual dilemma in cases where previously "covert" systems developed by business entities (individual users, business workgroups, or business units) are integrated in the organizational IT management. These systems become visible, are therefore not "in the shadows" anymore, and subsequently do not fit to existing definitions of Shadow IT. Practice shows that some information systems share characteristics of Shadow IT, but are created openly in alignment with the IT department. This paper therefore proposes the term "Business-managed IT" to describe "overt" information systems developed or managed by business entities. We distinguish Business-managed IT from Shadow IT by illustrating case vignettes. Accordingly, our contribution is to suggest a concept and its delineation against other concepts. In this way, IS researchers interested in IT originated from or maintained by business entities can construct theories with a wider scope of application that are at the same time more specific to practical problems. In addition, value-laden terminology is complemented by a vocabulary that values potentially innovative developments by business entities more adequately. From a practical point of view, the distinction can be used to discuss the distribution of task responsibilities for information systems.
This thesis considers bounding functions for multivariate polynomials and rational functions over boxes and simplices. It also considers the synthesis of polynomial Lyapunov functions for obtaining the stability of control systems. Bounding the range of functions is an important issue in many areas of mathematics and its applications like global optimization, computer aided geometric design, robust control etc.
Objective: This paper presents an algorithm for non-invasive sleep stage identification using respiratory, heart rate and movement signals. The algorithm is part of a system suitable for long-term monitoring in a home environment, which should support experts analysing sleep. Approach: As there is a strong correlation between bio-vital signals and sleep stages, multinomial logistic regression was chosen for categorical distribution of sleep stages. Several derived parameters of three signals (respiratory, heart rate and movement) are input for the proposed method. Sleep recordings of five subjects were used for the training of a machine learning model and 30 overnight recordings collected from 30 individuals with about 27 000 epochs of 30 s intervals each were evaluated. Main results: The achieved rate of accuracy is 72% for Wake, NREM, REM (with Cohen's kappa value 0.67) and 58% for Wake, Light (N1 and N2), Deep (N3) and REM stages (Cohen's kappa is 0.50). Our approach has confirmed the potential of this method and disclosed several ways for its improvement. Significance: The results indicate that respiratory, heart rate and movement signals can be used for sleep studies with a reasonable level of accuracy. These inputs can be obtained in a non-invasive way applying it in a home environment. The proposed system introduces a convenient approach for a long-term monitoring system which could support sleep laboratories. The algorithm which was developed allows for an easy adjustment of input parameters that depend on available signals and for this reason could also be used with various hardware systems.
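The classifier at the heart of the method is standard multinomial logistic regression. A compact sketch on invented two-dimensional features — stand-ins for the derived respiratory, heart-rate and movement parameters, with three classes playing the role of Wake/NREM/REM — is:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def fit_multinomial(X, y, n_classes, lr=0.5, n_iter=500):
    # Plain batch gradient descent on the multinomial (softmax) cross-entropy.
    n, d = X.shape
    W = np.zeros((d, n_classes))
    b = np.zeros(n_classes)
    Y = np.eye(n_classes)[y]                  # one-hot targets
    for _ in range(n_iter):
        G = (softmax(X @ W + b) - Y) / n      # gradient of the mean loss
        W -= lr * X.T @ G
        b -= lr * G.sum(axis=0)
    return W, b

# synthetic epochs: three well-separated clusters of derived features
rng = np.random.default_rng(1)
centers = np.array([[0.0, 0.0], [3.0, 0.0], [0.0, 3.0]])
X = np.vstack([c + rng.normal(scale=0.5, size=(50, 2)) for c in centers])
y = np.repeat([0, 1, 2], 50)
W, b = fit_multinomial(X, y, 3)
accuracy = (softmax(X @ W + b).argmax(axis=1) == y).mean()
```

The softmax output directly provides the categorical distribution over sleep stages that the abstract refers to; real sleep data is far less separable than this toy example.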
Input–Output modellers are often faced with the task of estimating missing Use tables at basic prices and also valuation matrices of the individual countries. This paper examines a selection of estimation methods applied to the European context where the analysts are not in possession of superior data. The estimation methods are restricted to the use of automated methods that would require more than just the row and column sums of the tables (as in projections) but less than a combination of various conflicting information (as in compilation). The results are assessed against the official Supply, Use and Input–Output tables of Belgium, Germany, Italy, Netherlands, Finland, Austria and Slovakia by using matrix difference metrics. The main conclusion is that using the structures of previous years usually performs better than any other approach.
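One simple member of the family of matrix difference metrics used for such assessments is the weighted absolute percentage error over all table cells; whether it was among the metrics applied in the paper is an assumption:

```python
import numpy as np

def wape(estimate, official):
    # Weighted absolute percentage error: total absolute cell deviation
    # relative to the total size of the official table, in percent.
    estimate = np.asarray(estimate, dtype=float)
    official = np.asarray(official, dtype=float)
    return 100.0 * np.abs(estimate - official).sum() / np.abs(official).sum()

# toy 2x2 "Use table": the estimate misallocates 2 units off the diagonal
official = np.array([[10.0, 0.0], [0.0, 10.0]])
estimate = np.array([[9.0, 1.0], [1.0, 9.0]])
error_pct = wape(estimate, official)
```

Here 4 units of absolute deviation against a table total of 20 give an error of 20 percent; lower values indicate an estimation method closer to the official tables.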
This paper describes the effectiveness and efficiency of Virtual Reality training for a commissioning process. To this end, 500 picking orders with more than 2000 part-picking operations were carried out by 30 test persons and analyzed in the Modellfabrik Bodensee. The study points out the advantages and disadvantages of virtual training in comparison to a real execution of a picking process with and without any training.
The paper investigates an innovative actuator combination based on the magnetic shape memory technology. The actuator is composed of an electromagnet, which is activated to produce motion, and a magnetic shape memory element, which is used passively to yield multistability, i.e. the possibility of holding a position without input power. Based on the experimental open-loop frequency characterization of the actuator, a position controller is developed and tested in several experiments.
Advanced approaches for analysis and form finding of membrane structures with finite elements
(2018)
Part I deals with material modelling of woven fabric membranes. Due to their structure of crossed yarns embedded in coating, woven fabric membranes are characterised by a highly nonlinear stress-strain behaviour. In order to determine an accurate structural response of membrane structures, a suitable description of the material behaviour is required. A linear elastic orthotropic model approach, which is current practice, only allows a relatively coarse approximation of the material behaviour. The present work focuses on two different material approaches: A first approach becomes evident by focusing on the meso-scale. The inhomogeneous, however periodic, structure of woven fabrics motivates microstructural modelling. An established microstructural model is considered and enhanced with regard to the coating stiffness. Secondly, an anisotropic hyperelastic material model for woven fabric membranes is considered. By performing inverse processes of parameter identification, fits of the two different material models w.r.t. measured data from a common biaxial test are shown. The results of the inversely parametrised material models are compared and discussed.
Part II presents an extended approach for a simultaneous form finding and cutting patterning computation of membrane structures. The approach is formulated as an optimisation problem in which both the geometries of the equilibrium and cutting patterning configuration are initially unknown. The design objectives are minimum deviations from prescribed stresses in warp and fill direction along with minimum shear deformation. The equilibrium equations are introduced into the optimisation problem as constraints. Additional design criteria can be formulated (for the geometry of seam lines etc.). Similar to the motivation for the Updated Reference Strategy [4] the described problem is singular in the tangent plane. In both the equilibrium and the cutting patterning configuration finite element nodes can move without changing stresses. Therefore, several approaches are presented to stabilise the algorithm. The overall result of the computation is a stressed equilibrium and an unstressed cutting patterning geometry. The interaction of both configurations is described in Total Lagrangian formulation.
The microstructural model that is the focus of Part I is applied. Based on this approach, information about the fibre orientation as well as the ending of fibres at cutting edges is available. As a result, more accurate results can be computed compared to the simpler approaches commonly used in practice.
When designing drying processes for sensitive biological foodstuffs like fruit or vegetables, energy and time efficiency as well as product quality are gaining more and more importance. These all are greatly influenced by the different drying parameters (e.g. air temperature, air velocity and dew point temperature) in the process. In sterilization of food products the cooking value is widely used as a cross-link between these parameters. In a similar way, the so-called cumulated thermal load (CTL) was introduced for drying processes. This was possible because most quality changes mainly depend on drying air temperature and drying time. In a first approach, the CTL was therefore defined as the time integral of the surface temperature of agricultural products. When conducting experiments with mangoes and pineapples, however, it was found that the CTL as it was used had to be adjusted to a more practical form. So the definition of the CTL was improved and the behaviour of the adjusted CTL (CTLad) was investigated in the drying of pineapples and mangoes. On the basis of these experiments and the work that had been done on the cooking value, it was found, that more optimization on the CTLad had to be done to be able to compare a great variety of different products as well as different quality parameters.
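The first-approach CTL, defined as the time integral of the product surface temperature, can be approximated numerically with the trapezoidal rule; the temperature profile below is invented for illustration:

```python
import numpy as np

def cumulated_thermal_load(t_min, surface_temp_c):
    # First-approach CTL: time integral of the product surface temperature,
    # approximated here with the trapezoidal rule.
    t = np.asarray(t_min, dtype=float)
    T = np.asarray(surface_temp_c, dtype=float)
    return float(np.sum((T[1:] + T[:-1]) / 2.0 * np.diff(t)))

# invented 3 h drying run with the surface temperature settling at 60 degC
t = [0.0, 60.0, 120.0, 180.0]            # drying time in minutes
temp = [40.0, 55.0, 60.0, 60.0]          # surface temperature in degC
ctl = cumulated_thermal_load(t, temp)    # in degC * min
```

The adjusted CTL discussed in the abstract modifies this basic definition, but the integral over the temperature-time history remains the underlying quantity.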
The introduction of multiple-level cell (MLC) and triple-level cell (TLC) technologies reduced the reliability of flash memories significantly compared with single-level cell flash. With MLC and TLC flash cells, the error probability varies for the different states. Hence, asymmetric models are required to characterize the flash channel, e.g., the binary asymmetric channel (BAC). This contribution presents a combined channel and source coding approach improving the reliability of MLC and TLC flash memories. With flash memories data compression has to be performed on block level considering short-data blocks. We present a coding scheme suitable for blocks of 1 kB of data. The objective of the data compression algorithm is to reduce the amount of user data such that the redundancy of the error correction coding can be increased in order to improve the reliability of the data storage system. Moreover, data compression can be utilized to exploit the asymmetry of the channel to reduce the error probability. With redundant data, the proposed combined coding scheme results in a significant improvement of the program/erase cycling endurance and the data retention time of flash memories.
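The benefit of exploiting the channel asymmetry can be sketched numerically. The crossover probabilities below are illustrative assumptions, not measured flash parameters, and the calculation only shows the principle of biasing data toward the more reliable symbol:

```python
def bac_bit_error_rate(p01, p10, frac_ones):
    """Average bit error probability on a binary asymmetric channel (BAC).

    p01: probability that a stored 0 is read as 1
    p10: probability that a stored 1 is read as 0
    frac_ones: fraction of ones in the programmed data
    All parameter values used below are illustrative assumptions.
    """
    return (1.0 - frac_ones) * p01 + frac_ones * p10

# If ones are less reliable (p10 > p01), biasing the programmed data
# toward zeros lowers the average bit error rate:
uniform = bac_bit_error_rate(p01=1e-4, p10=1e-3, frac_ones=0.5)
biased  = bac_bit_error_rate(p01=1e-4, p10=1e-3, frac_ones=0.3)
print(uniform > biased)
```

Compression plays a double role in this picture: it frees redundancy for stronger error correction, and the freed bits can be chosen so that the programmed data favours the more reliable channel input.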
Generalized concatenated (GC) codes with soft-input decoding were recently proposed for error correction in flash memories. This work proposes a soft-input decoder for GC codes that is based on a low-complexity bit-flipping procedure. This bit-flipping decoder uses a fixed number of test patterns and an algebraic decoder for soft-input decoding. An acceptance criterion for the final candidate codeword is proposed. Combined with error and erasure decoding of the outer Reed-Solomon codes, this bit-flipping decoder can improve the decoding performance and reduce the decoding complexity compared to the previously proposed sequential decoding. The bit-flipping decoder achieves a decoding performance similar to a maximum likelihood decoder for the inner codes.
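The bit-flipping idea, a fixed set of test patterns over the least reliable positions plus an acceptance rule, can be illustrated with a toy inner code. This sketch substitutes a single parity-check code for the actual inner codes and a simple correlation metric for the proposed acceptance criterion; both substitutions are assumptions for illustration only:

```python
from itertools import product

def bit_flip_decode(llrs, is_codeword, num_flip_positions=3):
    """Soft-input decoding with a fixed number of test patterns.

    Flips subsets of the least reliable positions, keeps candidates
    accepted by the (here: toy) algebraic decoder `is_codeword`, and
    returns the candidate best correlated with the channel values.
    """
    hard = [1 if l < 0 else 0 for l in llrs]            # hard decisions
    order = sorted(range(len(llrs)), key=lambda i: abs(llrs[i]))
    weak = order[:num_flip_positions]                   # least reliable bits
    best, best_metric = None, float("-inf")
    for flips in product([0, 1], repeat=len(weak)):     # fixed 2^L patterns
        cand = hard[:]
        for pos, f in zip(weak, flips):
            cand[pos] ^= f
        if not is_codeword(cand):                       # acceptance check
            continue
        metric = sum((1 - 2 * c) * l for c, l in zip(cand, llrs))
        if metric > best_metric:
            best, best_metric = cand, metric
    return best

# Toy inner code: single parity-check (even overall parity)
spc = lambda word: sum(word) % 2 == 0
print(bit_flip_decode([2.1, -1.8, 0.2, 3.0], spc))  # [0, 1, 1, 0]
```

The hard decision [0, 1, 0, 0] violates the parity check; flipping the least reliable bit (the one with |LLR| = 0.2) yields the accepted candidate with the best correlation. The fixed pattern count is what keeps the complexity below that of sequential decoding.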
A constructive method for the design of nonlinear observers is discussed. To formulate conditions for the construction of the observer gains, stability results for nonlinear singularly perturbed systems are utilised. The nonlinear observer is designed directly in the given coordinates, where the error dynamics between the plant and the observer become singularly perturbed by a high-gain part of the observer injection, and the information of the slow manifold is exploited to construct the observer gains of the reduced-order dynamics. This is in contrast to typical high-gain observer approaches, where the observer gains are chosen such that the nonlinearities are dominated by a linear system. It is demonstrated that the considered approach is particularly suited for self-sensing electromechanical systems. Two variants of the proposed observer design are illustrated for a nonlinear electromagnetic actuator, where the mechanical quantities, i.e. the position and the velocity, are not measured.
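For contrast, the textbook high-gain observer that the text distinguishes itself from can be sketched for a generic second-order system x1' = x2, x2' = phi(x) with measured output y = x1. This is not the proposed singular-perturbation design; the gains and the value of eps are illustrative assumptions:

```python
import math

def high_gain_observer_step(xhat, y, phi, eps, dt, h1=2.0, h2=1.0):
    """One Euler step of a textbook high-gain observer for
        x1' = x2,  x2' = phi(x),  y = x1.

    For small eps, the injection gains h1/eps and h2/eps**2 dominate the
    nonlinearity phi: this is the linear-domination approach the text
    contrasts with the proposed singular-perturbation-based design.
    """
    x1, x2 = xhat
    e = y - x1                                   # output estimation error
    dx1 = x2 + (h1 / eps) * e
    dx2 = phi(xhat) + (h2 / eps ** 2) * e
    return (x1 + dt * dx1, x2 + dt * dx2)

# Example: estimate both states of x'' = -x from position measurements
xhat, eps, dt = (0.0, 0.0), 0.05, 1e-3
for k in range(5000):
    y = math.sin(k * dt)                         # true x1 of x'' = -x
    xhat = high_gain_observer_step(xhat, y, lambda x: -x[0], eps, dt)
print(abs(xhat[0] - math.sin(5.0)) < 1e-2)       # estimate tracks x1
```

Note how decreasing eps makes the injection terms grow without bound, which is precisely the noise-sensitivity drawback that motivates designs exploiting the slow-manifold structure instead.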
Online-based business models, such as shopping platforms, have opened up new possibilities for consumers over the last two decades. Aside from basic differences to other distribution channels, customer reviews on such platforms have become a powerful tool that gives consumers an additional source of transparency. Related research has, for the most part, been labelled under the term electronic word-of-mouth (eWOM). An approach providing a theoretical basis for this phenomenon is presented here. It is mainly based on work in the field of consumer culture theory (CCT) and on the concept of co-creation. The work of several authors in these streams of research is used to construct a culturally informed resource-based theory, as advocated by Arnould & Thompson and Algesheimer & Gurâu.