This work studies a wind noise reduction approach for communication applications in a car environment. An endfire array consisting of two microphones is considered as a substitute for an ordinary cardioid microphone capsule of the same size. Using the decomposition of the multichannel Wiener filter (MWF), a suitable beamformer and a single-channel post filter are derived. Due to the known array geometry and the location of the speech source, assumptions about the signal properties can be made to simplify the MWF beamformer and to estimate the speech and noise power spectral densities required for the post filter. Even for closely spaced microphones, the different signal properties at the microphones can be exploited to achieve a significant reduction of wind noise. The proposed beamformer approach results in an improved speech signal regarding the signal-to-noise ratio and keeps the linear speech distortion low. The derived post filter shows equal performance compared to known approaches but reduces the effort for noise estimation.
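The MWF decomposition the abstract refers to (an MVDR-type beamformer followed by a single-channel Wiener post filter) can be sketched for a single frequency bin as follows; the steering vector and PSD values below are illustrative placeholders, not the paper's estimates:

```python
import numpy as np

def mwf_decomposition(d, Phi_n, phi_s):
    """Decompose the multichannel Wiener filter (one frequency bin) into an
    MVDR beamformer and a single-channel Wiener post filter.

    d     : steering vector of the speech source (length M)
    Phi_n : M x M noise power spectral density matrix
    phi_s : speech PSD at the reference position
    """
    Phi_n_inv = np.linalg.inv(Phi_n)
    denom = np.real(d.conj() @ Phi_n_inv @ d)
    w_mvdr = Phi_n_inv @ d / denom        # distortionless beamformer
    phi_n_out = 1.0 / denom               # residual noise PSD at the output
    g = phi_s / (phi_s + phi_n_out)       # Wiener post-filter gain
    return w_mvdr, g

# Two-microphone endfire array (illustrative numbers)
d = np.array([1.0, np.exp(-1j * 0.4)])
Phi_n = np.array([[1.0, 0.2], [0.2, 1.0]], dtype=complex)
w, g = mwf_decomposition(d, Phi_n, phi_s=4.0)
```

The distortionless constraint w^H d = 1 of the MVDR part is what keeps the linear speech distortion low; all residual noise suppression is delegated to the scalar post-filter gain g.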
Visualization-Assisted Development of Deep Learning Models in Offline Handwriting Recognition
(2018)
Deep learning is a field of machine learning that has been the focus of active research and successful applications in recent years. Offline handwriting recognition is one of the research fields and applications where deep neural networks have shown high accuracy. Deep learning models and their training pipeline involve a large number of hyper-parameters in their data selection, transformation, network topology and training process, some of which are interdependent. This increases the overall difficulty and time necessary for building and training a model for a specific data set and task at hand. This work proposes a novel visualization-assisted workflow that guides the model developer through the hyper-parameter search in order to identify relevant parameters and modify them in a meaningful way. This decreases the overall time necessary for building and training a model. The contributions of this work are a workflow for hyper-parameter search in offline handwriting recognition and a heat-map-based visualization technique for deep neural networks in multi-line offline handwriting recognition. This work applies to offline handwriting recognition, but the general workflow can possibly be adapted to other tasks as well.
Although the Hospice Foundation in Constance knew they had a personnel
problem, they were unsure how to begin to fix it. In addition to difficulties in
finding and keeping employees, the Hospice Foundation’s employees were
often on sick leave, adding pressure on remaining staff. Twelve communication
design students in the master's program at the University of Applied
Sciences in Constance (HTWG Konstanz) conducted a study aimed at
identifying the causes of these problems and, more generally, understanding
how the employees work and feel. Even though the methods in this
study are well known, it presents an important prototype for designers and
design researchers because of its success in finding useful insights. It also
serves as a pre-design project briefing for both management and designers.
It demonstrates the usefulness of qualitative methods in providing a deeper
understanding of a complex situation, as well as their value as a strategic tool
for defining a project's focus and scope. Ideally, it also provides insights
into health care for the elderly.
Business units are increasingly able to fuel the transformation that digitalization demands of organizations. To do so, they implement Shadow IT (SIT) to create flexible and innovative solutions. However, the individual implementation of SIT leads to high complexity and redundancy. Integration suggests itself to meet these challenges but can also eliminate the described benefits. In this emergent research, we develop propositions for a conceptual decision framework that balances the benefits and drawbacks of an integration of SIT, using a literature review as well as a multiple-case study. We thereby integrate the perspective of the overall organization as well as that of the specific business unit. We then pose six propositions regarding SIT integration that will serve to evaluate our conceptual framework in future research.
New Technology-Based Firms (NTBFs) learn their business in the early stages of their life-cycle. As a central element of the entrepreneurial learning process, the business model describes the value-creation functions that are conceptualized in different stages of the NTBF's life-cycle. Transaction relations connect the model with the business reality and ideally mature in strength over time into a functioning value-network. This chapter describes the development of a research design that determines, extracts, and evaluates semantic constructs of this entrepreneurial learning from a convenience sample comprising three cohorts of business plans submitted to a business plan award between 2008 and 2010. The analysis shows empirical evidence for the survival and growth of those NTBFs that exhibit a balanced status of entrepreneurial learning in the maturity of the value-network, which can be characterized as the early startup stage. The empirical findings of the network-theory-based business plan analysis allow for a better explanation of the performance in the entrepreneurial process that is discussed for NTBFs based on the theory of organizational learning.
The Role of Support-Activities for the successful Implementation of Internal Corporate Accelerators
(2018)
To get a better understanding of Cross-Site Scripting vulnerabilities, we investigated 50 randomly selected CVE reports related to open source projects. The vulnerable and patched source code was manually reviewed to find out what kinds of source code patterns were used. Source code pattern categories were found for sources, concatenations, sinks, HTML context and fixes. Our resulting categories are compared to categories from CWE. A source code sample which might have led developers to believe that the data was already sanitized is described in detail. For the different HTML context categories, the necessary Cross-Site Scripting prevention mechanisms are described.
We investigated 50 randomly selected buffer overflow vulnerabilities in Firefox. The source code of these vulnerabilities and the corresponding patches were manually reviewed and patterns were identified. Our main contributions are taxonomies of errors, sinks and fixes seen from a developer's point of view. The results are compared to the CWE taxonomy with an emphasis on vulnerability details. Additionally, some ideas are presented on how the taxonomy could be used to improve software security education.
Error correction coding based on soft-input decoding can significantly improve the reliability of non-volatile flash memories. This work proposes a soft-input decoder for generalized concatenated (GC) codes. GC codes are well suited for error correction in flash memories for high reliability data storage. We propose GC codes constructed from inner extended binary Bose-Chaudhuri-Hocquenghem (BCH) codes and outer Reed-Solomon codes. The extended BCH codes enable an efficient hard-input decoding. Furthermore, a low-complexity soft-input decoding method is proposed. This bit-flipping decoder uses a fixed number of test patterns and an algebraic decoder for soft-decoding. An acceptance criterion for the final candidate codeword is proposed. Combined with error and erasure decoding of the outer Reed-Solomon codes, this acceptance criterion can improve the decoding performance and reduce the decoding complexity. The presented simulation results show that the proposed bit-flipping decoder in combination with outer error and erasure decoding can outperform maximum likelihood decoding of the inner codes.
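The bit-flipping idea with a fixed number of test patterns can be illustrated with a toy Chase-style decoder; the Hamming(7,4) codebook and nearest-codeword search below merely stand in for the extended BCH code and its algebraic decoder:

```python
import numpy as np
from itertools import product

# Hamming(7,4) codebook as a stand-in for the inner extended BCH code
G = np.array([[1,0,0,0,1,1,0],
              [0,1,0,0,1,0,1],
              [0,0,1,0,0,1,1],
              [0,0,0,1,1,1,1]])
CODEBOOK = np.array([(np.array(m) @ G) % 2 for m in product([0,1], repeat=4)])

def hard_decode(word):
    """Nearest-codeword search, standing in for algebraic hard-input decoding."""
    return CODEBOOK[np.argmin(np.sum(CODEBOOK != word, axis=1))]

def bit_flip_decode(llr, num_flip=3):
    """Soft-input decoding with a fixed number of test patterns: flip the
    least reliable bits in all combinations, hard-decode each pattern, and
    keep the candidate codeword with the best soft correlation metric."""
    hard = (llr < 0).astype(int)
    weakest = np.argsort(np.abs(llr))[:num_flip]
    best, best_metric = None, -np.inf
    for flips in product([0, 1], repeat=num_flip):
        test = hard.copy()
        test[weakest] ^= np.array(flips)
        cand = hard_decode(test)
        metric = np.sum(llr * (1 - 2 * cand))   # correlation with the LLRs
        if metric > best_metric:
            best, best_metric = cand, metric
    return best

# All-zero codeword sent; the channel LLRs contain one sign error
llr = np.array([2.0, 1.8, -0.3, 2.2, 0.2, 1.9, 2.1])
decoded = bit_flip_decode(llr)
```

The correlation metric of the winning candidate can also serve as a simple acceptance criterion of the kind the abstract mentions: a low metric would flag the codeword as unreliable and mark it as an erasure for the outer Reed-Solomon decoder.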
Generalised concatenated (GC) codes are well suited for error correction in flash memories for high-reliability data storage. The GC codes are constructed from inner extended binary Bose–Chaudhuri–Hocquenghem (BCH) codes and outer Reed–Solomon codes. The extended BCH codes enable high-rate GC codes and low-complexity soft input decoding. This work proposes a decoder architecture for high-rate GC codes. For such codes, outer error and erasure decoding are mandatory. A pipelined decoder architecture is proposed that achieves a high data throughput with hard input decoding. In addition, a low-complexity soft input decoder is proposed. This soft decoding approach combines a bit-flipping strategy with algebraic decoding. The decoder components for the hard input decoding can be utilised, which reduces the overhead for the soft input decoding. Nevertheless, the soft input decoding achieves a significant coding gain compared with hard input decoding.
This paper presents a bed system able to analyze a person's movement and breathing and to recognize the positions in which the subject lies on the bed during the night, without any additional physical contact. The measurements are performed with sensors placed between the mattress and the bed frame. An Intel Edison board was used as an endpoint that served as a communication node from the mesh network to an external service. Two nodes and the Intel Edison are attached to the bottom of the bed frame and connected to the sensors. First test results have indicated the potential of the proposed approach for the recognition of sleep positions, with 83% of positions correctly recognized.
Research on Shadow IT is facing a conceptual dilemma in cases where previously “covert” systems developed by business entities are integrated in the organizational IT management. These systems become visible, are thus no longer “in the shadows”, and subsequently do not fit existing definitions of Shadow IT. Practice shows that some information systems share characteristics of Shadow IT but are created openly in alignment with the IT organization. This paper proposes the term “Business-managed IT” to describe “overt” information systems developed or managed by business entities and distinguishes it from Shadow IT by illustrating case vignettes. Accordingly, our contribution is to suggest a concept and its delineation against other concepts. In this way, IS researchers interested in IT originating from or maintained by business entities can construct theories with a wider scope of application that are at the same time more specific to practical problems. In addition, the terminology makes it possible to value potentially innovative developments by business entities more adequately.
The process of restoring our body and brain from fatigue depends directly on the quality of sleep, which can be determined from the results of a sleep study. Classification of sleep stages is the first step of such a study and includes the measurement of bio-vital data and its further processing.
In this work, the sleep analysis system is based on a hardware sensor net, namely a grid of 24 pressure sensors, supporting sleep phase recognition. In comparison to the leading standard, which is polysomnography, the proposed approach is a non-invasive system. It recognises respiration and body movement with only one type of low-cost pressure sensor forming a mesh architecture. The nodes are implemented as a series of pressure sensors connected to a low-power yet performant microcontroller. All nodes are connected via a system-wide bus with address arbitration. The embedded processor is the mesh network endpoint that enables network configuration, storing and pre-processing of the data, external data access and visualization.
The system was tested in experiments recording the sleep of different healthy young subjects. The results obtained have indicated the potential to detect breathing rate and body movement. A major difference of this system in comparison to other approaches is the innovative placement of the sensors under the mattress. This characteristic facilitates continuous use of the system without any influence on the normal sleep process.
SDG Voyager - A practical guide to align business excellence with Sustainable Development Goals
(2018)
By now, an inflationary number of international publications on the topic of “Agenda 2030” exist. What seems to remain unanswered to this day, however, is the question of how the CSR management of a company can make a concrete contribution to the SDGs. Instead of unilaterally demanding that companies report their sustainability activities, the SDG Voyager starts earlier in the process, with the intention of encouraging companies of all sizes to become familiar with the fields of action for corporate responsibility and to attend to these issues without feeling overwhelmed. Many companies will find that they are already making a big contribution to sustainable development in a number of fields. In other areas, however, there will still be an urgent need for action. The SDG Voyager aims to acquaint companies with these topics and support them in fulfilling their responsibilities towards their stakeholders and society.
We identify 74 generic, reusable technical requirements based on the GDPR that can be applied to software products which process personal data. The requirements can be traced to the corresponding articles and recitals of the GDPR and fulfill the key principles of lawfulness and transparency. On this basis, we present an approach to requirements engineering for developing legally compliant software that satisfies the principles of privacy by design, privacy by default and security by design.
We consider the problem of increasing the informative value of electrocardiographic (ECG) surveys using data from multichannel electrocardiographic leads, which include both the recorded electrocardiographic signals and the coordinates of the electrodes placed on the surface of the human torso. In this area, we were interested in reconstructing the surface distribution of the equivalent sources during the cardiac cycle at relatively low hardware cost. In our work, we propose to reconstruct the equivalent electrical sources by numerical methods, based on the integral connection between the density of electrical sources and the potential in a conductive medium. We consider maps of the distributions of equivalent electric sources on the heart surface (HSSM), presenting source distributions in the form of a simple or double electrical layer. We indicate the dynamics of the heart's electrical activity by the space-time mapping of equivalent electrical sources in the HSSM.
The overall goal of this work is to detect and analyze a person's movement, breathing and heart rate during sleep in a common bed overnight without any additional physical contact. The measurement is performed with the help of sensors placed between the mattress and the frame. A two-stage pattern classification algorithm has been implemented that applies statistical analysis to recognize the position of patients. The system is implemented as a sensor network hosting several nodes and communication endpoints to support quick and efficient classification. The overall tests show convincing results for the position recognition and a reasonable overlap in matching.
According to the World Food Organization, nearly half of all root and tuber crops worldwide are not consumed but are lost to inappropriate storage and other post-harvest causes. In developing countries such as Ethiopia, potatoes are not dried but are traditionally stored in potato clamps. So far, dried potatoes have not been converted into usable foods.
The aim of the present work is to convert potatoes - perishable roots and tubers - into stable products by hot air drying. Hot air dryers are economical to operate in industrialized countries; in Africa, they remain the preserve of larger industrial companies. In regions with a tropical climate, however, the use of solar tunnel dryers is worthwhile. These are a good choice for farms and small industries and wherever electrical energy is difficult or impossible to obtain.
In the first part of the work, the drying process of potatoes was investigated, in particular with regard to the change of thermal, mechanical and chemical quality parameters. An evaluation of the literature showed that potatoes are not subject to quality changes if the water activity is below a value of 0.2. In order to determine the water content associated with this value at storage temperature, the known equations for the sorption equilibrium were evaluated and verified with our own experimental investigations. This determined the end point of the drying process.
The subsequent experimental investigations showed a process-dependent change of quality criteria such as color, shrinkage and mechanical properties as well as of the content of value-determining substances such as vitamin C and starch. The differences in the course and magnitude of the quality changes were attributed to the glass transition that takes place during the drying process. For the determination of the glass transition temperature, a new, simple method based on the measurement of mechanical properties was developed. Knowledge of the glass transition temperature made it possible to optimize the drying process, which could be carried out in the rubbery or glassy region, depending on the expected quality changes. Thus, all information was available to produce high-quality dried potatoes in an industrial process.
Since the production of potato products in less industrialized regions without sufficient supply of electrical energy should be included, potatoes were dried with a solar tunnel dryer. Examination of the quality properties mentioned above confirmed the process-dependent quality changes.
Finally, the dried product was ground and with the flour thus produced, wheat flour was replaced for baking bread. An evaluation of the finished bread by a panel showed that the acceptance of the bread according to the new recipe was high, also with regard to baking volume, taste, texture and color.
This work shows that drying can transform potatoes into a well-accepted, storable and easily transportable product, minimizing the risk of losses or degradation. The product can be made at an industrial as well as at a farm level. If the influence of the glass transition is taken into account, it is possible to optimize the quality of the product.
Optical surface inspection: A novelty detection approach based on CNN-encoded texture features
(2018)
In inspection systems for textured surfaces, a reference texture is typically known before novel examples are inspected. Mostly, the reference is only available in a digital format. As a consequence, there is no dataset of defective examples available that could be used to train a classifier. We propose a texture model approach to novelty detection. The texture model uses features encoded by a convolutional neural network (CNN) trained on natural image data. The CNN activations represent the specific characteristics of the digital reference texture which are learned by a one-class classifier. We evaluate our novelty detector in a digital print inspection scenario. The inspection unit is based on a camera array and a flashing light illumination which allows for inline capturing of multichannel images at a high rate. In order to compare our results to manual inspection, we integrated our inspection unit into an industrial single-pass printing system.
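A minimal sketch of the novelty-detection stage, with randomly generated vectors standing in for the CNN-encoded texture features and a Gaussian model with Mahalanobis scoring standing in for the one-class classifier:

```python
import numpy as np

def fit_reference(feats, eps=1e-6):
    """Fit a Gaussian model to feature vectors of the defect-free reference
    texture (a simple stand-in for the one-class classifier)."""
    mu = feats.mean(axis=0)
    cov = np.cov(feats, rowvar=False) + eps * np.eye(feats.shape[1])
    return mu, np.linalg.inv(cov)

def novelty_score(x, mu, cov_inv):
    """Mahalanobis distance of a test patch's features to the reference model;
    large scores indicate a potential defect."""
    d = x - mu
    return float(np.sqrt(d @ cov_inv @ d))

rng = np.random.default_rng(0)
# Hypothetical CNN-encoded features: 200 reference patches, 8 dimensions each
reference = rng.normal(0.0, 1.0, size=(200, 8))
mu, cov_inv = fit_reference(reference)

inlier  = novelty_score(rng.normal(0.0, 1.0, size=8), mu, cov_inv)
outlier = novelty_score(np.full(8, 6.0), mu, cov_inv)   # far from the reference
```

In the real system, patches whose score exceeds a threshold calibrated on the defect-free digital reference would be flagged as defective.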
A constructive nonlinear observer design for self-sensing of digital (ON/OFF) single coil electromagnetic actuators is studied. Self-sensing in this context means that solely the available energizing signals, i.e., coil current and driving voltage, are used to estimate the position and velocity trajectories of the moving plunger. A nonlinear sliding mode observer is considered, where the stability of the reduced error dynamics is analyzed by the equivalent control method. No simplifications are made regarding magnetic saturation and eddy currents in the underlying dynamical model. The observer gains are constructed by taking into account some generic properties of the system's nonlinearities. Two possible choices of the observer gains are discussed. Furthermore, an observer-based tracking control scheme to achieve sensorless soft landing is considered and its closed-loop stability is studied. Experimental results for observer-based soft landing of a fast-switching solenoid valve under dry conditions are presented to demonstrate the usefulness of the approach.
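In generic terms (the notation below is illustrative, not the paper's exact model), such a sliding-mode observer copies the plant dynamics and injects a signum term of the output error:

```latex
\dot{x} = f(x,u), \qquad y = h(x) \quad \text{(measured coil current)}

\dot{\hat{x}} = f(\hat{x},u) + L\,\operatorname{sign}\!\bigl(y - h(\hat{x})\bigr)
```

Once the sliding manifold y = h(x̂) is reached in finite time, the equivalent control method replaces the discontinuous injection by its average value in order to analyze the reduced error dynamics; the gains L are then constructed from generic properties of the nonlinearities, as the abstract describes.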
Sleep studies can be used to assess sleep quality and bed behavior in general. These results can be helpful for regulating sleep and recognizing different human sleep disorders. In comparison to the leading standard measuring system, which is polysomnography (PSG), the system proposed in this work is a non-invasive sleep monitoring device. For continuous analysis or home use, PSG or wearable actigraphy devices tend to be uncomfortable. Besides, these methods not only decrease practicality due to the process of having to put them on, but they are also very expensive. The system proposed in this paper classifies respiration and body movement with only one type of sensor and in a non-invasive way. The sensor used is a low-cost pressure sensor that can be used for commercial purposes. The system was tested in an experiment that recorded the sleep process of a subject. These recordings showed excellent results in the classification of breathing rate and body movements.
Product development and product manufacturing are entering a new era, namely an era in which engineering tasks are executed in collaboration among all involved parties. Engineers and potential customers work together, mainly in a virtual world, on the design and realization of the product. We address this so-called “crowdsourcing” trend in the automotive industry, which lowers cost and accelerates the production of new cars. Current practice and prior studies fail to handle data management and collaboration aspects in sufficient detail. We propose a PLM-based crowdsourcing platform that applies best practices to the established approach and extends it with new methods for handling specific requirements. Our work provides a basis for establishing an improved collaboration platform to support a gig economy in the automotive industry.
This paper focuses on the multivariable control of a drawing tower process. The nature of the process, together with the differences in the measurement noise levels that affect the variables to be controlled, motivated the development of a new MPC algorithm. An extension of a multivariable predictive control algorithm with separated prediction horizons is proposed. The obtained experimental results show the usefulness of the proposed algorithm.
Autonomous moving systems require very detailed information about their environment and potentially colliding objects. Thus, the systems are equipped with high-resolution sensors. These sensors have the property of generating more than one detection per object per time step. This results in additional complexity for the target tracking algorithm, since standard tracking filters assume that an object generates at most one detection per time step. This requires new methods for data association and system state filtering.
As new data association methods, this thesis proposes two different extensions of the Joint Integrated Probabilistic Data Association (JIPDA) filter that assign more than one detection to a track.
The first method introduced is a generalization of the JIPDA that assigns a variable number of measurements to each track based on predefined statistical models, which will be called Multi Detection - Joint Integrated Probabilistic Data Association (MD-JIPDA).
Since this scheme suffers from an exponential increase in association hypotheses, a new approximation scheme is also presented. The second method is an extension for the special case where the number and locations of measurements are known a priori. In preparation for this method, a new notation and computation scheme for the standard Joint Integrated Probabilistic Data Association is outlined, which also enables the derivation of a new fast approximation scheme called balanced permanent-JIPDA.
For state filtering, two different concepts are also applied: the Random Matrix framework and Measurement Generating Points. For the Random Matrix framework, an alternative prediction method is first proposed to account for kinematic state changes in the extension state prediction as well. Secondly, various update methods are investigated to account for the polar-to-Cartesian noise transformation problem. The filtering concepts are combined with the new MD-JIPDA and their characteristics analyzed in various Monte Carlo simulations.
In case an object can be modeled by a finite number of fixed Measurement Generating Points (MGP), a proposition to track these objects via a JIPDA filter is also made. In this context, a fast track-to-track fusion algorithm is proposed as well and compared against the MGP-JIPDA.
The proposed algorithms are evaluated in two applications where scanning is done using radar sensors only. The first application is a typical automotive scenario, where a passenger car is equipped with six radar sensors to cover its complete environment.
In this application, the locations of the measurements on an object can be considered stationary, and the object can be assumed to have a rectangular shape. Thus, the MGP-based algorithms are applied here. The filters are evaluated especially by tracking vehicles on nearside lanes.
The second application covers the tracking of vessels on inland waters. Here, two different kinds of radar systems are applied, but for both sensors a uniform distribution of the measurements over the target's extent can be assumed. Further, the assumption that the targets have an elliptical shape holds, and so the Random Matrix framework in combination with the MD-JIPDA is evaluated.
Exemplary test scenarios also illustrate the performance of this tracking algorithm.
Embodiments are generally related to the field of channel and source coding of data to be sent over a channel, such as a communication link or a data memory. Some specific embodiments are related to a method of encoding data for transmission over a channel, a corresponding decoding method, a coding device for performing one or both of these methods and a computer program comprising instructions to cause said coding device to perform one or both of said methods.
It is widely recognized that sustainability is a new challenge for many manufacturing companies. In this paper, we tackle this issue by presenting an approach that deals with material and substance compliance within Product Lifecycle Management in a complex value chain. Our analysis explains why, how and when sustainable manufacturing arises, and it identifies, quantifies and evaluates the environmental impact of a new product. We propose (I) a Life Cycle Assessment tool (LCA) and (II) a model to validate this approach and evaluate the risk of non-compliance in the supply chain. Our LCA approach provides comprehensive information on the environmental impacts of a product.
Product and material cycles are parallel and intersecting, which makes it challenging to integrate the material selection process across Product Lifecycle Management and to integrate LCA with PLM. We provide only a foundation; further research in systems engineering is necessary. LCA is sensitive to data quality: outsourced production and problems in supplier cooperation can result in material mismatches (e.g., in properties or composition) in the production process, which may render LCA results misleading.
This paper also describes research challenges using risk-based due diligence.
This work proposes a construction for low-density parity-check (LDPC) codes over finite Gaussian integer fields. Furthermore, a new channel model for codes over Gaussian integers is introduced and its channel capacity is derived. This channel can be considered as a first order approximation of the additive white Gaussian noise channel with hard decision detection where only errors to nearest neighbors in the signal constellation are considered. For this channel, the proposed LDPC codes can be decoded with a simple non-probabilistic iterative decoding algorithm similar to Gallager's decoding algorithm A.
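The flavor of such a simple non-probabilistic iterative decoder can be sketched in the binary case; the paper's codes are defined over Gaussian integer fields, so the GF(2) bit-flipping loop below, in the spirit of Gallager's algorithms, is only an illustration:

```python
import numpy as np

# Parity-check matrix of a small binary code (illustrative; the paper's
# construction uses LDPC codes over finite Gaussian integer fields)
H = np.array([[1,0,1,0,1,0,1],
              [0,1,1,0,0,1,1],
              [0,0,0,1,1,1,1]])

def bit_flipping_decode(word, H, max_iter=10):
    """Simple non-probabilistic bit-flipping decoding: repeatedly flip the
    bit that participates in the most unsatisfied parity checks."""
    w = word.copy()
    for _ in range(max_iter):
        syndrome = (H @ w) % 2
        if not syndrome.any():
            return w                      # all parity checks satisfied
        counts = syndrome @ H             # unsatisfied checks per bit
        w[np.argmax(counts)] ^= 1         # flip the most suspicious bit
    return w

received = np.array([0, 0, 0, 0, 1, 0, 0])   # all-zero codeword, one bit flipped
decoded = bit_flipping_decode(received, H)
```

Over the Gaussian integer channel described in the abstract, the flip would instead move a symbol to one of its nearest neighbors in the signal constellation rather than toggling a bit.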
Knot placement for curve approximation is a well-known and yet open problem in geometric modeling. Selecting knot values that yield good approximations is a challenging task, based largely on heuristics and user experience. More advanced approaches range from parametric averaging to genetic algorithms.
In this paper, we propose to use Support Vector Machines (SVMs) to determine suitable knot vectors for B-spline curve approximation. The SVMs are trained to identify locations in a sequential point cloud where knot placement will improve the approximation error. After the training phase, the SVM can assign a so-called score to each location in the point set. This score is based on geometric and differential-geometric features of the points and measures the quality of each location for use as a knot in the subsequent approximation. From these scores, the final knot vector can be constructed by exploring the topography of the score vector, without the need for iteration or optimization in the approximation process. Knot vectors computed with our approach outperform state-of-the-art methods and yield tighter approximations.
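A toy version of the scoring-and-selection pipeline, with a hand-crafted turning-angle score standing in for the SVM-learned score (the feature, threshold and peak picking below are illustrative assumptions, not the paper's method):

```python
import numpy as np

def turning_angle_score(points):
    """Per-point score for knot placement: the discrete turning angle at each
    interior point. In the paper this score is learned by an SVM from
    geometric and differential-geometric features."""
    p = np.asarray(points, dtype=float)
    a, b = p[1:-1] - p[:-2], p[2:] - p[1:-1]
    cos = np.einsum('ij,ij->i', a, b) / (
        np.linalg.norm(a, axis=1) * np.linalg.norm(b, axis=1))
    score = np.zeros(len(p))
    score[1:-1] = np.arccos(np.clip(cos, -1.0, 1.0))
    return score

def select_knots(score, threshold=0.5):
    """Pick local maxima of the score vector as knot locations, i.e. explore
    the topography of the score vector without iterative optimization."""
    return [i for i in range(1, len(score) - 1)
            if score[i] >= threshold
            and score[i] >= score[i-1] and score[i] >= score[i+1]]

# An L-shaped point sequence: the corner at index 5 should receive a knot
pts = [(x, 0.0) for x in range(6)] + [(5.0, y) for y in range(1, 6)]
knots = select_knots(turning_angle_score(pts))
```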
Know when you don't know
(2018)
Deep convolutional neural networks show outstanding performance in image-based phenotype classification given that all existing phenotypes are presented during the training of the network. However, in real-world high-content screening (HCS) experiments, it is often impossible to know all phenotypes in advance. Moreover, novel phenotype discovery itself can be an HCS outcome of interest. This aspect of HCS is not yet covered by classical deep learning approaches. When presenting an image with a novel phenotype to a trained network, it fails to indicate a novelty discovery but assigns the image to a wrong phenotype. To tackle this problem and address the need for novelty detection, we use a recently developed Bayesian approach for deep neural networks called Monte Carlo (MC) dropout to define different uncertainty measures for each phenotype prediction. With real HCS data, we show that these uncertainty measures allow us to identify novel or unclear phenotypes. In addition, we also found that the MC dropout method results in a significant improvement of classification accuracy. The proposed procedure used in our HCS case study can be easily transferred to any existing network architecture and will be beneficial in terms of accuracy and novelty detection.
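The MC dropout uncertainty measure can be sketched with a tiny fully connected classifier in plain NumPy; the weights are random placeholders for a trained network, and the (un-rescaled) dropout mask is deliberately kept active at prediction time:

```python
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical trained weights of a tiny 3-class phenotype classifier
W1, b1 = rng.normal(size=(16, 32)), np.zeros(32)
W2, b2 = rng.normal(size=(32, 3)),  np.zeros(3)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def mc_dropout_predict(x, T=100, p_drop=0.5):
    """Run T stochastic forward passes with dropout kept active at test time
    (MC dropout) and return the mean class probabilities together with the
    predictive entropy as an uncertainty measure."""
    probs = []
    for _ in range(T):
        h = np.maximum(x @ W1 + b1, 0.0)          # ReLU hidden layer
        h = h * (rng.random(h.shape) > p_drop)    # dropout mask, active at test time
        probs.append(softmax(h @ W2 + b2))
    mean = np.mean(probs, axis=0)
    entropy = -np.sum(mean * np.log(mean))        # predictive entropy
    return mean, entropy

mean_probs, uncertainty = mc_dropout_predict(rng.normal(size=16))
```

Images whose predictive entropy is high relative to the training distribution would be flagged as novel or unclear phenotypes rather than being forced into a known class.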
Investigation of magnetic effects on austenitic stainless steels after low temperature carburization
(2018)
This work investigates the magnetic effects that can occur in austenitic stainless steels after low-temperature carburisation, depending on the alloy. Samples of different alloys were prepared and subjected to multiple low-temperature carburisation cycles to obtain different treatment conditions for each alloy. The layer characterisation was carried out by light microscopy and by hardness profiles, and shows that the layer grows with each additional treatment cycle. A lattice expansion was detected in all treated samples by X-ray diffraction. Magnetisability was measured using Feritscope and SQUID measurements; not all alloys showed magnetisability after treatment. In addition to MFM measurements, ferrofluid experiments were used to visualise the magnetic areas. These studies show that only about half of the formed layer becomes magnetisable and has a domain-like structure.
Influence of Temperature on the Corrosion behaviour of Stainless Steels under Tribological Stress
(2018)
Poster presentation
Deep neural networks have been successfully applied to problems such as image segmentation, image super-resolution, coloration and image inpainting. In this work we propose the use of convolutional neural networks (CNN) for image inpainting of large regions in high-resolution textures. Due to limited computational resources, processing high-resolution images with neural networks is still an open problem. Existing methods separate inpainting of global structure from the transfer of details, which leads to blurry results and loss of global coherence in the detail transfer step. Based on advances in texture synthesis using CNNs, we propose patch-based image inpainting by a single network topology that is able to optimize for global as well as detail texture statistics. Our method is capable of filling large inpainting regions, often exceeding the quality of comparable methods on high-resolution images (2048x2048 px). For reference patch look-up, we propose to use the same summary statistics that are used in the inpainting process.
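The summary statistics used in CNN texture synthesis are typically Gram matrices of feature maps; reference-patch look-up by matching them can be sketched as below. This is a minimal illustration under that assumption, not the paper's exact pipeline, and both function names are hypothetical:

```python
import numpy as np

def gram(features):
    """Gram matrix of a (C, H, W) feature map -- the Gatys-style summary
    statistic used to describe texture independently of spatial layout."""
    C = features.shape[0]
    F = features.reshape(C, -1)
    return F @ F.T / F.shape[1]

def nearest_patch(query_feat, ref_feats):
    """Index of the reference patch whose Gram matrix is closest to the
    query's, i.e. the patch with the most similar texture statistics."""
    gq = gram(query_feat)
    dists = [np.linalg.norm(gq - gram(r)) for r in ref_feats]
    return int(np.argmin(dists))
```

Using one statistic for both look-up and synthesis means the retrieved patch is, by construction, a good starting point for the loss the network then optimizes.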
Corporate venturing has gained much attention due to the challenges and changes brought about by discontinuous innovations, which appear to be promoted by digitalization. In this context, open innovation has become a promising tool for established companies to strengthen their innovation capabilities. While the external opening of the innovation process has received much attention, the internal opening lacks investigation. In particular, new organizational forms such as Internal Corporate Accelerators have not been studied sufficiently. This study, which is based on 13 interviews from two German tech companies, contributes to a better understanding of this new form of corporate venturing and the resulting effects on organizational renewal.
This letter proposes two contributions to improve the performance of transmission with generalized multistream spatial modulation (SM). In particular, a modified suboptimal detection algorithm based on the Gaussian approximation method is proposed. The proposed modifications reduce the complexity of the Gaussian approximation method and improve the performance for high signal-to-noise ratios. Furthermore, this letter introduces signal constellations based on Hurwitz integers, i.e., a 4-D lattice. Simulation results demonstrate that these signal constellations are beneficial for generalized SM with two active antennas.
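The Hurwitz integers mentioned above are quaternions whose four coordinates are either all integers or all half-integers; a 4-D signal constellation can be drawn from this lattice by norm-limiting it. The sketch below only enumerates such lattice points (the function name and the squared-norm bound are illustrative, not the letter's construction):

```python
import itertools
import numpy as np

def hurwitz_points(radius2):
    """Hurwitz integers (all-integer or all-half-integer 4-tuples) with
    squared norm <= radius2 -- candidate points for a 4-D constellation."""
    pts = []
    r = int(np.ceil(np.sqrt(radius2)))
    # All-integer quaternions inside the ball.
    for q in itertools.product(range(-r, r + 1), repeat=4):
        if sum(c * c for c in q) <= radius2:
            pts.append(tuple(float(c) for c in q))
    # All-half-integer quaternions inside the ball.
    halves = [i + 0.5 for i in range(-r - 1, r + 1)]
    for q in itertools.product(halves, repeat=4):
        if sum(c * c for c in q) <= radius2:
            pts.append(q)
    return pts

pts = hurwitz_points(1)  # unit ball: origin, 8 unit vectors, 16 half-integer points
```

Each 4-D lattice point can be mapped to one complex symbol pair transmitted over the two active antennas, which is what makes a 4-D lattice a natural fit for two-stream SM.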
With the increased deployment of biometric authentication systems, some security concerns have also arisen. In particular, presentation attacks directed at the capture device pose a severe threat. In order to prevent them, liveness features such as blood flow can be utilised to develop presentation attack detection (PAD) mechanisms. In this context, laser speckle contrast imaging (LSCI) is a technology widely used in biomedical applications to visualise blood flow. We therefore propose a fingerprint PAD method based on textural information extracted from pre-processed LSCI images. Subsequently, a support vector machine is used for classification. In experiments conducted on a database comprising 32 different artefacts, the results show that the proposed approach correctly classifies all bona fide presentations. However, the LSCI technology has difficulties with thin and transparent overlay attacks.
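The quantity LSCI is built on is the local speckle contrast K = sigma/mean over a small window (higher blood flow blurs the speckle and lowers K). A minimal sketch of computing a contrast map, assuming a plain intensity image as input (the window size and function name are illustrative):

```python
import numpy as np

def speckle_contrast(img, w=3):
    """Local speckle contrast K = std / mean over a w x w window.
    LSCI derives its blood-flow visualisation from maps like this;
    texture features for PAD would then be extracted from the map."""
    pad = w // 2
    padded = np.pad(img.astype(float), pad, mode='reflect')
    out = np.zeros(img.shape, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            win = padded[i:i + w, j:j + w]
            m = win.mean()
            out[i, j] = win.std() / m if m > 0 else 0.0
    return out
```

A flat (e.g. artefact-covered) region yields near-zero contrast everywhere, while live speckle produces a spatially varying K map, which is the texture signal a downstream SVM classifier can exploit.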