The IT unit is not the only provider of information technology (IT) used in business processes. Aiming for increased performance, many business workgroups autonomously implement IT resources not covered by their organizational IT service management; this is called shadow IT. Risks and inefficiencies associated with this phenomenon challenge organizations. Organizations need to decide how to deal with identified shadow IT and whether the business or the IT unit should be responsible for the corresponding tasks and components. This study proposes design principles for a method to control identified shadow IT, following action design research in four organizational settings. The procedure results in an allocation of IT task responsibilities between the business workgroups and the IT unit based on risk considerations and transaction cost economics, establishing IT service governance. This contributes to governance research regarding adaptive and efficient arrangements with reduced risks for business-located IT activities.
In this paper, the use of an Unscented Kalman Filter for concurrently performing disturbance estimation and wave filtering is investigated. Experimental results are provided that demonstrate very good performance on both tasks. For the filter, a dynamic model has been used that was optimised via correlation analysis in order to obtain a minimum set of relevant parameters. This model has also been validated in experiments with a small vessel. A simulation study is presented to evaluate the performance using known quantities. Experimental trials have been performed on the Rhine river. The results show that, for instance, flow direction and varying current velocities can be estimated continuously with decent precision, even while the boat is performing turning manoeuvres. Moreover, the filtering properties are very satisfactory. This makes the filter suitable for use, for instance, in autonomous vessel applications or assistance systems.
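The core mechanism of such a filter is the unscented transform, which propagates sigma points through a nonlinearity instead of linearizing it. The following is a minimal scalar sketch of that transform, not the authors' vessel model; the nonlinearity f and the scaling parameter kappa are illustrative assumptions:

```python
import math

def unscented_transform(mean, var, f, kappa=2.0):
    """Propagate a scalar Gaussian (mean, var) through a nonlinearity f
    using sigma points, as done inside an Unscented Kalman Filter."""
    n = 1  # state dimension in this scalar sketch
    spread = math.sqrt((n + kappa) * var)
    sigma_points = [mean, mean + spread, mean - spread]
    weights = [kappa / (n + kappa), 1 / (2 * (n + kappa)), 1 / (2 * (n + kappa))]
    y = [f(x) for x in sigma_points]
    y_mean = sum(w * v for w, v in zip(weights, y))
    y_var = sum(w * (v - y_mean) ** 2 for w, v in zip(weights, y))
    return y_mean, y_var

# For x ~ N(0, 1) and f(x) = x^2 the true mean is 1 and the variance is 2;
# with kappa = 2 the transform recovers both exactly for a Gaussian input.
m, v = unscented_transform(0.0, 1.0, lambda x: x * x)  # → (1.0, 2.0)
```

In a full filter this transform replaces the Jacobian-based propagation of an Extended Kalman Filter in both the prediction and the update step.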
An approach for an adaptive position-dependent friction estimation for linear electromagnetic actuators with altered characteristics is proposed in this paper. The objective is to obtain a friction model that can be used to describe different stages of aging of magnetic actuators. It is compared to a classical Stribeck friction model by means of model fit, sensitivity, and parameter correlation. The identifiability of the parameters in the friction model is of special interest since the model is supposed to be used for diagnostic and prognostic purposes. A method based on the Fisher information matrix is employed to analyze the quality of the model structure and the parameter estimates.
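A Fisher-information-based identifiability check of the kind described can be sketched as follows. The exponential model y = a·exp(−b·t) is purely illustrative, standing in for the actual friction model; the noise level is an assumed value:

```python
import math

def fisher_information(times, a, b, sigma=0.1):
    """Fisher information matrix for the illustrative model y = a*exp(-b*t)
    with i.i.d. Gaussian measurement noise of standard deviation sigma."""
    F = [[0.0, 0.0], [0.0, 0.0]]
    for t in times:
        # parameter sensitivities dy/da and dy/db at time t
        s = [math.exp(-b * t), -a * t * math.exp(-b * t)]
        for i in range(2):
            for j in range(2):
                F[i][j] += s[i] * s[j] / sigma ** 2
    return F

F = fisher_information([0.1 * k for k in range(1, 50)], a=2.0, b=0.5)
det = F[0][0] * F[1][1] - F[0][1] * F[1][0]
# A parameter correlation close to +/-1 signals poor identifiability.
corr = F[0][1] / math.sqrt(F[0][0] * F[1][1])
```

A non-singular matrix (det > 0) means both parameters are locally identifiable from the chosen measurement times; the inverse of F gives the Cramér-Rao lower bound on the parameter covariance.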
Nowadays there is a rich diversity of sleep monitoring systems available on the market. They promise to offer information about the sleep quality of the user by recording a limited number of vital signals, mainly heart rate and body movement. Typically, fitness trackers, smart watches, smart shirts, smartphone applications or patches do not provide access to the raw sensor data. Moreover, the sleep classification algorithm and the agreement ratio with the gold standard, polysomnography (PSG), are not disclosed. Some commercial systems record and store the data on the wearable device, but the user needs to transfer and import it into specialised software applications or return it to the doctor for clinical evaluation of the data set. Thus an immediate feedback mechanism and the possibility of remote control and supervision are lacking. Furthermore, many such systems only distinguish between sleep and wake states, or between wake, light sleep and deep sleep. It is not always clear how these stages are mapped to the four known sleep stages: REM, NREM1, NREM2, NREM3-4. [1] The goal of this research is to find a reduced-complexity method to process a minimum number of bio-vital signals while providing accurate sleep classification results. The model we propose offers remote control and real-time supervision capabilities by using Internet of Things (IoT) technology. This paper focuses on the data processing method and the sleep classification logic. The body sensor network representing our data acquisition system will be described in a separate publication. Our solution showed promising results and a good potential to overcome the limitations of existing products. Further improvements will be made, and subjects of different ages and health conditions will be tested.
Software startups
(2016)
Software startup companies develop innovative, software-intensive products within limited time frames and with few resources, searching for sustainable and scalable business models. Software startups are quite distinct from traditional mature software companies, but also from micro-, small-, and medium-sized enterprises, introducing new challenges relevant for software engineering research. This paper’s research agenda focuses on software engineering in startups, identifying, in particular, 70+ research questions in the areas of supporting startup engineering activities, startup evolution models and patterns, ecosystems and innovation hubs, human aspects in software startups, applying startup concepts in non-startup environments, and methodologies and theories for startup research. We connect and motivate this research agenda with past studies in software startup research, while pointing out possible future directions. While all authors of this research agenda have their main background in Software Engineering or Computer Science, their interest in software startups broadens the perspective to the challenges, but also to the opportunities that emerge from multi-disciplinary research. Our audience is therefore primarily software engineering researchers, even though we aim at stimulating collaborations and research that crosses disciplinary boundaries. We believe that with this research agenda we cover a wide spectrum of the software startup industry’s current needs.
This paper broadens the resource-based approach to explaining survival of new technology-based firms (NTBFs) by focusing on the entrepreneur's ability to transform resources in response to triggers resulting from market interactions. Network theory is used to define a construct that allows determining the status of venture emergence (VE). The operationalization of the VE construct is built on the firm's value network maturity in the four market dimensions customer, investor, partner, and human resource. Business plans of NTBFs represent the artifact that contains this data in the form of transaction relation descriptions. Using content analysis, a multi-step combined human and computer coding process has been developed to empirically determine NTBFs' status of VE. Results of the business plan analysis suggest that the level of transaction relations allows conclusions to be drawn about the status of VE. Moreover, applying the developed process, a business plan coding test shows that the transaction relation based VE status significantly relates to NTBFs' survival capabilities.
This paper builds upon the widely used resource-based approach to explaining survival of new technology-based firms (NTBFs). However, instead of looking at the NTBF's initial resource configuration, a process-oriented perspective is taken by focusing on the entrepreneur's ability to transform resources in response to triggers resulting from market interactions. Transaction relations reflect these interactions and are thus operationalized with a suggested method for measuring the status of venture emergence (VE) applicable to early-stage NTBFs. NTBFs' value network maturity is reflected in the number and strength of their transaction relations in the four market dimensions customer, investor, partner, and human resource. Business plans of NTBFs represent the artifact that contains this data in the form of transaction relation descriptions. Using content analysis, a multi-step combined human and computer coding process has been developed to annotate and classify transaction relations from business plans in order to empirically determine NTBFs' status of VE. Results of the business plan analysis suggest that the level of transaction relations allows conclusions to be drawn about the VE status. Moreover, applying the developed process, a first analysis of a business plan coding test shows that the transaction relation based VE status significantly relates to NTBF survival capability.
Purpose The purpose of this paper is to find out tourism movement patterns via the tracking of tourists with the help of positioning systems like GPS in the rural area of the Lake Constance destination in Germany. In doing so, past, present and future of tourist tracking are illustrated. Design/methodology/approach The tracking is realized via common smartphones extended by an app, with dedicated sensors like position loggers and a survey. The three different approaches are applied in order to compare and cross-check results (triangulation of data and methods). Findings Movement patterns turned out to be diverse and individualistic within the rural destination of Lake Constance, and to follow an ant trail in sub-destinations like the city of Constance. Repeat visitors and first-time visitors alike always visit the bigger cities and main day-trip destinations of the Lake. A possible prediction tool enables new avenues of governing tourism movement patterns. Research limitations/implications The tracking techniques can be developed further in the direction of “quantified self”, using gamification in order to make the tracking app even more attractive. Practical implications An algorithm-based prediction tool would offer new perspectives to the management of tourism movements. Social implications Further research is needed to overcome the feeling of invasiveness of the app, to allow tracking with that approach. Originality/value This study is original and innovative because of the first-time use of a smartphone app in tourist tracking, the application to a rural destination and the conceptual description of a prediction tool.
Numerous procedures for estimating the spool position in linear electromagnetic actuators using voltage and current measurements only can be found in the literature. With regard to the accuracy of the estimated spool position, some achieve better, some worse results. However, in almost every approach hysteresis has a huge impact on the estimation accuracy that can be achieved. Regardless of whether these effects are caused by magnetic or mechanical hysteresis, they will limit the accuracy of the position estimate if not taken into account. In this paper, a model is introduced which covers the hysteresis effects as well as other nonlinearities occurring in estimated position-dependent parameters. A classical Preisach model is deployed first, which is then adjusted by using novel elementary preceding relay operators. The resulting model for the estimated position-dependent parameters, including the adjusted Preisach model, can easily be applied to position estimation tasks. It is shown that the considered model distinctly improves the accuracy of the spool position estimate, while it is kept as simple as possible for real-time implementation reasons.
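The classical Preisach model that serves as the starting point here is a weighted superposition of elementary relay operators. The following minimal sketch uses a uniform relay grid on [0, 1] with equal weights (the discretization and thresholds are illustrative; it does not include the paper's adjustment with preceding relay operators):

```python
def preisach_output(inputs, grid=30):
    """Classical scalar Preisach model: a uniform grid of relay operators
    with thresholds beta <= alpha on [0, 1], all starting in state -1."""
    relays = []  # each relay is [alpha, beta, state]
    step = 1.0 / grid
    for i in range(grid):
        for j in range(i + 1):
            relays.append([(i + 1) * step, j * step, -1.0])
    outputs = []
    for u in inputs:
        for r in relays:
            if u >= r[0]:
                r[2] = 1.0   # switch up at the upper threshold alpha
            elif u <= r[1]:
                r[2] = -1.0  # switch down at the lower threshold beta
        outputs.append(sum(r[2] for r in relays) / len(relays))
    return outputs

# Hysteresis: the output at u = 0.5 differs between the rising branch ...
up = preisach_output([0.0, 0.5])[-1]
# ... and the falling branch after the input has reached 1.0 first.
down = preisach_output([0.0, 1.0, 0.5])[-1]
```

Because each relay remembers its last switching event, the model output depends on the input history, which is exactly the effect that degrades position estimates when ignored.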
This paper proposes a soft input decoding algorithm and a decoder architecture for generalized concatenated (GC) codes. The GC codes are constructed from inner nested binary Bose-Chaudhuri-Hocquenghem (BCH) codes and outer Reed-Solomon codes. In order to enable soft input decoding for the inner BCH block codes, a sequential stack decoding algorithm is used. Ordinary stack decoding of binary block codes requires the complete trellis of the code. In this paper, a representation of the block codes based on the trellises of supercodes is proposed in order to reduce the memory requirements for the representation of the BCH codes. This enables an efficient hardware implementation. The results for the decoding performance of the overall GC code are presented. Furthermore, a hardware architecture of the GC decoder is proposed. The proposed decoder is well suited for applications that require very low residual error rates.
The corporate entrepreneur
(2016)
Corporate entrepreneurship is one tool for established companies to strengthen their capabilities for strategic renewal and innovativeness. The question of which factors influence the success of a corporate entrepreneurship initiative, however, requires further attention. The corporate entrepreneur, who acts as both the leader of embedded entrepreneurial teams and linking pin to the corporation, provides one possible perspective. Based on 6 interviews conducted in 6 German organizations, this study contributes to the understanding of the role of the corporate entrepreneur and how this role can be distinguished from other roles in the context of innovation.
The present demographic change and a growing population of elderly people lead to new medical needs. Meeting these with state-of-the-art technology is consequently a rapidly growing market. This work therefore aims at taking modern concepts of mobile and sensor technology and putting them into a medical context. By measuring a user’s vital signs with sensors whose data is processed on an Android smartphone, the target system is able to determine the current health state of the user and to visualize the gathered information. The system also includes a weather forecasting functionality, which alerts the user to possibly dangerous future meteorological events. All information is collected centrally and distributed to users based on their location. Further, the system can correlate the client-side measurement of vital signs with a server-side weather history. This enables personalized forecasting for each user individually. Finally, a portable and affordable application was developed that continuously monitors the health status via multiple vital sensors, all united on a common smartphone.
Corrosion
(2016)
Navigation on the Danube
(2016)
This report contains two parts: The first part presents an overview of studies concerning the Danube, inland navigation or the impact of climate change on either of those. The second part gives a more detailed analysis of inland navigation on the Danube, partly based on studies presented in part one. Part two covers the current situation along the Danube, addressing the topic of bottlenecks and other limitations for shipping along the Danube. Based on this information, an estimation of the economic impacts of low water periods on inland navigation is made. As a last step, measures to reduce the impact of low water on inland navigation are presented. The report shows that inland navigation still is an important transport mode, along the Danube as well as in other European regions. Especially in Romania, inland navigation still has a large share of more than 20% of total transport, and rising. However, inland navigation depends strongly on good conditions of its infrastructure. These good conditions are limited mainly by two factors: one is the so-called bottlenecks. Those are areas with sub-optimal shipping conditions, e.g. due to solid rock formations in the river that lead to a reduced water depth. The other factor is the weather (and, on a longer time scale, the climate) which, depending mostly on precipitation and evaporation, can lead to seasonally low water levels. In addition to these two natural factors, there are laws which e.g. regulate the maximum number of barges allowed. Human-built structures like locks limit the size of vessels as well as the speed they can travel at. These limiting factors are identified and located in the first chapter of part two of this report, before the water depth needed by several ship sizes as well as the cargo fleet available along the Danube are presented. One of the targets of this report is to estimate the economic impact of low water periods.
All the factors named above as well as the freight prices charged for connections along the Danube are used to reach this target in chapter II.4. To estimate the impact of low water periods on the freight prices, a method developed by Jonkeren et al. (2007) for the Rhine is transferred to the Danube. By transferring Jonkeren et al.'s (2007) method, regression equations for several transport connections along the Danube are identified that give a first estimate of the relation between freight prices and water levels. With the help of these regression equations, an estimation of the total expenses for transport via inland navigation for several years is possible. The yearly and seasonal variability is identified as well as the additional expenses due to water levels below 280cm. But additional expenses are not the only impact of changing water levels on inland navigation. Another is that, while the demand for transport stays at the same level, sometimes the water levels are not sufficient to use the full capacity of the fleet. Therefore, the (theoretical) amount of cargo that could not be transported due to low water levels is calculated as well and presented in chapter II.5. Finally, some measures to overcome some of the problems of inland navigation due to low water levels named here are presented. These are separated into two general approaches: change the ship or change the river. Both methods have their advantages and disadvantages due to technical as well as regulatory and other factors. The list presented here, however, is incomplete and only gives a few ideas of how some problems can be overcome. In the end, an individual mix for the different regions along the river and sometimes for the individual companies must be found.
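A regression of the kind transferred from Jonkeren et al. can be sketched with ordinary least squares. The data below is synthetic and illustrative only; it is not the report's data or its actual regression equations, and merely mimics the expected pattern that freight prices rise as water levels fall:

```python
def ols(xs, ys):
    """Ordinary least squares fit for y = a + b*x; returns (a, b)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return my - b * mx, b

# Synthetic example: price per tonne rises as the water level falls,
# because vessels can only be loaded partially at low water.
water_cm = [180, 220, 260, 280, 320, 360, 400]
price = [14.2, 12.1, 10.5, 10.0, 9.1, 8.6, 8.3]
a, b = ols(water_cm, price)  # b is negative: lower water, higher price
```

The fitted slope then feeds directly into an estimate of the additional transport expenses for a given low-water scenario.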
The magneto-mechanical behavior of magnetic shape memory (MSM) materials has been investigated by means of different simulation and modeling approaches by several research groups. The target of this paper is to simulate actuators driven by MSM alloys and to understand the MSM element behavior during actuation, which shall lead to an increased performance of the actuator. It is shown that internal and external stresses should be taken into consideration using numerical computation tools for magnetic fields in an efficient way.
Stress is recognized as a predominant disease with growing costs of treatment. The approach presented here aims to detect stress using a lightweight, mobile, cheap and easy-to-use system. The results show that stress can be detected even when a person’s natural bio-vital data is out of the main range. The system enables storage of measured data while providing communication channels for online and post-processing.
Stress is recognized as a predominant disease with rising costs for rehabilitation and treatment. Currently there are several different approaches that can be used for determining and calculating stress levels. Usually the methods for determining stress are divided into two categories. The first category does not require any special equipment for measuring stress; it uses the variation in behaviour patterns that occurs under stress. The core disadvantage of this category is its limitation to specific use cases. The second category uses laboratory instruments and biological sensors. This category allows stress to be measured precisely and proficiently, but at the same time these systems are neither mobile nor transportable and do not support real-time feedback. This work presents a mobile system that provides the calculation of stress. To achieve this, the signal of a mobile ECG sensor is analysed, processed and visualised on a mobile system like a smartphone. This work also explains the stress measurement algorithm used. The result of this work is a portable system that can be used with a mobile device like a smartphone as visual interface for reporting the current stress level.
So is About Urbanity
(2016)
Südstadt Tübingen is a successful military land redevelopment project in which functional mix and urban diversity have been identified as main goals. Throughout the development process, the joint building venture has been utilized as an important urban development instrument. On the one hand, this instrument helps citizens develop adaptive and affordable living space and lets them find their own identity in the new quarter; on the other hand, it is also important for cultivating urban diversity, a suitable spatial feeling and a mixed relationship of living and working conditions, so that the quarters have a very flexible structure to accommodate diversified urban lifestyles.
Increasing robustness of handwriting recognition using character N-Gram decoding on large lexica
(2016)
Offline handwriting recognition systems often include a decoding step, i.e. retrieving the most likely character sequence from the underlying machine learning algorithm. Decoding is sensitive to ranges of weakly predicted characters, caused e.g. by obstructions in the scanned document. We present a new algorithm for robust decoding of handwriting recognizer outputs using character n-grams. Multidimensional hierarchical subsampling artificial neural networks with Long Short-Term Memory cells have been successfully applied to offline handwriting recognition. Output activations from such networks, trained with Connectionist Temporal Classification, can be decoded with several different algorithms in order to retrieve the most likely literal string that they represent. We present a new algorithm for decoding the network output while restricting the possible strings to a large lexicon. The index used for this work is an n-gram index, with tri-grams used for experimental comparisons. N-grams are extracted from the network output using a backtracking algorithm, and each n-gram is assigned a mean probability. The decoding result is obtained by intersecting the n-gram hit lists while calculating the total probability for each matched lexicon entry. We conclude with an experimental comparison of different decoding algorithms on a large lexicon.
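The n-gram index lookup described above can be sketched as follows. This is a minimal illustration with a toy lexicon and a simple match-count score, not the paper's probability-weighted intersection of hit lists:

```python
from collections import defaultdict

def build_index(lexicon, n=3):
    """Map every character n-gram to the set of lexicon entries containing it."""
    index = defaultdict(set)
    for word in lexicon:
        for i in range(len(word) - n + 1):
            index[word[i:i + n]].add(word)
    return index

def best_match(recognized, index, n=3):
    """Score lexicon entries by the number of n-grams shared with the
    (possibly corrupted) recognizer output and return the best one."""
    scores = defaultdict(int)
    for i in range(len(recognized) - n + 1):
        for word in index.get(recognized[i:i + n], ()):
            scores[word] += 1
    return max(scores, key=scores.get) if scores else None

lexicon = ["handwriting", "recognition", "decoding", "networks"]
index = build_index(lexicon)
# One weakly predicted character does not prevent retrieval:
match = best_match("handwr1ting", index)  # → "handwriting"
```

The robustness comes from the fact that a single misrecognized character only invalidates the n trigrams that overlap it, leaving the remaining trigrams to vote for the correct lexicon entry.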
Recent years have seen the proposal of several different gradient-based optimization methods for training artificial neural networks. Traditional methods include steepest descent with momentum; newer methods are based on per-parameter learning rates, and some approximate Newton-step updates. This work contains the results of several experiments comparing different optimization methods. The experiments were targeted at offline handwriting recognition using hierarchical subsampling networks with recurrent LSTM layers. We present an overview of the used optimization methods, the results that were achieved and a discussion of why the methods lead to different results.
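The two families of update rules mentioned above can be contrasted on a toy objective. The sketch below minimizes f(x) = x² with steepest descent plus momentum and with Adam (a per-parameter learning-rate method); the hyperparameters are illustrative, not the ones from the experiments:

```python
import math

def sgd_momentum(grad, x0, lr=0.1, mu=0.9, steps=100):
    """Steepest descent with a momentum (velocity) term."""
    x, v = x0, 0.0
    for _ in range(steps):
        v = mu * v - lr * grad(x)
        x += v
    return x

def adam(grad, x0, lr=0.1, b1=0.9, b2=0.999, eps=1e-8, steps=100):
    """Adam: per-parameter step sizes from running moment estimates."""
    x, m, v = x0, 0.0, 0.0
    for t in range(1, steps + 1):
        g = grad(x)
        m = b1 * m + (1 - b1) * g          # first moment (mean of gradients)
        v = b2 * v + (1 - b2) * g * g      # second moment (uncentered variance)
        m_hat = m / (1 - b1 ** t)          # bias correction
        v_hat = v / (1 - b2 ** t)
        x -= lr * m_hat / (math.sqrt(v_hat) + eps)
    return x

grad = lambda x: 2 * x  # gradient of f(x) = x^2, minimum at x = 0
x_sgd = sgd_momentum(grad, 5.0)
x_adam = adam(grad, 5.0)
```

On this convex quadratic both converge; the interesting differences the paper investigates only appear on the ill-conditioned, non-convex loss surfaces of recurrent networks.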
Cognitive radio (CR) is a key enabler of wireless communication in industrial applications, especially those with strict quality-of-service (QoS) requirements. The cornerstone of CR is spectrum occupancy prediction, which enables agile and proactive spectrum access and efficient utilization of spectral resources. Hidden Markov Models (HMMs) provide powerful and flexible tools for statistical spectrum prediction. In this paper we introduce an HMM-based spectrum prediction algorithm for industrial applications that accurately predicts multiple slots into the future. Traditional HMM prediction approaches use two hidden states, enabling the prediction of only one step ahead in the future. This one step is most often not enough due to internal hardware delays that render it outdated. We show in this work that extending the number of hidden states and formulating the prediction problem as a maximum likelihood (ML) classification approach enables a prediction span of multiple slots in the future even with fine spectrum sensing resolution. We verify the suitability of our approach to industrial wireless through extensive simulations that utilize a realistic measurement-based traffic model specifically tailored for industrial automotive settings.
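Multi-slot prediction with a Markov chain can be sketched by iterating the state distribution through the transition matrix and classifying the result. The 3-state chain, its transition probabilities and the busy/idle mapping below are illustrative assumptions, not the trained model from the paper:

```python
def mat_vec(p, A):
    """One prediction step: p_{t+1} = p_t * A for a row-stochastic A."""
    return [sum(p[i] * A[i][j] for i in range(len(p)))
            for j in range(len(A[0]))]

def predict(p0, A, k, busy_states):
    """Predict the channel occupancy k slots ahead and classify the slot
    by maximum likelihood over the busy/idle observation classes."""
    p = p0
    for _ in range(k):
        p = mat_vec(p, A)
    busy = sum(p[s] for s in busy_states)
    return ("busy" if busy >= 0.5 else "idle"), p

# Illustrative 3-hidden-state chain; states 1 and 2 emit "busy".
A = [[0.8, 0.15, 0.05],
     [0.2, 0.6, 0.2],
     [0.1, 0.3, 0.6]]
label, p = predict([1.0, 0.0, 0.0], A, k=5, busy_states=(1, 2))
```

With only two hidden states the chain mixes to its stationary distribution almost immediately, which is why a richer state space is needed before a k-slot-ahead prediction carries usable information.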
Realistic traffic modeling plays a key role in efficient Dynamic Spectrum Access (DSA), which is considered an enabler for the employment of wireless technologies in critical industrial automation applications (IAA). The majority of models of spectrum usage are not suitable for this specific use case, as they are based on measurement campaigns conducted in urban or controlled laboratory environments. In this work we present a time-domain traffic model for industrial communication in the 2.4 GHz industrial, scientific, medical (ISM) band based on measurements in an industrial automotive production site. As DSA is usually implemented on Software Defined Radios (SDR), our measurement campaign is based on SDR platforms rather than sophisticated spectrum analyzers. We show through the estimation of the Hurst parameter that industrial wireless traffic possesses inherent self-similarity that could be exploited for efficient DSA. We also show that wireless traffic can be modeled as a semi-Markov model with channel on and off durations Log-normally and Pareto distributed, respectively. We finally estimate the parameters of the derived models using Maximum Likelihood estimation.
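Generating traces from such a semi-Markov on/off model is straightforward once the duration distributions are fixed. The distribution parameters below are illustrative placeholders, not the ML estimates from the measurement campaign:

```python
import random

def generate_trace(n_periods, mu=0.0, sigma=0.5, pareto_alpha=1.5, seed=42):
    """Alternating semi-Markov on/off channel trace: 'on' durations are
    Log-normally distributed, 'off' durations Pareto distributed."""
    rng = random.Random(seed)
    trace = []
    for _ in range(n_periods):
        trace.append(("on", rng.lognormvariate(mu, sigma)))
        trace.append(("off", rng.paretovariate(pareto_alpha)))
    return trace

trace = generate_trace(1000)
on_time = sum(d for s, d in trace if s == "on")
off_time = sum(d for s, d in trace if s == "off")
duty_cycle = on_time / (on_time + off_time)  # fraction of time channel is busy
```

The heavy Pareto tail of the off durations is what produces the long idle periods a DSA scheme can opportunistically exploit.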
These days, computer analysis of ECG (electrocardiogram) signals is common. There are many real-time QRS recognition algorithms; one of these is the Pan-Tompkins algorithm, which can detect QRS complexes of ECG signals. The proposed algorithm analyses the data stream of the heartbeat based on the digital analysis of the amplitude, the bandwidth, and the slope. In addition, the stress algorithm compares whether the current heartbeat is similar or different to the last heartbeat after detecting the ECG signals. This algorithm performs stress detection for the patient in real time. In order to implement the new algorithm with higher performance, the parallel programming language CUDA is used. The algorithm determines stress at the same time by determining the RR interval. The algorithm uses separate functions as beat detector and beat classifier of stress.
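The slope-and-amplitude analysis of a Pan-Tompkins-style detector can be sketched with its classic stages: derivative, squaring and moving-window integration, followed by thresholding. This is a heavily simplified, sequential illustration on a synthetic signal (the window length, threshold and test signal are assumptions; it is not the CUDA implementation):

```python
def detect_beats(sig, window=15):
    """Simplified Pan-Tompkins-style chain: derivative, squaring,
    moving-window integration, then fixed-threshold edge detection."""
    deriv = [sig[i] - sig[i - 1] for i in range(1, len(sig))]   # slope
    squared = [d * d for d in deriv]                            # emphasis
    integ = [sum(squared[max(0, i - window):i + 1])             # smoothing
             for i in range(len(squared))]
    thresh = 0.5 * max(integ)
    beats = []
    for i in range(1, len(integ)):
        if integ[i] >= thresh and integ[i - 1] < thresh:  # rising edge
            beats.append(i)
    return beats

# Synthetic ECG: identical sharp QRS-like spikes every 200 samples.
sig = [0.0] * 1000
for pos in range(100, 1000, 200):
    sig[pos - 1], sig[pos], sig[pos + 1] = 0.5, 1.0, 0.5

beats = detect_beats(sig)                        # 5 detected beats
rr = [b - a for a, b in zip(beats, beats[1:])]   # RR intervals in samples
```

The RR intervals derived from the detected beats are the quantity the stress classifier then compares from beat to beat.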
In this paper we provide a performance analysis framework for wireless industrial networks by deriving a service curve and a bound on the delay violation probability. For this purpose we use the (min,×) stochastic network calculus as well as a recently presented recursive formula for an end-to-end delay bound of wireless heterogeneous networks. The derived results are mapped to WirelessHART networks used in process automation and validated via simulations. In addition to WirelessHART, our results can be applied to any wireless network whose physical layer conforms to the IEEE 802.15.4 standard, while its MAC protocol incorporates TDMA and channel hopping, like e.g. ISA100.11a or TSCH-based networks. The provided delay analysis is especially useful during the network design phase, offering further research potential towards optimal routing and power management in QoS-constrained wireless industrial networks.
To learn from the past, we analyse 1,088 "computer as a target" judgements for evidential reasoning by extracting four case elements: decision, intent, fact, and evidence. Analysing the decision element is essential for studying the scale of sentence severity for cross-jurisdictional comparisons. Examining the intent element can facilitate future risk assessment. Analysing the fact element can enhance an organization's capability of analysing criminal activities for future offender profiling. Examining the evidence used against a defendant in previous judgements can facilitate the preparation of evidence for upcoming legal disclosure. Following the concepts of argumentation diagrams, we develop an automatic judgement summarizing system to enhance the accessibility of judgements and avoid repeating past mistakes. Inspired by the feasibility of extracting legal knowledge for argument construction and employing grounds of inadmissibility for probability assessment, we conduct evidential reasoning of kernel traces for forensic readiness. We integrate the narrative methods from attack graphs/languages for preventing confirmation bias, the argumentative methods from argumentation diagrams for constructing legal arguments, and the probabilistic methods from Bayesian networks for comparing hypotheses.
Creating cages that enclose a 3D model of some sort is part of many preprocessing pipelines in computational geometry. Creating a cage of preferably lower resolution than the original model is of special interest when performing an operation on the original model might be too costly. The desired operation can be applied to the cage first and then transferred to the enclosed model. With this paper the authors present a short survey of recent and well-known methods for cage computation. The authors would like to give the reader an insight into common methods and their differences.
The business plan is one of the most frequently available artifacts to innovation intermediaries of technology-based ventures' presentations in their early stages [1]–[4]. Agreement on the evaluations of venturing projects based on the business plans highly depends on the individual perspective of the readers [5], [6]. One reason is that little empirical proof exists for descriptions in business plans that suggest survival of early-stage technology ventures [7]–[9]. We identified descriptions of transaction relations [10]–[13] as an anchor of the snapshot model business plan to business reality [13]. In the early stage, surviving ventures are building transaction relations to human resources, financial resources, and suppliers on the input side, and customers on the output side of the business, towards a stronger ego-centric value network [10]–[13]. We conceptualized a multidimensional measurement instrument that evaluates the maturity of this ego-centric value network based on the transaction relations of different strength levels that are described in business plans of early-stage technology ventures [13]. In this paper, the research design and the instrument are purified to achieve high agreement in the evaluation of business plans [14]–[16]. As a result, we present an overall research design that can reach acceptable quality for quantitative research. The paper thus contributes to the literature on business analysis in the early stage of technology-based ventures and the research technique of content analysis.
Smart factory and education
(2016)
The introduction of cyber-physical systems into production companies is substantially changing working conditions and processes as well as business models. In practice, a growing discrepancy between big and small or medium-sized companies can be observed. To bridge that gap, a university smart factory is introduced to give these companies a platform for trials, employee education and access to consultancy. In realizing the smart factory, a highly integrated, open and standardized automation concept is shown, comprising single devices and production lines up to a higher-level automation system maintaining a community or business models.
Scoring LSP tasks
(2016)
adidas and Reebok
(2016)
When mobile devices at the network edge want to communicate with each other, they too often depend on the availability of faraway resources. For direct communication, feasible user-friendly service discovery is essential. DNS Service Discovery over Multicast DNS (DNS-SD/mDNS) is widely used for configurationless service discovery in local networks, due in no small part to the fact that it is based on the well-established DNS, and efficient in small networks. In our research, we enhance DNS-SD/mDNS providing versatility, user control, efficiency, and privacy, while maintaining the deployment simplicity and backward compatibility. These enhancements are necessary to make it a solid, flexible foundation for device communication in the edge of the Internet. In this paper, we focus on providing multi-link capabilities and scalable scopes for DNS-SD while being mindful of both user-friendliness and efficiency. We propose DNS-SD over Stateless DNS (DNS-SD/sDNS), a solution that allows configurationless service discovery in arbitrary self-named scopes - largely independent of the physical network layout - by leveraging our Stateless DNS technique and the Raft consensus algorithm.
ERP systems integrate a major part of all business processes, and organizations include them in their IT service management. Besides these formal systems, there are additional systems that are rather stand-alone and not included in the IT management tasks. These so-called ‘shadow systems’ also support business processes but hinder a high degree of enterprise integration. Shadow systems come to light during explicit detection efforts or during software maintenance projects such as enhancements or release changes of enterprise systems. Organizations then have to decide if and to what extent they integrate the identified shadow systems into their ERP systems. For this decision, organizations have to compare the capabilities of each identified shadow system with those of their ERP systems. Based on multiple case studies, we provide a dependency approach to enable this comparison. We derive categories for different stages of dependency and base insights into integration possibilities on these stages. Our results show that 64% of the shadow systems in our case studies are related to ERP systems, meaning that they share parts or all of their data and/or functionality with the ERP system. Our research contributes to the field of integration as well as to the discussion about shadow systems.
Measuring the natural frequency of buildings and bridges is one possibility to obtain information about the stiffness of the construction. Decreasing stiffness can be detected by repeated measurements. Damaged parts of the construction or excessive wood moisture can be reasons for decreasing stiffness. The earlier a failure is detected, the better the chance to repair it at low cost. The method of monitoring by repeatedly measuring the natural frequency is applied to timber bridges, especially footbridges. As damage due to high wood moisture cannot easily be seen, measuring the natural frequency is a good way to detect it and then repair it. Equations to calculate the natural frequencies taking the damaged parts into account are presented and applied to a simply supported beam.
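The stiffness-frequency relation the abstract exploits can be illustrated with the classical fundamental-frequency formula of a simply supported Euler-Bernoulli beam; the material and cross-section values below are assumed for illustration and are not taken from the paper.

```python
import math

def natural_frequency(n, L, E, I, rho, A):
    """Natural frequency (Hz) of mode n for a simply supported
    Euler-Bernoulli beam: f_n = n^2 * pi / (2 L^2) * sqrt(E I / (rho A))."""
    mu = rho * A                               # mass per unit length [kg/m]
    return n**2 * math.pi / (2 * L**2) * math.sqrt(E * I / mu)

# Illustrative glulam timber beam (assumed values):
# span 10 m, E = 11 GPa, density 450 kg/m^3, cross-section 0.2 m x 0.6 m.
L, b, h = 10.0, 0.2, 0.6
E, rho = 11e9, 450.0
A, I = b * h, b * h**3 / 12
f1 = natural_frequency(1, L, E, I, rho, A)
```

A damaged (softer) beam has a lower f1, which is exactly the signal the monitoring method looks for.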
Even though immutability is a desirable property, especially in a multi-threaded environment, implementing immutable Java classes is surprisingly hard because of a lack of language support. We present a static analysis tool using abstract bytecode interpretation that checks Java classes for compliance with a set of rules that together constitute state-based immutability. Being realized as a FindBugs plug-in, the tool can easily be integrated into most IDEs and hence into the software development process. Our evaluation on a large, real-world codebase shows that the average run-time effort for a single class is in the range of a few milliseconds, with only very few statistical outliers.
The corrosion resistance of stainless steels is massively influenced by the condition of their surface. The surface quality includes the topography of the surface, the structure and composition of the passive layer, and the near-surface structure of the base material. These factors are influenced by final physical/chemical surface treatments. The presented work shows significantly lower corrosion resistance for mechanically machined specimens than for etched specimens. It also turns out that the rougher the surface, the lower the corrosion resistance. However, there is no general finding as to whether blasted or ground surfaces are more appropriate; the corrosion behavior rather depends on the process parameters and the characteristics of the corrosive exposure. The results show that not only the surface roughness Ra influences the corrosion behavior but also the shape of the peaks and valleys produced by the surface treatments. Imperfections in the base material, like sulfidic inclusions, lead to a weaker passive layer and thus to a decrease of the corrosion resistance. By using special passivating techniques, the corrosion resistance of stainless steels can be increased beyond the level reached by common passivation.
Digital cameras are used in a large variety of scientific and industrial applications. For most applications the acquired data should represent the real light intensity per pixel as accurately as possible. However, digital cameras are subject to different sources of noise which distort the resulting image. Noise includes photon noise, fixed pattern noise and read noise. The aim of the radiometric calibration is to improve the quality of the resulting images by reducing the influence of the different types of noise on the measured data. In this paper, a new approach for the radiometric calibration of digital cameras using sparse Gaussian process regression is presented. Gaussian process regression is a kernel based supervised machine learning technique. It is used to learn the response of a camera system from a set of training images to allow for the calibration of new images. Compared to the standard Gaussian process method or flat field correction our sparse approach allows for faster calibration and higher reconstruction quality.
Optical surface inspection
(2016)
The multichannel Wiener filter (MWF) is a well-established noise reduction technique for speech processing. Most commonly, the speech component in a selected reference microphone is estimated. The choice of this reference microphone influences the broadband output signal-to-noise ratio (SNR) as well as the speech distortion. Recently, a generalized formulation for the MWF (G-MWF) was proposed that uses a weighted sum of the individual transfer functions from the speaker to the microphones to form a better speech reference resulting in an improved broadband output SNR. For the MWF, the influence of the phase reference is often neglected, because it has no impact on the narrow-band output SNR. The G-MWF allows an arbitrary choice of the phase reference especially in the context of spatially distributed microphones.
In this work, we demonstrate that the phase reference determines the overall transfer function and hence has an impact on both the speech distortion and the broadband output SNR. We propose two speech references that achieve a better signal-to-reverberation ratio (SRR) and an improvement in the broadband output SNR. Both proposed references are based on the phase of a delay-and-sum beamformer. Hence, the time-difference-of-arrival (TDOA) of the speech source is required to align the signals. The different techniques are compared in terms of SRR and SNR performance.
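The signal alignment underlying the delay-and-sum phase reference can be sketched as follows; integer-sample TDOAs and a circular shift (np.roll) are simplifying assumptions, and the delays are invented for illustration.

```python
import numpy as np

fs = 16000
t = np.arange(0, 0.1, 1 / fs)
s = np.sin(2 * np.pi * 440 * t)                 # clean source signal

delays = [0, 3, 7, 12]                          # assumed TDOAs in samples
mics = [np.roll(s, d) for d in delays]          # delayed microphone signals

# Delay-and-sum: undo each known delay, then average the aligned channels.
aligned = [np.roll(x, -d) for x, d in zip(mics, delays)]
out = np.mean(aligned, axis=0)
```

With correct TDOAs the channels add coherently, which is what gives the delay-and-sum phase its favourable signal-to-reverberation behaviour.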
This paper studies suitable models for the identification of nonlinear acoustic systems. A cascaded structure of nonlinear filters is proposed that contains several parallel branches, consisting of polynomial functions followed by a linear filter for each order of nonlinearity. The second order of nonlinearity is additionally modelled with a parallel branch containing a Volterra filter. These are followed by a long linear FIR filter that is able to model the room acoustics. The model is applied to the identification of a tube power amplifier feeding a guitar loudspeaker cabinet in an acoustic room. The adaptive identification is performed by the normalized least mean square (NLMS) algorithm. Compared with a generalized polynomial Hammerstein (GPH) model, the accuracy in modelling the dedicated real-world system can be improved to a greater extent than by increasing the order of nonlinearity in the GPH model.
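The NLMS update used for the adaptive identification can be sketched on a toy linear FIR system; the cascaded nonlinear structure itself is not reproduced, and the filter length, step size, and test system are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
h_true = np.array([0.5, -0.3, 0.2, 0.1])        # unknown FIR system
x = rng.standard_normal(5000)                    # white excitation
d = np.convolve(x, h_true)[: len(x)]             # desired (system output)

M, mu, eps = 4, 0.5, 1e-8                        # taps, step size, regulariser
w = np.zeros(M)
for n in range(M, len(x)):
    u = x[n - M + 1 : n + 1][::-1]               # most recent sample first
    e = d[n] - w @ u                             # a-priori error
    w += mu * e * u / (u @ u + eps)              # NLMS coefficient update
```

In the paper the same update runs jointly over all branches of the cascaded model.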
For European space missions the importance of electric propulsion is growing strongly and has recently experienced a veritable surge in the telecom market. The initial drivers of this development were programs of the European Space Agency and projects of the European national space agencies. In addition, electric propulsion is now on the priority list of European commercial satellite manufacturers. Current programs target orbit raising and station keeping with all-electric propulsion for telecom satellites. European space industry, represented by individual companies, has developed specific and generic solutions for the electronics dedicated to powering and controlling electric propulsion systems. The European Space Agency and the European Commission are providing support for enabling technology related to Power Processing Units (PPUs) and for increasing competitiveness.
Domain-specific modeling is increasingly adopted by the software development industry. While textual domain-specific languages (DSLs) already have a wide impact, graphical DSLs still need to live up to their full potential. Textual DSLs are usually generated from a grammar or other short textual notations; their development is often cost-efficient. In this paper, we describe an approach to similarly create graphical DSLs from textual notations. The paper describes an approach to generate a graphical node-and-edge online editor, using a set of carefully designed textual DSLs to fully describe graphical DSLs. Combined with an adequate metamodel, these textual definitions represent the input for a generator that produces a graphical editor for the web with features such as collaboration, online storage, and constant availability. The entire project is made available as open source under the name Zeta. This paper focuses on the overall approach and the description of the textual DSLs that can be used to develop graphical modeling languages and editors.
Domain-specific modelling is increasingly adopted in the software development industry. While textual domain-specific languages (DSLs) already have a wide impact, graphical DSLs still need to live up to their full potential. In this paper we describe an approach that reduces the time to create a graphical DSL to hours instead of months. The paper describes a generative approach to the creation of graphical editors for the Eclipse platform. A set of carefully designed textual DSLs together with an EMF meta-model are the input for the generator. The output is an Eclipse plugin for a graphical editor for the intended graphical language. The entire project is made available as open source under the name Spray and is being developed by an active community. This paper focuses on the description of the workflow and provides an introduction to the possibilities this approach opens up for a graphical modelling environment.
Domain-specific modeling is more and more understood as a serious alternative to classical software development. While textual domain-specific languages (DSLs) already have a massive impact, graphical DSLs still have to show their full potential. Established textual DSLs are normally generated from a domain-specific grammar or other specific textual descriptions; an advantage of textual DSLs is that they can be developed cost-efficiently.
In this paper, we describe a similar approach for the creation of graphical DSLs from textual descriptions. We present a set of specially developed textual DSLs to fully describe graphical DSLs based on node and edge diagrams. These are, together with an EMF meta-model, the input for a generator that produces an Eclipse-based graphical editor. The entire project is available as open source under the name MoDiGen.
Domain-specific modeling is increasingly adopted in the software development industry. While textual domain-specific languages (DSLs) already have a wide impact, graphical DSLs still need to live up to their full potential. In this paper, we describe an approach to automatically generate a graphical DSL from a set of textual languages. With our approach, node and edge type graphical DSLs can be described using textual models. A set of carefully designed textual DSLs is the input for our generators. The result of the generation is a graphical editor for the intended domain. The development time for a graphical editor is reduced significantly. The whole project is available as open source under the name "Zeta". This publication focuses on the explanation of the textual DSLs for defining a graphical node and edge editor.
Three-level inverters are used in electrical drive systems, as grid infeed inverters in PV power plants, or as active power line filters. Up to now, so-called hard-switching topologies have been used. A new 'Soft Switching Three Level Inverter (S3L Inverter)', which is now available, provides reduced switching losses and higher efficiency. In this paper the S3L inverter is compared with a hard-switching T-type inverter topology (H3L inverter). S3L inverters provide higher efficiency and additional advantages in electromagnetic compatibility due to the soft-switching performance, especially when using the 'Super Soft Switching Three Level Inverter (SS3L Inverter)'.
A method is investigated by which tight bounds on the range of a multivariate rational function over a box can be computed. The approach relies on the expansion of the numerator and denominator polynomials in Bernstein polynomials. Convergence of the bounds to the range with respect to degree elevation of the Bernstein expansion, to the width of the box and to subdivision are proven and the inclusion isotonicity of the related enclosure function is shown.
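A minimal sketch of the underlying idea for plain polynomials (the rational-function case, degree elevation, and subdivision from the paper are omitted): the Bernstein coefficients of a polynomial over [0,1] enclose its range, with equality at the interval endpoints.

```python
from math import comb

def bernstein_bounds(a):
    """Enclosure of p(x) = sum a[k] x^k over [0,1] from its Bernstein
    coefficients b_i = sum_{k<=i} C(i,k)/C(n,k) * a[k]:
    range(p) is contained in [min b_i, max b_i]."""
    n = len(a) - 1
    b = [sum(comb(i, k) / comb(n, k) * a[k] for k in range(i + 1))
         for i in range(n + 1)]
    return min(b), max(b)

# p(x) = x^2 - x has exact range [-0.25, 0] on [0,1];
# the Bernstein enclosure of degree 2 is [-0.5, 0].
lo, hi = bernstein_bounds([0.0, -1.0, 1.0])
```

Raising the degree of the expansion or subdividing the box tightens the enclosure toward the true range, which is the convergence result the paper proves.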
This paper considers intervals of real matrices with respect to partial orders and the problem to infer from some exposed matrices lying on the boundary of such an interval that all real matrices taken from the interval possess a certain property. In many cases such a property requires that the chosen matrices have an identically signed inverse. We also briefly survey related problems, e.g., the invariance of matrix properties under entry-wise perturbations.
A person’s heart rate is an important indicator of their health status. A heart rate that is too high or too low can be a sign of several diseases, such as a heart disorder, obesity, asthma, or many others. Many devices require users to wear them on the chest or to place a finger on the device. The approach presented in this paper describes the principle and implementation of a heart rate monitoring device which is able to detect the heart rate with high precision with a sensor integrated in a wristband. One method to measure the heart rate is the photoplethysmogram technique. This method measures the change of blood volume through the absorption or reflection of light. A light-emitting diode (LED) shines through a thin layer of tissue. A photodiode registers the intensity of the light that traverses the tissue or is reflected by it. Since the blood volume changes with each heartbeat, the photodiode detects more or less light from the LED. The device is able to measure the heart rate with high precision, has low computational and hardware requirements, and allows an implementation on small micro-controllers.
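A crude sketch of the rate-extraction step on a synthetic, idealised pulse wave (real PPG processing needs band-pass filtering and motion-artifact rejection; the sampling rate, window length, and waveform are assumptions):

```python
import numpy as np

fs = 50                                          # sampling rate [Hz]
t = np.arange(0, 30, 1 / fs)                     # 30 s analysis window
bpm_true = 72
ppg = np.sin(2 * np.pi * bpm_true / 60 * t)      # idealised pulse waveform

# Each heartbeat produces one pulse, so counting rising zero-crossings
# of the AC component over the window gives the rate in beats per minute.
ac = ppg - ppg.mean()
beats = int(np.sum((ac[:-1] < 0) & (ac[1:] >= 0)))
bpm = beats * 60 / 30
```

On a micro-controller the same counting logic runs on the sampled photodiode signal after a simple high-pass stage.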
This letter introduces signal constellations based on multiplicative groups of Eisenstein integers, i.e., hexagonal lattices. These sets of Eisenstein integers are proposed as signal constellations for generalized spatial modulation. The algebraic properties of the new constellations are investigated and a set partitioning technique is developed. This technique can be used to design coded modulation schemes over hexagonal lattices.
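The hexagonal-lattice structure behind the Eisenstein integers can be sketched as follows; the multiplicative-group constellations and the set-partitioning technique of the paper are not reproduced.

```python
import cmath

omega = cmath.exp(2j * cmath.pi / 3)             # primitive cube root of unity

# Eisenstein integers a + b*omega form the hexagonal lattice in the
# complex plane; the algebraic norm |a + b*omega|^2 equals a^2 - a*b + b^2.
points = [a + b * omega
          for a in range(-3, 4) for b in range(-3, 4)
          if a * a - a * b + b * b < 4]          # patch of squared norm < 4

# Nearest-neighbour distance of the unit hexagonal lattice is 1, which
# gives its favourable packing density compared to the square lattice.
dmin = min(abs(p - q) for p in points for q in points if p != q)
```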
In this paper we propose a method to determine the active speaker for each time-frequency point in the noisy signals of a microphone array. This detection is based on a statistical model where the speech signals as well as noise signals are assumed to be multivariate Gaussian random variables in the Fourier domain. Based on this model we derive a maximum-likelihood detector for the active speaker. The decision is based on the a posteriori signal to noise ratio (SNR) of a speaker dependent max-SNR beamformer.
TU Darmstadt HUMVIB-Bridge
(2016)
The simulation of the human-induced vibrations of lightweight footbridges is in general a complex problem where the dynamics of the pedestrian system meets the structural dynamics of the bridge. However, standard methods for the numerical analysis of pedestrian bridges deal with this issue by using simplified approaches. The structure is mostly represented either by discretised multi-mass systems or through a formulation in modal coordinates, while the excitation is typically described by a moving load.
Positive effects of the interaction between the two systems (pedestrian and structure) are usually completely neglected. This paper, which is partially extracted from a current research report of the Institute of Structural Mechanics and Design (TU Darmstadt), presents an experimental set-up developed for investigations of human-structure interaction (HSI), as well as results of the preliminary investigations carried out in this context.
The development of native user interface components is a time-consuming and repetitive process, especially for quite simple components like text fields in a form. In order to save time during development, an approach is presented in this paper that abstracts the description of the elements into separate files independent of the source code. With aspects from generative and model-driven approaches, this leads to simple reusable UI components without the need for deep knowledge of native programming languages.
Stress is becoming an important topic in modern life. The influence of stress results in a higher rate of health disorders such as burnout, heart problems, obesity, asthma, diabetes, depression, and many others. Furthermore, individuals' behavior and capabilities can be directly affected, leading to altered cognition and impaired decision-making and problem-solving skills. In a dynamic and unpredictable environment, such as the automotive domain, this can result in a higher risk of accidents. Several papers have addressed the estimation as well as the prediction of drivers' stress levels during driving. Another important question concerns not only the stress level of the driver himself, but also the influence on and of a group of other drivers in the vicinity. This paper proposes a system which identifies a group of drivers in the vicinity as clusters and derives the individual stress levels. This information is analyzed to generate a stress map, which gives a graphical view of road sections with higher stress influence. Aggregated data can be used to generate navigation routes with lower stress influence in order to decrease stress-influenced driving and improve road safety.
Sleep is an important aspect in the life of every human being. The average sleep duration for an adult is approximately 7 h per day. Sleep is necessary to regenerate the physical and psychological state of a human. Bad sleep quality has a major impact on the health status and can lead to different diseases. In this paper an approach is presented which uses long-term monitoring of vital data gathered by a body sensor during the day and the night, supported by a mobile application connected to an analyzing system, to estimate the sleep quality of its user as well as give recommendations to improve it in real time. Actimetry and historical data are used to improve the individual recommendations, based on common techniques from machine learning and big data analysis.
The aim of the paper is to present the simulation of the sweeping process based on a mathematical model that includes the drag force, the lift force, the sideway force, and gravity. At the beginning, a short history of street sweepers is presented, together with some considerations about the sweeping process and its parameters. Using the developed model, simulations of the trajectory of a spherical pebble are carried out in Matlab. The obtained results are presented graphically.
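A minimal sketch of such a trajectory simulation, written in Python rather than Matlab and reduced to gravity plus quadratic air drag (the paper's lift and sideway forces are omitted; all pebble and launch parameters are assumed):

```python
import math

g, rho_air, cd = 9.81, 1.2, 0.47        # gravity, air density, sphere drag coeff.
r, rho_p = 0.005, 2600.0                # pebble radius [m] and density [kg/m^3]
m = rho_p * 4 / 3 * math.pi * r**3      # pebble mass
A = math.pi * r**2                      # frontal area

x, y = 0.0, 0.0
vx, vy = 8.0, 3.0                       # assumed launch velocity [m/s]
dt, traj = 1e-4, [(x, y)]
while y >= 0.0:                         # explicit Euler until ground impact
    v = math.hypot(vx, vy)
    k = 0.5 * rho_air * cd * A * v / m  # drag acceleration per unit velocity
    vx -= k * vx * dt                   # drag opposes the velocity direction
    vy -= (g + k * vy) * dt
    x += vx * dt
    y += vy * dt
    traj.append((x, y))
```

Plotting `traj` reproduces the kind of graphical result the paper reports; the heavier forces of the full model would bend the curve further.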
Several possibilities for tests under load on a chassis dynamometer are presented. Consumption measurements according to standard driving cycles such as the New European Driving Cycle (NEDC) and the Worldwide harmonized Light vehicles Test Procedure/Cycle (WLTP/WLTC) require special attention to compliance with the regulations. The rotational masses of inertia and the velocity-dependent load have to match the required values. Load tests also allow the determination of the maximum acceleration in the current gear and of the slippage of the driven wheels.
The method of signal injection is investigated for the position estimation of proportional solenoid valves. A simple observer is proposed to estimate a position-dependent parameter, i.e. the eddy-current resistance, from which the position is calculated analytically. To this end, the relationship between position and impedance in the case of sinusoidal excitation is accurately described by means of classical electrodynamics. The observer approach is compared with a standard identification method and evaluated by practical experiments on an off-the-shelf proportional solenoid valve.
Sliding-mode observation with iterative parameter adaption for fast-switching solenoid valves
(2016)
Control of the armature motion of fast-switching solenoid valves is highly desired to reduce noise emission and wear of material. For feedback control, information of the current position and velocity of the armature are necessary. In mass production applications, however, position sensors are unavailable due to cost and fabrication reasons. Thus, position estimation by measuring merely electrical quantities is a key enabler for advanced control, and, hence, for efficient and robust operation of digital valves in advanced hydraulic applications. The work presented here addresses the problem of state estimation, i.e., position and velocity of the armature, by sole use of electrical measurements. The considered devices typically exhibit nonlinear and very fast dynamics, which makes observer design a challenging task. In view of the presence of parameter uncertainty and possible modeling inaccuracy, the robustness properties of sliding mode observation techniques are deployed here. The focus is on error convergence in the presence of several sources for modeling uncertainty and inaccuracy. Furthermore, the cyclic operation of switching solenoids is exploited to iteratively correct a critical parameter by taking into account the norm of the observation error of past switching cycles of the process. A thorough discussion on real-world experimental results highlights the usefulness of the proposed state observation approach.
We analyse the results of a finite element simulation of a macroscopic model which describes the movement of a crowd that is considered as a continuum. A new formulation based on the macroscopic model of Hughes [2] is given. We present a stable numerical algorithm by approximating with a viscosity solution. The fundamental setting is an arbitrary domain that can contain several obstacles, may have several entries, and must have at least one exit. All pedestrians have the goal of leaving the room as quickly as possible. Nobody prefers a particular exit.
In this paper, a gain-scheduled nonlinear control structure is proposed for a surface vessel, which takes advantage of extended linearisation techniques. Thereby, an accurate tracking of desired trajectories can be guaranteed that contributes to a safe and reliable water transport. The PI state feedback control is extended by a feedforward control based on an inverse system model. To achieve an accurate trajectory tracking, however, an observer-based disturbance compensation is necessary: external disturbances by cross currents or wind forces in lateral direction and wave-induced measurement disturbances are estimated by a nonlinear observer and used for a compensation. The efficiency and the achieved tracking performance are shown by simulation results using a validated model of the ship Korona at the HTWG Konstanz, Germany. Here, both tracking behaviour and rejection of disturbance forces in lateral direction are considered.
This work investigates data compression algorithms for applications in non-volatile flash memories. The main goal of the data compression is to minimize the amount of user data such that the redundancy of the error correction coding can be increased and the reliability of the error correction can be improved. A compression algorithm is proposed that combines a modified move-to-front algorithm with Huffman coding. The proposed data compression algorithm has low complexity, but provides a compression gain comparable to the Lempel-Ziv-Welch algorithm.
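The (unmodified, textbook) move-to-front transform at the heart of the proposed scheme can be sketched as follows; the paper's modification and the subsequent Huffman stage are not reproduced.

```python
def mtf_encode(data, alphabet_size=256):
    """Move-to-front transform: recently seen symbols get small indices,
    skewing the output distribution in favour of the Huffman stage."""
    table = list(range(alphabet_size))
    out = []
    for byte in data:
        idx = table.index(byte)
        out.append(idx)
        table.insert(0, table.pop(idx))     # move the symbol to the front
    return out

# Runs of repeated bytes collapse to runs of zeros:
encoded = mtf_encode(b"aaabbbaaa")
```

The resulting index stream is heavily biased toward small values, which is what makes a static or adaptive Huffman code effective with low complexity.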
In this paper totally nonnegative (positive) matrices are considered, which are matrices having all their minors nonnegative (positive); the almost totally positive matrices form a class between the totally nonnegative matrices and the totally positive ones. An efficient determinantal test based on the Cauchon algorithm for checking a given matrix for falling in one of these three classes of matrices is applied to matrices which are related to roots of polynomials and poles of rational functions, specifically the Hankel matrix associated with the Laurent series at infinity of a rational function and matrices of Hurwitz type associated with polynomials. In both cases it is concluded from properties of one or two finite sections of the infinite matrix that the infinite matrix itself has these or related properties. Then the results are applied to derive a sufficient condition for the Hurwitz stability of an interval family of polynomials. Finally, interval problems for a subclass of the rational functions, viz. R-functions, are investigated. These problems include invariance of exclusively positive poles and exclusively negative roots in the presence of variation of the coefficients of the polynomials within given intervals.
A real matrix is called totally nonnegative if all of its minors are nonnegative. In this paper the extended Perron complement of a principal submatrix in a matrix A is investigated. In extension of known results it is shown that if A is irreducible and totally nonnegative and the principal submatrix consists of some specified consecutive rows then the extended Perron complement is totally nonnegative. Also inequalities between minors of the extended Perron complement and the Schur complement are presented.
Intercultural management
(2016)
A case-based examination of issues in international management that helps students explore theory in the context of real-life practical situations. A focus on skills development prepares students for future careers in international management. Cases are drawn from a range of countries including Central and Eastern Europe as well as the Asian economies.