Refine
Year of publication
- 2016 (91)
Document Type
- Conference Proceeding (54)
- Article (21)
- Part of a Book (8)
- Other Publications (5)
- Book (1)
- Doctoral Thesis (1)
- Report (1)
Language
- English (91)
Keywords
- Abstract interpretation (1)
- Bernstein polynomial (1)
- Block codes (1)
- Body sensor networks (1)
- Business life-cycle (1)
- Business plan (3)
- Bytecode (1)
- Cauchon algorithm (1)
- Chassis dynamometer (1)
- Code Generation (1)
- Cognitive radio (1)
- Computational linguistics (1)
- Concatenated codes (1)
- Condition monitoring (1)
- Content analysis (2)
- Content analysis (keywords) (1)
- Corporate modelling (1)
- Correlation analysis (1)
- Damages (1)
- Data acquisition (1)
- Data compression (1)
- Data privacy (1)
- Decoding (1)
- Disturbance rejection (1)
- Diversity (1)
- Document image processing (1)
- Domain-Specific Language (DSL) (3)
- Domain-Specific-Language (DSL) (1)
- EMF (1)
- ERP systems (1)
- Eclipse Modeling Framework (EMF) (1)
- Electrocardiography (1)
- Electroencephalography (1)
- Electromagnetic actuators (2)
- Electromagnetic devices (1)
- Enterprise systems (1)
- Entry-wise perturbation (1)
- Error correction (1)
- Extended Perron complement (1)
- Extended linearisation (1)
- Factory automation (1)
- Fault diagnosis (1)
- Finite-element (1)
- Flash memories (1)
- Footbridges (1)
- Friction (1)
- Fuel consumption (1)
- Future tools and methods (1)
- Gain scheduling (1)
- Graphical Online Editor (1)
- Handwriting recognition (1)
- Hankel matrix (1)
- Hidden Markov models (1)
- Huffman codes (1)
- Hurwitz matrix (1)
- Hydraulic actuators (1)
- Hysteresis modeling (1)
- IT risk (1)
- IT service governance (1)
- Immutability (1)
- Industrial training (1)
- Internet (1)
- Joint building venture (1)
- Lake Constance (1)
- Learning (artificial intelligence) (1)
- Magnetic fields (1)
- Markov processes (1)
- Matlab (1)
- Matrix interval (1)
- Maximum likelihood estimation (1)
- Measurement methods (2)
- Measurements (1)
- Metamodel (1)
- Metamodel Definition (1)
- Metamodel Model-Driven Architecture (MDA) (1)
- Military land redevelopment (1)
- Mobile computing (1)
- Model-Driven Architecture (MDA) (2)
- Model-Driven Software Development (MDSD) (3)
- Model-Driven-Development (MDD) (1)
- Modelling (1)
- Motion estimation (1)
- Movement patterns (1)
- Multicast communication (1)
- NEDC (1)
- Natural frequency (1)
- Observers (1)
- Parameter estimation (3)
- Pedestrian (1)
- Perron complement (1)
- Prediction (1)
- Probability (1)
- Production management (1)
- Quality of service (1)
- R-function (1)
- Radio spectrum management (1)
- Range bounding (1)
- Rational function (1)
- Redundancy (1)
- Relation (1)
- Scala (1)
- Schur complement (1)
- Shadow IT (2)
- Shadow systems (1)
- Ship control (1)
- Sign regular matrix (1)
- Simulation (1)
- Sleep (1)
- Software radio (1)
- Spectral analysers (1)
- Speech acoustics (1)
- Street sweeper (1)
- Task allocation (1)
- Technology transfer (1)
- Telecommunication traffic (2)
- Timber bridges (1)
- Totally nonnegative matrix (3)
- Totally nonpositive matrix (1)
- Totally positive matrix (2)
- Trajectory tracking (1)
- Transaction cost economics (1)
- Transaction relations (2)
- Triangulation (1)
- Unscented Kalman Filter (1)
- Urban development pattern (1)
- Urbanity (1)
- Value network (1)
- Venture emergence (2)
- Vertex matrix (1)
- WLTC (1)
- WLTP (1)
- Wave filtering (1)
- Xtext (2)
- algebraic codes (1)
- modulation coding (1)
This work investigates data compression algorithms for applications in non-volatile flash memories. The main goal of the data compression is to minimize the amount of user data such that the redundancy of the error correction coding can be increased and the reliability of the error correction can be improved. A compression algorithm is proposed that combines a modified move-to-front algorithm with Huffman coding. The proposed data compression algorithm has low complexity, but provides a compression gain comparable to the Lempel-Ziv-Welch algorithm.
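The move-to-front stage of such a scheme can be sketched as follows; this is the classical transform, not the authors' modified variant, and the 256-entry byte table is an assumption. Recently seen bytes map to small indices, which a subsequent Huffman coder then encodes with short codewords.

```python
def mtf_encode(data: bytes) -> list:
    """Classical move-to-front transform over the 256 byte values."""
    table = list(range(256))
    out = []
    for b in data:
        i = table.index(b)      # position of the symbol in the recency list
        out.append(i)
        table.pop(i)
        table.insert(0, b)      # move the symbol to the front
    return out

def mtf_decode(indices) -> bytes:
    """Inverse transform: replay the same recency-list updates."""
    table = list(range(256))
    out = bytearray()
    for i in indices:
        b = table.pop(i)
        out.append(b)
        table.insert(0, b)
    return bytes(out)
```

On runs of repeated or recently seen bytes the output is dominated by small indices, a skew the Huffman stage exploits.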
In this paper, a gain-scheduled nonlinear control structure based on extended linearisation techniques is proposed for a surface vessel. It guarantees accurate tracking of desired trajectories and thereby contributes to safe and reliable water transport. The PI state feedback control is extended by a feedforward control based on an inverse system model. To achieve accurate trajectory tracking, however, an observer-based disturbance compensation is necessary: external disturbances caused by cross currents or lateral wind forces, as well as wave-induced measurement disturbances, are estimated by a nonlinear observer and used for compensation. The efficiency and the achieved tracking performance are demonstrated by simulation results using a validated model of the ship Korona at the HTWG Konstanz, Germany. Both tracking behaviour and the rejection of lateral disturbance forces are considered.
Sliding-mode observation with iterative parameter adaption for fast-switching solenoid valves
(2016)
Control of the armature motion of fast-switching solenoid valves is highly desired to reduce noise emission and material wear. For feedback control, information about the current position and velocity of the armature is necessary. In mass-production applications, however, position sensors are unavailable for cost and fabrication reasons. Thus, position estimation from purely electrical measurements is a key enabler for advanced control and, hence, for efficient and robust operation of digital valves in advanced hydraulic applications. The work presented here addresses the problem of state estimation, i.e., of the position and velocity of the armature, by sole use of electrical measurements. The considered devices typically exhibit nonlinear and very fast dynamics, which makes observer design a challenging task. In view of parameter uncertainty and possible modeling inaccuracy, the robustness properties of sliding-mode observation techniques are deployed here. The focus is on error convergence in the presence of several sources of modeling uncertainty and inaccuracy. Furthermore, the cyclic operation of switching solenoids is exploited to iteratively correct a critical parameter by taking into account the norm of the observation error over past switching cycles of the process. A thorough discussion of real-world experimental results highlights the usefulness of the proposed state observation approach.
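The robustness mechanism behind sliding-mode observation can be illustrated on a generic double integrator; this is a minimal sketch with assumed gains `k1`, `k2` and an assumed constant disturbance `d`, not the solenoid model used in the paper. The discontinuous output-injection terms drive the estimation error to the sliding surface despite the unmodeled disturbance.

```python
import numpy as np

def simulate_smo(T=2.0, dt=1e-3, k1=8.0, k2=60.0, d=0.3):
    """Sliding-mode observer for x1' = x2, x2' = u + d, with x1 measured.

    The observer copies the nominal model (without d) and adds the
    discontinuous corrections k1*sign(e1) and k2*sign(e1), where
    e1 is the measurable output error.
    """
    n = int(T / dt)
    x = np.array([1.0, 0.0])     # true state (position, velocity)
    xh = np.array([0.0, 0.0])    # observer estimate
    vel_err = np.zeros(n)
    for k in range(n):
        u = -2.0 * np.sin(0.5 * k * dt)           # known input
        x = x + dt * np.array([x[1], u + d])      # plant, Euler step
        e1 = x[0] - xh[0]                         # output estimation error
        xh = xh + dt * np.array([xh[1] + k1 * np.sign(e1),
                                 u + k2 * np.sign(e1)])
        vel_err[k] = abs(x[1] - xh[1])
    return vel_err

vel_err = simulate_smo()
```

On the sliding surface the discontinuous term implicitly reconstructs the disturbance, which is the kind of property exploited for iterative parameter correction.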
The method of signal injection is investigated for position estimation of proportional solenoid valves. A simple observer is proposed to estimate a position-dependent parameter, i.e. the eddy current resistance, from which the position is calculated analytically. Therefore, the relationship of position and impedance in the case of sinusoidal excitation is accurately described by consideration of classical electrodynamics. The observer approach is compared with a standard identification method, and evaluated by practical experiments on an off-the-shelf proportional solenoid valve.
Several possibilities for tests under load on a chassis dynamometer are presented. Consumption measurements according to standard driving cycles, such as the New European Driving Cycle (NEDC) and the Worldwide harmonized Light vehicles Test Procedure/Cycle (WLTP/WLTC), require particular attention to the observance of the regulations. The rotational inertia and the velocity-dependent load have to match the required values. Load tests also allow the determination of the maximum acceleration in the current gear and of the slippage of the driven wheels.
The aim of the paper is to present a simulation of the sweeping process based on a mathematical model that includes the drag force, the lift force, the sideway force, and gravity. It begins with a short history of street sweepers, some considerations about the sweeping process, and the parameters of the sweeping process. Using the developed model, simulations of the trajectory of a spherical pebble are carried out in Matlab. The obtained results are presented in graphical form.
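A minimal version of such a trajectory simulation (here in Python rather than Matlab) might look as follows; the pebble mass, diameter, launch velocity, and drag coefficient are assumed values, and the lift and sideway forces of the full model are omitted for brevity.

```python
import numpy as np

def pebble_trajectory(v0=(3.0, 1.0), dt=1e-3, rho=1.2, cd=0.47,
                      d=0.01, m=1.4e-3, g=9.81):
    """Euler integration of a spherical pebble's flight under quadratic
    air drag and gravity; v0 in m/s, diameter d in m, mass m in kg."""
    A = np.pi * (d / 2.0) ** 2            # frontal area of the sphere
    p = np.array([0.0, 0.05])             # launch point 5 cm above ground
    v = np.array(v0, dtype=float)
    path = [p.copy()]
    while p[1] > 0.0:                     # integrate until ground contact
        speed = np.linalg.norm(v)
        a_drag = -0.5 * rho * cd * A * speed * v / m
        v = v + dt * (a_drag + np.array([0.0, -g]))
        p = p + dt * v
        path.append(p.copy())
    return np.array(path)

path = pebble_trajectory()
```

The returned array of (x, y) points corresponds to the graphical trajectory plots described in the paper.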
Stress is becoming an important topic in modern life. It results in a higher rate of health disorders such as burnout, heart problems, obesity, asthma, diabetes, depression and many others. Furthermore, an individual's behavior and capabilities can be directly affected, leading to altered cognition and impaired decision-making and problem-solving skills. In a dynamic and unpredictable environment such as automotive driving, this can result in a higher risk of accidents. Several papers have addressed the estimation and prediction of a driver's stress level while driving. An equally important question is not only the stress level of the driver himself, but also the influence on and of a group of other drivers in the vicinity. This paper proposes a system that determines groups of nearby drivers as clusters and derives their individual stress levels. This information is analyzed to generate a stress map, a graphical view of road sections with a higher stress influence. The aggregated data can be used to generate navigation routes with a lower stress influence, in order to reduce stress-influenced driving and improve road safety.
Sleep is an important aspect in the life of every human being. The average sleep duration for an adult is approximately 7 h per day. Sleep is necessary to regenerate the physical and psychological state of a human, and bad sleep quality has a major impact on health status and can lead to various diseases. In this paper an approach is presented that uses long-term monitoring of vital data, gathered by a body sensor during the day and night and supported by a mobile application connected to an analyzing system, to estimate the sleep quality of its user and to give recommendations for improving it in real time. Actimetry and historical data are used to improve the individual recommendations, based on common techniques from machine learning and big data analysis.
Increasing robustness of handwriting recognition using character N-Gram decoding on large lexica
(2016)
Offline handwriting recognition systems often include a decoding step, i.e. retrieving the most likely character sequence from the underlying machine learning algorithm. Decoding is sensitive to ranges of weakly predicted characters, caused e.g. by obstructions in the scanned document. We present a new algorithm for robust decoding of handwriting recognizer outputs using character n-grams. Multidimensional hierarchical subsampling artificial neural networks with Long Short-Term Memory cells have been successfully applied to offline handwriting recognition. Output activations from such networks, trained with Connectionist Temporal Classification, can be decoded with several different algorithms in order to retrieve the most likely literal string they represent. We present a new algorithm for decoding the network output while restricting the possible strings to a large lexicon. The index used for this work is an n-gram index, with tri-grams used for the experimental comparisons. N-grams are extracted from the network output using a backtracking algorithm, and each n-gram is assigned a mean probability. The decoding result is obtained by intersecting the n-gram hit lists while calculating the total probability for each matched lexicon entry. We conclude with an experimental comparison of different decoding algorithms on a large lexicon.
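The hit-list intersection step can be sketched as follows; the index layout and the scoring by a product of mean probabilities are simplifications of the described method, and the `#` word-boundary marker is an assumption.

```python
from collections import defaultdict

def build_trigram_index(lexicon):
    """Map every trigram (with '#' word-boundary markers) to the set of
    lexicon entries that contain it."""
    index = defaultdict(set)
    for word in lexicon:
        padded = "#" + word + "#"
        for i in range(len(padded) - 2):
            index[padded[i:i + 3]].add(word)
    return index

def decode(ngram_probs, index):
    """Intersect the hit lists of the recognized trigrams and rank the
    surviving candidates by the product of the trigrams' probabilities."""
    candidates = None
    score = {}
    for tri, p in ngram_probs:
        hits = index.get(tri, set())
        candidates = set(hits) if candidates is None else candidates & hits
        for w in hits:
            score[w] = score.get(w, 1.0) * p
    if not candidates:
        return None
    return max(candidates, key=lambda w: score[w])
```

Each `(trigram, probability)` pair stands for one n-gram extracted from the network output by the backtracking step.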
The magneto-mechanical behavior of magnetic shape memory (MSM) materials has been investigated by means of different simulation and modeling approaches by several research groups. The target of this paper is to simulate actuators driven by MSM alloys and to understand the MSM element behavior during actuation, which shall lead to an increased performance of the actuator. It is shown that internal and external stresses should be taken into consideration using numerical computation tools for magnetic fields in an efficient way.
Stress is recognized as a predominant disease with rising costs for rehabilitation and treatment. Currently there are several different approaches that can be used for determining and calculating stress levels. Usually the methods for determining stress are divided into two categories. The first category does not require any special equipment for measuring stress; it uses the variation in behaviour patterns that occur under stress. The core disadvantage of this category is its limitation to specific use cases. The second category uses laboratory instruments and biological sensors. This category allows stress to be measured precisely and proficiently, but at the same time such setups are neither mobile nor transportable and do not support real-time feedback. This work presents a mobile system that provides the calculation of stress. To achieve this, the signal of a mobile ECG sensor is analysed, processed and visualised on a mobile system such as a smartphone. This work also explains the stress measurement algorithm used. The result of this work is a portable system that can use a mobile device such as a smartphone as a visual interface for reporting the current stress level.
Stress is recognized as a predominant disease with growing costs of treatment. The approach presented here aims to detect stress using a lightweight, mobile, cheap and easy-to-use system. The results show that stress can be detected even when a person's natural bio-vital data lies outside the normal range. The system enables storage of the measured data while maintaining communication channels for online and post-processing.
Navigation on the Danube
(2016)
This report contains two parts: The first part presents an overview of studies concerning the Danube, inland navigation, or the impact of climate change on either of those. The second part gives a more detailed analysis of inland navigation on the Danube, partly based on the studies presented in part one. Part two covers the current situation along the Danube, including bottlenecks and other limitations for shipping. Based on this information, an estimation of the economic impacts of low-water periods on inland navigation is made. As a last step, measures to reduce the impact of low water on inland navigation are presented. The report shows that inland navigation is still an important transport mode, along the Danube as well as in other European regions. Especially in Romania, inland navigation still holds a large and rising share of more than 20% of total transport. However, inland navigation depends strongly on good conditions of its infrastructure. These good conditions are limited mainly by two factors. One is the so-called bottlenecks: areas with sub-optimal shipping conditions, e.g. due to solid rock formations in the river that lead to reduced water depths. The other factor is the weather (and, on a longer time scale, the climate), which, depending mostly on precipitation and evaporation, can seasonally lead to low water levels. In addition to these two natural factors, there are laws which, for example, regulate the maximum number of barges allowed, and human-built structures like locks that limit the size of vessels as well as the speed they can travel at. These limiting factors are identified and located in the first chapter of part two of this report, before the water depth needed by several ship sizes as well as the cargo fleet available along the Danube are presented. One of the targets of this report is to estimate the economic impact of low-water periods.
All the factors named above, as well as the freight prices charged for connections along the Danube, are used to reach this target in chapter II.4. To estimate the impact of low-water periods on the freight prices, a method developed by Jonkeren et al. (2007) for the Rhine is transferred to the Danube. By transferring Jonkeren et al.'s (2007) method, regression equations for several transport connections along the Danube are identified that give a first estimate of the relation between freight prices and water levels. With the help of these regression equations, an estimation of the total expenses for transport via inland navigation for several years is possible. The yearly and seasonal variability is identified, as well as the additional expenses due to water levels below 280 cm. But additional expenses are not the only impact of changing water levels on inland navigation. Another is that, while the demand for transport stays at the same level, the water levels are sometimes not sufficient to use the full capacity of the fleet. Therefore, the (theoretical) amount of cargo that could not be transported due to low water levels is calculated as well and presented in chapter II.5. Finally, some measures to overcome some of the problems of inland navigation due to low water levels are presented. These are separated into two general approaches: change the ship or change the river. Both have their advantages and disadvantages due to technical as well as regulatory and other factors. The list presented here, however, is incomplete and only gives a few ideas of how some problems can be overcome. In the end, an individual mix must be found for the different regions along the river and sometimes for the individual companies.
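The regression idea can be illustrated on synthetic data (not the Jonkeren et al. data); the 280 cm threshold follows the report, while the price level and slope are invented for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic illustration: freight price per tonne rises once the water
# level drops below the 280 cm threshold mentioned in the report.
level = rng.uniform(150.0, 450.0, 300)          # water level in cm
shortfall = np.maximum(0.0, 280.0 - level)      # cm below the threshold
price = 10.0 + 0.04 * shortfall + rng.normal(0.0, 0.5, 300)

# Ordinary least squares: price = b0 + b1 * shortfall
X = np.column_stack([np.ones_like(shortfall), shortfall])
beta, *_ = np.linalg.lstsq(X, price, rcond=None)
```

With such a fit, the extra expense of a low-water period is read off the slope `beta[1]` (price increase per cm of shortfall below 280 cm).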
The corrosion resistance of stainless steels is massively influenced by the condition of their surface. The surface quality includes the topography of the surface, the structure and composition of the passive layer, and the near-surface structure of the base material. These factors are influenced by final physical/chemical surface treatments. The presented work shows significantly lower corrosion resistance for mechanically machined specimens than for etched specimens. It also turns out that the rougher the surface, the lower the corrosion resistance. However, there is no general finding as to whether blasted or ground surfaces are more appropriate; rather, the corrosion behavior depends on the process parameters and the characteristics of the corrosive exposure. The results show that not only the surface roughness Ra has an influence on corrosion behavior, but also the shape of the peaks and valleys produced by the surface treatments. Imperfections in the base material, such as sulfidic inclusions, lead to a weaker passive layer and hence to a decrease in corrosion resistance. By using special passivating techniques, the corrosion resistance of stainless steels can be increased beyond that achieved by common passivation.
Even though immutability is a desirable property, especially in a multi-threaded environment, implementing immutable Java classes is surprisingly hard because of a lack of language support. We present a static analysis tool using abstract bytecode interpretation that checks Java classes for compliance with a set of rules that together constitute state-based immutability. Being realized as a FindBugs plug-in, the tool can easily be integrated into most IDEs and hence into the software development process. Our evaluation on a large, real-world codebase shows that the average run-time effort for a single class is in the range of a few milliseconds, with only a few statistical outliers.
Smart factory and education
(2016)
The introduction of cyber-physical systems into production companies is profoundly changing working conditions, processes, and business models. In practice a growing discrepancy between large companies on the one hand and small and medium-sized companies on the other can be observed. To bridge that gap, a university smart factory is introduced that gives these companies a platform for trials, employee education, and access to consultancy. To realize the smart factory, a highly integrated, open, and standardized automation concept is shown, comprising single devices and production lines up to a higher-level automation system, sustained by a community and business models.
To learn from the past, we analyse 1,088 "computer as a target" judgements for evidential reasoning by extracting four case elements: decision, intent, fact, and evidence. Analysing the decision element is essential for studying the scale of sentence severity for cross-jurisdictional comparisons. Examining the intent element can facilitate future risk assessment. Analysing the fact element can enhance an organization's capability of analysing criminal activities for future offender profiling. Examining the evidence used against a defendant in previous judgements can facilitate the preparation of evidence for upcoming legal disclosure. Following the concepts of argumentation diagrams, we develop an automatic judgement summarizing system to enhance the accessibility of judgements and avoid repeating past mistakes. Inspired by the feasibility of extracting legal knowledge for argument construction and employing grounds of inadmissibility for probability assessment, we conduct evidential reasoning of kernel traces for forensic readiness. We integrate the narrative methods from attack graphs/languages for preventing confirmation bias, the argumentative methods from argumentation diagrams for constructing legal arguments, and the probabilistic methods from Bayesian networks for comparing hypotheses.
In this paper we provide a performance analysis framework for wireless industrial networks by deriving a service curve and a bound on the delay violation probability. For this purpose we use the (min,×) stochastic network calculus as well as a recently presented recursive formula for an end-to-end delay bound of wireless heterogeneous networks. The derived results are mapped to WirelessHART networks used in process automation and were validated via simulations. In addition to WirelessHART, our results can be applied to any wireless network whose physical layer conforms to the IEEE 802.15.4 standard and whose MAC protocol incorporates TDMA and channel hopping, such as ISA100.11a or TSCH-based networks. The provided delay analysis is especially useful during the network design phase, offering further research potential towards optimal routing and power management in QoS-constrained wireless industrial networks.
These days, computer analysis of ECG (electrocardiogram) signals is common. There are many real-time QRS recognition algorithms; one of them is the Pan-Tompkins algorithm, which detects the QRS complexes of ECG signals. The proposed algorithm analyses the heartbeat data stream based on the digital analysis of amplitude, bandwidth, and slope. In addition, after detecting the ECG signals, the stress algorithm compares whether the current heartbeat is similar to or different from the last heartbeat. The algorithm determines stress for the patient in real time by determining the RR interval. In order to implement the new algorithm with higher performance, the parallel programming platform CUDA is used. The algorithm uses different functions as beat detector and as beat classifier for stress.
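The beat-to-beat comparison can be sketched as follows; the naive threshold detector stands in for the full Pan-Tompkins pipeline, and the threshold and refractory period are assumed values.

```python
import numpy as np

def detect_r_peaks(sig, fs, thresh=0.6):
    """Naive R-peak detector: local maxima above a fixed fraction of the
    signal maximum, separated by a 250 ms refractory period."""
    level = thresh * sig.max()
    refractory = int(0.25 * fs)
    peaks, last = [], -refractory
    for i in range(1, len(sig) - 1):
        if (sig[i] >= level and sig[i] >= sig[i - 1]
                and sig[i] > sig[i + 1] and i - last >= refractory):
            peaks.append(i)
            last = i
    return np.array(peaks)

def rr_change(peaks, fs):
    """Relative change between consecutive RR intervals; a large jump
    between the current and the previous beat can be flagged as stress."""
    rr = np.diff(peaks) / fs
    return np.abs(np.diff(rr)) / rr[:-1]
```

On real ECG data, the per-beat comparison inside each loop iteration is the part that maps naturally onto parallel CUDA threads.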
Realistic traffic modeling plays a key role in efficient Dynamic Spectrum Access (DSA), which is considered an enabler for the deployment of wireless technologies in critical industrial automation applications (IAA). The majority of spectrum usage models are not suitable for this specific use case, as they are based on measurement campaigns conducted in urban or controlled laboratory environments. In this work we present a time-domain traffic model for industrial communication in the 2.4 GHz industrial, scientific, and medical (ISM) band based on measurements in an industrial automotive production site. As DSA is usually implemented on Software Defined Radios (SDR), our measurement campaign is based on SDR platforms rather than sophisticated spectrum analyzers. We show through the estimation of the Hurst parameter that industrial wireless traffic possesses inherent self-similarity that could be exploited for efficient DSA. We also show that the wireless traffic can be modeled as a semi-Markov model with channel on and off durations Log-normally and Pareto distributed, respectively. We finally estimate the parameters of the derived models using maximum likelihood estimation.
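A generator for such a two-state semi-Markov on/off model might look as follows; the distribution parameters are placeholders, not the values fitted from the measurement campaign.

```python
import numpy as np

rng = np.random.default_rng(1)

def onoff_traffic(n_cycles, on_mu=-1.0, on_sigma=0.8,
                  off_alpha=1.5, off_xm=0.01):
    """Alternating channel busy/idle durations: 'on' phases drawn from a
    log-normal distribution, 'off' phases from a Pareto distribution
    with scale off_xm. All parameters are illustrative placeholders."""
    on = rng.lognormal(mean=on_mu, sigma=on_sigma, size=n_cycles)
    off = off_xm * (1.0 + rng.pareto(off_alpha, size=n_cycles))
    return on, off

on, off = onoff_traffic(10000)
duty_cycle = on.sum() / (on.sum() + off.sum())   # fraction of time busy
```

A DSA scheduler would use the fitted versions of these distributions to predict idle periods long enough for opportunistic transmissions.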
ERP systems integrate a major part of all business processes, and organizations include them in their IT service management. Besides these formal systems, there are additional systems that are rather stand-alone and not included in the IT management tasks. These so-called 'shadow systems' also support business processes but hinder a high degree of enterprise integration. Shadow systems come to light through explicit detection or during software maintenance projects such as enhancements or release changes of enterprise systems. Organizations then have to decide whether and to what extent they integrate the identified shadow systems into their ERP systems. For this decision, organizations have to compare the capabilities of each identified shadow system with those of their ERP systems. Based on multiple case studies, we provide a dependency approach to enable this comparison. We derive categories for different stages of dependency and base insights into integration possibilities on these stages. Our results show that 64% of the shadow systems in our case studies are related to ERP systems, meaning that they share parts or all of their data and/or functionality with the ERP system. Our research contributes to the field of integration as well as to the discussion about shadow systems.
adidas and Reebok
(2016)
The present demographic change and a growing population of elderly people lead to new medical needs. Meeting these needs with state-of-the-art technology is consequently a rapidly growing market. This work is therefore aimed at taking modern concepts of mobile and sensor technology and putting them into a medical context. By measuring a user's vital signs with sensors whose data are processed on an Android smartphone, the target system is able to determine the current health state of the user and to visualize the gathered information. The system also includes a weather forecasting functionality, which alerts the user to potentially dangerous future meteorological events. All information is collected centrally and distributed to users based on their location. Further, the system can correlate the client-side measurement of vital signs with a server-side weather history. This enables personalized forecasting for each user individually. Finally, a portable and affordable application was developed that continuously monitors the health status via many vital sensors, all united on a common smartphone.
This paper proposes a soft input decoding algorithm and a decoder architecture for generalized concatenated (GC) codes. The GC codes are constructed from inner nested binary Bose-Chaudhuri-Hocquenghem (BCH) codes and outer Reed-Solomon codes. In order to enable soft input decoding for the inner BCH block codes, a sequential stack decoding algorithm is used. Ordinary stack decoding of binary block codes requires the complete trellis of the code. In this paper, a representation of the block codes based on the trellises of supercodes is proposed in order to reduce the memory requirements for the representation of the BCH codes. This enables an efficient hardware implementation. The results for the decoding performance of the overall GC code are presented. Furthermore, a hardware architecture of the GC decoder is proposed. The proposed decoder is well suited for applications that require very low residual error rates.
Many procedures for estimating the spool position in linear electromagnetic actuators using only voltage and current measurements can be found in the literature. They differ in the accuracy of the estimated spool position, some achieving better, some worse results. However, in almost every approach hysteresis has a huge impact on the estimation accuracy that can be achieved. Regardless of whether these effects are caused by magnetic or mechanical hysteresis, they will limit the accuracy of the position estimate if not taken into account. In this paper, a model is introduced which covers the hysteresis effects as well as other nonlinearities occurring in estimated position-dependent parameters. A classical Preisach model is deployed first, which is then adjusted by using novel elementary preceding relay operators. The resulting model for the estimated position-dependent parameters, including the adjusted Preisach model, can easily be applied to position estimation tasks. It is shown that the considered model distinctly improves the accuracy of the spool position estimate, while being kept as simple as possible for reasons of real-time implementation.
When mobile devices at the network edge want to communicate with each other, they too often depend on the availability of faraway resources. For direct communication, feasible user-friendly service discovery is essential. DNS Service Discovery over Multicast DNS (DNS-SD/mDNS) is widely used for configurationless service discovery in local networks, due in no small part to the fact that it is based on the well-established DNS, and efficient in small networks. In our research, we enhance DNS-SD/mDNS to provide versatility, user control, efficiency, and privacy, while maintaining its deployment simplicity and backward compatibility. These enhancements are necessary to make it a solid, flexible foundation for device communication at the edge of the Internet. In this paper, we focus on providing multi-link capabilities and scalable scopes for DNS-SD while being mindful of both user-friendliness and efficiency. We propose DNS-SD over Stateless DNS (DNS-SD/sDNS), a solution that allows configurationless service discovery in arbitrary self-named scopes, largely independent of the physical network layout, by leveraging our Stateless DNS technique and the Raft consensus algorithm.
A person's heart rate is an important indicator of their health status. A heart rate that is too high or too low can be a sign of several different diseases, such as a heart disorder, obesity, asthma, or many others. Many devices require users to wear the device on their chest or place a finger on the device. The approach presented in this paper describes the principle and implementation of a heart rate monitoring device, which is able to detect the heart rate with high precision using a sensor integrated in a wristband. One method to measure the heart rate is the photoplethysmogram technique, which measures the change of blood volume through the absorption or reflection of light. A light-emitting diode (LED) shines through a thin layer of tissue, and a photo-diode registers the intensity of the light that traverses the tissue or is reflected by it. Since the blood volume changes with each heartbeat, the photo-diode detects more or less light from the LED. The device is able to measure the heart rate with high precision, it has low performance and hardware requirements, and it allows an implementation with small micro-controllers.
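A simple way to turn such a photoplethysmogram into a pulse-rate estimate is to look for the dominant spectral peak in the physiologically plausible band; this frequency-domain sketch is an illustration, not necessarily the method used in the device, and the band limits are assumptions.

```python
import numpy as np

def heart_rate_from_ppg(ppg, fs):
    """Estimate the pulse rate as the dominant spectral peak of the PPG
    signal inside an assumed 0.7-3.5 Hz (42-210 bpm) band."""
    x = ppg - ppg.mean()                     # remove the DC component
    spec = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    band = (freqs >= 0.7) & (freqs <= 3.5)   # plausible pulse frequencies
    f_peak = freqs[band][np.argmax(spec[band])]
    return 60.0 * f_peak                     # beats per minute
```

Restricting the search band rejects the slow baseline drift caused by motion and respiration, which is why such an approach fits small micro-controllers.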
Three-level inverters are used in electrical drive systems, as grid-infeed inverters in PV power plants, or as active power line filters. Up to now, so-called hard-switching topologies have been used. A new 'Soft Switching Three Level Inverter (S3L Inverter)', which is now available, provides reduced switching losses and higher efficiency. In this paper the S3L inverter is compared with a hard-switching T-type inverter topology (H3L inverter). S3L inverters provide higher efficiency and additional advantages in electromagnetic compatibility due to their soft-switching behaviour, especially when using the 'Super Soft Switching Three Level Inverter (SS3L Inverter)'.
Domain-specific modelling is increasingly adopted in the software development industry. While textual domain-specific languages (DSLs) already have a wide impact, graphical DSLs still need to live up to their full potential. In this paper we describe an approach that reduces the time to create a graphical DSL to hours instead of months. The paper describes a generative approach to the creation of graphical editors for the Eclipse platform. A set of carefully designed textual DSLs together with an EMF meta-model are the input for the generator. The output is an Eclipse plug-in providing a graphical editor for the intended graphical language. The entire project is made available as open source under the name Spray and is being developed by an active community. This paper focuses on the description of the workflow and provides an introduction to the possibilities that this approach opens up for graphical modelling environments.
Domain-specific modeling is more and more understood as a viable alternative to classical software development. Textual domain-specific languages (DSLs) already have a massive impact; in contrast, graphical DSLs still have to show their full potential. Established textual DSLs are normally generated from a domain-specific grammar or other concise textual descriptions. An advantage of textual DSLs is that they can be developed cost-efficiently.
In this paper, we describe a similar approach for the creation of graphical DSLs from textual descriptions. We present a set of specially developed textual DSLs that fully describe graphical DSLs based on node and edge diagrams. These are, together with an EMF meta-model, the input for a generator that produces an Eclipse-based graphical editor. The entire project is available as open source under the name MoDiGen.
Domain-specific modeling is increasingly adopted by the software development industry. While textual domain-specific languages (DSLs) already have a wide impact, graphical DSLs still need to live up to their full potential. Textual DSLs are usually generated from a grammar or other short textual notations; their development is often cost-efficient. In this paper, we describe an approach to similarly create graphical DSLs from textual notations. The paper describes an approach to generate a graphical node and edge online editor, using a set of carefully designed textual DSLs to fully describe graphical DSLs. Combined with an adequate metamodel, these textual definitions represent the input for a generator that produces a web-based graphical editor with features such as collaboration, online storage, and constant availability. The entire project is made available as open source under the name Zeta. This paper focuses on the overall approach and the description of the textual DSLs that can be used to develop graphical modeling languages and editors.
The development of native user interface components is a time-consuming and repetitive process, especially for quite simple components such as text fields in a form. To save development time, this paper presents an approach that abstracts the description of the elements into separate files independent of the source code. Drawing on generative and model-driven approaches, this leads to simple, reusable UI components without requiring deep knowledge of native programming languages.
TU Darmstadt HUMVIB-Bridge
(2016)
The simulation of the human-induced vibrations of lightweight footbridges is in general a complex problem in which the dynamics of the pedestrian system meets the structural dynamics of the bridge. However, standard methods for the numerical analysis of pedestrian bridges deal with this issue by using simplified approaches: the structure is mostly represented either by discretised multi-mass systems or through a formulation in modal coordinates, while the excitation is typically described by a moving load.
Positive effects of the interaction between the two systems (pedestrian and structure) are usually completely neglected. This paper, which is partially extracted from a current research report of the Institute of Structural Mechanics and Design (TU Darmstadt), presents an experimental set-up developed for investigating human-structure interaction (HSI), as well as results of the preliminary investigations carried out in this context.
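The simplified "moving load in modal coordinates" approach that the abstract contrasts with HSI can be sketched as a single damped oscillator: the bridge is reduced to its first bending mode and the pedestrian to a harmonic force travelling across the span. All parameter values below are illustrative assumptions, not data from the study:

```python
# Moving harmonic load on the first bending mode of a simply supported span.
import math

L = 30.0              # span in m (assumed)
m = 500.0             # mass per unit length in kg/m (assumed)
f_n = 2.0             # first natural frequency in Hz (assumed)
zeta = 0.01           # modal damping ratio (assumed)
v = 1.5               # walking speed in m/s (assumed)
F0, f_p = 280.0, 2.0  # pedestrian force amplitude (N) and pacing rate (Hz)

omega = 2 * math.pi * f_n
modal_mass = m * L / 2  # first-mode generalised mass of a sine mode shape
dt = 0.001
q = qd = 0.0            # modal coordinate (midspan deflection) and velocity
peak = 0.0
for i in range(int(L / v / dt)):      # until the pedestrian leaves the span
    t = i * dt
    # Harmonic load weighted by the mode shape at the walker's position x = v t.
    phi = math.sin(math.pi * v * t / L)
    F = F0 * math.sin(2 * math.pi * f_p * t) * phi
    # Modal equation of motion: q'' + 2*zeta*omega*q' + omega^2*q = F / modal_mass
    qdd = F / modal_mass - 2 * zeta * omega * qd - omega ** 2 * q
    qd += qdd * dt      # semi-implicit Euler step
    q += qd * dt
    peak = max(peak, abs(q))
```

Because the pacing rate matches the natural frequency here, the response builds up resonantly during the crossing; it is exactly this one-way excitation picture, with no feedback from the structure to the pedestrian, that neglects the interaction effects the paper investigates.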
In this paper, we propose a method to determine the active speaker for each time-frequency point in the noisy signals of a microphone array. The detection is based on a statistical model in which the speech signals as well as the noise signals are assumed to be multivariate Gaussian random variables in the Fourier domain. Based on this model, we derive a maximum-likelihood detector for the active speaker. The decision is based on the a-posteriori signal-to-noise ratio (SNR) of a speaker-dependent max-SNR beamformer.
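The decision rule described above can be sketched as follows: for one time-frequency bin, form a max-SNR beamformer per candidate speaker and pick the speaker whose beamformer output has the highest a-posteriori SNR. The covariance matrices, steering vectors, and noise level are toy assumptions, not the paper's data:

```python
# Per-bin active-speaker detection with speaker-dependent max-SNR beamformers.
import numpy as np

rng = np.random.default_rng(0)
M = 4  # microphones (assumed)

def max_snr_weights(R_s, R_n):
    """Max-SNR beamformer: principal eigenvector of inv(R_n) @ R_s."""
    vals, vecs = np.linalg.eig(np.linalg.inv(R_n) @ R_s)
    return vecs[:, np.argmax(vals.real)]

def detect_speaker(y, speaker_covs, R_n):
    """Pick the speaker maximising the a-posteriori SNR |w^H y|^2 / (w^H R_n w)."""
    snrs = []
    for R_s in speaker_covs:
        w = max_snr_weights(R_s, R_n)
        out = np.vdot(w, y)                      # beamformer output w^H y
        snrs.append(np.abs(out) ** 2 / np.real(np.vdot(w, R_n @ w)))
    return int(np.argmax(snrs))

# Toy bin: two speakers with distinct steering vectors, spatially white noise.
d1 = np.exp(1j * np.pi * np.arange(M) * 0.3)
d2 = np.exp(1j * np.pi * np.arange(M) * -0.5)
R_n = np.eye(M)
covs = [np.outer(d1, d1.conj()), np.outer(d2, d2.conj())]
y = 3.0 * d2 + 0.1 * (rng.standard_normal(M) + 1j * rng.standard_normal(M))
who = detect_speaker(y, covs, R_n)
```

With rank-one speaker covariances and white noise this reduces to matched filtering against each steering vector; the general formulation above also covers full-rank, coloured-noise models.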
This letter introduces signal constellations based on multiplicative groups of Eisenstein integers, i.e., hexagonal lattices. These sets of Eisenstein integers are proposed as signal constellations for generalized spatial modulation. The algebraic properties of the new constellations are investigated and a set partitioning technique is developed. This technique can be used to design coded modulation schemes over hexagonal lattices.
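The hexagonal geometry behind such constellations comes from the Eisenstein integers a + b·ω with ω = exp(2πi/3). As a toy illustration (not the letter's actual construction), the sketch below implements Eisenstein-integer multiplication and generates the six units, which form a multiplicative group and sit on a hexagon in the complex plane:

```python
# Eisenstein integers (a, b) represent a + b*omega, omega = exp(2j*pi/3).
import cmath

omega = cmath.exp(2j * cmath.pi / 3)

def to_complex(a, b):
    """Embed the Eisenstein integer a + b*omega into the complex plane."""
    return a + b * omega

def norm(a, b):
    """Algebraic norm N(a + b*omega) = a^2 - a*b + b^2 (always a non-negative integer)."""
    return a * a - a * b + b * b

def mul(x, y):
    """Multiply Eisenstein integers, using the relation omega^2 = -1 - omega."""
    a1, b1 = x
    a2, b2 = y
    return (a1 * a2 - b1 * b2, a1 * b2 + b1 * a2 - b1 * b2)

# The unit group is cyclic of order 6, generated by g = -omega = (0, -1).
g = (0, -1)
units, x = [], (1, 0)
for _ in range(6):
    x = mul(x, g)
    units.append(x)
```

Every unit has norm 1 and magnitude 1 in the complex plane, i.e. the units are the sixth roots of unity; larger multiplicative groups of residues modulo an Eisenstein prime give the bigger hexagonal constellations that the set-partitioning technique then splits into cosets.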
Corrosion
(2016)
Purpose: The purpose of this paper is to identify tourism movement patterns by tracking tourists with the help of positioning systems such as GPS in the rural Lake Constance destination in Germany. In doing so, the past, present and future of tourist tracking are illustrated.
Design/methodology/approach: The tracking is realised via common smartphones extended by an app, via dedicated sensors such as position loggers, and via a survey. The three different approaches are applied in order to compare and cross-check results (triangulation of data and methods).
Findings: Movement patterns turned out to be diverse and individualistic within the rural destination of Lake Constance, while following an ant trail in sub-destinations such as the city of Constance. Repeat visitors and first-time visitors alike always visit the bigger cities and the main day-trip destinations of the lake. A possible prediction tool would open new avenues for governing tourism movement patterns.
Research limitations/implications: The tracking techniques can be developed further in the direction of the "quantified self", using gamification to make the tracking app even more attractive.
Practical implications: An algorithm-based prediction tool would offer new perspectives for the management of tourism movements.
Social implications: Further research is needed to overcome the perceived invasiveness of the app so that tracking with this approach is accepted.
Originality/value: This study is original and innovative because of the first-time use of a smartphone app in tourist tracking, its application to a rural destination and the conceptual description of a prediction tool.
Software startups
(2016)
Software startup companies develop innovative, software-intensive products within limited time frames and with few resources, searching for sustainable and scalable business models. Software startups are quite distinct from traditional mature software companies, but also from micro-, small-, and medium-sized enterprises, introducing new challenges relevant for software engineering research. This paper’s research agenda focuses on software engineering in startups, identifying, in particular, more than 70 research questions in the areas of supporting startup engineering activities, startup evolution models and patterns, ecosystems and innovation hubs, human aspects in software startups, applying startup concepts in non-startup environments, and methodologies and theories for startup research. We connect and motivate this research agenda with past studies in software startup research, while pointing out possible future directions. While all authors of this research agenda have their main background in Software Engineering or Computer Science, their interest in software startups broadens the perspective to the challenges, but also to the opportunities, that emerge from multi-disciplinary research. Our audience is therefore primarily software engineering researchers, even though we aim at stimulating collaborations and research that crosses disciplinary boundaries. We believe that this research agenda covers a wide spectrum of the software startup industry’s current needs.
Nowadays there is a rich diversity of sleep monitoring systems available on the market. They promise to offer information about the sleep quality of the user by recording a limited number of vital signals, mainly heart rate and body movement. Typically, fitness trackers, smart watches, smart shirts, smartphone applications or patches do not provide access to the raw sensor data. Moreover, the sleep classification algorithm and its agreement ratio with the gold standard, polysomnography (PSG), are not disclosed. Some commercial systems record and store the data on the wearable device, but the user needs to transfer and import it into specialised software applications, or return it to the doctor for clinical evaluation of the data set. Thus, an immediate feedback mechanism and the possibility of remote control and supervision are lacking. Furthermore, many such systems only distinguish between sleep and wake states, or between wake, light sleep and deep sleep. It is not always clear how these stages are mapped to the four known sleep stages: REM, NREM1, NREM2, and NREM3-4 [1]. The goal of this research is to find a reduced-complexity method to process a minimum number of vital bio-signals while providing accurate sleep classification results. The model we propose offers remote control and real-time supervision capabilities by using Internet of Things (IoT) technology. This paper focuses on the data processing method and the sleep classification logic. The body sensor network representing our data acquisition system will be described in a separate publication. Our solution showed promising results and a good potential to overcome the limitations of existing products. Further improvements will be made, and subjects of different ages and health conditions will be tested.