Talk at the doctoral colloquium of the Kooperatives Promotionskolleg of the HTWG, 09.07.2015
We consider classes of n-by-n sign regular matrices, i.e., of matrices with the property that all their minors of fixed order k have one specified sign or are allowed also to vanish, k = 1, ..., n. If the sign is nonpositive for all k, such a matrix is called totally nonpositive. The application of the Cauchon algorithm to nonsingular totally nonpositive matrices is investigated and a new determinantal test for these matrices is derived. Also matrix intervals with respect to the checkerboard ordering are considered. This order is obtained from the usual entry-wise ordering on the set of the n-by-n matrices by reversing the inequality sign for each entry in a checkerboard fashion. For some classes of sign regular matrices, it is shown that if the two bound matrices of such a matrix interval are both in the same class, then all matrices lying between these two bound matrices are in the same class, too.
Model Order Reduction
(2015)
This chapter offers an introduction to Model Order Reduction (MOR). It gives an overview of the methods that are mostly used. It also describes the main concepts behind the methods and the properties that are aimed to be preserved. The sections are in a preferred order for reading, but can be read independently. Section 4.1, written by Michael Striebel, E. Jan W. ter Maten, Kasra Mohaghegh and Roland Pulch, overviews the basic material for MOR and its use in circuit simulation. Issues like stability, passivity, structure preservation and realizability are discussed. Projection-based MOR methods include Krylov-space methods (like PRIMA and SPRIM) and POD methods. Truncation-based MOR includes Balanced Truncation, Poor Man's TBR and Modal Truncation. Section 4.2, written by Joost Rommes and Nelson Martins, focuses on Modal Truncation. Here eigenvalues are the starting point. The eigenvalue problems related to large-scale dynamical systems are usually too large to be solved completely. The algorithms described in this section are efficient and effective methods for the computation of a few specific dominant eigenvalues of these large-scale systems. It is shown how these algorithms can be used for computing reduced-order models with modal approximation and Krylov-based methods. Section 4.3, written by Maryam Saadvandi and Joost Rommes, concerns passivity-preserving model order reduction using the spectral zero method. It discusses in detail two algorithms, one by Antoulas and one by Sorensen. These two approaches are based on a projection method that selects spectral zeros of the original transfer function to produce a reduced transfer function that has the specified roots as its spectral zeros. The reduced model preserves passivity. Section 4.4, written by Roxana Ionutiu, Joost Rommes and Athanasios C. Antoulas, refines the spectral zero MOR method to dominant spectral zeros.
The new model reduction method for circuit simulation preserves passivity by interpolating dominant spectral zeros. These are computed as poles of an associated Hamiltonian system, using an iterative solver: the subspace accelerated dominant pole algorithm (SADPA). Based on a dominance criterion, SADPA finds relevant spectral zeros and the associated invariant subspaces, which are used to construct the passivity-preserving projection. RLC netlist equivalents for the reduced models are provided. Section 4.5, written by Roxana Ionutiu and Joost Rommes, deals with the synthesis of a reduced model: reformulating it as a netlist for a circuit. A framework for model reduction and synthesis is presented, which greatly enlarges the options for the re-use of reduced order models in circuit simulation by simulators of choice. Especially when model reduction exploits structure preservation, we show that using the model as a current-driven element is possible, and allows for synthesis without controlled sources. Two synthesis techniques are considered: (1) by means of realizing the reduced transfer function into a netlist and (2) by unstamping the reduced system matrices into a circuit representation. The presented framework serves as a basis for reduction of large parasitic R/RC/RCL networks.
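The modal truncation idea mentioned in Section 4.2 can be sketched in a few lines. The toy below is illustrative only, not the chapter's implementation: it assumes the system is already in modal (diagonal) form x_i' = λ_i x_i + b_i u, y = Σ c_i x_i, and uses |c_i b_i| / |Re λ_i| as the dominance measure, which is one common choice.

```python
# Modal truncation sketch: keep the k most dominant modes of a system that
# is already diagonalized (poles, input residues b, output residues c).

def modal_truncation(poles, b, c, k):
    """Return poles/b/c of the k modes with largest |c_i*b_i| / |Re(lambda_i)|."""
    dominance = [abs(bi * ci) / abs(p.real) for p, bi, ci in zip(poles, b, c)]
    order = sorted(range(len(poles)), key=lambda i: -dominance[i])[:k]
    keep = sorted(order)  # preserve the original mode ordering
    return ([poles[i] for i in keep],
            [b[i] for i in keep],
            [c[i] for i in keep])

# A fast, weakly observed mode (-100) is discarded in favour of the slow,
# lightly damped dominant pair.
poles = [-0.1 + 5j, -0.1 - 5j, -100.0]
b = [1.0, 1.0, 1.0]
c = [1.0, 1.0, 0.5]
rp, rb, rc = modal_truncation(poles, b, c, 2)
```

For non-diagonal systems the hard part, as the section explains, is computing a few dominant eigentriples of a large matrix pencil in the first place; this sketch only shows the selection step.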
The improvement of collision avoidance for vessels in close-range encounter situations is an important topic for maritime traffic safety. Typical approaches generate evasive trajectories or optimise the trajectories of all involved vessels. Such a collision avoidance system has to produce evasive manoeuvres that do not confuse other navigators. To achieve this behaviour, a probabilistic obstacle handling is proposed that is based on information from a radar sensor with target tracking and considers measurement and tracking uncertainties. A grid-based path search algorithm that takes the information from the probabilistic obstacle handling into account is then used to generate evasive trajectories. The proposed algorithms have been tested and verified in a simulated environment for inland waters.
Motion safety for vessels
(2015)
The improvement of collision avoidance for vessels in close-range encounter situations is an important topic for maritime traffic safety. Typical approaches generate evasive trajectories or optimise the trajectories of all involved vessels. The idea of this work is to validate these trajectories with respect to guaranteed motion safety, which means that it is not sufficient for a trajectory to be collision-free: it must additionally ensure that an evasive manoeuvre is performable at any time. An approach using the distance and the evolution of the distance to the other vessels is proposed. The concept of Inevitable Collision States (ICS) is adopted to identify the states for which no evasive manoeuvre exists. Furthermore, it is implemented into a collision avoidance system for recreational crafts to demonstrate the performance.
Knowing the position of the spool in a solenoid valve without using costly position sensors is of considerable interest in many industrial applications. In this paper, the problem of position estimation based on state observers for fast-switching solenoids, using only simple voltage and current measurements, is investigated. Due to the short spool traveling time in fast-switching valves, convergence of the observer errors has to be achieved very fast. Moreover, the observer has to be robust against modeling uncertainties and parameter variations. Therefore, different state observer approaches are investigated and compared to each other regarding possible uncertainties. The investigation covers a High-Gain-Observer approach, a combined High-Gain Sliding-Mode-Observer approach, both based on extended linearization, and a nonlinear Sliding-Mode-Observer based on equivalent output injection. The results are discussed by means of numerical simulations for all approaches, and finally physical experiments on a valve mock-up are thoroughly discussed for the nonlinear Sliding-Mode-Observer.
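To illustrate the high-gain observer principle referenced above, here is a minimal sketch on a stand-in plant (a harmonic oscillator, not the solenoid model; the gains l1, l2 and the small parameter eps are assumptions chosen for the demo). The observer injects the output error with gains scaled by 1/eps and 1/eps², so the estimation error converges much faster than the plant dynamics.

```python
# Plant:    x1' = x2,  x2' = -x1,   measured output y = x1.
# Observer: xh1' = xh2 + (l1/eps)   * (y - xh1)
#           xh2' = -y  + (l2/eps**2) * (y - xh1)
# Forward-Euler simulation; the observer starts with a wrong initial state.

def simulate(eps=0.05, l1=2.0, l2=1.0, dt=0.001, steps=2000):
    x1, x2 = 1.0, 0.0      # true state
    xh1, xh2 = 0.0, 0.0    # observer state, deliberately wrong
    for _ in range(steps):
        y = x1
        e = y - xh1        # output injection error
        x1, x2 = x1 + dt * x2, x2 - dt * x1
        xh1, xh2 = (xh1 + dt * (xh2 + (l1 / eps) * e),
                    xh2 + dt * (-y + (l2 / eps ** 2) * e))
    return (x1, x2), (xh1, xh2)

true_state, est_state = simulate()
```

With l1 = 2, l2 = 1 the error dynamics have a double pole at -1/eps, so after two seconds of simulated time the estimate has converged to the true state; the trade-off (not shown) is the well-known peaking and noise sensitivity of large gains.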
A semilinear distributed parameter approach for solenoid valve control including saturation effects
(2015)
In this paper a semilinear parabolic PDE for the control of solenoid valves is presented. The distributed parameter model of the cylinder becomes nonlinear by the inclusion of saturation effects due to the material's B/H-curve. A flatness based solution of the semilinear PDE is shown as well as a convergence proof of its series solution. By numerical simulation results the adaptability of the approach is demonstrated, and differences between the linear and the nonlinear case are discussed. The major contribution of this paper is the inclusion of saturation effects into the magnetic field governing linear diffusion equation, and the development of a flatness based solution for the resulting semilinear PDE as an extension of previous works [1] and [2].
Classification of point clouds by different types of geometric primitives is an essential part of the reconstruction process of CAD geometry. We use support vector machines (SVM) to label patches in point clouds with the class labels tori, ellipsoids, spheres, cones, cylinders or planes. For the classification, features based on different geometric properties like point normals, angles, and principal curvatures are used. These geometric features are estimated in the local neighborhood of a point of the point cloud. Computing these geometric features for a random subset of the point cloud yields a feature distribution. Different features are combined to achieve the best classification results. To minimize the time-consuming training phase of SVMs, the geometric features are first evaluated using linear discriminant analysis (LDA).
LDA and SVM are machine learning approaches that require an initial training phase to allow for a subsequent automatic classification of a new data set. For the training phase, point clouds are generated using a simulation of a laser scanning device. Additional noise based on a laser scanner error model is added to the point clouds. The resulting LDA and SVM classifiers are then used to classify geometric primitives in simulated and real laser-scanned point clouds.
Compared to other approaches, where all known features are used for classification, we explicitly compare novel against known geometric features to prove their effectiveness.
This chapter introduces parameterized, or parametric, Model Order Reduction (pMOR). The sections are offered in a preferred order for reading, but can be read independently. Section 5.1, written by Jorge Fernández Villena, L. Miguel Silveira, Wil H.A. Schilders, Gabriela Ciuprina, Daniel Ioan and Sebastian Kula, overviews the basic principles of pMOR. Due to higher integration and increasing frequency-based effects, large, full Electromagnetic Models (EM) are needed for accurate prediction of the real behavior of integrated passives and interconnects. Furthermore, these structures are subject to parametric effects due to small variations of the geometric and physical properties of the inherent materials and manufacturing process. Accuracy requirements lead to huge models, which are expensive to simulate, and this cost increases when parameters and their effects are taken into account. This section introduces the framework of pMOR, which aims at generating reduced models for systems depending on a set of parameters.
We present a 3D laser scan simulation in virtual reality for creating synthetic scans of CAD models. Consisting of the virtual reality head-mounted display Oculus Rift and the motion controller Razer Hydra, our system can be used like common hand-held 3D laser scanners. It supports scanning of triangular meshes as well as B-spline tensor product surfaces based on high-performance ray-casting algorithms. While point clouds of known scanning simulations lack the man-made structure, our approach overcomes this problem by imitating real scanning scenarios. Calculation speed, interactivity and the resulting realistic point clouds are the benefits of this system.
Reconstruction of hand-held laser scanner data is used in industry primarily for reverse engineering. Traditionally, scanning and reconstruction are separate steps. The operator of the laser scanner has no feedback from the reconstruction results. On-line reconstruction of the CAD geometry allows for such an immediate feedback.
We propose a method for on-line segmentation and reconstruction of CAD geometry from a stream of point data based on means that are updated on-line. These means are combined to define complex local geometric properties, e.g., radii and center points of spherical regions. Using means of local scores, planar, cylindrical, and spherical segments are detected and extended robustly with region growing. For the on-line computation of the means we use so-called accumulated means. They allow for on-line insertion and removal of values and merging of means. Our results show that this approach can be performed on-line and is robust to noise. We demonstrate that our method reconstructs spherical, cylindrical, and planar segments on real scan data containing typical errors caused by hand-held laser scanners.
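The "accumulated means" device can be sketched compactly: a mean stored as running sum and count supports O(1) insertion, removal, and merging. The interface below is an assumption for illustration, not the paper's actual data structure.

```python
# Accumulated mean: a mean that can be updated on-line by inserting or
# removing values, and two means can be merged (e.g., when growing regions).

class AccumulatedMean:
    def __init__(self, total=0.0, count=0):
        self.total, self.count = total, count

    def insert(self, value):
        self.total += value
        self.count += 1

    def remove(self, value):
        # drop a previously inserted value, e.g. when a point leaves a region
        self.total -= value
        self.count -= 1

    def merge(self, other):
        # combine the statistics of two regions without revisiting points
        return AccumulatedMean(self.total + other.total,
                               self.count + other.count)

    @property
    def mean(self):
        return self.total / self.count if self.count else 0.0

m = AccumulatedMean()
for v in (1.0, 2.0, 3.0):
    m.insert(v)
```

In practice the same trick extends to higher moments (sums of squares, cross products), which is what makes curvature-like local scores updatable on-line as well.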
This contribution presents a data compression scheme for applications in non-volatile flash memories. The objective of the data compression algorithm is to reduce the amount of user data such that the redundancy of the error correction coding can be increased in order to improve the reliability of the data storage system. The data compression is performed on block level considering data blocks of 1 kilobyte. We present an encoder architecture that has low memory requirements and provides a fast data encoding.
Codes over quotient rings of Lipschitz integers have recently attracted some attention. This work investigates the performance of Lipschitz integer constellations for transmission over the AWGN channel by means of the constellation figure of merit. A construction of sets of Lipschitz integers that leads to a better constellation figure of merit compared to ordinary Lipschitz integer constellations is presented. In particular, it is demonstrated that the concept of set partitioning can be applied to quotient rings of Lipschitz integers where the number of elements is not a prime number. It is shown that it is always possible to partition such quotient rings into additive subgroups in a manner that the minimum Euclidean distance of each subgroup is strictly larger than in the original set. The resulting signal constellations have a better performance for transmission over an additive white Gaussian noise channel compared to Gaussian integer constellations and to ordinary Lipschitz integer constellations. In addition, we present multilevel code constructions for the new signal constellations.
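The two quantities driving the comparison above are easy to compute for any finite constellation: the minimum squared Euclidean distance and a figure of merit normalising it by the average symbol energy. The sketch below uses d_min²/E_avg as the merit; the paper's exact CFM normalisation may differ, and the 5-point example (Gaussian integers modulo 2+i) is an illustration, not taken from the paper.

```python
# Minimum squared Euclidean distance and a simple figure of merit
# d_min^2 / E_avg for a finite complex-valued signal constellation.

def d_min_squared(points):
    return min(abs(p - q) ** 2
               for i, p in enumerate(points)
               for q in points[i + 1:])

def figure_of_merit(points):
    energy = sum(abs(p) ** 2 for p in points) / len(points)
    return d_min_squared(points) / energy

# Example: the 5-element constellation {0, 1, -1, i, -i}
# (Gaussian integers modulo 2+i); d_min^2 = 1, E_avg = 0.8.
full = [0, 1, -1, 1j, -1j]
```

Set partitioning in this framework means splitting such a set into additive subgroups whose internal d_min is strictly larger than that of the full set, which raises the merit of each level of a multilevel code.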
This work proposes an efficient hardware implementation of sequential stack decoding of binary block codes. The decoder can be applied for soft input decoding of generalized concatenated (GC) codes. The GC codes are constructed from inner nested binary Bose-Chaudhuri-Hocquenghem (BCH) codes and outer Reed-Solomon (RS) codes. In order to enable soft input decoding for the inner BCH block codes, a sequential stack decoding algorithm is used.
Nowadays, many corporations have a continuous need to renew their business portfolio strategically in anticipation of changes in the business environment (e.g., technological change). The ongoing boom in founding international start-ups suggests that small entrepreneurial teams are an effective means to develop new businesses. Corporations should be able to benefit from this form of self-organized innovation when entering novel business domains for strategic renewal. However, corporations that establish small entrepreneurial teams (corporate ventures) face two obstacles. First, corporate ventures often fail for reasons that are not well explored. Second, it remains unclear how partial successes may be turned into large successes. Although the key success factors remain ambiguous, there is little hope that corporate ventures will be successful without effective management. Since an empirical model for corporate venture management does not exist so far, the thesis formulates and answers the following problem statement: How can corporate management effectively manage corporate ventures? Building on qualitative and quantitative research methodologies, a model for effective corporate venture management is developed and tested statistically in the German IT consulting industry. The research results reveal some of the essential management principles through which corporate management can increase corporate venture success systematically.
Domain-specific modelling is increasingly adopted in the software development industry. While open source metamodels like Ecore have a wide impact, they still have some problems. The independent storage of nodes (classes) and edges (references) is currently only possible with complex, specific solutions. Furthermore, the developed models are stored in the extensible markup language (XML) data format, which leads to scaling problems with large models. In this paper we describe an approach that solves the problem of independent classes and references in metamodels, and we store the models in the JavaScript Object Notation (JSON) data format to support high scalability. First results of our tests show that the developed approach works and that classes and references can be defined independently. In addition, our approach reduces the number of characters per model by a factor of approximately two compared to Ecore. The entire project is made available as open source under the name MoDiGen. This paper focuses on the description of the metamodel definition in terms of scaling.
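The storage idea of keeping classes and references independent can be sketched as a JSON document with two separate top-level collections that refer to each other only by id. The field names below are illustrative assumptions, not MoDiGen's actual schema.

```python
import json

# Classes (nodes) and references (edges) live in independent collections;
# a reference points to classes only via their ids, so either collection
# can be edited, stored, or streamed on its own.
model = {
    "classes": [
        {"id": "c1", "name": "Vessel"},
        {"id": "c2", "name": "Trajectory"},
    ],
    "references": [
        {"id": "r1", "source": "c1", "target": "c2", "name": "follows"},
    ],
}

# Compact JSON serialisation (no whitespace), as one would store it.
serialized = json.dumps(model, separators=(",", ":"))
restored = json.loads(serialized)
```

Compared to an XML encoding, the compact JSON form avoids closing tags and attribute quoting overhead, which is where the character-count reduction reported above comes from.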
Technology commercialization is described as the most dreadful challenge for technology-based entrepreneurs. The scarcity of resources and limited managerial experience make it a daunting task, endangering the emergence of the whole firm. Prior research has often built upon the resource-based view to propose that new firms' performance depends on their initial resource endowments and configurations. Nevertheless, little is known about how the early-stage decisions of the entrepreneur might influence the growth of the firm. Scholars have suggested that both technology and market orientation actions could influence the performance and growth of firms in this context; nevertheless, there is limited empirical evidence of the influence of these different orientations in the context of new technology-based firms (NTBFs). In this study we explore the influence of technology and demand creation actions adopting a demand-side view. We use a longitudinal study on a panel dataset (2004-2007) with 249 U.S. new high-technology firms to test our hypotheses. The results point towards a rather limited influence of initial resource configurations, as well as an unexpected influence of market and technology orientation on the growth dimensions of an NTBF. The research holds implications for the management of new technology-based firms and for those interested in supporting the development of technology entrepreneurship.
The detection of differences between images of a printed reference and a reprinted wood decor often requires an initial image registration step. Depending on the digitalization method, the reprint will be displaced and rotated with respect to the reference. The aim of registration is to match the images as precisely as possible. In our approach, images are first matched globally by extracting feature points from both images and finding corresponding point pairs using the RANSAC algorithm. From these correspondences, we compute a global projective transformation between both images. In order to get a pixel-wise registration, we train a learning machine on the point correspondences found by RANSAC. The learning algorithm (in our case Gaussian process regression) is used to nonlinearly interpolate between the feature points which results in a high precision image registration method on wood decors.
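The global matching step above can be illustrated compactly. The paper computes a projective transformation from RANSAC-filtered correspondences; the sketch below shows the simpler closed-form least-squares *similarity* (rotation + scale + translation) using complex arithmetic, which conveys the same idea of recovering a global transform from point pairs. It is a stand-in, not the paper's method.

```python
# Least-squares similarity transform: find a, b (complex) minimising
# sum |a*src_i + b - dst_i|^2, where 2D points are encoded as complex numbers.
# a encodes rotation and scale, b the translation.

def fit_similarity(src, dst):
    n = len(src)
    ms = sum(src) / n
    md = sum(dst) / n
    num = sum((s - ms).conjugate() * (d - md) for s, d in zip(src, dst))
    den = sum(abs(s - ms) ** 2 for s in src)
    a = num / den
    b = md - a * ms
    return a, b

# Recover a known transform: rotate by 90 degrees (a = i), shift by (2, 3).
src = [0 + 0j, 1 + 0j, 0 + 1j, 1 + 1j]
dst = [1j * p + (2 + 3j) for p in src]
a, b = fit_similarity(src, dst)
```

In the paper's pipeline this global estimate is then refined per pixel by Gaussian process regression over the residuals of the feature correspondences.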
This chapter contains three advanced topics in model order reduction (MOR): nonlinear MOR, MOR for multi-terminals (or multi-ports) and finally an application in deriving a nonlinear macromodel covering phase shift when coupling oscillators. The sections are offered in a preferred order for reading, but can be read independently.
The proposed approach applies current unsupervised clustering approaches in a different, dynamic manner. Instead of taking all the data as input and finding clusters among them, the given approach clusters Holter ECG data (long-term electrocardiography data from a Holter monitor) on a given interval, which enables a dynamic clustering approach (DCA). For this, advanced clustering techniques based on the well-known Dynamic Time Warping algorithm are used. Having clusters, e.g., on a daily basis, clusters can be compared by defining cluster shape properties. Doing this gives a measure for variation in unsupervised cluster shapes and may reveal unknown changes in healthiness. Embedding this approach into wearable devices offers advantages over current techniques. On the one hand, users get feedback if the characteristic of their ECG data changes unforeseeably over time, which makes early detection possible. On the other hand, cluster properties like the biggest or smallest cluster may help a doctor in making diagnoses or observing several patients. Furthermore, known processing techniques like stress detection or arrhythmia classification may be applied to the found clusters.
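The Dynamic Time Warping distance underlying the clustering is the classic dynamic program below, shown as a minimal sketch without the windowing or normalisation refinements a production implementation would add.

```python
# DTW distance between two 1-D sequences: cost of the cheapest monotone
# alignment, filled in with the standard O(n*m) dynamic program.

def dtw(a, b):
    inf = float("inf")
    n, m = len(a), len(b)
    d = [[inf] * (m + 1) for _ in range(n + 1)]
    d[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # extend the best of: insertion, deletion, or match
            d[i][j] = cost + min(d[i - 1][j], d[i][j - 1], d[i - 1][j - 1])
    return d[n][m]
```

Because DTW aligns sequences of different lengths, two heartbeats with the same shape but different timing get a small distance, which is exactly the invariance the clustering of ECG intervals needs.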
Post harvest technology
(2015)
The problem of vessel collisions or near-collision situations at sea, often caused by human error due to incomplete or overwhelming information, is becoming more and more important with rising maritime traffic. Approaches that supply navigators and Vessel Traffic Services with expert knowledge and suggest trajectories for all vessels to avoid collisions are often aimed at situations where a single planner guides all vessels with perfect information. In contrast, we suggest a two-part procedure which plans trajectories using a specialised A* and negotiates trajectories until a solution is found which is acceptable for all vessels. The solution obeys collision avoidance rules, includes a dynamic model of all vessels and negotiates trajectories to optimise globally without a global planner and extensive information disclosure. The procedure combines all components necessary to solve a multi-vessel encounter and is currently being tested in simulation and on several test beds. The first results show a fast-converging optimisation process which produces feasible, collision-free trajectories after only a few negotiation rounds.
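For readers unfamiliar with A*, here is the bare algorithm on a 4-connected occupancy grid. This is only the generic search core; the specialised A* above additionally encodes vessel dynamics and collision avoidance rules in its state space and costs.

```python
import heapq

def astar(grid, start, goal):
    """A* shortest path length on a grid; grid[r][c] == 1 means blocked.

    Returns the number of steps, or None if goal is unreachable.
    """
    rows, cols = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])  # admissible Manhattan heuristic
    open_set = [(h(start), 0, start)]   # (f = g + h, g, node)
    best = {start: 0}
    while open_set:
        _, g, node = heapq.heappop(open_set)
        if node == goal:
            return g
        if g > best.get(node, float("inf")):
            continue                    # stale heap entry
        r, c = node
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                ng = g + 1
                if ng < best.get((nr, nc), float("inf")):
                    best[(nr, nc)] = ng
                    heapq.heappush(open_set, (ng + h((nr, nc)), ng, (nr, nc)))
    return None
```

In the negotiation scheme, each vessel would run such a planner on its own cost map and exchange only the resulting trajectories, not its full information.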
RELOAD
(2015)
This paper compares the surface morphology of differently finished austenitic stainless steel AISI 316L, also in combination with low-temperature carburization. Milled and tumbled surfaces were analyzed with respect to corrosion resistance and surface morphology. The results of potentiodynamic measurements show that professional grinding operations with SiC and Al2O3 always lead to a better corrosion resistance of low-temperature carburized surfaces compared to the untreated reference in the acidified chloride solution used. The amount of plastic deformation during machining has a big influence on the corrosion resistance of vibratory-ground or tumbled surfaces and has to be kept low for austenitic stainless steels. Due to the high ductility, plastic deformation can lead to the formation of metastable pits that can be initiation points of corrosion. The formation of metastable pits can be aggravated by low-temperature diffusion processes.
To evaluate the quality of a person's sleep it is essential to identify the sleep stages and their durations. Currently, the gold standard in sleep analysis is overnight polysomnography (PSG), during which several signals such as the EEG (electroencephalogram), EOG (electrooculogram), EMG (electromyogram), ECG (electrocardiogram), SpO2 (blood oxygen saturation) and, for example, respiratory airflow and respiratory effort are recorded. These expensive and complex procedures, applied in sleep laboratories, are invasive and unfamiliar to the subjects, which may itself affect the recorded data. These are the main reasons why low-cost home diagnostic systems are likely to be advantageous. Their aim is to reach a larger population by reducing the number of recorded parameters. Nowadays, many wearable devices promise to measure sleep quality using only the ECG and body-movement signals. This work presents an Android application developed to verify the accuracy of an algorithm published in the sleep literature. The algorithm uses ECG and body-movement recordings to estimate sleep stages. The pre-recorded signals fed into the algorithm were taken from the PhysioNet online database. The obtained results were compared with those of the standard method used in PSG. The mean agreement ratios for the sleep stages REM, Wake, NREM-1, NREM-2 and NREM-3 were 38.1%, 14%, 16%, 75% and 54.3%, respectively.
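A per-stage agreement ratio like the one reported above can be computed as follows. The exact metric used in the work is not specified here, so this sketch assumes the natural definition: for each stage, the fraction of reference epochs of that stage that the estimate labels identically.

```python
# Per-stage agreement between a reference hypnogram and an estimated one,
# both given as equal-length sequences of stage labels (one per epoch).

def stage_agreement(reference, estimate):
    agree = {}
    for stage in set(reference):
        idx = [i for i, s in enumerate(reference) if s == stage]
        hits = sum(1 for i in idx if estimate[i] == stage)
        agree[stage] = hits / len(idx)
    return agree

# Toy example with invented labels (not the study's data).
ref = ["Wake", "NREM-2", "NREM-2", "REM", "REM", "REM"]
est = ["Wake", "NREM-2", "NREM-1", "REM", "REM", "Wake"]
agree = stage_agreement(ref, est)
```

Note this per-stage recall does not penalise false positives; a confusion matrix or Cohen's kappa would give a fuller picture of the comparison against PSG.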
In this paper an approach towards data-based fault diagnosis of linear electromagnetic actuators is presented. Time and time-frequency-domain methods were applied to extract fault-related features from current and voltage measurements. The resulting features were transformed to enhance class separability using either Principal Component Analysis (PCA) or Optimal Transformation. Feature selection and dimensionality reduction were performed employing a modified Fisher ratio. Fault detection was carried out using a Support-Vector-Machine classifier trained with randomly selected data subsets. Results showed that not only the used feature sets (time-domain/time-frequency-domain) are crucial for fault detection and classification, but also feature pre-processing. PCA-transformed time-domain features allow fault detection and classification without misclassification, relying on current and voltage measurements and thus requiring two sensors to generate the data. Optimal-transformed time-frequency-domain features allow a misclassification-free result as well, but as they are calculated from current measurements only, a dedicated voltage sensor is not necessary. Using those features is a promising alternative even for detecting purely supply-voltage-related faults.
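Fisher-ratio based feature ranking, the selection step mentioned above, can be sketched in its classic two-class form J = (mu1 - mu2)² / (var1 + var2) per feature. The paper uses a *modified* Fisher ratio, so this is the textbook variant for illustration only.

```python
# Rank features by the two-class Fisher ratio: large values mean the class
# means are far apart relative to the within-class spread.

def fisher_ratio(class1, class2):
    """class1/class2: lists of equal-length feature vectors; score per feature."""
    def stats(samples, k):
        vals = [s[k] for s in samples]
        mu = sum(vals) / len(vals)
        var = sum((v - mu) ** 2 for v in vals) / len(vals)
        return mu, var

    scores = []
    for k in range(len(class1[0])):
        mu1, var1 = stats(class1, k)
        mu2, var2 = stats(class2, k)
        scores.append((mu1 - mu2) ** 2 / (var1 + var2 + 1e-12))
    return scores

# Invented toy data: feature 0 separates the classes, feature 1 is noise.
healthy = [[0.0, 1.0], [0.1, -1.0]]
faulty = [[1.0, 1.0], [1.1, -1.0]]
scores = fisher_ratio(healthy, faulty)
```

Keeping only the top-scoring features before training the SVM is what makes the dimensionality reduction step cheap compared to wrapper-style selection.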