Besides energy efficiency, product quality is gaining importance in the design of drying processes for sensitive biological foodstuffs. The influence of drying parameters on the drying kinetics of apples has been extensively investigated; the information about effects on product quality available in the literature, however, is often contradictory. Furthermore, quality changes obtained by applying different drying parameters are usually hard to compare. As most quality changes can be expressed as zero-, first- or second-order reactions and mainly depend on drying air temperature and drying time, it would be desirable to cross-check the results as a function thereof. This paper introduces a method of quality determination using a new reference value, the cumulated thermal load. It is defined as the time integral of the product surface temperature and improves the comparability of quality changes obtained with different experimental settings in the drying of apples and tomatoes. It could be shown that quality parameters like color changes and shrinkage during apple drying and the content of temperature-sensitive acids in tomatoes vary linearly with the integral of product temperature over time.
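As an illustration, the cumulated thermal load, the time integral of the product surface temperature, can be approximated numerically from a sampled temperature trace; the temperature values below are hypothetical, not measurements from the paper:

```python
import numpy as np

# Hypothetical drying run: product surface temperature sampled every 30 min.
time_h = np.array([0.0, 0.5, 1.0, 1.5, 2.0, 2.5, 3.0])               # drying time in h
surface_temp = np.array([25.0, 48.0, 55.0, 58.0, 60.0, 60.0, 60.0])  # temperature in deg C

# Cumulated thermal load: time integral of the product surface temperature,
# approximated with the trapezoidal rule (units: deg C * h).
ctl = float(np.sum((surface_temp[1:] + surface_temp[:-1]) / 2 * np.diff(time_h)))
print(f"cumulated thermal load: {ctl:.2f} deg C * h")
```

Under the paper's observation that quality changes vary linearly with this integral, a quality indicator such as total color change could then be modelled as an affine function of `ctl`, with coefficients fitted to experiments.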
Nowadays, the number of flexible and fast interactions between humans and application systems is increasing dramatically. For instance, citizens use the internet to organize surveys or meetings spontaneously and in real time. These interactions are supported by technologies and application systems such as free wireless networks and web or mobile apps. Smart Cities aim at enabling their citizens to use these digital services, e.g., by providing enhanced networks and application infrastructures maintained by the public administration. However, looking beyond technology, there is still a significant lack of interaction and support between "normal" citizens and the public administration. For instance, democratic decision processes (e.g., how to allocate disposable public budgets) are often discussed by the public administration without citizen involvement. This paper introduces an approach that describes the design of enhanced interactional web applications for Smart Cities based on dialogical logic process patterns. We demonstrate the approach with the help of a budgeting scenario and close with a summary and an outlook on further research.
Conducting a surveillance impact assessment is the first step towards solving the "Who monitors the monitor?" problem. Since the impacts of surveillance on different dimensions of privacy and society are always changing, measuring compliance and impact through metrics can ensure that negative consequences are minimized to acceptable levels. To develop metrics for surveillance impact assessment systematically, we follow the top-down process of the Goal/Question/Metric paradigm: 1) establish goals through the social impact model, 2) generate questions through the dimensions of surveillance activities, and 3) develop metrics through the scales of measure. With respect to the three factors of impact magnitude (the strength of sources, the immediacy of sources, and the number of sources), we generate questions concerning surveillance activities (by whom, for whom, why, when, where, of what, and how) and develop metrics with the scales of measure (the nominal, ordinal, interval, and ratio scales). In addition to compliance assessment and impact assessment, the developed metrics have the potential to address the power imbalance problem through sousveillance, which employs surveillance to control and redirect the impact exposures.
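The top-down Goal/Question/Metric hierarchy can be sketched as a simple data structure; the goal, questions, and metrics below are hypothetical examples, not taken from the paper:

```python
# Minimal, illustrative Goal/Question/Metric structure for a surveillance
# impact assessment. One goal fans out into questions keyed by the
# surveillance dimensions, and each question maps to a metric with its
# scale of measure.
gqm = {
    "goal": "Minimize the impact of a CCTV deployment on bystander privacy",
    "questions": {
        "by whom": "Which organisation operates the cameras?",
        "of what": "Which data categories are captured?",
        "when": "During which hours is recording active?",
    },
    "metrics": {
        "by whom": ("operator type", "nominal scale"),
        "of what": ("number of captured data categories", "ratio scale"),
        "when": ("recording hours per day", "ratio scale"),
    },
}

# Every question derived from a dimension should have a corresponding metric.
assert set(gqm["questions"]) == set(gqm["metrics"])
```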
Volterra and Wiener series
(2011)
Volterra and Wiener series are two classes of polynomial representations of nonlinear systems. They are perhaps the best understood and most widely used nonlinear system representations in signal processing and system identification. A Volterra or Wiener representation can be thought of as a natural extension of the classical linear system representation. In addition to the convolution of the input signal with the system's impulse response, the system representation includes a series of nonlinear terms that contain products of increasing order of the input signal with itself. It can be shown that these polynomial extension terms allow for representing a large class of nonlinear systems which basically encompasses all systems with scalar outputs that are time-invariant and have noninfinite memory.
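A discrete-time Volterra representation truncated after the second-order term can be sketched as follows; the kernels and the memory length are illustrative, not taken from any particular system:

```python
import numpy as np

def volterra_2nd_order(x, h1, h2):
    """Output of a time-invariant system with finite memory M, truncated
    after the second-order Volterra term:
      y[n] = sum_i h1[i]*x[n-i] + sum_{i,j} h2[i,j]*x[n-i]*x[n-j]
    h1 (length M) is the linear kernel, h2 (M-by-M) the quadratic kernel."""
    M = len(h1)
    xp = np.concatenate([np.zeros(M - 1), np.asarray(x, dtype=float)])
    y = np.zeros(len(x))
    for n in range(len(x)):
        w = xp[n:n + M][::-1]       # window [x[n], x[n-1], ..., x[n-M+1]]
        y[n] = h1 @ w + w @ h2 @ w  # convolution term + quadratic term
    return y

# With a zero quadratic kernel the model reduces to ordinary convolution
# with the impulse response h1, i.e. the classical linear system case.
y = volterra_2nd_order([1.0, 2.0, 3.0], np.array([1.0, 0.0]), np.zeros((2, 2)))
```

Higher-order terms follow the same pattern with third- and higher-order products of the delayed input.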
Technology commercialization is described as the most dreadful challenge for technology-based entrepreneurs. The scarcity of resources and limited managerial experience make it a daunting task that endangers the emergence of the whole firm. Prior research has often built upon the resource-based view to propose that new firms' performance depends on their initial resource endowments and configurations. Nevertheless, little is known about how the early-stage decisions of the entrepreneur might influence the growth of the firm. Scholars have suggested that both technology and market orientation actions could influence the performance and growth of firms in this context; nevertheless, there is limited empirical evidence of the influence of these different orientations in the context of new technology-based firms (NTBFs). In this study we explore the influence of technology and demand creation actions by adopting a demand-side view. We use a longitudinal study on a panel dataset (2004-2007) with 249 U.S. new high-technology firms to test our hypotheses. The results point towards a rather limited influence of initial resource configurations, as well as an unexpected influence of market and technology orientation on the growth dimensions of an NTBF. The research holds implications for the management of new technology-based firms and for those interested in supporting the development of technology entrepreneurship.
Domain-specific modelling is increasingly adopted in the software development industry. While open source metamodels like Ecore have a wide impact, they still have some problems. The independent storage of nodes (classes) and edges (references) is currently only possible with complex, specific solutions. Furthermore, the developed models are stored in the Extensible Markup Language (XML) data format, which leads to scaling problems with large models. In this paper we describe an approach that solves the problem of independent classes and references in metamodels, and we store the models in the JavaScript Object Notation (JSON) data format to support high scalability. First results of our tests show that the developed approach works and that classes and references can be defined independently. In addition, our approach reduces the number of characters per model by a factor of approximately two compared to Ecore. The entire project is made available as open source under the name MoDiGen. This paper focuses on the description of the metamodel definition in terms of scaling.
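The idea of storing classes and references independently in JSON can be sketched as follows; the field names and overall layout are a hypothetical illustration, not the actual MoDiGen schema:

```python
import json

# Nodes (classes) and edges (references) as independent top-level collections,
# linked only by identifiers instead of nesting references inside classes.
model = {
    "nodes": [
        {"id": "n1", "type": "Class", "name": "Customer"},
        {"id": "n2", "type": "Class", "name": "Order"},
    ],
    "edges": [
        {"id": "e1", "type": "Reference", "source": "n1", "target": "n2",
         "name": "orders", "multiplicity": "0..*"},
    ],
}

# Compact JSON serialization avoids the opening/closing tag overhead of XML.
serialized = json.dumps(model, separators=(",", ":"))
restored = json.loads(serialized)
```

Because an edge carries its own identity and endpoint identifiers, it can be added, removed, or stored without touching the node records it connects.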
We consider classes of n-by-n sign regular matrices, i.e. of matrices with the property that all their minors of fixed order k have one specified sign or are allowed also to vanish, k = 1, ..., n. If the sign is nonpositive for all k, such a matrix is called totally nonpositive. The application of the Cauchon algorithm to nonsingular totally nonpositive matrices is investigated and a new determinantal test for these matrices is derived. Also matrix intervals with respect to the checkerboard ordering are considered. This order is obtained from the usual entry-wise ordering on the set of the n-by-n matrices by reversing the inequality sign for each entry in a checkerboard fashion. For some classes of sign regular matrices, it is shown that if the two bound matrices of such a matrix interval are both in the same class then all matrices lying between these two bound matrices are in the same class, too.
This work proposes an efficient hardware implementation of sequential stack decoding of binary block codes. The decoder can be applied for soft input decoding of generalized concatenated (GC) codes. The GC codes are constructed from inner nested binary Bose-Chaudhuri-Hocquenghem (BCH) codes and outer Reed-Solomon (RS) codes. In order to enable soft input decoding for the inner BCH block codes, a sequential stack decoding algorithm is used.
This contribution presents a data compression scheme for applications in non-volatile flash memories. The objective of the data compression algorithm is to reduce the amount of user data such that the redundancy of the error correction coding can be increased in order to improve the reliability of the data storage system. The data compression is performed on block level considering data blocks of 1 kilobyte. We present an encoder architecture that has low memory requirements and provides a fast data encoding.
Codes over quotient rings of Lipschitz integers have recently attracted some attention. This work investigates the performance of Lipschitz integer constellations for transmission over the AWGN channel by means of the constellation figure of merit. A construction of sets of Lipschitz integers is presented that leads to a better constellation figure of merit compared to ordinary Lipschitz integer constellations. In particular, it is demonstrated that the concept of set partitioning can be applied to quotient rings of Lipschitz integers where the number of elements is not a prime number. It is shown that it is always possible to partition such quotient rings into additive subgroups in a manner that the minimum Euclidean distance of each subgroup is strictly larger than in the original set. The resulting signal constellations have a better performance for transmission over an additive white Gaussian noise channel compared to Gaussian integer constellations and to ordinary Lipschitz integer constellations.
Reconstruction of hand-held laser scanner data is used in industry primarily for reverse engineering. Traditionally, scanning and reconstruction are separate steps, and the operator of the laser scanner has no feedback from the reconstruction results. On-line reconstruction of the CAD geometry allows for such immediate feedback.
We propose a method for on-line segmentation and reconstruction of CAD geometry from a stream of point data based on means that are updated on-line. These means are combined to define complex local geometric properties, e.g., radii and center points of spherical regions. Using means of local scores, planar, cylindrical, and spherical segments are detected and extended robustly with region growing. For the on-line computation of the means we use so-called accumulated means, which allow for on-line insertion and removal of values and merging of means. Our results show that this approach can be performed on-line and is robust to noise. We demonstrate that our method reconstructs spherical, cylindrical, and planar segments on real scan data containing typical errors caused by hand-held laser scanners.
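The accumulated-mean idea, keeping only a running sum and a sample count so that values can be inserted, removed, and two means merged in constant time, can be sketched as follows; this is a minimal illustration of the principle, not the paper's implementation:

```python
class AccumulatedMean:
    """On-line mean supporting insertion, removal, and merging by storing
    only the accumulated sum and the number of samples."""

    def __init__(self, total=0.0, count=0):
        self.total = total
        self.count = count

    def insert(self, value):
        """Add a new sample to the mean."""
        self.total += value
        self.count += 1

    def remove(self, value):
        """Remove a previously inserted sample from the mean."""
        self.total -= value
        self.count -= 1

    def merge(self, other):
        """Combine two accumulated means, e.g., when two regions grow together."""
        return AccumulatedMean(self.total + other.total, self.count + other.count)

    @property
    def mean(self):
        return self.total / self.count
```

During region growing, each segment keeps such an accumulator for its local scores; adding a point, dropping an outlier, or fusing two segments then updates the mean without revisiting the point stream.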
We present a 3d-laser-scan simulation in virtual reality for creating synthetic scans of CAD models. Consisting of the virtual reality head-mounted display Oculus Rift and the motion controller Razer Hydra, our system can be used like common hand-held 3d laser scanners. It supports scanning of triangular meshes as well as b-spline tensor product surfaces based on high-performance ray-casting algorithms. While point clouds of known scanning simulations are missing the man-made structure, our approach overcomes this problem by imitating real scanning scenarios. Calculation speed, interactivity and the resulting realistic point clouds are the benefits of this system.
This Chapter introduces parameterized, or parametric, Model Order Reduction (pMOR). The Sections are offered in a preferred order for reading, but can be read independently. Section 5.1, written by Jorge Fernández Villena, L. Miguel Silveira, Wil H.A. Schilders, Gabriela Ciuprina, Daniel Ioan and Sebastian Kula, overviews the basic principles of pMOR. Due to higher integration and increasing frequency-based effects, large, full Electromagnetic Models (EM) are needed for accurate prediction of the real behavior of integrated passives and interconnects. Furthermore, these structures are subject to parametric effects due to small variations of the geometric and physical properties of the inherent materials and manufacturing process. Accuracy requirements lead to huge models, which are expensive to simulate, and this cost increases when parameters and their effects are taken into account. This Section introduces the framework of pMOR, which aims at generating reduced models for systems depending on a set of parameters.
Classification of point clouds by different types of geometric primitives is an essential part of the reconstruction process of CAD geometry. We use support vector machines (SVM) to label patches in point clouds with the class labels tori, ellipsoids, spheres, cones, cylinders or planes. For the classification, features based on different geometric properties like point normals, angles, and principal curvatures are used. These geometric features are estimated in the local neighborhood of a point of the point cloud. Computing these geometric features for a random subset of the point cloud yields a feature distribution. Different features are combined to achieve the best classification results. To minimize the time-consuming training phase of SVMs, the geometric features are first evaluated using linear discriminant analysis (LDA).
LDA and SVM are machine learning approaches that require an initial training phase to allow for a subsequent automatic classification of a new data set. For the training phase, point clouds are generated using a simulation of a laser scanning device. Additional noise based on a laser scanner error model is added to the point clouds. The resulting LDA and SVM classifiers are then used to classify geometric primitives in simulated and real laser scanned point clouds.
Compared to other approaches, where all known features are used for classification, we explicitly compare novel against known geometric features to prove their effectiveness.
A semilinear distributed parameter approach for solenoid valve control including saturation effects
(2015)
In this paper a semilinear parabolic PDE for the control of solenoid valves is presented. The distributed parameter model of the cylinder becomes nonlinear through the inclusion of saturation effects due to the material's B/H-curve. A flatness-based solution of the semilinear PDE is shown, as well as a convergence proof of its series solution. Numerical simulation results demonstrate the adaptability of the approach, and differences between the linear and the nonlinear case are discussed. The major contribution of this paper is the inclusion of saturation effects into the linear diffusion equation governing the magnetic field, and the development of a flatness-based solution for the resulting semilinear PDE as an extension of previous works [1] and [2].
Knowing the position of the spool in a solenoid valve, without using costly position sensors, is of considerable interest in many industrial applications. In this paper, the problem of position estimation based on state observers for fast-switching solenoids, with sole use of simple voltage and current measurements, is investigated. Due to the short spool traveling time in fast-switching valves, convergence of the observer errors has to be achieved very fast. Moreover, the observer has to be robust against modeling uncertainties and parameter variations. Therefore, different state observer approaches are investigated and compared to each other regarding possible uncertainties. The investigation covers a High-Gain-Observer approach, a combined High-Gain Sliding-Mode-Observer approach, both based on extended linearization, and a nonlinear Sliding-Mode-Observer based on equivalent output injection. The results are discussed by means of numerical simulations for all approaches, and finally physical experiments on a valve mock-up are thoroughly discussed for the nonlinear Sliding-Mode-Observer.
Motion safety for vessels
(2015)
The improvement of collision avoidance for vessels in close range encounter situations is an important topic for maritime traffic safety. Typical approaches generate evasive trajectories or optimise the trajectories of all involved vessels. The idea of this work is to validate these trajectories with respect to guaranteed motion safety, which means that it is not sufficient for a trajectory to be collision-free: it must additionally ensure that an evasive manoeuvre is performable at any time. An approach using the distance and the evolution of the distance to the other vessels is proposed. The concept of Inevitable Collision States (ICS) is adopted to identify the states for which no evasive manoeuvre exists. Furthermore, it is implemented in a collision avoidance system for recreational craft to demonstrate the performance.
The improvement of collision avoidance for vessels in close range encounter situations is an important topic for maritime traffic safety. Typical approaches generate evasive trajectories or optimise the trajectories of all involved vessels. Such a collision avoidance system has to produce evasive manoeuvres that do not confuse other navigators. To achieve this behaviour, a probabilistic obstacle handling based on information from a radar sensor with target tracking, that considers measurement and tracking uncertainties is proposed. A grid based path search algorithm, that takes the information from the probabilistic obstacle handling into account, is then used to generate evasive trajectories. The proposed algorithms have been tested and verified in a simulated environment for inland waters.
Cyberspace: a world at war. Our privacy, freedom of speech, and with them the very foundations of democracy are under attack. In the virtual world frontiers are not set by nations or states, they are set by those, who control the flows of information. And control is, what everybody wants.
The Five Eyes are watching, storing, and evaluating every transmission. Internet corporations compete for our data and decide if, when, and how we gain access to that data and to their pretended free services. Search engines control what information we are allowed - or want - to consume. Network access providers and carriers are fighting for control of larger networks and for better ways to shape the traffic. Interest groups and copyright holders struggle to limit access to specific content. Network operators try to keep their networks and their data safe from outside - or inside - adversaries.
And users? Many of them just don’t care. Trust in concepts and techniques is implicit. Those who do care try to take back control of the Internet through privacy-preserving techniques.
This leads to an arms race between those who try to classify the traffic, and those who try to obfuscate it. But good or bad lies in the eye of the beholder, and one will find himself fighting on both sides.
Network Traffic Classification is an important tool for network security. It allows identification of malicious traffic and possible intruders, and can also optimize network usage. Network Traffic Obfuscation is required to protect transmissions of important data from unauthorized observers, to keep the information private. However, with security and privacy both crumbling under the grip of legal and illegal black hat crackers, we dare say that contemporary traffic classification and obfuscation techniques are fundamentally flawed. The underlying concepts cannot keep up with technological evolution. Their implementation is insufficient, inefficient and requires too many resources.
We provide (1) a unified view on the apparently opposed fields of traffic classification and obfuscation, their deficiencies and limitations, and how they can be improved. We show that (2) using multiple classification techniques, optimized for specific tasks improves overall resource requirements and subsequently increases classification speed. (3) Classification based on application domain behavior leads to more accurate information than trying to identify communication protocols. (4) Current approaches to identify signatures in packet content are slow and require much space or memory. Enhanced methods reduce these requirements and allow faster matching. (5) Simple and easy to implement obfuscation techniques allow circumvention of even sophisticated contemporary classification systems. (6) Trust and privacy can be increased by reducing communication to a required minimum and limit it to known and trustworthy communication partners.
Our techniques improve both security and privacy and can be applied efficiently on a large scale. It is but a small step in taking back the Web.
In this thesis, a new framework is proposed, designed and developed for creating efficient and cost-effective logistics chains for long items within the building industry. The building industry handles many long items, such as pipes and profiles. Handling these long items is complicated and difficult because they are bulky, unstable and heavy, so manual handling is neither cost-effective nor efficient. Existing planning frameworks ignore the special requirements of such goods, with the result that many additional manual handling steps are currently required for long items. It is therefore important to develop a new framework for creating efficient and cost-effective logistics chains for long items. To propose such a framework, expert interviews were conducted to gain a full understanding of the customer requirements. Experts from all stages of the building industry supply chain were interviewed. The data collected from the expert interviews was analysed, and the findings about customer requirements served as valuable inputs for the proposed framework. To capture current practice, all existing planning frameworks were analysed and evaluated using SWOT analysis. The strengths, weaknesses, opportunities and threats of the current planning frameworks were comparatively analysed and evaluated, and the findings were used for proposing, designing and developing the new framework. Considerable effort went into the implementation stage, during which six key parameters for a successful implementation were identified.
They are:
- Improvement process with employees
- Control of the improvements
- Gifts/money for the improvements and additional work
- KAIZEN workshops
- Motivation of the employees for improvements
- Presentation of the results
Among these six parameters, KAIZEN workshops were found to be a very effective means of creating an efficient and cost-effective logistics chain for long items. The new framework can be used for the planning of logistics that handle long items and commercial goods, and for planning all kinds of in-house logistics processes, from the incoming goods, storage, picking and delivery combination areas through to the outgoing goods area. The achievements of this project are as follows: (1) the new framework for creating efficient and cost-effective logistics chains for long items, (2) the data collection and evaluation during preliminary planning, (3) the decision for one planning variant already at the end of the structure planning, (4) the analysis and evaluation of customer requirements, (5) the consideration and implementation of the customer requirements in the new framework, (6) the creation of figures and tables as a planning guideline, (7) the research on and further development of Minomi with regard to long items, (8) the research on the information flow, (9) the classification of the improvements and the improvement handling during implementation, and (10) the identification of key parameters for a successful implementation of the planning framework. The framework was evaluated both theoretically and through a case study of a logistics system for handling long items and commercial goods. It was found to be theoretically sound and practically valuable, and it can be applied to creating logistics systems for long items, especially in the building industry.
The intentions of the so-called "More Electrical Aircraft" (MEA) are higher efficiency and lower weight. A main topic here is the application of electrical instead of hydraulic, pneumatic and mechanical systems. The necessary power electronic devices have intermediate DC-links, which are typically supplied by a three-phase system with active B6 and passive B12 rectifiers. A possible alternative is the B6 diode bridge in combination with an active power filter (APF). Due to the parallel arrangement, the APF offers a higher power density and is able to compensate for harmonics from several devices. The use of the diode bridge rectifier alone is not permitted due to the highly distorted phase current. The following investigations deal with the development of an active power filter for a three-phase supply with variable frequency from 360 to 800 Hz. All relevant components such as inductors, EMC filters, power modules and the DC-link capacitor are designed. A particular focus is put on the customized power module with SiC-MOSFETs and SiC-diodes, which is characterized electrically and thermally. The maximum supply frequency slope has a value of 50 Hz/ms, which demands high dynamics and robustness of the control algorithm. Furthermore, the content of 5th and 7th harmonics must be reduced to less than 2 %, which demands high accuracy. To cope with both requirements, a two-stage filter algorithm is developed and implemented in two independent signal processors. Simulations and laboratory experiments confirm the performance and robustness of the control algorithm. This work comprehensively presents the design of aerospace rectifiers. The results were published in conferences and patents.
Nowadays, there is a continuous need for many corporations to renew their business portfolio strategically in anticipation of changes in the business environment (e.g., technological change). The ongoing boom in founding international start-ups suggests that small entrepreneurial teams are an effective means to develop new businesses. Corporations should be able to benefit from this form of self-organized innovation when entering novel business domains for strategic renewal. However, corporations that establish small entrepreneurial teams (corporate ventures) face two obstacles. First, corporate ventures often fail for reasons that are not well explored. Second, it remains unclear how partial successes may be turned into large successes. Although the key success factors remain ambiguous, there is little hope that corporate ventures will be successful without effective management. Since an empirical model for corporate venture management does not exist so far, this thesis formulates and answers the following problem statement: How can corporate management effectively manage corporate ventures? Building on qualitative and quantitative research methodologies, a model for effective corporate venture management is developed and tested statistically in the German IT consulting industry. The research results reveal some of the essential management principles through which corporate management can increase corporate venture success systematically.
A person's heart rate is an important indicator of their health status. A heart rate that is too high or too low can be a sign of several different diseases, such as a heart disorder, obesity, or asthma. Many devices require users to wear the device on their chest or place a finger on it. The approach presented in this paper describes the principle and implementation of a heart rate monitoring device that detects the heart rate with high precision using a sensor integrated in a wristband. One method to measure the heart rate is the photoplethysmogram technique, which measures the change of blood volume through the absorption or reflection of light. A light-emitting diode (LED) shines through a thin layer of tissue, and a photodiode registers the intensity of light that traverses the tissue or is reflected by it. Since the blood volume changes with each heartbeat, the photodiode detects more or less light from the LED. The device measures the heart rate with high precision, has low performance and hardware requirements, and allows an implementation on small microcontrollers.
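The measurement principle can be sketched with a simple threshold-based peak counter on a synthetic photoplethysmogram trace; this illustrates the idea of counting pulse peaks in the light-intensity signal and is not the paper's firmware implementation:

```python
import numpy as np

def estimate_heart_rate(signal, fs):
    """Estimate heart rate in beats per minute by counting local maxima of
    the PPG signal that lie above its mean. A simplified sketch: real
    signals would need filtering and a more robust peak detector."""
    threshold = signal.mean()
    peaks = 0
    for i in range(1, len(signal) - 1):
        if (signal[i] > threshold
                and signal[i] > signal[i - 1]
                and signal[i] >= signal[i + 1]):
            peaks += 1
    duration_min = len(signal) / fs / 60.0
    return peaks / duration_min

# Synthetic PPG: 75 beats per minute = 1.25 Hz pulse, sampled at 50 Hz for 8 s.
fs = 50
t = np.arange(0, 8, 1 / fs)
ppg = np.sin(2 * np.pi * 1.25 * t)
bpm = estimate_heart_rate(ppg, fs)
```

On a microcontroller the same counting scheme can run incrementally over the incoming photodiode samples, which keeps memory and processing requirements low.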
This paper considers intervals of real matrices with respect to partial orders and the problem to infer from some exposed matrices lying on the boundary of such an interval that all real matrices taken from the interval possess a certain property. In many cases such a property requires that the chosen matrices have an identically signed inverse. We also briefly survey related problems, e.g., the invariance of matrix properties under entry-wise perturbations.
The Kerala tourism model
(2017)
Sustainable tourism in Kerala is on the rise. Therefore, this South Indian state is assessed according to the sustainable tourism criteria of the Strasdas et al. (2007) framework. Kerala as a state does not qualify as a sustainable tourism destination, although individual success stories at the NGO and government level exist. This conceptual paper delivers a detailed analysis of the three dimensions of sustainability, i.e. ecology, economy and socio-cultural aspects, of the ‘Kerala tourism model’ and discusses the question of whether this model can be transferred to other developing countries.
Intercultural management
(2016)
A case-based examination of issues in international management that helps students explore theory in the context of real-life practical situations. A focus on skills-development prepares students for future careers in international management. Cases are from a range of countries including central and Eastern Europe as well as the Asian economies.
Rhetoric of logos
(2017)
Der Entwurf eines Signets als einem der wichtigsten Elemente des Corporate Designs ist für Kommunikationsdesigner eine ganz besondere Herausforderung. Die Überlegung, dass ein gutes Signet natürlich auch ein überzeugendes Signet ist, führt direkt zur Disziplin der Rhetorik, die laut Aristoteles die Fähigkeit hat, das Überzeugende zu erkennen, das jeder Sache innewohnt. Konzepte und Methoden der Rhetorik sind deshalb ideal, um die Wirksamkeit von Signets zu verstehen und auf dieser Basis den Horizont der gestalterischen Praxis zu erweitern.
This publication sets out how designers can apply the tools of this more than 2,500-year-old discipline. Rhetorical figures play a central role: logos are analysed and classified in these terms in order to establish which communicative strategies and intended effects they serve. The resulting insights give designers a body of knowledge for analysis, idea generation and argumentation, as well as a deeper understanding of the design process. (Source: publisher)
Bernstein polynomials on a simplex V are considered. The expansion of a given polynomial p into these polynomials provides bounds for the range of p over V. Bounds for the range of a rational function over V can easily be obtained from the Bernstein expansions of the numerator and denominator polynomials of this function. In this paper it is shown that these bounds converge monotonically and linearly to the range of the rational function if the degree of the Bernstein expansion is elevated. If V is subdivided, then the convergence is quadratic with respect to the maximum of the diameters of the subsimplices.
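The paper works on a simplex; as a minimal illustration of the underlying enclosure property, the following sketch computes the Bernstein coefficients of a univariate polynomial on [0, 1], whose minimum and maximum enclose the range of the polynomial (function names are ours):

```python
from math import comb

def bernstein_coeffs(a):
    """Bernstein coefficients on [0, 1] of p(x) = sum_i a[i] * x**i:
    b_j = sum_{i<=j} (C(j, i) / C(n, i)) * a[i],  j = 0..n."""
    n = len(a) - 1
    return [sum(comb(j, i) / comb(n, i) * a[i] for i in range(j + 1))
            for j in range(n + 1)]

def range_bounds(a):
    """Enclosure [min_j b_j, max_j b_j] of the range of p over [0, 1]."""
    b = bernstein_coeffs(a)
    return min(b), max(b)

# p(x) = x**2 - x has exact range [-0.25, 0] on [0, 1];
# the degree-2 Bernstein bounds give the enclosure [-0.5, 0].
lo, hi = range_bounds([0.0, -1.0, 1.0])
```

For a rational function whose denominator has Bernstein coefficients of one sign, the quotients of numerator and denominator coefficients yield analogous bounds; elevating the degree tightens the enclosure, which is the convergence behaviour the paper quantifies.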
This paper proposes a pipelined decoder architecture for generalised concatenated (GC) codes. These codes are constructed from inner binary Bose-Chaudhuri-Hocquenghem (BCH) and outer Reed-Solomon codes. The decoding of the component codes is based on hard decision syndrome decoding algorithms. The concatenated code consists of several small BCH codes. This enables a hardware architecture where the decoding of the component codes is pipelined. A hardware implementation of a GC decoder is presented, and the cell area, cycle counts and timing constraints are investigated. The results are compared to a decoder for long BCH codes with similar error correction performance. In comparison, the pipelined GC decoder achieves a higher throughput and a lower area consumption.
Stress is recognized as a predominant factor in disease, and the costs of treating stress-related conditions are expected to rise. The presented approach aims to detect stress in a very basic and easy-to-implement way, so that the cost of the device and the effort of wearing it remain low. The user benefits from an easy interface that reports the status of his or her body in real time. In parallel, the system provides interfaces to pass the obtained data on for further processing and (professional) analysis, provided the user agrees. The system is designed for everyday activities and is not restricted to laboratory use or environments. The implementation of the enhanced prototype shows that stress detection and reporting can be managed using correlation plots and automatic pattern recognition even on a very lightweight microcontroller platform.
Codes over quotient rings of Lipschitz integers have recently attracted some attention. This work investigates the performance of Lipschitz integer constellations for transmission over the AWGN channel by means of the constellation figure of merit. A construction of sets of Lipschitz integers that leads to a better constellation figure of merit compared to ordinary Lipschitz integer constellations is presented. In particular, it is demonstrated that the concept of set partitioning can be applied to quotient rings of Lipschitz integers where the number of elements is not a prime number. It is shown that it is always possible to partition such quotient rings into additive subgroups such that the minimum Euclidean distance of each subgroup is strictly larger than in the original set. The resulting signal constellations have a better performance for transmission over an additive white Gaussian noise channel compared to Gaussian integer constellations and to ordinary Lipschitz integer constellations. In addition, we present multilevel code constructions for the new signal constellations.
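Full Lipschitz (quaternion) arithmetic is more involved; the sketch below illustrates the same ingredients — modular reduction in a quotient ring, minimum Euclidean distance, and a constellation figure of merit — for the two-dimensional Gaussian integer case Z[i]/(2+i). All names and the CFM normalisation (d_min² divided by the average symbol energy) are our own assumptions, not the paper's definitions:

```python
def gmod(x, m):
    """Reduce the Gaussian integer x modulo m by rounding the exact
    quotient to the nearest Gaussian integer."""
    norm = (m * m.conjugate()).real
    q = x * m.conjugate() / norm
    q = complex(round(q.real), round(q.imag))
    return x - q * m

def constellation(m):
    """The |m|^2 residues of Z[i] modulo m (m with odd norm, so the
    rounded representative of each residue class is unique)."""
    r = int(abs(m)) + 2
    return {gmod(complex(a, b), m)
            for a in range(-r, r + 1) for b in range(-r, r + 1)}

def min_distance(pts, m):
    """Minimum Euclidean distance, measured modulo the lattice m * Z[i]."""
    return min(abs(gmod(x - y, m)) for x in pts for y in pts if x != y)

def cfm(pts, m):
    """Constellation figure of merit: d_min^2 over average symbol energy."""
    energy = sum(abs(x) ** 2 for x in pts) / len(pts)
    return min_distance(pts, m) ** 2 / energy

pts = constellation(2 + 1j)   # the 5 residues {0, 1, -1, i, -i}
```

Set partitioning in the sense of the abstract would split such a residue set into additive subgroups with strictly larger intra-subgroup minimum distance; the quotient-ring reduction above is the building block for checking that property.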
Model Order Reduction
(2015)
This chapter offers an introduction to Model Order Reduction (MOR). It gives an overview of the most commonly used methods, describes the main concepts behind them, and states the properties the methods aim to preserve. The sections are in a preferred order for reading, but can be read independently. Section 4.1, written by Michael Striebel, E. Jan W. ter Maten, Kasra Mohaghegh and Roland Pulch, overviews the basic material for MOR and its use in circuit simulation. Issues like stability, passivity, structure preservation and realizability are discussed. Projection-based MOR methods include Krylov-space methods (like PRIMA and SPRIM) and POD methods. Truncation-based MOR includes Balanced Truncation, Poor Man's TBR and Modal Truncation. Section 4.2, written by Joost Rommes and Nelson Martins, focuses on Modal Truncation. Here eigenvalues are the starting point. The eigenvalue problems related to large-scale dynamical systems are usually too large to be solved completely. The algorithms described in this section are efficient and effective methods for the computation of a few specific dominant eigenvalues of these large-scale systems. It is shown how these algorithms can be used for computing reduced-order models with modal approximation and Krylov-based methods. Section 4.3, written by Maryam Saadvandi and Joost Rommes, concerns passivity-preserving model order reduction using the spectral zero method. It discusses in detail two algorithms, one by Antoulas and one by Sorensen. These two approaches are based on a projection method that selects spectral zeros of the original transfer function to produce a reduced transfer function that has the specified roots as its spectral zeros. The reduced model preserves passivity. Section 4.4, written by Roxana Ionutiu, Joost Rommes and Athanasios C. Antoulas, refines the spectral zero MOR method to dominant spectral zeros.
The new model reduction method for circuit simulation preserves passivity by interpolating dominant spectral zeros. These are computed as poles of an associated Hamiltonian system, using an iterative solver: the subspace accelerated dominant pole algorithm (SADPA). Based on a dominance criterion, SADPA finds relevant spectral zeros and the associated invariant subspaces, which are used to construct the passivity-preserving projection. RLC netlist equivalents for the reduced models are provided. Section 4.5, written by Roxana Ionutiu and Joost Rommes, deals with the synthesis of a reduced model, i.e. reformulating it as a netlist for a circuit. A framework for model reduction and synthesis is presented, which greatly enlarges the options for the re-use of reduced-order models in circuit simulation by simulators of choice. Especially when model reduction exploits structure preservation, we show that using the model as a current-driven element is possible, and allows for synthesis without controlled sources. Two synthesis techniques are considered: (1) realizing the reduced transfer function as a netlist and (2) unstamping the reduced system matrices into a circuit representation. The presented framework serves as a basis for the reduction of large parasitic R/RC/RCL networks.
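As a toy illustration of the modal truncation idea surveyed in Section 4.2, the following numpy sketch keeps the k modes with the largest ratio |residue| / |Re(pole)| — one common dominance criterion. The function and example are our own simplification, not SADPA or any algorithm from the chapter:

```python
import numpy as np

def modal_truncation(A, B, C, k):
    """Reduce x' = Ax + Bu, y = Cx to order k by keeping the k modes with
    the largest dominance |r_i| / |Re(lambda_i)|, where r_i is the residue
    of the transfer function at pole lambda_i. Assumes A is diagonalizable
    and the system is single-input single-output."""
    lam, V = np.linalg.eig(A)
    W = np.linalg.inv(V)                       # rows: left eigenvectors
    res = (C @ V).ravel() * (W @ B).ravel()    # H(s) = sum_i res_i/(s-lam_i)
    keep = np.argsort(np.abs(res) / np.abs(lam.real))[::-1][:k]
    return np.diag(lam[keep]), (W @ B)[keep], (C @ V)[:, keep]

# Three stable modes; the fast mode at -100 contributes least to the
# transfer function and is truncated.
A = np.diag([-1.0, -5.0, -100.0])
B = np.ones((3, 1))
C = np.ones((1, 3))
Ar, Br, Cr = modal_truncation(A, B, C, k=2)
H0 = (Cr @ np.linalg.solve(-Ar, Br)).item()   # reduced DC gain
```

Here the reduced DC gain is 1/1 + 1/5 = 1.2 against the full system's 1.21, showing that dropping the weakly dominant mode changes the transfer function only slightly.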