Many secure software development methods and tools are well known and understood. Still, the same software security vulnerabilities keep occurring. To find out whether new source code patterns have evolved or the same patterns keep recurring, we investigate SQL injections in PHP open source projects. SQL injections are well known and a core part of software security education. For each common part of SQL injections, the source code patterns are analysed. Examples are pointed out showing that developers had software security in mind but nevertheless created vulnerabilities. A comparison to earlier work shows that some categories are not found as often as expected. Our main contribution is the categorization of source code patterns.
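The vulnerable and safe patterns at the core of such an analysis are language-independent; the paper's subject is PHP, but a minimal sketch in Python with sqlite3 (table, column, and payload are illustrative, not taken from the study) contrasts the string-concatenation anti-pattern with parameterized queries:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

user_input = "' OR '1'='1"  # classic injection payload

# Vulnerable pattern: user input concatenated into the SQL string.
vulnerable = conn.execute(
    "SELECT role FROM users WHERE name = '" + user_input + "'"
).fetchall()

# Safe pattern: placeholder binding keeps the payload as plain data.
safe = conn.execute(
    "SELECT role FROM users WHERE name = ?", (user_input,)
).fetchall()

print(vulnerable)  # the payload rewrites the WHERE clause, matching every row
print(safe)        # no row has this literal name
```

The concatenated query matches every row because the payload closes the string literal and injects a tautology; the bound parameter is treated as one literal string and matches nothing.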
In extended object tracking, a target is capable of generating more than one measurement per scan. Assuming the target is of elliptical shape and given a point cloud of measurements, the Random Matrix framework can be applied to concurrently estimate the target's dynamic state and extension. If the point cloud also contains clutter measurements or originates from more than one target, the data association problem has to be solved as well. However, the well-known joint probabilistic data association method assumes that a target can generate at most one detection. In this article, this constraint is relaxed, and a multi-detection version of the joint integrated probabilistic data association is proposed. The data association method is then combined with the Random Matrix framework to track targets with elliptical shape. The final filter is evaluated in the context of tracking smaller vessels using a high-resolution radar sensor. The performance of the filter is shown in simulation and in several experiments.
We examine to what extent a transaction relation-based value network maturity status of New Technology-Based Firms (NTBFs) is related to their survival. A specific challenge of NTBFs is their lack of market orientation, which is why the maturity of the ties they form towards the market in terms of customers, financiers, personnel and partners is supposed to be a strong indicator for survival. We analyze a sample of 170 NTBFs by capturing their value network status from business plans and determining their survival status using secondary research. Simple statistical tests and regressions suggest that the official registration of the business is a preliminary step towards survival that requires industry-specific value network dimension strengths. A sub-sample survival analysis shows that for all NTBFs that have reached registration, regardless of their industry, a stronger customer value network maturity dimension protects against failure and is thus a significant predictor of survival. Moreover, the analyses partly support the idea that NTBFs from the IT sector are less dependent on a strong value network in the financier dimension to survive. The results are of relevance for both practitioners and researchers in the innovation system: a better understanding of the factors impacting NTBF survival can help to provide more tailored support services for young firms, increase the effectiveness of resource allocations, and provide a basis for further research.
The evolution of strain-induced martensite in austenitic stainless steel AISI 304 was investigated in a rolling contact on a two-disc tribometer. The effects of surface roughness, slip and normal force as well as the number of load cycles were examined. In comparison to investigations of martensitic phase transformation during cold rolling, the applied stresses are considerably lower. The formation of strain-induced martensite was detected in situ by means of a FERITSCOPE MP30 and ex situ by optical microscopy after etching with Kane etchant. Both the number of load cycles and the magnitude of the normal force appeared to be the main influencing factors for strain-induced martensite evolution in low-stress rolling contacts.
This work proposes a lossless data compression algorithm for short data blocks. The proposed compression scheme combines a modified move-to-front algorithm with Huffman coding. This algorithm is applicable in storage systems where data compression is performed on block level with short block sizes, in particular in non-volatile memories. For block sizes in the range of 1 kB, it provides a compression gain comparable to the Lempel–Ziv–Welch algorithm. Moreover, encoder and decoder architectures are proposed that have low memory requirements and provide fast data encoding and decoding.
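The abstract does not specify the paper's modification of move-to-front; a plain move-to-front transform, the building block that feeds the Huffman coder, can be sketched as follows (assuming byte-valued symbols):

```python
def mtf_encode(data: bytes) -> list[int]:
    # Symbol table initialised to all byte values; recently used
    # symbols bubble to the front, so repetitive data produces
    # many small indices that a Huffman coder compresses well.
    table = list(range(256))
    out = []
    for b in data:
        i = table.index(b)
        out.append(i)
        table.pop(i)
        table.insert(0, b)
    return out

def mtf_decode(indices: list[int]) -> bytes:
    table = list(range(256))
    out = bytearray()
    for i in indices:
        b = table.pop(i)
        out.append(b)
        table.insert(0, b)
    return bytes(out)

encoded = mtf_encode(b"aaabbbab")
print(encoded)  # [97, 0, 0, 98, 0, 0, 1, 1]
print(mtf_decode(encoded) == b"aaabbbab")
```

Runs of repeated symbols map to runs of zeros, which skews the symbol statistics in favour of the subsequent entropy coder.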
Sleep quality and, more generally, behavior in bed can be detected using a sleep state analysis. These results can help a subject to regulate sleep and recognize different sleeping disorders. In this work, a sensor grid for pressure and movement detection supporting sleep phase analysis is proposed. In comparison to the leading standard measuring system, polysomnography (PSG), the system proposed in this project is a non-invasive sleep monitoring device. For continuous analysis or home use, PSG or wearable actigraphy devices tend to be uncomfortable. Besides this, they are also very expensive. The system presented in this work classifies respiration and body movement with only one type of sensor and in a non-invasive way. The sensor used is a pressure sensor, which is low cost and can be used for commercial purposes. The system was tested by carrying out an experiment that recorded the sleep process of a subject. These recordings showed the potential for classification of breathing rate and body movements. Although previous research shows the use of pressure sensors in recognizing posture and breathing, the sensors have mostly been positioned between the mattress and the bedsheet. This project, however, shows an innovative way to position the sensors under the mattress.
Rethinking Compliance
(2017)
In the past, Compliance Management has often failed, the Volkswagen emissions scandal being just one prominent example. Not everything has to be reinvented, and not everything that companies have done in the past regarding Compliance is wrong. But it is about time to think about Compliance in new ways. What does “Compliance Management 2.0” really depend on? The following article aims at laying out the cornerstones for enduringly effective Compliance, which among others comprise sincerity, credibility and a moral foundation. Furthermore, the commitment and role model behavior of top managers and the training of line managers are crucial for the effectiveness of any Compliance Management System (CMS). Ultimately, for Compliance to function efficiently, the efforts must be adequate for the respective company and realistic regarding the achievable goals.
To evaluate the quality of sleep, it is important to determine how much time was spent in each sleep stage during the night. The gold standard in this domain is an overnight polysomnography (PSG). But the recording of the necessary electrophysiological signals is extensive and complex, and the environment of the sleep laboratory, which is unfamiliar to the patient, might lead to distorted results. In this paper, a sleep stage detection algorithm is proposed that uses only the heart rate signal, derived from the electrocardiogram (ECG), as a discriminator. This would make it possible for sleep analysis to be performed at home, saving a lot of effort and money. From the heart rate, using the fast Fourier transform (FFT), three parameters were calculated in order to distinguish between the different sleep stages. ECG data along with a hypnogram scored by professionals was taken from the PhysioNet database, making it easy to compare the results. With an agreement rate of 41.3%, this approach is a good foundation for future research.
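The three FFT-derived parameters are not specified in the abstract; as an illustration of the general approach, the following sketch derives band powers from the spectrum of a heart-rate series (the synthetic data and band limits are illustrative, not the paper's):

```python
import cmath
import math

def dft(x):
    # Plain O(n^2) discrete Fourier transform; an FFT computes the same.
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * math.pi * k * t / n)
                for t in range(n)) for k in range(n)]

# Synthetic heart-rate series (bpm) sampled at 1 Hz: a strong slow
# oscillation plus a weaker, faster one (both illustrative).
n = 64
hr = [60 + 3 * math.sin(2 * math.pi * 0.0625 * t)
         + 1 * math.sin(2 * math.pi * 0.25 * t) for t in range(n)]

mean = sum(hr) / n
spec = dft([v - mean for v in hr])  # remove DC before transforming

def band_power(lo_hz, hi_hz, fs=1.0):
    # Sum |X[k]|^2 over bins whose frequency lies in [lo_hz, hi_hz).
    return sum(abs(spec[k]) ** 2 for k in range(1, n // 2)
               if lo_hz <= k * fs / n < hi_hz)

lf = band_power(0.02, 0.10)  # low-frequency band
hf = band_power(0.20, 0.30)  # high-frequency band
print(lf > hf)  # the slow component dominates this series
```

Ratios of such band powers are typical heart-rate-variability features for discriminating sleep stages.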
Industry 4.0
(2017)
Deep 3D
(2017)
Lecture
Lecture
As a result of increasing needs and shrinking resources, aquaculture has been progressively gaining significance in recent years. Ecological issues, such as negative effects on the ecosystem due to the high fish density in the farms or the use of copper as an antifouling strategy, are very present, particularly regarding the increasing number of fish to be produced in farms in the future. Current trends focus on larger farms operated offshore. To make these farms work safely and economically, reliability has to be improved and maintenance costs need to be reduced. Also, alternatives with higher mechanical strength compared to current textile net materials as well as common metal wires might be necessary. In recent years, a new net system made of high-strength duplex stainless steel wires with environmentally friendly antifouling properties suitable for offshore applications was developed. The first nets have now been operating for one year as predator protection (e.g. against seals) for fish farms and show good performance in cleaning capability and predator protection. In real use, however, some corrosion effects occur at the contact points of the net made of duplex stainless steel 1.4362 which were not observed in preliminary tests in the laboratory and at different test sites around the world. These corrosion effects endanger the sustainable success of the net system. In this work, the observed corrosion effects are investigated. A laboratory test, which simulates the movement in the contact points of the net, was developed. Two pieces of wire are bent in the middle and hooked into each other. One wire is fixed at both ends and the second wire is fixed at one end. At the other end, a circular movement with 1-2 rps and a 1 cm displacement is applied. The movement generates friction between the wires and locally damages the passive layer. When the movement stops, repassivation starts.
The passivity breakdown and the repassivation were measured with electrochemical techniques. During the friction phase, when the surface is activated, the open circuit potential (OCP) breaks down. When the friction stops, the OCP increases. Between the movement phases, measurements of the critical pitting potential were carried out to assess the quality of repassivation. The tests were done in a 3% sodium chloride solution. Different temperatures were tested, as well as the influence of air saturation and low oxygen content.
Co-report on the lecture
Poster
Earthquake engineering
(2017)
Lecture
Lecture
Steps to the stage
(2017)
A flight-like absolute optical frequency reference based on iodine for laser systems at 1064 nm
(2017)
We present an absolute optical frequency reference based on precision spectroscopy of hyperfine transitions in molecular iodine 127I2 for laser systems operating at 1064 nm. A quasi-monolithic spectroscopy setup was developed, integrated, and tested with respect to potential deployment in space missions that require frequency stable laser systems. We report on environmental tests of the setup and its frequency stability and reproducibility before and after each test. Furthermore, we report on the first measurements of the frequency stability of the iodine reference with an unsaturated absorption cell which will greatly simplify its application in space missions. Our frequency reference fulfills the requirements on the frequency stability for planned space missions such as LISA or NGGM.
Rhetoric of logos
(2017)
Designing a logo, one of the most important elements of corporate design, is a very particular challenge for communication designers. The consideration that a good logo is, of course, also a persuasive logo leads directly to the discipline of rhetoric, which, according to Aristotle, is the faculty of discerning the persuasive inherent in every matter. Concepts and methods of rhetoric are therefore ideally suited to understanding the effectiveness of logos and, on this basis, broadening the horizon of design practice.
This publication sets out how designers can apply the tools of this more than 2,500-year-old discipline. Rhetorical figures play a central role: logos are analysed and classified along these figures in order to work out which communicative strategies and intended effects they fulfil. The insights gained provide designers with a body of knowledge for analysis, idea generation and argumentation, as well as a deeper understanding of the design process. (Source: publisher)
The binary asymmetric channel (BAC) is a model for the error characterization of multi-level cell (MLC) flash memories. This contribution presents a joint channel and source coding approach improving the reliability of MLC flash memories. The objective of the data compression algorithm is to reduce the amount of user data such that the redundancy of the error correction coding can be increased in order to improve the reliability of the data storage system. Moreover, data compression can be utilized to exploit the asymmetry of the channel to reduce the error probability. With MLC flash memories, data compression has to be performed at block level on short data blocks. We present a coding scheme suitable for blocks of 1 kilobyte of data.
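The channel asymmetry that such a coding approach exploits can be illustrated with a toy simulation (the flip probabilities below are illustrative, not measured MLC values):

```python
import random

random.seed(1)

def bac_transmit(bits, p01, p10):
    # Binary asymmetric channel: a stored 0 flips with probability
    # p01, a stored 1 with probability p10. Here p10 >> p01 models a
    # dominant error direction, as in asymmetric flash cells.
    return [b ^ (random.random() < (p01 if b == 0 else p10))
            for b in bits]

n = 10000
# Count flips for an all-zeros block vs. an all-ones block.
errors_zeros = sum(bac_transmit([0] * n, 0.001, 0.01))
errors_ones = sum(b == 0 for b in bac_transmit([1] * n, 0.001, 0.01))
print(errors_zeros, errors_ones)  # storing zeros sees far fewer errors
```

Biasing the stored data toward the less error-prone symbol, which compression makes room for, directly lowers the raw bit error rate before error correction.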
Error correction coding based on soft-input decoding can significantly improve the reliability of flash memories. Such soft-input decoding algorithms require reliability information about the state of the memory cell. This work proposes a channel model for soft-input decoding that considers the asymmetric error characteristic of multi-level cell (MLC) and triple-level cell (TLC) memories. Based on this model, an estimation method for the channel state information is devised which avoids additional pilot data for channel estimation. Furthermore, the proposed method supports page-wise read operations.
Smart factory and education
(2017)
ABCdarium of a journey
(2017)
Method and device for error correction coding based on high-rate generalized concatenated codes
(2017)
Error correction coding is particularly suitable for applications in non-volatile flash memories. We describe a method for error correction encoding of data to be stored in a memory device, a corresponding method for decoding a codeword matrix resulting from the encoding method, a coding device, and a computer program for performing the methods on the coding device, using a new construction for high-rate generalized concatenated (GC) codes. The codes, which are well suited for error correction in flash memories for high-reliability data storage, are constructed from inner nested binary Bose-Chaudhuri-Hocquenghem (BCH) codes and outer codes, preferably Reed-Solomon (RS) codes. For the inner codes, extended BCH codes are used, where only single parity-check codes are applied in the first level of the GC code. This enables high-rate codes.
A soft input decoding method and a decoder for generalized concatenated (GC) codes. The GC codes are constructed from inner nested block codes, such as binary Bose-Chaudhuri-Hocquenghem, BCH, codes and outer codes, such as Reed-Solomon, RS, codes. In order to enable soft input decoding for the inner block codes, a sequential stack decoding algorithm is used. Ordinary stack decoding of binary block codes requires the complete trellis of the code. In one aspect, the present invention applies instead a representation of the block codes based on the trellises of supercodes in order to reduce the memory requirements for the representation of the inner codes. This enables an efficient hardware implementation. In another aspect, there is provided a soft input decoding method and device employing a sequential stack decoding algorithm in combination with list-of-two decoding which is particularly well suited for applications that require very low residual error rates.
Design of tension components
(2017)
The paper gives an introduction as well as background information on proposed changes and amendments in EN 1993-1-11 “Design of structures with tension components”, implemented during the ongoing revision. Due to some deficits in the currently applicable standard, this revision is not only limited to restructuring and editorial changes, but also includes major technical changes in the following fields: safety concept and structural analysis, actions and loads, robustness and reparability, design of tension components, and design of clamps and saddles.
In several organizations, business workgroups autonomously implement information technology (IT) outside the purview of the IT department. Shadow IT, evolving as a type of workaround from nontransparent and unapproved end-user computing (EUC), is a term used to refer to this phenomenon, which challenges norms relative to IT controllability. This report describes shadow IT based on case studies of three companies and investigates its management. In 62% of cases, companies decided to reengineer detected instances or reallocate related subtasks to their IT department. Considerations of risks and transaction cost economics with regard to specificity, uncertainty, and scope explain these actions and the resulting coordination of IT responsibilities between the business workgroups and IT departments. This turns shadow IT into controlled business-managed IT activities and enhances EUC management. The results contribute to the governance of IT task responsibilities and provide a way to formalize the role of workarounds in business workgroups.
R concretes with a proportion of recycled aggregates are standardized normal concretes which are allowed for use in Germany up to strength class C30/37. Because of the good technical properties and the ecological advantages, the article presents possible applications in the field of concrete products and precast concrete elements. Read part 2 of the paper.
R concretes with a proportion of recycled aggregates are standardized normal concretes which are allowed for use in Germany up to strength class C30/37. Because of the good technical properties and the ecological advantages, the article presents possible applications in the field of concrete products and precast concrete elements. Read part 1 of the paper.
Tests for speeding up the determination of the Bernstein enclosure of the range of a multivariate polynomial and a rational function over a box and a simplex are presented. In the polynomial case, this enclosure is the interval spanned by the minimum and the maximum of the Bernstein coefficients which are the coefficients of the polynomial with respect to the tensorial or simplicial Bernstein basis. The methods exploit monotonicity properties of the Bernstein coefficients of monomials as well as a recently developed matrix method for the computation of the Bernstein coefficients of a polynomial over a box.
In this paper, multivariate polynomials in the Bernstein basis over a simplex (simplicial Bernstein representation) are considered. Two matrix methods for the computation of the polynomial coefficients with respect to the Bernstein basis, the so-called Bernstein coefficients, are presented. Matrix methods for the calculation of the Bernstein coefficients over subsimplices generated by subdivision of the standard simplex are also proposed and compared with the use of the de Casteljau algorithm. The evaluation of a multivariate polynomial in the power and in the Bernstein basis is considered as well. All the methods solely use matrix operations such as multiplication, transposition, and reshaping; some of them also rely on the bidiagonal factorization of the lower triangular Pascal matrix or the factorization of this matrix by a Toeplitz matrix. The latter enables the use of the fast Fourier transform, thereby reducing the number of arithmetic operations.
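In the univariate case over [0, 1], the basis conversion behind such methods reduces to a small triangular transform. The following sketch (a minimal illustration, not the paper's matrix factorizations) also shows the range-enclosure property of the Bernstein coefficients mentioned above:

```python
from math import comb

def bernstein_coeffs(a):
    # Convert power-basis coefficients a[i] of p(x) = sum a[i] x^i
    # on [0, 1] to Bernstein coefficients:
    #   b[j] = sum_{i<=j} C(j, i) / C(n, i) * a[i]
    n = len(a) - 1
    return [sum(comb(j, i) / comb(n, i) * a[i] for i in range(j + 1))
            for j in range(n + 1)]

# p(x) = x^2 - x, whose true range on [0, 1] is [-0.25, 0].
b = bernstein_coeffs([0, -1, 1])
print(b)               # [0.0, -0.5, 0.0]
print(min(b), max(b))  # the interval spanned encloses the true range
```

The enclosure [min(b), max(b)] = [-0.5, 0] contains the true range [-0.25, 0]; subdivision (e.g. via de Casteljau) tightens it.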
Path planning and collision avoidance for safe autonomous vessel navigation in dynamic environments
(2017)
To assess the quality of a person’s sleep, it is essential to examine the sleep behaviour by identifying the several sleep stages, their durations and the sleep cycles. The established gold standard procedure for sleep stage scoring is overnight polysomnography (PSG) with the Rechtschaffen and Kales (R-K) method. Unfortunately, conducting PSG is time-consuming and unfamiliar to the subjects, and might have an impact on the recorded data. To avoid the disadvantages of PSG, it is important to investigate low-cost home diagnostic systems. For this purpose, it is necessary to find vital parameters suitable for classifying sleep stages without causing any physical impairment.
Due to the promising results in several publications, we want to analyse existing methods for sleep stage classification based on the parameters body movement, heartbeat and respiration. Our aim was to find different behaviour patterns in the several sleep stages. Therefore, the average values of 15 whole-night PSG recordings (obtained from the ‘DREAMS Subjects Database’) were analysed with respect to heartbeat, body movement and respiration using 10 different methods.
Efficient privacy-preserving configurationless service discovery supporting multi-link networks
(2017)
Data is the pollution problem of the information age, and protecting privacy is the environmental challenge — this quotation from Bruce Schneier laconically illustrates the importance of protecting privacy. Protecting privacy — as well as protecting our planet — is fundamental for humankind. Privacy is a basic human right, stated in the 12th article of the United Nations’ Universal Declaration of Human Rights. The necessity to protect human rights is unquestionable. Nothing ever threatened privacy on a scale comparable to today’s interconnected computers. Ranging from small sensors over smart phones and notebooks to large compute clusters, they collect, generate and evaluate vast amounts of data. Often, this data is distributed via the network, not only rendering it accessible to addressees, but also — if not properly secured — to malevolent parties. Like a toxic gas, this data billows through networks and suffocates privacy. This thesis takes on the challenge of protecting privacy in the area of configurationless service discovery. Configurationless service discovery is a basis for user-friendly applications. It brings great benefits, allowing the configurationless network setup for various kinds of applications; e.g. for communicating, sharing documents and collaborating, or using infrastructure devices like printers. However, while today’s various protocols provide some means of privacy protection, typical configurationless service discovery solutions do not even consider privacy. As configurationless service discovery solutions are ubiquitous and run on almost every smart device, their privacy problems affect almost everyone. The quotation aligns very well with configurationless service discovery. Typically, configurationless service discovery solutions realize configurationlessness by using cleartext multicast messages literally polluting the local network and suffocating privacy. 
Messages containing private cleartext data are sent to everyone, even if they are only relevant for a few users. The typical means for mitigating the network pollution problem caused by multicast per se, regardless of the privacy aspects, is confining multicast messages to a single network link or to the access network of a WiFi access point; institutions often even completely deactivate multicast. While this mitigates the privacy problem, it also strongly scales configurationless service discovery down, either confining it or rendering it completely unusable. In this thesis, we provide an efficient configurationless service discovery framework that protects the users’ privacy. It further reduces the network pollution by reducing the number of necessary multicast messages and offers a mode of operation that is completely independent of multicast. Introducing a multicast-independent mode of operation, we also address the problem of the limited range in which services are discoverable. Our framework comprises components for device pairing, privacy-preserving service discovery, and multi-link scaling. These components are independent and — while usable in a completely separate way — are meant to be used as an integrated framework as they work seamlessly together. Based on our device pairing and privacy-preserving service discovery components, we published IETF Internet drafts specifying a privacy extension for DNS service discovery over multicast DNS, a widely used protocol stack for configurationless service discovery. As our drafts have already been adopted by the dnssd working group, they are likely to become standards.
A real matrix is called totally nonnegative if all of its minors are nonnegative. In this paper, the minors are determined from which the maximum allowable entry perturbation of a totally nonnegative matrix can be found, such that the perturbed matrix remains totally nonnegative. Also, the total nonnegativity of the first and second subdirect sum of two totally nonnegative matrices is considered.
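As a brute-force illustration of the definition (exponential cost, so only feasible for small matrices; the example matrices are illustrative), total nonnegativity can be checked by enumerating all minors:

```python
from itertools import combinations

def minors_nonneg(M, tol=1e-12):
    # Check that every minor of the square matrix M is nonnegative
    # by enumerating all row/column index subsets of equal size.
    n = len(M)
    def det(rows, cols):
        # Laplace expansion along the first listed row.
        if len(rows) == 1:
            return M[rows[0]][cols[0]]
        return sum((-1) ** j * M[rows[0]][cols[j]]
                   * det(rows[1:], cols[:j] + cols[j + 1:])
                   for j in range(len(cols)))
    for k in range(1, n + 1):
        for rows in combinations(range(n), k):
            for cols in combinations(range(n), k):
                if det(list(rows), list(cols)) < -tol:
                    return False
    return True

pascal = [[1, 1, 1], [1, 2, 3], [1, 3, 6]]  # Pascal matrix: totally nonnegative
print(minors_nonneg(pascal))                 # True
print(minors_nonneg([[1, 2], [3, 1]]))       # det = -5 < 0, so False
```

Perturbation results of the kind studied in the paper identify the few minors that actually constrain a given entry, avoiding this exhaustive check.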
Growth is a key indicator of the prosperity of an economy. In today's Germany, the "Gründerzeit" still describes a period of enormous economic growth. Factors that lead to growth have not yet been investigated in the context of the different life cycle stages of early-stage technology ventures. This paper proposes a factor-based model of early-stage ventures' growth. From a theoretical angle, we look at the business from the market-based view (MBV) and the resource-based view (RBV) on strategy in the longitudinal perspective of the business life cycle. With this view, we learn what the stage-specific needs and processes of new technology-based ventures are in order to provide appropriate support. We tested different potential growth indicators for the model with a questionnaire-based survey which was answered by 68 high-tech entrepreneurs. The results suggest that growth factors are stage-specific in their relevance. While leading to growth in one stage, certain factors show no or even negative influence on growth in other stages. Moreover, RBV factors are seen as more relevant for growth than the MBV factors. Further research requires a large and representative population to validate the results. Keywords: growth factors, early-stage ventures, market-based view, resource-based view.
Validity of the business model is a key indicator for buying into ventures in the early stage. Business models of early-stage ventures decrease in validity when developing the business over the progressing stages of the business life cycle. In doing so, the ventures are validating their business model when building transaction relationships to the surrounding value network. In prior research, we developed a research design based on existing business innovation proposals (one-pagers, pitch decks, business plans) that is assumed to evaluate the status of business model validation. The core hypothesis of the research design is that transaction relations represent a strong anchor between the business model and the business reality, thus providing information on the business model validity. In this research, we test this hypothesis by designing and analyzing a survey that was directed to founders taking part in a business plan competition. We compared the relationships described in the submitted business plans to the relations explicitly stated in the follow-up questionnaire. We identified that the described relations to customers, investors, and people (human resources) match the relationships expressed in the questionnaires quite well. A significant disagreement, however, exists in the relationships to suppliers. We conclude that there is still a theoretical and empirical gap that leads to disagreement between business plans and reality in the group of suppliers.
This work investigates data compression algorithms for applications in non-volatile flash memories. The main goal of the data compression is to minimize the amount of user data such that the redundancy of the error correction coding can be increased and the reliability of the error correction can be improved. A compression algorithm is proposed that combines a modified move-to-front algorithm with Huffman coding. The proposed data compression algorithm has low complexity, but provides a compression gain comparable to the Lempel-Ziv-Welch algorithm.
In this paper, a gain-scheduled nonlinear control structure is proposed for a surface vessel, which takes advantage of extended linearisation techniques. Thereby, accurate tracking of desired trajectories can be guaranteed, which contributes to safe and reliable water transport. The PI state feedback control is extended by a feedforward control based on an inverse system model. To achieve accurate trajectory tracking, however, an observer-based disturbance compensation is necessary: external disturbances caused by cross currents or wind forces in the lateral direction and wave-induced measurement disturbances are estimated by a nonlinear observer and used for compensation. The efficiency and the achieved tracking performance are shown by simulation results using a validated model of the ship Korona at the HTWG Konstanz, Germany. Here, both tracking behaviour and rejection of disturbance forces in the lateral direction are considered.
Sliding-mode observation with iterative parameter adaption for fast-switching solenoid valves
(2016)
Control of the armature motion of fast-switching solenoid valves is highly desirable to reduce noise emission and material wear. For feedback control, information about the current position and velocity of the armature is necessary. In mass production applications, however, position sensors are unavailable due to cost and fabrication reasons. Thus, position estimation by measuring merely electrical quantities is a key enabler for advanced control and, hence, for efficient and robust operation of digital valves in advanced hydraulic applications. The work presented here addresses the problem of state estimation, i.e., position and velocity of the armature, by sole use of electrical measurements. The considered devices typically exhibit nonlinear and very fast dynamics, which makes observer design a challenging task. In view of the presence of parameter uncertainty and possible modeling inaccuracy, the robustness properties of sliding mode observation techniques are deployed here. The focus is on error convergence in the presence of several sources of modeling uncertainty and inaccuracy. Furthermore, the cyclic operation of switching solenoids is exploited to iteratively correct a critical parameter by taking into account the norm of the observation error of past switching cycles of the process. A thorough discussion of real-world experimental results highlights the usefulness of the proposed state observation approach.
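The valve model itself is not given in the abstract; the mechanism of sliding-mode observation can still be illustrated on a generic second-order plant with a measured output. This is a textbook-style sketch with illustrative gains and parameters, not the paper's observer, which estimates an unmeasured position from electrical quantities:

```python
import math

def simulate(L1=5.0, L2=20.0, a=1.0, b=2.0, dt=1e-4, T=5.0):
    # Plant: x1' = x2, x2' = -a*x2 - b*x1 + u(t), with x1 measured.
    # Observer: inject the discontinuous term sign(e1) into both
    # estimate equations; after the sliding phase enforces e1 = 0,
    # the velocity estimate converges as well.
    sign = lambda v: 1.0 if v > 0 else (-1.0 if v < 0 else 0.0)
    x1, x2 = 1.0, 0.0      # true state
    x1h, x2h = 0.0, 0.0    # observer estimates
    t = 0.0
    while t < T:
        u = math.sin(t)
        e1 = x1 - x1h      # measured output error
        # explicit Euler step of the true plant
        x1_n = x1 + x2 * dt
        x2_n = x2 + (-a * x2 - b * x1 + u) * dt
        # observer step with discontinuous injection
        x1h_n = x1h + (x2h + L1 * sign(e1)) * dt
        x2h_n = x2h + (-a * x2h - b * x1h + u + L2 * sign(e1)) * dt
        x1, x2, x1h, x2h = x1_n, x2_n, x1h_n, x2h_n
        t += dt
    return abs(x1 - x1h), abs(x2 - x2h)

e1_end, e2_end = simulate()
print(e1_end, e2_end)  # both small: the estimates track the true state
```

The discontinuous injection is what gives the observer its robustness to bounded model uncertainty, the property exploited in the paper.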
The method of signal injection is investigated for position estimation of proportional solenoid valves. A simple observer is proposed to estimate a position-dependent parameter, i.e. the eddy current resistance, from which the position is calculated analytically. To this end, the relationship between position and impedance in the case of sinusoidal excitation is accurately described by consideration of classical electrodynamics. The observer approach is compared with a standard identification method and evaluated in practical experiments on an off-the-shelf proportional solenoid valve.
Several possibilities for tests under load on a chassis dynamometer are presented. Consumption measurements according to standard driving cycles such as the New European Driving Cycle (NEDC) and the Worldwide harmonized Light vehicles Test Procedure/Cycle (WLTP/WLTC) require special attention to compliance with the regulations. The rotational moments of inertia and the velocity-dependent load have to match the required values. Load tests also allow the determination of the maximum acceleration in the current gear and of the slippage of the driven wheels.
The aim of the paper is to present a simulation of the sweeping process based on a mathematical model that includes the drag force, the lift force, the sideway force, and gravity. First, a short history of street sweepers is given, together with some considerations about the sweeping process and its parameters. Using the developed model, simulations of the trajectory of a spherical pebble are carried out in Matlab. The obtained results are presented in graphical form.
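The trajectory simulation described above can be sketched in a few lines; the following planar toy version (in Python rather than Matlab) keeps only gravity and quadratic drag, omits the lift and sideway forces, and assumes all pebble and air parameters for illustration.

```python
import math

def pebble_range(v0=5.0, angle_deg=45.0, dt=1e-4):
    # Assumed parameters for a small spherical pebble in air
    rho_air, cd, r = 1.2, 0.47, 0.005              # kg/m^3, drag coeff., m
    area = math.pi * r * r
    mass = 2500.0 * (4.0 / 3.0) * math.pi * r ** 3  # stone density 2500 kg/m^3
    g = 9.81
    vx = v0 * math.cos(math.radians(angle_deg))
    vy = v0 * math.sin(math.radians(angle_deg))
    x = y = 0.0
    while True:
        v = math.hypot(vx, vy)
        # quadratic drag opposing the velocity; lift/side force omitted here
        fd = 0.5 * rho_air * cd * area * v
        ax, ay = -fd * vx / mass, -g - fd * vy / mass
        x_new, y_new = x + dt * vx, y + dt * vy
        vx, vy = vx + dt * ax, vy + dt * ay
        if y_new < 0.0 and vy < 0.0:
            # linear interpolation to the ground crossing
            frac = y / (y - y_new)
            return x + frac * (x_new - x)
        x, y = x_new, y_new

rng = pebble_range()
```

The computed range is slightly below the vacuum range v0^2*sin(2*theta)/g, as drag must reduce it.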
Stress is becoming an important topic in modern life. The influence of stress results in a higher rate of health disorders such as burnout, heart problems, obesity, asthma, diabetes, depression and many others. Furthermore, an individual's behavior and capabilities can be directly affected, leading to altered cognition and impaired decision-making and problem-solving skills. In a dynamic and unpredictable environment, such as driving, this can result in a higher risk of accidents. Several papers have addressed the estimation and prediction of drivers' stress levels while driving. Equally important, however, is not only the stress level of the driver himself, but also the influence on and of a group of other drivers in the nearby area. This paper proposes a system which groups drivers in a nearby area into clusters and derives their individual stress levels. This information is analyzed to generate a stress map, a graphical view of road sections with a higher stress influence. The aggregated data can be used to generate navigation routes with a lower stress influence, in order to reduce stress-influenced driving and to improve road safety.
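The cluster-and-aggregate step can be sketched as follows. This is a hypothetical illustration, not the paper's method: a greedy proximity grouping with an assumed radius, averaging the stress levels per cluster to produce stress-map entries.

```python
import math

def build_stress_map(drivers, radius=1.0):
    """Group drivers into proximity clusters and average their stress.

    drivers: list of (x, y, stress_level); radius: assumed grouping distance.
    Returns a list of clusters as (centroid_x, centroid_y, mean_stress).
    """
    clusters = []  # each cluster is a list of (x, y, stress) members
    for x, y, s in drivers:
        for members in clusters:
            x0, y0, _ = members[0]
            if math.hypot(x - x0, y - y0) <= radius:
                members.append((x, y, s))
                break
        else:  # no existing cluster close enough -> start a new one
            clusters.append([(x, y, s)])
    stress_map = []
    for members in clusters:
        n = len(members)
        cx = sum(m[0] for m in members) / n
        cy = sum(m[1] for m in members) / n
        ms = sum(m[2] for m in members) / n
        stress_map.append((cx, cy, ms))
    return stress_map

demo = build_stress_map([(0.0, 0.0, 0.2), (0.1, 0.0, 0.4),
                         (5.0, 5.0, 0.8), (5.1, 5.1, 0.6),
                         (10.0, 0.0, 0.9)])
```

With the made-up positions above, the five drivers collapse into three clusters whose mean stress values could then be rendered as map sections.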
Sleep is an important aspect in the life of every human being. The average sleep duration for an adult is approximately 7 h per day. Sleep is necessary to regenerate the physical and psychological state of a human. Bad sleep quality has a major impact on health status and can lead to different diseases. In this paper an approach is presented which uses long-term monitoring of vital data, gathered by a body sensor during the day and the night and supported by a mobile application connected to an analyzing system, to estimate the sleep quality of its user and to give recommendations for improving it in real time. Actimetry and historical data are used to improve the individual recommendations, based on common techniques from the areas of machine learning and big data analysis.
The magneto-mechanical behavior of magnetic shape memory (MSM) materials has been investigated by means of different simulation and modeling approaches by several research groups. The target of this paper is to simulate actuators driven by MSM alloys and to understand the MSM element behavior during actuation, which shall lead to an increased performance of the actuator. It is shown that internal and external stresses should be taken into consideration using numerical computation tools for magnetic fields in an efficient way.
Stress is recognized as a predominant disease with rising costs for rehabilitation and treatment. Currently, there are several different approaches that can be used for determining and calculating stress levels. Usually, the methods for determining stress are divided into two categories. The first category does not require any special equipment for measuring stress; it uses the variations in behaviour patterns that occur under stress. The core disadvantage of this category is its limitation to specific use cases. The second category uses laboratory instruments and biological sensors. This category allows stress to be measured precisely and proficiently, but at the same time the instruments are neither mobile nor transportable and do not support real-time feedback. This work presents a mobile system that provides the calculation of stress. To achieve this, the signal of a mobile ECG sensor is analysed, processed and visualised on a mobile device such as a smartphone. This work also explains the stress measurement algorithm used. The result of this work is a portable system that can be used with a mobile device such as a smartphone as a visual interface for reporting the current stress level.
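The abstract does not spell out the stress measurement algorithm, but ECG-based stress estimation commonly relies on heart-rate variability. The following is a minimal sketch under that assumption: the RMSSD statistic over RR intervals with a made-up decision threshold, not the system's actual algorithm.

```python
import math

def rmssd(rr_ms):
    # root mean square of successive RR-interval differences (in ms),
    # a standard short-term heart-rate-variability statistic
    diffs = [b - a for a, b in zip(rr_ms, rr_ms[1:])]
    return math.sqrt(sum(d * d for d in diffs) / len(diffs))

def stress_flag(rr_ms, threshold_ms=10.0):
    # Assumed rule of thumb: low beat-to-beat variability -> elevated stress.
    # The threshold is illustrative; a real system would calibrate per user.
    return rmssd(rr_ms) < threshold_ms

relaxed = [800, 810, 790, 805, 795]   # RR intervals in ms, high variability
stressed = [600, 602, 598, 601, 599]  # low variability, elevated heart rate
```

For the two synthetic recordings, only the low-variability sequence is flagged.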
Stress is recognized as a predominant disease with growing costs of treatment. The approach presented here aims to detect stress using a lightweight, mobile, cheap and easy-to-use system. The results show that stress can be detected even in cases where a person's natural bio-vital data is outside the usual range. The system enables storage of the measured data while maintaining communication channels for online and post-processing.
Navigation on the Danube
(2016)
This report contains two parts: The first part presents an overview of studies concerning the Danube, inland navigation, or the impact of climate change on either of those. The second part gives a more detailed analysis of inland navigation on the Danube, partly based on the studies presented in part one. Part two covers the current situation along the Danube by addressing bottlenecks and other limitations for shipping along the river. Based on this information, an estimation of the economic impact of low water periods on inland navigation is made. As a last step, measures to reduce the impact of low water on inland navigation are presented. The report shows that inland navigation is still an important transport mode, along the Danube as well as in other European regions. Especially in Romania, inland navigation still has a large and rising share of more than 20% of total transport. However, inland navigation depends strongly on good conditions of its infrastructure. These good conditions are limited mainly by two factors. One is the so-called bottlenecks: areas with sub-optimal shipping conditions, e.g. due to solid rock formations in the river that lead to a reduced water depth. The other factor is the weather (and, on a longer time scale, the climate) which, depending mostly on precipitation and evaporation, can lead to seasonally low water levels. In addition to these two natural factors, there are laws which, e.g., regulate the maximum number of barges allowed, and human-built structures like locks, which limit the size of vessels as well as the speed at which they can travel. These limiting factors are identified and located in the first chapter of part two of this report, before the water depths needed by several ship sizes as well as the cargo fleet available along the Danube are presented. One of the targets of this report is to estimate the economic impact of low water periods.
All the factors named above as well as the freight prices charged for connections along the Danube are used to reach this target in chapter II.4. To estimate the impact of low water periods on freight prices, a method developed by Jonkeren et al. (2007) for the Rhine is transferred to the Danube. By transferring the method of Jonkeren et al. (2007), regression equations for several transport connections along the Danube are identified that give a first estimate of the relation between freight prices and water levels. With the help of these regression equations, an estimation of the total expenses for transport via inland navigation is possible for several years. The yearly and seasonal variability is identified, as well as the additional expenses due to water levels below 280 cm. But additional expenses are not the only impact of changing water levels on inland navigation. Another is that, while the demand for transport stays at the same level, the water levels are sometimes not sufficient to use the full capacity of the fleet. Therefore, the (theoretical) amount of cargo that could not be transported due to low water levels is calculated as well and presented in chapter II.5. Finally, some measures to overcome some of the problems of inland navigation caused by low water levels are presented. These are separated into two general approaches: change the ship or change the river. Both approaches have their advantages and disadvantages due to technical as well as regulatory and other factors. The list presented here is, however, incomplete and only gives a few ideas of how some problems can be overcome. In the end, an individual mix must be found for the different regions along the river and sometimes for the individual companies.
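A regression of freight price on water level, as in the transferred Jonkeren-style approach, can be sketched with ordinary least squares. The numbers below are made up purely for illustration and are not the report's data or coefficients.

```python
def ols_fit(levels_cm, prices):
    # Ordinary least squares for the model: price = a + b * water_level
    n = len(levels_cm)
    mx = sum(levels_cm) / n
    my = sum(prices) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(levels_cm, prices))
    sxx = sum((x - mx) ** 2 for x in levels_cm)
    b = sxy / sxx          # slope: price change per cm of water level
    a = my - b * mx        # intercept
    return a, b

# Made-up illustration: lower water level -> higher freight price
levels = [200.0, 250.0, 300.0, 350.0]          # water level in cm
prices = [20.0 - 0.02 * lv for lv in levels]   # EUR/t, synthetic exact line
a, b = ols_fit(levels, prices)
```

A negative slope b would quantify the surcharge per centimetre of lost water depth, which is what the report's regression equations provide per transport connection.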
The corrosion resistance of stainless steels is massively influenced by the condition of their surface. The surface quality includes the topography of the surface, the structure and composition of the passive layer, and the near-surface structure of the base material. These factors are influenced by final physical/chemical surface treatments. The presented work shows significantly lower corrosion resistance for mechanically machined specimens than for etched specimens. It also turns out that the rougher the surface, the lower the corrosion resistance. However, there is no general finding as to whether blasted or ground surfaces are more appropriate; rather, the corrosion behavior depends on the process parameters and on the characteristics of the corrosive exposure. The results show that not only the surface roughness Ra has an influence on the corrosion behavior but also the shape of the peaks and valleys evolved by the surface treatments. Imperfections in the base material, such as sulfidic inclusions, lead to a weaker passive layer and hence to a decrease of the corrosion resistance. By using special passivating techniques, the corrosion resistance of stainless steels can be increased beyond the level reached with common passivation.
Even though immutability is a desirable property, especially in a multi-threaded environment, implementing immutable Java classes is surprisingly hard because of a lack of language support. We present a static analysis tool using abstract bytecode interpretation that checks Java classes for compliance with a set of rules that together constitute state-based immutability. Being realized as a FindBugs plug-in, the tool can easily be integrated into most IDEs and hence into the software development process. Our evaluation on a large, real-world codebase shows that the average run-time effort for a single class is in the range of a few milliseconds, with only very few statistical spikes.
Smart factory and education
(2016)
The introduction of cyber physical systems into production companies is profoundly changing working conditions and processes as well as business models. In practice, a growing discrepancy between large companies and small or medium-sized companies can be observed. To bridge that gap, a university smart factory is introduced to give these companies a platform for trials, employee education and access to consultancy. To realize the smart factory, a highly integrated, open and standardized automation concept is presented, comprising single devices and production lines up to a higher-level automation system sustaining a community and business models.
To learn from the past, we analyse 1,088 "computer as a target" judgements for evidential reasoning by extracting four case elements: decision, intent, fact, and evidence. Analysing the decision element is essential for studying the scale of sentence severity for cross-jurisdictional comparisons. Examining the intent element can facilitate future risk assessment. Analysing the fact element can enhance an organization's capability of analysing criminal activities for future offender profiling. Examining the evidence used against a defendant from previous judgements can facilitate the preparation of evidence for upcoming legal disclosure. Following the concepts of argumentation diagrams, we develop an automatic judgement summarizing system to enhance the accessibility of judgements and avoid repeating past mistakes. Inspired by the feasibility of extracting legal knowledge for argument construction and employing grounds of inadmissibility for probability assessment, we conduct evidential reasoning of kernel traces for forensic readiness. We integrate the narrative methods from attack graphs/languages for preventing confirmation bias, the argumentative methods from argumentation diagrams for constructing legal arguments, and the probabilistic methods from Bayesian networks for comparing hypotheses.
In this paper we provide a performance analysis framework for wireless industrial networks by deriving a service curve and a bound on the delay violation probability. For this purpose we use the (min,×) stochastic network calculus as well as a recently presented recursive formula for an end-to-end delay bound of wireless heterogeneous networks. The derived results are mapped to WirelessHART networks used in process automation and were validated via simulations. In addition to WirelessHART, our results can be applied to any wireless network whose physical layer conforms to the IEEE 802.15.4 standard, while its MAC protocol incorporates TDMA and channel hopping, such as ISA100.11a or TSCH-based networks. The provided delay analysis is especially useful during the network design phase, offering further research potential towards optimal routing and power management in QoS-constrained wireless industrial networks.
These days, computer analysis of ECG (electrocardiogram) signals is common. There are many real-time QRS recognition algorithms; one of them is the Pan-Tompkins algorithm, which detects QRS complexes in ECG signals. The proposed algorithm analyses the heartbeat data stream based on digital analysis of the amplitude, the bandwidth, and the slope. In addition, after detecting the ECG signals, the stress algorithm compares whether the current heartbeat is similar to or different from the last heartbeat. This algorithm performs stress detection for the patient in real time. In order to implement the new algorithm with higher performance, the parallel programming platform CUDA is used. The algorithm determines stress at the same time by determining the RR interval, and it uses different functions as a beat detector and as a beat classifier for stress.
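The core Pan-Tompkins stages (derivative, squaring, moving-window integration, thresholded peak search) can be sketched in plain Python on a synthetic signal. This sequential toy version is only a sketch: the band-pass stage and the adaptive dual thresholds of the full algorithm are omitted, and the CUDA parallelization of the paper is not reproduced.

```python
def detect_qrs(x, fs=200):
    # 5-point derivative as used by Pan-Tompkins at fs = 200 Hz
    d = [(2 * x[i] + x[i - 1] - x[i - 3] - 2 * x[i - 4]) / 8.0
         for i in range(4, len(x))]
    s = [v * v for v in d]                      # squaring emphasises slopes
    w = int(0.15 * fs)                          # 150 ms integration window
    mwi = [sum(s[max(0, i - w + 1):i + 1]) / w for i in range(len(s))]
    thr = 0.5 * max(mwi)                        # fixed threshold (simplified)
    peaks, last = [], -fs
    refractory = int(0.2 * fs)                  # 200 ms refractory period
    for i in range(1, len(mwi) - 1):
        if (mwi[i] > thr and mwi[i] >= mwi[i - 1] and mwi[i] > mwi[i + 1]
                and i - last > refractory):
            peaks.append(i)
            last = i
    return peaks

# Synthetic "ECG": three identical triangular spikes at a fixed interval
sig = [0.0] * 600
for c in (100, 300, 500):
    for k in range(-3, 4):
        sig[c + k] = 1.0 - abs(k) / 4.0
beats = detect_qrs(sig)
```

The three synthetic beats are found one second apart (200 samples at 200 Hz), which is exactly the RR interval a stress classifier would then operate on.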
Realistic traffic modeling plays a key role in efficient Dynamic Spectrum Access (DSA), which is considered an enabler for the deployment of wireless technologies in critical industrial automation applications (IAA). The majority of models of spectrum usage are not suitable for this specific use case as they are based on measurement campaigns conducted in urban or controlled laboratory environments. In this work we present a time-domain traffic model for industrial communication in the 2.4 GHz industrial, scientific, medical (ISM) band based on measurements in an industrial automotive production site. As DSA is usually implemented on Software Defined Radios (SDR), our measurement campaign is based on SDR platforms rather than sophisticated spectrum analyzers. We show through the estimation of the Hurst parameter that industrial wireless traffic possesses inherent self-similarity that could be exploited for efficient DSA. We also show that wireless traffic can be modeled as a semi-Markov model with channel on and off durations log-normally and Pareto distributed, respectively. We finally estimate the parameters of the derived models using Maximum Likelihood estimation.
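The Maximum Likelihood step for the two assumed duration distributions has closed-form estimators, sketched below on tiny made-up samples (the paper's measurement data is of course much larger; the numbers here are illustrative only).

```python
import math

def lognormal_mle(x):
    # ML estimates of mu and sigma for a log-normal sample:
    # mean and (biased) standard deviation of the log-data
    logs = [math.log(v) for v in x]
    n = len(logs)
    mu = sum(logs) / n
    sigma = math.sqrt(sum((l - mu) ** 2 for l in logs) / n)
    return mu, sigma

def pareto_mle(x):
    # ML estimates for a Pareto sample: scale x_m is the sample minimum,
    # shape alpha = n / sum(log(x_i / x_m))
    xm = min(x)
    n = len(x)
    alpha = n / sum(math.log(v / xm) for v in x)
    return xm, alpha

# Tiny made-up duration samples (seconds); real fits need far more data
on_durations = [1.0, math.e, math.e ** 2]       # "channel busy" durations
off_durations = [1.0, 2.0, 4.0]                 # "channel idle" durations
mu, sigma = lognormal_mle(on_durations)
xm, alpha = pareto_mle(off_durations)
```

With these estimators, each measured channel's on/off duration samples yield one (mu, sigma) and one (x_m, alpha) pair, parameterizing the semi-Markov traffic model.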
ERP systems integrate a major part of all business processes, and organizations include them in their IT service management. Besides these formal systems, there are additional systems that are rather stand-alone and not included in the IT management tasks. These so-called 'shadow systems' also support business processes but hinder high enterprise integration. Shadow systems come to light during explicit detection efforts or during software maintenance projects such as enhancements or release changes of enterprise systems. Organizations then have to decide if and to what extent they integrate the identified shadow systems into their ERP systems. For this decision, organizations have to compare the capabilities of each identified shadow system with their ERP systems. Based on multiple case studies, we provide a dependency approach to enable their comparison. We derive categories for different stages of the dependency and base insights into integration possibilities on these stages. Our results show that 64% of the shadow systems in our case studies are related to ERP systems. This means that they share parts or all of their data and/or functionality with the ERP system. Our research contributes to the field of integration as well as to the discussion about shadow systems.
adidas and Reebok
(2016)
Present demographic change and a growing population of elderly people lead to new medical needs. Meeting these needs with state-of-the-art technology is consequently a rapidly growing market. This work therefore aims to take modern concepts of mobile and sensor technology and put them into a medical context. By measuring a user's vital signs with sensors whose data is processed on an Android smartphone, the target system is able to determine the current health state of the user and to visualize the gathered information. The system also includes a weather forecasting functionality, which alerts the user to possibly dangerous future meteorological events. All information is collected centrally and distributed to users based on their location. Further, the system can correlate the client-side measurement of vital signs with a server-side weather history. This enables personalized forecasting for each user individually. Finally, a portable and affordable application was developed that continuously monitors the health status by means of many vital sensors, all united on a common smartphone.
This paper proposes a soft input decoding algorithm and a decoder architecture for generalized concatenated (GC) codes. The GC codes are constructed from inner nested binary Bose-Chaudhuri-Hocquenghem (BCH) codes and outer Reed-Solomon codes. In order to enable soft input decoding for the inner BCH block codes, a sequential stack decoding algorithm is used. Ordinary stack decoding of binary block codes requires the complete trellis of the code. In this paper, a representation of the block codes based on the trellises of supercodes is proposed in order to reduce the memory requirements for the representation of the BCH codes. This enables an efficient hardware implementation. The results for the decoding performance of the overall GC code are presented. Furthermore, a hardware architecture of the GC decoder is proposed. The proposed decoder is well suited for applications that require very low residual error rates.
Many procedures for estimating the spool position in linear electromagnetic actuators using only voltage and current measurements can be found in the literature. With respect to the accuracy of the estimated spool position, some achieve better results than others. However, in almost every approach, hysteresis has a huge impact on the estimation accuracy that can be achieved. Regardless of whether these effects are caused by magnetic or mechanical hysteresis, they will limit the accuracy of the position estimate if not taken into account. In this paper, a model is introduced which covers the hysteresis effects as well as other nonlinearities occurring in estimated position-dependent parameters. A classical Preisach model is deployed first, which is then adjusted by using novel elementary preceding relay operators. The resulting model for the estimated position-dependent parameters, including the adjusted Preisach model, can easily be applied to position estimation tasks. It is shown that the considered model distinctly improves the accuracy of the spool position estimate, while being kept as simple as possible for reasons of real-time implementation.
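The classical Preisach model used as the starting point above is a weighted sum of elementary relay hysterons. The following minimal sketch shows that structure with made-up thresholds and weights; the paper's adjusted preceding relay operators are not reproduced here.

```python
class Relay:
    """Elementary hysteron: switches up at u >= alpha, down at u <= beta."""
    def __init__(self, alpha, beta, state=-1):
        assert beta < alpha
        self.alpha, self.beta, self.state = alpha, beta, state
    def update(self, u):
        if u >= self.alpha:
            self.state = 1
        elif u <= self.beta:
            self.state = -1
        return self.state  # holds its last state in between (hysteresis)

class Preisach:
    """Discrete classical Preisach model: weighted sum of relay outputs."""
    def __init__(self, thresholds, weights):
        self.relays = [Relay(a, b) for a, b in thresholds]
        self.weights = weights
    def apply(self, u):
        return sum(w * r.update(u)
                   for r, w in zip(self.relays, self.weights))

# Made-up hysteron set; a real model identifies many weighted hysterons
model = Preisach([(0.5, -0.5), (0.8, -0.2), (0.2, -0.8)],
                 [1 / 3, 1 / 3, 1 / 3])
for u in (-1.0, 1.0):
    model.apply(u)
descending = model.apply(0.0)   # reach u = 0 coming down from +1
model.apply(-1.0)
ascending = model.apply(0.0)    # reach u = 0 coming up from -1
```

The same input u = 0 yields different outputs depending on the input history, which is exactly the memory effect that degrades history-blind position estimates.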