In biomechanics laboratories, the ground reaction force time histories of a person's footfall are usually measured with a force plate. The accelerations of the floor in which the force plate is embedded have to be limited, as they may affect the accuracy of the force measurements. For the numerical simulation of human-induced vibrations in biomechanics laboratories, loading scenarios are defined. They include continuous motions of persons (walking, running) as well as jumps typical of biomechanical investigations on athletes. In the case of portable force plates, the floor model has to account for the influence of the floor screed. Criteria for assessing the measuring error caused by floor vibrations are given. As an example, a floor designed to accommodate a force platform in a biomechanics laboratory of the University Hospital in Tübingen, Germany, has been investigated for footfall-induced vibrations. The numerical simulation by finite element analysis has been validated by field measurements. As a result, the measuring error of the force plate installed in the laboratory is obtained for diverse scenarios.
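As a rough illustration of why floor accelerations matter here: the floor's motion adds an inertial force that the plate cannot distinguish from the footfall it is measuring. The sketch below estimates this apparent error with Newton's second law; the plate mass and acceleration amplitude are invented numbers, not the assessment criteria used in the paper.

```python
def apparent_force_error(plate_mass_kg, floor_accel_ms2):
    """Inertial force registered by the plate when the supporting floor
    accelerates (Newton's second law, F = m * a)."""
    return plate_mass_kg * floor_accel_ms2

# Hypothetical numbers: a 30 kg plate on a floor vibrating at 0.05 m/s^2
err_n = apparent_force_error(30.0, 0.05)  # -> 1.5 N
```

Compared with a typical ground reaction force of several hundred newtons, this shows how even small floor accelerations translate directly into a measurable force error.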
Tourist tracking
(2015)
This work investigates soft input decoding for generalized concatenated (GC) codes. The GC codes are constructed from inner nested binary Bose-Chaudhuri-Hocquenghem (BCH) codes and outer Reed-Solomon (RS) codes. To enable soft input decoding of the inner BCH block codes, a sequential stack decoding algorithm is used. Ordinary stack decoding of binary block codes requires the complete trellis of the code.
In this work, a representation of the block codes based on the trellises of supercodes is proposed in order to reduce the memory requirements of the BCH code representation. Results for the decoding performance of the overall GC code are presented.
Furthermore, an efficient hardware implementation of the GC decoder is proposed.
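The sequential stack decoder can be illustrated on a toy example. The sketch below is a simplified stand-in for the decoder described above: it uses a (7,4) Hamming code instead of a nested BCH code, checks prefix validity against the full codeword set instead of a supercode trellis, and ranks paths by a plain correlation metric rather than the Fano metric used in practical sequential decoders.

```python
import heapq
import itertools
import numpy as np

# Toy (7,4) Hamming code in systematic form (stand-in for a nested BCH code)
G = np.array([[1, 0, 0, 0, 1, 1, 0],
              [0, 1, 0, 0, 1, 0, 1],
              [0, 0, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])
codewords = {tuple((np.array(m) @ G) % 2)
             for m in itertools.product([0, 1], repeat=4)}

def stack_decode(r):
    """Soft-input stack decoding: r[i] > 0 favors bit 0, r[i] < 0 favors bit 1.
    Partial paths live in a priority queue ordered by correlation metric."""
    n = len(r)
    heap = [(0.0, ())]  # entries: (-metric, bit prefix)
    while heap:
        neg_metric, prefix = heapq.heappop(heap)
        if len(prefix) == n:
            return list(prefix)  # a complete path reached the top of the stack
        for bit in (0, 1):
            new = prefix + (bit,)
            # Extend only prefixes of valid codewords; the supercode trellis
            # plays this pruning role in the actual decoder.
            if any(cw[:len(new)] == new for cw in codewords):
                metric = -neg_metric + (1 - 2 * bit) * r[len(prefix)]
                heapq.heappush(heap, (-metric, new))
    return None

# Soft received values for codeword [1,0,1,1,0,1,0] with mild noise
r = np.array([-0.9, 1.1, -0.2, -1.0, 0.8, -1.2, 0.3])
decoded = stack_decode(r)  # -> [1, 0, 1, 1, 0, 1, 0]
```

Note how the weakly received third position (|r| = 0.2) is still resolved correctly because invalid prefixes are pruned and the reliable positions dominate the path metric.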
Hot isostatic pressing (HIP) allows the production of components with complex geometries. A high quality of the components is generally achieved thanks to the well-controlled composition of the metal powder and the isotropic properties. If a duplex stainless steel is produced, a heat treatment after the HIP process is necessary to remove precipitates such as carbides, nitrides and intermetallic phases. In a new process, the sintering step is to be combined with the heat treatment. In this case, a high cooling rate is necessary to avoid precipitates in duplex stainless steels. In this work, the influence of the HIP temperature and the wall thickness on corrosion resistance, microstructure and impact strength was investigated. The results should help to optimize process parameters such as temperature and cooling rate. For the investigation, two HIP temperatures were tested in a classical HIP process step with a defined cooling rate. No additional heat treatment was conducted. The specimens were cut from different sectors of the HIP block. To investigate the corrosion resistance, the critical pitting temperature was determined with an electrochemical method according to EN ISO 17864. An impact test was used to determine the impact transition temperature. Metallographic investigations show the microstructure in the different sectors of the HIP block.
Differences in the pitting resistance between cold worked CrNi and CrNiMnN metastable austenites
(2015)
Technical presentation at the CORROSION 2015 conference, 15-19 March, Dallas, Texas, USA. NACE International
Effect of cold working on the localized corrosion behavior of CrNi and CrNiMnN metastable austenites
(2015)
Probabilistic data association for tracking extended targets under clutter using random matrices
(2015)
The use of random matrices for tracking extended objects has received considerable attention in recent years. It is an efficient approach for tracking objects that give rise to more than one measurement per time step. In this paper, the concept of random matrices is used to track surface vessels with high-resolution automotive radar sensors. Since the radar also receives a large number of clutter measurements from the water, a generalized probabilistic data association filter is applied to solve the data association problem. Additionally, a modification of the filter update step is proposed to incorporate the Doppler velocity measurements. The presented tracking algorithm is validated using Monte Carlo simulation, and some performance results with real radar data are shown as well.
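The core of the random-matrix approach can be sketched in a few lines: the kinematic state is updated with the centroid of the measurements, while an inverse-Wishart estimate of the object extent accumulates their scatter. The sketch below is a minimal version of this update, without the clutter handling or Doppler modification proposed in the paper; all priors and measurements are illustrative.

```python
import numpy as np

def random_matrix_update(x, P, V, nu, Z, R):
    """One random-matrix measurement update for an extended target:
    the kinematic mean is updated with the measurement centroid, the
    inverse-Wishart extent estimate (V, nu) with the measurement scatter."""
    m = Z.shape[0]
    z_bar = Z.mean(axis=0)                 # measurement centroid
    spread = Z - z_bar
    Z_scatter = spread.T @ spread          # scatter around the centroid
    S = P + R / m                          # innovation covariance
    K = P @ np.linalg.inv(S)               # Kalman gain
    x_new = x + K @ (z_bar - x)
    P_new = P - K @ S @ K.T
    V_new = V + Z_scatter                  # extent accumulates scatter
    nu_new = nu + m                        # degrees of freedom grow with m
    return x_new, P_new, V_new, nu_new

# Illustrative priors and four detections from one vessel centered at (2, 0)
x0, P0 = np.zeros(2), 10.0 * np.eye(2)
V0, nu0 = np.eye(2), 3
Z = np.array([[1.0, 0.0], [3.0, 0.0], [2.0, 1.0], [2.0, -1.0]])
x1, P1, V1, nu1 = random_matrix_update(x0, P0, V0, nu0, Z, 0.01 * np.eye(2))
```

With a vague prior, the position estimate moves essentially onto the centroid of the four detections, while the scatter matrix captures the spatial extent of the vessel.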
Small vessels or unmanned surface vehicles have only a limited amount of space and energy available. If these vessels require an active sensing collision avoidance system, it is often not possible to mount large sensor systems such as X-band radars. Thus, in this paper an energy-efficient automotive radar and a laser range sensor are evaluated for tracking surrounding vessels. For such targets, these types of sensors typically generate more than one detection per scan. Therefore, an extended target tracking problem has to be solved to estimate the state and extent of the vessels. In this paper, an extended version of the probabilistic data association filter that uses random matrices is applied. The performance of the tracking system using either radar or laser range data is demonstrated in real experiments.
Stress is recognized as a predominant factor in disease, and the costs for its treatment will increase in the future. The presented approach tries to detect stress in a very basic and easy-to-implement way, so that the cost of the device and the effort of wearing it remain low. The user benefits from the fact that the system offers an easy interface reporting the status of his body in real time. In parallel, the system provides interfaces to pass the obtained data on for further processing and (professional) analysis, provided the user agrees. The system is designed to be used in everyday activities and is not restricted to laboratory use or environments. The implementation of the enhanced prototype shows that the detection of stress and the reporting can be managed using correlation plots and automatic pattern recognition even on a very lightweight microcontroller platform.
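The "correlation plots" mentioned above suggest one very lightweight detector: correlate two physiological signals over a sliding window and flag stress when the correlation exceeds a threshold. The signals, window length, and threshold below are purely illustrative and not those of the prototype; they only show how little computation such a scheme needs.

```python
import numpy as np

def sliding_corr(a, b, win):
    """Pearson correlation of two signals over a sliding window."""
    return np.array([np.corrcoef(a[i:i + win], b[i:i + win])[0, 1]
                     for i in range(len(a) - win + 1)])

# Invented sensor traces: heart rate (bpm) and skin conductance (uS);
# both rise together in the second half, mimicking a stress reaction.
hr  = np.array([60.0, 61.0, 60.0, 62.0, 70.0, 75.0, 80.0, 85.0])
eda = np.array([2.0, 2.0, 2.1, 2.0, 3.0, 3.5, 4.0, 4.5])
stress = sliding_corr(hr, eda, win=4) > 0.9   # hypothetical threshold
```

A per-window correlation plus a threshold comparison is well within the budget of a small microcontroller, which is the point the abstract makes.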
Digital cameras are subject to physical, electronic and optical effects that result in errors and noise in the image. These effects include, for example, a temperature-dependent dark current, read noise, optical vignetting and different sensitivities of individual pixels. The task of a radiometric calibration is to reduce these errors in the image and thus improve the quality of the overall application. In this work we present an algorithm for radiometric calibration based on Gaussian processes, a regression method widely used in machine learning that is particularly useful in our context. Gaussian process regression is used to learn a temperature- and exposure-time-dependent mapping from observed gray-scale values to true light intensities for each pixel. Regression models based on the characteristics of single pixels suffer from excessively high runtime and are thus unsuitable for many practical applications. In contrast, a single regression model for an entire image with high spatial resolution leads to a low-quality radiometric calibration, which also limits its practical use. The proposed algorithm is predicated on a partitioning of the pixels such that each pixel partition can be represented by a single regression model without quality loss. Partitioning is done by extracting features from the characteristics of each pixel and using them for lexicographic sorting. Splitting the sorted data into partitions of equal size yields the final partitions, each of which is represented by its partition center. An individual Gaussian process regression and model selection is done for each partition. Calibration is performed by interpolating the gray-scale value of each pixel with the regression model of the respective partition. The experimental comparison of the proposed approach to classical flat-field calibration shows a consistently higher reconstruction quality for the same overall number of calibration frames.
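The partition-then-regress idea can be sketched compactly. The code below uses a tiny hand-rolled GP with an RBF kernel on scalar inputs (standing in for the temperature- and exposure-dependent model), a fixed length scale instead of model selection, and invented per-pixel features and partition counts; it only illustrates the lexicographic-sort-and-split pipeline, not the authors' implementation.

```python
import numpy as np

def gp_predict(X, y, Xs, ls=0.3, noise=1e-4):
    """GP regression with an RBF kernel (scalar inputs, zero prior mean)."""
    k = lambda a, b: np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ls ** 2)
    K = k(X, X) + noise * np.eye(len(X))
    return k(Xs, X) @ np.linalg.solve(K, y)

# Invented per-pixel features (dark level, gain) for 8 pixels
feats = np.array([[0.2, 1.0], [0.9, 1.1], [0.2, 1.1], [0.8, 1.0],
                  [0.3, 1.0], [0.9, 1.0], [0.3, 1.1], [0.8, 1.1]])
order = np.lexsort((feats[:, 1], feats[:, 0]))  # lexicographic sort (primary key last)
partitions = np.array_split(order, 2)           # equal-size partitions

# One regression model per partition: exposure time -> mean gray value
exposure = np.linspace(0.0, 1.0, 5)
preds = []
for part in partitions:
    gray = 2.0 * exposure + feats[part, 0].mean()  # synthetic calibration data
    preds.append(gp_predict(exposure, gray, exposure))
```

Because pixels within a partition share similar characteristics, a single GP per partition recovers the calibration curve with far fewer models than one GP per pixel.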
Shadow IT risk
(2015)
Nowadays, the number of flexible and fast interactions between humans and application systems is increasing dramatically. For instance, citizens use the internet to organize surveys or meetings spontaneously and in real time. These interactions are supported by technologies and application systems such as free wireless networks and web or mobile apps. Smart Cities aim at enabling their citizens to use such digital services, e.g., by providing enhanced networks and application infrastructures maintained by the public administration. However, looking beyond technology, there is still a significant lack of interaction and support between "normal" citizens and the public administration. For instance, democratic decision processes (e.g., how to allocate publicly disposable budgets) are often discussed by the public administration without citizen involvement. This paper introduces an approach that describes the design of enhanced interactional web applications for Smart Cities based on dialogical logic process patterns. We demonstrate the approach with the help of a budgeting scenario and close with a summary and an outlook on further research.
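One way to read "dialogical logic process patterns" is as a state machine over speech acts such as propose, challenge, justify, and accept. The pattern below for the budgeting scenario is our illustrative reconstruction, not the notation or pattern set used in the paper.

```python
# Dialogue pattern as a state machine: state -> {move: next state}
PATTERN = {
    "open":       {"propose": "proposed"},
    "proposed":   {"challenge": "challenged", "accept": "accepted"},
    "challenged": {"justify": "proposed", "withdraw": "open"},
    "accepted":   {},
}

def run_dialogue(moves, state="open"):
    """Replay a sequence of citizen/administration moves; a KeyError
    signals a move that the pattern does not allow in the current state."""
    for move in moves:
        state = PATTERN[state][move]
    return state

# Budgeting exchange: a proposal is challenged, justified, then accepted
final = run_dialogue(["propose", "challenge", "justify", "accept"])  # -> "accepted"
```

Encoding the admissible moves explicitly is what lets a web application guide citizens and administrators through a structured, rule-governed exchange rather than a free-form discussion.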
Besides energy efficiency, product quality is gaining importance in the design of drying processes for sensitive biological foodstuffs. The influence of drying parameters on the drying kinetics of apples has been extensively investigated; the information about effects on product quality available in the literature, however, is often contradictory. Furthermore, quality changes obtained by applying different drying parameters are usually hard to compare. As most quality changes can be expressed as zero-, first- or second-order reactions and mainly depend on drying air temperature and drying time, it would be desirable to cross-check the results as a function thereof. This paper introduces a method of quality determination using a new reference value, the cumulated thermal load. It is defined as the time integral of the product surface temperature and improves the comparability of quality changes obtained under different experimental settings in the drying of apples and tomatoes. It could be shown that quality parameters such as color changes and shrinkage during apple drying and the content of temperature-sensitive acids in tomatoes vary linearly with the time integral of product temperature.
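The cumulated thermal load, defined above as the time integral of the product surface temperature, is straightforward to compute from sampled data with a trapezoidal rule. The temperatures and time grid in the sketch below are invented for illustration; the paper's experimental profiles will of course differ.

```python
import numpy as np

def cumulated_thermal_load(t_h, temp_c):
    """Time integral of product surface temperature (trapezoidal rule).
    t_h: sampling times in hours, temp_c: surface temperatures in deg C.
    Returns the cumulated thermal load in deg C * h."""
    t_h, temp_c = np.asarray(t_h, float), np.asarray(temp_c, float)
    return float(np.sum(0.5 * (temp_c[1:] + temp_c[:-1]) * np.diff(t_h)))

# Illustrative drying run: 6 h with surface temperature ramping 50 -> 70 deg C
t = [0.0, 2.0, 4.0, 6.0]
T = [50.0, 60.0, 70.0, 70.0]
ctl = cumulated_thermal_load(t, T)  # -> 380.0 deg C * h
```

Since the quality parameters are reported to vary linearly with this integral, two runs with different temperature profiles but equal cumulated thermal load would be expected to yield comparable quality changes.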