In 3D extended object tracking (EOT), well-established models exist for tracking the object extent using various shape priors. With these models, however, an update has to be performed for every single measurement, leading to a high computational runtime for high-resolution sensors. In this paper, we address this problem by using various model-independent downsampling schemes based on distance heuristics and random sampling as pre-processing before the update. We investigate the methods in a simulated and a real-world tracking scenario using two different measurement models with measurements gathered from a LiDAR sensor. We find that there is great potential for speeding up 3D EOT: in our investigated scenarios, up to 95% of the measurements can be dropped when using random sampling. Since random sampling can, however, also yield a subset that represents the total set poorly, leading to poor tracking performance, there is still a high demand for further research.
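Random sampling as a model-independent pre-processing step can be sketched in a few lines; the 5% keep-ratio and the synthetic point cloud below are illustrative assumptions, not the paper's exact configuration.

```python
import numpy as np

def downsample_random(points, keep_ratio=0.05, rng=None):
    """Keep a random subset of measurements before the EOT update.

    points     : (N, 3) array of 3D LiDAR measurements
    keep_ratio : fraction of measurements to retain (0.05 = drop 95%)
    """
    rng = np.random.default_rng() if rng is None else rng
    n_keep = max(1, int(len(points) * keep_ratio))
    idx = rng.choice(len(points), size=n_keep, replace=False)
    return points[idx]

# Example: 10,000 simulated LiDAR points reduced to 500
cloud = np.random.default_rng(0).normal(size=(10_000, 3))
subset = downsample_random(cloud, keep_ratio=0.05,
                           rng=np.random.default_rng(1))
```

Because every measurement triggers an update, the runtime of the update step scales roughly linearly with the number of retained points, which is why such a cheap subset selection pays off.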
With the high resolution of modern sensors such as multilayer LiDARs, estimating the 3D shape in an extended object tracking procedure is possible. In recent years, 3D shapes have been estimated in spherical coordinates using Gaussian processes, spherical double Fourier series or spherical harmonics. However, observations have shown that in many scenarios only a few measurements are obtained from top or bottom surfaces, leading to error-prone estimates in spherical coordinates. Therefore, in this paper we propose to estimate the shape in cylindrical coordinates instead, applying harmonic functions. Specifically, we derive an expansion for 3D shapes in cylindrical coordinates by solving a boundary value problem for the Laplace equation. This shape representation is then integrated into a plain greedy association model and compared to shape estimation procedures in spherical coordinates. Since the shape representation is only integrated into a basic estimator, the results are preliminary, and a detailed discussion of future work is presented at the end of the paper.
In this paper, a novel measurement model based on spherical double Fourier series (DFS) for estimating the 3D shape of a target concurrently with its kinematic state is introduced. Here, the shape is represented as a star-convex radial function, decomposed as spherical DFS. In comparison to ordinary DFS, spherical DFS do not suffer from ambiguities at the poles. Details will be given in the paper. The shape representation is integrated into a Bayesian state estimator framework via a measurement equation. As range sensors only generate measurements from the target side facing the sensor, the shape representation is modified to enable application of shape symmetries during the estimation process. The model is analyzed in simulations and compared to a shape estimation procedure using spherical harmonics. Finally, shape estimation using spherical and ordinary DFS is compared to analyze the effect of the pole problem in extended object tracking (EOT) scenarios.
In this paper, approximating the shape of a sailing boat using elliptic cones is investigated. Measurements are assumed to be gathered from the target's surface by 3D scanning devices such as multilayer LiDAR sensors. Different models for estimating the sailing boat's extent are presented and evaluated in simulated and real-world scenarios. In particular, the measurement source association problem is addressed in the models. Simulated investigations are conducted with a static and a moving elliptic cone. The real-world scenario was recorded with a Velodyne Alpha Prime (VLP-128) mounted on a ferry on Lake Constance. The final results of this paper comprise the extent estimation of a single sailing boat from LiDAR data using various measurement models.
In past years, algorithms have been proposed for 3D shape tracking using radial functions in spherical coordinates, represented with different methods. However, we have seen that in many dynamic scenarios mainly measurements from the lateral surface of the target can be expected, with only a few measurements from the top and bottom parts, leading to an error-prone shape estimate in the top and bottom regions when using a representation in spherical coordinates. We therefore propose to represent the shape of the target using a radial function in cylindrical coordinates, as these only represent regions of the lateral surface, and no information from the top or bottom parts is needed. In this paper, we use a Fourier-Chebyshev double series for 3D shape representation, since a mixture of Fourier and Chebyshev series is a suitable basis for expanding a radial function in cylindrical coordinates. We investigate the method in a simulated and a real-world maritime scenario with a CAD model of the target boat as a reference. We have found that shape representation in cylindrical coordinates has decisive advantages over a representation in spherical coordinates and should be preferred if no prior knowledge of the measurement distribution on the surface of the target is available.
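A radial function in cylindrical coordinates can be expanded as a Fourier-Chebyshev double series, r(θ, z̄) = Σₙ Σₘ Tₙ(z̄)·(aₙₘ cos mθ + bₙₘ sin mθ) with the height normalised to z̄ ∈ [-1, 1]. The following minimal sketch evaluates such an expansion; the coefficient layout and truncation orders are illustrative assumptions and may differ from the paper's parameterisation.

```python
import numpy as np
from numpy.polynomial import chebyshev as C

def radial_function(theta, z, a, b):
    """Evaluate r(theta, z) = sum_{n,m} T_n(z) (a[n,m] cos(m theta)
    + b[n,m] sin(m theta)); z is the height normalised to [-1, 1],
    a and b are (N+1, M+1) coefficient arrays."""
    N, M = a.shape[0] - 1, a.shape[1] - 1
    r = 0.0
    for n in range(N + 1):
        Tn = C.Chebyshev.basis(n)(z)        # n-th Chebyshev polynomial
        for m in range(M + 1):
            r += Tn * (a[n, m] * np.cos(m * theta)
                       + b[n, m] * np.sin(m * theta))
    return r

# A cylinder of radius 2: only the constant coefficient a[0,0] is non-zero
a = np.zeros((3, 3)); b = np.zeros((3, 3)); a[0, 0] = 2.0
```

Because the basis only covers the lateral surface, no coefficients are "spent" on the top and bottom caps, which is exactly the property the abstract exploits.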
Summary of the 8th Workshop on Metallization and Interconnection for Crystalline Silicon Solar Cells
(2019)
This article gives a summary of the 8th Metallization and Interconnection workshop and attempts to place each contribution in the appropriate context. The field of metallization and interconnection continues to progress at a very fast pace. Several printing techniques can now achieve linewidths below 20 μm. Screen printing is more than ever the dominant metallization technology in the industry, with finger widths of 45 μm in routine mass production and values below 20 μm in the lab. Plating technology is also being improved, particularly through the development of lower-cost patterning techniques. Interconnection technology is changing fast, with the introduction of multiwire and shingled-cell technologies in mass production. New models and characterization techniques are being introduced to study and understand these new interconnection technologies in detail.
Summary of the 9th workshop on metallization and interconnection for crystalline silicon solar cells
(2021)
The 9th edition of the Workshop on Metallization and Interconnection for Crystalline Silicon Solar Cells was held as an online event but nevertheless reached the workshop goals of knowledge sharing and networking. The technology of screen-printed contacts using high-temperature pastes continues its fast progress, enabled by a better understanding of the phenomena taking place during printing and firing and by progress in materials. Great improvements were also achieved in low-temperature paste printing and plated metallization. In the field of interconnection, progress was reported on multiwire approaches, electrically conductive adhesives and foil-based approaches. Common to many contributions at the workshop was the use of advanced laser processes to improve performance or throughput.
Modelle der Bau- und Immobilienwirtschaft, des Architekturbüros und der Planungsdienstleistung
(2011)
This thesis is a structural analysis of what architecture is outside the sphere of discourse within building culture or building technology. It describes the ephemeral parts of the architect's services, which do not manifest themselves in the built works, in terms of social processes, economic dependencies and the combination of service factors into service bundles. To this end, the construction industry, the architectural practice and the nature of the service are described and analysed, represented structurally by means of three models, and applied in examples. The thesis concludes with proposals for the further development of teaching in this field.
Towards an integrated theory of economic governance – Conclusions from the governance of ethics
(2004)
Study design:
Retrospective, mono-centric cohort research study.
Objectives:
The purpose of this study is to validate a novel artificial intelligence (AI)-based algorithm against human-generated ground truth for radiographic parameters of adolescent idiopathic scoliosis (AIS).
Methods:
An AI algorithm was developed that is capable of detecting anatomical structures of interest (clavicles, cervical, thoracic and lumbar spine, and sacrum) and of calculating essential radiographic parameters in AP spine X-rays fully automatically. The evaluated parameters included T1-tilt, clavicle angle (CA), coronal balance (CB), lumbar modifier, and Cobb angles in the proximal thoracic (C-PT), thoracic, and thoracolumbar regions. Measurements from 2 experienced physicians on 100 preoperative AP full spine X-rays of AIS patients were used as ground truth and to evaluate inter-rater and intra-rater reliability. The agreement between human raters and AI was compared by means of single-measure intra-class correlation coefficients (ICC; absolute agreement; .75 rated as excellent), mean error and additional statistical metrics.
Results:
The comparison between human raters resulted in excellent ICC values for intra-rater (range: .97-1) and inter-rater (.85-.99) reliability. The algorithm was able to determine all parameters in 100% of images with excellent ICC values (.78-.98). Consistent with the human raters, ICC values were typically smallest for C-PT (e.g., rater 1A vs AI: .78, mean error: 4.7°) and largest for CB (.96, -.5 mm) as well as CA (.98, .2°).
Conclusions:
The AI algorithm shows excellent reliability and agreement with human raters for coronal parameters in preoperative full spine images. The reliability and speed offered by the AI algorithm could contribute to the efficient analysis of large datasets (e.g., registry studies) and to measurements in clinical practice.
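The single-measure, absolute-agreement ICC used above corresponds to ICC(2,1) in the Shrout-Fleiss scheme. A minimal two-way ANOVA sketch of that formula (an illustration of the metric, not the study's own code):

```python
import numpy as np

def icc_2_1(ratings):
    """ICC(2,1): single measure, absolute agreement, two-way random effects.

    ratings: (n_subjects, k_raters) array of measurements."""
    n, k = ratings.shape
    grand = ratings.mean()
    row_means = ratings.mean(axis=1)                    # per subject
    col_means = ratings.mean(axis=0)                    # per rater
    ssr = k * ((row_means - grand) ** 2).sum()          # between subjects
    ssc = n * ((col_means - grand) ** 2).sum()          # between raters
    sse = ((ratings - grand) ** 2).sum() - ssr - ssc    # residual
    msr = ssr / (n - 1)
    msc = ssc / (k - 1)
    mse = sse / ((n - 1) * (k - 1))
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)
```

A constant offset between two raters lowers this ICC (absolute agreement penalises systematic bias), which is why it is stricter than a consistency-type ICC for rater-vs-AI comparisons.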
Structural interventions of the Commission comprise expenditures for objective 1, objective 2 and objective 3. The three priority objectives of the Structural Funds are:
• promoting the development and structural adjustment of the regions whose development is lagging behind (objective 1);
• supporting the economic and social conversion of areas facing structural difficulties (objective 2);
• supporting the adaptation and modernisation of policies and systems of education, training and employment (objective 3).
The purpose of this study is to quantify the economic impacts of objective 1 interventions of the Structural Funds for the period 2000–2006. The expenditures of the Structural Funds for objective 2 and objective 3, the Cohesion Fund, the Instrument for Structural Policies for Pre-accession (ISPA) and loans granted by the European Investment Bank (EIB) are not included in the analysis. The study quantifies how much of the expected development can be attributed to objective 1 expenditures for:
• Community interventions (Structural Funds),
• public interventions (Structural Funds, national public interventions) and
• total interventions (Structural Funds, national public interventions, private participation).
The study uses the autumn 2001 forecast and medium-term projection of the Directorate-General for Economic and Financial Affairs of the European Commission in order to calculate a baseline for the impact assessment. Today, the forecast itself seems rather optimistic. However, this does not cause problems for the analysis in this report, because the objective is to estimate the impact of the Structural Funds. In other words, the objective is to estimate, for example, the additional growth caused by the Structural Funds, not to forecast growth as such. Therefore, whether the forecast as such will materialise is of no consequence for the impact analysis in this study.
For a long time, the use of intermediate products in production has been growing more rapidly than domestic production in most countries. This is a strong indication of increasing interdependency in production. The main purpose of input-output analysis is to study the interdependency of industries in an economy; the term interindustry analysis is often used as well. The exchange of intermediate products is therefore a key issue of input-output analysis. This study uses input–output data that the author prepared for the new ‘Handbook on Supply, Use and Input–Output Tables with Extensions and Applications’ of the United Nations. The supply, use and input–output tables contain separate valuation matrices for trade margins, transport margins, value added tax, other taxes on products and subsidies on products. For the study, two input–output models were developed to evaluate the impact of fuel subsidy and taxation reform on output, gross domestic product, inflation and trade. Six scenarios are discussed, covering different aspects of the reform.
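At the core of any such input-output model is the Leontief quantity relation x = (I − A)⁻¹ f, linking final demand to the gross output each industry must produce. The numbers below are purely illustrative, not the study's actual tables, and the full models additionally carry the valuation matrices mentioned above:

```python
import numpy as np

# Technical coefficient matrix A: intermediate input required per unit
# of output (two illustrative industries)
A = np.array([[0.2, 0.3],
              [0.1, 0.4]])
f = np.array([100.0, 50.0])          # final demand by industry

# Leontief inverse: total (direct + indirect) output requirements
L = np.linalg.inv(np.eye(2) - A)
x = L @ f                            # gross output by industry

# A fuel-subsidy or tax reform scenario would then shock f (or A)
# and compare the resulting output, GDP and price effects.
```

Because the inverse captures all indirect input requirements, even a shock to one industry's final demand propagates through every supplying industry, which is what distinguishes this approach from purely macroeconomic analysis.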
For decades now, exports and imports have grown more rapidly than domestic production. This is a strong indication that, besides the rapid growth of foreign trade in final goods, trade in intermediates is becoming increasingly important. For this reason, an input-output approach is more appropriate for any analysis of diversification than a traditional approach based purely on macroeconomic data.
This article analyses economic diversification in Gulf Cooperation Council (GCC) countries using data from input-output tables, which are an integral part of the national accounts. We compare the performance of the GCC economies with that of a reference case, Norway, which is considered to have successfully diversified its economy despite having a large oil resource base. We also assess these countries' relative progress on sustainable development using a measure of the World Bank, adjusted net savings, which evaluates the true rate of savings in an economy after accounting for investments in physical and human capital, depletion of natural resources, and damage from environmental pollution.
The article concludes that GCC countries have, contrary to expectation, collectively performed relatively well on diversification, but that their performance on sustainable development varies.
Industrial growth and a rapidly growing world population have large impacts on the global environment and allocation of material resources. Most changes in the environment are brought about by human activities and these activities result in a flow of materials. The flows of resources from the natural environment to the economy are a prerequisite of production while flows of residuals from the economy to the environment are the consequence of production and consumption. A full understanding of these processes requires a complete description of the physical dimension of the economy and its interaction with the environment.
The main objective of this paper is to revisit Temursho's (2020) article "On the Euro method" in a critical and constructive way. We praise part of his work and, at the same time, analyse some of his arguments against the Euro method and against the work published by Valderas-Jaramillo et al. (2019). Moreover, we analyse some other relevant aspects of the SUT-Euro and SUT-RAS methods not covered in Temursho (2020). Temursho (2020) seems to conclude that no one should use the Euro method again because of its limitations and drawbacks. However, although the Euro method is not perfect, we contend that there is still room for its use in updating and regionalizing supply and use tables.
The State of Custom
(2021)
In our article, we engage with the anthropologist Gerd Spittler's pathbreaking article "Dispute settlement in the shadow of Leviathan" (1980), in which he strives to integrate the existence of state courts (the eponymous Leviathan's shadow) in (post-)colonial Africa into the analysis of non-state legal practices. According to Spittler, it is because of undesirable characteristics inherent in state courts that disputing parties tended to involve mediators rather than pursue a state court judgment: the less people liked state courts, the more likely they were to (re-)turn to non-state dispute settlement procedures. How, then, has this situation changed in the four decades since the article's publication? We relate his findings to contemporary debates in legal anthropology that investigate the relationship between disputing, law and the state. We also show, through our own work in Africa and Asia, particularly in Southern Ethiopia and Kyrgyzstan, in what ways Spittler's by now classical 1980 contribution to the field of legal anthropology can be made fruitful for a contemporary anthropology of the state, at a time when not only (legal) anthropology has changed, but especially the way states deal with putatively "customary" forms of dispute settlement.
We present an evaluation of the effectiveness of different machine learning algorithms on a publicly available database of signals derived from wearable devices, with the goal of optimizing human activity recognition and classification. Among the many available body signals, we chose two, namely the photoplethysmographic (optically detected subcutaneous blood volume) and tri-axis acceleration signals, which are easy to acquire simultaneously using widespread commercial devices (e.g. smartwatches) as well as custom wearable wireless devices designed for sport, healthcare, or clinical purposes. To this end, two widely used algorithms (decision tree and k-nearest neighbor) were tested, and their performance was compared to two recent algorithms (particle Bernstein and a Monte Carlo-based regression), both in terms of accuracy and processing time. A data preprocessing phase was also included to reduce the problem size and improve the performance of the machine learning procedures, and a detailed analysis of the compression strategy and its results is presented.
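Of the algorithms compared above, k-nearest neighbor is the simplest to sketch. Below is a minimal NumPy version on synthetic data; the two-feature layout and class labels are illustrative assumptions, not the study's dataset or implementation:

```python
import numpy as np

def knn_predict(X_train, y_train, x, k=3):
    """Classify sample x by majority vote among its k nearest neighbors."""
    d = np.linalg.norm(X_train - x, axis=1)      # Euclidean distances
    nearest = y_train[np.argsort(d)[:k]]         # labels of the k closest
    labels, counts = np.unique(nearest, return_counts=True)
    return labels[np.argmax(counts)]

# Toy activity classes from two features, e.g. (mean acceleration
# magnitude, PPG variability) -- hypothetical values
X = np.array([[0.10, 0.20], [0.20, 0.10], [0.15, 0.15],   # class 0: resting
              [0.90, 0.80], [0.80, 0.90], [0.85, 0.85]])  # class 1: walking
y = np.array([0, 0, 0, 1, 1, 1])
```

Since kNN defers all work to prediction time, its processing time grows with the training-set size, which is one reason the preprocessing and compression step discussed above matters for this comparison.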
Due to shortening product life cycles and the associated growing number of highly complex new product launches, ramp-up management will gain increasing importance in the future. This statement is confirmed by the growing number of papers and various monographs on this topic. The time- and cost-sensitive ramp-up phase has a significant influence on productivity, product returns and thus on a company's market success. Mastering smooth series ramp-ups is becoming day-to-day business and is advancing to a decisive competitive advantage. At the same time, it is a prerequisite for holding one's own in highly dynamic, global markets in the future. In contrast to the mature concepts, methods and theories for production planning and control in series production, the targeted management of series ramp-ups still harbours enormous potential for improvement. Various fields of action are interlinked, are influenced by standards and customer requirements, and must be considered in an integrated manner. This book provides an overview of the fields of action of ramp-up management and of the influence of standards and customer requirements. In addition, the central importance of ramp-up management is confirmed by a survey of OEMs and 1st- and 2nd-tier suppliers. Furthermore, methods and tools are presented that were developed during the introduction of ramp-up management at a company in the automotive supply industry.
Path planning and collision avoidance for safe autonomous vessel navigation in dynamic environments
(2017)
The improvement of collision avoidance for vessels in close range encounter situations is an important topic for maritime traffic safety. Typical approaches generate evasive trajectories or optimise the trajectories of all involved vessels. Such a collision avoidance system has to produce evasive manoeuvres that do not confuse other navigators. To achieve this behaviour, a probabilistic obstacle handling based on information from a radar sensor with target tracking, which considers measurement and tracking uncertainties, is proposed. A grid-based path search algorithm that takes the information from the probabilistic obstacle handling into account is then used to generate evasive trajectories. The proposed algorithms have been tested and verified in a simulated environment for inland waters.
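A grid-based path search over an occupancy grid can be sketched with breadth-first search. The risk weighting from the probabilistic obstacle handling is deliberately omitted here; a real system along the lines described above would use a cost-aware search (e.g. A* with risk-dependent cell costs) rather than this uniform-cost sketch:

```python
from collections import deque

def grid_bfs(grid, start, goal):
    """Shortest 4-connected path on a grid; grid[r][c] == 1 is an obstacle."""
    rows, cols = len(grid), len(grid[0])
    prev = {start: None}                      # also serves as "visited"
    q = deque([start])
    while q:
        r, c = q.popleft()
        if (r, c) == goal:                    # reconstruct path backwards
            path, node = [], goal
            while node is not None:
                path.append(node)
                node = prev[node]
            return path[::-1]
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] == 0 and (nr, nc) not in prev):
                prev[(nr, nc)] = (r, c)
                q.append((nr, nc))
    return None  # no collision-free path exists

# 0 = free water, 1 = blocked cell (e.g. another vessel's predicted position)
chart = [[0, 0, 0],
         [1, 1, 0],
         [0, 0, 0]]
route = grid_bfs(chart, (0, 0), (2, 0))
```

Replacing the binary obstacle test with a cell-wise collision probability threshold (or cost) is the step that connects this sketch to the probabilistic obstacle handling described in the abstract.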
Motion safety for vessels
(2015)
The improvement of collision avoidance for vessels in close range encounter situations is an important topic for maritime traffic safety. Typical approaches generate evasive trajectories or optimise the trajectories of all involved vessels. The idea of this work is to validate these trajectories with respect to guaranteed motion safety, meaning that it is not sufficient for a trajectory to be collision-free; it must additionally be ensured that an evasive manoeuvre is performable at any time. An approach using the distance and the evolution of the distance to the other vessels is proposed. The concept of Inevitable Collision States (ICS) is adopted to identify the states for which no evasive manoeuvre exists. Furthermore, the approach is implemented in a collision avoidance system for recreational craft to demonstrate its performance.
Scrum und die IEC 62304
(2013)
Linear and nonlinear response functions (RFs) are extracted for the climate system and the carbon cycle as represented by the MPI-ESM and cGENIE models, respectively. Appropriately designed simulations are run for this purpose. Joining these RFs, we obtain a climate emulator with carbon emissions as the forcing and any desired observable quantity (provided the data is saved), such as the surface air temperature or precipitation, as the predictand. Besides forcings such as the atmospheric CO2 concentration, we also have RFs for the solar constant as a forcing, mimicking solar radiation management (SRM) geoengineering. We consider two application cases. (1) The first, based on the Paris 2015 agreement, determines the least amount of SRM geoengineering needed to keep the global mean surface air temperature below a certain threshold, e.g. 1.5 or 2 °C, given a certain amount of carbon emission abatement (ABA) and carbon dioxide removal (CDR) geoengineering. (2) The second considers the conservation of the Greenland ice sheet (GrIS). Using a zero-dimensional simplification of a complex ice sheet model, we determine (a) whether SRM is needed given some ABA and CDR and, if so, (b) the least amount of SRM required to avoid the collapse of the GrIS. Keeping temperatures even below 2 °C is hardly possible without sustained SRM (1); however, the collapse of the GrIS can be avoided by applying SRM even for moderate levels of CDR and ABA, an overshoot being affordable (2).
The transformation to Industry 4.0, generally seen as a solution to increasing market challenges, is forcing companies to radically change their way of thinking and to be open to new forms of cooperation. In this context, the opening-up of the innovation process is widely seen as a necessity to meet these challenges, especially for small and medium enterprises (SMEs). The aim of the study therefore is to analyze how cooperation can be characterized today, how this character has changed since the establishment of the term Industry 4.0 at the Hanover Fair in 2011, and which cooperation strategies have proven successful. The analysis consists of a quantitative secondary data analysis that includes country-specific data from 35 European countries for 2010 and 2016, collected by the European Commission and the OECD. The research, focusing on the secondary sector, shows that multinational enterprises (MNEs) still tend to cooperate more than SMEs, with a slight overall trend towards protectionism. Nevertheless, there is a clear tendency towards the opening-up of SMEs. In this regard, especially universities, competitors and suppliers have become increasingly attractive as cooperation partners for SMEs.
As one of the most important branches of industry in Germany and the European Union, the mechanical and plant engineering sector is confronted with fundamental changes due to ever shorter innovation cycles and increased competitive pressure. This makes it all the more important to increase the service component of business models with a low service level, which are still frequently found in SMEs. This paper is dedicated to the changes that the individual components of a business model have experienced and will experience. Special attention is paid to economic sustainability, since service business models can also positively influence the long-term viability of a business. Seven interviews conducted with relevant companies serve as the empirical basis of this paper. The analysed effects of smart services and active customer integration are structured and summarized within the three pillars of every business model (the value proposition, the value creation architecture and the revenue mechanics).
Gamification is a recognized method of motivating people in various life processes, and it has spread to many spheres of life, including healthcare. This article proposes a system design for long-term care patients using this method. The proposed system aims to increase patient engagement in the treatment and rehabilitation process via gamification. Literature research on available and earlier proposed systems was conducted to develop a suitable system design. The primary target group includes bedridden patients and patients with a sedentary lifestyle (predominantly lying in bed). One of the main criteria for selecting a suitable option was its contactless realization for the mentioned target groups in long-term care. As a result, we developed a hardware and software system design that could prevent bedsores and other health problems caused by low activity. The proposed design can be tested in hospitals, nursing homes, and rehabilitation centers.
Evaluation of a Contactless Accelerometer Sensor System for Heart Rate Monitoring During Sleep
(2024)
The monitoring of a patient's heart rate (HR) is critical in the diagnosis of diseases, and it also plays an important role in the detection of sleep disorders. Several techniques have been proposed, including using sensors to record physiological signals that are automatically examined and analysed. This work aims to evaluate a contactless HR monitoring system based on an accelerometer sensor during sleep. For this purpose, the oscillations caused by chest movements during heart contractions are recorded by an installation mounted under the bed mattress. The processing algorithm presented in this paper filters the signals and determines the HR. As a result, an average error of about 5 bpm has been documented, i.e., the system can be considered suitable for the intended domain.
Sleep is extremely important for physical and mental health. Although polysomnography is an established approach in sleep analysis, it is quite intrusive and expensive. Consequently, developing a non-invasive and non-intrusive home sleep monitoring system with minimal influence on patients, that can reliably and accurately measure cardiorespiratory parameters, is of great interest. The aim of this study is to validate a non-invasive and unobtrusive cardiorespiratory parameter monitoring system based on an accelerometer sensor. This system includes a special holder to install the system under the bed mattress. The additional aim is to determine the optimum relative system position (in relation to the subject) at which the most accurate and precise values of measured parameters could be achieved. The data were collected from 23 subjects (13 males and 10 females). The obtained ballistocardiogram signal was sequentially processed using a sixth-order Butterworth bandpass filter and a moving average filter. As a result, an average error (compared to reference values) of 2.24 beats per minute for heart rate and 1.52 breaths per minute for respiratory rate was achieved, regardless of the subject’s sleep position. For males and females, the errors were 2.28 bpm and 2.19 bpm for heart rate and 1.41 rpm and 1.30 rpm for respiratory rate. We determined that placing the sensor and system at chest level is the preferred configuration for cardiorespiratory measurement. Further studies of the system’s performance in larger groups of subjects are required, despite the promising results of the current tests in healthy subjects.
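The two-stage filtering described above (a sixth-order Butterworth bandpass followed by a moving average) can be sketched as follows. The sampling rate, cutoff frequencies, window length and the synthetic test signal are illustrative assumptions, not the study's actual parameters:

```python
import numpy as np
from scipy.signal import butter, filtfilt, find_peaks

fs = 100.0                                   # sampling rate [Hz], assumed
# An order-3 bandpass design yields a sixth-order Butterworth filter overall;
# 0.5-3 Hz roughly brackets plausible heart rates (30-180 bpm).
b, a = butter(3, [0.5, 3.0], btype="bandpass", fs=fs)

def heart_rate_bpm(bcg, fs=fs, win=0.1):
    """Estimate heart rate from a ballistocardiogram-like signal."""
    x = filtfilt(b, a, bcg)                           # zero-phase bandpass
    k = int(win * fs)
    x = np.convolve(x, np.ones(k) / k, mode="same")   # moving average
    peaks, _ = find_peaks(x, distance=int(0.4 * fs))  # >= 0.4 s between beats
    return 60.0 * len(peaks) / (len(bcg) / fs)

# Synthetic 60 s "heartbeat" oscillation at 1.2 Hz (72 bpm)
t = np.arange(0, 60, 1 / fs)
hr = heart_rate_bpm(np.sin(2 * np.pi * 1.2 * t))
```

Zero-phase filtering (filtfilt) keeps the peak positions aligned with the underlying beats, which matters when beat-to-beat timing, not just the average rate, is of interest.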
Sleep is an essential part of human existence, as we are in this state for approximately a third of our lives. Sleep disorders are common conditions that can affect many aspects of life. Sleep disorders are diagnosed in special laboratories with a polysomnography system, a costly procedure requiring much effort from the patient. Several systems have been proposed to address this situation, including performing the examination and analysis at the patient's home, using sensors to detect physiological signals that are automatically analysed by algorithms. This work aims to evaluate the use of a contactless respiratory recording system based on an accelerometer sensor in sleep apnea detection. For this purpose, an installation mounted under the bed mattress records the oscillations caused by chest movements during breathing. The presented processing algorithm filters the obtained signals and determines the presence of apnea events. The performance of the developed system and of the apnea event detection algorithm (average values of accuracy, specificity and sensitivity of 94.6%, 95.3%, and 93.7%, respectively) confirms the suitability of the proposed method and system for further ambulatory and in-home use.
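The reported accuracy, specificity and sensitivity follow directly from the confusion matrix of detected versus annotated apnea events. A minimal sketch with illustrative counts (the numbers below are hypothetical, not the study's evaluation data):

```python
def detection_metrics(tp, tn, fp, fn):
    """Accuracy, specificity and sensitivity from event-level counts."""
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    specificity = tn / (tn + fp)   # correctly rejected non-apnea segments
    sensitivity = tp / (tp + fn)   # correctly detected apnea events
    return accuracy, specificity, sensitivity

# Illustrative counts for 200 annotated segments
acc, spec, sens = detection_metrics(tp=93, tn=96, fp=4, fn=7)
```

Reporting specificity and sensitivity separately, rather than accuracy alone, is important here because apnea events are typically much rarer than normal breathing segments.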
Sleep is essential to physical and mental health. However, the traditional approach to sleep analysis, polysomnography (PSG), is intrusive and expensive. Therefore, there is great interest in the development of non-contact, non-invasive, and non-intrusive sleep monitoring systems and technologies that can reliably and accurately measure cardiorespiratory parameters with minimal impact on the patient. This has led to the development of other relevant approaches, which are characterised, for example, by the fact that they allow greater freedom of movement and do not require direct contact with the body, i.e., they are non-contact. This systematic review discusses the relevant methods and technologies for non-contact monitoring of cardiorespiratory activity during sleep. Taking into account the current state of the art in non-intrusive technologies, we identify the methods of non-intrusive monitoring of cardiac and respiratory activity, the technologies and types of sensors used, and the physiological parameters available for analysis. To do this, we conducted a literature review and summarised current research on the use of non-contact technologies for non-intrusive monitoring of cardiac and respiratory activity. The inclusion and exclusion criteria for the selection of publications were established prior to the start of the search. Publications were assessed using one main question and several specific questions. We obtained 3774 unique articles from four literature databases (Web of Science, IEEE Xplore, PubMed, and Scopus) and checked them for relevance, resulting in 54 relevant articles that were analysed in a structured way. The result was 15 different types of sensors and devices (e.g., radar, temperature sensors, motion sensors, cameras) that can be installed in hospital wards and departments or in the home environment.
The ability to detect heart rate, respiratory rate, and sleep disorders such as apnoea was among the characteristics examined to investigate the overall effectiveness of the systems and technologies considered for cardiorespiratory monitoring. In addition, the advantages and disadvantages of the considered systems and technologies were identified by answering the defined research questions. The results obtained allow us to determine the current trends and the direction of development of medical technologies in sleep medicine for future researchers and research.
The respiratory rate is a vital sign that can indicate breathing illness. To determine it, the mechanical oscillations of the patient's body arising from chest movements have to be analyzed. An inappropriate holder on which the sensor is mounted, or an inappropriate sensor position, are among the external factors that should be minimized during signal registration. This paper considers using a non-invasive device placed under the bed mattress and evaluates the respiratory rate. The aim of the work is the development of an accelerometer sensor holder for this system. Signals for normal and deep breathing were analyzed, corresponding to the relaxed state and to taking deep breaths. The evaluation criterion for the holder's model is its influence on the amplitude of the patient's respiratory signal for each state. As a result, we offer a non-invasive system for respiratory rate detection, including the mechanical component, providing the most accurate values of the mentioned respiratory rate.
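The respiratory rate itself can be read off such a chest-movement signal, for example, as its dominant spectral component. This is only a generic sketch, not the system's actual processing; the function name and the band limits are assumptions:

```python
import numpy as np

def respiratory_rate_bpm(signal, fs, f_lo=0.1, f_hi=0.7):
    """Estimate respiratory rate (breaths/min) as the dominant spectral
    component of a chest-movement signal within a plausible breathing band.

    f_lo..f_hi: search band in Hz (0.1-0.7 Hz, i.e. 6-42 breaths/min)
    """
    x = np.asarray(signal, dtype=float)
    x = x - x.mean()                       # remove DC (gravity offset)
    spectrum = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(x.size, d=1.0 / fs)
    band = (freqs >= f_lo) & (freqs <= f_hi)
    peak_freq = freqs[band][np.argmax(spectrum[band])]
    return 60.0 * peak_freq
```

For short recordings, peak counting in the time domain is a common alternative to the spectral estimate shown here.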
Determination of accelerometer sensor position for respiration rate detection: Initial research
(2022)
Continuous monitoring of a patient's vital signs is essential in many chronic illnesses. The respiratory rate (RR) is one of the vital signs indicating breathing diseases. This article presents an initial investigation for determining the accelerometer sensor position of a non-invasive and unobtrusive respiratory rate monitoring system. This research aims to determine the sensor position relative to the patient that can provide the most accurate values of the mentioned physiological parameter. To achieve this, a particular system setup, including a mechanical sensor holder construction, was used. The breathing signals from 5 participants were analyzed, corresponding to the relaxed state. The main criterion for selecting a suitable sensor position was each patient's average acceleration amplitude excursion, which corresponds to the respiratory signal. As a result, we determined one more important parameter of the considered system that had not been defined before.
Botswana serves as a role model for other African countries due to its rapid development in recent decades. Since the country is sparsely populated and a large part of the rural population depends on agriculture, especially livestock, this sector forms the backbone of the national economy. The digitization of this sector offers promising opportunities for economic growth and driving Botswana's evolution to a digital economy, while real value is being created for smallholder farmers. To support this process, an ITU research project made the key recommendation for the development of a digital crowdfarming tool and marketplace to create a digital ecosystem for smallholder agriculture. Within the research project, infrastructural challenges such as the creation of rural electricity supply and internet access, as well as the smallholders' need for remote monitoring, management, and better connectivity, were identified.
Based on the findings of the ITU research report, this bachelor's thesis aims to identify potential innovations for the digital development of smallholder agriculture in Botswana and to conceptualize proposals to address the identified challenges and needs of smallholder farmers. To achieve this, solutions were developed through literature research, technology analysis and expert involvement. These included the design of a decentralized mini-grid for power supply, proposals to create internet access, and the graphic visualization of a conceptual app. The latter addresses smallholder farmers' needs for remote monitoring, market access, knowledge enhancement, and connection to colleagues, buyers, and investors.
The proposed solutions and developed concepts provide impulses for further research and can serve as a basis for an extended evaluation through further involvement of experts and stakeholders.
Global agriculture will face major challenges in the future. In addition to the increasing demand for food due to constant population growth, the consequences of climate change will make it even more difficult to operate agriculture and supply people with food. In addition to further productivity increases in traditional agriculture, new concepts for sustainable and scalable food production are needed. Vertical farming offers a promising approach.
The aim of this project is to demonstrate how vertical farming can be used to ensure sustainable food production and how this concept can be applied in the pioneering Maun Science Park project in Botswana. In doing so, the Maun Science Park will address future challenges such as demographics, governance and climate change and become a best practice model for Botswana, the whole of Africa and the world. The country of Botswana grew to become one of the most prosperous countries in Africa in recent decades due to strong economic growth from mining. However, the population faces great challenges in the future; in addition to great social inequality, climate change threatens the country's overall supply.
With the help of a literature review and qualitative and quantitative interviews with stakeholders from Maun (Botswana), the potentials and challenges for vertical farming in Botswana were identified and future measures for a possible realization were derived. Several of Botswana's challenges are addressed by the technology: for example, vertical farming offers high food security through year-round production of food in a closed ecosystem and creates independence from current and future climatic conditions, from the poor conditions for traditional agriculture (e.g., infertile soils), and from foreign imports. However, the main structural problems of agriculture in Botswana, such as the lack of infrastructure, know-how, and policy support, are not addressed.
Deep neural networks (DNNs) are known for their high prediction performance, especially in perceptual tasks such as object recognition or autonomous driving. Still, DNNs are prone to yield unreliable predictions when encountering completely new situations without indicating their uncertainty. Bayesian variants of DNNs (BDNNs), such as MC dropout BDNNs, do provide uncertainty measures. However, BDNNs are slow during test time because they rely on a sampling approach. Here we present a single shot MC dropout approximation that preserves the advantages of BDNNs without being slower than a DNN. Our approach is to analytically approximate for each layer in a fully connected network the expected value and the variance of the MC dropout signal. We evaluate our approach on different benchmark datasets and a simulated toy example. We demonstrate that our single shot MC dropout approximation resembles the point estimate and the uncertainty estimate of the predictive distribution that is achieved with an MC approach, while being fast enough for real-time deployments of BDNNs.
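The core idea, propagating the expected value and the variance of the MC dropout signal analytically through a layer instead of sampling, can be sketched for a single dropout-plus-linear stage as follows (assuming independent activations and inverted dropout; the paper's handling of nonlinear activations is not reproduced here, and the function is illustrative):

```python
import numpy as np

def dropout_linear_moments(mu, var, W, b, p):
    """Propagate mean/variance analytically through (inverted) MC dropout
    followed by a linear layer, assuming independent activations.

    mu, var: mean and variance vectors of the layer input
    W, b:    layer weight matrix and bias
    p:       dropout probability
    """
    q = 1.0 - p
    # moments of x * Bernoulli(q) / q: mean unchanged, variance inflated
    mu_d = mu
    var_d = (var + mu**2) / q - mu**2
    # linear layer: means map through W, variances through W**2
    return W @ mu_d + b, (W**2) @ var_d
```

Stacking such moment maps over all layers yields the single-shot point estimate and uncertainty in one forward pass, which is what makes the approach as fast as a plain DNN.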
Salem baut Neu-Birnau
(2012)
The aim of the master's thesis was to characterise the moisture properties of screeds under different climates using sorption isotherms. The few literature references on sorption isotherms of mineral screeds essentially relate to calcium sulphate screeds and standardised cement screeds (without distinguishing the cement type: Portland cements, blast furnace cements, i.e. CEM I, CEM II, CEM III, etc.) and, as a rule, only to one air temperature (= 20 °C). The intention of the thesis was additionally to investigate the ternary rapid cements that have been on the market for about 20 years and to include the temperatures of 15 °C and 25 °C, which are of practical interest on construction sites. The effects of the climatic conditions on the construction site (season, humidity, temperature) on the hydration process of the screeds were also examined. In each case, not just one representative of the various binder systems but at least two different screeds from different manufacturers were included. In combination with the results of the microstructural investigations (including mercury intrusion porosimetry), it is demonstrated why the cement-bound and rapid-cement-bound screeds behave completely differently from the calcium-sulphate-bound screeds. This different behaviour is also one of the reasons why the moisture content of screeds cannot be assessed with the KRL method. For this reason, a comparison follows between the "KRL method" of material moisture measurement and the "CM method", which is customary in the trade and has proven itself in practice for decades.
The few literature references on sorption isotherms of mineral screeds essentially relate to calcium sulphate screeds and standardised cement screeds and, as a rule, only to one fixed air temperature (= 20 °C). The aim of the investigation described in this article was therefore to characterise the moisture properties of screeds under different climates using sorption isotherms. In addition, the ternary rapid cements that have been on the market for about 20 years were to be investigated, and the temperatures of 15 °C and 25 °C, which are of practical interest on construction sites, were to be included. The effects of the climatic conditions on the construction site (season, humidity, temperature) on the hydration process of the screeds were also examined. In combination with the results of the microstructural investigations (including mercury intrusion porosimetry), it is demonstrated why the cement-bound and rapid-cement-bound screeds behave completely differently from the calcium-sulphate-bound screeds. This different behaviour is also one of the reasons why the moisture content of screeds cannot be assessed with the KRL method. Therefore, a comparison follows between the "KRL method" of material moisture measurement and the "CM method", which is customary in the trade and has proven itself in practice for decades.
Ein Beitrag zum Beobachterentwurf und zur sensorlosen Folgeregelung translatorischer Magnetaktoren
(2020)
Sliding-mode observation with iterative parameter adaption for fast-switching solenoid valves
(2016)
Control of the armature motion of fast-switching solenoid valves is highly desired to reduce noise emission and wear of material. For feedback control, information about the current position and velocity of the armature is necessary. In mass production applications, however, position sensors are unavailable due to cost and fabrication reasons. Thus, position estimation by measuring merely electrical quantities is a key enabler for advanced control, and, hence, for efficient and robust operation of digital valves in advanced hydraulic applications. The work presented here addresses the problem of state estimation, i.e., of the position and velocity of the armature, by sole use of electrical measurements. The considered devices typically exhibit nonlinear and very fast dynamics, which makes observer design a challenging task. In view of the presence of parameter uncertainty and possible modeling inaccuracy, the robustness properties of sliding mode observation techniques are deployed here. The focus is on error convergence in the presence of several sources of modeling uncertainty and inaccuracy. Furthermore, the cyclic operation of switching solenoids is exploited to iteratively correct a critical parameter by taking into account the norm of the observation error of past switching cycles of the process. A thorough discussion of real-world experimental results highlights the usefulness of the proposed state observation approach.
Observer-based self sensing for digital (on–off) single-coil solenoid valves is investigated. Self sensing refers to the case where merely the driving signals used to energize the actuator (voltage and coil current) are available to obtain estimates of both the position and velocity. A novel observer approach for estimating the position and velocity from the driving signals is presented, where the dynamics of the mechanical subsystem can be neglected in the model. Both the effect of eddy currents and saturation effects are taken into account in the observer model. Practical experimental results are shown and the new method is compared with a full-order sliding mode observer.
Sensorlose Positionsregelung eines hydraulischen Proportional-Wegeventils mittels Signalinjektion
(2017)
A method for sensorless position determination in electromagnetically actuated devices is presented. Based on signal injection, the position-dependent parameters at the injected frequency are determined, and the position of the armature is derived from them via a suitable model. The suitability of the method for sensorless position control is demonstrated in practical experiments on a bidirectional proportional valve.
The method of signal injection is investigated for position estimation of proportional solenoid valves. A simple observer is proposed to estimate a position-dependent parameter, i.e. the eddy current resistance, from which the position is calculated analytically. Therefore, the relationship of position and impedance in the case of sinusoidal excitation is accurately described by consideration of classical electrodynamics. The observer approach is compared with a standard identification method, and evaluated by practical experiments on an off-the-shelf proportional solenoid valve.
A constructive method for the design of nonlinear observers is discussed. To formulate conditions for the construction of the observer gains, stability results for nonlinear singularly perturbed systems are utilised. The nonlinear observer is designed directly in the given coordinates, where the error dynamics between the plant and the observer become singularly perturbed by a high-gain part of the observer injection, and the information of the slow manifold is exploited to construct the observer gains of the reduced-order dynamics. This is in contrast to typical high-gain observer approaches, where the observer gains are chosen such that the nonlinearities are dominated by a linear system. It will be demonstrated that the considered approach is particularly suited for self-sensing electromechanical systems. Two variants of the proposed observer design are illustrated for a nonlinear electromagnetic actuator, where the mechanical quantities, i.e. the position and the velocity, are not measured.
A constructive nonlinear observer design for self-sensing of digital (ON/OFF) single-coil electromagnetic actuators is studied. Self-sensing in this context means that solely the available energizing signals, i.e., coil current and driving voltage, are used to estimate the position and velocity trajectories of the moving plunger. A nonlinear sliding mode observer is considered, where the stability of the reduced error dynamics is analyzed by the equivalent control method. No simplifications are made regarding magnetic saturation and eddy currents in the underlying dynamical model. The observer gains are constructed by taking into account some generic properties of the system's nonlinearities. Two possible choices of the observer gains are discussed. Furthermore, an observer-based tracking control scheme to achieve sensorless soft landing is considered, and its closed-loop stability is studied. Experimental results for observer-based soft landing of a fast-switching solenoid valve under dry conditions are presented to demonstrate the usefulness of the approach.
Flatness-based feed-forward control of solenoid actuators is considered. For precise motion planning and accurate steering of conventional solenoids, eddy currents cannot be neglected. The system of ordinary differential equations including eddy currents, that describes the nonlinear dynamics of such actuators, is not differentially flat. Thus, a distributed parameter approach based on a diffusion equation is considered, that enables the parametrization of the eddy current by the armature position and its time derivatives. In order to design the feedforward control, the distributed parameter model of the eddy current subsystem is combined with a typical nonlinear lumped parameter model for the electrical and mechanical subsystems of the solenoid. The control design and its application are illustrated by numerical and practical results for an industrial solenoid actuator.
Knowing the position of the spool in a solenoid valve, without using costly position sensors, is of considerable interest in many industrial applications. In this paper, the problem of position estimation based on state observers for fast-switching solenoids, with sole use of simple voltage and current measurements, is investigated. Due to the short spool traveling time in fast-switching valves, convergence of the observer errors has to be achieved very fast. Moreover, the observer has to be robust against modeling uncertainties and parameter variations. Therefore, different state observer approaches are investigated and compared to each other with regard to possible uncertainties. The investigation covers a high-gain observer approach, a combined high-gain sliding-mode observer approach, both based on extended linearization, and a nonlinear sliding-mode observer based on equivalent output injection. The results are discussed by means of numerical simulations for all approaches, and finally physical experiments on a valve mock-up are thoroughly discussed for the nonlinear sliding-mode observer.
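To give a feel for how a sliding-mode observer reconstructs an unmeasured state, here is a deliberately simplified super-twisting (second-order sliding-mode) estimator for a toy double integrator, not the solenoid model or any observer from these papers; the gains follow the standard Levant rules and all values are assumptions:

```python
import numpy as np

def super_twisting_observer(y, dt, L=10.0):
    """Super-twisting observer reconstructing velocity from a sampled
    position signal y, given a bound L on the unknown acceleration.
    Returns arrays of position and velocity estimates.

    Illustrative toy version (explicit Euler discretisation).
    """
    lam0, lam1 = 1.5 * np.sqrt(L), 1.1 * L   # standard super-twisting gains
    x1h, x2h = y[0] - 0.5, 0.0               # deliberately wrong start
    pos, vel = np.zeros(y.size), np.zeros(y.size)
    for k in range(y.size):
        e = y[k] - x1h
        # continuous injection on x1, discontinuous injection on x2
        x1h += dt * (x2h + lam0 * np.sqrt(abs(e)) * np.sign(e))
        x2h += dt * (lam1 * np.sign(e))
        pos[k], vel[k] = x1h, x2h
    return pos, vel
```

The discontinuous sign injection is what gives sliding-mode observers their robustness to bounded model uncertainty, at the price of chattering in the discrete implementation.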
A semilinear distributed parameter approach for solenoid valve control including saturation effects
(2015)
In this paper a semilinear parabolic PDE for the control of solenoid valves is presented. The distributed parameter model of the cylinder becomes nonlinear through the inclusion of saturation effects due to the material's B/H curve. A flatness-based solution of the semilinear PDE is shown, as well as a convergence proof of its series solution. Numerical simulation results demonstrate the adaptability of the approach, and differences between the linear and the nonlinear case are discussed. The major contribution of this paper is the inclusion of saturation effects into the linear diffusion equation governing the magnetic field, and the development of a flatness-based solution for the resulting semilinear PDE as an extension of the previous works [1] and [2].
Building construction projects such as school buildings are a classic task of structural engineering, for which a large number of different construction methods are available. Against the background of the climate crisis and the associated Paris climate protection targets, there is an acute need for action in all areas of life, but especially in the construction sector, which contributes significantly to total greenhouse gas emissions. Therefore, timber construction solutions are currently being used more and more frequently. Apart from the foundation, the floor slabs account for the largest share of the environmental impact, and this share increases with increasing floor area. There is therefore great potential for reducing greenhouse gas emissions through a suitable choice of floor system and optimisation of the load-bearing structure.
Discussions about climate change often address sustainable development. For the comparison, an optimisation was carried out to assess the CO2 saving potential. In order to cover all aspects of sustainability, the variants found were evaluated on the basis of common sustainability certification systems. Each variant satisfies the requirements for maximum stresses and deformations. The optimisation and the subsequent comparison illustrated the optimisation potential. At the same time, the optimisation partly gives rise to use-related conflicts with sound insulation requirements.
Scheinselbständigkeit
(2023)
In Botswana, a new construction project, the Maun Science Park, is to be built with a focus on sustainability and to create new living space for the rapidly growing population in Africa. The project will be a blueprint for future projects in Africa in terms of progress, technology and sustainability. This thesis deals with its financial framework and will serve as a basis for the development of ways and means of financing such projects.
This thesis provides a template for generalised precast concrete element tracking in the construction industry. The first of the four chapters presents the process of precast element tracking. This process is modified on the basis of expert interviews, and a standard process is presented as a template. This standard process is designed to be modularly extensible for project-specific requirements; for this purpose, advantages, problems, and improvement requests of the users are incorporated. In order to extend the process to the entire construction industry, the software requirements for a standard interface of the precast plants are analysed. To prevent problems during the implementation and execution of the tracking, the particular interests of all parties involved are analysed. The results are generated from expert interviews and supplemented by a literature review and the author's own experience in implementing precast element tracking on the Telekom Areal construction site of Ed. Züblin AG.
More than 40 percent of companies do not use operational process data to effectively monitor the throughput times or costs of their processes. Nevertheless, more than 60 percent of companies state that they want to increase their efficiency through process management. This is shown by the study "Business Process Management 2015" of the ZHAW School of Management and Law (SML). The results were presented today at the BPM Symposium in Winterthur and discussed with a broad professional audience from practice and academia.
A foundation for digital transformation: The study examines how and to what extent companies are extending the standard repertoire of business process management towards process intelligence. Process intelligence closes the gap to operational business and provides a new perspective on the management of business processes. It focuses on the information that is created and used in operational processes and is thus an essential foundation for the currently much-discussed digital transformation of companies. To better understand, control, and optimise processes, methods and tools of business process management (BPM) and business intelligence (BI) are combined.
Valuable experience from practice: For the study, more than 80 companies were surveyed online on the status quo of their "process intelligence". A practical workshop served as a setting to identify success patterns from five case studies at Roche, AXA Winterthur, St. Galler Kantonalbank, and the cities of Lausanne and Konstanz. The software manufacturer Axon Ivy and SBB Immobilien contributed valuable practical experience within a study partnership with the Institut für Wirtschaftsinformatik of the SML and the Institut für Prozesssteuerung of the HTWG Konstanz. The result is a snapshot of the strategic, analytical, and practical capabilities, methods, and tools with which organisations design, execute, monitor, and continuously develop their business processes.
The aim of this bachelor's thesis is the development of an architecture concept for an order management system with an integrated process engine. It should be as easy as possible to integrate new order types into the order management system and execute them with the help of the process engine. Credit Suisse AG, Zurich, specified the system requirements as well as a generic order life cycle.
First, a BPMN 2.0 example process was defined. Together with the given order life cycle, a generic and modular process template was then developed from it. This template serves the modularisation and uniform processing of (order-type) processes.
Based on this, an application concept was developed that provides the basis for the standardised design (blueprint) with regard to the application class (order management).
Finally, the application architecture was verified with the help of a small prototype.
From the foreword:
"NTF - Hinführung zur naturwissenschaftlichen-technischen Fachsprache" is a programmed language course for scientific and technical German which, together with the basic course "MNF - Hinführung zur mathematisch-naturwissenschaftlichen Fachsprache", forms a two-stage teaching system for the linguistic preparation of foreign students for studies in science and engineering at universities of applied sciences and technical universities.
In addition, NTF is aimed at anyone who has a command of the basic terminology of materials science, mechanical engineering, building materials science, and electrical engineering/computer science and needs to read technical texts from these fields.
This essay, written during a sabbatical semester, examines technical, environmental, ethical, and legal topics of autonomous driving with a view to their implementation in teaching. Curricula on autonomous driving from Kettering University and Udacity are presented.
When designing the transfer gearbox of power-split hybrid drives, the gear ratios must be carefully selected with regard to efficiency. The unfavourable power transmission in the serial, electrical branch as well as the reactive power must be minimised. The relevant formulas are derived in this essay, which was written during a sabbatical semester.
Dry-running continuously variable transmissions have not yet been used in the automotive compact class. This is mainly due to the unsatisfactory power transmission capability of the belts in combination with a sufficient ratio range (transmission spread). In addition to further developments in the field of composite V-belts used for this purpose, design solutions for increasing the transmittable power must above all be found. The concept of a double-belt (twin-belt) transmission is to be investigated. By using two belts instead of a single one, a near doubling of the transmittable power can be expected, which substantially broadens the possible applications of the transmission. The main problems are the installation space required for the additional belt drive and the increased effort in the transmission control. Furthermore, for economical production of the transmission, the necessary additional components must be reduced to a minimum. In addition to the theoretical and constructive design of the twin-belt transmission components to be developed and the installation space study for the application in the automotive compact class, a component test rig is also to be developed. The test rig is intended to allow the investigation of the functionality of this new transmission design as well as experiments concerning the necessary adjustment and clamping systems and the operating behaviour of the belts used.
At the University of Applied Sciences Konstanz, Germany, a modern electronically controlled dynamometer and several cars are available for tests. Numerous studies have been carried out, and the latest results will be presented. The paper is intended to explain different tests under load. One focus is the driving cycle WLTC (Worldwide harmonized Light vehicles Test Cycle) and the requirements for the proper conduct of investigation with this driving cycle. Two and three wheelers have a great importance for mobility in various Asian countries. But also in other countries, this segment is very important for the so-called First or Last Mile Vehicles. Because of this, a short explanation of the driving cycle WMTC (Worldwide harmonized Motorcycle Emissions Certification/Test Procedure) is given. The various possibilities for the operation of the dynamometer and for carrying out various experiments are shown.
Other important figures that can be determined on a dynamometer are the wheel power, the power losses, and ultimately the engine performance. With the torque measured at the brake, the traction force at the propelled wheels, the maximum acceleration, or the maximum gradeability of a car can be determined.
The load-dependent slippage can also be measured on the dynamometer. The dynamic wheel radius of the driven wheels has a significant influence on the slippage. Because of the temperature increase of the tires during the tests, the tire pressure rises. A rise in tire temperature, tire pressure, and wheel speed results in an increase of the dynamic wheel radius and the slippage. Equations for the determination of the dynamic wheel radius are presented.
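The paper's own equations are not reproduced in the abstract; with the textbook definitions (r_dyn = v/ω, and one of several slip-ratio conventions, both assumptions rather than the authors' formulas), the two quantities can be computed as:

```python
import math

def dynamic_wheel_radius(v_roller_mps, wheel_speed_rps):
    """Dynamic wheel radius r_dyn = v / omega, from the roller surface
    speed (m/s) and the wheel rotational speed (rev/s), assuming no slip."""
    return v_roller_mps / (2 * math.pi * wheel_speed_rps)

def drive_slip(v_wheel_mps, v_vehicle_mps):
    """Traction slip ratio (one common convention): relative difference
    between wheel circumferential speed and vehicle (roller) speed."""
    return (v_wheel_mps - v_vehicle_mps) / v_wheel_mps
```

Because r_dyn grows with tire temperature and pressure, both quantities drift during a long test run, which is why they are tracked rather than taken as constants.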
Several possibilities for tests under load on a chassis dynamometer are presented. Consumption measurements according to standard driving cycles such as the New European Driving Cycle (NEDC) and the Worldwide harmonized Light vehicles Test Procedure/Cycle (WLTP/WLTC) require special attention to compliance with the regulations. The rotational masses of inertia and the velocity-dependent load have to match the required values. Load tests also allow the determination of the maximum acceleration in the current gear and of the slippage of the driven wheels.
The aim of the paper is to present the simulation of the sweeping process based on a mathematical model that includes the drag force, the lift force, the sideway force, and gravity. First, a short history of street sweepers, some considerations about the sweeping process, and the parameters of the sweeping process are presented. Using the developed model, simulations of the trajectory of a spherical pebble are performed in Matlab. The obtained results are presented graphically.
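As an illustration of this kind of trajectory simulation (reduced to gravity plus quadratic drag only; lift and sideway forces are omitted, and all parameter values are assumptions, not the paper's model), a minimal integration might look like:

```python
import numpy as np

def pebble_trajectory(v0, angle_deg, dt=1e-3,
                      mass=5e-3, diameter=0.01,
                      rho_air=1.2, cd=0.47, g=9.81):
    """Integrate the flight of a spherical pebble under gravity and
    quadratic aerodynamic drag F_d = -k |v| v (semi-implicit Euler).

    Returns arrays of x and y positions until the pebble lands (y < 0).
    """
    area = np.pi * (diameter / 2) ** 2
    k = 0.5 * rho_air * cd * area / mass      # drag acceleration factor
    ang = np.radians(angle_deg)
    pos = np.array([0.0, 0.0])
    vel = v0 * np.array([np.cos(ang), np.sin(ang)])
    xs, ys = [pos[0]], [pos[1]]
    while pos[1] >= 0.0:
        speed = np.linalg.norm(vel)
        acc = np.array([0.0, -g]) - k * speed * vel
        vel = vel + dt * acc                   # update velocity first,
        pos = pos + dt * vel                   # then position (symplectic)
        xs.append(pos[0])
        ys.append(pos[1])
    return np.array(xs), np.array(ys)
```

Adding the lift and sideway force terms of the paper's model would amount to extra acceleration terms inside the loop.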
The aim of this project is the development of a miniaturised, fully implantable device for the lengthening of jaw and cranial bones, based on findings from plate osteosynthesis procedures. This device, referred to in the following as a mechatronic osteosynthesis plate (MO for short) or distractor, is a small telescopic actuator that generates a force of at least 30 N, which is required for lengthening a bone. It is driven by shape memory wires made of a nickel-titanium alloy that are tensioned inside the device. These wires are supplied with current via wireless low-frequency inductive energy coupling. A ratchet mechanism transfers the force to a telescopically extending plate. The distractor is screwed directly onto the bone with commercially available titanium bone screws. The novelties of this development include the drive by shape memory alloy wires and the full implantability of the device. Shape memory alloys can be deformed in the cold state with little force; when subsequently heated, the alloy "remembers" its original shape and returns to the previous undeformed state. The aim of this new development is to minimise the risk of infection by completely closing the operated area, to reduce the risk for the patient, and to increase the comfort of treatment. The minimised scarring also improves the cosmetic result. Furthermore, the acquisition of measured values for displacement and force is to be integrated in order to inform the attending physician about the progress of the lengthening process and the healing. Osteosynthesis by bone lengthening is used in plastic surgery for malformations, misalignments, or injuries.
Vertrauen durch Integrität
(2021)
In the discussion of corporate responsibility, the question of how far a company's responsibility extends has gained intensity, driven by globalisation and by societal, ecological, and technological challenges. For many societal and ecological problems, companies are increasingly called upon to take responsibility, for example regarding respect for human rights in the value chain, compliance with social and environmental standards, sustainability, and product safety. This article shows how companies can meet their responsibility through integrity and compliance management.
Which competencies do managers need so that the approach of treating compliance and integrity as a leadership task takes hold in organisations? And how can these competencies be used and trained systematically? This article presents the first building block of a research project based at the Konstanz Institut für Corporate Governance, which aims to make existing compliance systems in companies more practicable and to increase the effectiveness of the measures of a compliance management system (CMS).
This thesis deals with the problem of tracking multiple extended objects. This tracking problem occurs, for instance, when a car equipped with sensors drives on the road and detects multiple other cars in front of it. When the setup between the sensor and the other cars is such that each single car generates multiple measurements, the cars are called extended objects. This can occur in real-world scenarios, mainly with the use of high-resolution sensors in near-field applications. In such a near-field scenario, a single object occupies several resolution cells of the sensor, so that multiple measurements are generated per scan. The measurements are additionally superimposed by sensor noise. Besides the object-generated measurements, false alarms occur that are not caused by any object, and in some sensor scans single objects may be missed, so that they do not generate any measurements.
To handle these scenarios, object tracking filters are needed that process the sensor measurements in order to obtain a stable and accurate estimate of the objects in each sensor scan. The scope of this thesis is to implement such a tracking filter for extended objects, i.e. a filter that estimates their positions and extents. In this context, the topic of measurement partitioning arises, a pre-processing step on the measurement data. Partitioning puts the measurements that were likely generated by one object into one cluster, also called a cell. The obtained cells are then processed by the tracking filter in the estimation process. Partitioning the measurement data is crucial for the performance of the tracking filter, because an insufficient partitioning leads to poor tracking performance, i.e. inaccurate object estimates.
In this thesis, a Gaussian inverse Wishart Probability Hypothesis Density (GIW-PHD) filter was implemented to handle the multiple extended object tracking problem. Within this filter framework, the set of objects is modelled as a Random Finite Set (RFS) and each object's extent as a random matrix (RM). The partitioning methods used to cluster the measurement data are existing ones as well as a new approach based on likelihood sampling methods. The classical heuristic methods applied are Distance Partitioning (DP) and Sub-Partitioning (SP), whereas the proposed likelihood-based approach is called Stochastic Partitioning (StP). The latter was developed in this thesis based on the Stochastic Optimisation approach by Granström et al. An implementation, including the StP method and its integration into the filter framework, is provided within this thesis.
The implementations, using the different partitioning methods, were tested on simulated random multi-object scenarios and in a fixed parallel tracking scenario using Monte Carlo methods. Furthermore, a runtime analysis was carried out to provide insight into the computational effort of the different partitioning methods. The analysis showed that the StP method outperforms the classical partitioning methods in scenarios where the objects move spatially close to each other: the filter using StP performs more stably and delivers more accurate estimates. However, this advantage comes at a higher computational cost compared to the classical heuristic partitioning methods.
This paper presents a new likelihood-based method for partitioning the measurement set in the extended object probability hypothesis density (PHD) filter framework. Recent work has mostly relied on heuristic partitioning methods that cluster the measurement data based on a distance measure between the single measurements. This can lead to poor filter performance if the tracked extended objects are closely spaced. The proposed method, called Stochastic Partitioning (StP), is based on sampling methods and was inspired by earlier work of Granström et al. In this work, the StP method is applied to a Gaussian inverse Wishart (GIW) PHD filter and compared to a second filter implementation that uses the heuristic Distance Partitioning (DP) method. The performance is evaluated in Monte Carlo simulations in a scenario where two objects approach each other. It is shown that the sampling-based StP method leads to improved filter performance compared to DP.
Anbindung eines PPC405 embedded CPU Core an ein MPC860-System unter dem Echtzeitbetriebssystem OSE
(2006)
The goal of this Diplom thesis is to establish the software-side integration of a PPC405 system into the overall system of a digital radio platform. A purpose-built implementation, started when the radio is switched on, transfers the operating system into the main memory of the PPC405 system. After the initialisation phase is complete, the application processes on the participating systems communicate with their respective counterparts via a transparent extension of the operating system. A memory located between the systems serves as the transmission medium for this operating system extension; its coordinated access is handled by a driver component developed as part of this thesis. The integration of the PPC405 system is expected to increase the performance of the overall system. The performance of the resulting multiprocessor system was determined on the basis of measurements, and from the results suitable optimisation options were derived and implemented.
Network effects, economies of scale, and lock-in-effects increasingly lead to a concentration of digital resources and capabilities, hindering the free and equitable development of digital entrepreneurship, new skills, and jobs, especially in small communities and their small and medium-sized enterprises (“SMEs”). To ensure the affordability and accessibility of technologies, promote digital entrepreneurship and community well-being, and protect digital rights, we propose data cooperatives as a vehicle for secure, trusted, and sovereign data exchange. In post-pandemic times, community/SME-led cooperatives can play a vital role by ensuring that supply chains to support digital commons are uninterrupted, resilient, and decentralized. Digital commons and data sovereignty provide communities with affordable and easy access to information and the ability to collectively negotiate data-related decisions. Moreover, cooperative commons (a) provide access to the infrastructure that underpins the modern economy, (b) preserve property rights, and (c) ensure that privatization and monopolization do not further erode self-determination, especially in a world increasingly mediated by AI. Thus, governance plays a significant role in accelerating communities’/SMEs’ digital transformation and addressing their challenges. Cooperatives thrive on digital governance and standards such as open trusted application programming interfaces (“APIs”) that increase the efficiency, technological capabilities, and capacities of participants and, most importantly, integrate, enable, and accelerate the digital transformation of SMEs in the overall process. This review article analyses an array of transformative use cases that underline the potential of cooperative data governance. 
These case studies exemplify how data and platform cooperatives, through their innovative value creation mechanisms, can elevate digital commons and value chains to a new dimension of collaboration, thereby addressing pressing societal issues. Guided by our research aim, we propose a policy framework that supports the practical implementation of digital federation platforms and data cooperatives. This policy blueprint intends to facilitate sustainable development in both the Global South and North, fostering equitable and inclusive data governance strategies.
Increasing demand for sustainable, resilient, and low-carbon construction materials has highlighted the potential of Compacted Mineral Mixtures (CMMs), which are formulated from various soil types (sand, silt, clay) and recycled mineral waste. This paper presents a comprehensive inter- and transdisciplinary research concept that aims to industrialise and scale up the adoption of CMM-based construction materials and methods, thereby accelerating the construction industry’s systemic transition towards carbon neutrality. By drawing upon the latest advances in soil mechanics, rheology, and automation, we propose the development of a robust material properties database to inform the design and application of CMM-based materials, taking into account their complex, time-dependent behaviour. Advanced soil mechanical tests would be utilised to ensure optimal performance under various loading and ageing conditions. This research has also recognised the importance of context-specific strategies for CMM adoption. We have explored the implications and limitations of implementing the proposed framework in developing countries, particularly where resources may be constrained. We aim to shed light on socio-economic and regulatory aspects that could influence the adoption of these sustainable construction methods. The proposed concept explores how the automated production of CMM-based wall elements can become a fast, competitive, emission-free, and recyclable alternative to traditional masonry and concrete construction techniques. We advocate for the integration of open-source digital platform technologies to enhance data accessibility, processing, and knowledge acquisition; to boost confidence in CMM-based technologies; and to catalyse their widespread adoption. We believe that the transformative potential of this research necessitates a blend of basic and applied investigation using a comprehensive, holistic, and transfer-oriented methodology. 
Thus, this paper serves to highlight the viability and multiple benefits of CMMs in construction, emphasising their pivotal role in advancing sustainable development and resilience in the built environment.