Interpretability and uncertainty modeling are key factors for medical applications. Moreover, data in medicine are often available as a combination of unstructured data, such as images, and structured predictors, such as a patient's metadata. While deep learning models are state of the art for image classification, they are often referred to as 'black boxes' owing to their lack of interpretability. In addition, DL models typically yield only point predictions and tend to be overconfident in their parameter estimates and outcome predictions.
Statistical regression models, on the other hand, provide interpretable predictor effects and can capture parameter and model uncertainty via the Bayesian approach. In this thesis, a publicly available melanoma dataset, consisting of skin lesion images and the patient's age, is used to predict melanoma type with a semi-structured model, while its interpretable components and model uncertainty are quantified. For the Bayesian models, the transformation-model-based variational inference (TM-VI) method is used to determine the posterior distributions of the parameters. Several model constellations consisting of the patient's age and/or the skin lesion image were implemented and evaluated. Predictive performance was best for a combined model of image and patient's age, which still provides an interpretable posterior distribution for the regression coefficient. In addition, integrating uncertainty into the image and tabular parts results in larger variability of the outputs, reflecting the high uncertainty of the individual model components.
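The semi-structured idea described above — an unstructured image effect added to a linear, interpretable age effect on the predictor scale — can be sketched as follows. The function names and toy numbers are illustrative, not taken from the thesis; the image score stands in for a CNN output.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def semi_structured_predictor(image_score, age, beta_age, intercept=0.0):
    """Additive predictor: an unstructured image effect (a CNN output in the
    full model; a fixed toy value here) plus a linear, interpretable age effect."""
    return intercept + image_score + beta_age * age

# hypothetical values: image_score from a CNN, beta_age a posterior mean
eta = semi_structured_predictor(image_score=0.8, age=55.0, beta_age=0.02)
p_melanoma = sigmoid(eta)
```

Because the age effect enters linearly, `beta_age` keeps the familiar regression-coefficient interpretation even though the image part is a deep network.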
In Maun, Botswana, a self-sufficient, sustainable and future-oriented district will be created: the Maun Science Park. Within this project, several five- to eight-storey smart homes shall be built using sustainable construction. The aim of this thesis is to develop a sustainable structural concept for those homes of the Maun Science Park. In a first step, the general basics of tall building structures and sustainable construction were established. Based on those fundamentals, criteria for the structural requirements and for the ecological as well as the social sustainability of a structural design were defined. Subsequently, four structural systems were drafted: a concrete core structure, a steel shear frame structure, a rammed earth shear wall structure and a wooden diagrid structure. In addition to the pre-dimensioning of the systems, a life cycle assessment was set up to evaluate the ecological sustainability of the designs. With the help of a utility value analysis, the wooden diagrid structure was determined to be the preferred variant. The comparison of the designs also allows general conclusions to be drawn for the development of sustainable tall building structures. The results of the life cycle assessment show the advantage of wood as an ecological building material over industrially manufactured building materials such as steel and concrete. Rammed earth, a likewise ecological building material, is less convincing due to its low strength. In general, the life cycle assessment reveals a trade-off between ecological and industrially manufactured products with regard to strength and environmental impact. In terms of social sustainability, the design of the structural system can significantly influence flexibility and the use of local resources. However, due to the diversity of sustainable construction, the development of a structural system should be linked to an overarching sustainability concept that takes architecture and stakeholders into account.
Forecasting is crucial for both system planning and operations in the energy sector. With the increasing penetration of renewable energy sources, growing fluctuations in power generation need to be taken into account. Probabilistic load forecasting is a young but growing research topic focusing on the prediction of future uncertainties. However, the majority of publications so far focus on techniques like quantile regression, ensembles, or scenario-based methods, which generate discrete quantiles or sets of possible load curves. The conditional probability distribution remains unknown and can only be estimated when the output is post-processed with a statistical method such as kernel density estimation.
Instead, the proposed probabilistic deep learning model uses a cascade of transformation functions, known as a normalizing flow, to model the conditional density function from a smart meter dataset containing electricity demand information for over 4,000 buildings in Ireland. Since the whole probability density function is tractable, the parameters of the model can be obtained by minimizing the negative log-likelihood with standard gradient descent. This yields the model that best represents the data distribution.
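As a minimal illustration of fitting a flow by minimizing the negative log-likelihood, the sketch below uses a single affine transformation — the simplest possible flow, equivalent to fitting a Gaussian — with hand-derived gradients from the change-of-variables formula. The data and learning rate are toy values, not the thesis's setup.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(3.0, 2.0, size=1000)  # toy stand-in for demand samples

# One affine flow layer z = (x - mu) * exp(-log_sigma) with a standard normal
# base density. Per-sample NLL (change of variables):
#   0.5 * z**2 + 0.5 * log(2*pi) + log_sigma
mu, log_sigma = 0.0, 0.0
lr = 0.1
for _ in range(500):
    z = (x - mu) * np.exp(-log_sigma)
    grad_mu = np.mean(-z * np.exp(-log_sigma))  # d NLL / d mu
    grad_ls = np.mean(1.0 - z * z)              # d NLL / d log_sigma
    mu -= lr * grad_mu
    log_sigma -= lr * grad_ls
# mu and exp(log_sigma) approach the sample mean and standard deviation
```

A real flow stacks many such layers with learnable nonlinear transforms, and a neural network conditions their parameters on the history; the training loop stays the same.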
Two different deep learning models have been compared: a simple three-layer fully connected neural network and a more advanced convolutional neural network for sequential data processing inspired by the WaveNet architecture. These models have been used to parametrize three different probabilistic models: a simple normal distribution, a Gaussian mixture model, and the normalizing flow model. The prediction horizon is set to one day with a resolution of 30 minutes; hence the models predict 48 conditional probability distributions.
The normalizing flow model outperforms the two other variants for both architectures and proves its ability to capture the complex structures and dependencies causing the variations in the data. Understanding the stochastic nature of the task in such detail makes the methodology applicable to use cases beyond forecasting. It is shown how it can be used to detect anomalies in the power grid or to generate synthetic scenarios for grid planning.
Cities around the world are facing an increasing number of global and local challenges, such as climate change and scarcity of raw materials. At the same time, trends like digitalization, globalization, and networking are gaining in importance. For this reason, cities have started implementing smart solutions within the urban structure in order to evolve towards a Smart City. In Botswana, the Maun Science Park is intended to provide a best-practice approach for a Botswanan Smart City. Since Smart City concepts have to be specifically tailored to local conditions, the first main goal of this thesis is to develop a synthesis concept for the Maun Science Park. A key problem in cities is the utilization of space, which is further intensified by increasing urbanization and population growth. Therefore, the second main goal is to develop approaches for (digitally) re-programmable space so that available areas can be used intelligently and efficiently.
Within the thesis, human-centered design has been applied as the structuring methodology. By clarifying relevant Smart City contents, considering reference examples, and identifying local challenges and requirements, an appropriate concept has been developed with a human focus. Furthermore, literature research and expert interviews have been used as input in the individual human-centered design phases. In combination with an innovation funnel, the human-centered design methodology forms the structure of the thesis.
In total, ten main solution areas and 37 sub-segments have been identified for the synthesis concept of the Maun Science Park. Additionally, a concept for Smart Buildings has been developed as part of the synthesis concept and as an essential infrastructure component of the Maun Science Park (three main segments, 16 sub-segments). Based on expert input, a prioritization has been determined by evaluating the impact and economic affordability of the individual sub-areas. Moreover, individual key areas have been highlighted by identifying direct interactions between sub-segments and on the basis of expert input – these are particularly related to the segments Smart Data and Smart People. Besides the synthesis concept, approaches for (digitally) re-programmable space have been created. Ten of these approaches concern the conversion, reuse, or expansion of space within daily, weekly, or life cycles. In addition, the conventional idea of (digitally) re-programmable space has been extended by two new considerations – “multi-purpose use of built-up space” and “concept programming in the planning phase”. Finally, within an overall consideration – the synthesis concept combined with the approaches for (digitally) re-programmable space – the added value of the developed contents has been outlined, positive and negative aspects have been identified within a SWOT analysis, and the business model of the Maun Science Park approach has been verified in a Business Model Canvas.
Through explicit elaboration, classification and prioritization of solution areas, the developed concept can serve as a basis for further project steps. Based on the defined requirements of the sub-segments, solutions can be developed with regard to the entire Smart City context.
Throughout this thesis, the implementation of tools for knowledge management as a key factor for sustainable corporate development is presented. In industries with a high fluctuation rate, such as construction, efficient knowledge management is of particular importance. Companies feel the effects of negligent handling of this resource especially during the Corona pandemic. Restructuring leads to experienced employees leaving the company – and with them the know-how and experience gained. With a systematic knowledge transfer, the most important insights in such situations remain within the organization. Thus, the company becomes crisis-proof and receives all the tools it needs to grow healthily again after the recession. Practical data from competitors indicates that knowledge management promises savings potential of several million euros per year for BAM. Further potentials in the areas of sustainability, customer and employee satisfaction, and occupational safety, which do not lead to direct savings, are also worth mentioning. This thesis determines the current maturity level of knowledge management at BAM before introducing processes and systems that successively drive the improvement. The developed methods simultaneously help to prevent and solve problems and systematically promote the continuous improvement of all work processes in the company.
Detailed steps are presented for carrying out change management towards the successful introduction and further development of knowledge management at BAM. A major focus is on interpersonal factors. The related topic of knowledge culture was recently ranked by the German think tank Zukunftsinstitut as one of the top five megatrends for companies in the 2020s. The methods developed contribute to the creation of such a culture and to the transformation of BAM towards a learning organization. Knowledge management aligns with the BAM values. In the course of this thesis, it is shown how the system, by its very nature, helps to implement these values in the work of every employee.
The results of this elaboration were recently awarded the Digital Construction Award 2020 for Business Excellence at BAM Deutschland AG.
In Botswana, a new construction project – the Maun Science Park – is to be built with a focus on sustainability, creating new living space for the rapidly growing population in Africa. The project will be a blueprint for future projects in Africa in terms of progress, technology and sustainability. This thesis deals with its financial framework and serves as a basis for developing ways and means of financing such projects.
In the field of autonomously driving vehicles, environment perception covering dynamic objects like other road users is essential. In particular, detecting other vehicles in road traffic using sensor data is of utmost importance. As the sensor data and the applied system model for the objects of interest are noise-corrupted, a filter algorithm must be used to track moving objects. With LIDAR sensors, one object gives rise to more than one measurement per time step and is therefore called an extended object. This allows the object's position as well as its orientation, extension and shape to be estimated jointly. Estimating an arbitrarily shaped object comes with a higher computational effort than estimating the shape of an object that can be approximated by a basic geometric shape like an ellipse or a rectangle. In the case of a vehicle, assuming a rectangular shape is an accurate assumption.
A recently developed approach models the contour of a vehicle as a periodic B-spline function. This representation is an easy-to-use tool, as the contour can be specified by a few basis points in Cartesian coordinates. Rotating, scaling and translating the contour are also easy to handle with a spline contour. This contour model can be used to develop a measurement model for extended objects that can be integrated into a tracking filter. Another approach to modeling the shape of a vehicle is the so-called bounding box, which represents the shape as a rectangle.
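A sketch of why the spline contour is convenient: a closed uniform cubic B-spline is evaluated from a few basis points, and because the basis weights sum to one, rotating, scaling and translating the basis points transforms the whole contour in the same way. The basis-point values below are illustrative, not from the thesis.

```python
import numpy as np

def periodic_bspline(points, t):
    """Evaluate a closed uniform cubic B-spline contour at parameter t in [0, n)."""
    P = np.asarray(points, dtype=float)
    n = len(P)
    i = int(np.floor(t)) % n
    u = t - np.floor(t)
    basis = np.array([(1 - u) ** 3,
                      3 * u ** 3 - 6 * u ** 2 + 4,
                      -3 * u ** 3 + 3 * u ** 2 + 3 * u + 1,
                      u ** 3]) / 6.0
    return basis @ P[[(i + k) % n for k in range(4)]]

# toy basis points, plus a rotation, scaling and shift
P = np.array([[1.0, 1.0], [-1.0, 1.0], [-1.0, -1.0], [1.0, -1.0]])
theta = 0.7
R = np.array([[np.cos(theta), -np.sin(theta)], [np.sin(theta), np.cos(theta)]])
shift = np.array([2.0, -1.0])
# transforming the basis points transforms every contour point identically,
# because the four basis weights always sum to 1 (affine invariance)
pt1 = periodic_bspline(1.5 * P @ R.T + shift, 1.3)
pt2 = 1.5 * periodic_bspline(P, 1.3) @ R.T + shift
```

This affine invariance is what makes moving, rotating and scaling the tracked contour a matter of transforming a handful of basis points.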
In this thesis, the basics of single-, multi- and extended-object tracking, as well as the basics of B-spline functions, are addressed. Afterwards, the spline measurement model is derived in detail and integrated into an extended Kalman filter to track a single extended object. An implementation of the resulting algorithm is compared with the rectangular shape estimator, whose implementation is provided. The comparison is done using long-term considerations with Monte Carlo simulations and by analyzing the results of a single run. To this end, both algorithms are applied to the same measurements, which are generated using an artificial LIDAR sensor in a simulation environment.
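The extended Kalman filter cycle that such a measurement model plugs into can be sketched generically as below. The constant-velocity example is a hypothetical linear stand-in; in the thesis, `h` would be the nonlinear spline measurement model and `H` its Jacobian.

```python
import numpy as np

def ekf_step(x, P, z, f, F, h, H, Q, R):
    """One generic EKF predict-update cycle with process noise Q and
    measurement noise R."""
    # predict
    x_pred = f(x)
    Fk = F(x)
    P_pred = Fk @ P @ Fk.T + Q
    # update
    Hk = H(x_pred)
    y = z - h(x_pred)                      # innovation
    S = Hk @ P_pred @ Hk.T + R             # innovation covariance
    K = P_pred @ Hk.T @ np.linalg.inv(S)   # Kalman gain
    x_new = x_pred + K @ y
    P_new = (np.eye(len(x)) - K @ Hk) @ P_pred
    return x_new, P_new

# hypothetical linear example: constant-velocity state, position-only measurement
Fm = np.array([[1.0, 1.0], [0.0, 1.0]])
Hm = np.array([[1.0, 0.0]])
x1, P1 = ekf_step(np.array([0.0, 1.0]), np.eye(2), np.array([1.2]),
                  f=lambda x: Fm @ x, F=lambda x: Fm,
                  h=lambda x: Hm @ x, H=lambda x: Hm,
                  Q=0.01 * np.eye(2), R=np.array([[0.5]]))
```

With an extended object, one time step processes many LIDAR returns, so the update is repeated (or stacked) over all measurements associated with the object.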
In a real-world tracking scenario, several extended objects may be present, along with measurements that do not originate from any real object, called clutter measurements. The sudden appearance and disappearance of objects is also possible. A filter framework investigated in recent years that can handle tracking multiple objects in a cluttered environment is the random-finite-set-based approach. The idea of random finite sets and their use in a tracking filter is reviewed in this thesis. Afterwards, the spline measurement model is included in a multi-extended-object tracking framework. An implementation of the resulting filter is investigated in a long-term consideration using Monte Carlo simulations and by analyzing the results of a single run. The multi-extended-object filter is also applied to artificial LIDAR measurements generated in a simulation environment.
The comparison of the spline-based and rectangle-based extended object trackers shows a more stable performance of the spline extended object tracker. Some problems that have to be addressed in future work are also discussed. The investigation of the resulting multi-extended-object tracker shows a successful integration of the spline measurement model into a multi-extended-object tracker. Here too, some problems remain that have to be solved in future work.
This thesis deals with the tracking of multiple extended objects. Such a tracking problem occurs, for instance, when a car with sensors drives on the road and detects multiple other cars in front of it. When the setup between the sensor and the other cars is such that multiple measurements are created by each single car, the cars are called extended objects. This occurs mainly in real-world scenarios with high-resolution sensors in near-field applications. In such a near-field scenario, a single object occupies several resolution cells of the sensor, so that multiple measurements are generated per scan. The measurements are additionally superimposed by the sensor's noise. Besides the object-generated measurements, false alarms occur that are not caused by any object, and in some sensor scans single objects may be missed, so that they generate no measurements at all.
To handle these scenarios, object tracking filters are needed to process the sensor measurements in order to obtain a stable and accurate estimate of the objects in each sensor scan. The scope of this thesis is to implement such a tracking filter for extended objects, i.e. a filter that estimates their positions and extents. In this context, the topic of measurement partitioning arises, a pre-processing step on the measurement data. With partitioning, measurements that were likely generated by one object are put into one cluster, also called a cell. The obtained cells are then processed by the tracking filter in the estimation step. The partitioning of the measurement data is crucial for the performance of the tracking filter, because insufficient partitioning leads to poor tracking performance, i.e. inaccurate object estimates.
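A minimal sketch of distance-based partitioning: measurements connected by a chain of pairwise distances below a threshold end up in the same cell (connected components under the distance relation). The threshold and measurement values are toy numbers, not the thesis's parameters.

```python
import numpy as np

def distance_partition(Z, d):
    """Cluster measurements into cells: two measurements share a cell if they
    are connected by a chain of pairwise distances <= d (single linkage)."""
    n = len(Z)
    labels = [-1] * n
    cell = 0
    for i in range(n):
        if labels[i] != -1:
            continue
        labels[i] = cell
        stack = [i]
        while stack:
            j = stack.pop()
            for k in range(n):
                if labels[k] == -1 and np.linalg.norm(Z[j] - Z[k]) <= d:
                    labels[k] = cell
                    stack.append(k)
        cell += 1
    return labels

# two well-separated groups of measurements (toy values)
Z = np.array([[0.0, 0.0], [0.5, 0.0], [5.0, 5.0], [5.2, 5.0]])
labels = distance_partition(Z, d=1.0)
```

In practice, a filter evaluates several such partitions for different thresholds; a single threshold fails exactly when objects move close together, which motivates the likelihood-based alternative.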
In this thesis, a Gaussian inverse Wishart Probability Hypothesis Density (GIW-PHD) filter was implemented to handle the multiple extended object tracking problem. Within this filter framework, the set of objects is modelled as a Random Finite Set (RFS) and the objects' extents as random matrices (RM). The partitioning methods used to cluster the measurement data comprise existing ones as well as a new approach based on likelihood sampling. The applied classical heuristic methods are Distance Partitioning (DP) and Sub-Partitioning (SP), whereas the proposed likelihood-based approach is called Stochastic Partitioning (StP). The latter was developed in this thesis based on the Stochastic Optimisation approach by Granström et al. An implementation, including the StP method and its integration into the filter framework, is provided within this thesis.
The implementations using the different partitioning methods were tested on simulated random multi-object scenarios and on a fixed parallel tracking scenario using Monte Carlo methods. Further, a runtime analysis was carried out to provide insight into the computational effort of the different partitioning methods. The results show that the StP method outperforms the classical partitioning methods in scenarios where the objects move spatially close together: the filter using StP performs more stably and yields more accurate estimates. However, this advantage comes with a higher computational effort compared to the classical heuristic partitioning methods.
This diploma thesis is devoted to the design and analysis of a radar signal enabling an object classification capability in surveillance radar systems based on high-resolution radar range profiles. It builds on the research results of Kastinger (2006), who investigated classification algorithms for high-resolution radar range profiles, and Meier (2007), who programmed a MATLAB toolbox for the evaluation of radar signals. A brief, classical introduction to radar fundamentals is given (Chapter 1), along with the motivation for this thesis and the basic parameters used. After high-resolution radar range profiles are discussed with a special focus on surveillance radar systems (Chapter 2), the results of Kastinger (2006) are summarized (Chapter 3) as far as necessary for the following chapters. Building on the chapters on radar basics, high-resolution radar range profiles and classification, basic and advanced radar signals are then discussed and analysed, especially with respect to their range resolution and sidelobe levels (Chapter 4). This includes linear and nonlinear frequency-modulated pulses as well as phase-coded pulses, coherent trains of identical pulses, and stepped-frequency waveforms. Their analysis is based on Meier's MATLAB toolbox. Chapter 5 brings up additional points that have to be considered in radar system design when implementing a classification capability, before the thesis ends with an overall conclusion (Chapter 6).
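As a small illustration of the range-resolution and sidelobe analysis for a linear frequency-modulated pulse, the sketch below generates a baseband LFM chirp and its matched-filter (autocorrelation) response. The pulse parameters are illustrative, not those used in the thesis.

```python
import numpy as np

# illustrative pulse parameters: 10 us pulse width, 5 MHz sweep bandwidth
T, B, fs = 10e-6, 5e6, 50e6
t = np.arange(0, T, 1 / fs)
s = np.exp(1j * np.pi * (B / T) * (t - T / 2) ** 2)  # baseband LFM chirp

mf = np.correlate(s, s, mode="full")   # matched-filter (autocorrelation) output
mag = np.abs(mf) / np.abs(mf).max()
peak = int(np.argmax(mag))
# after pulse compression, the mainlobe width scales with 1/B, and the first
# sidelobe of an unweighted LFM autocorrelation sits roughly at -13 dB
```

The -13 dB sidelobe level is what motivates nonlinear FM and amplitude weighting when low sidelobes matter for range-profile classification.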
This thesis deals with the background, theory, design, layout and experimental test results of an analogue CMOS VLSI current-mode analog-to-digital converter. The system supports a project whose goal is to build a biologically relevant model of synaptic plasticity, named the Artificial Synapse. A critical part of the design, which is based on analogue CMOS VLSI circuits, is the ability to activate a discrete number of channels by sampling an analogue signal. Since currents are the signal of interest and the transistors are biased in weak inversion (the subthreshold regime), the system requires a current-mode A/D circuit that can operate at ultra-low power and current levels. To meet this need, two new A/D converter approaches are proposed to replace the system's previous A/D converter design, which suffered from non-linear resolution, an uncoded output code and heavy bit oscillations. The initial technical requirements and key criteria for the new converter comprise a resolution of one nanoampere, an input current range of 0–100 nA, conversion frequencies of up to 5 kHz, and a power supply voltage of less than 1.5 V. Temperature range, chip area and power dissipation were not specified due to the early stage of the Artificial Synapse project. Both novel converters produce seven-bit thermometer codes; their operating principle is best described as that of current-mode flash analog-to-digital converters (ADCs). Because the input signal lies in the subthreshold current range, the A/D converter itself should operate in the subthreshold realm. To support low-power operation, clocks and high currents were excluded from the design from the very start. To encode the thermometer code into standard binary code, a seven-to-three encoder was designed and integrated on the chip. In October 2003, the design was submitted for production to the MOSIS circuit fabrication service.
The AMI Semiconductor 1.5-micron ABN CMOS process was chosen to manufacture the chip. When the chip was returned in January 2004, measurements showed that both new A/D converter approaches achieved the excellent results predicted by SPICE simulation. With the new chip installed, it became possible to resolve input currents as small as one nanoampere and to achieve conversion frequencies of up to 5 kHz. Both circuits also meet the requirements set at the beginning of the project: they operate at a power supply voltage of less than 1.5 V and process input currents in the range of 0–100 nA. A prototype printed circuit board (PCB) was developed, produced and employed for experiments with the chip. The major feature of this test-bed is the ability to generate and measure extremely low currents with high precision, which enables the monitoring of the very small currents processed by the chip.
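The seven-to-three encoding step can be illustrated in software: a valid thermometer code is a run of ones followed by zeros, so the binary output is simply the count of ones. This is a behavioural sketch of what an on-chip thermometer-to-binary encoder does, not its circuit implementation.

```python
def thermometer_to_binary(code):
    """Behavioural sketch of a 7-to-3 encoder: a valid 7-bit thermometer code
    is a run of ones followed by zeros, so the 3-bit binary output is the
    count of ones (returned MSB first)."""
    ones = sum(code)
    return [(ones >> b) & 1 for b in (2, 1, 0)]

# e.g. an input current tripping five of the seven comparators
code_5 = thermometer_to_binary([1, 1, 1, 1, 1, 0, 0])
```

In a flash ADC, each thermometer bit comes from one comparator, so the encoder's robustness to "bubbles" (stray zeros inside the run of ones) is a separate hardware design concern not modelled here.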
This work deals with the segmentation of 2D environment laser data captured by an autonomous mobile indoor robot. Segmentation is part of the data processing necessary to navigate a mobile robot error-free in its environment; the whole process can generally be described as data capturing, data processing and navigation. In this project, the data processing deals with data captured by a laser sensor, which provides two-dimensional data as a series of distance measurements, i.e. point measurements of the environment. These point series have to be filtered and processed into a more convenient representation to provide a virtual environment map, which can be used by the robot for error-free navigation. This project provides different solutions to the same problem: the conversion from distance points to model segments that represent the real-world environment as closely as possible. The advantages and disadvantages of the different segmentation algorithms are shown, together with a comparison that takes into account computational time and the robustness of the results.
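One common family of algorithms for segmenting such point series is split-and-merge; the sketch below shows the split phase, which recursively breaks the scan at the point farthest from the chord between the segment endpoints. It is a representative example of this class of segmentation algorithm, not necessarily one of the exact variants compared in the thesis.

```python
import numpy as np

def point_line_distance(p, a, b):
    """Perpendicular distance from point p to the line through a and b."""
    d = b - a
    n = np.array([-d[1], d[0]]) / np.linalg.norm(d)
    return abs(np.dot(p - a, n))

def split(points, tol):
    """Split phase of split-and-merge: recursively break the scan at the point
    farthest from the end-to-end chord; returns (start, end) index pairs."""
    P = np.asarray(points, dtype=float)
    if len(P) <= 2:
        return [(0, len(P) - 1)]
    dists = [point_line_distance(P[i], P[0], P[-1]) for i in range(1, len(P) - 1)]
    k = int(np.argmax(dists)) + 1
    if dists[k - 1] <= tol:
        return [(0, len(P) - 1)]
    left = split(P[:k + 1], tol)
    right = split(P[k:], tol)
    return left + [(i + k, j + k) for (i, j) in right]

# an L-shaped scan: points along the x-axis, then a corner going up
scan = np.array([[0, 0], [1, 0], [2, 0], [2, 1], [2, 2]], dtype=float)
segments = split(scan, tol=0.1)
```

The merge phase (joining adjacent segments whose joint line fit stays within tolerance) is omitted here for brevity.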
This thesis investigates methods for the recognition of facial expressions using support vector machines. Rather than recognizing expressions directly, the system recognizes facial actions such as a raised eyebrow, an open mouth, or a frown. These facial actions are described in the Facial Action Coding System (FACS) and are essential facial components that can be combined to form facial expressions. We perform independent recognition of 6 upper and 10 lower action units in the face, which may occur either individually or in combination. Based on features extracted from grey-level values, the system is expected to recognize the action units under real-time conditions. Results are presented for different image resolutions, SVM kernels and variations of low-level features.
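As a minimal stand-in for the per-action-unit SVM classifiers, the sketch below trains a linear SVM by sub-gradient descent on the hinge loss with toy two-dimensional "grey-level" features. The thesis's actual features, kernels and data are not reproduced here.

```python
import numpy as np

def train_linear_svm(X, y, lam=0.01, epochs=200, lr=0.1):
    """Tiny linear SVM via sub-gradient descent on the regularized hinge loss;
    labels y must be in {-1, +1} (action unit absent/present)."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            if yi * (xi @ w + b) < 1:          # margin violated: push toward yi
                w = (1 - lr * lam) * w + lr * yi * xi
                b += lr * yi
            else:                              # only shrink w (regularization)
                w = (1 - lr * lam) * w
    return w, b

# toy feature vectors for one action unit: present (+1) vs absent (-1)
X = np.array([[2.0, 0.5], [1.5, 1.0], [-1.0, -0.5], [-2.0, -1.5]])
y = np.array([1, 1, -1, -1])
w, b = train_linear_svm(X, y)
preds = np.sign(X @ w + b)
```

Each of the 16 action units would get its own such binary classifier, trained independently so that combinations of action units can be recognized.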
The goal of this thesis is the introduction of a client management system (CMS) at Haaland Internet Productions (HiP), a web design and hosting company in Burbank, California, USA. The company needs a system to track orders and improve its workflow. HiP needs a system that not only tracks orders but also stores all client information in a database. This client information can be used for a variety of marketing and contact purposes and is an important and integral part of HiP's client relationship management (CRM). The lack of a cohesive CMS at HiP caused many fundamental business problems, such as lost orders, missed billing statements, and over- or under-billing. The research done during the investigation and analysis of the company and its needs should lead to a system that fully meets the needs of HiP. This system could take the form of an off-the-shelf product with some customizations, or a completely new in-house system. Either solution has its pros and cons; the goal is to reach a decision that best fits HiP's needs and situation. The following is a concise version of the project. Particular emphasis is placed on the individual steps of the decision process, as well as the techniques and methods applied.