This thesis investigates methods for recognizing facial expressions using support vector machines. Rather than attempting to classify complete expressions directly, we recognize individual facial actions such as a raised eyebrow, an open mouth, or a frown. These facial actions are described by the Facial Action Coding System (FACS) and are the essential facial components that can be combined to form facial expressions. We perform independent recognition of 6 upper-face and 10 lower-face action units, which may occur either individually or in combination. Based on features extracted from grey-level values, the system is expected to operate under real-time conditions. Results are presented for different image resolutions, SVM kernels, and variations of low-level features.
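The per-action-unit setup can be sketched in code. The following is a minimal illustration, not the thesis's implementation: a linear SVM trained by sub-gradient descent on the hinge loss (Pegasos-style) stands in for the actual SVM, and the "grey-level" features and AU labels are synthetic.

```python
import random

def train_linear_svm(X, y, lam=0.01, epochs=50):
    """Pegasos-style primal linear SVM; labels y are +1/-1."""
    w = [0.0] * len(X[0])
    t = 0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            t += 1
            eta = 1.0 / (lam * t)
            margin = yi * sum(wj * xj for wj, xj in zip(w, xi))
            w = [(1 - eta * lam) * wj for wj in w]   # regularization shrink
            if margin < 1:                            # inside the margin: move towards example
                w = [wj + eta * yi * xj for wj, xj in zip(w, xi)]
    return w

def predict(w, x):
    return 1 if sum(wj * xj for wj, xj in zip(w, x)) >= 0 else -1

random.seed(0)
# Synthetic "grey-level" feature vectors and two co-occurring AU labels.
X = [[random.gauss(0, 1) for _ in range(16)] for _ in range(100)]
labels = [[1 if x[au] > 0 else -1 for au in range(2)] for x in X]

# One independent binary classifier per action unit: AUs may co-occur,
# so each image can receive any combination of positive AU decisions.
svms = [train_linear_svm(X, [l[au] for l in labels]) for au in range(2)]
decisions = [[predict(svms[au], x) for au in range(2)] for x in X]
accuracy = sum(d[0] == l[0] for d, l in zip(decisions, labels)) / len(X)
print(round(accuracy, 2))
```

Because each action unit gets its own classifier, the combination step (mapping detected AUs to an expression) stays separate from detection, which is the point of the FACS-based approach.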
The goal of this thesis is the introduction of a client management system (CMS) at Haaland Internet Productions (HiP), a web design and hosting company in Burbank, California, USA. HiP needs a system that not only tracks orders and improves workflow, but also stores all client information in a database. This client information can be used for a variety of marketing and contact purposes and is an important, integral part of HiP's client relationship management (CRM). The lack of a cohesive CMS at HiP has caused fundamental business problems such as lost orders, missed billing statements, and over- or under-billing. The research done during the investigation and analysis of the company and its needs should lead to a system that fully meets HiP's requirements. This system could be an off-the-shelf product with some customizations, or a completely new in-house system. Either solution has its pros and cons; the goal is to reach the decision that best fits HiP's needs and situation. The following is a concise version of the project, with particular emphasis on the individual steps that made up the decision process, as well as the techniques and methods applied.
Towards an integrated theory of economic governance – Conclusions from the governance of ethics
(2004)
This thesis covers the background, theory, design, layout, and experimental test results of an analog CMOS VLSI current-mode analog-to-digital converter. The system supports a project whose goal is to build a biologically relevant model of synaptic plasticity, named the Artificial Synapse. A critical part of the design, which is based on analog CMOS VLSI circuits, is the ability to activate a discrete number of channels by sampling an analog signal. Since currents are the signal of interest and the transistors are biased in weak inversion (the subthreshold regime), the system requires a current-mode A/D circuit that can operate at ultra-low power and current levels. To meet this need, two novel A/D converter approaches are proposed to replace the system's previous A/D converter design, which suffered from non-linear resolution, an uncoded output, and heavy bit oscillations. The initial technical requirements and key criteria for the new converter comprise a resolution of one nanoampere, an input current range of 0–100 nA, conversion frequencies of up to 5 kHz, and a power supply voltage of less than 1.5 V. Temperature range, chip area, and power dissipation were not specified due to the early stage of the Artificial Synapse project. Both novel converters produce seven-bit thermometer codes; their functional principle is best described as that of a current-mode flash analog-to-digital converter (ADC). Because the input signal is a subthreshold current, the A/D converter itself should also operate in the subthreshold realm. To support low-power operation, clocks and high currents were excluded from the design from the very start. To encode the thermometer code into standard binary code, a seven-to-three encoder was designed and integrated on the chip. In October 2003, the design was submitted for production to the MOSIS circuit fabrication service.
The AMI Semiconductor 1.5 micron ABN CMOS process was chosen to manufacture the chip. When the chip was returned in January 2004, measurements showed that both new A/D converter approaches achieved the excellent results expected from the SPICE simulations. With the new chip installed, it became possible to resolve input currents as small as one nanoampere and to achieve conversion frequencies of up to 5 kHz. Both circuits also meet the requirements set at the beginning of the project: they operate at a power supply voltage of less than 1.5 V while processing input currents in the range of 0–100 nA. A prototype printed circuit board (PCB) was developed, produced, and employed for experiments with the chip. The major feature of this test bed is its ability to generate and measure extremely low currents with high precision, which enables the monitoring of the very small currents processed by the chip.
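For illustration, the behaviour of the seven-to-three thermometer-to-binary encoder can be modelled in software. This is a behavioural sketch of the logic only, not the CMOS implementation described in the thesis:

```python
def thermometer_to_binary(bits):
    """Encode a 7-bit thermometer code (e.g. [1,1,1,0,0,0,0]) as a 3-bit binary count.

    Assumes a well-formed thermometer code, i.e. all 1s precede all 0s,
    as produced by the comparator chain of a flash ADC.
    """
    count = sum(bits)                               # number of comparators that fired
    return [(count >> i) & 1 for i in (2, 1, 0)]    # MSB first

print(thermometer_to_binary([1, 1, 1, 1, 1, 0, 0]))  # five comparators fired -> [1, 0, 1]
```

In the chip the same mapping is realized combinationally; the software model is handy for generating test vectors.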
This work treats the segmentation of 2D laser range data captured by an autonomous mobile indoor robot. It is part of the data processing that is necessary to navigate a mobile robot error-free through its environment. The whole process can generally be described as data capture, data processing, and navigation. In this project, the data processing deals with data captured by a laser sensor, which provides two-dimensional data as a series of distance measurements, i.e. point measurements of the environment. These point series have to be filtered and processed into a more convenient representation to provide a virtual environment map that the robot can use for error-free navigation. This project provides different solutions to the same problem: the conversion from distance points to model segments that represent the real-world environment as closely as possible. The advantages and disadvantages of the different segmentation algorithms are shown, along with a comparison taking into account computational time and robustness of the results.
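One well-known member of this algorithm family is recursive splitting (Ramer-Douglas-Peucker style split-and-merge). The following sketch, with a made-up L-shaped scan and threshold, shows the idea; it is an illustration of the technique class, not code from the thesis:

```python
import math

def point_line_distance(p, a, b):
    """Perpendicular distance of point p from the line through a and b."""
    (x, y), (x1, y1), (x2, y2) = p, a, b
    num = abs((y2 - y1) * x - (x2 - x1) * y + x2 * y1 - y2 * x1)
    den = math.hypot(y2 - y1, x2 - x1)
    return num / den if den else math.hypot(x - x1, y - y1)

def split_and_merge(points, threshold):
    """Recursively split a point series into line segments.

    Returns the indices of the segment endpoints.
    """
    if len(points) < 3:
        return [0, len(points) - 1]
    dists = [point_line_distance(p, points[0], points[-1]) for p in points]
    i = max(range(len(points)), key=dists.__getitem__)
    if dists[i] <= threshold:           # all points close to the chord: one segment
        return [0, len(points) - 1]
    left = split_and_merge(points[: i + 1], threshold)
    right = split_and_merge(points[i:], threshold)
    return left + [i + j for j in right[1:]]  # merge index lists, re-offset right part

# An L-shaped scan: two walls meeting at (5, 0).
scan = [(float(x), 0.0) for x in range(6)] + [(5.0, float(y)) for y in range(1, 6)]
print(split_and_merge(scan, threshold=0.1))  # -> [0, 5, 10]
```

The corner point (index 5) is correctly detected as a segment boundary; robustness against noisy scans is exactly what the comparison in this work evaluates.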
Web services are, thanks to excellent tool support, simple to provide and use in trivial cases. Their use in non-trivial Web service-based systems like I3M, however, poses new difficulties and problems. I3M is an instant messaging and chat system with distributed and local components collaborating via Web services. One difficulty is making a series of related Web service invocations within a stateful session. Another problem is the performance of collaborating, collocated, service-oriented components of a system, due to the high Web service invocation overhead, as measurements show. Solutions to both the difficulty and the problem are proposed.
The focus of this part of the research project lies on the process of developing a social responsibility standard within a network made up of various stakeholders. The International Organization for Standardization (ISO) is known as the world's leading institution for the development of standards. Besides setting standards in fields such as construction, agriculture, and information technology, the Technical Management Board (TMB) of ISO recently proposed to extend its activities further by developing an international standard addressing the social responsibility of organizations. In 2004, a new Working Group was established as a multi-stakeholder group of experts nominated by ISO's members as well as interested international and regional organizations, in order to provide guidance in setting international standards on social responsibility. In May 2006, a survey was conducted during the third conference of the ISO Working Group in Lisbon, Portugal. This empirical study was designed, on the one hand, to investigate the motivation of organizations and their delegates to engage in social responsibility; on the other hand, it had the objective of evaluating the individual participants' current perception and assessment of the network's efficiency, effectiveness, and legitimacy, a so-called 'snapshot' of this ISO process. Overall, the empirical study shows that the organizations and their delegates, who have dealt with the topic of SR for several years for diverse reasons, expect a tremendous effect from implementing ISO 26000 in their own organizations. Furthermore, the majority of respondents assess the decision-making within the ISO process positively with respect to the criteria of inclusiveness, fairness, capacity building, legitimacy, and transparency. Difficulties concerning the distribution of stakeholder influence are addressed.
The results of the survey support the efforts to establish policies and procedures that encourage a balanced representation of stakeholders in terms of gender, geography, and stakeholder groups.
Hardly any other development has influenced life on earth as much as the development of communication technology in recent decades. The advantages of mobile communication brought the industry enormous growth rates. For some years, however, an increasing saturation has been looming in the markets, especially in the developed nations, and new marketing strategies are needed for companies to distance themselves from their competitors. Against this background, ICT companies all over the world have started to look for new growth opportunities and found them in the so-called "emerging markets" of the developing nations. Exploiting this potential will be the central challenge for the mobile communication industry in the coming years. With this book I want to direct the readers' gaze towards these markets, which hold enormous potential for the whole industry. Furthermore, I want to introduce some generic strategic approaches that can help firms participate successfully in these markets.
This diploma thesis is devoted to the design and analysis of a radar signal enabling an object classification capability in surveillance radar systems based on high-resolution radar range profiles. It builds on the research results of Kastinger (2006), who investigated classification algorithms for high-resolution radar range profiles, and Meier (2007), who programmed a MATLAB toolbox for the evaluation of radar signals. A brief, classical introduction to radar fundamentals is given (Chapter 1), along with the motivation for this thesis and the basic parameters used. After high-resolution radar range profiles are discussed with special focus on surveillance radar systems (Chapter 2), the results of Kastinger (2006) are taken up (Chapter 3) as far as necessary for the following chapters. Basic and advanced radar signals are then discussed and analysed, especially with regard to their range resolution and sidelobe levels (Chapter 4). This includes linear and nonlinear frequency-modulated pulses as well as phase-coded pulses, coherent trains of identical pulses, and stepped-frequency waveforms. Their analysis is based on Meier's MATLAB toolbox. Chapter 5 brings up additional points that have to be considered in radar system design when implementing a classification capability, before the thesis ends with an overall conclusion (Chapter 6).
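To illustrate the kind of waveform analysis performed in Chapter 4, here is a minimal sketch of a linear frequency-modulated (chirp) pulse and its matched-filter response, i.e. pulse compression. The parameters are illustrative, not taken from the thesis:

```python
import cmath
import math

def lfm_pulse(n, bandwidth, pulse_len):
    """Complex baseband samples of a linear frequency-modulated (chirp) pulse."""
    k = bandwidth / pulse_len                       # chirp rate
    dt = pulse_len / n
    return [cmath.exp(1j * math.pi * k * (i * dt - pulse_len / 2) ** 2)
            for i in range(n)]

def matched_filter(signal, pulse):
    """Cross-correlate the signal with the pulse (matched filter for white noise)."""
    m = len(pulse)
    return [abs(sum(signal[lag + i] * pulse[i].conjugate() for i in range(m)))
            for lag in range(len(signal) - m + 1)]

# A 64-sample chirp (time-bandwidth product 64) embedded in an otherwise empty record:
pulse = lfm_pulse(64, bandwidth=1.0e6, pulse_len=64e-6)
signal = [0j] * 32 + pulse + [0j] * 32
response = matched_filter(signal, pulse)
peak = max(range(len(response)), key=response.__getitem__)
print(peak)  # the compressed pulse peaks where the echo starts (lag 32)
```

Range resolution and sidelobe level can then be read off the compressed response, which is exactly the comparison made between the basic and advanced waveforms.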
This working paper is part of a PhD research project dealing with the topics Social Responsibility, Stakeholder Theory and Network Governance, run by Maud Schmiedeknecht and supervised by Prof. Dr. habil. Josef Wieland, both from the Konstanz Institute for Intercultural Management, Values and Communication at the Konstanz University of Applied Sciences.
Today we live in a world characterized by a constantly changing environment. During the last decade, this highly volatile environment forced companies to implement strategies that identify, track, and minimise the risks that entrepreneurial activity entails. Unfortunately, risks account for only part of the uncertainty connected to future events. The other, no less important part of this uncertainty consists of possible positive developments, so-called opportunities. For this reason, the view is gaining ground in economic science and in practice that focusing solely on risks is not sufficient to fully exploit the potential of markets and companies. In the 16th century, the Dutch Renaissance humanist Desiderius Erasmus (1466-1536) said: "It is well known that among the blind, the one-eyed man is king." Transferring this statement to the context of risk management, the conclusion becomes apparent: the environmental uncertainty that surrounds entrepreneurial actions includes both opportunities and threats. As commonly practiced, though, risk management tools only address threats. While this approach is surely better than doing nothing, it can still be seen as a major weakness of the traditional risk management approach. In terms of Erasmus, this approach represents the one-eyed man compared with the blind. To continue this metaphor a little further, the one-eyed king could easily be relieved of his crown by an emperor who is able to see with two eyes. Although this problem is well known in economic science, up to now only little scientific attention has been devoted to the systematic identification and management of opportunities. In fact, most of the present literature focuses on the identification and handling of risk, and even though much of the recently published literature mentions the term opportunity, none of it proposes a solid approach for following up on it.
Still, facing the challenges of the present economic environment, it is not sufficient for companies to focus their attention on reducing risks. Instead, it is imperative to address the subject of opportunity management as well. With this paper, I want to underline the importance of opportunity management for all companies, independently of their size or the industry they operate in. This paper is therefore dedicated to all managers who strive to improve the professionalism of their companies in terms of strategic thinking. Furthermore, I hope that this paper can facilitate the practical implementation of a working opportunity management system.
In the automotive sector, many electromagnetically, pyrotechnically, or mechanically driven actuators are integrated to run comfort systems and to control safety systems in modern passenger cars. Using shape memory alloys (SMA), the existing systems could be simplified, performing the same function through new mechanisms with reduced size, weight, and cost. A drawback for the use of SMA in safety systems is the lack of materials knowledge concerning the durability of the switching function (the long-term stability of the shape memory effect). Pedestrian safety systems play a significant role in reducing injuries and fatalities caused by accidents. One automotive safety system for pedestrian protection is the bonnet lifting system. Based on such an application, this article gives an introduction to existing bonnet lifting systems for pedestrian protection, describes the use of quick-changing shape memory actuators, and presents the results of a study on the long-term stability of the tested NiTi wires. These wires were trained, exposed for up to 4 years at elevated temperatures (up to 140 °C), and tested with regard to their phase change temperatures, switching times, and strokes. For example, it was found that the A_P temperature (austenite peak) is shifted toward higher values with longer exposure periods and higher exposure temperatures. However, no delay in the switching time could be detected in the functional testing plant. This article gives some previously missing answers concerning the long-term stability of NiTi wires. With this knowledge, the number of future automotive applications using SMA can be increased. It can be concluded that the use of quick-changing shape memory actuators in safety systems could simplify the mechanism, reduce maintenance and manufacturing costs, and should be applicable to other automotive applications as well.
When using multi-camera matching techniques for 3D reconstruction, there is usually a trade-off between the quality of the computed depth map and the speed of the computation. Whereas high-quality matching methods take several seconds to several minutes to compute a depth map for one set of images, real-time methods achieve only low-quality results. In this paper we present a multi-camera matching method that runs in real time and yields high-resolution depth maps. Our method is based on a novel multi-level combination of normalized cross correlation, matching windows deformed according to the multi-level depth map information, and sub-pixel precise disparity maps. The whole process is implemented entirely on the GPU. With this approach we can process four 0.7-megapixel images into a full-resolution 3D depth map in 129 milliseconds. Our technique is tailored to the recognition of non-technical shapes, because our target application is face recognition.
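The matching cost at the core of such methods, normalized cross correlation (NCC), can be sketched in a few lines. This is plain scalar code for clarity; the paper's multi-level, deformed-window GPU implementation is not reproduced here:

```python
import math

def ncc(a, b):
    """Normalized cross correlation of two equally sized matching windows."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    den = math.sqrt(sum((x - ma) ** 2 for x in a) * sum((y - mb) ** 2 for y in b))
    return num / den if den else 0.0

# A 3x3 grey-level window, flattened to a list:
left_window  = [10, 12, 11, 40, 42, 41, 10, 11, 12]
same_shifted = [x + 5 for x in left_window]     # brightness offset: NCC is invariant
different    = [10, 11, 12, 10, 11, 12, 10, 11, 12]

print(round(ncc(left_window, same_shifted), 3))  # 1.0: a perfect match despite the offset
print(round(ncc(left_window, different), 3))
```

The invariance to brightness offsets (and gains) is what makes NCC attractive for matching across cameras with different exposure.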
To master complexity, we can organize it or discard it. The Art of Insight in Science and Engineering first teaches the tools for organizing complexity, then distinguishes the two paths for discarding complexity: with and without loss of information. Questions and problems throughout the text help readers master and apply these groups of tools. Armed with this three-part toolchest, and without complicated mathematics, readers can estimate the flight range of birds and planes and the strength of chemical bonds, understand the physics of pianos and xylophones, and explain why skies are blue and sunsets are red.
This paper investigates the effect of restrictions on a variable on the mean-variance space. A restriction may be the placing of upper and lower bounds on a variable; another limitation is the loss of continuity of a variable. Average examination marks are considered as an application of this limited mean-variance space. In this case, the bounds are given by the highest and lowest possible mark (e.g. 1.0 and 5.0). The limitation of the mean-variance space depends on the number of students who participate in the examination. The restriction of the loss of continuity is shown by the use of discrete marks (e.g. 1.0, 1.3, 1.7, 2.0, ...). Furthermore, target-shortfall-probability lines are integrated into the mean-variance space. These lines are used to indicate the proportion of students who achieve good or very good marks in the examination. In financial markets, the target shortfall probability is used as a risk criterion.
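As an illustration of the quantities involved (using made-up marks, not data from the paper), the mean, variance, and target shortfall probability of a set of discrete, bounded marks can be computed as follows:

```python
def mean_var(marks):
    """Population mean and variance of a list of marks."""
    n = len(marks)
    m = sum(marks) / n
    return m, sum((x - m) ** 2 for x in marks) / n

def shortfall_probability(marks, target):
    """Proportion of marks worse than the target (German scale: higher = worse)."""
    return sum(1 for x in marks if x > target) / len(marks)

# Discrete marks bounded by 1.0 (best) and 5.0 (worst):
marks = [1.0, 1.3, 1.7, 2.0, 2.0, 2.3, 3.0, 5.0]
m, v = mean_var(marks)
print(round(m, 4), round(v, 4))
print(shortfall_probability(marks, target=2.0))  # share of marks worse than 2.0
```

Plotting (mean, variance) points of many such examinations against the bounds-induced feasible region, together with lines of constant shortfall probability, reproduces the picture the paper analyses.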
Atom interferometers have a multitude of proposed applications in space including precise measurements of the Earth's gravitational field, in navigation & ranging, and in fundamental physics such as tests of the weak equivalence principle (WEP) and gravitational wave detection. While atom interferometers are realized routinely in ground-based laboratories, current efforts aim at the development of a space compatible design optimized with respect to dimensions, weight, power consumption, mechanical robustness and radiation hardness. In this paper, we present a design of a high-sensitivity differential dual species ⁸⁵Rb/⁸⁷Rb atom interferometer for space, including physics package, laser system, electronics and software. The physics package comprises the atom source consisting of dispensers and a 2D magneto-optical trap (MOT), the science chamber with a 3D-MOT, a magnetic trap based on an atom chip and an optical dipole trap (ODT) used for Bose-Einstein condensate (BEC) creation and interferometry, the detection unit, the vacuum system for 10⁻¹¹ mbar ultra-high vacuum generation, and the high-suppression factor magnetic shielding as well as the thermal control system.
The laser system is based on a hybrid approach using fiber-based telecom components and high-power laser diode technology and includes all laser sources for 2D-MOT, 3D-MOT, ODT, interferometry and detection. Manipulation and switching of the laser beams is carried out on an optical bench using Zerodur bonding technology. The instrument consists of 9 units with an overall mass of 221 kg, an average power consumption of 608 W (819 W peak), and a volume of 470 liters which would well fit on a satellite to be launched with a Soyuz rocket, as system studies have shown.
SInCom 2015
(2015)
Regarding moral concerns in the business sphere, integrity is often mentioned as one of the core values guiding the behavior of companies. Daimler, for instance, states: "Acting with integrity is the central requirement for sustainable success and a maxim that Daimler follows in its worldwide business practices." Reference to integrity is mostly supposed to signal that the company acts in a morally responsible way. Although some companies specify what acting with integrity means for them, it generally remains unclear what the concept of integrity entails, both broadly speaking and with reference to business. This conceptual gap shall be filled by developing a concept of integrity that can be transferred to the business context. For this purpose, the main criteria that constitute moral integrity will be discussed before reflecting on how these could be integrated into a practical and comprehensive concept of corporate integrity.
Earthquake response spectra as defined by Eurocode 8 (German NAD) are restricted to soils with shear wave velocities greater than 150 m/s. For soft soil layers, e.g. clay underlain by bedrock, special investigations are required because resonance effects of the layer significantly influence the shape of the spectrum. Numerical investigations are normally based on a one-dimensional theory of horizontally polarized shear waves propagating in the vertical direction. The paper describes a parametric study to define acceleration response spectra for a soft soil layer over a half-space for a wide range of layer heights and material parameters. Based on this study, a simplified method to describe response spectra for the model of a soft soil layer underlain by a viscoelastic half-space is given.
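The resonance effect driving this study is governed by the classic quarter-wavelength relation for a soft layer over rigid bedrock (a textbook result used for orientation here, not the paper's parametric method):

```python
def fundamental_frequency(vs, h):
    """Fundamental resonance frequency f0 = vs / (4 H) of a soft soil layer
    of height H over rigid bedrock, for vertically propagating, horizontally
    polarized shear waves."""
    return vs / (4.0 * h)

# Example: 20 m of soft clay with a shear wave velocity of 120 m/s
print(fundamental_frequency(vs=120.0, h=20.0))  # 1.5 Hz
```

Spectra for layers whose fundamental frequency falls in the plateau of the code spectrum differ most strongly from the standard shapes, which is why layer height and stiffness are the key parameters of the study.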
Sabbatical semester report
(2015)
Increasing robustness of handwriting recognition using character N-Gram decoding on large lexica
(2016)
Offline handwriting recognition systems often include a decoding step, i.e. retrieving the most likely character sequence from the underlying machine learning algorithm. Decoding is sensitive to ranges of weakly predicted characters, caused e.g. by obstructions in the scanned document. We present a new algorithm for robust decoding of handwriting recognizer outputs using character n-grams. Multidimensional hierarchical subsampling artificial neural networks with Long Short-Term Memory cells have been successfully applied to offline handwriting recognition. Output activations from such networks, trained with Connectionist Temporal Classification, can be decoded with several different algorithms in order to retrieve the most likely literal string they represent. Our new algorithm decodes the network output while restricting the possible strings to a large lexicon. The index used in this work is an n-gram index; tri-grams are used for the experimental comparisons. N-grams are extracted from the network output using a backtracking algorithm, and each n-gram is assigned a mean probability. The decoding result is obtained by intersecting the n-gram hit lists while calculating the total probability for each matched lexicon entry. We conclude with an experimental comparison of different decoding algorithms on a large lexicon.
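The lexicon-restricted lookup can be sketched with a toy trigram index: extract trigrams, then intersect their hit lists. The per-n-gram probabilities from the network output, and the backtracking extraction, are omitted for brevity; lexicon and queries are made up:

```python
def trigrams(word):
    """Set of character trigrams of a word."""
    return {word[i:i + 3] for i in range(len(word) - 2)}

# A toy n-gram index over a lexicon: trigram -> set of matching entries.
lexicon = ["handwriting", "handwritten", "recognition"]
index = {}
for entry in lexicon:
    for g in trigrams(entry):
        index.setdefault(g, set()).add(entry)

def lookup(recognized_ngrams):
    """Intersect the hit lists of the recognized n-grams."""
    hits = None
    for g in recognized_ngrams:
        entries = index.get(g, set())
        hits = entries if hits is None else hits & entries
    return hits or set()

# Suppose the network output strongly supports these trigrams:
print(lookup({"han", "itt", "ten"}))  # only "handwritten" contains all three
```

In the full algorithm each surviving lexicon entry is additionally scored with the product of its n-gram probabilities, so weakly predicted regions lower a candidate's score without eliminating it outright.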
In the reverse engineering process, one has to classify parts of point clouds by the correct type of geometric primitive. Features based on different geometric properties, such as point relations, normals, and curvature information, can be used to train classifiers like Support Vector Machines (SVMs). These geometric features are estimated in the local neighborhood of a point of the point cloud. The multitude of different features makes an in-depth comparison necessary. In this work we evaluate 23 features for the classification of geometric primitives in point clouds. Their performance is evaluated with SVMs classifying geometric primitives in simulated and real laser-scanned point clouds. We also introduce a normalization of point cloud density to improve classification generalization.
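To make the idea of a local geometric feature concrete, here is one hypothetical example of the kind compared in such evaluations (not necessarily one of the 23 features of this work): the RMS residual of a least-squares plane fitted to a point's neighborhood, which is near zero on planar patches and large on curved primitives.

```python
def plane_fit_residual(points):
    """RMS residual of the least-squares plane z = a*x + b*y + c through a
    local neighborhood: a simple flatness feature for primitive classification."""
    n = len(points)
    sx = sum(p[0] for p in points); sy = sum(p[1] for p in points)
    sz = sum(p[2] for p in points)
    sxx = sum(p[0] * p[0] for p in points); syy = sum(p[1] * p[1] for p in points)
    sxy = sum(p[0] * p[1] for p in points)
    sxz = sum(p[0] * p[2] for p in points); syz = sum(p[1] * p[2] for p in points)

    # Normal equations A [a b c]^T = r, solved by Cramer's rule.
    A = [[sxx, sxy, sx], [sxy, syy, sy], [sx, sy, n]]
    r = [sxz, syz, sz]

    def det3(m):
        return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
                - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
                + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

    d = det3(A)
    sol = []
    for j in range(3):
        m = [row[:] for row in A]
        for i in range(3):
            m[i][j] = r[i]
        sol.append(det3(m) / d)
    a, b, c = sol
    mse = sum((p[2] - (a * p[0] + b * p[1] + c)) ** 2 for p in points) / n
    return mse ** 0.5

grid = [(x / 4, y / 4) for x in range(-4, 5) for y in range(-4, 5)]
plane = [(x, y, 0.5 * x - 0.2 * y + 1.0) for x, y in grid]   # planar patch
bumpy = [(x, y, x * x + y * y) for x, y in grid]             # paraboloid patch
print(round(plane_fit_residual(plane), 6))   # ~0 for the plane
print(plane_fit_residual(bumpy) > 0.1)
```

A feature vector built from several such local measures is what the SVM sees for each point.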
The Universal Serial Bus (USB) is a worldwide standard for communication with peripherals. Nowadays, USB interfaces are integrated into almost every device and are used to connect peripherals and computers. USB devices communicate over standardized hardware, i.e. cable, plug, and socket, and different standardized communication protocols exist depending on the application. These protocols must be verified so that devices can communicate with each other regardless of their country of origin.
The verification process is very important: only then can companies sell products with such interfaces and their designated logo, guaranteeing a standard that is upheld all over the world. Devices have to complete various test procedures to get certified. Otherwise, a company is not allowed to use logos or designations such as "USB" or information about data rates such as "SuperSpeed". Furthermore, successfully completed test procedures prove, based on a professional method, that a device works properly.
The Human-Machine-Interface (HMI) device family from the company Marquardt Verwaltungs GmbH uses the USB interface for service and data exchange purposes. The service application is realized through a Virtual COM Port (VCP) based on the Communication Device Class (CDC) of USB. For data exchange between the HMI device and a computer, the Media Transfer Protocol (MTP), based on the Still Image Capture Device class, is to be used. Of course, the integrated circuit that implements the USB interface on the circuit board of the HMI device has to be verified too; this verification will be performed by an external company. The communication protocols, in contrast, do not need formal verification but must be examined: the mere identification of a USB class by an operating system neither guarantees proper functionality nor complies with a professional scientific method.
To accelerate the development of a project and to reduce production costs, owning a test environment is a significant advantage. Microsoft provides the possibility to verify devices on Windows operating systems through its Windows Certification Program, which contains software that can be used for verification purposes. One such tool is the Windows Hardware Certification Kit (HCK), which we want to set up and use to put the HMI device under test in order to examine the implementation of MTP.
Thus, it is possible to use the HCK test setup during the development process to examine a current implementation without great effort, i.e. without cooperation with an external company or similar approaches that would considerably delay the whole development process.
InBetween
(2017)
In tourism, energy demands are particularly high. Tourism facilities such as hotels require large amounts of electric and heating or cooling energy. Their supply, however, is usually still based on fossil energies. This research approach analyses the potential of promoting renewable energies in Black Forest tourism. It focuses on a combined and hence highly efficient production of both electric and thermal energy by biogas plants on the one hand, and its provision to local tourism facilities via short distance networks on the other. Based on surveys and qualitative empiricism, and considering regional resource availability as well as socio-economic aspects, it thus examines strengths, weaknesses, opportunities and threats that can arise from such a cooperation.
We propose a novel end-to-end neural network architecture that, once trained, directly outputs a probabilistic clustering of a batch of input examples in one pass. It estimates a distribution over the number of clusters k and, for each 1 ≤ k ≤ k_max, a distribution over the individual cluster assignments for each data point. The network is trained in advance in a supervised fashion on separate data to learn grouping by any perceptual similarity criterion based on pairwise labels (same/different group). It can then be applied to different data containing different groups. We demonstrate promising performance on high-dimensional data like images (COIL-100) and speech (TIMIT). We call this "learning to cluster" and show its conceptual difference to deep metric learning, semi-supervised clustering, and other related approaches, while having the advantage of performing learnable clustering fully end-to-end.
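The two output heads described above can be decoded into a hard clustering by a simple argmax over both distributions. The sketch below uses made-up softmax outputs for a batch of 4 points with k_max = 3; it illustrates the output format only, not the network itself:

```python
# Head 1: distribution over the number of clusters k (here k_max = 3).
p_k = [0.1, 0.7, 0.2]                 # P(k = 1), P(k = 2), P(k = 3)

# Head 2: for each k, one assignment distribution per data point.
p_assign = {
    1: [[1.0]] * 4,
    2: [[0.9, 0.1], [0.8, 0.2], [0.2, 0.8], [0.1, 0.9]],
    3: [[0.5, 0.3, 0.2]] * 4,
}

k = max(range(len(p_k)), key=p_k.__getitem__) + 1          # most likely cluster count
labels = [max(range(k), key=dist.__getitem__) for dist in p_assign[k]]
print(k, labels)  # -> 2 [0, 0, 1, 1]
```

Keeping the full distributions instead of the argmax is what makes the clustering probabilistic and trainable end-to-end against pairwise labels.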
In tourism, energy demands are particularly high. Tourism facilities such as hotels require large amounts of electric and heating / cooling energy while their supply is usually still based on fossil energies.
This research approach analyses the potential of promoting renewable energies in tourism. It focuses on a combined and hence highly efficient production of both electric and thermal energy by biogas plants on the one hand and its provision to local tourism facilities via short distance networks on the other. Considering regional resource availability as well as socio-economic aspects, it thus examines strengths, weaknesses, opportunities and threats that can arise from such a micro-cooperation. The research aim is to provide an actor-based, spatially transferable feasibility analysis.
Offline handwriting recognition systems often use LSTM networks trained with line or word images. Multi-line text makes it necessary to use segmentation to explicitly obtain these images. Skewed, curved, overlapping, or incorrectly written text, as well as noise, can lead to errors during segmentation of multi-line text and reduce the overall recognition capacity of the system. The past year has seen the introduction of deep learning methods capable of segmentation-free recognition of whole paragraphs. Our method uses Conditional Random Fields to represent text and align it with the network output in order to calculate a loss function for training. Experiments are promising and show that the technique is capable of training an LSTM multi-line text recognition system.
Algorithms for calculating the string edit distance are used, e.g., in information retrieval and document analysis systems, or for the evaluation of text recognizers. Text recognition based on CTC-trained LSTM networks includes a decoding step to produce a string, possibly using a language model, and an evaluation using the string edit distance. The decoded string can further be used as a query for database search, e.g. in document retrieval. We propose to closely integrate dictionary search with text recognition and to train both combined in a continuous fashion. This work shows that LSTM networks are capable of calculating the string edit distance while allowing for an exchangeable dictionary that separates the learned algorithm from the data. This could be a step towards integrating text recognition and dictionary search in one deep network.
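For reference, the quantity the networks learn to approximate is the classic string edit distance, computed exactly by the Wagner-Fischer dynamic program:

```python
def edit_distance(a, b):
    """Levenshtein distance between strings a and b (Wagner-Fischer DP,
    O(len(a) * len(b)) time, O(len(b)) memory)."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution / match
        prev = cur
    return prev[-1]

print(edit_distance("recogniton", "recognition"))  # 1 (one missing 'i')
```

Unlike this exact algorithm, the LSTM variant in this work produces the distance from learned representations, which is what allows the dictionary to be exchanged without retraining the comparison itself.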
A growing share of modern trade policy instruments is shaped by non-tariff barriers (NTBs). Based on a structural gravity equation and the recently updated Global Trade Alert database, we empirically investigate the effect of NTBs on imports. Our analysis reveals that the implementation of NTBs reduces imports of affected products by up to 12%. Their trade dampening effect is thus comparable to that of trade defence instruments such as anti-dumping duties. It is smaller for exporters that have a free trade agreement with the importing country. Different types of NTBs affect trade to a different extent. Finally, we investigate the effect of behind-the-border measures, showing that they significantly lower the importer’s market access.
In 1970, B.A. Asner, Jr., proved that for a real quasi-stable polynomial, i.e., a polynomial whose zeros lie in the closed left half-plane of the complex plane, its finite Hurwitz matrix is totally nonnegative, i.e., all its minors are nonnegative, and that the converse statement is not true. In this work, we explain this phenomenon in detail, and provide necessary and sufficient conditions for a real polynomial to have a totally nonnegative finite Hurwitz matrix.
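Asner's direction of the statement above can be checked numerically. The following sketch builds the finite Hurwitz matrix of a polynomial and tests total nonnegativity by brute-force enumeration of all minors (helper names are illustrative; the brute-force check is only practical for small degrees):

```python
from itertools import combinations

def hurwitz_matrix(coeffs):
    """Finite n x n Hurwitz matrix of p(s) = a0 s^n + ... + an.

    Entry (i, j) is a_{2j-i} with 1-based indices and a_k = 0 outside 0..n.
    """
    n = len(coeffs) - 1
    a = lambda k: coeffs[k] if 0 <= k <= n else 0
    return [[a(2 * j - i) for j in range(1, n + 1)] for i in range(1, n + 1)]

def det(m):
    # Laplace expansion along the first row; fine for the small minors used here.
    if len(m) == 1:
        return m[0][0]
    return sum((-1) ** c * m[0][c] * det([row[:c] + row[c + 1:] for row in m[1:]])
               for c in range(len(m)))

def is_totally_nonnegative(m, tol=1e-9):
    """Check that every minor of m is nonnegative (exponential cost, small n only)."""
    n = len(m)
    for k in range(1, n + 1):
        for rows in combinations(range(n), k):
            for cols in combinations(range(n), k):
                if det([[m[r][c] for c in cols] for r in rows]) < -tol:
                    return False
    return True

# (s^2 + 1)(s + 2) = s^3 + 2 s^2 + s + 2 is quasi-stable (zeros at -2 and +/- i):
H = hurwitz_matrix([1, 2, 1, 2])
print(is_totally_nonnegative(H))  # True
```

The converse failure noted by Asner means a totally nonnegative finite Hurwitz matrix does not by itself certify quasi-stability, which is exactly the gap the paper's conditions close.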
Further applications of the Cauchon algorithm to rank determination and bidiagonal factorization
(2018)
For a class of matrices connected with Cauchon diagrams, Cauchon matrices, and the Cauchon algorithm, a method for determining the rank, and for checking a set of consecutive row (or column) vectors for linear independence is presented. Cauchon diagrams are also linked to the elementary bidiagonal factorization of a matrix and to certain types of rank conditions associated with submatrices called descending rank conditions.
Deep neural networks have become a veritable alternative to classic speaker recognition and clustering methods in recent years. However, while the speech signal clearly is a time series, and despite the body of literature on the benefits of prosodic (suprasegmental) features, identifying voices has usually not been approached with sequence learning methods. Only recently has a recurrent neural network (RNN) been successfully applied to this task, while the use of convolutional neural networks (CNNs), which, unlike RNNs, are not able to capture arbitrary time dependencies, still prevails. In this paper, we show the effectiveness of RNNs for speaker recognition by improving state-of-the-art speaker clustering performance and robustness on the classic TIMIT benchmark. We provide arguments why RNNs are superior by experimentally showing a “sweet spot” of the segment length for successfully capturing prosodic information that has been theoretically predicted in previous work.
Today’s markets are characterized by fast and radical changes, posing an essential challenge to established companies. Startups, by contrast, seem to be more capable of developing radical innovations to succeed in those volatile markets. Thus, established companies have started to experiment with various approaches to implement startup-like structures in their organizations. Internal corporate accelerators (ICAs) are a novel form of corporate venturing, aiming to foster bottom-up innovations through intrapreneurship. However, ICAs still lack empirical investigation. This work contributes to a deeper understanding of the interface between the ICA and the core organization and of the respective support activities (resource access and support services) that create an innovation-supportive work environment for the intrapreneurial team. The results of this qualitative study, comprising 12 interviews with ICA teams from two German high-tech companies, show that the resources provided by ICAs differ from the support activities of external accelerators. Further, the study shows that some resources have both supportive and obstructive potential for the intrapreneurial teams within the ICA.
In the field of autonomously driving vehicles, environment perception, including dynamic objects such as other road users, is essential. In particular, detecting other vehicles in road traffic using sensor data is of utmost importance. As the sensor data and the applied system model for the objects of interest are corrupted by noise, a filter algorithm must be used to track moving objects. With LIDAR sensors, one object gives rise to more than one measurement per time step and is therefore called an extended object. This allows the object's position to be estimated jointly with its orientation, extension and shape. Estimating an arbitrarily shaped object comes with a higher computational effort than estimating the shape of an object that can be approximated by a basic geometric shape such as an ellipse or a rectangle. In the case of a vehicle, a rectangular shape is an accurate approximation.
A recently developed approach models the contour of a vehicle as a periodic B-spline function. This representation is easy to use, as the contour can be specified by a few basis points in Cartesian coordinates. Rotating, scaling and translating the contour are also easy to handle with a spline representation. This contour model can be used to develop a measurement model for extended objects that can be integrated into a tracking filter. Another approach to modeling the shape of a vehicle is the so-called bounding box, which represents the shape as a rectangle.
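A minimal sketch of such a closed contour, assuming a uniform cubic closed B-spline (the thesis' knot placement and spline degree may differ). Because the spline is linear in its basis points, rotating, scaling or translating the basis points transforms the whole contour the same way:

```python
def periodic_bspline(ctrl, t):
    """Evaluate a closed uniform cubic B-spline contour at parameter t in [0, len(ctrl))."""
    n = len(ctrl)
    i = int(t) % n        # index of the active spline segment
    u = t - int(t)        # local parameter within the segment, in [0, 1)
    # Uniform cubic B-spline basis functions (they sum to 1 for every u).
    b = [(1 - u) ** 3 / 6,
         (3 * u ** 3 - 6 * u ** 2 + 4) / 6,
         (-3 * u ** 3 + 3 * u ** 2 + 3 * u + 1) / 6,
         u ** 3 / 6]
    # Wrap-around indexing makes the contour periodic (closed).
    x = sum(b[k] * ctrl[(i - 1 + k) % n][0] for k in range(4))
    y = sum(b[k] * ctrl[(i - 1 + k) % n][1] for k in range(4))
    return x, y

ctrl = [(1, 0), (0, 1), (-1, 0), (0, -1)]   # four basis points in Cartesian coordinates
print(periodic_bspline(ctrl, 0.0))           # a point on the smooth closed contour
scaled = [(2 * px, 2 * py) for px, py in ctrl]  # scaling the basis points scales the contour
```

A vehicle contour would use more basis points placed along the car's outline; the evaluation stays the same.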
In this thesis, the basics of single-, multi- and extended-object tracking, as well as the basics of B-spline functions, are addressed. Afterwards, the spline measurement model is derived in detail and integrated into an extended Kalman filter to track a single extended object. An implementation of the resulting algorithm is compared with a given implementation of the rectangular shape estimator. The comparison is done using long-term considerations with Monte Carlo simulations and by analyzing the results of a single run. To this end, both algorithms are applied to the same measurements, which are generated by an artificial LIDAR sensor in a simulation environment.
In a real-world tracking scenario, several extended objects may be detected, along with measurements that do not originate from a real object, called clutter measurements. Objects may also suddenly appear and disappear. A filter framework investigated in recent years that can handle tracking multiple objects in a cluttered environment is the random finite set based approach. The idea of random finite sets and their use in a tracking filter is reviewed in this thesis. Afterwards, the spline measurement model is integrated into a multi extended object tracking framework. An implementation of the resulting filter is investigated in a long-term consideration using Monte Carlo simulations and by analyzing the results of a single run. The multi extended object filter is also applied to artificial LIDAR measurements generated in a simulation environment.
The comparison of the spline based and rectangle based extended object trackers shows a more stable performance of the spline tracker. Some problems that have to be addressed in future work are also discussed. The investigation of the resulting multi extended object tracker shows a successful integration of the spline measurement model into a multi extended object framework. Here, too, some problems remain to be solved in future work.
Thermal shape memory alloys show extraordinary material properties and can be used as actuators, dampers and sensors. Since their discovery in the middle of the last century they have been investigated and further developed. The majority of the industrial applications with the highest material sales can still be found in the medical industry, where they are used due to their superelastic and thermal shape memory effect, e.g. as stents or as guidewires and tools in minimally invasive surgery. Particularly in recent years, more and more applications have been developed for other industrial fields, e.g. for the household goods, civil engineering and automotive sectors. In this context it is worth mentioning that, in the latter sector, million-unit series applications have found their way into the vehicles of some European automobile manufacturers. The German VDI guideline for shape memory alloys introduced in 2017 will give the material a further boost in application. Last but not least, the new production technologies of additive manufacturing with metal laser sintering plants open up additional applications for these multifunctional materials. This paper gives an overview of the extraordinary material properties of shape memory components, shows examples of different applications and discusses European trends against the background of the most recent standard and new production technologies.
CO2 compensation measures, in particular the compensation of flights, are becoming more and more popular. Carbon offsetting refers to measures, financed by donations, that use climate protection projects to save greenhouse gas emissions equivalent to those previously emitted elsewhere.
CO2 abatement costs are often low in developing countries, which is why most offset projects are implemented there. However, this does not mean that the holiday destination and the project country are related to each other in any way.
By linking carbon offset projects with the destination country, the tourist is able to get an impression of the co-financed project. If such projects are realized in cooperation with the hotel, the hotel operator gains a new tourist attraction and can demonstrate its commitment to climate protection in a PR-effective way.
This thesis deals with the problem of tracking multiple extended objects. This tracking problem occurs, for instance, when a car with sensors drives on the road and detects multiple other cars in front of it. When the setup between the sensor and the other cars is such that multiple measurements are created by each single car, the cars are called extended objects. This can occur in real-world scenarios, mainly with the use of high-resolution sensors in near-field applications. In such a near-field scenario, a single object occupies several resolution cells of the sensor, so that multiple measurements are generated per scan. The measurements are additionally superimposed by the sensor’s noise. Besides the object-generated measurements, false alarms occur, which are not caused by any object, and sometimes in a sensor scan single objects can be missed, so that they do not generate any measurements.
To handle these scenarios, object tracking filters are needed that process the sensor measurements in order to obtain a stable and accurate estimate of the objects in each sensor scan. The scope of this thesis is to implement such a tracking filter for extended objects, i.e. a filter that estimates their positions and extents. In this context, the topic of measurement partitioning arises, which is a pre-processing of the measurement data. With partitioning, measurements that were likely generated by one object are put into one cluster, also called a cell. The obtained cells are then processed by the tracking filter for the estimation process. The partitioning of the measurement data is crucial for the performance of the tracking filter, because insufficient partitioning leads to bad tracking performance, i.e. inaccurate object estimates.
In this thesis, a Gaussian inverse Wishart Probability Hypothesis Density (GIW-PHD) filter was implemented to handle the multiple extended object tracking problem. Within this filter framework, the set of objects is modelled as a Random Finite Set (RFS) and the objects’ extents as random matrices (RM). The partitioning methods used to cluster the measurement data are existing ones as well as a new approach based on likelihood sampling. The applied classical heuristic methods are Distance Partitioning (DP) and Sub-Partitioning (SP), whereas the proposed likelihood-based approach is called Stochastic Partitioning (StP). The latter was developed in this thesis based on the Stochastic Optimisation approach by Granström et al. An implementation, including the StP method and its integration into the filter framework, is provided within this thesis.
The implementations using the different partitioning methods were tested on simulated random multi-object scenarios and in a fixed parallel tracking scenario using Monte Carlo methods. Further, a runtime analysis was done to provide insight into the computational effort of the different partitioning methods. The results show that the StP method outperforms the classical partitioning methods in scenarios where objects move spatially close together: the filter using StP performs more stably and yields more accurate estimates. However, this advantage comes with a higher computational effort compared to the classical heuristic partitioning methods.