InBetween
(2017)
The central task of electrocardiographic examinations is to increase the reliability of diagnosing the condition of the heart. Within this task, an important direction is the solution of the inverse problem of electrocardiography, based on processing the electrocardiographic signals of multichannel cardio leads with known electrode coordinates in these leads (Titomir et al., Noninvasive Electrocardiotopography, 2003; Macfarlane et al., Comprehensive Electrocardiology, 2nd ed., Chapter 9, 2011).
Many countries offer state credit guarantees to support credit-constrained exporters. The policy instrument is commonly justified by governments as a means of mitigating adverse outcomes of financial market frictions for exporting firms. Accumulated returns to the German state credit guarantee scheme deriving from risk-compensating premia have outweighed accumulated losses over the past 60 years. Why do private financial agents not step in and provide insurance given that the state-run program yields positive returns? We argue that costs of risk diversification, liquidity management, and coordination among creditors limit the ability of private financial agents to offer comparable insurance products. Moreover, we suggest that the government’s greater effectiveness in recovering claims in foreign countries endows the state with a cost advantage in dealing with the risks involved in large export projects. We test these hypotheses using monthly firm-level data combined with official transaction-level data on covered exports of German firms and find suggestive evidence that positive effects on trade are due to mitigated financial constraints: State credit guarantees benefit firms that are dependent on external finance, if the value at risk which they seek to cover is large, and at times when refinancing conditions on the private financial market are tight.
Earthquake response spectra as defined by Eurocode 8 (German NAD) are restricted to soils with shear wave velocities greater than 150 m/s. For soft soil layers, e.g. of clay underlain by bedrock, special investigations are required because resonance effects of the layer significantly influence the shape of the spectrum. Numerical investigations are normally based on a one-dimensional theory of horizontally polarized shear waves propagating in the vertical direction. The paper describes a parametric study to define acceleration response spectra for a soft soil over a half-space for a wide range of soil layer heights and material parameters. Based on this study, a simplified method is given to describe response spectra for the model of a soft soil layer underlain by a viscoelastic half-space.
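The resonance effect driving these investigations can be illustrated with the standard one-dimensional result for a homogeneous soil layer over rigid bedrock, f0 = vs / (4H). This is only a first approximation (the paper's model uses a viscoelastic half-space, not rigid bedrock), and the function name below is illustrative:

```python
def fundamental_frequency(vs_m_per_s: float, layer_height_m: float) -> float:
    """Fundamental frequency f0 = vs / (4 H) of a homogeneous soil layer
    over rigid bedrock (1D vertically propagating shear waves)."""
    return vs_m_per_s / (4.0 * layer_height_m)

# A soft layer with vs = 150 m/s and H = 25 m resonates at 1.5 Hz:
print(fundamental_frequency(150.0, 25.0))  # → 1.5
```

Spectral shapes change most strongly near this frequency, which is why fixed code spectra become unreliable for soft layers.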
Cities around the world are facing the implications of a changing climate as an increasingly pressing issue. The negative effects of climate change are already being felt today. Therefore, adaptation to these changes is a mission that every city must master. Leading practices worldwide demonstrate various urban efforts on climate change adaptation (CCA) which are already underway. Above all, the integration of climate data, remote sensing, and in situ data is key to a successful and measurable adaptation strategy. Furthermore, these data can act as a timely decision support tool for municipalities to develop an adaptation strategy, decide which actions to prioritize, and gain the necessary buy-in from local policymakers. The implementation of agile data workflows can facilitate the integration of climate data into climate-resilient urban planning. Due to local specificities, (supra)national, regional, and municipal policies and (by-)laws, as well as geographic and related climatic differences worldwide, there is no single path to climate-resilient urban planning. Agile data workflows can support interdepartmental collaboration and, therefore, need to be integrated into existing management processes and government structures. Agile management, which has its origins in software development, can be a way to break down traditional management practices, such as static waterfall models and sluggish stage-gate processes, and enable the increased level of flexibility and agility required in urgent situations. This paper presents the findings of an empirical case study conducted in cooperation with the City of Constance in southern Germany, which is pursuing a transdisciplinary and trans-sectoral co-development approach to make management processes more agile in the context of climate change adaptation.
The aim is to present a possible way of integrating climate data into CCA planning by changing the management approach and implementing a toolbox for low-threshold access to climate data. The city administration, in collaboration with the University of Applied Sciences Constance, the Climate Service Center Germany (GERICS), and the University of Stuttgart, developed a co-creative and participatory project, CoKLIMAx, with the objective of integrating climate data into administrative processes in the form of a toolbox. One key element of CoKLIMAx is the involvement of the population, the city administration, and political decision-makers through targeted communication and regular feedback loops among all involved departments and stakeholder groups. Based on the results of a survey of 72 administrative staff members and a literature review on agile management in municipalities and city administrations, recommendations on a workflow and communication structure for cross-departmental strategies for resilient urban planning in the City of Constance were developed.
The main objective of this paper is to revisit the Euro method in a critical and constructive way. We have analysed some arguments against the Euro method published recently in the literature, as well as some other relevant aspects of the SUT-Euro and SUT-RAS methods not covered before. Although the Euro method is not perfect, we believe that there is still room for its use in updating/regionalizing Supply and Use tables.
In my research sabbatical I worked on three different topics, namely orthogonal polynomials in geometric modeling, re-parametrized univariate subdivision curves, and the reconstruction of 3d-fish models and other zoological artifacts. In the subsequent sections, I will describe my particular activity in these different fields. The sections are meant to present an overview of my research activities, leaving out the technical details.
Section 1 is on orthogonal polynomials and other related generating systems for spaces of smooth functions.
In Section 2, I will discuss the application of various re-parametrization schemes for interpolatory subdivision algorithms for the generation of space curves.
Section 3 is concerned with my research at the University of Queensland, Brisbane, in collaboration with Dr. Ulrike Siebeck from the School of Biomedical Sciences, on fish behavior and in particular the reconstruction of 3d-fish models.
In the last Section 4, I will describe what effects this research will have on my subsequent teaching at the University of Applied Sciences Konstanz (HTWG).
Matrix methods for the computation of bounds for the range of a complex polynomial and its modulus over a rectangular region in the complex plane are presented. The approach relies on the expansion of the given polynomial into Bernstein polynomials. The results are extended to multivariate complex polynomials and rational functions.
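The enclosure property underlying this approach can be sketched for a univariate real polynomial on [0, 1]: the coefficients of its Bernstein expansion bound its range. The conversion formula b_i = Σ_{j≤i} (C(i,j)/C(n,j))·a_j is standard; the extension to complex coefficients, moduli, and rectangular regions follows the paper and is not shown here:

```python
from math import comb

def bernstein_coeffs(a):
    """Bernstein coefficients of p(x) = sum a[j] * x**j on [0, 1].
    b_i = sum_{j<=i} C(i,j)/C(n,j) * a[j]; min(b) <= p(x) <= max(b)."""
    n = len(a) - 1
    return [sum(comb(i, j) / comb(n, j) * a[j] for j in range(i + 1))
            for i in range(n + 1)]

# p(x) = x^2 - x has range [-0.25, 0] on [0, 1]; the Bernstein
# coefficients enclose that range:
print(bernstein_coeffs([0, -1, 1]))  # → [0.0, -0.5, 0.0]
```

The bounds are generally not tight, but subdividing the interval and re-expanding makes them converge to the true range.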
This paper examines the interdependencies of tourism, Buddhism and sustainability, combining in-depth interviews with Buddhism experts and non-participant observation in a mixed-method approach. The area under investigation is the Alpine region of Austria, Germany and Switzerland, since it is home to Asian and Western forms of Buddhism tourism alike. Results show that Buddhism tourism as a value-based activity is on the one hand not commercial, but since demand is rising, tendencies towards more commercial forms can be observed on the other. As a modest form of activity, Buddhism tourism does not shape the landscape of the Alpine area, and by its nature it incorporates sustainability.
Code-based cryptosystems are promising candidates for post-quantum cryptography. Recently, generalized concatenated codes over Gaussian and Eisenstein integers were proposed for those systems. For a channel model with errors of restricted weight, those q-ary codes lead to high error correction capabilities. Hence, these codes achieve high work factors for information set decoding attacks. In this work, we adapt this concept to codes for the weight-one error channel, i.e., a binary channel model where at most one bit-error occurs in each block of m bits. We also propose a low complexity decoding algorithm for the proposed codes. Compared to codes over Gaussian and Eisenstein integers, these codes achieve higher minimum Hamming distances for the dual codes of the inner component codes. This property increases the work factor for a structural attack on concatenated codes leading to higher overall security. For comparable security, the key size for the proposed code construction is significantly smaller than for the classic McEliece scheme based on Goppa codes.
Generalized Concatenated Codes over Gaussian and Eisenstein Integers for Code-Based Cryptography
(2021)
The code-based McEliece and Niederreiter cryptosystems are promising candidates for post-quantum public-key encryption. Recently, q-ary concatenated codes over Gaussian integers were proposed for the McEliece cryptosystem together with the one-Mannheim error channel, where the error values are limited to Mannheim weight one. Due to the limited error values, the codes over Gaussian integers achieve a higher error correction capability than maximum distance separable (MDS) codes with bounded minimum distance decoding. This higher error correction capability improves the work factor regarding decoding attacks based on information-set decoding. The codes also enable a low complexity decoding algorithm for decoding beyond the guaranteed error correction capability. In this work, we extend this coding scheme to codes over Eisenstein integers. These codes have advantages for the Niederreiter system. Additionally, we propose an improved code construction based on generalized concatenated codes. These codes extend the rate region where the work factor is beneficial compared to MDS codes. Moreover, generalized concatenated codes are more robust against structural attacks than ordinary concatenated codes.
The performance and reliability of non-volatile NAND flash memories deteriorate as the number of program/erase cycles grows. The reliability also suffers from cell-to-cell interference, long data retention times, and read disturb. These processes affect the read threshold voltages. The aging of the cells causes voltage shifts which lead to high bit error rates (BER) with fixed pre-defined read thresholds. This work proposes two methods that aim at minimizing the BER by adjusting the read thresholds. Both methods utilize the number of errors detected in the codeword of an error correction code. It is demonstrated that the observed number of errors is a good measure for the voltage shifts and is utilized for the initial calibration of the read thresholds. The second approach is a gradual channel estimation method that utilizes the asymmetrical error probabilities for the one-to-zero and zero-to-one errors that are caused by threshold calibration errors. Both methods are investigated utilizing the mutual information between the optimal read voltage and the measured error values.
Numerical results obtained from flash measurements show that these methods reduce the BER of NAND flash memories significantly.
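The gradual calibration idea can be sketched as follows: with a well-placed threshold the two error types are roughly balanced, so an excess of one error type over the other indicates the direction of the next adjustment. The step logic, sign convention, and names below are illustrative, not the authors' exact algorithm:

```python
def adjust_threshold(v_read, errors_1to0, errors_0to1, step):
    """Shift the read threshold toward balancing the asymmetric error
    counts reported by the error correction code. More 1->0 than 0->1
    errors means the threshold sits too high (illustrative sign
    convention), so it is lowered, and vice versa."""
    if errors_1to0 > errors_0to1:
        return v_read - step
    if errors_0to1 > errors_1to0:
        return v_read + step
    return v_read  # balanced counts: threshold is near optimal

print(adjust_threshold(2.0, 12, 3, 0.25))  # → 1.75
```

Iterating this rule over successive reads tracks the slow voltage shifts caused by cell aging without any extra reference reads.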
Sabbatical semester report
(2020)
Sabbatical semester report
(2015)
Thermal shape memory alloys show extraordinary material properties and can be used as actuators, dampers and sensors. Since their discovery in the middle of the last century, they have been investigated and further developed. The majority of the industrial applications with the highest material sales can still be found in the medical industry, where they are used due to their superelastic and thermal shape memory effects, e.g. as stents or as guidewires and tools in minimally invasive surgery. Particularly in recent years, more and more applications have been developed for other industrial fields, e.g. the household goods, civil engineering and automotive sectors. In this context it is worth mentioning that for the latter sector, million-seller series applications have found their way into the products of some European automobile manufacturers. The German VDI guideline for shape memory alloys introduced in 2017 will give the material a further boost in application. Last but not least, the new production technologies of additive manufacturing with metal laser sintering plants open up additional applications for these multifunctional materials. This paper gives an overview of the extraordinary material properties of shape memory components, shows examples of different applications and discusses European trends against the background of the most recent standard and new production technologies.
In automotive, a lot of electromagnetically, pyrotechnically or mechanically driven actuators are integrated to run comfort systems and to control safety systems in modern passenger cars. Using shape memory alloys (SMA), the existing systems could be simplified, performing the same function through new mechanisms with reduced size, weight, and costs. A drawback for the use of SMA in safety systems is the lack of materials knowledge concerning the durability of the switching function (long-time stability of the shape memory effect). Pedestrian safety systems play a significant role in reducing injuries and fatal casualties caused by accidents. One automotive safety system for pedestrian protection is the bonnet lifting system. Based on such an application, this article gives an introduction to existing bonnet lifting systems for pedestrian protection, describes the use of quick-changing shape memory actuators and the results of the study concerning the long-time stability of the tested NiTi wires. These wires were trained, exposed for up to 4 years at elevated temperatures (up to 140°C) and tested regarding their phase change temperatures, times, and strokes. For example, it was found that the A_P temperature is shifted toward higher temperatures with longer exposure periods and higher temperatures. However, in the functional testing plant a delay in the switching time could not be detected. This article gives some answers concerning the long-time stability of NiTi wires that were missing until now. With this knowledge, the number of future automotive applications using SMA can be increased. It can be concluded that the use of quick-changing shape memory actuators in safety systems could simplify the mechanism, reduce maintenance and manufacturing costs, and should be applicable to other automotive applications as well.
As fish farming is becoming more and more important worldwide, this ongoing project aims at the simulation- and test-based analysis of highly stressed wire contacts, as found in off-shore fish farm cages, in order to make them more reliable. The quasi-static tensile test of a wire mesh provides data for the construction of a finite element model to get a better understanding of the behavior of the high-strength stainless steel from which the cages are made. Fatigue tests provide new insights that are used to adjust the finite element model in order to predict the probability of possible damage caused by heavy mechanical loads (waves, storms, predators such as sharks).
This paper describes the rationale and the development of a structured digital approach for measuring corporate environmental sustainability using performance metrics.
Environmental preservation is indispensable in today's age, not least in the corporate world. Currently, sustainability is mostly considered only rudimentarily in companies, often reduced to written phrases on the website. This will no longer be sufficient in the future, which is why companies should record sustainability on a numerical basis. Based on the development of a workable concept for companies, a small empirical study was carried out, which can be used to numerically measure the sustainability performance of companies. Two utility analyses were completed.
One of them was supplemented by expert interviews. Well-known practitioners from the business world were interviewed and asked for their assessment of ecological performance indicators. The result of the research is an indicator-based concept that can be applied in corporate practice to determine ecological sustainability performance.
Throughout this thesis, the implementation of tools for knowledge management as a key factor for sustainable corporate development is presented. In industries with a high fluctuation rate, such as construction, efficient knowledge management is of particular importance. Companies feel the effects of negligent handling of this resource especially during the Corona pandemic. Restructuring leads to experienced employees leaving the company – and with them the know-how and experience gained. With a systematic knowledge transfer, the most important insights in such situations remain within the organization. Thus, the company becomes crisis-proof and receives all the tools it needs to grow healthily again after the recession. Practical data from competitors indicates that knowledge management promises savings potential of several million euros per year for BAM. Further potentials in the areas of sustainability, customer and employee satisfaction as well as occupational safety, which do not lead to savings, are also worth mentioning. This thesis determines the current maturity level of knowledge management at BAM, before introducing processes and systems that successively drive the improvement. The developed methods simultaneously help to prevent and solve problems and systematically promote the continuous improvement of all work processes in the company.
Detailed steps are presented to carry out change management towards the successful introduction and further development of knowledge management at BAM. A major focus is on interpersonal factors. The related topic of knowledge culture was recently ranked by the German think tank Zukunftsinstitut as one of the top 5 megatrends for companies in the 2020s. The methods developed contribute to the creation of such a culture and to the transformation of BAM towards a learning organization. Knowledge management identifies with the BAM values. In the course of this thesis it will be shown how the system, by its very nature, helps to implement these values in the work of every employee.
The results of this elaboration were recently awarded the Digital Construction Award 2020 for Business Excellence at BAM Deutschland AG.
Deep neural networks have become a veritable alternative to classic speaker recognition and clustering methods in recent years. However, while the speech signal clearly is a time series, and despite the body of literature on the benefits of prosodic (suprasegmental) features, identifying voices has usually not been approached with sequence learning methods. Only recently has a recurrent neural network (RNN) been successfully applied to this task, while the use of convolutional neural networks (CNNs) (which, unlike RNNs, are not able to capture arbitrary time dependencies) still prevails. In this paper, we show the effectiveness of RNNs for speaker recognition by improving state-of-the-art speaker clustering performance and robustness on the classic TIMIT benchmark. We provide arguments why RNNs are superior by experimentally showing a “sweet spot” of the segment length for successfully capturing prosodic information that has been theoretically predicted in previous work.
This report summarises up-to-date social science evidence on climate communication for effective public engagement. It presents ten key principles that may inform communication activities. At the heart of them is the following insight: People do not form their attitudes or take action as a result primarily of weighing up expert information and making rational cost-benefit calculations. Instead, climate communication has to connect with people at the level of values and emotions.
Two aspects seem to be of special importance: First, climate communication needs to focus more on effectively speaking to people who have up to now not been properly addressed by climate communications, but who are vitally important to build broad public engagement. Second, climate communication has to support a shift from concern to agency, where high levels of climate risk perception turn into pro-climate individual and collective action.
Background: Polysomnography (PSG) is the gold standard for detecting obstructive sleep apnea (OSA). However, this technique has many disadvantages when used outside the hospital or for daily use. Portable monitors (PMs) aim to streamline the OSA detection process through deep learning (DL).
Materials and methods: We studied how to detect OSA events and calculate the apnea-hypopnea index (AHI) by using deep learning models that aim to be implemented on PMs. Several deep learning models are presented after being trained on polysomnography data from the National Sleep Research Resource (NSRR) repository. The best hyperparameters for the DL architecture are presented. In addition, emphasis is placed on model explainability techniques, specifically on Gradient-weighted Class Activation Mapping (Grad-CAM).
Results: The results for the best DL model are presented and analyzed. The interpretability of the DL model is also analyzed by studying the regions of the signals that are most relevant for the model to make the decision. The model that yields the best result is a one-dimensional convolutional neural network (1D-CNN) with 84.3% accuracy.
Conclusion: The use of PMs using machine learning techniques for detecting OSA events still has a long way to go. However, our method for developing explainable DL models demonstrates that PMs appear to be a promising alternative to PSG in the future for the detection of obstructive apnea events and the automatic calculation of AHI.
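The AHI used throughout this study is defined as the number of apnea and hypopnea events per hour of sleep; its automatic calculation from detected events is a one-line formula. The severity threshold in the comment reflects common clinical convention, not a result of this study:

```python
def apnea_hypopnea_index(n_events: int, sleep_hours: float) -> float:
    """AHI = (apneas + hypopneas) / hours of sleep. Values of 5 or more
    are commonly taken as the threshold for at least mild OSA."""
    return n_events / sleep_hours

# 30 detected events over 6 hours of sleep:
print(apnea_hypopnea_index(30, 6.0))  # → 5.0
```

In a PM pipeline, n_events would come from the per-segment predictions of the trained model, so the AHI accuracy depends directly on the event detector's accuracy.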
Today’s markets are characterized by fast and radical changes, posing an essential challenge to established companies. Startups, however, seem to be more capable of developing radical innovations to succeed in those volatile markets. Thus, established companies have started to experiment with various approaches to implement startup-like structures in their organization. Internal corporate accelerators (ICAs) are a novel form of corporate venturing, aiming to foster bottom-up innovations through intrapreneurship. However, ICAs still lack empirical investigation. This work contributes to a deeper understanding of the interface between the ICA and the core organization and the respective support activities (resource access and support services) that create an innovation-supportive work environment for the intrapreneurial team. The results of this qualitative study, comprising 12 interviews with ICA teams from two German high-tech companies, show that the resources provided by ICAs differ from the support activities of external accelerators. Further, the study shows that some resources have both supportive and obstructive potential for the intrapreneurial teams within the ICA.
Regarding moral concerns in the business sphere, integrity is often mentioned as one of the core values that guides the behavior of companies. Daimler for instance states: “Acting with integrity is the central requirement for sustainable success and a maxim that Daimler follows in its worldwide business practices.” Reference to integrity is mostly supposed to signal that the company acts morally responsibly. Although some companies specify what acting with integrity means for them, it generally remains unclear what the concept of integrity entails – both broadly speaking and referring to business. This conceptual gap shall be filled by developing a concept of integrity that can be transferred to the business context. For this purpose, the main criteria that constitute moral integrity will be discussed before reflecting on how these could be integrated into a practical and comprehensive concept of corporate integrity.
This thesis investigates methods for the recognition of facial expressions using support vector machines. Rather than trying to recognize facial expressions directly, we recognize facial actions such as a raised eyebrow, an open mouth, or a frown. These facial actions are described in the Facial Action Coding System (FACS) and are essential facial components, which can be combined to form facial expressions. We perform independent recognition of 6 upper and 10 lower action units in the face, which may occur either individually or in combination. Based on a feature extraction from grey-level values, the system is expected to recognize action units under real-time conditions. Results are presented for different image resolutions, SVM kernels and variations of low-level features.
Atom interferometers have a multitude of proposed applications in space including precise measurements of the Earth's gravitational field, in navigation & ranging, and in fundamental physics such as tests of the weak equivalence principle (WEP) and gravitational wave detection. While atom interferometers are realized routinely in ground-based laboratories, current efforts aim at the development of a space compatible design optimized with respect to dimensions, weight, power consumption, mechanical robustness and radiation hardness. In this paper, we present a design of a high-sensitivity differential dual species 85Rb/87Rb atom interferometer for space, including physics package, laser system, electronics and software. The physics package comprises the atom source consisting of dispensers and a 2D magneto-optical trap (MOT), the science chamber with a 3D-MOT, a magnetic trap based on an atom chip and an optical dipole trap (ODT) used for Bose-Einstein condensate (BEC) creation and interferometry, the detection unit, the vacuum system for 10^-11 mbar ultra-high vacuum generation, and the high-suppression factor magnetic shielding as well as the thermal control system.
The laser system is based on a hybrid approach using fiber-based telecom components and high-power laser diode technology and includes all laser sources for 2D-MOT, 3D-MOT, ODT, interferometry and detection. Manipulation and switching of the laser beams is carried out on an optical bench using Zerodur bonding technology. The instrument consists of 9 units with an overall mass of 221 kg, an average power consumption of 608 W (819 W peak), and a volume of 470 liters, which would fit well on a satellite to be launched with a Soyuz rocket, as system studies have shown.
The effect on the mean-variance space of restrictions on a variable is investigated in this paper. A restriction may be the placing of upper and lower bounds on a variable. Another limitation is the loss of the continuity of a variable. Average marks for examinations are considered in an application of this limited mean-variance space. In this case, the bounds are given by the highest and the lowest possible mark (e.g. 1.0 and 5.0). The limitation of the mean-variance space depends on the number of students who participate in the examination. The restriction of the loss of continuity is shown by the use of discrete marks (e.g. 1.0, 1.3, 1.7, 2.0, …). Furthermore, the Target-Shortfall-Probability lines are integrated into the mean-variance space. These lines are used to indicate the proportion of students who have good or very good marks in the examination. In financial markets, the Target-Shortfall-Probability is used as a risk criterion.
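Under a normal model for the mark distribution, the target-shortfall probability has a closed form, and lines of constant shortfall probability are straight lines μ = t − z·σ in the mean-standard-deviation plane. A minimal sketch, assuming normality and using 2.0 as an illustrative "good or better" target (German marks, lower is better); the abstract itself works with the bounded, discrete distribution, which this simplification ignores:

```python
from math import erf, sqrt

def shortfall_probability(mean: float, std: float, target: float) -> float:
    """P(X <= target) for X ~ N(mean, std^2): the target-shortfall
    probability, via the standard normal CDF Phi(z) = (1 + erf(z/sqrt 2))/2."""
    z = (target - mean) / std
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

# Mean mark 2.5, std 0.5: about 15.9% of students reach 2.0 or better.
print(round(shortfall_probability(2.5, 0.5, 2.0), 4))  # → 0.1587
```

Holding this probability fixed while varying σ traces out exactly the straight TSP lines that the paper overlays on the restricted mean-variance space.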
With the increasing challenges of the 21st century, such as a rapidly growing population, increasing hunger and the destruction of the environment, the demand for sustainable and future-oriented ways of living is growing. To meet this demand, a residential district named Maun Science Park is being built in Botswana to develop a resilient society. In addition to the application of modern technology to optimise the use of resources, the environmentally friendly construction of the buildings is another goal of the project. This thesis investigates the prefabrication of rammed earth in terms of implementation and profitability for the Maun Science Park.
For this purpose, the specific properties, handling, as well as the application of the building material in prefabrication are first discussed.
This is followed by an investigation of how the work processes of prefabrication can be implemented in the Maun Science Park. Based on this, a profitability test is carried out using a break-even and sensitivity analysis.
The analyses showed that the investment in prefabrication is not profitable within the assumed production volume, which is due to the high fixed costs. These are primarily generated by the two main cost drivers, consisting of the new construction of the production hall and the rental of heavy construction equipment.
Lastly, recommendations for action were formulated that provide for cost reductions in the two main cost drivers as well as in other decisive factors.
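The break-even analysis mentioned above follows the standard formula q* = fixed costs / (price − variable unit cost); the sensitivity analysis then varies these inputs. A sketch with purely illustrative numbers, not the thesis' actual figures:

```python
def break_even_quantity(fixed_costs: float, price_per_unit: float,
                        variable_cost_per_unit: float) -> float:
    """Units that must be sold before the contribution margin covers the
    fixed costs (e.g. production hall construction, equipment rental)."""
    contribution_margin = price_per_unit - variable_cost_per_unit
    return fixed_costs / contribution_margin

# Illustrative: 100,000 fixed costs, 20 contribution margin per element
print(break_even_quantity(100_000, 50.0, 30.0))  # → 5000.0
```

If the expected production volume stays below q*, the investment is unprofitable, which is the conclusion the analysis reached for the assumed volumes.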
The focus of this part of the research project lies on the process of developing a Social Responsibility Standard within a network made up of various stakeholders. The International Organization for Standardization (ISO) is known as the world's leading institution for the development of standards. Besides setting standards in the fields of e.g. construction, agriculture and information technology, recently the Technical Management Board (TMB) of ISO proposed to further extend its activities by developing an international standard addressing the social responsibility of organizations. In 2004, a new Working Group was established as a multi-stakeholder group comprised of experts, who are nominated by ISO's members as well as interested international and regional organizations in order to provide for guidance in setting international standards on social responsibility. In May 2006, the survey was conducted during the third conference of the ISO Working Group in Lisbon, Portugal. This particular empirical study has been designed on the one hand to investigate the motivation of organizations and their delegates to engage in social responsibility. On the other hand, the survey had the objective to evaluate the individual participants' current perception and assessment of the network's efficiency, effectiveness and legitimacy, a so-called 'snap-shot' of this ISO process. Overall, the empirical study shows that the organizations and their delegates, who have dealt with the topic SR for several years for diverse reasons, expect a tremendous effect by implementing ISO 26000 in their own organizations. Furthermore, the majority of respondents assess the decision-making process positively within the ISO process with respect to the criteria inclusive, fair, capacity building, legitimate and transparent. Difficulties concerning the distribution of stakeholder influences are being addressed.
The results of the survey support the efforts to establish policies and procedures in order to encourage a balanced representation of stakeholders in terms of gender, geographic and stakeholder groups.
This working paper is part of a PhD research project dealing with the topics Social Responsibility, Stakeholder Theory and Network Governance, run by Maud Schmiedeknecht and supervised by Prof. Dr. habil. Josef Wieland, both from the Konstanz Institute for Intercultural Management, Values and Communication at the Konstanz University of Applied Sciences.
Web services are, due to the excellent tool support, simple to provide and use in trivial cases. But their use in non-trivial Web service-based systems like I3M poses new difficulties and problems. I3M is an instant messaging and chat system with distributed and local components collaborating via Web services. One difficulty is to make a series of related Web service invocations in a stateful session. A problem is the poor performance of collaborating, collocated service-oriented components of a system due to the high Web service invocation overhead, as shown by measurements. Solutions to both the difficulty and the problem are proposed.
While driving, stress is caused by situations in which drivers judge their ability to manage the driving demands as insufficient or lose the capability to handle the situation. This leads to increased numbers of driver mistakes and traffic violations. Additional stressing factors are time pressure, road conditions, or a dislike for driving. Therefore, stress affects driver and road safety. Stress is classified into two categories depending on its duration and its effects on body and psyche: short-term eustress and constantly present distress, which causes degenerative effects. In this work, we focus on distress. Wearable sensors are handy tools for collecting biosignals like heart rate, activity, etc. Their easy installation and non-intrusive nature make them convenient for estimating stress. This study focuses on the investigation of stress and its implications. Specifically, the research analyzes stress within a select group of individuals from both Spain and Germany. The primary objective is to examine the influence of recognized psychological factors, including personality traits such as neuroticism, extroversion and psychoticism, on stress and road safety. The estimation of stress levels was accomplished through the collection of physiological parameters (R-R intervals) using a Polar H10 chest strap. We observed that personality traits such as extroversion exhibited similar trends during relaxation, with an average heart rate 6% higher in Spain and 3% higher in Germany. However, while driving, introverts on average experienced more stress, with rates 4% and 1% lower than extroverts in Spain and Germany, respectively.
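The abstract does not state how stress was derived from the recorded R-R intervals; a common time-domain heart-rate-variability measure used as a stress proxy is the RMSSD. The following is a minimal, illustrative sketch under that assumption — the function name and the example values are hypothetical, not taken from the study:

```python
import math

def rmssd(rr_intervals_ms):
    """Root mean square of successive differences of R-R intervals,
    a standard time-domain HRV measure; lower RMSSD is commonly
    interpreted as higher physiological stress."""
    diffs = [b - a for a, b in zip(rr_intervals_ms, rr_intervals_ms[1:])]
    return math.sqrt(sum(d * d for d in diffs) / len(diffs))

# Illustrative R-R series in milliseconds (not measured data):
relaxed = [820, 860, 810, 880, 830, 870]   # high beat-to-beat variability
stressed = [650, 655, 648, 652, 651, 649]  # low variability, faster heart rate
```

A higher RMSSD for the relaxed series than for the stressed one is the expected pattern for this kind of proxy.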
Increasing robustness of handwriting recognition using character N-Gram decoding on large lexica
(2016)
Offline handwriting recognition systems often include a decoding step, i.e., retrieving the most likely character sequence from the underlying machine learning algorithm. Decoding is sensitive to ranges of weakly predicted characters, caused e.g. by obstructions in the scanned document. We present a new algorithm for robust decoding of handwriting recognizer outputs using character n-grams. Multidimensional hierarchical subsampling artificial neural networks with Long Short-Term Memory cells have been successfully applied to offline handwriting recognition. Output activations from such networks, trained with Connectionist Temporal Classification, can be decoded with several different algorithms in order to retrieve the most likely literal string they represent. We present a new algorithm for decoding the network output while restricting the possible strings to a large lexicon. The index used for this work is an n-gram index, with tri-grams used for experimental comparisons. N-grams are extracted from the network output using a backtracking algorithm, and each n-gram is assigned a mean probability. The decoding result is obtained by intersecting the n-gram hit lists while calculating the total probability for each matched lexicon entry. We conclude with an experimental comparison of different decoding algorithms on a large lexicon.
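The tri-gram index and hit-list scoring described above can be sketched roughly as follows. The data structures, words, and probability values are illustrative assumptions, not the authors' implementation (which extracts n-grams from network activations by backtracking):

```python
from collections import defaultdict

def build_trigram_index(lexicon):
    """Map each character tri-gram to the set of lexicon entries containing it."""
    index = defaultdict(set)
    for word in lexicon:
        for i in range(len(word) - 2):
            index[word[i:i + 3]].add(word)
    return index

def best_entry(recognized_trigrams, index):
    """recognized_trigrams: {trigram: mean probability}, as would be
    extracted from the recognizer output. Accumulate the probability
    mass of matched tri-grams per lexicon entry and return the best."""
    scores = defaultdict(float)
    for tg, p in recognized_trigrams.items():
        for word in index.get(tg, ()):
            scores[word] += p
    return max(scores, key=scores.get) if scores else None

# Hypothetical example:
lexicon = ["handwriting", "handwritten", "understanding"]
index = build_trigram_index(lexicon)
trigrams = {"han": 0.9, "and": 0.8, "wri": 0.7, "tin": 0.6, "ing": 0.9}
best = best_entry(trigrams, index)  # entry matching the most probability mass
</imports>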
Offline handwriting recognition systems often use LSTM networks trained with line or word images. Multi-line text makes it necessary to use segmentation to explicitly obtain these images. Skewed, curved, overlapping, or incorrectly written text, as well as noise, can lead to errors during segmentation of multi-line text and reduce the overall recognition capacity of the system. The past year has seen the introduction of deep learning methods capable of segmentation-free recognition of whole paragraphs. Our method uses Conditional Random Fields to represent text and align it with the network output to calculate a loss function for training. Experiments are promising and show that the technique is capable of training an LSTM multi-line text recognition system.
Algorithms for calculating the string edit distance are used, e.g., in information retrieval and document analysis systems or for the evaluation of text recognizers. Text recognition based on CTC-trained LSTM networks includes a decoding step to produce a string, possibly using a language model, and evaluation using the string edit distance. The decoded string can further be used as a query for database search, e.g. in document retrieval. We propose to closely integrate dictionary search with text recognition and to train both combined in a continuous fashion. This work shows that LSTM networks are capable of calculating the string edit distance while allowing for an exchangeable dictionary to separate the learned algorithm from the data. This could be a step towards integrating text recognition and dictionary search in one deep network.
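For reference, the string edit distance that the network is trained to approximate is the classic dynamic-programming (Levenshtein) recurrence, sketched here with a rolling-row implementation:

```python
def edit_distance(a, b):
    """Levenshtein string edit distance with unit-cost
    insertions, deletions, and substitutions."""
    prev = list(range(len(b) + 1))  # distances from "" to prefixes of b
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution/match
        prev = curr
    return prev[-1]
```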
This paper examines how varying antidumping methodologies applied within the World Trade Organization differ in the extent to which they reduce targeted exports. We show that antidumping duties, on average, hit Chinese exporters harder than those of other targeted countries. This difference can be traced back in part to China's non-market economy status, which affects the way antidumping duties are calculated. Furthermore, we show that the type of imposed duty matters, as ad-valorem duties affect exports differently compared to specific duties or duties conditional on the export price. Overall, however, antidumping duties remain effective in reducing imports independent of market economy status.
This work presents a new concept to implement the elliptic curve point multiplication (PM). This computation is based on a new modular arithmetic over Gaussian integer fields. Gaussian integers are a subset of the complex numbers such that the real and imaginary parts are integers. Since Gaussian integer fields are isomorphic to prime fields, this arithmetic is suitable for many elliptic curves. Representing the key by a Gaussian integer expansion is beneficial to reduce the computational complexity and the memory requirements of secure hardware implementations, which are robust against attacks. Furthermore, an area-efficient coprocessor design is proposed with an arithmetic unit that enables Montgomery modular arithmetic over Gaussian integers. The proposed architecture and the new arithmetic provide high flexibility, i.e., binary and non-binary key expansions as well as protected and unprotected PM calculations are supported. The proposed coprocessor is a competitive solution for a compact ECC processor suitable for applications in small embedded systems.
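The Gaussian integer modular arithmetic underlying such a coprocessor can be illustrated with the standard division algorithm in Z[i], where the quotient is rounded to the nearest Gaussian integer. This sketch is purely illustrative software arithmetic, not the paper's hardware-oriented design; `pi` is assumed to be a Gaussian prime:

```python
def gi_mul_mod(a, b, pi):
    """Multiply Gaussian integers a, b given as (re, im) tuples and
    reduce modulo the Gaussian integer pi, using nearest-integer
    rounding of the quotient (the division algorithm in Z[i])."""
    ar, ai = a
    br, bi = b
    zr, zi = ar * br - ai * bi, ar * bi + ai * br  # complex product
    pr, pim = pi
    n = pr * pr + pim * pim                        # norm of pi
    # z * conj(pi) = (zr + i*zi)(pr - i*pim)
    tr, ti = zr * pr + zi * pim, zi * pr - zr * pim
    # quotient rounded to the nearest Gaussian integer
    qr, qi = (2 * tr + n) // (2 * n), (2 * ti + n) // (2 * n)
    # remainder z - q * pi
    return (zr - (qr * pr - qi * pim), zi - (qr * pim + qi * pr))
```

For example, with pi = 2 + i (norm 5, so Z[i]/(pi) is isomorphic to GF(5)), the square of 1 + i reduces to 1, matching the image 4² ≡ 1 (mod 5) under the field isomorphism.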
Modular arithmetic over integers is required for many cryptography systems. Montgomery reduction is an efficient algorithm for the modulo reduction after a multiplication. Typically, Montgomery reduction is used for rings of ordinary integers. In contrast, we investigate the modular reduction over rings of Gaussian integers. Gaussian integers are complex numbers where the real and imaginary parts are integers. Rings over Gaussian integers are isomorphic to ordinary integer rings. In this work, we show that Montgomery reduction can be applied to Gaussian integer rings. Two algorithms for the precision reduction are presented. We demonstrate that the proposed Montgomery reduction enables an efficient Gaussian integer arithmetic that is suitable for elliptic curve cryptography. In particular, we consider the elliptic curve point multiplication according to the randomized initial point method, which is protected against side-channel attacks. The implementation of this protected point multiplication is significantly faster than comparable algorithms over ordinary prime fields.
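For comparison, the classic Montgomery reduction (REDC) over ordinary integers, which the paper generalizes to Gaussian integer rings, can be sketched as follows; the parameter names are ours:

```python
def montgomery_reduce(t, n, r_bits, n_prime):
    """REDC: for 0 <= t < n * R with R = 2**r_bits and
    n_prime = -n^{-1} mod R, returns t * R^{-1} mod n using
    only shifts and multiplications (no division by n)."""
    r_mask = (1 << r_bits) - 1
    m = ((t & r_mask) * n_prime) & r_mask  # make t + m*n divisible by R
    u = (t + m * n) >> r_bits
    return u - n if u >= n else u

# Precomputation for a toy modulus n = 17 with R = 2**5 = 32:
n, r_bits = 17, 5
n_prime = (-pow(n, -1, 1 << r_bits)) % (1 << r_bits)
```

The result is the input scaled by R⁻¹ mod n, which is why operands are usually kept in the Montgomery domain (multiplied by R) throughout a computation.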
CO2 compensation measures, in particular the offsetting of flights, are becoming increasingly popular. Carbon offsetting denotes measures, financed by donations, that save greenhouse gas emissions elsewhere through climate protection projects in order to compensate for emissions already caused.
CO2 abatement costs are often low in developing countries. This is why most offset projects are implemented there. Nevertheless, this does not mean that the holiday resort and the project country are in any way related to each other.
By linking carbon offset projects with the destination country, tourists are able to get an impression of the co-financed project. If such projects are realized in cooperation with a hotel, the hotel operator obtains a new tourist attraction and can demonstrate its commitment to climate protection in a PR-effective way.
Research Report
(2024)
We have analyzed a pool of 37,839 articles published in 4,404 business-related journals in the entrepreneurship research field using a novel literature review approach that is based on machine learning and text data mining. Most papers have been published in the journals ‘Small Business Economics’, ‘International Journal of Entrepreneurship and Small Business’, and ‘Sustainability’ (Switzerland), while the sum of citations is highest in the ‘Journal of Business Venturing’, ‘Entrepreneurship Theory and Practice’, and ‘Small Business Economics’. We derived 29 overarching themes based on 52 identified clusters. The social entrepreneurship, development, innovation, capital, and economy clusters represent the largest ones among those with high thematic clarity. The most discussed clusters measured by the average number of citations per assigned paper are research, orientation, capital, gender, and growth. Clusters with the highest average growth in publications per year are social entrepreneurship, innovation, development, entrepreneurship education, and (business-) models. Measured by the average yearly citation rate per paper, the thematic cluster ‘research’, mostly containing literature studies, received most attention. The MLR allows for an inclusion of a significantly higher number of publications compared to traditional reviews thus providing a comprehensive, descriptive overview of the whole research field.
Twenty-first century infrastructure needs to respond to changing demographics and become climate neutral, resilient, and economically affordable, while remaining a driver for development and shared prosperity. However, the infrastructure sector remains one of the least innovative and digitalized, plagued by delays, cost overruns, and benefit shortfalls. The authors assessed trends and barriers in the planning and delivery of infrastructure based on secondary research, qualitative interviews with internationally leading experts, and expert workshops. The analysis concludes that the root cause of the industry's problems is the prevailing fragmentation of the infrastructure value chain and the lack of a long-term vision for infrastructure. To help overcome these challenges, an integration of the value chain is needed. The authors propose that this could be achieved through a use-case-based as well as vision- and governance-driven creation of federated digital platforms applied to infrastructure projects, and they outline such a concept. Digital platforms enable full-lifecycle participation and responsible governance guided by a shared infrastructure vision. This paper has contributed as a policy recommendation to the Group of Twenty (G20) in 2021.
If the process contains a delay (dead time), the Nyquist criterion is well suited to derive a PI or PID tuning rule, because the delay is taken into account without approximation. The desired speed of the closed loop enters naturally through the crossover frequency, and the goals of robustness and performance are translated into the phase margin.
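A minimal sketch of such a tuning rule, assuming a first-order-plus-dead-time process P(s) = K·e^(−Ls)/(Ts+1), a PI controller whose integral time cancels the process pole, and a prescribed phase margin. This is one common variant of a Nyquist-based rule, not necessarily the rule derived in the paper:

```python
import math

def pi_tuning_dead_time(K, T, L, phase_margin_deg=60.0):
    """PI tuning for P(s) = K*exp(-L*s)/(T*s + 1).
    With Ti = T (pole cancellation), the open loop is
    Kp*K*exp(-L*s)/(Ti*s), whose phase is -90 deg - w*L (exact,
    delay not approximated).  The crossover frequency wc follows
    from the phase condition for the desired margin, and the gain
    Kp from the magnitude condition |L(j*wc)| = 1.
    Requires phase_margin_deg < 90."""
    Ti = T
    wc = math.radians(90.0 - phase_margin_deg) / L  # phase condition
    Kp = Ti * wc / K                                # magnitude condition
    return Kp, Ti, wc
```

Raising the phase margin lowers the achievable crossover frequency, which is exactly the robustness/performance trade-off mentioned above.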
This work treats the segmentation of 2D environment laser data captured by an autonomous mobile indoor robot. It is part of the data processing that is necessary to navigate a mobile robot error-free in its environment. The whole process can generally be described as data capturing, data processing and navigation. In this project, the data processing deals with data captured by a laser sensor, which provides two-dimensional data as a series of distance measurements, i.e. point measurements of the environment. These point series have to be filtered and processed into a more convenient representation to provide a virtual environment map, which can be used by the robot for error-free navigation. This project provides different solutions to the same problem: the conversion from distance points to model segments that represent the real-world environment as closely as possible. The advantages and disadvantages of each of the different segmentation algorithms are shown, as well as a comparison taking into account the computational time and the robustness of the results.
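One of the classic algorithms typically compared in such laser-scan segmentation work is split-and-merge; its recursive split step can be sketched as follows (threshold and point values are illustrative, and the merge step is omitted):

```python
import math

def point_line_dist(p, a, b):
    """Perpendicular distance of point p to the line through a and b."""
    (x, y), (x1, y1), (x2, y2) = p, a, b
    num = abs((y2 - y1) * x - (x2 - x1) * y + x2 * y1 - y2 * x1)
    return num / math.hypot(x2 - x1, y2 - y1)

def split_and_merge(points, threshold):
    """Split step of split-and-merge line extraction: if the point
    farthest from the chord between the endpoints exceeds the
    threshold, split there and recurse; otherwise emit one segment
    given by its endpoints."""
    if len(points) < 3:
        return [(points[0], points[-1])]
    a, b = points[0], points[-1]
    dists = [point_line_dist(p, a, b) for p in points[1:-1]]
    i = max(range(len(dists)), key=dists.__getitem__) + 1  # index into points
    if dists[i - 1] > threshold:
        return (split_and_merge(points[:i + 1], threshold)
                + split_and_merge(points[i:], threshold))
    return [(a, b)]
```

For a scan sweeping along a wall corner, e.g. (0,0)→(2,0)→(2,2), the split lands on the corner point and two line segments are returned.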
The target of this thesis is the introduction of a client management system (CMS) at Haaland Internet Productions (HiP), a web design and hosting company in Burbank, California, USA. The company needs a system to track orders and improve workflow. HiP needs a system which not only tracks orders, but also stores all client information in a database. This client information can be used for a variety of marketing and contact reasons. It is an important and integral part of HiP's client relationship management (CRM). The lack of a cohesive CMS at HiP caused many fundamental business problems, such as lost orders, missed billing statements, and over/under billing. The research done during the investigation and analysis of the company and their needs should lead to a global system which totally fulfils the needs of HiP. This global system could be in the form of an off-the-shelf product with some customizations, or a completely new, in-house system. Either solution will have respective pros and cons; the goal is to reach a decision that best fits HiP's needs and situation. The following is a concise version of the project. Particular emphasis is placed upon the single steps which made up the decision process, as well as the practiced techniques, methods, and their applications.
In tourism, energy demands are particularly high. Tourism facilities such as hotels require large amounts of electric and heating or cooling energy. Their supply, however, is usually still based on fossil energies. This research approach analyses the potential of promoting renewable energies in Black Forest tourism. It focuses on a combined and hence highly efficient production of both electric and thermal energy by biogas plants on the one hand and its provision to local tourism facilities via short distance networks on the other. Based on surveys and qualitative empiricism and considering regional resource availability as well as socio-economic aspects, it thus examines strengths, weaknesses, opportunities and threats that can arise from such a cooperation.
In tourism, energy demands are particularly high. Tourism facilities such as hotels require large amounts of electric and heating / cooling energy while their supply is usually still based on fossil energies.
This research approach analyses the potential of promoting renewable energies in tourism. It focuses on a combined and hence highly efficient production of both electric and thermal energy by biogas plants on the one hand and its provision to local tourism facilities via short distance networks on the other. Considering regional resource availability as well as socio-economic aspects, it thus examines strengths, weaknesses, opportunities and threats that can arise from such a micro-cooperation. The research aim is to provide an actor-based, spatially transferable feasibility analysis.
We propose a novel end-to-end neural network architecture that, once trained, directly outputs a probabilistic clustering of a batch of input examples in one pass. It estimates a distribution over the number of clusters k, and for each 1≤k≤kmax, a distribution over the individual cluster assignment for each data point. The network is trained in advance in a supervised fashion on separate data to learn grouping by any perceptual similarity criterion based on pairwise labels (same/different group). It can then be applied to different data containing different groups. We demonstrate promising performance on high-dimensional data like images (COIL-100) and speech (TIMIT). We call this "learning to cluster" and show its conceptual difference to deep metric learning, semi-supervised clustering and other related approaches, while having the advantage of performing learnable clustering fully end-to-end.
To elaborate on inflation and deflation tendencies due to the COVID-19 pandemic and on the attempts to actively influence them, this paper compares news coverage of the measures taken by central banks in Europe, the USA and Japan. Factors affecting inflation are defined in conjunction with the typical measures of central banks, and conclusions are drawn with respect to differences in the most recent corrective behavior. The paper concludes by discussing how price levels might develop during and after the crisis.
Purpose
The goal of this research survey was to propose an entrepreneurship education model for students in higher education institutions.
Methodology
A questionnaire was distributed to 246 randomly sampled students at the Universitas Negeri Jakarta. The data was analyzed through Structural Equation Modeling to study the variables of entrepreneurship education for higher education students and examine whether it can be predicted by the university leadership as a facilitator of entrepreneurial culture, university departments as promoters of entrepreneurial skills, and university research as an incubator of local business development.
Findings
The results show that university leadership acts as a facilitator of entrepreneurial culture, supported by the leadership's fostering of a culture of entrepreneurial thinking. It was also evident that the university placed sufficient emphasis on entrepreneurial education and successfully motivated both lecturers and students to embrace entrepreneurship education. The results also indicated that university departments acted as promoters of entrepreneurial skills and stimulated students to attain sufficient entrepreneurial skills during their university education. Lastly, university research also proved to be an incubator of local business development and was found to be influenced by the university conducting research projects with local private sector businesses and supporting graduates planning to launch start-ups.
Implications to Research and Practice
The survey results provide valuable policy insights for improving entrepreneurship education. University faculty and students would have opportunities to gain practical experience in local private sector businesses. The model of entrepreneurship education proposed herein can be applied to higher education students.
Digitization and sustainability are the two big topics of our current time. As the usage of digital products like IoT devices continues to grow, it affects the energy consumption caused by the Internet. At the same time, more and more companies feel the need to become carbon neutral and sustainable. Determining the environmental impact of an IoT device is challenging, as both the production of the hardware components and the electricity consumption of the Internet, the primary communication medium of an IoT device, have to be considered. Estimating the electricity consumption of the Internet itself is a complex task. We performed a life cycle assessment (LCA) to determine the environmental impact of an intelligent smoke detector sold in Germany, taking its whole life cycle from cradle to grave into account. We applied the impact assessment method ReCiPe 2016 Midpoint and compared its results with ILCD 2011 Midpoint+ to check the robustness of our results. The LCA results showed that electricity consumption during the use phase is the main contributor to environmental impacts. This contribution is caused by the mining of coal, which is part of the German electricity mix. Consequently, the smoke detector mainly contributes to the impact categories of freshwater and marine ecotoxicity, but only marginally to global warming.
To master complexity, we can organize it or discard it. The Art of Insight in Science and Engineering first teaches the tools for organizing complexity, then distinguishes the two paths for discarding complexity: with and without loss of information. Questions and problems throughout the text help readers master and apply these groups of tools. Armed with this three-part toolchest, and without complicated mathematics, readers can estimate the flight range of birds and planes and the strength of chemical bonds, understand the physics of pianos and xylophones, and explain why skies are blue and sunsets are red.
Urban car-free mobility
(2021)
Across the globe, urban areas experience the phenomena of rising road congestion, air pollution and car accidents. These are just a few popular quantified effects that arise due to rapid, uncoordinated urbanization on a car-centric city layout. There is an urgent need to consider new concepts of urban mobility development to combat these negative effects. Car-free mobility is one notion adopted in diverse formats by numerous cities to create a more inclusive, just, healthy and sustainable urban life. The focus of this thesis is to examine whether a car-free mobility concept is applicable to the Maun Science Park, Botswana. Therefore, the idea of car-free mobility, its positive aspects as well as its constraints, are described first. This illustrates the complexity of urban transport planning as it is intertwined with urban land-use, political vision and people's perceptions and behaviors. Secondly, examples and strategies on how to change existing structures are presented. Following this, the smart developments in the field of sustainable urban mobility are considered to provide an insight into their assets and drawbacks. Then the local mobility conditions are examined before the car-free concept is exemplarily applied to the Maun Science Park via scenario construction. These scenarios give a first vision of how a car-free concept can be applied to the MSP and additionally provide a starting point for future strategic planning as well as inspiration for other cities to follow along.
Cities around the world are facing an increasing number of global and local challenges, such as climate change and scarcity of raw materials. At the same time, trends like digitalization, globalization and networking gain in importance. For this reason, cities have started implementing smart solutions within the urban structure in order to evolve towards a Smart City. In Botswana, the Maun Science Park is intended to provide a best practice approach for a Botswanan Smart City. Since Smart City concepts have to be specifically tailored to local conditions, the first main goal of this thesis is to develop a synthesis concept for the Maun Science Park. A key problem in cities is the utilization of space, which is further intensified by increasing urbanization and population growth. Therefore, the second main goal is to develop approaches of (digitally) re-programmable space to use available areas intelligently and optimally.
Within the thesis, human-centered design has been applied as a structure-giving methodology. By clarifying relevant Smart City contents, considering reference examples as well as identifying local challenges and requirements, an appropriate concept has been developed with a human focus. Furthermore, the methodologies of literature research and expert interviews have been used as input in the individual human-centered design phases. In combination with an innovation funnel, the methodology of human-centered design forms the structure of the thesis.
In total, ten main solution areas and 37 sub-segments have been identified for the synthesis concept of the Maun Science Park. Additionally, a concept for Smart Buildings has been developed as a part of the synthesis concept and as an essential infrastructure component of the Maun Science Park (three main segments, 16 sub-segments). Based on expert input, a prioritization has been determined by evaluating the impact and economic affordability of the individual sub-areas. Moreover, individual key areas have been highlighted by identifying direct interactions between sub-segments and on the basis of expert input – these are particularly related to the segments Smart Data and Smart People. Besides the synthesis concept, approaches of (digitally) re-programmable space have been created. Thereby, ten approaches refer to the conversion, reuse or expansion abilities of space within daily, weekly or life cycles. In addition, the conventional (digitally) re-programmable space idea has been extended by two new considerations – “multi-purpose use of built-up space” and “concept programming in the planning phase”. Finally, within an overall consideration – the synthesis concept combined with approaches of (digitally) re-programmable space – the added value of the developed contents has been outlined, positive and negative aspects have been identified within a SWOT analysis, and the business model of the Maun Science Park approach has been verified in a Business Model Canvas.
Through explicit elaboration, classification and prioritization of solution areas, the developed concept can serve as a basis for further project steps. Based on the defined requirements of the sub-segments, solutions can be developed with regard to the entire Smart City context.
In the reverse engineering process, one has to classify parts of point clouds as the correct type of geometric primitive. Features based on different geometric properties like point relations, normals, and curvature information can be used to train classifiers like Support Vector Machines (SVMs). These geometric features are estimated in the local neighborhood of a point of the point cloud. The multitude of different features makes an in-depth comparison necessary. In this work we evaluate 23 features for the classification of geometric primitives in point clouds. Their performance is evaluated on SVMs used to classify geometric primitives in simulated and real laser-scanned point clouds. We also introduce a normalization of point cloud density to improve classification generalization.
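As an illustration of the kind of geometric feature evaluated in such comparisons, the "linearity" of a local 2D neighborhood can be computed from the eigenvalues of its covariance matrix. This generic example (closed-form 2×2 eigenvalues) is ours, not necessarily one of the paper's 23 features:

```python
import math

def linearity_2d(points):
    """Eigenvalue-based shape feature of a local point neighborhood:
    linearity = (l1 - l2) / l1 with l1 >= l2 the eigenvalues of the
    2x2 covariance matrix.  Near 1 for points on a line, lower for
    curved or scattered neighborhoods."""
    n = len(points)
    mx = sum(p[0] for p in points) / n
    my = sum(p[1] for p in points) / n
    sxx = sum((p[0] - mx) ** 2 for p in points) / n
    syy = sum((p[1] - my) ** 2 for p in points) / n
    sxy = sum((p[0] - mx) * (p[1] - my) for p in points) / n
    # closed-form eigenvalues of [[sxx, sxy], [sxy, syy]]
    tr, det = sxx + syy, sxx * syy - sxy * sxy
    disc = math.sqrt(max(tr * tr / 4 - det, 0.0))
    l1, l2 = tr / 2 + disc, tr / 2 - disc
    return (l1 - l2) / l1 if l1 > 0 else 0.0
```

A feature vector of several such values per point would then be fed to the SVM classifier.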
Mutual Information Analysis for Generalized Spatial Modulation Systems With Multilevel Coding
(2022)
Generalized Spatial Modulation (GSM) enables a trade-off between very high spectral efficiencies and low hardware costs for massive MIMO systems. This is achieved by transmitting information via the selection of active antennas from a set of available antennas, besides the transmission of conventional data symbols. GSM systems have been investigated concerning various aspects like suitable signal constellations, efficient detection algorithms, hardware implementations, spatial precoding, and error control coding. On the other hand, determining the capacity of GSM is challenging because no closed-form expressions have been found so far. This paper investigates the mutual information for different GSM variants. We consider a multilevel coding approach, where the antenna selection and IQ modulation are encoded independently. Combined with multistage decoding, such an approach enables low-complexity capacity-achieving coded modulation. The influence of the data symbols on the mutual information is illuminated. We analyze the portions of mutual information related to the antenna selection and IQ modulation processes, which depend on the GSM variant and the signal constellation. Moreover, the potential of spatial modulation for massive MIMO systems with many transmit antennas is investigated. Especially in systems with many transmit antennas, much information can be conveyed by antenna selection.
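The spectral-efficiency split between the antenna-selection and IQ-symbol parts described above can be illustrated with a small helper. This is a generic textbook calculation, not the paper's mutual-information analysis:

```python
import math

def gsm_spatial_bits(n_tx, n_active):
    """Bits per channel use conveyed by the antenna-selection part of
    GSM: there are C(n_tx, n_active) activation patterns, of which a
    power of two is used, giving floor(log2(...)) bits."""
    return int(math.log2(math.comb(n_tx, n_active)))

def gsm_total_bits(n_tx, n_active, mod_order):
    """Total bits per channel use: spatial bits plus n_active
    conventional symbols from an M-ary IQ constellation."""
    return gsm_spatial_bits(n_tx, n_active) + n_active * int(math.log2(mod_order))
```

For example, 4 transmit antennas with 2 active and 16-QAM give 2 spatial bits plus 8 symbol bits per channel use; the spatial share grows quickly with the number of transmit antennas.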
InnoCrowd, a Product Classification System for Design Decision in a Crowdsourced Product Innovation
(2021)
System engineering focuses on how to design and manage complex systems. Meanwhile, in the era of Industry 4.0 and the Internet of Things (IoT), systems are getting more complex. Contributors to higher complexity include the usage of modern components (e.g. mechatronics), new manufacturing technologies (e.g. 3D print) and new engineering product development processes, e.g. open innovation. Open innovation is enabled by IoT, where people and devices are easily connected, and it supports the development of more innovative products through ideas gained from predecessors and collaborators worldwide. Some researchers suggest this approach is up to three times faster and five times cheaper than conventional approaches [Gassmann, 2012], [Howe, 2008], [Kusumah, 2018]. Because open innovation is relatively new, many managers do not know how to employ it effectively in some phases of product development [Schenk, 2009], [Afuah, 2017], including requirements definition, design and engineering processes (task assignment) through quality assurance. Also, they have trouble estimating and controlling development time and cost [Nevo, 2020], [Thanh, 2015]. As a consequence, the acceptance of this new approach in the industry is limited. Research activities addressing this new approach mainly cover high-level and qualitative issues. Few effective methods are available to estimate project risk and to decide whether to initiate a project.
We propose InnoCrowd, a decision support system that uses an improved method to support these tasks and make decisions about crowdsourced engineering product development.
InnoCrowd uses natural language processing and machine learning to build a knowledge base of crowdsourced product developments. InnoCrowd presents a manager with the results of similar projects to show which practices led to good results. A manager of a new project can use this guidance to employ best practices for product requirements definition, project scheduling, and other aspects, thereby reducing risk and increasing the chances for success.
Interpretability and uncertainty modeling are important key factors for medical applications. Moreover, data in medicine are often available as a combination of unstructured data like images and structured predictors like patients' metadata. While deep learning models are state-of-the-art for image classification, these models are often referred to as 'black boxes' due to their lack of interpretability. Moreover, DL models often yield only point predictions and are too confident about the parameter estimation and outcome predictions.
On the other hand, with statistical regression models it is possible to obtain interpretable predictor effects and to capture parameter and model uncertainty based on the Bayesian approach. In this thesis, a publicly available melanoma dataset, consisting of skin lesions and patients' age, is used to predict the melanoma types by using a semi-structured model, while interpretable components and model uncertainty are quantified. For the Bayesian models, the transformation model-based variational inference (TM-VI) method is used to determine the posterior distribution of the parameters. Several model constellations consisting of patient's age and/or skin lesion were implemented and evaluated. Predictive performance was shown to be best by using a combined model of image and patient's age, while providing the interpretable posterior distribution of the regression coefficient is possible. In addition, integrating uncertainty in the image and tabular parts results in larger variability of the outputs, corresponding to high uncertainty of the single model components.
This diploma thesis is devoted to the design and analysis of a radar signal enabling an object classification capability in surveillance radar systems based on high-resolution radar range profiles. It picks up the research results of Kastinger (2006), who investigated classification algorithms for high-resolution radar range profiles, and Meier (2007), who programmed a MATLAB toolbox for the evaluation of radar signals. A brief, classical introduction to radar fundamentals is given (Chapter 1), together with the motivation for this thesis and certain basic parameters used. After high-resolution radar range profiles are discussed with special focus on surveillance radar systems (Chapter 2), the results of Kastinger (2006) are taken up (Chapter 3) as far as necessary for the following chapters. Following the chapters on radar basics, high-resolution radar range profiles, and classification, basic and advanced radar signals are discussed and analysed, especially with regard to their range resolution and sidelobe levels (Chapter 4). This includes linear and nonlinear frequency-modulated pulses as well as phase-coded pulses, coherent trains of identical pulses, and stepped-frequency waveforms. Their analysis is based on Meier's MATLAB toolbox. Chapter 5 brings up additional points that have to be considered in radar system design when implementing a classification capability, before the thesis ends with an overall conclusion (Chapter 6).
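As a brief illustration of the range-resolution analysis such a thesis performs: for a pulse-compressed waveform such as an LFM chirp, the range resolution is set by the swept bandwidth, not the pulse duration. This is a minimal sketch; the function name and the 150 MHz example value are my own, not taken from the thesis.

```python
# Illustrative sketch: delta_R = c / (2 * B) for a pulse-compressed waveform.

C = 299_792_458.0  # speed of light in m/s

def range_resolution(bandwidth_hz: float) -> float:
    """Range resolution in metres for a given swept bandwidth in Hz."""
    return C / (2.0 * bandwidth_hz)

# A 150 MHz chirp resolves scatterers roughly 1 m apart -- fine enough
# for high-resolution range profiles of aircraft-sized targets.
```

Doubling the bandwidth halves the resolution cell, which is why the waveforms compared in Chapter 4 are judged primarily by bandwidth and sidelobe behaviour.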
The Universal Serial Bus (USB) is a worldwide standard for communication between peripherals. Nowadays, USB interfaces are integrated into almost every device and are used to connect peripherals and computers. USB devices communicate over standardized hardware, i.e., cables, plugs, and sockets. In addition, different standardized communication protocols exist depending on the application. These protocols need to be verified so that devices can communicate with each other regardless of their country of origin.
The verification process is essential so that companies can sell products carrying such interfaces and their designated logo, guaranteeing a standard that holds all over the world. Devices have to pass various test procedures to become certified. Otherwise, a company is not allowed to use logos or designations such as 'USB' or data-rate labels such as 'SuperSpeed'. Furthermore, successfully completed test procedures prove, by a professional method, that a device works properly.
The Human-Machine Interface (HMI) device family of Marquardt Verwaltungs GmbH uses the USB interface for service and data exchange purposes. The service application is realized through a Virtual COM Port (VCP), based on the USB Communication Device Class (CDC). In addition, the Media Transfer Protocol (MTP), based on the Still Image Capture Device class, is to be used for data exchange between the HMI device and a computer. The integrated circuit that implements the USB interface on the circuit board of the HMI device has to be verified as well; this verification is performed by an external company. The communication protocols, in contrast, do not need formal verification but must still be examined: the mere identification of a USB class by an operating system neither guarantees proper functionality nor complies with a professional, scientific method.
To accelerate the development of a project and to reduce production costs, it is a significant advantage to own a test environment. Microsoft provides the possibility to verify devices on Windows operating systems through its Windows Certification Program, which contains software that can be used for verification purposes. One of these tools is the Windows Hardware Certification Kit (HCK), which we set up and use to place the HMI device under test in order to examine its MTP implementation.
Thus, the HCK test setup can be used during development to examine the current implementation without great effort, i.e., without cooperating with an external company or similar approaches that would considerably delay the whole development process.
This thesis addresses two problems that reports generated by vulnerability scanners impose on the process of vulnerability management: (a) an overwhelming amount of data and (b) insufficient prioritization of the scan results.
To assist the development of countermeasures to these problems and to allow for a quantitative evaluation of their solutions, two metrics are proposed for their effectiveness and efficiency. These metrics imply a focus on higher-severity vulnerabilities and can be applied to any simplification process of vulnerability scan results, provided it relies on a severity score and an estimated time of remediation for each vulnerability.
A priority score is introduced that refines the widely used Common Vulnerability Scoring System (CVSS) base score of each vulnerability depending on the vulnerability's ease of exploit, its estimated probability of exploitation, and the probability of its existence.
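The exact weighting used in the thesis is not reproduced here; the following sketch only shows the general shape of such a refinement. The multiplicative form and the factor names are my own assumptions.

```python
def priority_score(cvss_base: float,
                   ease_of_exploit: float,
                   p_exploitation: float,
                   p_existence: float) -> float:
    """Refine a 0-10 CVSS base score by three 0-1 factors.

    Illustrative only: the thesis may combine or weight
    these inputs differently.
    """
    return cvss_base * ease_of_exploit * p_exploitation * p_existence
```

A vulnerability with a high base score but a low probability of actually existing on the scanned host is thus pushed down the remediation queue, which is the intent of the refinement.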
Patterns between vulnerabilities in the reports generated by the Open Vulnerability Assessment System (OpenVAS) scanner are discovered that yield criteria by which vulnerabilities can be categorized from a remediating actor's standpoint. These categories lay the groundwork for a final simplified report and consist of updates that need to be installed on a host, severe vulnerabilities, vulnerabilities that occur on multiple hosts, and vulnerabilities whose remediation will take a lot of time. The highest potential time savings are found within frequently occurring vulnerabilities and minor and major suggested updates.
Processing the results provided by the vulnerability scanner and creating the report is realized in the form of a Python script. The resulting reports are short and to the point, and provide a top-down remediation process that should, in theory, minimize the institution's attack surface as quickly as possible. An evaluation of their practicality must follow, as the reports are yet to be introduced into the information security management lifecycle.
Contemporary empirical applications frequently require flexible regression models for complex response types and large tabular or non-tabular data, including images or text. Classical regression models either break down under the computational load of processing such data or require additional manual feature extraction to make these problems tractable. Here, we present deeptrafo, a package for fitting flexible regression models for conditional distributions using a tensorflow backend with numerous additional processors, such as neural networks, penalties, and smoothing splines. Package deeptrafo implements deep conditional transformation models (DCTMs) for binary, ordinal, count, survival, continuous, and time series responses, potentially with uninformative censoring. Unlike other available methods, DCTMs do not assume a parametric family of distributions for the response. Further, the data analyst may trade off interpretability and flexibility by supplying custom neural network architectures and smoothers for each term in an intuitive formula interface. We demonstrate how to set up, fit, and work with DCTMs for several response types. We further showcase how to construct ensembles of these models, evaluate models using inbuilt cross-validation, and use other convenience functions for DCTMs in several applications. Lastly, we discuss DCTMs in light of other approaches to regression with non-tabular data.
Driver assistance systems are increasingly becoming part of the standard equipment of vehicles and thus contribute to road safety. However, as they become more widespread, the requirements for cost efficiency also increase, and so few and inexpensive sensors are used in these systems. Especially in challenging situations, target discrimination can therefore not be ensured, which may lead to false reactions of the driver assistance system. In this paper, the Boids flocking algorithm is used to generate semantic neighborhood information between tracked objects, which in turn can significantly improve the overall performance. Two different variants were developed: first, a free-moving flock, whereby a separate flock is generated per tracked object, and second, a formation-controlled flock, where boids of a single flock move along the future road course in a pre-defined formation. In the first approach, the interaction between the flocks as well as the interaction between the boids within a flock is used to generate additional information, which in turn can be used to improve, for example, lane change detection. For the latter approach, new behavioral rules have been developed so that the boids can reliably identify objects relevant to the control of a driver assistance system. Finally, the performance of the presented methods is verified through extensive simulations.
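The classic Boids rules this paper builds on (separation, alignment, cohesion) can be sketched as follows. This is a generic textbook sketch, not the paper's implementation; the class, the weights, and the time step are illustrative choices.

```python
import math

class Boid:
    """A 2D boid with position (x, y) and velocity (vx, vy)."""
    def __init__(self, x, y, vx, vy):
        self.x, self.y, self.vx, self.vy = x, y, vx, vy

def step(boids, dt=0.1, sep_w=1.5, ali_w=1.0, coh_w=1.0, sep_dist=1.0):
    """One update of the three classic Boids rules for every boid."""
    updates = []
    for b in boids:
        others = [o for o in boids if o is not b]
        # cohesion: steer toward the centroid of the other boids
        cx = sum(o.x for o in others) / len(others) - b.x
        cy = sum(o.y for o in others) / len(others) - b.y
        # alignment: match the average velocity of the other boids
        ax = sum(o.vx for o in others) / len(others) - b.vx
        ay = sum(o.vy for o in others) / len(others) - b.vy
        # separation: move away from neighbours closer than sep_dist
        sx = sy = 0.0
        for o in others:
            d = math.hypot(o.x - b.x, o.y - b.y)
            if 0.0 < d < sep_dist:
                sx += (b.x - o.x) / d
                sy += (b.y - o.y) / d
        updates.append((coh_w * cx + ali_w * ax + sep_w * sx,
                        coh_w * cy + ali_w * ay + sep_w * sy))
    for b, (fx, fy) in zip(boids, updates):  # apply all forces simultaneously
        b.vx += fx * dt
        b.vy += fy * dt
        b.x += b.vx * dt
        b.y += b.vy * dt
```

The paper's contribution lies in adding semantic rules on top of this base behavior (e.g. for identifying control-relevant objects), which this sketch does not attempt to reproduce.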
This paper analyses international cooperation in research and development on alternative energy production. To this end, patents of this technological domain registered at the European Patent Office from 1997 to 2016 are analysed. International cooperation is identified when a patent involves co-assignment or co-inventorship spanning two or more countries. Generally, international R&D cooperation in alternative energy production tends to increase over time. In total, 2234 co-patents from 87 countries are identified. Through social network analysis, the cooperative relationships between countries are examined. The most central countries in the network are the United States of America and Germany. Innovative clusters and strong partnerships are identified. The alternative energy technologies that involve international cooperation most extensively are harnessing energy from man-made waste, solar energy, and bio-fuels. The paper clarifies which countries cooperate with each other and for what purpose. The findings can be used to establish R&D strategies in the domain of alternative energy production.
A growing share of modern trade policy instruments is shaped by non-tariff barriers (NTBs). Based on a structural gravity equation and the recently updated Global Trade Alert database, we empirically investigate the effect of NTBs on imports. Our analysis reveals that the implementation of NTBs reduces imports of affected products by up to 12%. Their trade dampening effect is thus comparable to that of trade defence instruments such as anti-dumping duties. It is smaller for exporters that have a free trade agreement with the importing country. Different types of NTBs affect trade to a different extent. Finally, we investigate the effect of behind-the-border measures, showing that they significantly lower the importer’s market access.
Lidar sensors are widely used for environmental perception on autonomous robot vehicles (ARV). The field of view (FOV) of Lidar sensors can be reshaped by positioning plane mirrors in their vicinity. Mirror setups can especially improve the FOV for ground detection of ARVs with 2D-Lidar sensors. This paper presents an overview of several geometric designs and their strengths for certain vehicle types. Additionally, a new and easy-to-implement calibration procedure for setups of 2D-Lidar sensors with mirrors is presented to determine precise mirror orientations and positions, using a single flat calibration object with a pre-aligned simple fiducial marker. Measurement data from a prototype vehicle with a 2D-Lidar with a 2 m range using this new calibration procedure are presented. We show that the calibrated mirror orientations are accurate to less than 0.6° in this short range, which is a significant improvement over the orientation angles taken directly from the CAD. The accuracy of the point cloud data improved, and no significant increase in distance noise was introduced. We deduced general guidelines for successful calibration setups using our method. In conclusion, a 2D-Lidar sensor and two plane mirrors calibrated with this method are a cost-effective and accurate way for robot engineers to improve the environmental perception of ARVs.
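At the core of any such mirror setup is the reflection of a measured point across the calibrated mirror plane. A minimal sketch, using the standard plane parameterization n·p = d with unit normal n; this is generic geometry, not necessarily the paper's formulation.

```python
def reflect(p, n, d):
    """Reflect point p = (x, y, z) across the plane n . q = d,
    where n is a unit normal vector and d the plane offset."""
    # signed distance from the point to the mirror plane
    dist = p[0] * n[0] + p[1] * n[1] + p[2] * n[2] - d
    return (p[0] - 2.0 * dist * n[0],
            p[1] - 2.0 * dist * n[1],
            p[2] - 2.0 * dist * n[2])
```

Points that the Lidar sees "through" a mirror are virtual; applying this reflection with the calibrated plane parameters maps them back to their true location, which is why small errors in mirror orientation translate directly into point cloud errors.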
Sanctions encompass a wide set of policy instruments restricting cross‐border economic activities. In this paper, we study how different types of sanctions affect the export behavior of firms to the targeted countries. We combine Danish register data, including information on firm‐destination‐specific exports, with information on sanctions imposed by Denmark from the Global Sanctions Database. Our data allow us to study firms' export behavior in 62 sanctioned countries, amounting to a total of 453 country‐years with sanctions over the period 2000–2015. Methodologically, we apply a two‐stage estimation strategy to properly account for multilateral resistance terms. We find that, on average, sanctions lead to a significant reduction in firms' destination‐specific exports and a significant increase in firms' probability to exit the destination. Next, we study heterogeneity in the effects of sanctions across (i) sanction types and sanction packages, (ii) the objectives of sanctions, and (iii) countries subject to sanctions. Results confirm that the effects of sanctions on firms' export behavior vary considerably across these three dimensions.
There was hardly another development that influenced life on earth as much as the development of communication technology in recent decades. The advantages of mobile communication brought the sector enormous growth rates. However, for some years an increasing saturation has been looming in the markets, especially in the developed nations, and new marketing strategies are needed for companies to distance themselves from their competitors. Against the background of this situation, ICT companies all over the world started to look for new growth opportunities and found them in the so-called "emerging markets" of the developing nations. Exploiting this potential will be the central challenge for the mobile communication industry in the coming years. With this book I want to direct the readers' gaze towards these markets, which hold enormous potential for the whole industry. Furthermore, I want to introduce some generic strategic approaches that can help firms to participate successfully in these markets.
Today we live in a world characterized by a constantly changing environment. During the last decade, this highly volatile environment forced companies to implement strategies that identify, track, and minimise the risks that entrepreneurial activity entails. Unfortunately, risks account for only a part of the insecurity connected to future events. The other, no less important, part of this insecurity consists of possible positive developments – so-called opportunities. For this reason, the view is gaining ground in economic science and in practice that focusing solely on risks is not sufficient to fully exploit the potential of markets and companies. In the 16th century, the Dutch Renaissance humanist scholar Desiderius Erasmus (1466-1536) said: "It is well known that among the blind, the one-eyed man is king." Transferring this statement into the context of Risk Management, the conclusion becomes apparent: the environmental uncertainty surrounding entrepreneurial actions includes both opportunities and threats. As commonly practiced, though, Risk Management tools only address threats. While this approach is surely better than doing nothing, it can still be seen as a major weakness of the traditional Risk Management approach. Nevertheless, in Erasmus's terms, this approach represents the one-eyed man compared with the blind. To continue this metaphor a little further, one may conclude that the one-eyed king could easily be relieved of his crown by an emperor who is able to see with two eyes. Although this problem is well known in economic science, up to now only little scientific attention has been paid to the systematic identification and management of opportunities. In fact, most of the present literature focuses on the identification and handling of risk, and even though much of the recently published literature captures the term opportunity, none of it proposes a solid way of following up on the approach.
Still, facing the challenges of the present economic environment, it is not sufficient for companies to focus their attention on reducing risks. Instead, it is imperative to address the subject of Opportunity Management as well. With this paper, I want to underline the importance of Opportunity Management for all companies, independently of their size or the branch they operate in. Thereby, this paper is dedicated to all managers who strive to improve the professionalism of their companies in terms of strategic thinking. Furthermore, I hope that this paper can facilitate the practical implementation of a working Opportunity Management System.
This research project has been awarded as part of the research competition organized by Connect2Recover, which is a global initiative by the International Telecommunication Union (ITU) with the priority of reinforcing and strengthening the digital infrastructure and ecosystems of developing countries. Carried out by an international and transdisciplinary research consortium, the project sets out to analyze the prospects of digital federation and data sharing within the context of Botswana. Considering the country’s stage of economic and digital development, the project team identified Botswana’s smallholder agricultural sector as the most important area of digital transformation given the development need of the country’s primary sector.
Derived from semi-structured interviews, a focus group, and secondary research, the project team developed a digital transformation roadmap based on three development stages: (a) a crowdfarming pilot, (b) a crowdfarming marketplace, and (c) a digital ecosystem for smallholder agriculture. Based on a detailed review of Botswana's smallholder agriculture and the government's digitalization strategy, the report envisions each phase, especially the pilot project, in terms of a minimum viable product. This accounts for the low level of digital penetration in Botswana's primary sector while providing an incentive to connect smallholders with consumers, traders, and retailers.
The project team succeeded in obtaining commitment from actual smallholder farmers, the farmer association, and the government, as well as support for the idea of developing a crowdfarming marketplace as a novel production model and, eventually, a digital agriculture ecosystem for smallholder farmers, livestock producers, and agricultural technology companies and start-ups. The report is a proposal for a phase-one pilot project with the objective of advancing smallholder agribusiness in Botswana.
Botswana is a country in southern Africa with rich mineral resources, which has built its economy on mining. Due to challenges in the coming years caused by climate and demographic change, it aims to move away from a resource-based economy towards a knowledge-based economy in the long term. To support this process, the Maun Science Park, a centre for research and development, is planned to be created in Maun, a town on the edge of the Okavango Delta. The project was initiated by the "International Resilience and Sustainability Partnership" (inRES), a non-governmental organization, and is currently in the initiation phase.
The purpose of this thesis is to determine a cost framework, with an exemplary developer calculation and a sensitivity analysis, for the Maun Science Park (MSP) project in Botswana. To this end, a literature search was performed as a first step. Based on this, interviews were conducted with members of the inRES. From the data obtained and further assumptions, a cost framework for the different phases of the MSP project was established. Subsequently, a developer calculation was carried out exemplarily on the basis of project phase 2, and a sensitivity analysis was performed.
During the interviews, data was collected on the different project phases. It became clear that the interview partners had partly inconsistent perceptions of the different project phases. The calculation can serve as a basis for further calculations once the planning data are concretized.
In this paper, a novel feature-based sampling strategy for nonlinear Model Predictive Path Integral (MPPI) control is presented. In the MPPI approach, the optimal feedback control is calculated by solving a stochastic optimal control problem (OCP) online, evaluating the weighted inference of sampled stochastic trajectories. While the MPPI algorithm can be excellently parallelized, the closed-loop performance strongly depends on the information quality of the sampled trajectories. To draw samples, a proposal density is used. The solver's, and thus the controller's, performance is high if the sampled trajectories drawn from this proposal density are located in low-cost regions of the state-space. In classical MPPI control, the explored state-space is strongly constrained by assumptions on the control value's covariance matrix, which are necessary for transforming the stochastic Hamilton–Jacobi–Bellman (HJB) equation into a linear second-order partial differential equation. To achieve excellent performance even with discontinuous cost functions, in this novel approach, knowledge-based features are introduced to constitute the proposal density and thus the low-cost region of the state-space for exploration. This paper addresses the question of how the performance of the MPPI algorithm can be improved using a feature-based mixture of base densities. Furthermore, the developed algorithm is applied to an autonomous vessel that follows a track and concurrently avoids collisions using an emergency braking feature. The presented feature-based MPPI algorithm is applied and analyzed in both simulation and full-scale experiments.
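The "weighted inference of sampled stochastic trajectories" at the heart of classical MPPI is an exponentially cost-weighted average of sampled control perturbations. A minimal single-input sketch under that standard formulation; the variable names, the scalar simplification, and the temperature value are mine, not the paper's.

```python
import math

def mppi_update(nominal_u, sampled_costs, sampled_noise, lam=1.0):
    """Path-integral control update for a single control input.

    Each sampled trajectory k has a total cost sampled_costs[k] and a
    control perturbation sampled_noise[k]; low-cost samples get
    exponentially larger weights (temperature lam).
    """
    beta = min(sampled_costs)  # subtract best cost for numerical stability
    weights = [math.exp(-(c - beta) / lam) for c in sampled_costs]
    eta = sum(weights)
    du = sum(w * n for w, n in zip(weights, sampled_noise)) / eta
    return nominal_u + du
```

The paper's contribution concerns where the samples come from (a feature-based mixture proposal density) rather than this update rule itself, which stays the same once the samples are drawn.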
Image novelty detection is a recurring task in computer vision and describes the detection of anomalous images based on a training dataset consisting solely of normal reference data. It has been found that neural networks in particular are well suited to the task. Our approach first transforms the training and test images into ensembles of patches, which enables the assessment of mean shifts between normal data and outliers. As mean shifts are only detectable when the outlier ensemble and the inlier distribution are spatially separate from each other, a rich feature space, such as that of a pre-trained neural network, needs to be chosen to represent the extracted patches. For mean-shift estimation, the Hotelling T² test is used. The size of the patches turned out to be a crucial hyperparameter that requires additional domain knowledge about the spatial size of the expected anomalies (local vs. global). This also affects model selection and the chosen feature space, as commonly used Convolutional Neural Networks and Vision Transformers have very different receptive field sizes. To showcase the state-of-the-art capabilities of our approach, we compare results with classical and deep learning methods on the popular CIFAR-10 dataset, and demonstrate its real-world applicability in a large-scale industrial inspection scenario using the MVTec dataset. Because of the inexpensive design, our method can be implemented with a single additional 2D-convolution and pooling layer and allows particularly fast prediction times while being very data-efficient.
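The Hotelling T² statistic used for mean-shift estimation can be sketched in two dimensions, where the required covariance inverse can be written out by hand. This is a generic one-sample T² for toy 2-D feature vectors, not the paper's high-dimensional implementation.

```python
def hotelling_t2(sample, mu):
    """One-sample Hotelling T^2 for 2-D points: measures how far the
    sample mean is shifted from the reference mean mu, scaled by the
    sample covariance. T^2 = n * d^T S^{-1} d with d = mean - mu."""
    n = len(sample)
    mx = sum(p[0] for p in sample) / n
    my = sum(p[1] for p in sample) / n
    # unbiased sample covariance entries
    sxx = sum((p[0] - mx) ** 2 for p in sample) / (n - 1)
    syy = sum((p[1] - my) ** 2 for p in sample) / (n - 1)
    sxy = sum((p[0] - mx) * (p[1] - my) for p in sample) / (n - 1)
    det = sxx * syy - sxy * sxy
    dx, dy = mx - mu[0], my - mu[1]
    # 2x2 inverse written out explicitly
    return n * (dx * (syy * dx - sxy * dy) + dy * (sxx * dy - sxy * dx)) / det
```

A patch ensemble whose mean lies far from the inlier mean, relative to the inlier scatter, yields a large T² and is flagged as novel; in practice the points would be high-dimensional patch features from the pre-trained network.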
This thesis deals with the background, theory, design, layout, and experimental test results of an analogue CMOS VLSI current-mode analog-to-digital converter. The system supports a project whose goal is to build a biologically relevant model of synaptic plasticity, named the Artificial Synapse. A critical part of the design, which is based on analogue CMOS VLSI circuits, is the ability to activate a discrete number of channels by sampling an analogue signal. Since currents are the signal of interest and the transistors are biased in weak inversion (the subthreshold regime), the system requires a current-mode A/D circuit that can operate at ultra-low power and current levels. To meet this need, two new innovative A/D converter approaches are proposed to replace the system's previous A/D converter design, which suffered from a non-linear resolution, an uncoded output code, and heavy bit oscillations. The initial technical requirements and key criteria for the new converter comprise a resolution of one nanoampere, an input current range between 0 and 100 nA, conversion frequencies of up to 5 kHz, and a power supply voltage of less than 1.5 V. Temperature range, area occupation, and power dissipation were not specified due to the early stage of the related Artificial Synapse project. The novel converters both produce seven-bit thermometer codes; their functional principle is best described as that of current-mode flash analog-to-digital converters (ADCs). Because the input signal is a subthreshold current, it is self-evident that the A/D converter design should operate in the subthreshold realm. To support low-power operation, clocks and high currents were excluded from the design from the very start. To encode the thermometer code into standard binary code, a seven-to-three encoder was designed and integrated on the chip. In October 2003, the design was submitted for production to the MOSIS circuit fabrication service.
The AMI Semiconductor 1.5 micron ABN CMOS process was chosen to manufacture the chip. When the chip was returned in January 2004, measurements showed that both new A/D converter approaches achieved the excellent results expected from SPICE simulations. With the new chip installed, it became possible to resolve input currents as small as one nanoampere and achieve conversion frequencies of up to 5 kHz. Both circuits also meet the requirements set at the beginning of the project: they operate at a power supply voltage of less than 1.5 V while processing input currents in the range of 0 to 100 nA. A prototype printed circuit board (PCB) was developed, produced, and employed for experiments with the chip. The major feature of this test-bed is the ability to generate and measure extremely low currents with high precision, which enables the monitoring of the very small currents processed by the chip.
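The logical function of the on-chip seven-to-three encoder, converting a seven-bit thermometer code into standard binary, can be sketched in software: in a valid thermometer code, the binary value is simply the number of asserted bits. The sketch is a behavioral model only, not the hardware implementation.

```python
def thermo_to_binary(code: int) -> int:
    """Behavioral model of a 7-to-3 encoder: map a 7-bit thermometer
    code (e.g. 0b0001111) to its 3-bit binary value by counting the
    asserted bits."""
    return bin(code & 0x7F).count("1")
```

For a flash converter, each of the seven comparator outputs asserts when the input current exceeds its reference level, so a valid code is always a contiguous run of ones from the least significant bit, and counting ones recovers the quantized level.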
Sleep disorders can impact daily life, affecting physical, emotional, and cognitive well-being. Because standard approaches such as polysomnography are time-consuming, highly obtrusive, and expensive, it is of great interest to develop a noninvasive and unobtrusive in-home sleep monitoring system that can reliably and accurately measure cardiorespiratory parameters while causing minimal discomfort to the user's sleep. We developed a low-cost, low-complexity Out-of-Center Sleep Testing (OCST) system to measure cardiorespiratory parameters. We tested and validated two force-sensitive resistor strip sensors under the bed mattress, covering the thoracic and abdominal regions. Twenty subjects were recruited, 12 males and 8 females. The ballistocardiogram signal was processed using the 4th smoothing level of the discrete wavelet transform and a 2nd-order Butterworth bandpass filter to measure the heart rate and respiration rate, respectively. We reached a total error (with respect to the reference sensors) of 3.24 beats per minute for heart rate and 2.32 breaths per minute for respiration rate. For males and females, heart rate errors were 3.47 and 2.68, and respiration rate errors were 2.32 and 2.33, respectively. We developed the system and verified its reliability and applicability. It showed only a minor dependency on sleeping position, one of the major difficulties in sleep measurement. We identified the sensor under the thoracic region as the optimal configuration for cardiorespiratory measurement. Although testing the system with healthy subjects showing regular cardiorespiratory patterns yielded promising results, further investigation of the filter bandwidth and validation of the system with larger groups of subjects, including patients, are required.
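Once individual heartbeats or breaths have been detected in the filtered ballistocardiogram (the wavelet and filtering steps are omitted here), the rate estimate reduces to averaging the inter-event intervals. A minimal sketch; the function name and the example timestamps are mine.

```python
def rate_per_minute(event_times_s):
    """Mean event rate (per minute) from a list of event timestamps in
    seconds, e.g. detected heartbeats or breaths."""
    intervals = [b - a for a, b in zip(event_times_s, event_times_s[1:])]
    return 60.0 / (sum(intervals) / len(intervals))
```

The same routine serves both channels: beat timestamps from the cardiac band give the heart rate, breath timestamps from the respiratory band give the respiration rate.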
Fatigue and drowsiness are responsible for a significant percentage of road traffic accidents. There are several approaches to monitoring a driver's drowsiness, ranging from the driver's steering behavior to analysis of the driver, e.g. eye tracking, blinking, yawning, or the electrocardiogram (ECG). This paper describes the development of a low-cost ECG sensor to derive heart rate variability (HRV) data for drowsiness detection. The work includes the hardware and the software design. The hardware has been implemented on a printed circuit board (PCB) designed so that the board can be used as an extension shield for an Arduino. The PCB contains a double, inverted ECG channel including low-pass filtering and provides two analog outputs to the Arduino, which combines them and performs the analog-to-digital conversion. The digital ECG signal is transferred to an NVidia embedded PC where the processing takes place, including QRS-complex, heart rate, and HRV detection as well as visualization features. The resulting compact sensor provides good results in extracting the main ECG parameters. The sensor is being used in a larger framework, where facial-recognition-based drowsiness detection is combined with ECG-based detection to improve the recognition rate under unfavorable lighting or occlusion conditions.
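A common time-domain HRV measure derived from detected QRS complexes is the RMSSD over successive RR intervals. Whether this particular measure is the one used in the paper is not stated here; the sketch only illustrates the kind of HRV feature such a pipeline computes.

```python
import math

def rmssd(rr_intervals_ms):
    """Root mean square of successive differences between RR intervals
    (in ms), a standard time-domain HRV measure; lower values indicate
    reduced beat-to-beat variability."""
    diffs = [b - a for a, b in zip(rr_intervals_ms, rr_intervals_ms[1:])]
    return math.sqrt(sum(d * d for d in diffs) / len(diffs))
```

The RR intervals are the times between consecutive detected R peaks; a drop in such variability measures is one of the cardiac markers associated with drowsiness.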
Shared Field, Divided Field
(2020)
Anthropologists’ arrival stories have long served to justify, naturalize, and domesticate—often through humor—the fraught moment of entering unasked into other people's lives. This textual convention has been thoroughly critiqued, but no comparable attention has been paid to the analogous moment of departure from the field. The digital age enables both sides to maintain contact, a shift that negates the finality of earlier departures. This article engages the changes wrought by digital media that allow us to remain connected to the field. While this seems a humane affordance, it also means that it is no longer feasible to cleanly sever ties established ‘there’. When anthropologists leave the field, the field will likely follow them—on Facebook or Instagram.
In this article, the collection of classes of matrices presented in [J. Garloff, M. Adm, and J. Titi, A survey of classes of matrices possessing the interval property and related properties, Reliab. Comput. 22:1-14, 2016] is continued. That is, given an interval of matrices with respect to a certain partial order, it is desired to know whether a special property of the entire matrix interval can be inferred from some of its element matrices lying on the vertices of the matrix interval. The interval property of some matrix classes found in the literature is presented, and the interval property of further matrix classes, including the ultrametric, the conditionally positive semidefinite, and the infinitely divisible matrices, is given for the first time. For the inverse M-matrices, the cardinality of the required set of vertex matrices known so far is significantly reduced.
The scoring of sleep stages is one of the essential tasks in sleep analysis. Since a manual procedure requires considerable human and financial resources and incorporates some subjectivity, an automated approach could offer several advantages. There have been many developments in this area, and in order to provide a comprehensive overview, it is essential to review relevant recent works and summarise the characteristics of the approaches, which is the main aim of this article. To achieve it, we examined articles published between 2018 and 2022 that dealt with the automated scoring of sleep stages. In the final selection for in-depth analysis, 125 articles were included after reviewing a total of 515 publications. The results revealed that automatic scoring demonstrates good quality (with Cohen's kappa above 0.80 and accuracy above 90%) in analysing EEG/EEG + EOG + EMG signals. At the same time, it should be noted that there has been no breakthrough in the quality of results using these signals in recent years. Systems involving other signals that could potentially be acquired more conveniently for the user (e.g. respiratory, cardiac, or movement signals) remain more challenging to implement with a high level of reliability but have considerable innovation capability. In general, automatic sleep stage scoring has excellent potential to assist medical professionals while providing an objective assessment.
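Cohen's kappa, the agreement measure the review relies on, corrects raw agreement between an automatic scorer and a human expert for agreement expected by chance. A minimal sketch over two label sequences:

```python
def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: chance-corrected agreement between two scorers
    over the same sequence of items (e.g. 30 s sleep epochs)."""
    n = len(rater_a)
    labels = set(rater_a) | set(rater_b)
    # observed agreement
    p_obs = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # expected agreement under independent scoring
    p_exp = sum((rater_a.count(l) / n) * (rater_b.count(l) / n)
                for l in labels)
    return (p_obs - p_exp) / (1.0 - p_exp)
```

Kappa is 1 for perfect agreement and 0 when the scorers agree no more than chance would predict, which is why a value above 0.80 is treated as strong agreement in the reviewed literature.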
Sustainable technologies are being increasingly used in various areas of human life. While they offer a multitude of benefits, they are particularly useful for health monitoring, especially for certain groups of people such as the elderly. However, several issues still need to be addressed before their use becomes widespread. This work aims to clarify the aspects that are of great importance for increasing the acceptance of this type of technology among the elderly. In addition, we aim to clarify whether the technologies that are already available can ensure acceptable accuracy and whether they could replace some of the manual approaches currently in use. A two-week study with people 65 years of age and over was conducted to address the questions posed here, and the results were evaluated. It was demonstrated that simplicity of use and automatic functioning play a crucial role. It was also concluded that technology cannot yet completely replace traditional methods such as questionnaires in some areas. Although the technologies that were tested were classified as being “easy to use”, the elderly participants in the current study indicated that they were not sure that they would use these technologies regularly in the long term because, among other issues, the added value is not always clear. Therefore, awareness-raising must take place in parallel with the development of technologies and services.
In order to ensure sufficient recovery of the human body and brain, healthy sleep is indispensable. For this purpose, appropriate therapy should be initiated at an early stage in the case of sleep disorders. For some sleep disorders (e.g., insomnia), a sleep diary is essential for diagnosis and therapy monitoring. However, subjective measurement with a sleep diary has several disadvantages, requiring regular action from the user and leading to decreased comfort and potential data loss. To automate sleep monitoring and increase user comfort, one could consider replacing the sleep diary with an automatic measurement device, such as a smartwatch, which would not disturb sleep. To obtain accurate results on the feasibility of such a replacement, a field study was conducted with a total of 166 overnight recordings, followed by an analysis of the results. In this evaluation, objective sleep measurement with a Samsung Galaxy Watch 4 was compared to the subjective approach with a sleep diary, which is a standard method in sleep medicine. The focus was on comparing four relevant sleep characteristics: falling-asleep time, waking-up time, total sleep time (TST), and sleep efficiency (SE). After evaluating the results, it was concluded that a smartwatch could replace subjective measurement of falling-asleep and waking-up time, allowing for some level of inaccuracy. In the case of SE, substitution was also shown to be possible. However, some individual recordings showed a higher discrepancy between the results of the two approaches. The evaluation of the TST measurement, by contrast, does not currently allow us to recommend substituting the measurement method for this sleep parameter. The appropriateness of replacing sleep diary measurement with a smartwatch depends on the acceptable levels of discrepancy. We propose four levels of similarity of results, defining ranges of absolute differences between objective and subjective measurements.
By considering the values in the provided table and knowing the required accuracy, it is possible to determine the suitability of substitution in each individual case. The introduction of a “similarity level” parameter increases the adaptability and reusability of study findings in individual practical cases.
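The proposed "similarity level" classification can be sketched as follows; the thresholds below are purely hypothetical placeholders, not the values from the paper's table:

```python
# Hypothetical similarity levels: ranges of absolute differences (in minutes)
# between the smartwatch and the sleep-diary value of a sleep parameter.
# The bounds are illustrative placeholders, NOT the paper's table.
SIMILARITY_LEVELS = [
    (5,  "level 1 (very close)"),
    (15, "level 2 (close)"),
    (30, "level 3 (moderate)"),
]

def similarity_level(watch_minutes: float, diary_minutes: float) -> str:
    """Classify the absolute watch-diary difference into a similarity level."""
    diff = abs(watch_minutes - diary_minutes)
    for bound, label in SIMILARITY_LEVELS:
        if diff <= bound:
            return label
    return "level 4 (substantial)"
```

Given the required accuracy of a particular application, one would pick the highest acceptable level and check whether the smartwatch stays within the corresponding range.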
The McEliece cryptosystem is a promising candidate for post-quantum public-key encryption. In this work, we propose q-ary codes over Gaussian integers for the McEliece system, together with a new channel model, the one Mannheim error channel, in which errors are restricted to Mannheim weight one. We investigate the capacity of this channel and discuss its relation to the McEliece system. The proposed codes are based on a simple product code construction and have a low-complexity decoding algorithm. For the one Mannheim error channel, these codes achieve a higher error correction capability than maximum distance separable codes with bounded minimum distance decoding. This improves the work factor with regard to decoding attacks based on information-set decoding.
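As an aside, the restriction to Mannheim weight one can be illustrated with Huber's modulo function for Gaussian integers (a standard construction that we assume matches the paper's setting): for p = 5 with π = 2 + i, every nonzero residue already has Mannheim weight one, i.e. lies in {±1, ±i}:

```python
def gaussian_mod(z: complex, pi: complex) -> complex:
    """Reduce a Gaussian integer z modulo pi (Huber's modulo function):
    z mod pi = z - [z * conj(pi) / p] * pi, with [.] rounding real and
    imaginary parts and p = pi * conj(pi) the norm of pi."""
    p = (pi * pi.conjugate()).real
    q = z * pi.conjugate() / p
    q_rounded = complex(round(q.real), round(q.imag))
    return z - q_rounded * pi

def mannheim_weight(z: complex) -> int:
    """Mannheim weight: |Re z| + |Im z| of the reduced residue."""
    return int(abs(z.real) + abs(z.imag))

# GF(5) mapped to Gaussian integers via pi = 2 + i (5 = 2^2 + 1^2):
residues = [gaussian_mod(k, 2 + 1j) for k in range(5)]
```

Since every error value of Mannheim weight one is one of ±1, ±i, a weight-one error channel has only four possible error values per symbol.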
Error correction coding for optical communication and storage requires high rate codes that enable high data throughput and low residual errors. Recently, different concatenated coding schemes were proposed that are based on binary BCH codes with low error correcting capabilities. In this work, low-complexity hard- and soft-input decoding methods for such codes are investigated. We propose three concepts to reduce the complexity of the decoder. For algebraic decoding, we demonstrate that Peterson's algorithm can be more efficient than the Berlekamp-Massey algorithm for single, double, and triple error correcting BCH codes. We propose an inversion-less version of Peterson's algorithm and a corresponding decoding architecture. Furthermore, we propose a decoding approach that combines algebraic hard-input decoding with soft-input bit-flipping decoding. An acceptance criterion is utilized to determine the reliability of the estimated codewords. For many received codewords the acceptance criterion indicates that the hard-decoding result is sufficiently reliable, and the costly soft-input decoding can be omitted. To reduce the memory size for the soft values, we propose a bit-flipping decoder that stores only the positions and soft values of a small number of code symbols. This method significantly reduces the memory requirements and has little adverse effect on the decoding performance.
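The claim that Peterson's algorithm is cheap for small t can be illustrated with the degenerate t = 1 case, where the error position is simply the discrete log of the first syndrome. The sketch below uses a (15, 11) single-error-correcting BCH (Hamming) code over GF(2^4) with primitive polynomial x^4 + x + 1; it is an illustration of the principle, not the decoder architecture proposed in the paper:

```python
# Build exponent/log tables for GF(16) generated by x^4 + x + 1.
EXP = [0] * 15
LOG = [0] * 16
x = 1
for i in range(15):
    EXP[i] = x
    LOG[x] = i
    x <<= 1
    if x & 0x10:          # reduce modulo x^4 + x + 1 (0b10011)
        x ^= 0x13

def syndrome(bits):
    """S1 = r(alpha): evaluate the received polynomial at alpha."""
    s = 0
    for i, b in enumerate(bits):
        if b:
            s ^= EXP[i % 15]
    return s

def correct_single_error(received):
    """Peterson decoding for t = 1: the error locator polynomial is
    linear, so the error position is the discrete log of S1."""
    s1 = syndrome(received)
    if s1 == 0:
        return received          # accepted as error-free
    corrected = received[:]
    corrected[LOG[s1]] ^= 1      # flip the located bit
    return corrected
```

For t = 2 and t = 3, Peterson's method still reduces to solving small fixed-size linear systems over GF(2^m), which is what makes it competitive with Berlekamp-Massey at these code parameters.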
This paper introduces the concept of Universal Memory Automata (UMA) and the automated compilation of Verilog Hardware Description Language (HDL) code at Register Transfer Level (RTL) from UMA graphs for digital designs. The idea is based on the observation that Push Down Automata (PDA) are able to process the Dyck language - commonly known as the balanced bracket problem - with a finite set of states, while Finite State Machines (FSM) would require an infinite set of states. Since infinite sets of states are not applicable to real designs, PDAs appear promising for types of problems similar to the Dyck language. PDAs suffer from the problem that complex memory operations need to be emulated by specific stack management. The presented UMA therefore extends the PDA by other types of memory, e.g. queue, RAM or CAM. Memories that are eligible for UMAs are supposed to have at least one read and one write port and a one-cycle read/write latency. With their modified state-transition and output functions, UMAs are able to operate user-defined numbers, configurations and types of memories. Proof of concept is given by the implementation of a cache coherency protocol, i.e. a practical problem in microprocessor design.
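The Dyck-language observation can be illustrated by a minimal PDA-style checker: a single control state plus a stack recognizes balanced brackets of arbitrary nesting depth, whereas a pure FSM would need a distinct state per depth, i.e. infinitely many:

```python
# A PDA-style checker for the Dyck (balanced-bracket) language:
# one control state, all history lives on the stack.
PAIRS = {')': '(', ']': '[', '}': '{'}

def balanced(s: str) -> bool:
    stack = []                        # the PDA's stack
    for ch in s:
        if ch in PAIRS.values():      # opening bracket: push
            stack.append(ch)
        elif ch in PAIRS:             # closing bracket: pop and match
            if not stack or stack.pop() != PAIRS[ch]:
                return False
    return not stack                  # accept only with an empty stack
```

The UMA generalizes exactly this pattern: the control logic stays small while unbounded bookkeeping is delegated to an explicit memory (stack, queue, RAM or CAM).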
In Maun, Botswana, a self-sufficient, sustainable and future-oriented district is to be created, the Maun Science Park. Within this project, several 5-8 storey smart homes shall be built using sustainable construction. The aim of this thesis is to develop a sustainable structural concept for those homes of the Maun Science Park. In a first step, the general basics for tall building structures and sustainable construction were established. Based on those fundamentals, criteria for the structural requirements as well as for the ecological and social sustainability of a structural design could be defined. Subsequently, four structural systems were drafted: a concrete core structure, a steel shear frame structure, a rammed earth shear wall structure and a wooden diagrid structure. In addition to the pre-dimensioning of the systems, a life cycle assessment was set up to evaluate the ecological sustainability of the designs. With the help of a utility value analysis, the wooden diagrid structure was determined to be the preferred variant. The comparison of the designs also allows general conclusions to be drawn for the development of sustainable tall building structures. The results of the life cycle assessment show the advantage of wood as an ecological building material over industrially manufactured building materials such as steel and concrete, whereas rammed earth, a likewise ecological building material, is not convincing due to its low strength. In general, the life cycle assessment reveals a trade-off between ecological and industrially manufactured products with regard to strength and environmental impact. In terms of social sustainability, the design of the structural system can significantly influence flexibility and the use of local resources. However, due to the diversity of sustainable construction, the development of a structural system should be linked to an overarching sustainability concept that takes architecture and stakeholders into account.
When using multi-camera matching techniques for 3D reconstruction, there is usually a trade-off between the quality of the computed depth map and the speed of the computation. Whereas high-quality matching methods take several seconds to several minutes to compute a depth map for one set of images, real-time methods achieve only low-quality results. In this paper we present a multi-camera matching method that runs in real time and yields high-resolution depth maps. Our method is based on a novel multi-level combination of normalized cross correlation, matching windows deformed according to the multi-level depth map information, and sub-pixel precise disparity maps. The whole process is implemented completely on the GPU. With this approach we can process four 0.7-megapixel images into a full-resolution 3D depth map in 129 milliseconds. Our technique is tailored to the recognition of non-technical shapes, because our target application is face recognition.
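As background, the normalized cross correlation (NCC) at the core of such matching can be sketched in a few lines; this is a plain scalar implementation on flattened patch vectors, not the paper's GPU variant:

```python
from math import sqrt

def ncc(a, b):
    """Normalized cross correlation of two equal-length patch vectors:
    both patches are mean-centered, then the correlation is normalized
    by the product of their standard deviations. The result lies in
    [-1, 1] and is invariant to affine brightness changes."""
    n = len(a)
    ma = sum(a) / n
    mb = sum(b) / n
    da = [x - ma for x in a]
    db = [y - mb for y in b]
    num = sum(x * y for x, y in zip(da, db))
    den = sqrt(sum(x * x for x in da) * sum(y * y for y in db))
    return num / den if den else 0.0
```

The brightness invariance is the reason NCC is a common matching score: a patch and a brighter or higher-contrast copy of it still score 1.0.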
This policy brief presents the possibilities of using big data analytics for safe, decarbonised and climate-resilient infrastructure. The policy brief focuses on current constraints and limitations to applying big data analytics to the infrastructure ecosystem and presents several examples and best practices for different infrastructure sectors and at different policy levels (national, municipal) to highlight recommendations and policy requirements needed for deep digital transformation and sustainable solutions in infrastructure planning and delivery.