Refine
Year of publication
- 2020 (75)
Document Type
- Conference Proceeding (41)
- Article (19)
- Master's Thesis (4)
- Doctoral Thesis (3)
- Book (2)
- Part of a Book (2)
- Report (2)
- Bachelor Thesis (1)
- Study Thesis (1)
Language
- English (75)
Keywords
- 3D ship detection (1)
- Accelerometers (1)
- Accessible Tourism (1)
- Actions (1)
- Adaptive (1)
- Adivasi (1)
- Assisted living (1)
- Bayesian convolutional neural networks (1)
- Benchmark (1)
- Bernstein coefficient (1)
Institute
- Fakultät Architektur und Gestaltung (2)
- Fakultät Bauingenieurwesen (3)
- Fakultät Elektrotechnik und Informationstechnik (1)
- Fakultät Informatik (17)
- Fakultät Maschinenbau (1)
- Fakultät Wirtschafts-, Kultur- und Rechtswissenschaften (6)
- Institut für Angewandte Forschung - IAF (1)
- Institut für Optische Systeme - IOS (8)
- Institut für Strategische Innovation und Technologiemanagement - IST (7)
- Institut für Systemdynamik - ISD (14)
Digitalization is one of the most frequently discussed topics in industry. New technologies, platform concepts and integrated data models enable disruptive business models and drive changes in organization, processes, and tools. The goal is to make a company more efficient, more productive and ultimately more profitable. However, many companies face the challenge of approaching digital transformation in a structured way in order to realize these potential benefits. What they recognize is that Product Lifecycle Management plays a key role in digitalization initiatives, as object, structure and process management along the life cycle is a foundation for many digitalization use cases. The introduced maturity model for assessing a firm’s capabilities along the product lifecycle has been used almost two hundred times. It allows a company to compare its performance with an industry-specific benchmark to reveal individual strengths and weaknesses. Furthermore, an empirical study produced multidimensional correlation coefficients, which identify dependencies between business model characteristics and the maturity level of capabilities.
The transformation to Industry 4.0, which is generally seen as a solution to increasing market challenges, is forcing companies to radically change their way of thinking and to be open to new forms of cooperation. In this context, the opening-up of the innovation process is widely seen as a necessity to meet these challenges, especially for small and medium enterprises (SMEs). The aim of the study therefore is to analyze how cooperation today can be characterized, how this character has changed since the establishment of the term Industry 4.0 at the Hanover Fair in 2011, and which cooperation strategies have proven successful. The analysis consists of a quantitative secondary data analysis that includes country-specific data from 35 European countries for 2010 and 2016, collected by the European Commission and the OECD. The research, focusing on the secondary sector, shows that multinational enterprises (MNEs) still tend to cooperate more than SMEs, with a slight overall trend towards protectionism. Nevertheless, there is a clear tendency towards the opening-up of SMEs. In this regard, especially universities, competitors and suppliers have become increasingly attractive as cooperation partners for SMEs.
Despite the importance of Social Life Cycle Sustainability Assessment (S-LCSA), little research has addressed its integration into Product Lifecycle Management (PLM) systems. This paper presents a structured review of relevant research and practice. To address practical aspects in more detail, it also focuses on challenges and the potential for adoption of such an integrated system at an electronics company.
We began by reviewing the literature on implementations of S-LCSA and identifying research needs. We then investigated the status of S-LCSA within the electronics industry, both by reviewing the literature and by interviewing decision makers, to identify challenges and the potential for adopting S-LCSA at an electronics company. We found a low maturity of S-LCSA, in particular difficulties in quantifying social sustainability. Adoption of S-LCSA was less common among electronics industry suppliers, especially mining and smelting plants. Our results could provide a basis for conducting case studies that further clarify the issues involved in integrating S-LCSA into PLM systems.
This paper examines the corporate organisational aspects of the implementation of Industry 4.0. Industry 4.0 builds on new technologies and appears as a disruptive innovation to manufacturing firms. Although we have a good understanding of the technical components, the management and organisational aspects of implementing Industry 4.0 are under-researched. It is challenging to find qualitative empirical evidence which provides comprehensive insights into real implementation cases. Based on a case study in a German high-value manufacturing firm, we explore the corporate organisation and implementation of Industry 4.0. By using the framework of Complex Adaptive Systems (CAS), we have identified three key factors which facilitate the implementation of Industry 4.0, namely: (1) organisational structure changes, such as the foundation of a central department for digital transformation; (2) the appointment of a Chief Digital Officer as a personnel change; and (3) corporate opening-up towards cooperating with partners as a cultural change. We have furthermore found that Lean Management is an important enabler that ensures readiness for the adoption of Industry 4.0.
Forecasting is crucial for both system planning and operations in the energy sector. With increasing penetration of renewable energy sources, increasing fluctuations in the power generation need to be taken into account. Probabilistic load forecasting is a young, but emerging research topic focusing on the prediction of future uncertainties. However, the majority of publications so far focus on techniques like quantile regression, ensemble, or scenario-based methods, which generate discrete quantiles or sets of possible load curves. The conditioned probability distribution remains unknown and can only be estimated when the output is post-processed using a statistical method like kernel density estimation.
Instead, the proposed probabilistic deep learning model uses a cascade of transformation functions, known as a normalizing flow, to model the conditioned density function from a smart meter dataset containing electricity demand information for over 4,000 buildings in Ireland. Since the whole probability density function is tractable, the parameters of the model can be obtained by minimizing the negative log-likelihood using state-of-the-art gradient descent methods. This yields the model with the best representation of the data distribution.
Two different deep learning models have been compared: a simple three-layer fully connected neural network, and a more advanced convolutional neural network for sequential data processing inspired by the WaveNet architecture. These models have been used to parametrize three different probabilistic models: a simple normal distribution, a Gaussian mixture model, and the normalizing flow model. The prediction horizon is set to one day with a resolution of 30 minutes, hence the models predict 48 conditioned probability distributions.
The normalizing flow model outperforms the two other variants for both architectures and proves its ability to capture the complex structures and dependencies causing the variations in the data. Understanding the stochastic nature of the task in such detail makes the methodology applicable for other use cases apart from forecasting. It is shown how it can be used to detect anomalies in the power grid or generate synthetic scenarios for grid planning.
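The core mechanics of density estimation by minimizing the negative log-likelihood through a tractable change of variables can be sketched at toy scale. The single affine flow, synthetic data, and parameter names below are illustrative assumptions, not the models described above:

```python
import numpy as np

# Minimal sketch: a single affine "flow" mapping data x to a latent z with a
# standard-normal base density. The exact change-of-variables formula
#   log p(x) = log N(z; 0, 1) + log |dz/dx|
# makes the density tractable, so the parameters can be fitted by minimizing
# the negative log-likelihood (NLL) with plain gradient descent.

rng = np.random.default_rng(0)
x = rng.normal(loc=3.0, scale=2.0, size=1000)  # toy "demand" data

mu, log_sigma = 0.0, 0.0   # flow parameters: z = (x - mu) * exp(-log_sigma)
lr = 0.1
for _ in range(500):
    z = (x - mu) * np.exp(-log_sigma)
    # per-sample NLL = 0.5 * z^2 + log_sigma + const; its exact gradients:
    grad_mu = np.mean(-z) * np.exp(-log_sigma)
    grad_log_sigma = np.mean(-z * z) + 1.0
    mu -= lr * grad_mu
    log_sigma -= lr * grad_log_sigma

print(mu, np.exp(log_sigma))  # close to the true parameters (3, 2)
```

A real normalizing flow stacks many such invertible transformations and conditions their parameters on covariates, but the training objective is exactly this NLL.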
In order to examine inflation and deflation tendencies due to the COVID-19 pandemic and how central banks attempt to influence them, this paper compares news regarding the measures of central banks in Europe, the USA and Japan. Factors affecting inflation are defined in conjunction with the typical measures of central banks, and conclusions are drawn with respect to differences in the most recent corrective behavior. The paper concludes by discussing how price levels might develop during and after the crisis.
Cities around the world are facing an increasing number of global and local challenges, such as climate change and scarcity of raw materials. At the same time, trends like digitalization, globalization and networking gain in importance. For this reason, cities have started implementing smart solutions within the urban structure in order to evolve towards a Smart City. In Botswana, the Maun Science Park is intended to provide a best-practice approach for a Botswanan Smart City. Since Smart City concepts have to be specifically tailored to local conditions, the first main goal of this thesis is to develop a synthesis concept for the Maun Science Park. A key problem in cities is the utilization of space, which is further intensified by increasing urbanization and population growth. Therefore, the second main goal is to develop approaches of (digitally) re-programmable space in order to use available areas intelligently and in an optimized way.
Within the thesis, human-centered design has been applied as the structure-giving methodology. By clarifying relevant Smart City contents, considering reference examples, and identifying local challenges and requirements, an appropriate concept has been developed with a human focus. Furthermore, the methodologies of literature research and expert interviews have been used as input in the individual human-centered design phases. In combination with an innovation funnel, the human-centered design methodology forms the structure of the thesis.
In total, ten main solution areas and 37 sub-segments have been identified for the synthesis concept of the Maun Science Park. Additionally, a concept for Smart Buildings has been developed as a part of the synthesis concept and as an essential infrastructure component of the Maun Science Park (three main segments, 16 sub-segments). Based on expert input, a prioritization has been determined by evaluating the impact and economic affordability of the individual sub-areas. Moreover, individual key areas have been highlighted by identifying direct interactions between sub-segments and on the basis of expert input; these are particularly related to the segments Smart Data and Smart People. Besides the synthesis concept, approaches of (digitally) re-programmable space have been created. Ten of these approaches refer to the conversion, reuse or expansion abilities of space within daily, weekly or life cycles. In addition, the conventional idea of (digitally) re-programmable space has been extended by two new considerations, “multi-purpose use of built-up space” and “concept programming in the planning phase”. Finally, within an overall consideration combining the synthesis concept with the approaches of (digitally) re-programmable space, the added value of the developed contents has been outlined, positive and negative aspects have been identified within a SWOT analysis, and the business model of the Maun Science Park approach has been verified in a Business Model Canvas.
Through explicit elaboration, classification and prioritization of solution areas, the developed concept can serve as a basis for further project steps. Based on the defined requirements of the sub-segments, solutions can be developed with regard to the entire Smart City context.
Throughout this thesis, the implementation of tools for knowledge management as a key factor for sustainable corporate development is presented. In industries with a high fluctuation rate, such as construction, efficient knowledge management is of particular importance. Companies feel the effects of negligent handling of this resource especially during the Corona pandemic. Restructuring leads to experienced employees leaving the company, and with them the know-how and experience gained. With a systematic knowledge transfer, the most important insights in such situations remain within the organization. Thus, the company becomes crisis-proof and receives all the tools it needs to grow healthily again after the recession. Practical data from competitors indicates that knowledge management promises savings potential of several million euros per year for BAM. Further potentials in the areas of sustainability, customer and employee satisfaction, as well as occupational safety, which do not lead to direct savings, are also worth mentioning. This thesis determines the current maturity level of knowledge management at BAM before introducing processes and systems that successively drive the improvement. The developed methods simultaneously help to prevent and solve problems and systematically promote the continuous improvement of all work processes in the company.
Detailed steps are presented to carry out change management towards the successful introduction and further development of knowledge management at BAM. A major focus is on interpersonal factors. The related topic of knowledge culture was recently ranked by the German think tank Zukunftsinstitut as one of the top 5 megatrends for companies in the 2020s. The methods developed contribute to the creation of such a culture and to the transformation of BAM towards a learning organization. Knowledge management identifies with the BAM values. In the course of this thesis, it is shown how the system, by its very nature, helps to implement these values in the work of every employee.
The results of this elaboration were recently awarded the Digital Construction Award 2020 for Business Excellence at BAM Deutschland AG.
In Botswana, a new construction project – the Maun Science Park – is to be built with a focus on sustainability, creating a new living space for the rapidly growing population in Africa. The project will be a blueprint for future projects in Africa in terms of progress, technology and sustainability. This thesis deals with its financial framework and serves as a basis for developing ways and means of financing such projects.
Traditional Western philosophy, cognitive science and traditional HCI frameworks approach the term digital and its implications with an implicit dualism (nature/culture, theory/practice, body/mind, human/machine). What lies between is a feature of our postmodern times, in which different states, conditions or positions merge and co-exist in a new, hybrid reality, a “continuous beta” (Mühlenbeck & Skibicki, 2007) version of becoming. Post-digitality involves the physical dimensions of spatio-temporal engagements. This new ontological paradigm reconceptualizes digital technology through the experience of the human body and its senses, thus emphasizing form-taking, situational engagement and practice rather than symbolic, disembodied rationality. This raises two questions in particular: how to encourage curiosity, playfulness, serendipity, emergence, discourse and collectivity? How to construct working methods without foregrounding and dividing the subject into an individual that already takes position? This paper briefly outlines the rhizomatic framework that I developed within my PhD research. It attempts to overcome two prevailing tendencies: first, the one-sided view of scientific approaches to knowledge acquisition and the purely application-oriented handling of materials, technologies and machines; second, the distanced perception of the world. In contrast, my work involves project-driven alchemic curiosity and doing research through artistic design practice. This means thinking through materials, technologies and machinic interactions. Now, at the end of this PhD journey, 10 interdisciplinary projects have emerged from this ontological queer-paradigm that is post-digital crafting 4.0. Below I illustrate this approach and its outcomes.
Production and marketing of cereal grains are some of the main activities in developing countries to ensure food security. However, the food gap is complicated further by high postharvest loss of grains during storage. This study aimed to compare low‐cost modified‐atmosphere hermetic storage structures with traditional practice to minimize quantitative and qualitative losses of grains during storage. The study was conducted in two phases: in the first phase, seven hermetic storage structures with or without smoke infusion were compared, and one selected structure was further validated at scaled‐up capacity in the second phase.
In modern fruit processing technology, non-destructive quality measuring techniques are sought for determining and controlling changes in the optical, structural, and chemical properties of the products. In this context, changes inside the product can be measured during processing. Especially for industrial use, fast, precise, but robust methods are particularly important to obtain high-quality products. In this work, a newly developed multi-spectral imaging system was implemented and adapted for drying processes. Further, it was investigated if the system could be used to link changes in the surface spectral reflectance during mango drying with changes in moisture content and contents of chemical components. This was achieved by recovering the spectral reflectance from multi-spectral image data and comparing the spectral changes with changes of the total soluble solids (TSS), pH value and the relative moisture content x_wb of the products. In a first step, the camera was modified to be used in drying; then the changes in the spectra and quality criteria during mango drying were measured. For this, mango slices were dried at air temperatures of 40–80 °C and relative air humidities of 5%–30%. Samples were analyzed and pictures were taken with the multi-spectral imaging system. The quality criteria were then predicted from spectral data. It could be shown that the newly developed multi-spectral imaging system can be used for quality control in fruit drying. There are strong indications as well that it can be employed for the prediction of chemical quality criteria of mangoes during drying. This way, quality changes can be monitored inline during the process using only one single measuring device.
Location-aware mobile devices are becoming increasingly popular, and GPS sensors are built into nearly every portable unit with computational capabilities. At the same time, the emergence of location-aware virtual services and ideas calls for new efficient spatial real-time queries. Communication latency in mobile environments interacting with high decentralization, and the need for scalability in high-density systems with immense client counts, lead to major challenges. In this paper we describe a decentralized architecture for continuous range queries in settings in which both the requested and the requesting clients are mobile. While prior works commonly use a request-response approach, we provide a stream-based adaptive grid solution that deals with arbitrarily high client counts and improves communication latency to meet given hard real-time constraints.
A residual neural network was adapted and applied to the PhysioNet/Computing in Cardiology Challenge 2020 data to detect 24 different classes of cardiac abnormalities from 12-lead ECGs. Additive Gaussian noise, signal shifting, and the classification of signal sections of different lengths were applied to prevent the network from overfitting and to facilitate generalization. Due to the use of a global pooling layer after the feature extractor, the network is independent of the signal’s length. On the hidden test set of the challenge, the model achieved a validation score of 0.656 and a full test score of 0.27, placing us 15th out of 41 officially ranked teams (team name: UC_Lab_Kn). These results show the potential of deep neural networks for application to raw data and a complex multi-class multi-label classification problem, even if the training data comes from diverse datasets and is of differing lengths.
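The length-independence property comes from the global pooling step. A minimal numpy sketch (illustrative channel counts and segment lengths, not the challenge model) shows how pooling over the time axis yields a fixed-size feature vector regardless of signal length:

```python
import numpy as np

# Global average pooling collapses the time axis of the feature maps, so the
# classifier head always receives a vector of the same size, however long the
# input ECG segment was. Shapes below are assumptions for illustration.

def global_avg_pool(features):
    """features: (channels, time) -> (channels,)"""
    return features.mean(axis=1)

rng = np.random.default_rng(1)
short = rng.normal(size=(64, 1500))   # e.g. feature maps of a short segment
long = rng.normal(size=(64, 9000))    # e.g. feature maps of a long segment

print(global_avg_pool(short).shape, global_avg_pool(long).shape)  # (64,) (64,)
```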
What drives entrepreneurial action to create a lasting impact? New ventures that aim at having an impact beyond their financial performance face additional challenges: achieving economic sustainability while at the same time addressing social or environmental issues. Little is known about how these new hybrid organizations, aiming for multiple impact dimensions, manage to be congruent with their blended values. A dataset of 4,125 early-stage ventures is used to gain insights into how blended values are converted into financial, social and environmental impacts, giving shape to different types of hybrid organizations. Our findings suggest that new hybrid organizations might opt to sacrifice financial impact to achieve social impact, yet this is not the case when they aim to generate environmental or sustainable impact. Therefore, the tensions and sacrifices related to holding blended values are not homogeneous across all types of new hybrid organizations.
Uncertainty about the future requires companies to create discontinuous innovations. Established companies, however, struggle to do so, whereas independent startups seem to cope better with this. Consequently, established companies set up entrepreneurial initiatives to make use of startups' benefits. This has led, amongst others, to great interest in so-called corporate entrepreneurship (CE) programs and to the development and characterization of several different forms. Yet their processes to achieve certain objectives are still rather ineffective. Considering the actions performed in preparation for and during CE programs could be one approach to improve this, but such considerations are still absent today. Furthermore, the increasing use of several CE programs in parallel seems to bear the potential for synergies and, thus, a more efficient use of resources. Aiming to provide insights into both issues, this study analyzes the actions of CE programs by looking at interviews with managers of seven corporate incubators and accelerator programs of five established German tech companies.
In today's volatile world, established companies must be capable of optimizing their core business with incremental innovations while simultaneously developing discontinuous innovations to maintain their long-term competitiveness. Balancing both is a major challenge for companies, since different types of innovation require different organizational structures, operational modes and management styles. Established companies tend to excel in improving their current business through incremental innovations which are closely related to their current knowledge base and competencies. However, this often goes hand in hand with challenges in the exploration of knowledge that is new to the company and that is essential for the development of discontinuous innovations. In this respect, the concept of corporate entrepreneurship is recognized as a way to strengthen the exploration of new knowledge and to support the development of discontinuous innovation. For managing corporate entrepreneurship more effectively, it is crucial to understand which types of knowledge can be created through corporate entrepreneurship and which organizational designs are more suited to gain certain types of knowledge. To answer these questions, this study analyzed 23 semi-structured interviews conducted with established companies that are running such entrepreneurial activities. The results show (1) that three general types of knowledge can be explored through corporate entrepreneurship and (2) that some organizational designs are more suited to explore certain knowledge types than others are.
We have analyzed a pool of 37,839 articles published in 4,404 business-related journals in the entrepreneurship research field using a novel literature review approach that is based on machine learning and text data mining. Most papers have been published in the journals ‘Small Business Economics’, ‘International Journal of Entrepreneurship and Small Business’, and ‘Sustainability’ (Switzerland), while the sum of citations is highest in the ‘Journal of Business Venturing’, ‘Entrepreneurship Theory and Practice’, and ‘Small Business Economics’. We derived 29 overarching themes based on 52 identified clusters. The social entrepreneurship, development, innovation, capital, and economy clusters represent the largest ones among those with high thematic clarity. The most discussed clusters, measured by the average number of citations per assigned paper, are research, orientation, capital, gender, and growth. Clusters with the highest average growth in publications per year are social entrepreneurship, innovation, development, entrepreneurship education, and (business) models. Measured by the average yearly citation rate per paper, the thematic cluster ‘research’, mostly containing literature studies, received the most attention. This machine-learning-based review (MLR) allows for the inclusion of a significantly higher number of publications compared to traditional reviews, thus providing a comprehensive, descriptive overview of the whole research field.
We provide an overview of the ongoing discussions on the objectives of the energy transition in the form of a conceptual framework, intending to facilitate the search for the most viable options for a successful transformation of the energy system. For this purpose, we examine the development of energy policy goals in Germany in the past and present, whereby we give an overview of objectives and assessment approaches from politics, economics, and science. Moreover, we then merge the different views into a common framework and analyze the central conflict between the wholeness of a hypothetical target circle and the simplification in favor of a hypothetical target point in more detail.
This paper proposes a novel transmission scheme for generalized multistream spatial modulation. This new approach uses Mannheim error-correcting codes over Gaussian or Eisenstein integers as multidimensional signal constellations. These codes enable a suboptimal decoding strategy with near maximum-likelihood performance for transmission over the additive white Gaussian noise channel. In this contribution, this decoding algorithm is generalized to the detection of generalized multistream spatial modulation. The proposed method can outperform conventional generalized multistream spatial modulation with respect to decoding performance, detection complexity, and spectral efficiency.
Soft-input decoding of concatenated codes based on the Plotkin construction and BCH component codes
(2020)
Low latency communication requires soft-input decoding of binary block codes with small to medium block lengths.
In this work, we consider generalized multiple concatenated (GMC) codes based on the Plotkin construction. These codes are similar to Reed-Muller (RM) codes. In contrast to RM codes, BCH codes are employed as component codes. This leads to improved code parameters. Moreover, a decoding algorithm is proposed that exploits the recursive structure of the concatenation. This algorithm enables efficient soft-input decoding of binary block codes with small to medium lengths. The proposed codes and their decoding achieve significant performance gains compared with RM codes and recursive GMC decoding.
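The recursive structure being exploited is the classic Plotkin (u | u+v) construction. It can be illustrated with toy component codes; the codes below are small placeholders, not the BCH component codes used in the paper:

```python
import numpy as np
from itertools import product

# Plotkin (u | u+v) construction: every codeword is the concatenation of a
# codeword u from component code U with u XOR v for a codeword v from
# component code V. The resulting code has length n_U + n_V and
# |U| * |V| codewords.

def plotkin(U, V):
    return [np.concatenate([u, (u + v) % 2]) for u in U for v in V]

# Toy components: U = length-2 repetition code, V = all length-2 words.
U = [np.array([0, 0]), np.array([1, 1])]
V = [np.array(w) for w in product([0, 1], repeat=2)]

C = plotkin(U, V)
print(len(C), len(C[0]))  # 8 codewords of length 4
```

Recursive decoding, as in the paper, peels off this structure level by level: first estimate v from the two halves, then u.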
The reliability of flash memories suffers from various error causes. Program/erase cycles, read disturb, and cell to cell interference impact the threshold voltages and cause bit errors during the read process. Hence, error correction is required to ensure reliable data storage. In this work, we investigate the bit-labeling of triple level cell (TLC) memories. This labeling determines the page capacities and the latency of the read process. The page capacity defines the redundancy that is required for error correction coding. Typically, Gray codes are used to encode the cell state such that the codes of adjacent states differ in a single digit. These Gray codes minimize the latency for random access reads but cannot balance the page capacities. Based on measured voltage distributions, we investigate the page capacities and propose a labeling that provides a better rate balancing than Gray labeling.
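For illustration, the standard reflected binary Gray code over the eight states of a TLC cell shows the single-digit property described above; the labeling proposed in the paper deviates from such a Gray code in order to balance the page capacities:

```python
# Reflected binary Gray code: adjacent cell states of a triple level cell
# (8 states, 3 bits, one bit per page) differ in exactly one bit position.

def gray(n):
    return n ^ (n >> 1)

labels = [gray(n) for n in range(8)]
# verify the Gray property: adjacent states differ in a single bit
for a, b in zip(labels, labels[1:]):
    assert bin(a ^ b).count("1") == 1

print([format(l, "03b") for l in labels])
# ['000', '001', '011', '010', '110', '111', '101', '100']
```

Counting, per bit position, how many of the seven state transitions flip that bit shows why Gray labeling leaves the page capacities unbalanced.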
Side Channel Attack Resistance of the Elliptic Curve Point Multiplication using Eisenstein Integers
(2020)
Asymmetric cryptography empowers secure key exchange and digital signatures for message authentication. Nevertheless, consumer electronics and embedded systems often rely on symmetric cryptosystems because asymmetric cryptosystems are computationally intensive. Besides, implementations of cryptosystems are prone to side-channel attacks (SCA). Consequently, the secure and efficient implementation of asymmetric cryptography on resource-constrained systems is demanding. In this work, elliptic curve cryptography is considered. A new concept for an SCA resistant calculation of the elliptic curve point multiplication over Eisenstein integers is presented and an efficient arithmetic over Eisenstein integers is proposed. Representing the key by Eisenstein integer expansions is beneficial to reduce the computational complexity and the memory requirements of an SCA protected implementation.
In this article, we give the construction of new four-dimensional signal constellations in the Euclidean space, which represent a certain combination of binary frequency-shift keying (BFSK) and M-ary amplitude-phase-shift keying (MAPSK). A description of such signals and the formulas for calculating the minimum squared Euclidean distance are presented. We have developed an analytic construction method for even and odd values of M; hence, no computer search and no heuristic methods are required. The new optimized BFSK-MAPSK (M = 5, 6, ..., 16) signal constellations are built for the modulation indexes h = 0.1, 0.15, ..., 0.5, and their parameters are given. The results of computer simulations are also provided. Based on the obtained results, we can conclude that BFSK-MAPSK systems outperform similar four-dimensional systems both in terms of minimum squared Euclidean distance and simulated symbol error rate.
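The figure of merit optimized here, the minimum squared Euclidean distance over all pairs of constellation points, can be computed generically. The four-point two-dimensional set below is a stand-in for illustration, not one of the four-dimensional BFSK-MAPSK constellations:

```python
import numpy as np
from itertools import combinations

# Minimum squared Euclidean distance of a signal constellation: the smallest
# squared distance between any two distinct points. Larger values mean better
# noise resilience at a given energy.

points = np.array([[1, 0], [0, 1], [-1, 0], [0, -1]], dtype=float)  # QPSK-like
d2min = min(float(np.sum((a - b) ** 2)) for a, b in combinations(points, 2))
print(d2min)  # 2.0 between adjacent unit-energy points
```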
This work presents a new concept to implement the elliptic curve point multiplication (PM). This computation is based on a new modular arithmetic over Gaussian integer fields. Gaussian integers are a subset of the complex numbers such that the real and imaginary parts are integers. Since Gaussian integer fields are isomorphic to prime fields, this arithmetic is suitable for many elliptic curves. Representing the key by a Gaussian integer expansion is beneficial to reduce the computational complexity and the memory requirements of secure hardware implementations, which are robust against attacks. Furthermore, an area-efficient coprocessor design is proposed with an arithmetic unit that enables Montgomery modular arithmetic over Gaussian integers. The proposed architecture and the new arithmetic provide high flexibility, i.e., binary and non-binary key expansions as well as protected and unprotected PM calculations are supported. The proposed coprocessor is a competitive solution for a compact ECC processor suitable for applications in small embedded systems.
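A minimal sketch of the underlying arithmetic over Gaussian integers, assuming the common rounding-based reduction modulo a Gaussian integer pi (an illustration, not the paper's Montgomery arithmetic or coprocessor design):

```python
# Gaussian integers are complex numbers with integer real and imaginary parts.
# Reduction modulo pi uses the norm N(pi) and rounding to the nearest
# Gaussian integer quotient: a mod pi = a - round(a * conj(pi) / N(pi)) * pi.
# For pi = 2 + i, N(pi) = 5 is prime, so the residues form a field with five
# elements, isomorphic to GF(5).

def norm(z):
    return z.real ** 2 + z.imag ** 2

def mod(a, pi):
    t = a * pi.conjugate() / norm(pi)
    q = complex(round(t.real), round(t.imag))   # nearest Gaussian integer
    return a - q * pi

pi = complex(2, 1)
# the residues of 0..4 are pairwise distinct, mirroring the five elements of GF(5)
residues = {mod(complex(k, 0), pi) for k in range(5)}
print(len(residues))  # 5
```

This isomorphism to a prime field is what makes the arithmetic applicable to many standard elliptic curves.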
Deep neural networks (DNNs) are known for their high prediction performance, especially in perceptual tasks such as object recognition or autonomous driving. Still, DNNs are prone to yield unreliable predictions when encountering completely new situations without indicating their uncertainty. Bayesian variants of DNNs (BDNNs), such as MC dropout BDNNs, do provide uncertainty measures. However, BDNNs are slow during test time because they rely on a sampling approach. Here we present a single shot MC dropout approximation that preserves the advantages of BDNNs without being slower than a DNN. Our approach is to analytically approximate for each layer in a fully connected network the expected value and the variance of the MC dropout signal. We evaluate our approach on different benchmark datasets and a simulated toy example. We demonstrate that our single shot MC dropout approximation resembles the point estimate and the uncertainty estimate of the predictive distribution that is achieved with an MC approach, while being fast enough for real-time deployments of BDNNs.
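The analytic moment propagation behind the single-shot idea can be sketched for one dropout-plus-linear layer; the toy layer, keep probability, and shapes below are assumptions for illustration, not the authors' network:

```python
import numpy as np

# Instead of sampling many dropout masks, propagate the mean and variance of
# the MC dropout signal analytically. With keep probability p, a rescaled unit
# x * m / p (m ~ Bernoulli(p)) has mean x and variance x^2 * (1 - p) / p; a
# linear layer then maps means and variances in closed form (independent masks).

rng = np.random.default_rng(2)
p = 0.8
x = rng.normal(size=10)
W = rng.normal(size=(3, 10))

# single-shot analytic moments after dropout + linear layer
mean_analytic = W @ x
var_analytic = (W ** 2) @ (x ** 2 * (1 - p) / p)

# Monte Carlo reference with many sampled masks
masks = rng.binomial(1, p, size=(200_000, 10)) / p
samples = (masks * x) @ W.T
print(np.allclose(samples.mean(0), mean_analytic, atol=0.05))  # True
print(np.allclose(samples.var(0), var_analytic, rtol=0.05))    # True
```

The single shot needs one forward pass for the means and one for the variances, which is why it avoids the sampling cost of a BDNN at test time.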
Probabilistic Deep Learning
(2020)
Probabilistic Deep Learning is a hands-on guide to the principles that support neural networks. Learn to improve network performance with the right distribution for different data types, and discover Bayesian variants that can state their own uncertainty to increase accuracy. This book provides easy-to-apply code and uses popular frameworks to keep you focused on practical applications.
At present, the majority of the proposed Deep Learning (DL) methods provide point predictions without quantifying the model's uncertainty. However, a quantification of the reliability of automated image analysis is essential, in particular in medicine, where physicians rely on the results for making critical treatment decisions. In this work, we provide an entire framework to diagnose ischemic stroke patients, incorporating Bayesian uncertainty into the analysis procedure. We present a Bayesian Convolutional Neural Network (CNN) yielding a probability for a stroke lesion on 2D Magnetic Resonance (MR) images with corresponding uncertainty information about the reliability of the prediction. For patient-level diagnoses, different aggregation methods are proposed and evaluated, which combine the individual image-level predictions. These methods take advantage of the uncertainty in the image predictions and report model uncertainty at the patient level. In a cohort of 511 patients, our Bayesian CNN achieved an accuracy of 95.33% at the image level, representing a significant improvement of 2% over a non-Bayesian counterpart. The best patient aggregation method yielded an accuracy of 95.89%. Integrating uncertainty information about image predictions into the aggregation models resulted in higher uncertainty measures for false patient classifications, which made it possible to filter out critical patient diagnoses that should be examined more closely by a medical doctor. We therefore recommend using Bayesian approaches not only for improved image-level prediction and uncertainty estimation but also for the detection of uncertain aggregations at the patient level.
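One possible instance of an uncertainty-aware patient-level aggregation is a certainty-weighted average of the image-level scores (an illustrative scheme with made-up numbers, not necessarily one of the paper's evaluated methods):

```python
import numpy as np

# Illustrative aggregation: each image yields a lesion probability and a
# predictive variance (e.g. from MC dropout); the patient-level score is
# a certainty-weighted average, and the spread of the image scores
# serves as a patient-level uncertainty measure.
def aggregate_patient(probs, variances, eps=1e-6):
    probs = np.asarray(probs, dtype=float)
    w = 1.0 / (np.asarray(variances, dtype=float) + eps)  # certainty weights
    score = np.sum(w * probs) / np.sum(w)
    uncertainty = np.sqrt(np.sum(w * (probs - score) ** 2) / np.sum(w))
    return score, uncertainty

# Two confident lesion images and one uncertain non-lesion image:
score, unc = aggregate_patient([0.9, 0.85, 0.2], [0.01, 0.02, 0.3])
```

The uncertain outlier (probability 0.2, high variance) barely moves the patient score, while a disagreement between confident images would drive the patient-level uncertainty up and flag the case for review.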
Mapping of tree seedlings is useful for tasks ranging from monitoring natural succession and regeneration to effective silvicultural management. Development of methods that are both accurate and cost-effective is especially important considering the dramatic increase in tree planting that is required globally to mitigate the impacts of climate change. The combination of high-resolution imagery from unmanned aerial vehicles and object detection by convolutional neural networks (CNNs) is one promising approach. However, unbiased assessments of these models and methods to integrate them into geospatial workflows are lacking. In this study, we present a method for rapid, large-scale mapping of young conifer seedlings using CNNs applied to RGB orthomosaic imagery. Importantly, we provide an unbiased assessment of model performance by using two well-characterised trial sites together containing over 30,000 seedlings to assemble datasets with a high level of completeness. Our results showed CNN-based models trained on two sites detected seedlings with sensitivities of 99.5% and 98.8%. False positives due to tall weeds at one site and naturally regenerating seedlings of the same species led to slightly lower precision of 98.5% and 96.7%. A model trained on examples from both sites had 99.4% sensitivity and precision of 97%, showing applicability across sites. Additional testing showed that the CNN model was able to detect 68.7% of obscured seedlings missed during the initial annotation of the imagery but present in the field data. Finally, we demonstrate the potential to use a form of weakly supervised training and a tile-based processing chain to enhance the accuracy and efficiency of CNNs applied to large, high-resolution orthomosaics.
Three-dimensional ship localization with only one camera is a challenging task due to the loss of depth information caused by perspective projection. In this paper, we propose a method to measure distances based on the assumption that ships lie on a flat surface. This assumption makes it possible to recover depth from a single image using the principle of inverse perspective. For the 3D ship detection task, we use a hybrid approach that combines image detection with a convolutional neural network, camera geometry and inverse perspective. Furthermore, a novel calculation of object height is introduced. Experiments show that the monocular distance computation works well in comparison to a Velodyne lidar. Due to its robustness, this could be an easy-to-use baseline method for detection tasks in navigation systems.
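Under the flat-surface assumption, the inverse-perspective range computation reduces to a one-line formula (a textbook pinhole-camera sketch with illustrative parameter values, not the paper's full pipeline):

```python
# Minimal inverse-perspective range estimate under the flat-water
# assumption: a pinhole camera at height h above the water with its
# optical axis parallel to the surface. All parameter values below are
# illustrative.
def ground_distance(v, f, cy, h):
    """Distance to a point on the water surface imaged at pixel row v.
    f: focal length in pixels, cy: horizon row (principal point row),
    h: camera height above the water in metres."""
    dv = v - cy
    if dv <= 0:
        raise ValueError("pixel row must lie below the horizon")
    return f * h / dv

# Example: f = 1000 px, horizon at row 540, camera 10 m above the water;
# a ship's waterline imaged at row 590 lies 50 rows below the horizon.
d = ground_distance(590, f=1000, cy=540, h=10.0)
```

Points imaged closer to the bottom of the frame (further below the horizon) correctly resolve to shorter ranges, which is the behaviour the depth recovery relies on.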
Creative industry and cultural tourism destination Lake Constance - a media discourse analysis
(2020)
The following media discourse analysis examines the news coverage of four regional online newspapers about the topics “creative industries” and “cultural tourism” in the Lake Constance region in the period from 2006 until 2016. The results show that, besides event-related reporting, there is currently no vibrant media discourse on the topics “creative industries” and “cultural tourism”. Even though the image of the Lake Constance region is heavily influenced by tourism, “cultural tourism” also plays a secondary role when it comes to regional news reporting. Moreover, the discourses do not overlap, and thus no synergies within the local media discourse are formed. This result is relevant for regional tourism development, because cooperation between “creative industries” and “cultural tourism” creates opportunities such as the expansion of the tourism offer and an extension of the tourist season. To activate unused opportunities at the different destinations of the region, a supra-regional visibility of the “creative industries” sector should be developed and the cooperation of the sector with local stakeholders of cultural tourism should be promoted.
This paper presents the goals, service design approach, and the results of the project “Accessible Tourism around Lake Constance”, which is currently run by different universities, industrial partners and selected hotels in Switzerland, Germany and Austria. In the 1st phase, interviews with different persons with disabilities and elderly persons were conducted to identify the barriers and pains faced by tourists who want to spend their holidays in the region of Lake Constance, as well as possible assistive technologies that help to overcome these barriers. The analysis of the interviews shows that one third of the pains and barriers are due to missing, insufficient, wrong or inaccessible information about the accessibility of the accommodation, surroundings, and points of interest during the planning phase of the holidays. Digital assistive technologies hence play a major role in bridging this information gap. In the 2nd phase, so-called Hotel-Living-Labs (HLL) were established in which the identified assistive technologies can be evaluated. Based on these HLLs, an overall service for accessible holidays has been designed and developed. In the last phase, this service was implemented based on the HLLs as well as the identified assistive technologies and is currently being field-tested with tourists with disabilities from the three participating countries.
Digital technology and architecture have become inseparable, with new approaches and methodologies not just affecting the workflows and practice of architects but shaping the very character of architecture.
This compendious work offers a wide-ranging orientation to the new landscape with its opportunities, its challenges, and its vast potential.
Shared Field, Divided Field
(2020)
A conceptual framework for indigenous ecotourism projects – a case study in Wayanad, Kerala, India
(2020)
This paper analyses indigenous ecotourism in the Indian district of Wayanad, Kerala, using a conceptual framework based on a PATA 2015 study on indigenous tourism that includes the criteria: human rights, participation, business and ecology. Detailed indicator sets for each criterion are applied to a case study of the Priyadarshini Tea Environs with a qualitative research approach addressing stakeholders from the public sector, non-governmental organisations, academia, tour operators and communities including Adivasi and non-Adivasi. In-depth interviews were supported by participant and non-participant observations. The authors adapted this framework to the needs of the case study and consider that this modified version is a useful tool for academics and practitioners wishing to evaluate and develop indigenous ecotourism projects. The results show that the Adivasi involved in the Priyadarshini Tea Environs project benefit from indigenous ecotourism. But they could profit more if they had more involvement in and control of the whole tourism value chain.
The ageing infrastructure in ports requires regular inspection. This inspection is currently carried out manually by divers who sense by hand the entire underwater infrastructure. This process is cost-intensive as it involves a lot of time and human resources. To overcome these difficulties, we propose to scan the above and underwater port structure with a Multi-Sensor-System and, by a fully automated process, to classify the obtained point cloud into damaged and undamaged zones. We make use of simulated training data to test our approach, since not enough training data with corresponding class labels are available yet. To that aim, we build a rasterised heightfield of a point cloud of a sheet pile wall by cutting it into vertical slices. The distance from each slice to the corresponding line generates the heightfield. The latter is propagated through a convolutional neural network which detects anomalies. We use the VGG19 deep neural network model pretrained on natural images. This neural network has 19 layers and is often used for image recognition tasks. We showed that our approach achieves a fully automated, reproducible, quality-controlled damage detection that analyses the whole structure, in contrast to the sample-wise manual method with divers. The mean true positive rate is 0.98, which means that we detected 98 % of the damages in the simulated environment.
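The slicing-and-rasterisation step can be sketched as follows (a simplified numpy illustration on synthetic data with an artificial 5 cm dent; the paper's exact pipeline and parameters are not reproduced):

```python
import numpy as np

# Illustrative rasterisation: points on a vertical wall are described by
# y (position along the wall), z (height) and d (signed distance to a
# reference line fitted to the wall). Gridding the mean distances gives
# the heightfield image fed to the CNN.
def heightfield(y, z, d, ny=64, nz=64):
    """Grid per-point wall distances into an ny x nz heightfield."""
    yi = np.clip(((y - y.min()) / (np.ptp(y) + 1e-9) * ny).astype(int), 0, ny - 1)
    zi = np.clip(((z - z.min()) / (np.ptp(z) + 1e-9) * nz).astype(int), 0, nz - 1)
    grid = np.zeros((ny, nz))
    count = np.zeros((ny, nz))
    np.add.at(grid, (yi, zi), d)       # sum distances per cell
    np.add.at(count, (yi, zi), 1)      # points per cell
    return np.where(count > 0, grid / np.maximum(count, 1), 0.0)

rng = np.random.default_rng(1)
pts_y = rng.uniform(0, 10, 20000)
pts_z = rng.uniform(0, 5, 20000)
pts_d = rng.normal(0, 0.005, 20000)    # intact wall: millimetre noise
dent = (pts_y > 4) & (pts_y < 5)       # simulated damage zone
pts_d[dent] -= 0.05                    # 5 cm dent
hf = heightfield(pts_y, pts_z, pts_d)
```

In the resulting image the dented band stands out clearly against the intact wall, which is exactly the kind of anomaly a CNN can be trained to detect.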
Modeling a suitable birth density is a challenge when using Bernoulli filters such as the Labeled Multi-Bernoulli (LMB) filter. The birth density of newborn targets is unknown in most applications, but must be given as a prior to the filter. Usually the birth density stays unchanged or is designed based on the measurements from previous time steps.
In this paper, we assume that the true initial state of new objects is normally distributed. The expected value and covariance of the underlying density are unknown parameters. Using the estimated multi-object state of the LMB and the Rauch-Tung-Striebel (RTS) recursion, these parameters are recursively estimated and adapted after a target is detected.
The main contribution of this paper is an algorithm to estimate the parameters of the birth density and its integration into the LMB framework. Monte Carlo simulations are used to evaluate the detection driven adaptive birth density in two scenarios. The approach can also be applied to filters that are able to estimate trajectories.
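The recursive estimation of the birth density's mean and covariance from the initial states of newly detected targets can be sketched with a running (Welford-style) update (an illustration of the parameter estimation alone, without the LMB filter and RTS smoother it is embedded in; all numbers are synthetic):

```python
import numpy as np

# Illustrative recursive update of the assumed-Gaussian birth density:
# whenever a new target is confirmed, its (smoothed) initial state is
# folded into running estimates of the birth mean and covariance.
class BirthDensityEstimator:
    def __init__(self, dim):
        self.n = 0
        self.mean = np.zeros(dim)
        self.M2 = np.zeros((dim, dim))   # sum of outer products of deviations

    def update(self, x0):
        """Incorporate the initial state x0 of a newly detected target."""
        x0 = np.asarray(x0, dtype=float)
        self.n += 1
        delta = x0 - self.mean
        self.mean += delta / self.n
        self.M2 += np.outer(delta, x0 - self.mean)   # Welford's update

    @property
    def cov(self):
        return self.M2 / max(self.n - 1, 1)

rng = np.random.default_rng(2)
est = BirthDensityEstimator(2)
true_mean = np.array([5.0, -1.0])
for _ in range(2000):                    # synthetic new-target states
    est.update(true_mean + rng.normal(0, [0.5, 2.0]))
```

With enough detections the estimated mean and covariance converge to those of the true initial-state distribution, which is the adaptation the abstract describes.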
In this thesis, the recognition problem and the properties of eigenvalues and eigenvectors of matrices which are strictly sign-regular of a given order, i.e., matrices whose minors of a given order have the same strict sign, are considered. The results are extended to matrices which are sign-regular of a given order, i.e., matrices whose minors of a given order have the same sign or are allowed to vanish. As a generalization, a new type of matrices, called oscillatory of a specific order, is introduced, and the properties of this type are investigated. Also, some applications to dynamic systems are given.
The expansion of a given multivariate polynomial into Bernstein polynomials is considered. Matrix methods for the calculation of the Bernstein expansion of the product of two polynomials and of the Bernstein expansion of a polynomial from the expansion of one of its partial derivatives are provided which allow also a symbolic computation.
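For the univariate case on [0, 1], the conversion from power-basis to Bernstein coefficients is itself a small linear (matrix) map, b_j = Σ_k C(j,k)/C(n,k) · a_k (a sketch of the classical formula only; the paper treats the multivariate case, products of polynomials and partial derivatives):

```python
from math import comb

# Univariate sketch: convert power-basis coefficients a_0..a_n of a
# polynomial p(x) = sum a_k x^k to its Bernstein coefficients on [0, 1].
def bernstein_coeffs(a):
    n = len(a) - 1
    return [sum(comb(j, k) / comb(n, k) * a[k] for k in range(j + 1))
            for j in range(n + 1)]

# p(x) = 1 - 3x + 2x^2, so p(0) = 1 and p(1) = 0
b = bernstein_coeffs([1, -3, 2])
```

The endpoint coefficients interpolate the polynomial (b_0 = p(0), b_n = p(1)), and the minimum and maximum coefficients enclose the range of p on [0, 1], which is what makes the expansion useful for verified computation.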
Totally nonnegative matrices, i.e., matrices having all their minors nonnegative, and matrix intervals with respect to the checkerboard partial order are considered. It is proven that if the two bound matrices of such a matrix interval are totally nonnegative and satisfy certain conditions, then all matrices from this interval are also totally nonnegative and satisfy the same conditions.
Let A = [a_ij] be a real symmetric matrix. If f : (0, ∞) → [0, ∞) is a Bernstein function, a sufficient condition for the matrix [f(a_ij)] to have only one positive eigenvalue is presented. By using this result, new results for a symmetric matrix with exactly one positive eigenvalue, e.g., properties of its Hadamard powers, are derived.
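A small numerical example (a single instance for illustration, not a proof of the sufficient condition): f(x) = √x is a Bernstein function, and applying it entrywise to a symmetric matrix with positive entries and exactly one positive eigenvalue preserves that property here:

```python
import numpy as np

# A is symmetric with positive entries and eigenvalues 5, -1, -1,
# i.e. exactly one positive eigenvalue.
A = np.array([[1.0, 2.0, 2.0],
              [2.0, 1.0, 2.0],
              [2.0, 2.0, 1.0]])
eig_A = np.linalg.eigvalsh(A)

# Entrywise application of the Bernstein function f(x) = sqrt(x),
# i.e. the Hadamard power A^{o 1/2}:
B = np.sqrt(A)
eig_B = np.linalg.eigvalsh(B)
```

Here B has eigenvalues 1 + 2√2 and 1 - √2 (twice), so the single positive eigenvalue survives the entrywise square root, consistent with the kind of Hadamard-power result the abstract refers to.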
We propose and apply a requirements engineering approach that focuses on security and privacy properties and takes into account various stakeholder interests. The proposed methodology facilitates the integration of security and privacy by design into the requirements engineering process. Thus, specific, detailed security and privacy requirements can be implemented from the very beginning of a software project. The method is applied to an exemplary application scenario in the logistics industry. The approach includes the application of threat and risk rating methodologies, a technique to derive technical requirements from legal texts, as well as a matching process to avoid duplication and accumulate all essential requirements.
We present source code patterns that are difficult for modern static code analysis tools. Our study comprises 50 different open source projects in both a vulnerable and a fixed version for XSS vulnerabilities reported with CVE IDs over a period of seven years. We used three commercial and two open source static code analysis tools. Based on the reported vulnerabilities we discovered code patterns that appear to be difficult to classify by static analysis. The results show that code analysis tools are helpful, but still have problems with specific source code patterns. These patterns should be a focus in training for developers.
For a long time, the use of intermediate products in production has been growing more rapidly in most countries than domestic production. This is a strong indication of increasing interdependency in production. The main purpose of input–output analysis is to study the interdependency of industries in an economy; often the term interindustry analysis is also used. Therefore, the exchange of intermediate products is a key issue of input–output analysis. For this study we use input–output data that the author prepared for the new ‘Handbook on Supply, Use and Input–Output Tables with Extensions and Applications’ of the United Nations. The supply, use and input–output tables contain separate valuation matrices for trade margins, transport margins, value added tax, other taxes on products and subsidies on products. For the study, two input–output models were developed to evaluate the impact of fuel subsidy and taxation reform on output, gross domestic product, inflation and trade. Six scenarios are discussed covering different aspects of the reform.
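The core of such models is the open Leontief system x = Ax + d (a generic two-industry illustration with made-up coefficients, not the models or data of the study):

```python
import numpy as np

# Open Leontief model: A holds the technical coefficients (intermediate
# input per unit of output), d is final demand, and the gross output x
# needed to satisfy d is x = (I - A)^{-1} d.
A = np.array([[0.2, 0.3],     # inter-industry input coefficients
              [0.1, 0.4]])
d = np.array([100.0, 50.0])   # final demand by industry

L = np.linalg.inv(np.eye(2) - A)   # Leontief inverse
x = L @ d                          # required gross output
```

Policy experiments such as a subsidy or tax reform can then be run by perturbing d (or the valuation layers feeding into A) and comparing the resulting output vectors.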
The evaluation of the effectiveness of different machine learning algorithms on a publicly available database of signals derived from wearable devices is presented, with the goal of optimizing human activity recognition and classification. Among the wide number of body signals, we chose a couple of signals, namely photoplethysmographic (optically detected subcutaneous blood volume) and tri-axis acceleration signals, that are easy to acquire simultaneously using commercial widespread devices (e.g. smartwatches) as well as custom wearable wireless devices designed for sport, healthcare, or clinical purposes. To this end, two widely used algorithms (decision tree and k-nearest neighbor) were tested, and their performance was compared to two recent algorithms (particle Bernstein and a Monte Carlo-based regression), both in terms of accuracy and processing time. A data preprocessing phase was also considered to improve the performance of the machine learning procedures and to reduce the problem size; a detailed analysis of the compression strategy and its results is also presented.
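A k-nearest-neighbor baseline on tri-axis acceleration features can be sketched in pure numpy (synthetic two-activity data with invented feature values; the study's dataset, preprocessing and the particle Bernstein and Monte Carlo methods are not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(3)

def make_activity(mean, n=100):
    """Synthetic per-window feature vectors (e.g. mean |acc| per axis)."""
    return rng.normal(mean, 0.1, size=(n, 3))

# Two well-separated activity classes, e.g. walking vs. cycling:
X = np.vstack([make_activity([1.0, 0.1, 0.1]),
               make_activity([0.1, 1.0, 0.1])])
y = np.array([0] * 100 + [1] * 100)

def knn_predict(X_train, y_train, X_test, k=5):
    """Majority vote among the k nearest training windows."""
    dist = np.linalg.norm(X_test[:, None, :] - X_train[None, :, :], axis=2)
    idx = np.argsort(dist, axis=1)[:, :k]
    return (y_train[idx].mean(axis=1) > 0.5).astype(int)

test_windows = np.array([[0.95, 0.12, 0.10],
                         [0.12, 0.98, 0.08]])
pred = knn_predict(X, y, test_windows)
```

Timing this against a decision tree on the same feature matrix is the kind of accuracy-versus-processing-time comparison the abstract describes.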
Good sleep is crucial for a healthy life of every person. Unfortunately, its quality often decreases with aging. A common approach to measuring sleep characteristics is based on interviews with the subjects or letting them fill in a daily questionnaire and afterward evaluating the obtained data. However, this method entails time and personnel costs for the interviewer and the evaluator of responses. It would therefore be desirable to collect and evaluate sleep characteristics automatically. To do that, it is necessary to investigate the level of agreement between measurements performed in the traditional way using questionnaires and measurements obtained using electronic monitoring devices. The study presented in this manuscript performs this investigation, comparing such sleep characteristics as "time going to bed", "total time in bed", "total sleep time" and "sleep efficiency". A total of 106 night records of elderly persons (aged 65+) were analyzed. The results achieved so far reveal that the degree of agreement between the two measurement methods varies substantially across characteristics, from 31 minutes of mean difference for "time going to bed" to 77 minutes for "total sleep time". For this reason, a direct exchange of objective and subjective measuring methods is currently not possible.
Polysomnography is the gold standard for a sleep study, and it provides very accurate results, but its costs (both personnel and material) are quite high. Therefore, the development of a low-cost system for overnight breathing and heartbeat monitoring, which provides more comfort while recording the data, is a well-motivated challenge. The system proposed in this manuscript is based on resistive pressure sensors installed under the mattress. These sensors can measure the slight pressure changes caused by breathing and heartbeat. The captured signal requires advanced processing, like applying filters and amplifiers, before the analog signal is ready for the next step. Then, the output signal is digitalized and further processed by an algorithm that performs a custom filtering before it can recognize breathing and heart rate in real-time. The result can be directly visualized. Furthermore, a CSV file is created containing the raw data, timestamps, and unique IDs to facilitate further processing. The achieved results are promising, and the average deviation from a reference device is about 4 bpm.
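The separation of the slow breathing component from the faster heartbeat component can be illustrated on a synthetic pressure signal (a toy moving-average filter and zero-crossing rate estimate; the system's custom analog and digital filtering is not reproduced):

```python
import numpy as np

# Synthetic under-mattress pressure signal: a large slow breathing
# component plus a small fast heartbeat component (amplitudes and
# frequencies are illustrative).
fs = 50                                    # sample rate in Hz
t = np.arange(0, 60, 1 / fs)
breath = np.sin(2 * np.pi * 0.25 * t)      # 15 breaths per minute
heart = 0.2 * np.sin(2 * np.pi * 1.2 * t)  # 72 beats per minute
signal = breath + heart

# A moving average over ~0.8 s keeps the breathing component while
# strongly attenuating the heartbeat component.
win = int(0.8 * fs)
smooth = np.convolve(signal, np.ones(win) / win, mode="same")

# Breathing rate from rising zero crossings of the smoothed signal:
crossings = np.where((smooth[:-1] < 0) & (smooth[1:] >= 0))[0]
breaths_per_min = 60 * len(crossings) / t[-1]
```

The recovered rate lands close to the 15 breaths per minute that were synthesized, showing why simple low-pass filtering is enough to read respiration off the raw pressure trace.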
Cardiovascular diseases are directly or indirectly responsible for up to 38.5% of all deaths in Germany and thus represent the most frequent cause of death. At present, heart diseases are mainly discovered by chance during routine visits to the doctor or when acute symptoms occur. However, there is no practical method to proactively detect diseases or abnormalities of the heart in the daily environment and to take preventive measures for the person concerned. Long-term ECG devices, as currently used by physicians, are simply too expensive, impractical, and not widely available for everyday use. This work aims to develop an ECG device suitable for everyday use that can be worn directly on the body. For this purpose, an already existing hardware platform will be analyzed, and the corresponding potential for improvement will be identified. A precise picture of the existing data quality is obtained by metrological examination, and corresponding requirements are defined. Based on these identified optimization potentials, a new ECG device is developed. The revised ECG device is characterized by a high integration density and combines all components directly on one board except the battery and the ECG electrodes. The compact design allows the device to be attached directly to the chest. An integrated microcontroller allows digital signal processing without the need for an additional computer. Central features of the evaluation are a peak detection for detecting R-peaks and a calculation of the current heart rate based on the RR interval. To ensure the validity of the detected R-peaks, a model of the anatomical conditions is used. Thus, unrealistic RR-intervals can be excluded. The wireless interface allows continuous transmission of the calculated heart rate. Following the development of hardware and software, the results are verified, and appropriate conclusions about the data quality are drawn. 
As a result, a very compact and wearable ECG device with different wireless technologies, data storage, and evaluation of RR intervals was developed. Tests yielded runtimes of up to 24 hours with wireless LAN activated and streaming.
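The R-peak detection with a plausibility check on RR intervals can be sketched as follows (the threshold, the minimum RR interval and the synthetic signal are illustrative; the device's anatomical model is more elaborate than this simple constraint):

```python
import numpy as np

# Simplified R-peak detector: local maxima above a threshold, with a
# minimum RR interval rejecting physiologically unrealistic detections.
def detect_r_peaks(ecg, fs, threshold=0.6, min_rr=0.3):
    """Return sample indices of detected R-peaks."""
    peaks = []
    for i in range(1, len(ecg) - 1):
        if ecg[i] > threshold and ecg[i] >= ecg[i - 1] and ecg[i] > ecg[i + 1]:
            if not peaks or (i - peaks[-1]) / fs >= min_rr:
                peaks.append(i)
    return np.array(peaks)

# Synthetic ECG: a 1 Hz spike train (60 bpm) with mild sensor noise
fs = 250
t = np.arange(0, 10, 1 / fs)
ecg = np.zeros_like(t)
ecg[::fs] = 1.0                         # one R-peak per second
rng = np.random.default_rng(4)
ecg += rng.normal(0, 0.05, len(ecg))

peaks = detect_r_peaks(ecg, fs)
rr = np.diff(peaks) / fs                # RR intervals in seconds
heart_rate = 60.0 / rr.mean()           # heart rate in bpm
```

Computing the rate from the mean RR interval, as sketched here, is the same principle the abstract describes for the on-device microcontroller evaluation.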