This research project has been awarded as part of the research competition organized by Connect2Recover, a global initiative by the International Telecommunication Union (ITU) whose priority is to strengthen the digital infrastructure and ecosystems of developing countries. Carried out by an international and transdisciplinary research consortium, the project sets out to analyze the prospects of digital federation and data sharing in Botswana. Considering the country's stage of economic and digital development, the project team identified Botswana's smallholder agricultural sector as the most important area for digital transformation, given the development needs of the country's primary sector.
Derived from semi-structured interviews, a focus group, and secondary research, the project team developed a digital transformation roadmap with three development stages: (a) a crowdfarming pilot, (b) a crowdfarming marketplace, and (c) a digital ecosystem for smallholder agriculture. Based on a detailed review of Botswana's smallholder agriculture and the government's digitalization strategy, the report envisions each phase, especially the pilot project, in terms of a minimum viable product. This accounts for the low level of digital penetration in Botswana's primary sector while providing an incentive to connect smallholders with consumers, traders, and retailers.
The project team secured commitment from smallholder farmers, the farmers' association, and the government, as well as support for the idea of developing a crowdfarming marketplace as a novel production model and, eventually, a digital agriculture ecosystem for smallholder farmers, livestock producers, and agricultural technology companies and start-ups. The report is a proposal for a phase-one pilot project with the objective of advancing smallholder agribusiness in Botswana.
Purpose
The goal of this research survey was to propose an entrepreneurship education model for students in higher education institutions.
Methodology
A questionnaire was distributed to 246 randomly sampled students at Universitas Negeri Jakarta. The data was analyzed through Structural Equation Modeling to study the variables of entrepreneurship education for higher education students and to examine whether it can be predicted by university leadership as a facilitator of entrepreneurial culture, university departments as promoters of entrepreneurial skills, and university research as an incubator of local business development.
Findings
The results show that university leadership acts as a facilitator of entrepreneurial culture by fostering a culture of entrepreneurial thinking. It was also evident that the university placed sufficient emphasis on entrepreneurial education and successfully motivated both lecturers and students to embrace entrepreneurship education. The results also indicated that university departments acted as promoters of entrepreneurial skills and stimulated students to attain sufficient entrepreneurial skills during their university education. Lastly, university research proved to be an incubator of local business development, influenced by the university conducting research projects with local private sector businesses and supporting graduates planning to launch start-ups.
Implications to Research and Practice
The survey results provide valuable policy insights for improving entrepreneurship education. University faculty and students would have opportunities to gain practical experience in local private sector businesses. The model of entrepreneurship education proposed herein can be applied to higher education students.
Image novelty detection is a recurring task in computer vision and describes the detection of anomalous images based on a training dataset consisting solely of normal reference data. Neural networks, in particular, have been found to be well-suited for the task. Our approach first transforms the training and test images into ensembles of patches, which enables the assessment of mean-shifts between normal data and outliers. As mean-shifts are only detectable when the outlier ensemble and inlier distribution are spatially separate from each other, a rich feature space, such as that of a pre-trained neural network, needs to be chosen to represent the extracted patches. For mean-shift estimation, the Hotelling T² test is used. The size of the patches turned out to be a crucial hyperparameter that requires additional domain knowledge about the spatial extent of the expected anomalies (local vs. global). This also affects model selection and the chosen feature space, as commonly used Convolutional Neural Networks and Vision Transformers have very different receptive field sizes. To showcase the state-of-the-art capabilities of our approach, we compare results with classical and deep learning methods on the popular CIFAR-10 dataset and demonstrate its real-world applicability in a large-scale industrial inspection scenario using the MVTec dataset. Owing to its inexpensive design, our method can be implemented with a single additional 2D-convolution and pooling layer and allows particularly fast prediction times while being very data-efficient.
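The mean-shift test at the core of the approach can be sketched as follows. This is a minimal NumPy illustration, not the paper's implementation: it assumes the patch features have already been extracted (e.g., by a pre-trained network) and compares the mean of a test patch ensemble against the reference (inlier) distribution via the Hotelling T² statistic.

```python
import numpy as np

def hotelling_t2(reference, test):
    """Hotelling T^2 statistic for a mean-shift of the test patch ensemble
    relative to the reference (inlier) feature distribution."""
    mu = reference.mean(axis=0)                 # inlier mean
    cov = np.cov(reference, rowvar=False)       # inlier covariance
    n = test.shape[0]
    diff = test.mean(axis=0) - mu               # observed mean-shift
    return float(n * diff @ np.linalg.solve(cov, diff))
```

A large statistic (compared against the corresponding F-distribution quantile) flags the test image as novel, while ensembles drawn from the inlier distribution score low.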
We call for a paradigm shift in engineering education. We are entering the era of the Fourth Industrial Revolution (“4IR”), accelerated by Artificial Intelligence (“AI”). Disruptive changes affect all industrial sectors and society, leading to increased uncertainty that makes it impossible to predict what lies ahead. Therefore, gradual cultural change in education is no longer an option to ease social pain. The vast majority of engineering education and training systems, which have remained largely static and underinvested for decades, are inadequate for the emerging 4IR and AI labour markets. Nevertheless, some positive developments can be observed in the reorientation of the engineering education sector. Novel approaches to engineering education are already providing distinctive, technology-enhanced, personalised, student-centred curriculum experiences within an integrated and unified education system. We need to educate engineering students for a future whose key characteristics are volatility, uncertainty, complexity and ambiguity (“VUCA”). Talent and skills gaps are expected to increase in all industries in the coming years. The authors argue for an engineering curriculum that combines timeless didactic traditions such as Socratic inquiry, mastery-based and project-based learning and first-principles thinking with novel elements, e.g., student-centred active and e-learning with a focus on case studies, as well as visualization/metaverse and gamification elements discussed in this paper, and a refocusing of engineering skills and knowledge enhanced by AI on human qualities such as creativity, empathy and dexterity. These skills strengthen engineering students’ perceptions of the world and the decisions they make as a result. This 4IR engineering curriculum will prepare engineering students to become curious engineers and excellent collaborators who navigate increasingly complex multistakeholder ecosystems.
This policy brief presents the possibilities of using big data analytics for safe, decarbonised and climate-resilient infrastructure. The policy brief focuses on current constraints and limitations to applying big data analytics to the infrastructure ecosystem and presents several examples and best practices for different infrastructure sectors and at different policy levels (national, municipal) to highlight recommendations and policy requirements needed for deep digital transformation and sustainable solutions in infrastructure planning and delivery.
This report summarises up-to-date social science evidence on climate communication for effective public engagement. It presents ten key principles that may inform communication activities. At the heart of them is the following insight: People do not form their attitudes or take action as a result primarily of weighing up expert information and making rational cost-benefit calculations. Instead, climate communication has to connect with people at the level of values and emotions.
Two aspects seem to be of special importance: First, climate communication needs to focus more on effectively speaking to people who have up to now not been properly addressed by climate communications, but who are vitally important to build broad public engagement. Second, climate communication has to support a shift from concern to agency, where high levels of climate risk perception turn into pro-climate individual and collective action.
Botswana serves as a role model for other African countries due to its rapid development in recent decades. Since the country is sparsely populated and a large part of the rural population depends on agriculture, especially livestock, this sector forms the backbone of the national economy. The digitization of this sector offers promising opportunities for economic growth and for driving Botswana's evolution into a digital economy, while creating real value for smallholder farmers. To support this process, an ITU research project made the key recommendation to develop a digital crowdfarming tool and marketplace in order to create a digital ecosystem for smallholder agriculture. Within the research project, infrastructural challenges such as the creation of rural electricity supply and internet access, as well as the smallholders' need for remote monitoring, management, and better connectivity, were identified.
Based on the findings of the ITU research report, this bachelor's thesis aims to identify potential innovations for the digital development of smallholder agriculture in Botswana and to conceptualize proposals to address the identified challenges and needs of smallholder farmers. To achieve this, solutions were developed through literature research, technology analysis and expert involvement. These included the design of a decentralized mini-grid for power supply, proposals to create internet access, and the graphic visualization of a conceptual app. The latter addresses smallholder farmers' needs for remote monitoring, market access, knowledge enhancement, and connection to colleagues, buyers, and investors.
The proposed solutions and developed concepts provide impulses for further research and can serve as a basis for an extended evaluation through further involvement of experts and stakeholders.
Global agriculture will face major challenges in the future. In addition to the increasing demand for food due to constant population growth, the consequences of climate change will make it even more difficult to operate agriculture and supply people with food. In addition to further productivity increases in traditional agriculture, new concepts for sustainable and scalable food production are needed. Vertical farming offers a promising approach.
The aim of this project is to demonstrate how vertical farming can be used to ensure sustainable food production and how this concept can be applied in the pioneering Maun Science Park project in Botswana. In doing so, the Maun Science Park will address future challenges such as demographics, governance and climate change and become a best practice model for Botswana, the whole of Africa and the world. The country of Botswana grew to become one of the most prosperous countries in Africa in recent decades due to strong economic growth from mining. However, the population faces great challenges in the future; in addition to great social inequality, climate change threatens the country's overall supply.
With the help of a literature review and qualitative and quantitative interviews with stakeholders from Maun (Botswana), the potentials and challenges for vertical farming in Botswana were identified and future measures for a possible realization were derived. Several of Botswana's challenges are indeed addressed by the technology: vertical farming offers high food security through year-round production in a closed ecosystem and creates independence from current and future climatic conditions, from the poor conditions for traditional agriculture (e.g., infertile soils), and from foreign imports. However, the main structural problems of agriculture in Botswana, such as the lack of infrastructure, know-how, and policy support, are not addressed.
Digitization and sustainability are two of the big topics of our time. As the usage of digital products like IoT devices continues to grow, so does the energy consumption caused by the Internet. At the same time, more and more companies feel the need to become carbon neutral and sustainable. Determining the environmental impact of an IoT device is challenging, as both the production of the hardware components and the electricity consumption of the Internet, the primary communication medium of an IoT device, have to be considered. Estimating the electricity consumption of the Internet itself is a complex task. We performed a life cycle assessment (LCA) to determine the environmental impact of an intelligent smoke detector sold in Germany, taking its whole life cycle from cradle to grave into account. We applied the impact assessment method ReCiPe 2016 Midpoint and compared its results with ILCD 2011 Midpoint+ to check the robustness of our results. The LCA results showed that electricity consumption during the use phase is the main contributor to environmental impacts. This contribution is caused by the mining of the coal that is part of the German electricity mix. Consequently, the smoke detector contributes mainly to the impact categories of freshwater and marine ecotoxicity, but only marginally to global warming.
As fish farming is becoming more and more important worldwide, this ongoing project aims at the simulation and test-based analysis of highly stressed wire contacts, as found in offshore fish farm cages, in order to make them more reliable. A quasi-static tensile test of a wire mesh provides data for the construction of a finite element model to better understand the behavior of the high-strength stainless steel from which the cages are made. Fatigue tests provide new insights that are used to adjust the finite element model in order to predict the probability of damage caused by heavy mechanical loads (waves, storms, and predators such as sharks).
In this paper, a novel feature-based sampling strategy for nonlinear Model Predictive Path Integral (MPPI) control is presented. Using the MPPI approach, the optimal feedback control is calculated by solving a stochastic optimal control (OCP) problem online by evaluating the weighted inference of sampled stochastic trajectories. While the MPPI algorithm can be excellently parallelized, the closed-loop performance strongly depends on the information quality of the sampled trajectories. To draw samples, a proposal density is used. The solver’s and thus, the controller’s performance is of high quality if the sampled trajectories drawn from this proposal density are located in low-cost regions of state-space. In classical MPPI control, the explored state-space is strongly constrained by assumptions that refer to the control value’s covariance matrix, which are necessary for transforming the stochastic Hamilton–Jacobi–Bellman (HJB) equation into a linear second-order partial differential equation. To achieve excellent performance even with discontinuous cost functions, in this novel approach, knowledge-based features are introduced to constitute the proposal density and thus the low-cost region of state-space for exploration. This paper addresses the question of how the performance of the MPPI algorithm can be improved using a feature-based mixture of base densities. Furthermore, the developed algorithm is applied to an autonomous vessel that follows a track and concurrently avoids collisions using an emergency braking feature. Therefore, the presented feature-based MPPI algorithm is applied and analyzed in both simulation and full-scale experiments.
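For readers unfamiliar with the baseline, a single MPPI update can be sketched as follows. This is a generic textbook-style sketch, not the feature-based variant proposed in the paper: trajectories are sampled from a single Gaussian proposal around a nominal control sequence, rolled out through the dynamics, and averaged with exponential cost weights.

```python
import numpy as np

def mppi_step(dynamics, cost, u_nom, x0, n_samples=256, sigma=0.5,
              lam=1.0, seed=0):
    """One MPPI update: sample Gaussian control perturbations, roll out
    the dynamics, and return the cost-weighted average control sequence."""
    horizon = len(u_nom)
    rng = np.random.default_rng(seed)
    du = rng.normal(0.0, sigma, size=(n_samples, horizon))
    costs = np.zeros(n_samples)
    for i in range(n_samples):
        x = x0
        for t in range(horizon):
            x = dynamics(x, u_nom[t] + du[i, t])
            costs[i] += cost(x)
    # Exponentially weight trajectories by shifted cost; lam acts as the
    # inverse temperature of the soft-minimum over sampled trajectories.
    w = np.exp(-(costs - costs.min()) / lam)
    w /= w.sum()
    return u_nom + w @ du
```

Applied to a 1-D integrator with a quadratic cost around a target, the update shifts the nominal controls toward the target. The paper's contribution replaces the single Gaussian proposal with a knowledge-based, feature-driven mixture of base densities so that samples concentrate in low-cost regions of the state-space.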
Sleep is essential to physical and mental health. However, the traditional approach to sleep analysis, polysomnography (PSG), is intrusive and expensive. There is therefore great interest in the development of non-contact, non-invasive, and non-intrusive sleep monitoring systems and technologies that can reliably and accurately measure cardiorespiratory parameters with minimal impact on the patient. This has led to the development of approaches that, for example, allow greater freedom of movement and do not require direct contact with the body, i.e., that are non-contact. This systematic review discusses the relevant methods and technologies for non-contact monitoring of cardiorespiratory activity during sleep. Taking into account the current state of the art in non-intrusive technologies, we identify the methods of non-intrusive monitoring of cardiac and respiratory activity, the technologies and types of sensors used, and the physiological parameters available for analysis. To do this, we conducted a literature review and summarised current research on the use of non-contact technologies for non-intrusive monitoring of cardiac and respiratory activity. The inclusion and exclusion criteria for the selection of publications were established prior to the start of the search. Publications were assessed using one main question and several specific questions. We obtained 3774 unique articles from four literature databases (Web of Science, IEEE Xplore, PubMed, and Scopus) and checked them for relevance, resulting in 54 articles that were analysed in a structured way. The result was 15 different types of sensors and devices (e.g., radar, temperature sensors, motion sensors, cameras) that can be installed in hospital wards and departments or in the environment.
The ability to detect heart rate, respiratory rate, and sleep disorders such as apnoea was among the characteristics examined to investigate the overall effectiveness of the systems and technologies considered for cardiorespiratory monitoring. In addition, the advantages and disadvantages of the considered systems and technologies were identified by answering the identified research questions. The results obtained allow us to determine the current trends and the vector of development of medical technologies in sleep medicine for future researchers and research.
Increasing demand for sustainable, resilient, and low-carbon construction materials has highlighted the potential of Compacted Mineral Mixtures (CMMs), which are formulated from various soil types (sand, silt, clay) and recycled mineral waste. This paper presents a comprehensive inter- and transdisciplinary research concept that aims to industrialise and scale up the adoption of CMM-based construction materials and methods, thereby accelerating the construction industry’s systemic transition towards carbon neutrality. By drawing upon the latest advances in soil mechanics, rheology, and automation, we propose the development of a robust material properties database to inform the design and application of CMM-based materials, taking into account their complex, time-dependent behaviour. Advanced soil mechanical tests would be utilised to ensure optimal performance under various loading and ageing conditions. This research has also recognised the importance of context-specific strategies for CMM adoption. We have explored the implications and limitations of implementing the proposed framework in developing countries, particularly where resources may be constrained. We aim to shed light on socio-economic and regulatory aspects that could influence the adoption of these sustainable construction methods. The proposed concept explores how the automated production of CMM-based wall elements can become a fast, competitive, emission-free, and recyclable alternative to traditional masonry and concrete construction techniques. We advocate for the integration of open-source digital platform technologies to enhance data accessibility, processing, and knowledge acquisition; to boost confidence in CMM-based technologies; and to catalyse their widespread adoption. We believe that the transformative potential of this research necessitates a blend of basic and applied investigation using a comprehensive, holistic, and transfer-oriented methodology. 
Thus, this paper serves to highlight the viability and multiple benefits of CMMs in construction, emphasising their pivotal role in advancing sustainable development and resilience in the built environment.
Digital federated platforms and data cooperatives for secure, trusted and sovereign data exchange will play a central role in the construction industry of the future. With the help of platforms, cooperatives and their novel value creation, the digital transformation and the degree of organization of the construction value chain can be taken to a new level of collaboration. The goal of this research project was to develop an experimental prototype for a federated innovation data platform along with a suitable exemplary use case. The prototype is to serve the construction industry as a demonstrator for further developments and form the basis for an innovation platform. It exemplifies how an overall concept is concretely implemented along one or more use cases that address high-priority industry pain points. This concept will create a blueprint and a framework for further developments, which will then be further established in the market. The research project illuminates the perspective of various governance innovations to increase industry collaboration, productivity and capital project performance and transparency as well as the overall potential of possible platform business models. However, a comprehensive expert survey revealed that there are considerable obstacles to trust-based data exchange between the key stakeholders in the industry value network. The obstacles to cooperation are predominantly not technical but competitive and, above all, trust-related. To overcome these obstacles and create a pre-competitive space of trust, the authors therefore propose the governance structure of a data cooperative model, which is discussed in detail in this paper.
The scoring of sleep stages is one of the essential tasks in sleep analysis. Since a manual procedure requires considerable human and financial resources, and incorporates some subjectivity, an automated approach could offer several advantages. There have been many developments in this area, and in order to provide a comprehensive overview, it is essential to review relevant recent works and summarise the characteristics of the approaches, which is the main aim of this article. To achieve it, we examined articles published between 2018 and 2022 that dealt with the automated scoring of sleep stages. In the final selection for in-depth analysis, 125 articles were included after reviewing a total of 515 publications. The results revealed that automatic scoring demonstrates good quality (with Cohen's kappa exceeding 0.80 and accuracy exceeding 90%) in analysing EEG/EEG + EOG + EMG signals. At the same time, it should be noted that there has been no breakthrough in the quality of results using these signals in recent years. Systems involving other signals that could potentially be acquired more conveniently for the user (e.g. respiratory, cardiac or movement signals) remain more challenging to implement with a high level of reliability but have considerable innovation capability. In general, automatic sleep stage scoring has excellent potential to assist medical professionals while providing an objective assessment.
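Cohen's kappa, the agreement metric quoted above, corrects the raw agreement rate between two scorers (e.g., an automatic system and a human expert) for the agreement expected by chance. A minimal implementation for two scorers:

```python
def cohens_kappa(a, b, labels):
    """Cohen's kappa for two raters: (p_o - p_e) / (1 - p_e), where p_o is
    observed agreement and p_e is chance agreement from marginal label
    frequencies."""
    n = len(a)
    p_o = sum(x == y for x, y in zip(a, b)) / n
    p_e = sum((a.count(l) / n) * (b.count(l) / n) for l in labels)
    return (p_o - p_e) / (1 - p_e)
```

A kappa above 0.80 is conventionally read as almost perfect agreement, which is why it is used alongside raw accuracy when reporting sleep stage scoring quality.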
Driver assistance systems are increasingly becoming part of the standard equipment of vehicles and thus contribute to road safety. However, as they become more widespread, the requirements for cost efficiency are also increasing, and so few and inexpensive sensors are used in these systems. Especially in challenging situations, this leads to the fact that target discrimination cannot be ensured which may lead to false reactions of the driver assistance system. In this paper, the Boids flocking algorithm is used to generate semantic neighborhood information between tracked objects which in turn can significantly improve the overall performance. Two different variants were developed: First, a free-moving flock whereby a separate flock is generated per tracked object and second, a formation-controlled flock where boids of a single flock move along the future road course in a pre-defined formation. In the first approach, the interaction between the flocks as well as the interaction between the boids within a flock is used to generate additional information, which in turn can be used to improve, for example, lane change detection. For the latter approach, new behavioral rules have been developed, so that the boids can reliably identify control-relevant objects to a driver assistance system. Finally, the performance of the presented methods is verified through extensive simulations.
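The classic Boids rules underlying both variants, separation, alignment, and cohesion, can be sketched as follows. The weights and neighbourhood radius here are illustrative assumptions, not the behavioral rules developed in the paper:

```python
import numpy as np

def boids_step(pos, vel, dt=0.1, r=2.0, w_sep=1.5, w_ali=1.0, w_coh=1.0):
    """One update of the classic Boids rules for positions/velocities of
    shape (n, 2): separation pushes boids apart, alignment matches
    neighbour velocities, cohesion pulls toward the neighbourhood centre."""
    n = len(pos)
    new_vel = vel.copy()
    for i in range(n):
        d = pos - pos[i]
        dist = np.linalg.norm(d, axis=1)
        mask = (dist > 0) & (dist < r)          # neighbours within radius r
        if mask.any():
            sep = -(d[mask] / dist[mask, None] ** 2).sum(axis=0)
            ali = vel[mask].mean(axis=0) - vel[i]
            coh = pos[mask].mean(axis=0) - pos[i]
            new_vel[i] += dt * (w_sep * sep + w_ali * ali + w_coh * coh)
    return pos + dt * new_vel, new_vel
```

In the paper's setting, a flock attached to each tracked object (or a formation along the road course) evolves under such rules, and the emergent interactions supply the semantic neighborhood information used by the driver assistance system.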
In order to ensure sufficient recovery of the human body and brain, healthy sleep is indispensable. Appropriate therapy should therefore be initiated at an early stage in the case of sleep disorders. For some sleep disorders (e.g., insomnia), a sleep diary is essential for diagnosis and therapy monitoring. However, subjective measurement with a sleep diary has several disadvantages: it requires regular action from the user, decreases comfort, and risks data loss. To automate sleep monitoring and increase user comfort, one could consider replacing the sleep diary with an automatic measurement, such as a smartwatch, which would not disturb sleep. To evaluate the possibility of such a replacement, a field study was conducted with a total of 166 overnight recordings, followed by an analysis of the results. In this evaluation, objective sleep measurement with a Samsung Galaxy Watch 4 was compared to the subjective sleep diary approach, which is a standard method in sleep medicine. The focus was on comparing four relevant sleep characteristics: falling asleep time, waking up time, total sleep time (TST), and sleep efficiency (SE). The evaluation showed that a smartwatch could replace subjective measurement of falling asleep and waking up time, allowing for some level of inaccuracy. For SE, substitution also proved possible, although some individual recordings showed a higher discrepancy between the two approaches. The evaluation of the TST measurement, however, currently does not allow us to recommend substituting the measurement method for this sleep parameter. The appropriateness of replacing sleep diary measurement with a smartwatch depends on the acceptable level of discrepancy. We propose four levels of similarity of results, defining ranges of absolute differences between objective and subjective measurements.
By considering the values in the provided table and knowing the required accuracy, it is possible to determine the suitability of substitution in each individual case. The introduction of a “similarity level” parameter increases the adaptability and reusability of study findings in individual practical cases.
Non-volatile NAND flash memories store information as an electrical charge. Different read reference voltages are applied to read the data. However, the threshold voltage distributions vary due to aging effects like program erase cycling and data retention time. It is necessary to adapt the read reference voltages for different life-cycle conditions to minimize the error probability during readout. In the past, methods based on pilot data or high-resolution threshold voltage histograms were proposed to estimate the changes in voltage distributions. In this work, we propose a machine learning approach with neural networks to estimate the read reference voltages. The proposed method utilizes sparse histogram data for the threshold voltage distributions. For reading the information from triple-level cell (TLC) memories, several read reference voltages are applied in sequence. We consider two histogram resolutions. The simplest histogram consists of the zero-and-one ratios for the hard decision read operation, whereas a higher resolution is obtained by considering the quantization levels for soft-input decoding. This approach does not require pilot data for the voltage adaptation. Furthermore, only a few measurements of extreme points of the threshold voltage distributions are required as training data. Measurements with different conditions verify the proposed approach. The resulting neural networks perform well under other life-cycle conditions.
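The idea of estimating read reference voltages from sparse histogram data can be illustrated on synthetic data. The sketch below is a strong simplification of the paper's method: it uses a one-feature linear fit in place of the neural network, a single hard-read bin (the ones-ratio) as the sparse histogram, and an invented Gaussian drift model for aging; all numeric values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(42)

def make_sample(drift):
    """Synthetic aging condition: both threshold-voltage states drift
    upward. Feature: ones-ratio of a hard read at a fixed 1.0 V reference
    (a one-bin sparse histogram). Target: the optimal read voltage,
    taken as the midpoint between the two state means."""
    lo = rng.normal(0.0 + drift, 0.3, 20000)        # erased state
    hi = rng.normal(2.0 + 0.5 * drift, 0.3, 20000)  # programmed state
    cells = np.concatenate([lo, hi])
    ones_ratio = float((cells < 1.0).mean())
    v_opt = float((lo.mean() + hi.mean()) / 2)
    return ones_ratio, v_opt

# Training conditions spanning a range of drift (aging) states.
X, y = zip(*(make_sample(d) for d in np.linspace(0.2, 0.8, 50)))
# A one-feature linear model stands in for the paper's neural network.
a, b = np.polyfit(X, y, 1)
pred = a * np.array(X) + b
```

Even this crude regressor tracks the optimal read voltage from a single hard-decision statistic; the paper's neural networks additionally exploit soft-read quantization levels and generalize across life-cycle conditions not seen in training.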
Background: Polysomnography (PSG) is the gold standard for detecting obstructive sleep apnea (OSA). However, this technique has many disadvantages when used outside the hospital or in daily use. Portable monitors (PMs) aim to streamline the OSA detection process through deep learning (DL).
Materials and methods: We studied how to detect OSA events and calculate the apnea-hypopnea index (AHI) using deep learning models designed to be implemented on PMs. Several deep learning models are presented after being trained on polysomnography data from the National Sleep Research Resource (NSRR) repository. The best hyperparameters for the DL architecture are presented. In addition, emphasis is placed on model explainability techniques, concretely on Gradient-weighted Class Activation Mapping (Grad-CAM).
Results: The results for the best DL model are presented and analyzed. The interpretability of the DL model is also analyzed by studying the regions of the signals that are most relevant for the model to make the decision. The model that yields the best result is a one-dimensional convolutional neural network (1D-CNN) with 84.3% accuracy.
Conclusion: The use of PMs using machine learning techniques for detecting OSA events still has a long way to go. However, our method for developing explainable DL models demonstrates that PMs appear to be a promising alternative to PSG in the future for the detection of obstructive apnea events and the automatic calculation of AHI.
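Grad-CAM on a 1D-CNN can be illustrated with a toy model. The sketch below is not the paper's network: it assumes a single convolutional layer followed by ReLU, global average pooling (GAP), and a linear score, for which the Grad-CAM channel weights reduce analytically to the dense weights divided by the sequence length, and the "apnea pattern" is an invented dip in a flat signal.

```python
import numpy as np

def conv1d(x, kernels):
    """Valid-mode 1D convolution producing one feature map per kernel."""
    k = kernels.shape[1]
    T = len(x) - k + 1
    return np.array([[np.dot(x[t:t + k], w) for t in range(T)] for w in kernels])

def grad_cam_1d(x, kernels, dense_w):
    """Grad-CAM for a toy 1D-CNN (conv -> ReLU -> GAP -> linear score).
    Here d(score)/d(feature map k) = dense_w[k] / T, so the channel
    weights alpha are just dense_w / T; the map is ReLU(sum_k alpha_k A_k)."""
    A = np.maximum(conv1d(x, kernels), 0.0)   # feature maps, shape (K, T)
    alpha = dense_w / A.shape[1]              # GAP gradient per channel
    return np.maximum(alpha @ A, 0.0)         # class activation map over time

x = np.zeros(100)
x[40:50] = -1.0                               # flat signal with a sharp dip
kernels = np.array([[-1.0, -1.0, -1.0]])      # kernel responding to the dip
cam = grad_cam_1d(x, kernels, dense_w=np.array([1.0]))
```

The activation map peaks exactly over the dip, which is the kind of region-level explanation used in the paper to check that the 1D-CNN attends to physiologically plausible parts of the signal.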
Sleep is extremely important for physical and mental health. Although polysomnography is an established approach in sleep analysis, it is quite intrusive and expensive. Consequently, developing a non-invasive and non-intrusive home sleep monitoring system with minimal influence on patients, that can reliably and accurately measure cardiorespiratory parameters, is of great interest. The aim of this study is to validate a non-invasive and unobtrusive cardiorespiratory parameter monitoring system based on an accelerometer sensor. This system includes a special holder to install the system under the bed mattress. The additional aim is to determine the optimum relative system position (in relation to the subject) at which the most accurate and precise values of measured parameters could be achieved. The data were collected from 23 subjects (13 males and 10 females). The obtained ballistocardiogram signal was sequentially processed using a sixth-order Butterworth bandpass filter and a moving average filter. As a result, an average error (compared to reference values) of 2.24 beats per minute for heart rate and 1.52 breaths per minute for respiratory rate was achieved, regardless of the subject’s sleep position. For males and females, the errors were 2.28 bpm and 2.19 bpm for heart rate and 1.41 rpm and 1.30 rpm for respiratory rate. We determined that placing the sensor and system at chest level is the preferred configuration for cardiorespiratory measurement. Further studies of the system’s performance in larger groups of subjects are required, despite the promising results of the current tests in healthy subjects.
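The signal-processing chain described above, a sixth-order Butterworth bandpass followed by a moving-average filter, can be sketched on synthetic data. This assumes SciPy is available; the band edges, sampling rate, and peak-picking step are illustrative assumptions, not the study's exact parameters.

```python
import numpy as np
from scipy.signal import butter, filtfilt, find_peaks

fs = 100.0                                  # assumed sampling rate (Hz)
t = np.arange(0, 30, 1 / fs)
# Synthetic "BCG": a 1.2 Hz cardiac component (72 bpm) buried in a large
# 0.25 Hz respiratory component plus noise.
sig = (0.2 * np.sin(2 * np.pi * 1.2 * t)
       + 1.0 * np.sin(2 * np.pi * 0.25 * t)
       + 0.05 * np.random.default_rng(0).normal(size=t.size))

# Sixth-order Butterworth bandpass around the cardiac band, followed by a
# short moving-average smoother (zero-phase filtering via filtfilt).
b, a = butter(6, [0.7, 3.0], btype="bandpass", fs=fs)
cardiac = filtfilt(b, a, sig)
cardiac = np.convolve(cardiac, np.ones(5) / 5, mode="same")

# Heart rate from median inter-beat interval of the detected peaks.
peaks, _ = find_peaks(cardiac, distance=fs * 0.4)
hr = 60 * fs / np.median(np.diff(peaks))    # beats per minute
```

The same chain applied to the respiratory band (roughly 0.1 to 0.5 Hz) would recover the breathing rate, mirroring the two parameters reported in the study.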
While driving, stress arises in situations in which drivers judge their ability to meet the driving demands as insufficient or lose the capability to handle the situation. This leads to more driver mistakes and traffic violations. Additional stressors are time pressure, road conditions, and dislike for driving. Stress therefore affects driver and road safety. Depending on its duration and its effects on body and psyche, stress is classified into two categories: short-term eustress and constantly present distress, which causes degenerative effects. In this work, we focus on distress. Wearable sensors are handy tools for collecting biosignals such as heart rate and activity; their easy installation and non-intrusive nature make them convenient for estimating stress. This study investigates stress and its implications, analyzing stress within a select group of individuals from both Spain and Germany. The primary objective is to examine the influence of recognized psychological factors, including personality traits such as neuroticism, extroversion, and psychoticism, on stress and road safety. Stress levels were estimated from physiological parameters (R-R intervals) collected with a Polar H10 chest strap. We observed that personality traits such as extroversion exhibited similar trends during relaxation, with an average heart rate 6% higher in Spain and 3% higher in Germany. While driving, however, introverts on average experienced more stress, with rates 4% and 1% lower than extroverts in Spain and Germany, respectively.
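The heart-rate comparisons above rest on a simple conversion from R-R intervals to beats per minute. A minimal sketch (the interval values below are made up for illustration):

```python
def mean_heart_rate(rr_intervals_ms):
    """Mean heart rate in beats per minute from R-R intervals (ms)."""
    if not rr_intervals_ms:
        raise ValueError("need at least one R-R interval")
    mean_rr = sum(rr_intervals_ms) / len(rr_intervals_ms)
    return 60_000.0 / mean_rr

def relative_difference(hr_a, hr_b):
    """Relative difference of hr_a vs. hr_b, e.g. 0.06 for 6% higher."""
    return (hr_a - hr_b) / hr_b

# R-R intervals of 800 ms correspond to 75 bpm
hr = mean_heart_rate([800, 810, 790, 800])  # 75.0
```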
Network effects, economies of scale, and lock-in effects increasingly lead to a concentration of digital resources and capabilities, hindering the free and equitable development of digital entrepreneurship, new skills, and jobs, especially in small communities and their small and medium-sized enterprises (“SMEs”). To ensure the affordability and accessibility of technologies, promote digital entrepreneurship and community well-being, and protect digital rights, we propose data cooperatives as a vehicle for secure, trusted, and sovereign data exchange. In post-pandemic times, community/SME-led cooperatives can play a vital role by ensuring that supply chains to support digital commons are uninterrupted, resilient, and decentralized. Digital commons and data sovereignty provide communities with affordable and easy access to information and the ability to collectively negotiate data-related decisions. Moreover, cooperative commons (a) provide access to the infrastructure that underpins the modern economy, (b) preserve property rights, and (c) ensure that privatization and monopolization do not further erode self-determination, especially in a world increasingly mediated by AI. Thus, governance plays a significant role in accelerating communities’/SMEs’ digital transformation and addressing their challenges. Cooperatives thrive on digital governance and standards such as open trusted application programming interfaces (“APIs”) that increase the efficiency, technological capabilities, and capacities of participants and, most importantly, integrate, enable, and accelerate the digital transformation of SMEs in the overall process. This review article analyses an array of transformative use cases that underline the potential of cooperative data governance.
These case studies exemplify how data and platform cooperatives, through their innovative value creation mechanisms, can elevate digital commons and value chains to a new dimension of collaboration, thereby addressing pressing societal issues. Guided by our research aim, we propose a policy framework that supports the practical implementation of digital federation platforms and data cooperatives. This policy blueprint intends to facilitate sustainable development in both the Global South and North, fostering equitable and inclusive data governance strategies.
This thesis addresses problems that reports generated by vulnerability scanners pose for the process of vulnerability management: (a) an overwhelming amount of data and (b) insufficient prioritization of the scan results.
To support the development of countermeasures to those problems and to allow for quantitative evaluation of their solutions, two metrics are proposed for their effectiveness and efficiency. These metrics imply a focus on higher-severity vulnerabilities and can be applied to any simplification process for vulnerability scan results, provided it relies on a severity score and an estimated remediation time for each vulnerability.
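One plausible instantiation of such metrics, sketched below under assumptions not taken from the thesis: effectiveness as the share of total severity a selected subset covers, and efficiency as severity addressed per hour of estimated remediation time.

```python
def effectiveness(selected, all_vulns):
    """Share of the total severity addressed by the selected subset
    (one plausible reading of the proposed effectiveness metric)."""
    total = sum(v["severity"] for v in all_vulns)
    return sum(v["severity"] for v in selected) / total if total else 0.0

def efficiency(selected):
    """Severity addressed per hour of estimated remediation time
    (one plausible reading of the proposed efficiency metric)."""
    hours = sum(v["remediation_hours"] for v in selected)
    return sum(v["severity"] for v in selected) / hours if hours else 0.0

# Hypothetical scan results
vulns = [
    {"severity": 9.8, "remediation_hours": 1.0},
    {"severity": 5.0, "remediation_hours": 4.0},
    {"severity": 2.1, "remediation_hours": 0.5},
]
# Fixing the two highest-severity findings first
top = sorted(vulns, key=lambda v: v["severity"], reverse=True)[:2]
```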
A priority score is introduced that aims to improve on the widely used Common Vulnerability Scoring System (CVSS) base score of each vulnerability, based on the vulnerability's ease of exploitation, its estimated probability of exploitation, and the probability of its existence.
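Such a score could look roughly like the sketch below; the averaging of the three factors and the [0, 1] factor ranges are illustrative assumptions, not the thesis's exact formula.

```python
def priority_score(cvss_base, ease_of_exploit, p_exploitation, p_existence):
    """Adjust a CVSS base score (0-10) by three factors in [0, 1].
    The equal-weight average used here is an illustrative assumption."""
    for f in (ease_of_exploit, p_exploitation, p_existence):
        if not 0.0 <= f <= 1.0:
            raise ValueError("factors must lie in [0, 1]")
    adjustment = (ease_of_exploit + p_exploitation + p_existence) / 3
    return round(cvss_base * adjustment, 1)
```

A finding that is easy to exploit, likely to be exploited, and almost certainly present keeps its full base score; uncertain or hard-to-exploit findings are discounted.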
Patterns among vulnerabilities within the reports generated by the Open Vulnerability Assessment System (OpenVAS) vulnerability scanner are discovered that identify criteria by which vulnerabilities can be categorized from a remediation actor's standpoint. These categories lay the groundwork for the final simplified report and consist of updates that need to be installed on a host, severe vulnerabilities, vulnerabilities that occur on multiple hosts, and vulnerabilities whose remediation will take a lot of time. The highest potential time savings are found among frequently occurring vulnerabilities and minor and major suggested updates.
Processing of the results provided by the vulnerability scanner and creation of the report are realized as a Python script. The resulting reports are short and to the point and provide a top-down remediation process that should, in theory, minimize the institution's attack surface as quickly as possible. An evaluation of the practicality must follow, as the reports are yet to be introduced into the Information Security Management Lifecycle.
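The grouping such a script performs could look roughly like this; the field names and thresholds below are assumptions for illustration, not the thesis's actual implementation.

```python
from collections import Counter, defaultdict

def categorize(findings, severe_threshold=7.0, long_fix_hours=8.0):
    """Group scanner findings into the report categories described
    above; a finding may appear in several categories."""
    per_name = Counter(f["name"] for f in findings)
    report = defaultdict(list)
    for f in findings:
        if f.get("is_update"):
            report["suggested updates"].append(f)
        if f["severity"] >= severe_threshold:
            report["severe vulnerabilities"].append(f)
        if per_name[f["name"]] > 1:
            report["multi-host vulnerabilities"].append(f)
        if f["remediation_hours"] >= long_fix_hours:
            report["time-consuming remediations"].append(f)
    return dict(report)

# Hypothetical findings across two hosts
findings = [
    {"name": "CVE-A", "host": "h1", "severity": 9.1,
     "remediation_hours": 1.0, "is_update": False},
    {"name": "CVE-A", "host": "h2", "severity": 9.1,
     "remediation_hours": 1.0, "is_update": False},
    {"name": "OpenSSL update", "host": "h1", "severity": 5.0,
     "remediation_hours": 0.5, "is_update": True},
]
report = categorize(findings)
```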
Cities around the world face the implications of a changing climate as an increasingly pressing issue; the negative effects of climate change are already being felt today. Adaptation to these changes is therefore a mission that every city must master. Leading practices worldwide demonstrate various urban efforts on climate change adaptation (CCA) that are already underway. Above all, the integration of climate data, remote sensing, and in situ data is key to a successful and measurable adaptation strategy. These data can also act as a timely decision-support tool for municipalities to develop an adaptation strategy, decide which actions to prioritize, and gain the necessary buy-in from local policymakers. The implementation of agile data workflows can facilitate the integration of climate data into climate-resilient urban planning. Due to local specificities, (supra)national, regional, and municipal policies and (by-)laws, as well as geographic and related climatic differences worldwide, there is no single path to climate-resilient urban planning. Agile data workflows can support interdepartmental collaboration and therefore need to be integrated into existing management processes and government structures. Agile management, which has its origins in software development, can be a way to break down traditional management practices, such as static waterfall models and sluggish stage-gate processes, and to enable the increased level of flexibility and agility required in urgent situations. This paper presents the findings of an empirical case study conducted in cooperation with the City of Constance in southern Germany, which is pursuing a transdisciplinary and trans-sectoral co-development approach to make management processes more agile in the context of climate change adaptation.
The aim is to present a possible way of integrating climate data into CCA planning by changing the management approach and implementing a toolbox for low-threshold access to climate data. The city administration, in collaboration with the University of Applied Sciences Constance, the Climate Service Center Germany (GERICS), and the University of Stuttgart, developed a co-creative and participatory project, CoKLIMAx, with the objective of integrating climate data into administrative processes in the form of a toolbox. One key element of CoKLIMAx is the involvement of the population, the city administration, and political decision-makers through targeted communication and regular feedback loops among all involved departments and stakeholder groups. Based on the results of a survey of 72 administrative staff members and a literature review on agile management in municipalities and city administrations, recommendations on a workflow and communication structure for cross-departmental strategies for resilient urban planning in the City of Constance were developed.
In past years, algorithms for 3D shape tracking using radial functions in spherical coordinates, represented with different methods, have been proposed. However, in many dynamic scenarios, mainly measurements from the lateral surface of the target can be expected and only few from the top and bottom parts, leading to an error-prone shape estimate in the top and bottom regions when a spherical-coordinate representation is used. We therefore propose to represent the shape of the target using a radial function in cylindrical coordinates, as this represents only the lateral surface, and no information from the top or bottom parts is needed. In this paper, we use a Fourier-Chebyshev double series for 3D shape representation, since a mixture of Fourier and Chebyshev series is a suitable basis for expanding a radial function in cylindrical coordinates. We investigate the method in a simulated and a real-world maritime scenario with a CAD model of the target boat as a reference. We find that shape representation in cylindrical coordinates has decisive advantages over a representation in spherical coordinates and should preferably be used if no prior knowledge of the measurement distribution on the target's surface is available.
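A Fourier-Chebyshev double series for a radial function in cylindrical coordinates can be evaluated as sketched below, with the Fourier part over the azimuth angle and the Chebyshev part over the (normalized) height; the coefficient layout and truncation orders are illustrative assumptions.

```python
import numpy as np
from numpy.polynomial.chebyshev import chebval

def radial_function(theta, z, coeff_cos, coeff_sin):
    """Evaluate r(theta, z) = sum_{m,k} [a_mk cos(m theta)
    + b_mk sin(m theta)] T_k(z), with z normalized to [-1, 1].
    coeff_cos/coeff_sin have shape (M, K): Fourier order m,
    Chebyshev degree k."""
    M, K = coeff_cos.shape
    r = np.zeros(np.broadcast(theta, z).shape)
    for m in range(M):
        cheb_cos = chebval(z, coeff_cos[m])  # sum_k a_mk T_k(z)
        cheb_sin = chebval(z, coeff_sin[m])  # sum_k b_mk T_k(z)
        r += cheb_cos * np.cos(m * theta) + cheb_sin * np.sin(m * theta)
    return r

# Sanity check: only the (m=0, k=0) coefficient set -> a cylinder of radius 2
a = np.zeros((3, 3)); a[0, 0] = 2.0
b = np.zeros((3, 3))
theta = np.linspace(0, 2 * np.pi, 8)
z = np.linspace(-1, 1, 8)
r = radial_function(theta, z, a, b)
```

As the abstract notes, this basis only describes the lateral surface: every (theta, z) pair maps to a radius, so no top or bottom information enters the representation.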
In 3D extended object tracking (EOT), well-established models exist for tracking the object extent using various shape priors. With these models, however, a single update has to be performed for every measurement, leading to a high computational runtime for high-resolution sensors. In this paper, we address this problem by using various model-independent downsampling schemes, based on distance heuristics and random sampling, as pre-processing before the update. We investigate the methods in a simulated and a real-world tracking scenario using two different measurement models with measurements gathered from a LiDAR sensor. We find a huge potential for speeding up 3D EOT: in our investigated scenarios, up to 95% of the measurements can be dropped when using random sampling. Since random sampling can, however, also yield a subset that represents the total set poorly, leading to poor tracking performance, there is still a high demand for further research.
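The random-sampling scheme, applied as pre-processing before the measurement update, can be sketched in a few lines; the point-cloud size and keep fraction below are illustrative.

```python
import numpy as np

def downsample_random(points, keep_fraction, rng=None):
    """Keep a uniformly random subset of the measurement rows.
    Dropping up to 95% of the measurements corresponds to
    keep_fraction = 0.05."""
    rng = np.random.default_rng(rng)
    n_keep = max(1, int(round(keep_fraction * len(points))))
    idx = rng.choice(len(points), size=n_keep, replace=False)
    return points[idx]

# Fake LiDAR returns: 2000 points in 3D
cloud = np.random.default_rng(0).normal(size=(2000, 3))
subset = downsample_random(cloud, keep_fraction=0.05, rng=0)  # 100 points
```

Each retained point then triggers one update of the extent model, so the runtime of the update step scales directly with `keep_fraction`.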
Sleep disorders can impact daily life, affecting physical, emotional, and cognitive well-being. Because standard approaches such as polysomnography are time-consuming, highly obtrusive, and expensive, there is great interest in developing a noninvasive and unobtrusive in-home sleep monitoring system that can reliably and accurately measure cardiorespiratory parameters while causing minimal discomfort to the user's sleep. We developed a low-cost, low-complexity Out of Center Sleep Testing (OCST) system to measure cardiorespiratory parameters. We tested and validated two force-sensitive resistor strip sensors under the bed mattress, covering the thoracic and abdominal regions. Twenty subjects were recruited, including 12 males and 8 females. The ballistocardiogram signal was processed using the 4th smooth level of the discrete wavelet transform and a 2nd-order Butterworth bandpass filter to measure heart rate and respiration rate, respectively. We reached a total error (relative to the reference sensors) of 3.24 beats per minute for heart rate and 2.32 breaths per minute for respiration rate. For males and females, heart rate errors were 3.47 and 2.68 beats per minute, and respiration rate errors were 2.32 and 2.33 breaths per minute, respectively. We developed the system and verified its reliability and applicability. It showed only a minor dependency on sleeping position, one of the major difficulties in sleep measurement. We identified the sensor under the thoracic region as the optimal configuration for cardiorespiratory measurement. Although testing the system with healthy subjects and regular cardiorespiratory patterns showed promising results, further investigation of the filter bandwidth is required, as well as validation of the system with larger groups of subjects, including patients.
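Extracting a DWT smooth (approximation) level can be sketched as repeated low-pass filtering and downsampling. The sketch below uses the Haar wavelet purely for illustration (the abstract does not restate the wavelet family used) and assumes the signal length is divisible by 2**levels.

```python
import numpy as np

def haar_approximation(signal, levels):
    """Return the smooth (approximation) coefficients after `levels`
    Haar DWT stages; signal length must be divisible by 2**levels."""
    a = np.asarray(signal, dtype=float)
    for _ in range(levels):
        a = (a[0::2] + a[1::2]) / np.sqrt(2.0)  # Haar low-pass + downsample
    return a

# A slow oscillation survives four smoothing stages: 256 -> 16 coefficients
x = np.sin(2 * np.pi * np.arange(256) / 64)
smooth4 = haar_approximation(x, levels=4)
```

Each stage halves the sample count, so the 4th smooth level retains only the slow components of the ballistocardiogram, which is why it suits heart-rate extraction.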
Contemporary empirical applications frequently require flexible regression models for complex response types and large tabular or non-tabular, including image or text, data. Classical regression models either break down under the computational load of processing such data or require additional manual feature extraction to make these problems tractable. Here, we present deeptrafo, a package for fitting flexible regression models for conditional distributions using a tensorflow backend with numerous additional processors, such as neural networks, penalties, and smoothing splines. Package deeptrafo implements deep conditional transformation models (DCTMs) for binary, ordinal, count, survival, continuous, and time series responses, potentially with uninformative censoring. Unlike other available methods, DCTMs do not assume a parametric family of distributions for the response. Further, the data analyst may trade off interpretability and flexibility by supplying custom neural network architectures and smoothers for each term in an intuitive formula interface. We demonstrate how to set up, fit, and work with DCTMs for several response types. We further showcase how to construct ensembles of these models, evaluate models using inbuilt cross-validation, and use other convenience functions for DCTMs in several applications. Lastly, we discuss DCTMs in light of other approaches to regression with non-tabular data.
Research Report
(2024)
Sanctions encompass a wide set of policy instruments restricting cross-border economic activities. In this paper, we study how different types of sanctions affect the export behavior of firms to the targeted countries. We combine Danish register data, including information on firm-destination-specific exports, with information on sanctions imposed by Denmark from the Global Sanctions Database. Our data allow us to study firms' export behavior in 62 sanctioned countries, amounting to a total of 453 country-years with sanctions over the period 2000–2015. Methodologically, we apply a two-stage estimation strategy to properly account for multilateral resistance terms. We find that, on average, sanctions lead to a significant reduction in firms' destination-specific exports and a significant increase in firms' probability to exit the destination. Next, we study heterogeneity in the effects of sanctions across (i) sanction types and sanction packages, (ii) the objectives of sanctions, and (iii) countries subject to sanctions. Results confirm that the effects of sanctions on firms' export behavior vary considerably across these three dimensions.
Black-box variational inference (BBVI) is a technique to approximate the posterior of Bayesian models by optimization. As with MCMC, the user only needs to specify the model; the inference procedure is then done automatically. In contrast to MCMC, BBVI scales to many observations, is faster for some applications, and can take advantage of highly optimized deep learning frameworks since it can be formulated as a minimization task. In the case of complex posteriors, however, state-of-the-art BBVI approaches often yield unsatisfactory posterior approximations. This paper presents Bernstein flow variational inference (BF-VI), a robust and easy-to-use method flexible enough to approximate complex multivariate posteriors. BF-VI combines ideas from normalizing flows and Bernstein polynomial-based transformation models. In benchmark experiments, we compare BF-VI solutions with exact posteriors, MCMC solutions, and state-of-the-art BBVI methods, including normalizing flow-based BBVI. We show for low-dimensional models that BF-VI accurately approximates the true posterior; in higher-dimensional models, BF-VI compares favorably against other BBVI methods. Further, using BF-VI, we develop a Bayesian model for the semi-structured melanoma challenge data, combining a CNN model part for image data with an interpretable model part for tabular data, and demonstrate, for the first time, the use of BBVI in semi-structured models.