Black-box variational inference (BBVI) is a technique to approximate the posterior of Bayesian models by optimization. Similar to MCMC, the user only needs to specify the model; then, the inference procedure is done automatically. In contrast to MCMC, BBVI scales to many observations, is faster for some applications, and can take advantage of highly optimized deep learning frameworks since it can be formulated as a minimization task. In the case of complex posteriors, however, existing state-of-the-art BBVI approaches often yield unsatisfactory posterior approximations. This paper presents Bernstein flow variational inference (BF-VI), a robust and easy-to-use method flexible enough to approximate complex multivariate posteriors. BF-VI combines ideas from normalizing flows and Bernstein polynomial-based transformation models. In benchmark experiments, we compare BF-VI solutions with exact posteriors, MCMC solutions, and state-of-the-art BBVI methods, including normalizing flow-based BBVI. We show for low-dimensional models that BF-VI accurately approximates the true posterior; in higher-dimensional models, BF-VI compares favorably against other BBVI methods. Further, using BF-VI, we develop a Bayesian model for the semi-structured melanoma challenge data, combining a CNN model part for image data with an interpretable model part for tabular data, and demonstrate, for the first time, the use of BBVI in semi-structured models.
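The core building block of BF-VI, a Bernstein-polynomial transformation, can be illustrated in a few lines. The sketch below is a simplification, not the paper's implementation, and all function and variable names are my own: it evaluates h(z) = Σ_k θ_k B_{k,M}(z) and checks that non-decreasing coefficients yield a monotone (hence invertible) transformation, which is the property such flows rely on.

```python
import numpy as np
from math import comb

def bernstein_transform(z, theta):
    """Evaluate h(z) = sum_k theta_k * B_{k,M}(z) for z in [0, 1].

    With non-decreasing coefficients theta, h is monotone, which makes
    it usable as a flow transformation in transformation models.
    """
    M = len(theta) - 1
    z = np.asarray(z, dtype=float)
    # Bernstein basis polynomials B_{k,M}(z), stacked as a (M+1, N) array
    basis = np.stack([comb(M, k) * z**k * (1 - z)**(M - k)
                      for k in range(M + 1)])
    return theta @ basis

# strictly increasing coefficients -> strictly increasing transformation
theta = np.array([-2.0, -0.5, 0.1, 0.4, 2.0])
z = np.linspace(0.0, 1.0, 101)
h = bernstein_transform(z, theta)
assert np.all(np.diff(h) > 0)   # monotone, so the flow is invertible
```

Note that h(0) equals the first coefficient and h(1) the last, so the coefficient range directly controls the output range of the transformation.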
Apnea is a sleep disorder characterized by breathing interruptions during sleep, impacting cardiorespiratory function and overall health. Traditional diagnostic methods, like polysomnography (PSG), are obtrusive, motivating noninvasive monitoring. This study aims to develop and validate a novel sleep monitoring system using noninvasive sensor technology to estimate cardiorespiratory parameters and detect sleep apnea. We designed a seamless monitoring system integrating noncontact force-sensitive resistor sensors to collect ballistocardiogram signals associated with cardiorespiratory activity. We enhanced the sensor's sensitivity and reduced the noise by designing a new concept of edge-measuring sensor using a hemisphere dome and mechanical hanger to distribute the force and mechanically amplify the micromovements caused by cardiac and respiratory activity. In total, we deployed three edge-measuring sensors, two under the thoracic region and one under the abdominal region. The system is supported by onboard signal preprocessing in multiple physical layers deployed under the mattress. We collected data in four sleeping positions from 16 subjects and analyzed them using ensemble empirical mode decomposition (EEMD) to avoid frequency mixing. We also developed an adaptive thresholding method to identify sleep apnea. The error was reduced to 3.98 and 1.43 beats/min (BPM) in heart rate (HR) and respiration estimation, respectively. Apnea was detected with an accuracy of 87%. We optimized the system such that a single edge-measuring sensor suffices to measure the cardiorespiratory parameters. This reduction in complexity and simplification of the instructions for use show excellent potential for in-home and continuous monitoring.
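The adaptive-thresholding idea for apnea detection can be sketched as follows. This is a minimal illustration under assumed parameters (1-second envelope windows, a threshold at 30% of the median envelope amplitude, a 10-second minimum pause), not the study's actual algorithm; all names are hypothetical.

```python
import numpy as np

def detect_apnea(resp, fs, min_pause_s=10.0, frac=0.3):
    """Flag segments where the breathing envelope stays below an adaptive
    threshold (a fraction of the median envelope amplitude) for at least
    min_pause_s seconds. Returns (start, end) envelope-sample indices."""
    win = int(fs)  # 1-second sliding window for the amplitude envelope
    env = np.array([np.ptp(resp[i:i + win]) for i in range(len(resp) - win)])
    thresh = frac * np.median(env)  # adapts to the recording's amplitude
    below = env < thresh
    events, start = [], None
    for i, b in enumerate(below):   # collect long runs below the threshold
        if b and start is None:
            start = i
        elif not b and start is not None:
            if i - start >= min_pause_s * fs:
                events.append((start, i))
            start = None
    if start is not None and len(below) - start >= min_pause_s * fs:
        events.append((start, len(below)))
    return events

# synthetic respiration: 0.25 Hz breathing with a 15-second near-flat pause
fs = 10
t = np.arange(0, 60, 1 / fs)
resp = np.sin(2 * np.pi * 0.25 * t)
resp[(t > 20) & (t < 35)] *= 0.02     # simulated apnea episode
events = detect_apnea(resp, fs)       # one event spanning the pause
```

Because the threshold is derived from the recording itself, the same rule works across subjects and sensor gains without manual tuning.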
Incremental one-class learning using regularized null-space training for industrial defect detection
(2024)
One-class incremental learning is a special case of class-incremental learning, where only a single novel class is incrementally added to an existing classifier instead of multiple classes. This case is relevant in industrial defect detection scenarios, where novel defects usually appear during operation. Existing rolled-out classifiers must be updated incrementally in this scenario with only a few novel examples. In addition, it is often required that the base classifier must not be altered due to approval and warranty restrictions. While simple finetuning often gives the best performance across old and new classes, it comes with the drawback of potentially losing performance on the base classes (catastrophic forgetting [1]). Simple prototype approaches [2] work without changing existing weights and perform very well when the classes are well separated but fail dramatically when not. In theory, null-space training (NSCL) [3] should retain the base classifier entirely, as parameter updates are restricted to the null space of the network with respect to existing classes. However, as we show, this technique promotes overfitting in the case of one-class incremental learning. In our experiments, we found that unconstrained weight growth in null space is the underlying issue, leading us to propose a regularization term (R-NSCL) that penalizes the magnitude of amplification. The regularization term is added to the standard classification loss and stabilizes null-space training in the one-class scenario by counteracting overfitting. We test the method's capabilities on two industrial datasets, namely AITEX and MVTec, and compare the performance to state-of-the-art algorithms for class-incremental learning.
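The mechanism behind null-space training can be illustrated with a linear head: restricting the weight update to the null space of the old-class feature matrix leaves all old outputs untouched. In the sketch below (my own simplification, not the authors' code), a scalar shrinkage factor stands in for the proposed magnitude penalty, which in the paper is a loss term rather than a post-hoc rescaling.

```python
import numpy as np

rng = np.random.default_rng(0)

# features of old-class samples (n x d) and a trained linear head (c x d)
X_old = rng.standard_normal((20, 64))
W = rng.standard_normal((5, 64))

# null-space projector of the old data: P @ x = 0 for every old sample x
_, S, Vt = np.linalg.svd(X_old, full_matrices=True)
rank = int(np.sum(S > 1e-10))
P = np.eye(64) - Vt[:rank].T @ Vt[:rank]

# an (arbitrary) update for the novel class, projected into the null space;
# lam mimics the R-NSCL idea by limiting the update's magnitude
dW = rng.standard_normal((5, 64))
lam = 0.5
dW_proj = (dW @ P) / (1.0 + lam)

# outputs on all old-class data are unchanged by the projected update
out_before = X_old @ W.T
out_after = X_old @ (W + dW_proj).T
assert np.allclose(out_before, out_after)
```

The projection guarantees zero forgetting on the base classes by construction; the paper's point is that, without a penalty, the unconstrained growth of the null-space component still overfits the single novel class.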
Particularly for manufactured products subject to aesthetic evaluation, the industrial manufacturing process must be monitored, and visual defects detected. For this purpose, more and more computer vision-integrated inspection systems are being used. In optical inspection based on cameras or range scanners, only a few examples are typically known before novel examples are inspected. Consequently, no large data set of non-defective and defective examples can be used to train a classifier, and methods that work with limited or weak supervision must be applied. For such scenarios, I propose new data-efficient machine learning approaches based on one-class learning that reduce the need for supervision in industrial computer vision tasks. The developed novelty detection model automatically extracts features from the input images and is trained only on available non-defective reference data. On top of the feature extractor, a one-class classifier based on recent developments in deep learning is placed. I evaluate the novelty detector in an industrial inspection scenario and on state-of-the-art benchmarks from the machine learning community. In the second part of this work, the model is improved by using a small number of novel defective examples; hence, another source of supervision is incorporated. The targeted real-world inspection unit is based on a camera array and a flashing light illumination, allowing inline capturing of multichannel images at a high rate. Optionally, the integration of range data, such as laser or Lidar signals, is possible by using the developed targetless data fusion method.
When using multi-camera matching techniques for 3D reconstruction, there is usually a trade-off between the quality of the computed depth map and the speed of the computation. Whereas high-quality matching methods take several seconds to several minutes to compute a depth map for one set of images, real-time methods achieve only low-quality results. In this paper, we present a multi-camera matching method that runs in real-time and yields high-resolution depth maps. Our method is based on a novel multi-level combination of normalized cross correlation, deformed matching windows based on the multi-level depth map information, and sub-pixel precise disparity maps. The whole process is implemented completely on the GPU. With this approach, we can process four 0.7-megapixel images in 129 milliseconds into a full-resolution 3D depth map. Our technique is tailored to the recognition of non-technical shapes, because our target application is face recognition.
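The single-level core of such a matching scheme, winner-takes-all disparity search with normalized cross correlation, can be sketched as follows. Window size, disparity range, and function names are illustrative assumptions; the actual method adds multi-level processing, window deformation, and sub-pixel refinement on top of this.

```python
import numpy as np

def ncc(a, b):
    """Normalized cross correlation of two equally sized patches."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return (a * b).sum() / denom if denom > 0 else 0.0

def best_disparity(left, right, x, y, half=3, max_disp=16):
    """Winner-takes-all disparity at pixel (x, y) on a rectified pair:
    maximize NCC along the scanline over candidate disparities."""
    ref = left[y - half:y + half + 1, x - half:x + half + 1]
    scores = []
    for d in range(max_disp + 1):
        lo = x - d - half
        if lo < 0:                      # candidate window leaves the image
            break
        cand = right[y - half:y + half + 1, lo:lo + 2 * half + 1]
        scores.append(ncc(ref, cand))
    return int(np.argmax(scores))

# synthetic rectified pair: right[y, x] = left[y, x + 5], true disparity 5
rng = np.random.default_rng(1)
left = rng.random((40, 60))
right = np.roll(left, -5, axis=1)
d = best_disparity(left, right, x=30, y=20)   # recovers d = 5
```

NCC's invariance to local brightness offset and gain is what makes it robust for real camera images; the GPU implementation in the paper parallelizes exactly this per-pixel score computation.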
This paper broadens the resource-based approach to explaining survival of new technology-based firms (NTBFs) by focusing on the entrepreneur's ability to transform resources in response to triggers resulting from market interactions. Network theory is used to define a construct that allows determining the status of venture emergence (VE). The operationalization of the VE construct is built on the firm's value network maturity in the four market dimensions: customer, investor, partner, and human resources. Business plans of NTBFs represent the artifact that contains this data in the form of transaction relation descriptions. Using content analysis, a multi-step combined human and computer coding process has been developed to empirically determine NTBFs' status of VE. Results of the business plan analysis suggest that the level of transaction relations allows conclusions to be drawn about the status of VE. Moreover, applying the developed process, a business plan coding test shows that the transaction-relation-based VE status significantly relates to NTBFs' survival capabilities.
Research Report
(2024)
We quantify the effects of GATT/WTO membership on trade and welfare. Using an extensive database covering manufacturing trade for 186 countries over the period 1980–2016, we find that the average partial equilibrium impact of GATT/WTO membership on trade among member countries is large, positive, and significant. We contribute to the literature by providing country-specific estimates and find them to vary widely across the countries in our sample, with poorer members benefitting more. Using these estimates, we simulate the general equilibrium effects of GATT/WTO on welfare, which are sizable and heterogeneous across members. We show that countries not experiencing positive trade effects from joining GATT/WTO can still gain in terms of welfare, due to lower import prices and higher export demand.
Misbehave like Nobody’s Watching? Investor Attention to Corporate Misconduct and its Implications
(2023)
Healthy and good sleep is a prerequisite for a rested mind and body. Both form the basis for physical and mental health. Healthy sleep is hindered by sleep disorders, the medically diagnosed frequency of which increases sharply from the age of 40. This chapter describes the formal specification of a practical implementation, currently in progress, of a non-invasive system based on biomedical signal processing to support the diagnosis and treatment of sleep-related diseases. The system aims to continuously monitor vital data during sleep in a patient's home environment over long periods by using non-invasive technologies. At the center of the development is the MORPHEUS Box (MoBo), which consists of five main conceptualizations: the MoBo core, the MoBo-HW, the MoBo algorithm, the MoBo API, and the MoBo app. These synergistic elements aim to support the diagnosis and treatment of sleep-related diseases. Although there are related developments in individual aspects of the system, no comparable approach is known that offers a similar scope of functionality, deployment flexibility, extensibility, or support for multiple user groups. With the specification provided in this chapter, the MORPHEUS project establishes a solid platform, data model, and transmission strategies to bring forward an innovative proposal for measuring sleep quality and detecting sleep diseases with non-invasive sensors.
With the advancement in sensor technology and the shift of health measurement from treatment after diagnosis to the detection of abnormalities long before they occur, the approach of turning private spaces into diagnostic spaces has gained much attention. In this work, we designed and implemented a low-cost and compact form factor module that can be deployed on the steering wheel of cars as well as on frequently touched objects at home in order to measure physiological signals from the fingertip of the subject as well as environmental parameters. We estimated the heart rate and SpO2 with errors of 2.83 bpm and 3.52%, respectively. The signal evaluation of skin temperature shows a promising output with respect to environmental recalibration. In addition, the electrodermal activity sensor followed the reference signal appropriately, which indicates the potential for further development and application in stress measurement.
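SpO2 estimation from fingertip photoplethysmography typically follows the classic ratio-of-ratios scheme. The sketch below uses a generic linear calibration (SpO2 ≈ 110 − 25R) that real devices replace with device-specific calibration curves; signal names and the synthetic data are illustrative assumptions, not this system's actual pipeline.

```python
import numpy as np

def spo2_ratio_of_ratios(red, ir):
    """Estimate SpO2 from red and infrared PPG via the ratio of the
    AC/DC components, with a generic linear calibration curve."""
    def ac_dc(sig):
        return np.ptp(sig) / np.mean(sig)   # pulsatile over baseline
    R = ac_dc(red) / ac_dc(ir)
    return 110.0 - 25.0 * R                 # generic, not device-calibrated

# synthetic PPG: pulsatile components riding on DC baselines
t = np.linspace(0, 10, 1000)
pulse = np.sin(2 * np.pi * 1.2 * t)   # 72 bpm heart rate
red = 1.0 + 0.02 * pulse              # AC/DC = 0.04
ir = 1.0 + 0.04 * pulse               # AC/DC = 0.08, so R = 0.5
est = spo2_ratio_of_ratios(red, ir)   # about 97.5 %
```

The ratio R cancels out absolute light intensity and sensor gain, which is why the method tolerates varying contact pressure and skin tone better than raw amplitudes would.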
The perception of the amount of stress is subjective to every person, and it changes depending on many factors. One of the factors that has an impact on perceived stress is the emotional state. In this work, we compare the emotional states of 40 German driving students and present different partitions that can be advantageous for using artificial intelligence and classification. In this way, we evaluate the data quality and prepare it for the specific use. The Perceived Stress Questionnaire (PSQ20) was employed to assess the level of stress experienced by individuals while participating in a driving simulation for 5 and 25 min. As a result of our analysis, we present a categorisation of various emotional states into intervals, comparing different classifications and facilitating a more straightforward implementation of artificial intelligence for classification purposes.
Evaluation of a Contactless Accelerometer Sensor System for Heart Rate Monitoring During Sleep
(2024)
The monitoring of a patient's heart rate (HR) is critical in the diagnosis of diseases. It also plays an important role in the detection of sleep disorders. Several techniques have been proposed, including using sensors to record physiological signals that are automatically examined and analysed. This work aims to evaluate a contactless HR monitoring system based on an accelerometer sensor during sleep. For this purpose, the oscillations caused by chest movements during heart contractions are recorded by an installation mounted under the bed mattress. The processing algorithm presented in this paper filters the signals and determines the HR. As a result, an average error of about 5 bpm has been documented, i.e., the system can be considered suitable for the intended application domain.
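A minimal version of such a processing chain, bandpass filtering to the cardiac band followed by peak counting, might look like this. Cutoff frequencies and peak-picking parameters are assumptions for illustration, not the paper's exact algorithm.

```python
import numpy as np
from scipy.signal import butter, filtfilt, find_peaks

def heart_rate_bpm(acc, fs):
    """Estimate HR by isolating the cardiac band (assumed 0.8-3 Hz here)
    from the accelerometer trace and counting dominant peaks."""
    b, a = butter(4, [0.8, 3.0], btype="bandpass", fs=fs)
    filt = filtfilt(b, a, acc)          # zero-phase filtering
    peaks, _ = find_peaks(filt, distance=int(0.4 * fs),
                          height=0.5 * np.max(filt))
    duration_min = len(acc) / fs / 60.0
    return len(peaks) / duration_min

# synthetic chest signal: cardiac component plus respiration and noise
fs = 100
t = np.arange(0, 30, 1 / fs)
rng = np.random.default_rng(0)
acc = (0.5 * np.sin(2 * np.pi * 1.2 * t)     # heart, 72 bpm
       + 2.0 * np.sin(2 * np.pi * 0.25 * t)  # respiration, much larger
       + 0.05 * rng.standard_normal(len(t)))
hr = heart_rate_bpm(acc, fs)                 # close to 72 bpm
```

The bandpass step is essential: on a real ballistocardiogram, the respiration component dominates the raw amplitude and must be removed before the far smaller cardiac oscillation can be peak-picked.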
Infrastructure-making in interwar India was a dynamic, multilayered process involving roads and vehicles in urban and rural sites. One of the strongest arenas of this process was the Bombay Presidency and the Central Provinces in central and western India. Focusing on this region in the interwar period, this paper analyzes the varied relationship between peasant households and town-centred modernizing agents in the making of road transport infrastructures. The central argument of this paper concerns the persistence of bullock carts over motor cars in the region. This persistence was grounded in the specific regional environment, the effects of the 1930s economic depression, and the priorities of social classes. Pinpointing these connections, the paper highlights that "modernization" of infrastructure was not a simple, linear process of progressivist change, nor did it mean the survival of apparently "old" technologies in the modern era. Instead, the paper pays attention to conflicting social complexities, implications, and meanings of the connection between infrastructure and modernity that modernization assumptions often overlook. Here, the paper shows how technological change occurred as a result of real, material class interests pulling infrastructural technology in different directions. This was where and why arguments of road-motor lobbyists and cart advocates eventually clashed, and Gandhian social workers resisted motor transport in defense of peasant interests.
In the digital age, information technology (IT) is a strategic asset for organizations. As a result, IT costs are rising, and the cost-effective management of IT is crucial. Nevertheless, organizations still face major challenges, and former studies lack comprehensiveness and depth. The goal of this paper is to generate a deep and holistic view of the current management challenges of IT costs. In 15 expert interviews, we identify 23 challenges divided into 7 categories. The main challenges are ensuring transparency of IT cost information, demonstrating the business impact of IT, and changing the mindset regarding the value of IT; overcoming them requires attention to their interactions. Hence, this paper leads to a better understanding of the issues that IT cost management (ITCM) faces in the digital age and builds a base for future research.
Nowadays, organizations must invest strategically in information technology (IT) and choose the right digital initiatives to maximize their benefit. Nevertheless, Chief Information Officers still struggle to communicate IT costs and demonstrate the business value of IT. The goal of this paper is to support their effective communication. In focus groups, we analyzed how different stakeholders perceive IT costs and the business value of IT as the basis of communication. We identified 16 success factors to establish effective communication. Hence, this paper enables a better understanding of the perception and the operationalization of effective communication.
Prior quantitative research identified distinctive performance patterns of evolving business models in the text of technology-based ventures' business plans. Accordingly, interactions with customers, financiers, and people, as well as the status of the patenting strategy, evolved and served as indicators of early-stage tech ventures' performance. With longitudinal data from five venture cases, this research sheds light on the evolving business model by validating the performance patterns and elucidating how and why the ventures' business models evolved. Based on a generic systems theory framework for the indicators, the explanatory case studies re-contextualize the performance patterns from the snapshot perspective of business plans to the longitudinal perspective of technology-based ventures' life-cycle. This research confirms the relation of business model patterns of digital and non-digital ventures to the performance groups of failure, survival, or success and suggests a broader systems perspective for further research.
The digital transformation of business processes and the integration of IT systems lead to opportunities and risks for small and medium-sized enterprises (SMEs), risks that can arise from a lack of IT Governance, Risk and Compliance (GRC). The purpose of this paper is to present the Design and Evaluation phases of creating an artefact to reduce these risks. For this, the Design Science Research approach based on Hevner is used. The artefact is developed by selecting relevant existing frameworks and identifying SME-specific competencies. The method enables IT-GRC managers to transfer or adapt the frameworks to an SME organizational structure. The results from ten interviews and three further feedback loops showed that the method can be applied in practice and that a tailoring of established frameworks can take place. Contrary to the previous basic orientation of the research, this paper focuses on the concretization of approaches.
This study aims to adapt the CEFR in developing an integrative approach-based teaching material model for a pre-basic BISOL class. The method used is the development research design by Borg and Gall. The stages are identification of the problem; formulation of a hypothetical draft model; feasibility testing by experts; product revision; and testing of product effectiveness. The data were collected through survey techniques, interviews, and documentation. The needs identification revealed data encompassing 10 themes, 5 tasks per theme, and diverse evaluations comprising theory, in-class practice, and real-world field assignments, both on an individual and group basis. These identified needs require alignment with CEFR A1 for the development of BISOL learning. These findings were subsequently incorporated into the design of the teaching material model, and the results indicated that tailoring the CEFR to BISOL as an integrative language teaching material model was feasible for application in the classroom, as assessed by experts. The implications suggest that integrating the CEFR into BISOL is highly feasible for the development of teaching materials, and teachers can leverage this instructional model to enhance students' proficiency in the Indonesian language.
The random matrix approach is a robust algorithm to filter the mean and covariance matrix of noisy observations of a dynamic object. Afterward, virtual measurement models can be used to iteratively find the extent parameters of an object that would cause the same statistical moments within their measurements. In previous work, this was limited to elliptical targets and only contour measurements. In this paper, we introduce the parallel use of elliptical, triangular, and rectangular-shaped virtual measurement models and a shape classification that selects the model that fits the measurements best. The measurement likelihood is modeled either via ray tracing, a uniform or normal spatial distribution over the object's extent, or as a combination of those. The results show that the extent estimation works precisely and that the classification accuracy highly depends on the measurement noise.
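The moment-matching idea behind the virtual measurement models can be illustrated for the uniform elliptical case: for measurement sources spread uniformly over an ellipse with semi-axes a and b, the covariance eigenvalues equal a²/4 and b²/4, so the extent can be recovered from tracked second moments. The sketch below is a self-contained simulation with arbitrary parameters, not the paper's filter.

```python
import numpy as np

rng = np.random.default_rng(2)

# simulate measurements uniformly distributed over an ellipse (a=4, b=2)
n = 5000
r = np.sqrt(rng.random(n))            # sqrt for uniform area density
phi = rng.random(n) * 2 * np.pi
pts = np.stack([4.0 * r * np.cos(phi), 2.0 * r * np.sin(phi)], axis=1)
pts += 0.05 * rng.standard_normal(pts.shape)   # small sensor noise

# sample moments, as a random matrix filter would track them over time
mean = pts.mean(axis=0)
cov = np.cov(pts.T)

# uniform elliptical source: cov eigenvalues are a^2/4 and b^2/4,
# so the semi-axes follow as 2*sqrt(lambda)
eigvals = np.sort(np.linalg.eigvalsh(cov))[::-1]
a_hat, b_hat = 2 * np.sqrt(eigvals)   # close to 4 and 2
```

The same inversion with a different moment-to-extent mapping underlies the triangular and rectangular models: each shape hypothesis predicts different second moments for the same extent, which is what the shape classification exploits.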
In this work, a storage study was conducted to find suitable packaging material for tomato powder storage. Experiments were laid out in a single-factor completely randomized design (CRD) to study the effect of packaging materials on the lycopene, vitamin C, moisture content, and water activity of tomato powder. The factor (packaging material) has three levels (low-density polyethylene bag; polypropylene bottle; and wrapping in aluminum foil packed in a low-density polyethylene bag), each replicated three times. Tomato slices of var. Galilea dried in a twin-layer solar tunnel dryer were used for the study. The dried tomato slices were then ground and packed (40 g each) in the packaging materials and stored at room temperature. Samples were drawn from the packages at 2-month intervals for quality analysis, and SAS (version 9.2) software was used for statistical analysis. Higher retention of lycopene (80.13%) and vitamin C (49.32%) and a nonsignificant increase in moisture content and water activity were observed for tomato powder packed in polypropylene bottles after 6 months of storage. For low-density polyethylene packed samples and for samples wrapped in aluminum foil and packed in a low-density polyethylene bag, lycopene retention of 57.06% and 60.45% and vitamin C retention of 42.9% and 49.23% were observed, respectively, after 6 months of storage. Considering these results, it can be concluded that the lycopene and vitamin C content of twin-layer solar tunnel dried tomato powder can be preserved in ambient temperature storage by packing in a polypropylene bottle, with moisture content and water activity remaining in a safe range for 6 months.
Southeast Asia
(2023)
Southeast Asia continues to inspire and intrigue observers from all walks of life due to its diverse cultural traditions and its interwoven threads of geographical, historical, and social transformation. This essay will explore some of these threads by highlighting Southeast Asia's (1) deep-rooted diversity, (2) decolonial nation-building, (3) digital leapfrogging, and (4) under-rated prospects.
Times of high dynamics and growing new knowledge create demand for entrepreneurial education and university engagement. Higher education institutions (HEIs) have built up extensive knowledge and resources about entrepreneurial education and related activities and formats over the last years. As small and medium-sized enterprises (SMEs) are increasingly experimenting with entrepreneurship, they seem to struggle with setting up entrepreneurial activities within their established corporate strategy and innovation structures. It is beneficial for them to collaborate with higher education institutions to minimize the risk and uncertainty associated with implementing entrepreneurship education (EE) and to catch up with larger corporates. Further, research lacks a systematic characterization of EE activities in those companies and a classification of collaboration formats. Therefore, this study uses qualitative research methods to analyze data from interviews conducted with two German SMEs. Our study contributes to a better understanding of EE in SMEs and respective HEI collaborations by (1) characterizing EE in SMEs and SME-HEI collaboration based on attributes and collaboration types defined by their locus of collaboration and intensity of knowledge inflow and (2) identifying differences between EE in SMEs and HEIs. We provide implications for practice (corporate and university EE initiatives) for a more effective design and implementation of EE in SMEs and of SME-HEI collaborations themselves.
Regional economies clearly benefit from thriving entrepreneurial ecosystems. However, ecosystems are not yet entirely gender-inclusive and therefore are not tapping their full potential. This is most critical with respect to technology-based entrepreneurship which features the largest gender imbalance. Despite the considerably growing amount of literature in the two research fields of female entrepreneurship and entrepreneurial ecosystems, the intersection of the two areas has not yet been outlined. We depict the state of knowledge with a structured review of the literature highlighting bibliometric information, methods used, and the main topics addressed in current articles. From there, recommendations for future research are derived.
Think BIQ: Gender Differences, Entrepreneurship Support and the Quality of Business Idea Description
(2023)
Entrepreneurship support, its influencing factors, and female entrepreneurship are recently discussed topics with great relevance for society and politics. However, research on the subject has been divergent in its results and lacks a focus on the impact of support programs' characteristics concerning different types of entrepreneurs. Thus, we conduct a fuzzy-set Qualitative Comparative Analysis on entrepreneurship support characteristics, aiming to shed light on possible gender differences occurring in respective programs. We investigate the quality of business idea descriptions as a predecessor of a high-potential business model, operationalized using, inter alia, causation and effectuation theory and social role theory as possible explanations. In our fuzzy-set Qualitative Comparative Analysis on a sample of 911 Norwegian ventures, we find a variety of differences related to the entrepreneurs' gender. For instance, financial support combined with a well-described key contribution or careful planning seems to be a more important antecedent of female entrepreneurs' business idea quality than of male entrepreneurs'. Moreover, a well-described key contribution seems to have a positive effect on the outcome variable in most cases. Another interesting finding concerns the entrepreneurs' network partners, where we found evident gender differences in our combinations. Female entrepreneurs seemingly benefitted from rather small networks, and males from big networks, although the former possess larger networks in the sample. In conclusion, we find that gender differences in combinations of entrepreneurship support for high business idea quality still occur even in a country like Norway, calling for an adaptation of the provided support and environment.
Strategic renewal and the development of new types of innovation pose special challenges to established small and medium-sized companies. The paper at hand aims to answer the questions of what the underlying mechanisms of these challenges are and which approaches might help to counteract them properly. This case study investigates the strategic renewal process and its corresponding interventions in a high-tech SME over a four-year period. We analyse the findings in relation to existing frameworks for dynamic capabilities and strategic learning and provide new recommendations for practice and future research.
Research credits corporate entrepreneurship (CE) with enabling established companies to create new types of innovation. Scholars have focused on the organizational design of CE activities, proposing specific organizational units. These semi-autonomous units create a tense management situation between the core organization and its CE activities. Management and organization research considers control a key managerial function that can help. However, control has received limited research attention regarding CE units, leaving design issues for the appropriate control of CE units unanswered. In this study, we link management control and CE to illustrate how control is understood in the context of CE. For this, we scanned the CE literature to identify underlying attributes and characteristics that allow specifying control for CE. In a first round, we identified 11 attributes to describe control for CE activities and to derive future research paths.
Corporate Entrepreneurship (CE) units have become an increasingly important part of established companies' development activities, enabling them to also create more discontinuous innovations. As a result, companies have developed and implemented different forms of CE units, such as corporate accelerators, incubators, startup supplier programs, and corporate venture capital. Driven by the need to innovate, companies have even begun to use multiple CE units simultaneously. However, this has not been empirically investigated yet. Thus, with this study, we aim to shed some light on this by investigating the parallel use of multiple CE units in the German business landscape. We conducted extensive desk research, combining, coding, and analyzing different sources. We found that 55 out of 165 large established companies have multiple CE units, which allowed us to characterize the parallel use and identify differences and similarities, e.g., in terms of industry, company size, and CE forms implemented. We conclude by presenting different implications for both practice and research and by pointing out directions for future research.
The aim of this paper is to find out how accommodation providers in the Seychelles perceive climate change and what mitigation and adaptation measures they can provide. In order to answer these questions, a qualitative mixed-method approach comprising twenty semi-structured interviews, an online survey, and participant observation was used. Results show that accommodation providers especially perceive the effects of climate change that directly affect their business and that they have already partly implemented some mitigation and adaptation measures. However, strategies and regulations are needed at the level of the Seychelles' government and on a global level to actually achieve CO2-neutral travel.
This chapter takes a detailed look at the developmental state model and its manifestations in regional development policies. Developmentalist ideas have been waxing and waning across periods of economic boom and bust. Recent years, however, have seen a renaissance of East Asian developmentalism – reminiscent of its heyday in the 1980s and 1990s and most notably driven by the region’s continued economic strength.
The endorsement of state-led modernization, preferential policies, and close state-business relations – which underpinned Japan/Korea/China’s transformations – has also left its mark on current ODA practices in the region and beyond. East Asia’s state agencies are pushing ahead with colossal infrastructure programs – in close cooperation with commercial actors – that advance broad development goals and, at the same time, promote national interests. Compared to Western OECD peers, Asian development cooperation tends to focus less on neoliberal and democratic principles and, instead, places greater emphasis on state-corporatist and meritocratic ideas.
To what extent East Asia’s infrastructural megaprojects and connectivity corridors across Eurasia and Africa (BRI, EAI, and Partnership for Quality Infrastructure) will generate political momentum for an emergent developmental consensus remains uncertain. The jury is still out when it comes to whether and how Asian developmentalism will take center stage in global development debates. What is clear, however, is that the changing zeitgeist of a less Anglo/Euro-centric world bodes well for more heterodox and diverse ideas on development cooperation.
Unobtrusive health monitoring systems are important when continuous monitoring of a patient's vital signs is required. In this paper, signals obtained from accelerometers placed under a bed are processed with ballistocardiography algorithms and compared with synchronized electrocardiographic signals.
Cardiovascular diseases (CVD) are leading contributors to global mortality, necessitating advanced methods for vital sign monitoring. Heart Rate Variability (HRV) and Respiratory Rate, key indicators of cardiovascular health, are traditionally monitored via Electrocardiogram (ECG). However, ECG's obtrusiveness limits its practicality, prompting the exploration of Ballistocardiography (BCG) as a non-invasive alternative. BCG records the mechanical activity of the body with each heartbeat, offering a contactless method for HRV monitoring. Despite its benefits, BCG signals are susceptible to external interference and present a challenge in accurately detecting J-Peaks. This research uses advanced signal processing and deep learning techniques to overcome these limitations. Our approach integrates accelerometers for long-term BCG data collection during sleep, applying Discrete Wavelet Transforms (DWT) and Ensemble Empirical Mode Decomposition (EEMD) for feature extraction. The Bi-LSTM model, leveraging these features, enhances heartbeat detection, offering improved reliability over traditional methods. The study's findings indicate that the combined use of DWT, EEMD, and Bi-LSTM for J-Peak detection in BCG signals is effective, with potential applications in unobtrusive long-term cardiovascular monitoring. Our results suggest that this methodology could contribute to HRV monitoring, particularly in home settings, enhancing patient comfort and compliance.
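As a rough illustration of the wavelet-decomposition step in such a pipeline (not the paper's actual DWT/EEMD/Bi-LSTM implementation; the signal, spike positions, and lengths below are invented), a single-level Haar DWT can be written in plain NumPy:

```python
import numpy as np

def haar_dwt(signal):
    """Single-level Haar discrete wavelet transform.

    Returns (approximation, detail) coefficient arrays; the
    approximation carries the low-frequency respiratory envelope,
    the detail carries high-frequency content such as sharp
    J-peak-like edges.
    """
    x = np.asarray(signal, dtype=float)
    if len(x) % 2:                      # pad to even length
        x = np.append(x, x[-1])
    even, odd = x[0::2], x[1::2]
    approx = (even + odd) / np.sqrt(2)  # low-pass half-band
    detail = (even - odd) / np.sqrt(2)  # high-pass half-band
    return approx, detail

# toy BCG-like trace: slow respiration wave plus sharp spikes
t = np.linspace(0, 4, 256, endpoint=False)
trace = np.sin(2 * np.pi * 0.25 * t)
trace[::32] += 2.0                      # synthetic "J-peaks"
a, d = haar_dwt(trace)
```

The approximation/detail split is lossless: the original samples can be reconstructed exactly from the two coefficient arrays, which is why cascading several such levels (as a full DWT does) separates frequency bands without discarding information.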
This study investigates the application of Force Sensing Resistor (FSR) sensors and machine learning algorithms for non-invasive body position monitoring during sleep. Although reliable, traditional methods like Polysomnography (PSG) are invasive and unsuited for extended home-based monitoring. Our approach utilizes FSR sensors placed beneath the mattress to detect body positions effectively. We employed machine learning techniques, specifically Random Forest (RF), K-Nearest Neighbors (KNN), and XGBoost algorithms, to analyze the sensor data. The models were trained and tested using data from a controlled study with 15 subjects assuming various sleep positions. The performance of these models was evaluated based on accuracy and confusion matrices. The results indicate XGBoost as the most effective model for this application, followed by RF and KNN, offering promising avenues for home-based sleep monitoring systems.
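As a toy illustration of the KNN branch of such a comparison (not the study's actual pipeline; the four-sensor pressure patterns and position labels are invented), a nearest-neighbour classifier over FSR readings fits in a few lines:

```python
import numpy as np
from collections import Counter

def knn_predict(train_X, train_y, x, k=3):
    """Classify one FSR reading by majority vote of its k nearest
    training samples (Euclidean distance)."""
    dists = np.linalg.norm(np.asarray(train_X) - np.asarray(x), axis=1)
    nearest = np.argsort(dists)[:k]
    votes = Counter(np.asarray(train_y)[nearest])
    return votes.most_common(1)[0][0]

# invented four-sensor pressure patterns for three sleep positions
X = np.array([[0.9, 0.8, 0.1, 0.1],   # supine
              [0.8, 0.9, 0.2, 0.1],   # supine
              [0.1, 0.9, 0.9, 0.1],   # left side
              [0.2, 0.8, 0.8, 0.2],   # left side
              [0.1, 0.1, 0.9, 0.8],   # right side
              [0.2, 0.1, 0.8, 0.9]])  # right side
y = np.array(["supine", "supine", "left", "left", "right", "right"])

print(knn_predict(X, y, [0.85, 0.85, 0.15, 0.1]))  # → supine
```

RF and XGBoost replace the distance vote with ensembles of decision trees, which is what gives them an edge on larger, noisier sensor datasets.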
The massive use of patient data for training artificial intelligence algorithms is common nowadays in medicine. In this work, a statistical analysis is performed of one of the most widely used datasets for training artificial intelligence models for the detection of sleep disorders: the Sleep Heart Health Study 2. The study focuses on determining whether the gender and age of the patients have an influence relevant enough to justify training artificial intelligence models on datasets differentiated by these variables.
Accurate monitoring of a patient's heart rate is a key element in medical observation and health monitoring. In particular, its importance extends to the identification of sleep-related disorders. Various methods have been established that involve sensor-based recording of physiological signals followed by automated examination and analysis. This study evaluates the efficacy of a non-invasive HR monitoring framework based on an accelerometer sensor, specifically during sleep. To this end, the motion induced by thoracic movements during cardiac contractions is captured by a device installed under the mattress. Signal filtering techniques and heart rate estimation using the symlet-6 wavelet are part of the computational framework described in this article. The subsequent analysis indicates the potential applicability of this system in the prognostic domain, with an average error margin of approximately 3 beats per minute. The results represent a promising advancement in non-invasive heart rate monitoring during sleep, with potential implications for improved diagnosis and management of cardiovascular and sleep-related disorders.
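The final estimation step, converting inter-beat intervals of a filtered trace to beats per minute, can be sketched as follows (the wavelet filtering itself is omitted, and the test tone, sampling rate, and peak rule are invented for illustration):

```python
import numpy as np

def estimate_hr(signal, fs):
    """Estimate heart rate (bpm) from local maxima above the mean.

    A sample counts as a peak if it exceeds both neighbours and the
    signal mean; the median inter-peak interval is converted to bpm.
    """
    x = np.asarray(signal, dtype=float)
    thresh = x.mean()
    peaks = [i for i in range(1, len(x) - 1)
             if x[i] > thresh and x[i] > x[i - 1] and x[i] >= x[i + 1]]
    intervals = np.diff(peaks) / fs          # seconds between beats
    return 60.0 / np.median(intervals)

fs = 100.0                                   # Hz, assumed sampling rate
t = np.arange(0, 10, 1 / fs)
bcg = np.sin(2 * np.pi * 1.0 * t)            # 1 Hz test tone, i.e. 60 bpm
print(round(estimate_hr(bcg, fs)))           # → 60
```

Using the median interval rather than the mean makes the estimate robust against the occasional missed or spurious peak.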
This paper compares two popular scripting implementations for hardware prototyping: Python scripts executed from user space and C-based Linux driver processes executed from kernel space. The comparison can inform researchers weighing one approach against the other in their implementations. The results show that deploying software in kernel space makes it possible to guarantee a certain quality of sensor information on a Raspberry Pi without the need for an advanced real-time operating system.
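A minimal user-space sketch of the kind of sampling-jitter measurement such a comparison rests on (the 10 ms target period and sample count are assumptions, not the paper's setup):

```python
import time

def measure_jitter(period_s=0.01, n=50):
    """Sleep for a fixed target period n times and record how far
    each actual wakeup deviates from the target. This runs in user
    space, so it is exposed to scheduler-induced delays; a
    kernel-space driver can avoid most of them."""
    deviations = []
    last = time.perf_counter()
    for _ in range(n):
        time.sleep(period_s)
        now = time.perf_counter()
        deviations.append((now - last) - period_s)
        last = now
    return deviations

dev = measure_jitter()
print(f"mean jitter: {sum(dev) / len(dev) * 1e3:.3f} ms")
```

On a non-real-time OS, the deviations are always non-negative in expectation (sleep never returns early), and their spread is the quantity a kernel-space implementation tightens.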
Spatial modulation (SM) is a low-complexity multiple-input/multiple-output transmission technique that combines index modulation and quadrature amplitude modulation for wireless communications. In this work, we consider the problem of link adaptation for generalized spatial modulation (GSM) systems that use multiple active transmit antennas simultaneously. Link adaptation algorithms require a real-time estimation of the link quality of the time-variant communication channels, e.g., by means of estimating the mutual information. However, determining the mutual information of SM is challenging because no closed-form expressions have been found so far. Recently, multilayer feedforward neural networks were applied to compute the achievable rate of an index modulation link, but only for a small SM system with two transmit and two receive antennas. In this work, we follow a similar approach but investigate larger GSM systems with multiple active antennas. We analyze the portions of mutual information related to antenna selection and the IQ modulation processes, which depend on the GSM variant and the signal constellation.
Reliability is a crucial aspect of non-volatile NAND flash memories, and it is essential to thoroughly analyze the channel to prevent errors and ensure accurate readout. Estimating the read reference voltages (RRVs) is a significant challenge due to the multitude of physical effects involved. The question arises which features are useful and necessary for the RRV estimation. Various possible features require specialized hardware or specific readout techniques to be usable. In contrast, we consider sparse histograms based on the decision thresholds for hard-input and soft-input decoding. These offer a distinct advantage as they are derived directly from the raw readout data without the need for decoding. This paper focuses on the information-theoretic study of different features, especially on the exploration of the mutual information (MI) between the feature vector and the RRV. In particular, we investigate the dependency of the MI on the resolution of the histograms. With respect to the RRV estimation, sparse histograms provide sufficient information for near-optimum estimation.
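The central quantity, the mutual information between a (discretized) feature and a target, can be estimated from a joint histogram. A generic NumPy sketch (the data below are synthetic Gaussians, not flash measurements, and the bin count is arbitrary):

```python
import numpy as np

def mutual_information(x, y, bins=8):
    """Estimate I(X;Y) in bits from a 2-D histogram of samples."""
    joint, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)   # marginal of X
    py = pxy.sum(axis=0, keepdims=True)   # marginal of Y
    nz = pxy > 0                          # avoid log(0)
    return float(np.sum(pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])))

rng = np.random.default_rng(0)
feature = rng.normal(size=5000)
target = feature + 0.5 * rng.normal(size=5000)   # correlated target
noise = rng.normal(size=5000)                     # independent control
print(mutual_information(feature, target) > mutual_information(feature, noise))  # → True
```

The histogram resolution (`bins`) trades estimation bias against variance, which is exactly the dependency the abstract says is investigated for sparse histograms.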
Sleep is a multi-dimensional influencing factor on physical health, cognitive function, emotional well-being, mental health, daily performance, and productivity. Barriers such as time consumption, invasiveness, and expense have caused a gradual shift in sleep monitoring from the traditional, standardized in-lab approach, e.g., polysomnography (PSG), to unobtrusive and noninvasive in-home sleep monitoring, yet further improvement is required. Despite increasing interest in fiber-optic methods for cardiorespiratory estimation, traditional mechanical sensors such as force-sensitive resistors (FSR), lead zirconate titanate (PZT) piezoelectric elements, and accelerometers still serve as the dominant approach. Part of their popularity lies in reduced system complexity and expense, easy maintenance, and user-friendliness. However, care must be taken regarding the performance of such sensors with respect to accuracy and calibration.
A post-growth economy is a comparatively new paradigm in the tourism discourse. The aim of this article is to identify the commonalities between this concept and Māori tourism and the ways in which the latter can contribute to a post-growth economy. A qualitative mixed-method approach, including in-depth interviews, participant observation, and secondary analysis, is applied. The results show considerable overlap between Māori tourism and a post-growth economy. Differences are visible as well, particularly between the value approach of Māori tourism and the indicator approach of a post-growth economy. The social innovation created in Aotearoa New Zealand at the instigation of Māori groups, namely granting legal personhood to parts of nature, may in particular serve as a driver for a form of tourism in line with the idea of a post-growth economy.
This paper applies the concept of Soja’s Thirdspace to the phenomenon of Lazgi dance and tourism in Uzbekistan. In doing so it analyses the different levels of perception (including Firstspace and Secondspace) of Lazgi and tourism via an autoethnographic lens. Complemented by expert interviews, the interaction of Lazgi and tourism is examined and characteristics of the Lazgisphere (world of Lazgi) in Uzbekistan are distilled. The results show that Lazgi is often directly or indirectly connected with tourism in Uzbekistan, but even more so serves to reaffirm national identity.
While driving, stress arises in situations in which drivers judge their ability to manage the driving demands as insufficient or lose the capability to handle the situation. This leads to increased numbers of driver mistakes and traffic violations. Additional stressing factors are time pressure, road conditions, or dislike of driving. Stress therefore affects drivers and road safety. Stress is classified into two categories depending on its duration and its effects on body and psyche: short-term eustress and constantly present distress, which causes degenerative effects. In this work, we focus on distress. Wearable sensors are handy tools for collecting biosignals like heart rate, activity, etc. Their easy installation and non-intrusive nature make them convenient for estimating stress. This study investigates stress and its implications, specifically by analyzing stress within a select group of individuals from both Spain and Germany. The primary objective is to examine the influence of recognized psychological factors, including personality traits such as neuroticism, extroversion, and psychoticism, on stress and road safety. Stress levels were estimated from physiological parameters (R-R intervals) collected with a Polar H10 chest strap. We observed that personality traits such as extroversion exhibited similar trends during relaxation, with an average heart rate 6% higher in Spain and 3% higher in Germany. While driving, however, introverts on average experienced more stress, with rates 4% and 1% lower than extroverts in Spain and Germany, respectively.
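The time-domain statistics typically derived from such R-R interval recordings can be sketched as follows (a generic illustration with invented interval values, not the study's analysis code):

```python
import numpy as np

def hrv_metrics(rr_ms):
    """Time-domain HRV from R-R intervals in milliseconds:
    mean heart rate (bpm), SDNN (overall variability), and RMSSD
    (beat-to-beat variability, linked to parasympathetic tone)."""
    rr = np.asarray(rr_ms, dtype=float)
    diffs = np.diff(rr)
    return {
        "mean_hr_bpm": 60000.0 / rr.mean(),
        "sdnn_ms": rr.std(ddof=1),
        "rmssd_ms": np.sqrt(np.mean(diffs ** 2)),
    }

# invented R-R series (ms), roughly 75 bpm with mild variability
rr = [800, 810, 790, 805, 795, 800, 815, 785]
m = hrv_metrics(rr)
print(round(m["mean_hr_bpm"], 1))  # → 75.0
```

Lower RMSSD under load is one common physiological marker of distress, which is why chest-strap R-R intervals are a convenient proxy in driving studies.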
Analysing observability is an important step in the process of designing state feedback controllers. While for linear systems observability has been widely studied and easy-to-check necessary and sufficient conditions are available, for nonlinear systems such a general recipe does not exist and different classes of systems require different techniques. In this paper, we analyse observability for an industrial heating process in which a stripe-shaped plastic workpiece moves through a heating zone, where it is heated up to a specific temperature by applying hot air to its surface through a nozzle. A modeling approach for this process is briefly presented, yielding a nonlinear ordinary differential equation model. Sensitivity-based observability analysis is used to identify unobservable states and to make suggestions for additional sensor locations. In practice, however, it is not possible to place additional sensors, so the available measurements are used to implement a simple open-loop state estimator with offset compensation, and numerical and experimental results are presented.
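For contrast with the nonlinear case, the "easy-to-check" linear condition mentioned above is the Kalman rank test: the pair (A, C) is observable iff the observability matrix has full rank. A generic NumPy sketch (the matrices are illustrative, not the heating-process model):

```python
import numpy as np

def observability_matrix(A, C):
    """Stack C, CA, ..., CA^(n-1); the pair (A, C) is observable
    iff this matrix has rank n (Kalman rank condition)."""
    A, C = np.asarray(A, float), np.atleast_2d(C).astype(float)
    n = A.shape[0]
    blocks, CAk = [], C
    for _ in range(n):
        blocks.append(CAk)
        CAk = CAk @ A
    return np.vstack(blocks)

# illustrative 2-state chain: x2 drives x1, sensor sees x1 only
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
C = np.array([[1.0, 0.0]])
O = observability_matrix(A, C)
print(np.linalg.matrix_rank(O))   # → 2 (observable: x2 is seen via x1's drift)
```

For nonlinear models no such universal matrix test exists, which is why the paper resorts to sensitivity-based analysis instead.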
Study design:
Retrospective, mono-centric cohort research study.
Objectives:
The purpose of this study is to validate a novel artificial intelligence (AI)-based algorithm against human-generated ground truth for radiographic parameters of adolescent idiopathic scoliosis (AIS).
Methods:
An AI algorithm was developed that is capable of detecting anatomical structures of interest (clavicles, cervical, thoracic, and lumbar spine, and sacrum) and of calculating essential radiographic parameters in AP spine X-rays fully automatically. The evaluated parameters included T1-tilt, clavicle angle (CA), coronal balance (CB), lumbar modifier, and Cobb angles in the proximal thoracic (C-PT), thoracic, and thoracolumbar regions. Measurements from 2 experienced physicians on 100 preoperative AP full spine X-rays of AIS patients were used as ground truth and to evaluate inter-rater and intra-rater reliability. The agreement between human raters and AI was compared by means of single-measure intra-class correlation coefficients (ICC; absolute agreement; >.75 rated as excellent), mean error, and additional statistical metrics.
Results:
The comparison between human raters resulted in excellent ICC values for intra- (range: .97-1) and inter-rater (.85-.99) reliability. The algorithm was able to determine all parameters in 100% of images with excellent ICC values (.78-.98). Consistent with the human raters, ICC values were typically smallest for C-PT (eg, rater 1A vs AI: .78, mean error: 4.7°) and largest for CB (.96, -.5 mm) as well as CA (.98, .2°).
Conclusions:
The AI algorithm shows excellent reliability and agreement with human raters for coronal parameters in preoperative full spine images. The reliability and speed offered by the algorithm could contribute to the efficient analysis of large datasets (eg, registry studies) and to measurements in clinical practice.
Designing cities
(2023)
Manual for Urban Design
Urban design is based on planning and design principles that need to meet functional demands on the one hand, but on the other hand bring the design elements together into a distinctive whole. The basic compositional principles are, for the most part, timeless. Designing Cities examines the most important design and presentation principles of urban design, using historical examples and contemporary international competition entries designed by practices including Foster + Partners, KCAP Architects & Planners, MVRDV, and OMA.
At the core of the publication is the question of how the projects were designed and what methods and tools were available to the designer: such as parametric design, in which variable parameters automatically influence the design and provide a range of possible solutions.
- Tools for urban design
- Current projects and award-winning competition entries by renowned international practices
- A textbook for students and a practical design aid for practicing architects and planners
Requirements Engineering in Business Analytics for Innovation and Product Lifecycle Management
(2014)
Considering Requirements Engineering (RE) in business analytics, which involves market-oriented management, computer science, and statistics, may be valuable for managing innovation in Product Lifecycle Management (PLM). RE and business analytics can help maximize the value of corporate product information throughout the value chain, starting with innovation management. Innovation and PLM must address (1) big data, (2) the development of well-defined business goals and principles, (3) cost/benefit analysis, (4) continuous change management, and (5) statistics and reporting. This paper is a positioning note that addresses some business-case considerations for analytics projects involving PLM data, patents, and innovations. We describe a number of research challenges in RE that arise when large volumes of PLM data are to be turned into a successful market-oriented innovation management strategy, and we provide a draft of how to address these research challenges.
Digitalization is one of the most frequently discussed topics in industry. New technologies, platform concepts, and integrated data models enable disruptive business models and drive changes in organization, processes, and tools. The goal is to make a company more efficient, more productive, and ultimately more profitable. However, many companies face the challenge of approaching digital transformation in a structured way so as to realize these potential benefits. What they find is that Product Lifecycle Management plays a key role in digitalization initiatives, as object, structure, and process management along the life cycle is a foundation for many digitalization use cases. The maturity model introduced here for assessing a firm's capabilities along the product lifecycle has been used almost two hundred times. It allows a company to compare its performance with an industry-specific benchmark to reveal individual strengths and weaknesses. Furthermore, an empirical study produced multidimensional correlation coefficients that identify dependencies between business model characteristics and the maturity level of capabilities.
Nowadays, established companies use Corporate Entrepreneurship (CE) as a means to create discontinuous innovations. Many companies even implement multiple CE units that typically involve several entrepreneurial activities. This explorative study aimed to identify the reasons why established companies implement multiple CE units concurrently. By conducting a comparative case study with eight companies from different industries, valuable insights for science and practice were gained. We provide an overview of 11 different reasons for implementing multiple CE units, showing that the combination of CE units used by companies differs depending on the reason. This further allowed us to derive general approaches that established companies take to implementing CE units. Finally, we identify co-specialization as a central driver explaining the need to set up multiple units. We conclude by indicating implications and subjects for future research.
Entrepreneurial motivations have become a frequently discussed topic in entrepreneurship research. However, few studies have investigated entrepreneurs' motivation across gender and different venture types, and those that have tend to rely on surveys or case studies. Using a text mining approach, we investigate whether there are differences between male and female entrepreneurs' motivation and whether female entrepreneurs' motivation differs across venture types. This text mining approach, in combination with a qualitative content analysis, was used to examine unique motivational data from 472 entrepreneurial projects from three entrepreneurship support programs in Norway and Sweden. Findings suggest that the motivations of female and male entrepreneurs differ only slightly, while the motivation of female entrepreneurs differs across venture types. We thus contribute to a better understanding of entrepreneurial motivation and of why female entrepreneurs start a business. This can, for instance, benefit the improvement of future female entrepreneurship support programs.
Corporate Entrepreneurship (CE) has evolved into an imperative innovation practice of established companies. Despite organizational design models for CE activities and companies' frequent initiation of new activities, effectively managing them remains a challenging endeavor, which results in disappointment about the outcomes of CE and its early termination. We assume that the specific types of goals set for CE are one element of this unresolved management issue. While both practice and literature address goals in different contexts, no uniform picture has emerged so far. Although goals are commonly used to categorize CE activities, they seldom seem to be the core subject of investigation. Based on this preliminary analysis and consolidation, we put the goals of CE in focus. In a systematic literature review, we reveal aspects of goals to unmask the different types of goals and their underlying dimensions and characteristics. Our review contributes to a better understanding of goals by (1) organizing relevant literature on goals of CE in a specific classification process, (2) describing dimensions and attributes for a systematic classification of CE goals, and (3) providing a framework showing differences of goals in the CE context. We conclude with a discussion and hints for future research paths.
In the last decade, both sustainability and business models for sustainability have increased in importance, and sustainability issues have become the focus of discussion. These issues are interlinked and often negatively impact each other. They are complex, include socio-ecological dilemmas, exist in almost every aspect of our society (economic, environmental, social), and are hard to formulate. They may have multiple, incompatible solutions, competing objectives, and open timeframes. Previous research has not developed satisfactory ways to comprehend and solve problems of this nature. Life Cycle Assessment (LCA), the most widely used method for assessing sustainable development, has reached its limits in achieving social sustainability goals. System Dynamics (SD) is a valuable methodology that enhances understanding of the structure and internal dynamic behaviour of large, complex, dynamic systems, leading to improved decision-making. It offers a philosophy and a set of tools for modelling, analysing, and simulating dynamic systems. This research applies system dynamics methods in conjunction with simulation software to assess the potential impact of a solution on the environmental, social, and economic aspects of a complex system, aiming to gain insights into the system's behaviour and to identify the potential consequences of interventions or policy changes across multiple dimensions. The paper responds to the urgent need for a new business model by presenting a concept for adapted dynamic business modelling for sustainability (aDBMfS) using system dynamics. Case studies from the smartphone industry are applied.
“Crowd contamination”?
(2023)
Misconduct allegations have been found to affect not only the alleged firm but also other, unalleged firms in the form of reputational and financial spillover effects. It has remained unexplored, however, how the number of prior allegations against other firms matters for an individual firm currently facing an allegation. Building on behavioral decision theory, we argue that the relationship between allegation prevalence among other firms and investor reaction to a focal allegation is inverted U-shaped. The inverted U-shaped effect is theorized to emerge from the combination of two effects: In the absence of prior allegations against other firms, investors fail to anticipate the focal allegation and hence react particularly negatively ("anticipation effect"). In the case of many prior allegations against other firms, investors also react particularly negatively because they perceive the focal allegation as more warranted ("evaluation effect"). The multi-industry empirical analysis of 8,802 misconduct allegations against US firms between 2007 and 2017 provides support for our predicted inverted U-shaped effect. Our study complements recent misconduct research on spillover effects by highlighting that not only can a current allegation against an individual firm "contaminate" other, unalleged firms, but prior allegations against other firms can also "contaminate" investor reaction to a focal allegation against an individual firm.
This paper compares novel methods to efficiently include input constraints in the nonlinear Model Predictive Path Integral (MPPI) approach. The MPPI algorithm solves stochastic optimal control problems, is based on sampled trajectories, and originates from the physical path integral framework. Sample-based algorithms are characterized by the fact that they can be computed in parallel and offer the possibility of handling discontinuous dynamics and cost functions. With standard MPPI, however, the input costs in the Lagrange term have to be quadratic, which is unfavorable for various real applications. Furthermore, standard nonlinear model predictive control (NMPC) approaches can treat hard box constraints on the control input trajectory directly. In this contribution, novel architectures based on integrator action are compared. The investigated input-constrained MPPI controllers were tested on an autonomous self-balancing vehicle; both simulation and real-world experiments are presented. This paper addresses the question of how the MPPI algorithm can be further developed to consider input box constraints. Videos of the self-balancing vehicle are available at: https://tinyurl.com/mvn8j7vf
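A minimal sketch of one MPPI update with input box constraints enforced by clamping the sampled controls (a 1-D double-integrator stand-in, not the self-balancing vehicle or the paper's integrator-action architectures; all parameters are invented):

```python
import numpy as np

def mppi_step(x0, u_nom, dynamics, cost, u_min, u_max,
              n_samples=256, sigma=0.5, lam=1.0, rng=None):
    """One MPPI iteration: sample noisy control trajectories, clamp
    them to the box [u_min, u_max], roll out the dynamics, and
    return the softmax-weighted average of the samples."""
    rng = rng if rng is not None else np.random.default_rng(0)
    H = len(u_nom)
    U = np.clip(u_nom + sigma * rng.normal(size=(n_samples, H)),
                u_min, u_max)                 # hard box constraint
    costs = np.empty(n_samples)
    for k in range(n_samples):
        x, c = x0, 0.0
        for u in U[k]:
            x = dynamics(x, u)
            c += cost(x, u)
        costs[k] = c
    w = np.exp(-(costs - costs.min()) / lam)  # path-integral weights
    w /= w.sum()
    return w @ U                              # weighted control update

# 1-D double integrator driven toward the origin
dyn = lambda x, u: (x[0] + 0.1 * x[1], x[1] + 0.1 * u)
cst = lambda x, u: x[0] ** 2 + 0.1 * x[1] ** 2
u = mppi_step((1.0, 0.0), np.zeros(10), dyn, cst, u_min=-2.0, u_max=2.0)
print(bool(np.all((u >= -2.0) & (u <= 2.0))))  # → True
```

Since every sampled trajectory is clipped to the box and the update is a convex combination of samples, the resulting control trajectory satisfies the constraints by construction.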
Comparison of Data-Driven Modeling and Identification Approaches for a Self-Balancing Vehicle
(2023)
This paper gives a systematic comparison of different state-of-the-art modeling approaches and the corresponding parameter identification processes for a self-balancing vehicle. In detail, a nonlinear grey-box model, its extension to consider friction effects, a parametric black-box model based on regression neural networks, and a hybrid approach are presented. The parameters of the models are identified by solving a nonlinear least squares problem. The training, validation, and test datasets are collected in full-scale experiments using a self-balancing vehicle. The performance of the different models for ego-motion prediction is compared in full-scale scenarios as well. The investigated model architectures can be used to improve both simulation environments and model-based controller design. This paper shows the upsides and downsides of the different modeling approaches. Videos showing the self-balancing vehicle in action are available at: https://tinyurl.com/mvn8j7vf
A key objective of this research is to take a more detailed look at a central aspect of resilience in small and medium-sized enterprises (SMEs). A literature review and expert interviews were used to investigate which factors have an impact on the innovative capacity of start-ups and whether these can also be adapted by SMEs. First of all, it must be stated that there are considerable structural and process-related differences between start-ups and SMEs. These can considerably inhibit cooperation between the two forms of enterprise. However, in the same context, success factors and issues in the start-up sector could also be identified that can improve cooperation with SMEs. These and other findings are then discussed in both an economic and an academic context. This article was written as part of the research activities of the Smart Services Competence Centre (proper name: Kompetenzzentrum Smart Services), a central contact point for all questions in the area of smart service digitalization in Baden-Wuerttemberg. Here, companies can obtain information about various digital technologies and take advantage of various measures for the development of new ideas and innovative services (Kompetenzzentrum Smart Services BW: Über das Kompetenzzentrum, 2021).
Measuring cardiorespiratory parameters during sleep using non-contact sensors and the ballistocardiography technique has received much attention as a low-cost, unobtrusive, and non-invasive method. Designing a user-friendly, simple-to-use, easy-to-deploy system that remains robust against errors is still open and challenging due to the complex morphology of the signal. In this work, using four force-sensitive resistor sensors, we conducted a study with four sensor distributions in order to simplify the complexity of the system by identifying the region of interest for heartbeat and respiration measurement. The sensors are deployed under the mattress and attached to the bed frame without any interference with the subjects. The four distributions comprise two linear horizontal, one linear vertical, and one square arrangement, covering the region influencing cardiorespiratory activity. We recruited 4 subjects and acquired data in four regular sleeping positions, each for a duration of 80 seconds. Signal processing was performed using the discrete wavelet transform (bior3.9) with a smoothing level of 4 as well as bandpass filtering. The results indicate a mean absolute error of 2.35 for respiration and 4.34 for heartbeat, and they suggest the efficiency of a triangle-shaped arrangement of three sensors for measuring heartbeat and respiration parameters in all four regular sleeping positions.
Monitoring heart rate and breathing is essential to understanding the physiological processes in sleep analysis. Polysomnography (PSG) systems have traditionally been used for sleep monitoring, but alternative methods can help make sleep monitoring more portable in someone's home. This study conducted a series of experiments to investigate the use of pressure sensors placed under the bed as an alternative to PSG for monitoring heart rate and breathing during sleep. The follow-up experiments involved the addition of small rubber domes (transparent and black) glued to the pressure sensor. The resulting data were compared with the PSG system to determine the accuracy of the pressure sensor readings. The study found that the pressure sensor provided reliable data for extracting heart rate and respiration rate, with mean absolute errors (MAE) of 2.32 and 3.24 for respiration and heart rate, respectively. However, the addition of small rubber hemispheres did not significantly improve the accuracy of the readings, with MAEs of 2.3 breaths per minute and 7.56 bpm for respiration rate and heart rate, respectively. The findings of this study suggest that pressure sensors placed under the bed may serve as a viable alternative to traditional PSG systems for monitoring heart rate and breathing during sleep, providing a more comfortable and non-invasive method of sleep monitoring. The addition of small rubber domes, however, did not significantly enhance the accuracy of the readings, indicating that it may not be a worthwhile addition to the pressure sensor system.
Sleep is an essential part of human existence, as we spend approximately a third of our lives in this state. Sleep disorders are common conditions that can affect many aspects of life. They are diagnosed in special laboratories with a polysomnography system, a costly procedure requiring much effort from the patient. Several systems have been proposed to address this situation, including performing the examination and analysis at the patient's home, using sensors to detect physiological signals that are automatically analysed by algorithms. This work aims to evaluate the use of a contactless respiratory recording system based on an accelerometer sensor in sleep apnea detection. For this purpose, an installation mounted under the bed mattress records the oscillations caused by chest movements during the breathing process. The presented processing algorithm filters the obtained signals and determines the presence of apnea events. The performance of the developed system and apnea event detection algorithm (average accuracy, specificity and sensitivity of 94.6%, 95.3%, and 93.7%, respectively) confirms the suitability of the proposed method and system for further ambulatory and in-home use.
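The reported accuracy, specificity and sensitivity follow directly from confusion-matrix counts of detected versus annotated apnea events; a minimal sketch (the labels below are made up for illustration, not the study's data):

```python
def detection_metrics(y_true, y_pred):
    """Accuracy, sensitivity and specificity from binary event labels
    (1 = apnea event present, 0 = absent)."""
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    tn = sum(t == 0 and p == 0 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    accuracy = (tp + tn) / len(y_true)
    sensitivity = tp / (tp + fn)      # fraction of true events detected
    specificity = tn / (tn + fp)      # fraction of non-events kept quiet
    return accuracy, sensitivity, specificity
```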
Healthy sleep is one of the prerequisites for a good condition of the human body and brain, including general well-being. Unfortunately, several sleep disorders can negatively affect this. One of the most common is sleep apnoea, in which breathing is impaired. Studies have shown that this disorder often remains undiagnosed. To avoid this, it is essential to develop a system that can be widely used in a home environment to detect apnoea and monitor the changes once therapy has been initiated. The conceptualisation of such a system is the main aim of this research. After a thorough analysis of the available literature and state of the art in this area, a concept of the system was created, which includes the following main components: data acquisition (in two parts), data storage, an apnoea detection algorithm, user and device management, and data visualisation. The modules are interchangeable, and interfaces have been defined for data transfer, most of which operate using the MQTT protocol. System diagrams and detailed component descriptions, including signal requirements and visualisation mockups, have also been developed. The system's design includes the necessary concepts for the implementation and can be realised in a prototype in the next phase.
The influence of sleep on human health is enormous. Accordingly, sleep disorders can have a negative impact on it. To avoid this, they should be identified and treated in time. For this purpose, objective (with an appropriate device) or subjective (based on perceived values) measurement methods are used for sleep analysis to understand the problem. The aim of this work is to find out whether an exchange of the two methods is possible and can provide reliable results. In accordance with this goal, a study was conducted with people aged over 65 (a total of 154 night-time recordings) in which both measurement methods were compared. Sleep questionnaires and electronic devices for sleep assessment placed under the mattress were applied to achieve the study aims. The obtained results indicated that a correlation between the two measurement methods could be observed for sleep characteristics such as total sleep time, total time in bed and sleep efficiency. However, there are also significant differences in the absolute values of the two measurement approaches for some subjects/nights, which leads us to conclude that substitution is more likely to be considered in the case of long-term monitoring, where the trends are of more importance than the absolute values for individual nights.
The principal objective of this study is to investigate the impact of perceived stress on traffic and road safety. Therefore, we designed a study that allows the generation and collection of stress-relevant data. Drivers often experience stress due to their perception of a lack of control during the driving process. This can lead to an increased likelihood of traffic accidents, driver errors, and traffic violations. To explore this phenomenon, we used the Perceived Stress Questionnaire (PSQ) to evaluate perceived stress levels during driving simulations and the EPQR questionnaire to determine the personality of the driver. With the presented study, participants can be categorised based on their emotional stability and personality traits. Wearable devices were utilised to monitor each participant's instantaneous heart rate (HR) due to their non-intrusive and portable nature. The findings of this study provide an overview of the link between stress and traffic and road safety. They can be utilised for future research and for implementing strategies to reduce road accidents and promote traffic safety.
Development of an expert system to overpass citizens technological barriers on smart home and living
(2023)
Adopting new technologies can be overwhelming, even for people with experience in the field. For the general public, keeping up with new implementations, releases, brands, and enhancements can cause them to lose interest. There is a clear need to create point sources and platforms that provide helpful information about novel and smart technologies, assisting users, technicians, and providers with products and technologies. The purpose of these platforms is twofold, as they can gather and share information on interests common to manufacturers and vendors. This paper presents the "Finde-Dein-SmartHome" tool, developed in association with the Smart Home & Living competence center [5] to help users learn about, understand, and purchase available technologies that meet their home automation needs. This tool aims to lower the usability barrier and guide potential customers to clear their doubts about privacy and pricing. Communities can use the information provided by this tool to identify market trends that could eventually lower costs for providers and incentivize access to innovative home technologies and devices supporting long-term care.
The development of automatic solutions for the detection of physiological events of interest is booming. Improvements in the collection and storage of large amounts of healthcare data allow access to these data faster and more efficiently. As a result, the development of artificial intelligence models for the detection and monitoring of a large number of pathologies is becoming increasingly common in the medical field. In particular, developing deep learning models for detecting obstructive sleep apnea (OSA) events is at the forefront. Numerous scientific studies focus on the architecture of the models and the results these models can provide in terms of OSA classification and Apnea-Hypopnea Index (AHI) calculation. However, little focus is put on other aspects of great relevance that are crucial for the training and performance of the models. Among these aspects are the set of physiological signals used and the preprocessing tasks prior to model training. This paper covers the essential requirements that must be considered before training a deep learning model for obstructive sleep apnea detection, and surveys solutions that currently exist in the scientific literature by analyzing the preprocessing tasks prior to training.
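Typical preprocessing steps of this kind, resampling, normalisation and segmentation into fixed-length windows, can be sketched as follows; the sampling rates and window length are arbitrary illustrative choices, not recommendations from the paper:

```python
import numpy as np

def preprocess(signal, fs_in, fs_out, win_s):
    """Resample to fs_out (linear interpolation), z-score normalise,
    and cut into non-overlapping windows of win_s seconds."""
    n_out = int(len(signal) * fs_out / fs_in)
    x = np.interp(np.linspace(0.0, 1.0, n_out),
                  np.linspace(0.0, 1.0, len(signal)), signal)
    x = (x - x.mean()) / (x.std() + 1e-8)      # zero mean, unit variance
    w = int(win_s * fs_out)
    n_win = len(x) // w
    return x[: n_win * w].reshape(n_win, w)    # (windows, samples) for the model

# 100 s of a 200 Hz signal -> 10 windows of 10 s at 100 Hz
windows = preprocess(np.sin(np.linspace(0.0, 100.0, 20000)),
                     fs_in=200.0, fs_out=100.0, win_s=10.0)
```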
Recently published nonlinear model-based control approaches achieve impressive performances in complex real-world applications. However, due to model-plant mismatches and unforeseen disturbances, the model-based controller's performance is limited in full-scale applications. In most applications, low-level control loops mitigate the model-plant mismatch and the sensitivity to disturbances. But what is the influence of these low-level control loops? In this paper, we present the model predictive path integral (MPPI) control of a self-balancing vehicle and investigate the influence of subordinate control loops on closed-loop performance. Therefore, simulation and full-scale experiments are performed and analyzed. Subordinate control loops empower the MPPI controller because they dampen the influence of disturbances, and thus improve the model's accuracy. This is the basis for the successful application of model-based control approaches in real-world systems. All in all, a model is used to design a low-level controller, then its closed-loop behavior is determined, and this model is used within the superimposed MPPI control loop – modeling for control and vice versa.
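For illustration, a generic MPPI update on a toy double-integrator plant is sketched below. This is a textbook-style sampling-based update under assumed parameters (horizon, noise level, temperature), not the vehicle controller from the paper:

```python
import numpy as np

def mppi_step(x0, u_nom, dynamics, cost, K=256, sigma=0.5, lam=1.0, rng=None):
    """One MPPI iteration: sample K perturbed control sequences, roll them
    out through the model, and shift the nominal sequence by the
    exp(-cost/lambda)-weighted average of the perturbations."""
    if rng is None:
        rng = np.random.default_rng(0)
    H = len(u_nom)
    eps = rng.normal(0.0, sigma, size=(K, H))
    costs = np.empty(K)
    for k in range(K):
        x, c = x0, 0.0
        for h in range(H):
            x = dynamics(x, u_nom[h] + eps[k, h])
            c += cost(x)
        costs[k] = c
    w = np.exp(-(costs - costs.min()) / lam)
    w /= w.sum()
    return u_nom + w @ eps

# toy double-integrator "vehicle" driven towards the origin
dyn = lambda x, u: np.array([x[0] + 0.1 * x[1], x[1] + 0.1 * u])
cost = lambda x: x[0] ** 2 + 0.1 * x[1] ** 2

x0 = np.array([1.0, 0.0])
u = np.zeros(20)                      # nominal control sequence
rng = np.random.default_rng(0)
for _ in range(30):                   # iterative refinement of the plan
    u = mppi_step(x0, u, dyn, cost, rng=rng)
```

Each iteration samples perturbed control sequences, rolls them out through the model, and re-weights the nominal sequence by the exponentiated negative rollout cost; subordinate loops would sit inside `dynamics` in the paper's setting.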
In 3D extended object tracking (EOT), well-established models exist for tracking the object extent using various shape priors. With these models, however, a single update has to be performed for every measurement, leading to a high computational runtime for high-resolution sensors. In this paper, we address this problem by using various model-independent downsampling schemes based on distance heuristics and random sampling as pre-processing before the update. We investigate the methods in a simulated and a real-world tracking scenario using two different measurement models with measurements gathered from a LiDAR sensor. We found that there is huge potential for speeding up 3D EOT by dropping up to 95% of the measurements in our investigated scenarios when using random sampling. Since random sampling, however, can also result in a subset that does not represent the total set well, leading to poor tracking performance, there is still a high demand for further research.
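The two families of downsampling schemes, random sampling and distance heuristics, can be sketched in a few lines; the 5% retention rate mirrors the 95% drop rate mentioned above, and the center-distance heuristic is one plausible variant, not necessarily the one used in the paper:

```python
import numpy as np

def random_downsample(points, keep_frac=0.05, rng=None):
    """Model-independent random subset of the measurement set."""
    if rng is None:
        rng = np.random.default_rng(42)
    n_keep = max(1, int(len(points) * keep_frac))
    idx = rng.choice(len(points), size=n_keep, replace=False)
    return points[idx]

def distance_downsample(points, center, keep_frac=0.05):
    """Distance heuristic: keep the measurements closest to the
    predicted object center."""
    n_keep = max(1, int(len(points) * keep_frac))
    d = np.linalg.norm(points - center, axis=1)
    return points[np.argsort(d)[:n_keep]]

# example: keep 5% of a simulated 2000-point scan
scan = np.random.default_rng(0).normal(size=(2000, 3))
subset = random_downsample(scan)                   # random 5%
closest = distance_downsample(scan, np.zeros(3))   # 5% nearest the center
```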
Public-key cryptographic algorithms are an essential part of today's cyber security, since they are required for key exchange protocols, digital signatures, and authentication. But large-scale quantum computers threaten the security of the most widely used public-key cryptosystems. Hence, the National Institute of Standards and Technology (NIST) is currently running a standardization process for post-quantum secure public-key cryptography. One type of such system is based on the NP-complete problem of decoding random linear codes and is therefore called code-based cryptography. The best-known code-based cryptographic system is the McEliece system, proposed in 1978 by Robert McEliece. It uses a scrambled generator matrix as the public key and the original generator matrix as well as the scrambling as the private key. When encrypting a message, it is encoded in the public code and a random but correctable error vector is added. Only the legitimate receiver can correct the errors and decrypt the message using the knowledge of the private-key generator matrix. The original proposal of the McEliece system was based on binary Goppa codes, which are also considered for standardization. While those codes seem to be a secure choice, the public keys are extremely large, limiting the practicality of those systems. Many different code families were proposed for the McEliece system, but many of them are considered insecure since attacks exist which use the known code structure to recover the private key. The security of code-based cryptosystems mainly depends on the number of errors added by the sender, which is limited by the error correction capability of the code. Hence, in order to obtain high security for relatively short codes, one needs a high error correction capability. Therefore, maximum distance separable (MDS) codes were proposed for those systems, since they are optimal for the Hamming distance.
In order to increase the error correction capability, we propose q-ary codes over different metrics. There are many code families that have a higher minimum distance in some other metric than in the Hamming metric, leading to increased error correction capability over this metric. To make use of this, one needs to restrict not only the number of errors but also their values. In this work, we propose the weight-one error channel, which restricts the error values to weight one and can be applied for different metrics. In addition, we propose some concatenated code constructions which make use of this restriction of error values. For each of these constructions we discuss the usability in code-based cryptography and compare them to other state-of-the-art code-based cryptosystems. The proposed code constructions show that restricting the error values allows for significantly lower public key sizes for code-based cryptographic systems. Furthermore, the use of concatenated code constructions allows for low-complexity decoding and therefore an efficient cryptosystem.
This thesis presents the development of two different state-feedback controllers to solve the trajectory tracking problem, in which the vessel needs to reach and follow a time-varying reference trajectory. This motion problem was addressed for a full-scale, fully actuated surface vessel whose dynamic model had unknown hydrodynamic and propulsion parameters, which were identified by applying an experimental maneuver-based identification process. This dynamic model was then used to develop the controllers. The first was a backstepping controller, designed with a local exponential stability proof. The second, a nonlinear model predictive controller (NMPC), was developed to minimize the tracking error while considering the thrusters' constraints. Moreover, both controllers considered the thruster allocation problem and counteracted environmental disturbance forces such as current, waves and wind. The effectiveness of these approaches was verified in simulation using Matlab/Simulink and GRAMPC (in the case of the NMPC), and in experimental scenarios, where they were applied to the vessel, performing docking maneuvers at the Rhine River in Constance (Germany).
Despite the number of new tools and methodologies adopted in the road infrastructure sector, the performance of road infrastructure projects is not consistently improving. Considering that the volume of projects undertaken is forecast to increase every year, this is a substantial issue for the sector. Hence, this work focuses on the principles of Blockchain Technology, the road infrastructure sector, and information exchange, with the aim of using the advantages of Blockchain Technology to help overcome the various challenges along the life cycle of road infrastructure projects.
Within the scope of this paper, two studies were conducted. First, focus groups were used to explore where society (the road infrastructure sector) stands in terms of Industry 4.0 and to get a better understanding of whether and where the principles of Blockchain Technology can be used when managing projects in the road infrastructure sector. Second, semi-structured interviews were administered with experts from the road infrastructure sector and experts in Blockchain Technology to better understand the interrelation between these two areas. Based on the outcome of the two studies, technology barriers and enablers were explored for the purpose of improved information exchange within the road infrastructure sector.
The two studies revealed that there are significant and strong interrelations between the principles of the Blockchain Technology, project management within the road infrastructure sector and information exchange. These interrelations are complex and diverse, but overall it can be concluded that the adoption of the principles of Blockchain Technology into the field of information exchange improves the management of road infrastructure projects. Based on the two studies a theoretical framework was developed.
In summary, this research showed that trust is an important factor and builds the foundation for communication and proper information exchange. Within the scope of this thesis, it was demonstrated that the principles of Blockchain Technology can be used to increase transparency, traceability and immutability in the area of information exchange during the life cycle of road infrastructure projects.
Foil-air bearings (FABs) are predominantly used for high-speed, oil-free applications. While they offer many advantages, such as low friction losses at high speeds, stability and price, they lack load capacity as well as resistance to friction wear during start-up and coast-down.
The friction losses of FABs have been studied experimentally by many authors. In order to predict the friction and, consequently, the lifespan of a FAB, the start-up and coast-down regimes are modelled in a way that allows for accurate, efficient simulation and later optimisation of lift-off speed and wear characteristics. The proposed simulation method applies the Kirchhoff-Love plate theory to the top foil mapping [20]. This system of differential equations is coupled with the underlying compliant foil to simulate the displacement due to the pressure buildup. Consequently, this coupled system allows for simulation from almost zero revolutions per minute (rpm) to full speed. The underlying simulation model uses the finite difference method for spatial discretisation and an explicit Runge-Kutta method for time integration.
Difficulties to overcome are the smooth combination of various friction regimes across the sliding surfaces as well as the synchronous coupling of Reynolds, deformation and kinematic equations with highly non-linear terms. Introducing an exponential pressure component based on Greenwood and Tripp’s theory avoids impingement between the rotor and foil.
In the past years, algorithms for 3D shape tracking using radial functions in spherical coordinates, represented with different methods, have been proposed. However, we have seen that in many dynamic scenarios mainly measurements from the lateral surface of the target can be expected, with only a few measurements from the top and bottom parts, leading to an error-prone shape estimate in the top and bottom regions when using a representation in spherical coordinates. We therefore propose to represent the shape of the target using a radial function in cylindrical coordinates, as these only represent regions of the lateral surface, and no information from the top or bottom parts is needed. In this paper, we use a Fourier-Chebyshev double series for 3D shape representation, since a mixture of Fourier and Chebyshev series is a suitable basis for expanding a radial function in cylindrical coordinates. We investigate the method in a simulated and a real-world maritime scenario with a CAD model of the target boat as a reference. We have found that shape representation in cylindrical coordinates has decisive advantages compared to a shape representation in spherical coordinates and should preferably be used if no prior knowledge of the measurement distribution on the surface of the target is available.
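A radial function in cylindrical coordinates expanded in a Fourier-Chebyshev double series can be evaluated as sketched below; the basis ordering and coefficient layout are illustrative assumptions, not the paper's exact parameterisation:

```python
import numpy as np

def radial_fn(phi, z, coeffs):
    """Evaluate r(phi, z) = sum_{m,k} c[m,k] * F_m(phi) * T_k(z), with a
    Fourier basis in the angle (F_0 = 1, F_{2j-1} = cos(j*phi),
    F_{2j} = sin(j*phi)) and Chebyshev polynomials T_k in the height,
    where z is scaled to [-1, 1]."""
    phi, z = np.asarray(phi, float), np.asarray(z, float)
    M, K = coeffs.shape
    r = np.zeros(np.broadcast(phi, z).shape)
    for m in range(M):
        j = (m + 1) // 2
        F = np.ones_like(phi + z) if m == 0 else \
            (np.cos(j * phi) if m % 2 else np.sin(j * phi))
        for k in range(K):
            T = np.cos(k * np.arccos(np.clip(z, -1.0, 1.0)))  # T_k(z)
            r = r + coeffs[m, k] * F * T
    return r

c = np.zeros((3, 2))
c[0, 0] = 2.0      # constant radius
c[1, 0] = 1.0      # cos(phi) lobe on the lateral surface
c[0, 1] = 0.5      # linear taper along the height (T_1(z) = z)
r = radial_fn(np.array([0.0, np.pi]), np.array([1.0, 1.0]), c)
```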
The transformation to Industry 4.0, which is generally seen as a solution to increasing market challenges, is forcing companies to radically change their way of thinking and to be open to new forms of cooperation. In this context, the opening-up of the innovation process is widely seen as a necessity to meet these challenges, especially for small and medium-sized enterprises (SMEs). The aim of the study therefore is to analyze how cooperation can be characterized today, how this character has changed since the establishment of the term Industry 4.0 at the Hanover Fair in 2011, and which cooperation strategies have proven successful. The analysis consists of a quantitative secondary data analysis that includes country-specific data from 35 European countries from 2010 and 2016, collected by the European Commission and the OECD. The research, focusing on the secondary sector, shows that multinational enterprises (MNEs) still tend to cooperate more than SMEs, with a slight overall trend towards protectionism. Nevertheless, there is a clear tendency towards the opening-up of SMEs. In this regard, especially universities, competitors and suppliers have become increasingly attractive as cooperation partners for SMEs.
Cities around the world are facing the implications of a changing climate as an increasingly pressing issue. The negative effects of climate change are already being felt today. Therefore, adaptation to these changes is a mission that every city must master. Leading practices worldwide demonstrate various urban efforts on climate change adaptation (CCA) which are already underway. Above all, the integration of climate data, remote sensing, and in situ data is key to a successful and measurable adaptation strategy. Furthermore, these data can act as a timely decision support tool for municipalities to develop an adaptation strategy, decide which actions to prioritize, and gain the necessary buy-in from local policymakers. The implementation of agile data workflows can facilitate the integration of climate data into climate-resilient urban planning. Due to local specificities, (supra)national, regional, and municipal policies and (by-)laws, as well as geographic and related climatic differences worldwide, there is no single path to climate-resilient urban planning. Agile data workflows can support interdepartmental collaboration and, therefore, need to be integrated into existing management processes and government structures. Agile management, which has its origins in software development, can be a way to break down traditional management practices, such as static waterfall models and sluggish stage-gate processes, and enable the increased level of flexibility and agility required when urgent action is needed. This paper presents the findings of an empirical case study conducted in cooperation with the City of Constance in southern Germany, which is pursuing a transdisciplinary and trans-sectoral co-development approach to make management processes more agile in the context of climate change adaptation.
The aim is to present a possible way of integrating climate data into CCA planning by changing the management approach and implementing a toolbox for low-threshold access to climate data. The city administration, in collaboration with the University of Applied Sciences Constance, the Climate Service Center Germany (GERICS), and the University of Stuttgart, developed a co-creative and participatory project, CoKLIMAx, with the objective of integrating climate data into administrative processes in the form of a toolbox. One key element of CoKLIMAx is the involvement of the population, the city administration, and political decision-makers through targeted communication and regular feedback loops among all involved departments and stakeholder groups. Based on the results of a survey of 72 administrative staff members and a literature review on agile management in municipalities and city administrations, recommendations on a workflow and communication structure for cross-departmental strategies for resilient urban planning in the City of Constance were developed.
This thesis addresses the problems that reports generated by vulnerability scanners impose on the process of vulnerability management, namely (a) an overwhelming amount of data and (b) insufficient prioritization of the scan results.
To assist the process of developing means to counteract those problems, and to allow for quantitative evaluation of their solutions, two metrics are proposed for their effectiveness and efficiency. These metrics imply a focus on higher-severity vulnerabilities and can be applied to any simplification process of vulnerability scan results, provided it relies on a severity score and a remediation-time estimate for each vulnerability.
A priority score is introduced which aims to improve on the widely used Common Vulnerability Scoring System (CVSS) base score of each vulnerability, based on the vulnerability's ease of exploitation, estimated probability of exploitation, and probability of its existence.
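Such a priority score could, for instance, scale the CVSS base score by exploit-related factors; the function and its weights below are purely hypothetical and are not the thesis's actual formula:

```python
def priority_score(cvss_base, ease_of_exploit, p_exploitation, p_existence):
    """Hypothetical priority score: scale the CVSS base score (0-10) by a
    weighted combination of exploit-related factors, each in [0, 1].
    The weights are illustrative assumptions only."""
    factor = 0.5 * ease_of_exploit + 0.3 * p_exploitation + 0.2 * p_existence
    return round(cvss_base * factor, 2)
```

A vulnerability with a high base score but low exploitability is thereby pushed down the remediation queue relative to an easily exploitable one.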
Patterns between vulnerabilities are discovered within the reports generated by the Open Vulnerability Assessment System (OpenVAS) vulnerability scanner, identifying criteria by which they can be categorized from a remediation actor's standpoint. These categories lay the groundwork of a final simplified report and consist of updates that need to be installed on a host, severe vulnerabilities, vulnerabilities that occur on multiple hosts, and vulnerabilities that will take a lot of time to remediate. The highest potential time savings are found to exist within frequently occurring vulnerabilities and minor and major suggested updates.
Processing of the results provided by the vulnerability scanner and creation of the report are realized in the form of a Python script. The resulting reports are short and to the point, and provide a top-down remediation process which should, in theory, allow the institution's attack surface to be minimized as fast as possible. An evaluation of the practicality must follow, as the reports are yet to be introduced into the Information Security Management Lifecycle.
Random matrices are used to filter the center of gravity (CoG) and the covariance matrix of measurements. However, these quantities do not always correspond directly to the position and the extent of the object, e.g. when a lidar sensor is used. In this paper, we propose a Gaussian process regression model (GPRM) to predict the position and extension of the object from the filtered CoG and covariance matrix of the measurements. Training data for the GPRM are generated by a sampling method and a virtual measurement model (VMM). The VMM is a function that generates artificial measurements using ray tracing and allows us to obtain the CoG and covariance matrix that any object would cause. This enables the GPRM to be trained without real data but still be applied to real data due to the precise modeling in the VMM. The results show an accurate extension estimation as long as reality behaves like the model and, e.g., lidar measurements only occur on the side facing the sensor.
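The train-on-synthetic, apply-to-real idea can be sketched with a NumPy-only GP posterior mean: a toy "virtual measurement model" maps a true extent to a measurement spread, and the GP learns the inverse mapping. The kernel, noise level and the linear VMM are illustrative assumptions, not the paper's ray-tracing model:

```python
import numpy as np

def gp_fit_predict(X, y, Xs, length=1.0, noise=1e-4):
    """Posterior mean of GP regression with an RBF kernel (NumPy-only)."""
    def k(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-0.5 * d2 / length ** 2)
    K = k(X, X) + noise * np.eye(len(X))      # jitter for numerical stability
    return k(Xs, X) @ np.linalg.solve(K, y)

# toy VMM: the measurement spread (std of the point cloud) that the sensor
# would observe is half the true object width
rng = np.random.default_rng(0)
width = rng.uniform(1.0, 4.0, size=(30, 1))   # sampled true extents
spread = 0.5 * width                          # simulated measurement feature

# train on synthetic pairs, then invert: spread 1.0 should map back to ~2.0
pred = gp_fit_predict(spread, width.ravel(), np.array([[1.0]]))
```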
Contemporary empirical applications frequently require flexible regression models for complex response types and large tabular or non-tabular data, including images or text. Classical regression models either break down under the computational load of processing such data or require additional manual feature extraction to make these problems tractable. Here, we present deeptrafo, a package for fitting flexible regression models for conditional distributions using a TensorFlow backend with numerous additional processors, such as neural networks, penalties, and smoothing splines. Package deeptrafo implements deep conditional transformation models (DCTMs) for binary, ordinal, count, survival, continuous, and time series responses, potentially with uninformative censoring. Unlike other available methods, DCTMs do not assume a parametric family of distributions for the response. Further, the data analyst may trade off interpretability and flexibility by supplying custom neural network architectures and smoothers for each term in an intuitive formula interface. We demonstrate how to set up, fit, and work with DCTMs for several response types. We further showcase how to construct ensembles of these models, evaluate models using inbuilt cross-validation, and use other convenience functions for DCTMs in several applications. Lastly, we discuss DCTMs in light of other approaches to regression with non-tabular data.
This paper aims to apply the basics of the Service-Dominant Logic, especially the concept of creating benefits through serving, to the stationary retail industry. In the industrial context, the shift from a product-driven point of view to a service-driven perspective has been discussed widely. However, there are only a few connections to how this can be applied to the retail sector on a B2C level and how retailers can use smart services in order to enable customer engagement, loyalty and retention. The expectations of customers towards future stationary retail are developing significantly, as consumers have become used to the comfort of online shopping. Especially the younger generation, Generation Z, seems to have shifted its priorities from the bare purchase of products to an experience- and service-driven approach when shopping over the counter. To stay successful long-term, companies from this sector need to adapt to the expectations of their future main customer group. Therefore, this paper analyses the specific needs of Generation Z, explains how smart services contribute to creating benefit for this customer group, and how this affects the economic sustainability of these firms.
As one of the most important branches of industry in Germany and the European Union, the mechanical and plant engineering sector is confronted with fundamental changes due to ever shorter innovation cycles and increased competitive pressure. This makes it even more important to increase the level of service components in business models with a low service level, which are still frequently found in SMEs. This paper is dedicated to the changes that the individual components of a business model have experienced and will experience. Special attention is paid to economic sustainability, since service business models can also positively influence the long-term nature of a business. Seven interviews conducted with relevant companies serve as the empirical basis of this paper. The analysed effects of smart services and active customer integration are structured and summarized within the three pillars of every business model: the value proposition, the value creation architecture, and the revenue mechanics.
Sanctions encompass a wide set of policy instruments restricting cross‐border economic activities. In this paper, we study how different types of sanctions affect the export behavior of firms to the targeted countries. We combine Danish register data, including information on firm‐destination‐specific exports, with information on sanctions imposed by Denmark from the Global Sanctions Database. Our data allow us to study firms' export behavior in 62 sanctioned countries, amounting to a total of 453 country‐years with sanctions over the period 2000–2015. Methodologically, we apply a two‐stage estimation strategy to properly account for multilateral resistance terms. We find that, on average, sanctions lead to a significant reduction in firms' destination‐specific exports and a significant increase in firms' probability to exit the destination. Next, we study heterogeneity in the effects of sanctions across (i) sanction types and sanction packages, (ii) the objectives of sanctions, and (iii) countries subject to sanctions. Results confirm that the effects of sanctions on firms' export behavior vary considerably across these three dimensions.
An inter- and transdisciplinary concept has been developed, focusing on the scaling of industrial circular construction using innovative compacted mineral mixtures (CMM) derived from various soil types (sand, silt, clay) and recycled mineral waste. The concept aims to accelerate the systemic transformation of the construction industry towards carbon neutrality by promoting the large-scale adoption and automation of CMM-based construction materials, which incorporate natural mineral components and recycled aggregates or industrial by-products. In close collaboration with international and domestic stakeholders in the construction sector, the concept explores the integration of various CMM-based construction methods for producing wall elements in conventional building construction. Leveraging a digital urban mining platform, the concept aims to standardize the production process and enable mass-scale production. The ultimate goal is to fully harness the potential of automated CMM-based wall elements as a fast, competitive, emission-free, and recyclable alternative to traditional masonry and concrete construction techniques. To achieve this objective, the concept draws upon the latest advances in soil mechanics, rheology, and automation and incorporates open-source digital platform technologies to enhance data accessibility, processing, and knowledge acquisition. This will bolster confidence in CMM-based technologies and facilitate their widespread adoption. The extraordinary transfer potential of this approach necessitates both basic and applied research. As such, the proposed transformative, inter- and transdisciplinary concept will be conducted and synthesized using a comprehensive, holistic, and transfer-oriented methodology.
Digitization and sustainability are the two big topics of our time. As the use of digital products like IoT devices continues to grow, so does the energy consumption caused by the Internet. At the same time, more and more companies feel the need to become carbon neutral and sustainable. Determining the environmental impact of an IoT device is challenging, as both the production of the hardware components and the electricity consumption of the Internet must be considered, since the Internet is the primary communication medium of an IoT device. Estimating the electricity consumption of the Internet itself is a complex task. We performed a life cycle assessment (LCA) to determine the environmental impact of an intelligent smoke detector sold in Germany, taking its whole life cycle from cradle to grave into account. We applied the impact assessment method ReCiPe 2016 Midpoint and compared its results with ILCD 2011 Midpoint+ to check the robustness of our results. The LCA results showed that electricity consumption during the use phase is the main contributor to environmental impacts. This contribution is caused by the mining of coal, which is part of the German electricity mix. Consequently, the smoke detector contributes mainly to the impact categories of freshwater and marine ecotoxicity, but only marginally to global warming.
Carroll, A.B. (2023)
Bribery (2023)
This paper presents the integration of a spline-based extension model into a probability hypothesis density (PHD) filter for extended targets. Using this filter, the position and extent of each object, as well as the number of objects present, can be estimated jointly. To this end, the spline extension model and the PHD filter are combined in a Gaussian mixture (GM) implementation. Simulation results using artificial laser measurements are used to evaluate the performance of the presented filter. Finally, the results are illustrated and discussed.
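To illustrate the Gaussian mixture representation mentioned above: in a GM-PHD filter the multi-target intensity is a weighted sum of Gaussians, the expected number of targets is the sum of the mixture weights, and prediction scales weights by a survival probability and appends birth components. The following is a minimal sketch of that prediction step, not the paper's spline-extended implementation; the motion model `F`, noise `Q`, and `p_survival` are illustrative assumptions.

```python
import numpy as np

def gm_phd_predict(weights, means, covs, F, Q, p_survival, birth):
    """Prediction step of a Gaussian-mixture PHD filter (sketch).

    Each surviving component's weight is scaled by p_survival and its
    mean/covariance propagated through the linear motion model (F, Q);
    spontaneous-birth components are then appended.
    """
    w_pred = [p_survival * w for w in weights]
    m_pred = [F @ m for m in means]
    P_pred = [F @ P @ F.T + Q for P in covs]
    bw, bm, bP = birth  # (weights, means, covs) of birth components
    return w_pred + list(bw), m_pred + list(bm), P_pred + list(bP)

def expected_cardinality(weights):
    """The integral of the PHD intensity, i.e. the expected number of
    targets, equals the sum of the Gaussian mixture weights."""
    return sum(weights)
```

The joint estimate of object count and states falls out of this representation: cardinality from the weights, positions/extents from the component means.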
A growing share of modern trade policy instruments is shaped by non-tariff barriers (NTBs). Based on a structural gravity equation and the recently updated Global Trade Alert database, we empirically investigate the effect of NTBs on imports. Our analysis reveals that the implementation of NTBs reduces imports of affected products by up to 12%. Their trade dampening effect is thus comparable to that of trade defence instruments such as anti-dumping duties. It is smaller for exporters that have a free trade agreement with the importing country. Different types of NTBs affect trade to a different extent. Finally, we investigate the effect of behind-the-border measures, showing that they significantly lower the importer’s market access.
CSR Pyramid (2023)
Motion estimation is an essential element for autonomous vessels. It is used, e.g., for lidar motion compensation as well as for mapping and detection tasks in a maritime environment. Because gyroscopes alone are not reliable and a high-performance inertial measurement unit is quite expensive, we present an approach to visual pitch and roll estimation that uses a convolutional neural network for water segmentation, a stereo system for reconstruction, and simple geometry to estimate pitch and roll. The algorithm is validated on a novel, publicly available dataset recorded at Lake Constance. Our experiments show that the pitch and roll estimator provides accurate results in comparison to an Xsens IMU sensor. We can further improve the pitch and roll estimation by sensor fusion with a gyroscope. The algorithm's implementation is available as a ROS node.
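The "simple geometry" step can be pictured as follows: once water pixels are segmented and stereo-reconstructed into 3D points, a plane fitted to those points approximates the (locally level) water surface, and the plane normal yields pitch and roll. This sketch assumes an x-forward, y-left, z-up vessel frame and a least-squares plane fit; it is an illustration of the idea, not the paper's exact implementation, and sign conventions would need checking against the actual camera/IMU setup.

```python
import numpy as np

def pitch_roll_from_water_points(points):
    """Estimate pitch and roll from (N, 3) stereo-reconstructed water points.

    Fits the plane z = a*x + b*y + c by least squares; the plane normal,
    compared against the vertical, gives pitch (tilt about y) and roll
    (tilt about x) relative to a level water surface.
    """
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    A = np.column_stack([x, y, np.ones_like(x)])
    (a, b, _c), *_ = np.linalg.lstsq(A, z, rcond=None)
    normal = np.array([-a, -b, 1.0])
    normal /= np.linalg.norm(normal)
    pitch = np.arctan2(-normal[0], normal[2])  # tilt about the y-axis (assumed convention)
    roll = np.arctan2(normal[1], normal[2])    # tilt about the x-axis (assumed convention)
    return pitch, roll
```

In practice one would fit robustly (e.g. RANSAC) since segmentation errors and waves contaminate the point cloud, and then fuse the result with a gyroscope as the abstract describes.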
Background: Polysomnography (PSG) is the gold standard for detecting obstructive sleep apnea (OSA). However, this technique has many disadvantages when using it outside the hospital or for daily use. Portable monitors (PMs) aim to streamline the OSA detection process through deep learning (DL).
Materials and methods: We studied how to detect OSA events and calculate the apnea-hypopnea index (AHI) using deep learning models intended to be implemented on PMs. Several deep learning models are presented after being trained on polysomnography data from the National Sleep Research Resource (NSRR) repository. The best hyperparameters for the DL architecture are presented. In addition, emphasis is placed on model explainability techniques, specifically Gradient-weighted Class Activation Mapping (Grad-CAM).
Results: The results for the best DL model are presented and analyzed. The interpretability of the DL model is also analyzed by studying the regions of the signals that are most relevant for the model to make the decision. The model that yields the best result is a one-dimensional convolutional neural network (1D-CNN) with 84.3% accuracy.
Conclusion: The use of PMs using machine learning techniques for detecting OSA events still has a long way to go. However, our method for developing explainable DL models demonstrates that PMs appear to be a promising alternative to PSG in the future for the detection of obstructive apnea events and the automatic calculation of AHI.
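For readers unfamiliar with the AHI mentioned in the methods: it is simply the number of respiratory events (apneas plus hypopneas) per hour of sleep, and standard clinical cut-offs map it to a severity grade. A minimal sketch of that calculation (the thresholds are the commonly used clinical conventions, not values from this paper):

```python
def apnea_hypopnea_index(n_events, total_sleep_hours):
    """AHI = (apneas + hypopneas) per hour of sleep."""
    return n_events / total_sleep_hours

def osa_severity(ahi):
    """Commonly used clinical cut-offs: <5 normal, 5-15 mild,
    15-30 moderate, >=30 severe."""
    if ahi < 5:
        return "normal"
    if ahi < 15:
        return "mild"
    if ahi < 30:
        return "moderate"
    return "severe"
```

In the pipeline described above, the DL model supplies the per-event detections and the AHI is then derived automatically from their count and the total sleep time.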
Increasing demand for sustainable, resilient, and low-carbon construction materials has highlighted the potential of Compacted Mineral Mixtures (CMMs), which are formulated from various soil types (sand, silt, clay) and recycled mineral waste. This paper presents a comprehensive inter- and transdisciplinary research concept that aims to industrialise and scale up the adoption of CMM-based construction materials and methods, thereby accelerating the construction industry’s systemic transition towards carbon neutrality. By drawing upon the latest advances in soil mechanics, rheology, and automation, we propose the development of a robust material properties database to inform the design and application of CMM-based materials, taking into account their complex, time-dependent behaviour. Advanced soil mechanical tests would be utilised to ensure optimal performance under various loading and ageing conditions. This research has also recognised the importance of context-specific strategies for CMM adoption. We have explored the implications and limitations of implementing the proposed framework in developing countries, particularly where resources may be constrained. We aim to shed light on socio-economic and regulatory aspects that could influence the adoption of these sustainable construction methods. The proposed concept explores how the automated production of CMM-based wall elements can become a fast, competitive, emission-free, and recyclable alternative to traditional masonry and concrete construction techniques. We advocate for the integration of open-source digital platform technologies to enhance data accessibility, processing, and knowledge acquisition; to boost confidence in CMM-based technologies; and to catalyse their widespread adoption. We believe that the transformative potential of this research necessitates a blend of basic and applied investigation using a comprehensive, holistic, and transfer-oriented methodology. 
Thus, this paper serves to highlight the viability and multiple benefits of CMMs in construction, emphasising their pivotal role in advancing sustainable development and resilience in the built environment.
This paper introduces the third update/release of the Global Sanctions Data Base (GSDB-R3). The GSDB-R3 extends the period of coverage from 1950–2019 to 1950–2022, which includes two special periods—COVID-19 and the new sanctions against Russia. This update of the GSDB contains a total of 1325 cases. In response to multiple inquiries and requests, the GSDB-R3 has been amended with a new variable that distinguishes between unilateral and multilateral sanctions. As before, the GSDB comes in two versions, case-specific and dyadic, which are freely available upon request at GSDB@drexel.edu. To highlight one of the new features of the GSDB, we estimate the heterogeneous effects of unilateral and multilateral sanctions on trade. We also obtain estimates of the effects on trade of the 2014 sanctions on Russia.
The scoring of sleep stages is one of the essential tasks in sleep analysis. Since a manual procedure requires considerable human and financial resources, and incorporates some subjectivity, an automated approach could offer several advantages. There have been many developments in this area, and to provide a comprehensive overview, it is essential to review relevant recent works and summarise the characteristics of the approaches, which is the main aim of this article. To achieve this, we examined articles published between 2018 and 2022 that dealt with the automated scoring of sleep stages. After reviewing a total of 515 publications, 125 articles were included in the final selection for in-depth analysis. The results revealed that automatic scoring demonstrates good quality (with Cohen's kappa reaching over 0.80 and accuracy over 90%) in analysing EEG/EEG + EOG + EMG signals. At the same time, it should be noted that there has been no breakthrough in the quality of results using these signals in recent years. Systems involving other signals that could potentially be acquired more conveniently for the user (e.g. respiratory, cardiac or movement signals) remain more challenging to implement with a high level of reliability but offer considerable potential for innovation. In general, automatic sleep stage scoring has excellent potential to assist medical professionals while providing an objective assessment.
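Cohen's kappa, the agreement metric quoted above, corrects raw agreement between two scorers for the agreement expected by chance: kappa = (p_o - p_e) / (1 - p_e). A small self-contained sketch of the computation (the label set is whatever stage vocabulary the scorers use):

```python
import numpy as np

def cohens_kappa(a, b, labels):
    """Cohen's kappa between two label sequences a and b.

    p_o is the observed fraction of agreements; p_e the chance agreement
    from the two scorers' marginal label frequencies.
    """
    a, b = np.asarray(a), np.asarray(b)
    p_o = np.mean(a == b)
    p_e = sum(np.mean(a == lab) * np.mean(b == lab) for lab in labels)
    return (p_o - p_e) / (1 - p_e)
```

A kappa above 0.80, as reported for EEG-based systems, is conventionally read as almost-perfect agreement, which is why it is the headline figure in scoring studies rather than raw accuracy.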
In order to ensure sufficient recovery of the human body and brain, healthy sleep is indispensable. For this purpose, appropriate therapy should be initiated at an early stage in the case of sleep disorders. For some sleep disorders (e.g., insomnia), a sleep diary is essential for diagnosis and therapy monitoring. However, subjective measurement with a sleep diary has several disadvantages: it requires regular action from the user, reduces comfort, and risks data loss. To automate sleep monitoring and increase user comfort, one could consider replacing the sleep diary with an automatic measurement, such as a smartwatch, which does not disturb sleep. To obtain accurate results on the feasibility of such a replacement, a field study was conducted with a total of 166 overnight recordings, followed by an analysis of the results. In this evaluation, objective sleep measurement with a Samsung Galaxy Watch 4 was compared to the subjective approach with a sleep diary, which is a standard method in sleep medicine. The focus was on comparing four relevant sleep characteristics: falling-asleep time, waking-up time, total sleep time (TST), and sleep efficiency (SE). The evaluation showed that a smartwatch can replace subjective measurement of falling-asleep and waking-up time, allowing for some level of inaccuracy. For SE, substitution was also shown to be possible, although some individual recordings showed a larger discrepancy between the two approaches. By contrast, the evaluation of the TST measurement does not currently allow us to recommend substituting the measurement method for this sleep parameter. The appropriateness of replacing sleep-diary measurement with a smartwatch depends on the acceptable level of discrepancy. We propose four levels of similarity of results, defining ranges of absolute differences between objective and subjective measurements.
By considering the values in the provided table and knowing the required accuracy, it is possible to determine the suitability of substitution in each individual case. The introduction of a “similarity level” parameter increases the adaptability and reusability of study findings in individual practical cases.
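The comparison logic described above can be sketched compactly: sleep efficiency is the standard ratio TST / time in bed, and a "similarity level" classifies the absolute diary-vs-watch difference into one of four bands. The threshold values below are placeholders for illustration only; the study's actual table defines the real ranges.

```python
def sleep_efficiency(total_sleep_min, time_in_bed_min):
    """SE (%) = total sleep time / time in bed * 100 (standard definition)."""
    return 100.0 * total_sleep_min / time_in_bed_min

def similarity_level(diary_value, watch_value, thresholds=(5, 15, 30)):
    """Classify the absolute difference between the subjective (diary) and
    objective (smartwatch) measurement into four similarity levels.
    The default thresholds are hypothetical, not the paper's table values."""
    diff = abs(diary_value - watch_value)
    for level, t in enumerate(thresholds, start=1):
        if diff <= t:
            return level
    return len(thresholds) + 1
```

Given the required accuracy for a particular clinical use, one would pick the acceptable level and read off from the study's table whether the smartwatch substitution is suitable for that parameter.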