Black-box variational inference (BBVI) is a technique to approximate the posterior of Bayesian models by optimization. Similar to MCMC, the user only needs to specify the model; then, the inference procedure is done automatically. In contrast to MCMC, BBVI scales to many observations, is faster for some applications, and can take advantage of highly optimized deep learning frameworks since it can be formulated as a minimization task. In the case of complex posteriors, however, other state-of-the-art BBVI approaches often yield unsatisfactory posterior approximations. This paper presents Bernstein flow variational inference (BF-VI), a robust and easy-to-use method flexible enough to approximate complex multivariate posteriors. BF-VI combines ideas from normalizing flows and Bernstein polynomial-based transformation models. In benchmark experiments, we compare BF-VI solutions with exact posteriors, MCMC solutions, and state-of-the-art BBVI methods, including normalizing flow-based BBVI. We show for low-dimensional models that BF-VI accurately approximates the true posterior; in higher-dimensional models, BF-VI compares favorably against other BBVI methods. Further, using BF-VI, we develop a Bayesian model for the semi-structured melanoma challenge data, combining a CNN model part for image data with an interpretable model part for tabular data, and demonstrate, for the first time, the use of BBVI in semi-structured models.
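The Bernstein-polynomial transformation underlying BF-VI can be illustrated as a monotone map built from a Bernstein basis. The following is a minimal numpy sketch assuming a simple coefficient parameterisation `theta`; it shows the idea, not the paper's exact formulation.

```python
import numpy as np
from math import comb

def bernstein_flow(z, theta):
    """Monotone transformation built from a Bernstein polynomial basis,
    the core component BF-VI uses to deform a simple base distribution
    into a flexible posterior. Monotone when `theta` is non-decreasing.
    Sketch of the idea, not the paper's parameterisation."""
    theta = np.asarray(theta, dtype=float)
    M = len(theta) - 1
    z = np.clip(np.asarray(z, dtype=float), 0.0, 1.0)
    # Bernstein basis polynomials B_{m,M}(z), one row per coefficient
    basis = np.array([comb(M, m) * z**m * (1 - z)**(M - m)
                      for m in range(M + 1)])
    return theta @ basis
```

With non-decreasing coefficients the map is strictly monotone, which is what allows a simple base distribution to be transformed into a flexible posterior while keeping the change-of-variables density tractable.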
Apnea is a sleep disorder characterized by breathing interruptions during sleep, impacting cardiorespiratory function and overall health. Traditional diagnostic methods, like polysomnography (PSG), are obtrusive, which motivates noninvasive monitoring. This study aims to develop and validate a novel sleep monitoring system using noninvasive sensor technology to estimate cardiorespiratory parameters and detect sleep apnea. We designed a seamless monitoring system integrating noncontact force-sensitive resistor sensors to collect ballistocardiogram signals associated with cardiorespiratory activity. We enhanced the sensor's sensitivity and reduced noise by designing a new edge-measuring sensor concept that uses a hemispherical dome and a mechanical hanger to distribute the force and mechanically amplify the micromovements caused by cardiac and respiratory activity. In total, we deployed three edge-measuring sensors: two under the thoracic region and one under the abdominal region. The system is supported by onboard signal preprocessing in multiple physical layers deployed under the mattress. We collected data in four sleeping positions from 16 subjects and analyzed them using ensemble empirical mode decomposition (EEMD) to avoid frequency mixing. We also developed an adaptive thresholding method to identify sleep apnea. The error was reduced to 3.98 and 1.43 beats/min (BPM) for heart rate (HR) and respiration estimation, respectively. Apnea was detected with an accuracy of 87%. We optimized the system such that a single edge-measuring sensor can measure the cardiorespiratory parameters. This reduction in complexity and simplification of use shows excellent potential for continuous in-home monitoring.
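The adaptive-thresholding idea can be sketched as follows: flag a window as apneic when its breathing amplitude drops below a fraction of the recent median amplitude. This is a minimal numpy illustration with assumed window lengths and threshold ratio, not the authors' implementation.

```python
import numpy as np

def detect_apnea(resp, fs, win_s=10.0, baseline_s=120.0, ratio=0.3):
    """Flag fixed-length windows whose breathing amplitude falls below
    an adaptive threshold: a fraction `ratio` of the median amplitude
    over the preceding `baseline_s` seconds. Parameter values are
    illustrative assumptions."""
    win = int(win_s * fs)
    n_win = len(resp) // win
    # peak-to-peak amplitude per window as a crude breathing-effort proxy
    amps = np.array([np.ptp(resp[i * win:(i + 1) * win])
                     for i in range(n_win)])
    hist = int(baseline_s / win_s)
    flags = np.zeros(n_win, dtype=bool)
    for i in range(n_win):
        lo = max(0, i - hist)
        baseline = np.median(amps[lo:i]) if i > lo else amps[i]
        flags[i] = amps[i] < ratio * baseline
    return flags
```

On a synthetic 0.25 Hz respiration trace with a 20-second near-flat segment, the two windows covering the flat segment are flagged and the normal windows are not.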
We quantify the effects of GATT/WTO membership on trade and welfare. Using an extensive database covering manufacturing trade for 186 countries over the period 1980–2016, we find that the average partial equilibrium impact of GATT/WTO membership on trade among member countries is large, positive, and significant. We contribute to the literature by providing country-specific estimates and find that they vary widely across the countries in our sample, with poorer members benefiting more. Using these estimates, we simulate the general equilibrium effects of GATT/WTO membership on welfare, which are sizable and heterogeneous across members. We show that countries not experiencing positive trade effects from joining the GATT/WTO can still gain in terms of welfare, due to lower import prices and higher export demand.
Healthy and good sleep is a prerequisite for a rested mind and body. Both form the basis for physical and mental health. Healthy sleep is hindered by sleep disorders, the medically diagnosed frequency of which increases sharply from the age of 40. This chapter describes the formal specification of an on-course practical implementation for a non-invasive system based on biomedical signal processing to support the diagnosis and treatment of sleep-related diseases. The system aims to continuously monitor vital data during sleep in a patient’s home environment over long periods by using non-invasive technologies. At the center of the development is the MORPHEUS Box (MoBo), which consists of five main conceptualizations: the MoBo core, the MoBo-HW, the MoBo algorithm, the MoBo API, and the MoBo app. These synergistic elements aim to support the diagnosis and treatment of sleep-related diseases. Although there are related developments in individual aspects concerning the system, no comparative approach is known that gives a similar scope of functionality, deployment flexibility, extensibility, or the possibility to use multiple user groups. With the specification provided in this chapter, the MORPHEUS project sets a good platform, data model, and transmission strategies to bring an innovative proposal to measure sleep quality and detect sleep diseases from non-invasive sensors.
Infrastructure-making in interwar India was a dynamic, multilayered process involving roads and vehicles in urban and rural sites. One of the strongest arenas of this process was Bombay Presidency and the Central Provinces in central and western India. Focusing on this region in the interwar period, this paper analyzes the varied relationship between peasant households and town-centred modernizing agents in the making of road transport infrastructures. The central argument of this paper concerns the persistence of bullock carts over motor cars in the region. This persistence was grounded in the specific regional environment, the effects of the 1930s economic depression, and the priorities of social classes. Pinpointing these connections, the paper highlights that “modernization” of infrastructure was neither a simple, linear process of progressivist change, nor did it mean the mere survival of apparently “old” technologies in the modern era. Instead, the paper pays attention to conflicting social complexities, implications, and meanings of the connection between infrastructure and modernity that modernization assumptions often overlook. Here, the paper shows how technological change occurred as a result of real, material class interests pulling infrastructural technology in different directions. This was where and why the arguments of road-motor lobbyists and cart advocates eventually clashed, and Gandhian social workers resisted motor transport in defense of peasant interests.
This study aims to adapt the CEFR in developing an integrative approach-based teaching material model for a pre-basic BISOL class. The method used is the research and development design of Borg and Gall. The stages are identification of the problem; formulation of a hypothetical draft model; feasibility testing by experts; product revision; and testing of product effectiveness. The data were collected through surveys, interviews, and documentation. The needs identification revealed data encompassing 10 themes, 5 tasks per theme, and diverse evaluations comprising theory, in-class practice, and real-world field assignments, on both an individual and a group basis. These identified needs require alignment with CEFR A1 for the development of BISOL learning. These findings were subsequently incorporated into the design of the teaching material model, and the results indicated that tailoring the CEFR to BISOL as an integrative language teaching material model was feasible for classroom application, as assessed by experts. The implications suggest that integrating the CEFR into BISOL is highly feasible for the development of teaching materials, and teachers can leverage this instructional model to enhance students' proficiency in the Indonesian language.
In this work, a storage study was conducted to find suitable packaging material for tomato powder storage. Experiments were laid out in a single-factor completely randomized design (CRD) to study the effect of packaging materials on lycopene, vitamin C, moisture content, and water activity of tomato powder. The factor (packaging material) has three levels (low‐density polyethylene bag; polypropylene bottle; and aluminum foil wrap packed in a low‐density polyethylene bag) and was replicated three times. Tomato slices of var. Galilea, dried in a twin-layer solar tunnel dryer, were used in the study. The dried tomato slices were ground and packed (40 g each) in the packaging materials and stored at room temperature. Samples were drawn from the packages at 2‐month intervals for quality analysis, and SAS (version 9.2) software was used for statistical analysis. Higher retention of lycopene (80.13%) and vitamin C (49.32%) and a nonsignificant increase in moisture content and water activity were observed for tomato powder packed in polypropylene bottles after 6 months of storage. For low‐density polyethylene packed samples and samples wrapped with aluminum foil and packed in a low‐density polyethylene bag, 57.06% and 60.45% lycopene retention and 42.9% and 49.23% vitamin C retention were observed, respectively, after 6 months of storage. Considering these results, it can be concluded that the lycopene and vitamin C content of twin-layer solar tunnel dried tomato powder can be preserved at ambient temperature for 6 months by packing in a polypropylene bottle, with moisture content and water activity remaining within a safe range.
The aim of this paper is to find out how accommodation providers in the Seychelles perceive climate change and what mitigation and adaptation measures they can provide. In order to answer these questions, a qualitative mixed-method approach, comprising twenty semi-structured interviews, an online survey, and participant observation, was used. Results show that accommodation providers especially perceive those effects of climate change that directly affect their business and that they have already partly implemented some mitigation and adaptation measures. However, strategies and regulations are needed at the level of the Seychelles' government and on a global level to actually achieve CO2-neutral travel.
Unobtrusive health monitoring systems are important when continuous monitoring of a patient's vital signs is required. In this paper, signals obtained from accelerometers placed under a bed are processed with ballistocardiography algorithms and compared with synchronized electrocardiographic signals.
Cardiovascular diseases (CVD) are leading contributors to global mortality, necessitating advanced methods for vital sign monitoring. Heart Rate Variability (HRV) and Respiratory Rate, key indicators of cardiovascular health, are traditionally monitored via Electrocardiogram (ECG). However, ECG's obtrusiveness limits its practicality, prompting the exploration of Ballistocardiography (BCG) as a non-invasive alternative. BCG records the mechanical activity of the body with each heartbeat, offering a contactless method for HRV monitoring. Despite its benefits, BCG signals are susceptible to external interference and present a challenge in accurately detecting J-Peaks. This research uses advanced signal processing and deep learning techniques to overcome these limitations. Our approach integrates accelerometers for long-term BCG data collection during sleep, applying Discrete Wavelet Transforms (DWT) and Ensemble Empirical Mode Decomposition (EEMD) for feature extraction. The Bi-LSTM model, leveraging these features, enhances heartbeat detection, offering improved reliability over traditional methods. The study's findings indicate that the combined use of DWT, EEMD, and Bi-LSTM for J-Peak detection in BCG signals is effective, with potential applications in unobtrusive long-term cardiovascular monitoring. Our results suggest that this methodology could contribute to HRV monitoring, particularly in home settings, enhancing patient comfort and compliance.
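One level of a discrete wavelet transform, the feature-extraction step named above, can be sketched with the Haar wavelet in plain numpy. The study's actual wavelet family and the Bi-LSTM stage are not reproduced here; this only illustrates how the detail band isolates the sharp transients (such as J-peaks) that later stages classify.

```python
import numpy as np

def haar_dwt(x):
    """One level of the Haar discrete wavelet transform:
    returns (approximation, detail) coefficients."""
    x = np.asarray(x, dtype=float)
    if len(x) % 2:
        x = x[:-1]                      # truncate to even length
    s2 = np.sqrt(2.0)
    approx = (x[0::2] + x[1::2]) / s2   # low-pass: smooth trend
    detail = (x[0::2] - x[1::2]) / s2   # high-pass: sharp transients
    return approx, detail

def dwt_features(x, levels=3):
    """Stack multi-level detail-band energies as a simple feature
    vector, a stand-in for the paper's DWT feature extraction."""
    feats = []
    for _ in range(levels):
        x, d = haar_dwt(x)
        feats.append(float(np.sum(d ** 2)))  # energy of each detail band
    return feats
```

Because the Haar transform is orthonormal, the total energy of the signal is preserved across the approximation and detail bands, which makes band energies comparable features.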
This study investigates the application of Force Sensing Resistor (FSR) sensors and machine learning algorithms for non-invasive body position monitoring during sleep. Although reliable, traditional methods like Polysomnography (PSG) are invasive and unsuited for extended home-based monitoring. Our approach utilizes FSR sensors placed beneath the mattress to detect body positions effectively. We employed machine learning techniques, specifically Random Forest (RF), K-Nearest Neighbors (KNN), and XGBoost algorithms, to analyze the sensor data. The models were trained and tested using data from a controlled study with 15 subjects assuming various sleep positions. The performance of these models was evaluated based on accuracy and confusion matrices. The results indicate XGBoost as the most effective model for this application, followed by RF and KNN, offering promising avenues for home-based sleep monitoring systems.
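Of the three models compared, K-Nearest Neighbors is simple enough to sketch without any ML library. The toy FSR feature vectors and position labels below are invented for illustration; this is not the authors' feature set or implementation.

```python
import numpy as np

def knn_predict(X_train, y_train, X_test, k=3):
    """Minimal K-Nearest-Neighbors classifier: Euclidean distance,
    majority vote among the k closest training samples."""
    preds = []
    for x in np.atleast_2d(X_test):
        d = np.linalg.norm(X_train - x, axis=1)       # distances to all
        nearest = y_train[np.argsort(d)[:k]]          # k nearest labels
        vals, counts = np.unique(nearest, return_counts=True)
        preds.append(vals[np.argmax(counts)])         # majority vote
    return np.array(preds)

# Invented 3-sensor FSR pressure patterns: 0 = supine, 1 = lateral
X_train = np.array([[5., 1., 1.], [5., 2., 1.], [1., 5., 5.], [1., 4., 5.]])
y_train = np.array([0, 0, 1, 1])
pred = knn_predict(X_train, y_train, np.array([[5., 1., 2.], [1., 5., 4.]]))
```

In the study the same train/predict interface would be filled by RF and XGBoost models, with accuracy and confusion matrices computed on held-out subjects.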
The massive use of patient data for the training of artificial intelligence algorithms is common nowadays in medicine. In this scientific work, a statistical analysis of one of the most used datasets for training artificial intelligence models for the detection of sleep disorders is performed: the Sleep Heart Health Study 2. This study focuses on determining whether the gender and age of the patients have an influence relevant enough to warrant working with differentiated datasets based on these variables when training artificial intelligence models.
Accurate monitoring of a patient's heart rate is a key element in the medical observation and health monitoring. In particular, its importance extends to the identification of sleep-related disorders. Various methods have been established that involve sensor-based recording of physiological signals followed by automated examination and analysis. This study attempts to evaluate the efficacy of a non-invasive HR monitoring framework based on an accelerometer sensor specifically during sleep. To achieve this goal, the motion induced by thoracic movements during cardiac contractions is captured by a device installed under the mattress. Signal filtering techniques and heart rate estimation using the symlets6 wavelet are part of the implemented computational framework described in this article. Subsequent analysis indicates the potential applicability of this system in the prognostic domain, with an average error margin of approximately 3 beats per minute. The results obtained represent a promising advancement in non-invasive heart rate monitoring during sleep, with potential implications for improved diagnosis and management of cardiovascular and sleep-related disorders.
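A simplified version of such a pipeline (filtering to the cardiac band, then peak counting) can be sketched with scipy. The cutoff frequencies and refractory distance are illustrative assumptions, and the article's symlets6 wavelet step is replaced here by a Butterworth band-pass for brevity.

```python
import numpy as np
from scipy.signal import butter, filtfilt, find_peaks

def estimate_hr(acc, fs):
    """Estimate heart rate (BPM) from an under-mattress accelerometer
    trace: band-pass around the cardiac band, then count peaks with a
    minimum refractory gap. Simplified stand-in for the article's
    wavelet-based pipeline."""
    b, a = butter(4, [0.7, 3.0], btype="band", fs=fs)   # ~42-180 BPM band
    filtered = filtfilt(b, a, acc)                      # zero-phase filter
    peaks, _ = find_peaks(filtered, distance=int(0.4 * fs))
    return 60.0 * len(peaks) / (len(acc) / fs)

# Synthetic trace: 1.2 Hz "heartbeat" plus 0.25 Hz respiration drift
fs = 100
t = np.arange(0, 60, 1 / fs)
acc = np.sin(2 * np.pi * 1.2 * t) + 0.3 * np.sin(2 * np.pi * 0.25 * t)
hr = estimate_hr(acc, fs)   # ~72 BPM for this synthetic trace
```

The respiration component falls outside the pass-band and is removed before peak detection, which is the same separation the wavelet decomposition achieves in the article.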
This paper compares two popular scripting implementations for hardware prototyping: Python scripts executed from user space and C-based Linux driver processes executed from kernel space, providing information to researchers when considering one or the other in their implementations. The conclusions show that deploying software scripts in kernel space makes it possible to guarantee a certain quality of sensor information using a Raspberry Pi without the need for advanced real-time operating systems.
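The user-space side of such a comparison can be sketched directly in Python: sample on a nominal period and record how far each wake-up deviates from it. The kernel-space counterpart requires a driver and is not shown; the period and iteration count below are arbitrary choices for illustration.

```python
import time
import statistics

def measure_jitter(period_s=0.005, n=200):
    """Measure scheduling jitter of a user-space Python sampling loop:
    sleep for a nominal period and record how far each wake-up
    overshoots it. Illustrates the kind of measurement behind the
    paper's comparison; absolute numbers depend on OS and load."""
    deviations = []
    last = time.perf_counter()
    for _ in range(n):
        time.sleep(period_s)
        now = time.perf_counter()
        deviations.append((now - last) - period_s)  # overshoot vs nominal
        last = now
    return statistics.mean(deviations), max(deviations)

mean_dev, worst_dev = measure_jitter()
```

On a non-real-time kernel the worst-case overshoot is typically much larger than the mean, which is exactly the gap a kernel-space driver (or a real-time OS) is meant to close.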
A post-growth economy is a comparatively new paradigm in the tourism discourse. The aim of this article is to find out the commonalities between this concept and Māori tourism and in which way the latter can contribute to a post-growth economy. A qualitative mixed-method approach, including in-depth interviews, participant observation, and secondary analysis, is applied. The results show that there is considerable overlap between Māori tourism and a post-growth economy. Differences are visible as well, regarding the value approach of Māori tourism and the indicator approach of a post-growth economy. In particular, the social innovation created in Aotearoa New Zealand at the instigation of Māori groups of granting legal personhood to parts of nature may serve as a driver for a form of tourism that is in line with the idea of a post-growth economy.
This paper applies the concept of Soja’s Thirdspace to the phenomenon of Lazgi dance and tourism in Uzbekistan. In doing so it analyses the different levels of perception (including Firstspace and Secondspace) of Lazgi and tourism via an autoethnographic lens. Complemented by expert interviews, the interaction of Lazgi and tourism is examined and characteristics of the Lazgisphere (world of Lazgi) in Uzbekistan are distilled. The results show that Lazgi is often directly or indirectly connected with tourism in Uzbekistan, but even more so serves to reaffirm national identity.
While driving, stress is caused by situations in which drivers judge their ability to manage the driving demands as insufficient or lose the capability to handle the situation. This leads to increased numbers of driver mistakes and traffic violations. Additional stressing factors are time pressure, road conditions, or dislike of driving. Stress therefore affects driver and road safety. Stress is classified into two categories depending on its duration and its effects on the body and psyche: short-term eustress and constantly present distress, which causes degenerative effects. In this work, we focus on distress. Wearable sensors are handy tools for collecting biosignals like heart rate and activity. Their easy installation and non-intrusive nature make them convenient for estimating stress. This study focuses on the investigation of stress and its implications. Specifically, the research analyzes stress within a select group of individuals from both Spain and Germany. The primary objective is to examine the influence of recognized psychological factors, including personality traits such as neuroticism, extroversion, and psychoticism, on stress and road safety. Stress levels were estimated through the collection of physiological parameters (R-R intervals) using a Polar H10 chest strap. We observed that personality traits such as extroversion exhibited similar trends during relaxation, with an average heart rate 6% higher in Spain and 3% higher in Germany. However, while driving, introverts on average experienced more stress, with rates 4% and 1% lower than those of extroverts in Spain and Germany, respectively.
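Standard time-domain HRV indices computed from R-R intervals, as delivered by a chest strap such as the Polar H10, can be sketched as follows. The study's exact stress metric is not specified here, so this is a generic illustration of the kind of quantities such an analysis starts from.

```python
import numpy as np

def hrv_metrics(rr_ms):
    """Time-domain HRV indices from R-R intervals in milliseconds:
    mean heart rate (BPM), SDNN (overall variability) and RMSSD
    (beat-to-beat variability, commonly used as a stress proxy)."""
    rr = np.asarray(rr_ms, dtype=float)
    mean_hr = 60000.0 / rr.mean()               # beats per minute
    sdnn = rr.std(ddof=1)                       # std of intervals
    rmssd = np.sqrt(np.mean(np.diff(rr) ** 2))  # RMS of successive diffs
    return mean_hr, sdnn, rmssd
```

Lower RMSSD during a driving segment relative to a relaxation baseline is the usual signature of elevated sympathetic activation, i.e. distress.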
Study design:
Retrospective, mono-centric cohort research study.
Objectives:
The purpose of this study is to validate a novel artificial intelligence (AI)-based algorithm against human-generated ground truth for radiographic parameters of adolescent idiopathic scoliosis (AIS).
Methods:
An AI algorithm was developed that is capable of detecting anatomical structures of interest (clavicles, cervical, thoracic, and lumbar spine, and sacrum) and of calculating essential radiographic parameters in AP spine X-rays fully automatically. The evaluated parameters included T1-tilt, clavicle angle (CA), coronal balance (CB), lumbar modifier, and Cobb angles in the proximal thoracic (C-PT), thoracic, and thoracolumbar regions. Measurements from 2 experienced physicians on 100 preoperative AP full spine X-rays of AIS patients were used as ground truth and to evaluate inter-rater and intra-rater reliability. The agreement between human raters and AI was compared by means of single-measure Intra-class Correlation Coefficients (ICC; absolute agreement; .75 rated as excellent), mean error, and additional statistical metrics.
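The single-measure, absolute-agreement ICC used here, ICC(2,1) in the Shrout-Fleiss convention, can be computed from a subjects-by-raters matrix with the textbook mean-square formula. This is a generic sketch, not the authors' statistical code.

```python
import numpy as np

def icc_2_1(ratings):
    """Single-measure ICC(2,1): two-way random effects, absolute
    agreement. `ratings` is an (n subjects x k raters) array."""
    Y = np.asarray(ratings, dtype=float)
    n, k = Y.shape
    grand = Y.mean()
    ssr = k * np.sum((Y.mean(axis=1) - grand) ** 2)   # between subjects
    ssc = n * np.sum((Y.mean(axis=0) - grand) ** 2)   # between raters
    sst = np.sum((Y - grand) ** 2)
    sse = sst - ssr - ssc                             # residual
    msr = ssr / (n - 1)
    msc = ssc / (k - 1)
    mse = sse / ((n - 1) * (k - 1))
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)
```

Because it measures absolute agreement, ICC(2,1) penalizes a systematic offset between raters even when their ratings are perfectly correlated, which is the appropriate behaviour when comparing an algorithm against human ground truth.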
Results:
The comparison between human raters resulted in excellent ICC values for intra- (range: .97-1) and inter-rater (.85-.99) reliability. The algorithm was able to determine all parameters in 100% of images with excellent ICC values (.78-.98). Consistently with the human raters, ICC values were typically smallest for C-PT (eg, rater 1A vs AI: .78, mean error: 4.7°) and largest for CB (.96, -.5 mm) as well as CA (.98, .2°).
Conclusions:
The AI algorithm shows excellent reliability and agreement with human raters for coronal parameters in preoperative full spine images. The reliability and speed offered by the AI algorithm could contribute to the efficient analysis of large datasets (eg, registry studies) and measurements in clinical practice.
In the last decade, both sustainability and business models for sustainability have increased in importance, and sustainability issues have become a focus of discussion. These issues are interlinked and often negatively impact each other. They are complex, include socio-ecological dilemmas, exist in almost every aspect of our society (economic, environmental, social), and are hard to formulate. They may have multiple, incompatible solutions, competing objectives, and open timeframes. Previous research has not developed satisfactory ways to comprehend and solve problems of this nature. Life Cycle Assessment (LCA), the widely used method for assessing sustainable development, has reached its limits in achieving sustainable social goals. System Dynamics (SD) is a valuable methodology that enhances understanding of the structure and internal dynamic behaviours of large, complex, and dynamic systems, leading to improved decision-making. It offers a philosophy and a set of tools for modelling, analysing, and simulating dynamic systems. This research applies system dynamics methods in conjunction with simulation software to assess the potential impact of a solution on the environmental, social, and economic aspects of a complex system, aiming to gain insights into the system's behaviour and to identify the potential consequences of interventions or policy changes across multiple dimensions. The paper responds to the urgent need for a new business model by presenting a concept for an adapted dynamic business modelling for sustainability (aDBMfS) using system dynamics. Case studies from the smartphone industry are applied.
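The basic building block of any system dynamics model, a stock with an inflow and an outflow integrated over time, can be sketched with simple Euler integration. This toy example only shows the mechanics; it is not the aDBMfS model itself.

```python
import numpy as np

def simulate_stock(inflow, outflow_rate, stock0, dt, steps):
    """Euler integration of a single stock-and-flow structure:
    d(stock)/dt = inflow - outflow_rate * stock.
    The stock converges to the equilibrium inflow / outflow_rate."""
    stock = stock0
    trace = [stock]
    for _ in range(steps):
        stock += dt * (inflow - outflow_rate * stock)
        trace.append(stock)
    return np.array(trace)
```

Full SD models chain many such stocks with feedback loops between them; simulation software such as Vensim or Stella automates exactly this integration over the whole structure.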
“Crowd contamination”? (2023)
Misconduct allegations have been found to affect not only the alleged firm but also other, unalleged firms in the form of reputational and financial spillover effects. It has remained unexplored, however, how the number of prior allegations against other firms matters for an individual firm currently facing an allegation. Building on behavioral decision theory, we argue that the relationship between allegation prevalence among other firms and investor reaction to a focal allegation is inverted U-shaped. The inverted U-shaped effect is theorized to emerge from the combination of two effects: in the absence of prior allegations against other firms, investors fail to anticipate the focal allegation and hence react particularly negatively (“anticipation effect”); in the case of many prior allegations against other firms, investors also react particularly negatively because they perceive the focal allegation as more warranted (“evaluation effect”). The multi-industry empirical analysis of 8,802 misconduct allegations against US firms between 2007 and 2017 provides support for our predicted inverted U-shaped effect. Our study complements recent misconduct research on spillover effects by highlighting that not only can a current allegation against an individual firm “contaminate” other, unalleged firms, but prior allegations against other firms can also “contaminate” investor reaction to a focal allegation against an individual firm.
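An inverted-U hypothesis of this kind is typically tested by including a quadratic term in the regression and checking the sign of its coefficient and the location of the turning point. The following is a bare-bones sketch with no controls or standard errors, on data the reader supplies; it illustrates the shape test only, not the paper's full econometric specification.

```python
import numpy as np

def fit_inverted_u(x, y):
    """Fit y = b0 + b1*x + b2*x^2 and report whether the estimated
    shape is an inverted U (b2 < 0) together with the turning point
    at -b1 / (2*b2)."""
    b2, b1, b0 = np.polyfit(x, y, 2)   # highest degree first
    turning = -b1 / (2 * b2) if b2 != 0 else np.nan
    return b2 < 0, turning
```

In the paper's setting, x would be allegation prevalence among other firms and y the investor reaction to the focal allegation; a significantly negative quadratic coefficient with a turning point inside the observed range supports the inverted-U prediction.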
The transformation to Industry 4.0, which is generally seen as a solution to increasing market challenges, is forcing companies to radically change their way of thinking and to be open to new forms of cooperation. In this context, opening up the innovation process is widely seen as a necessity to meet these challenges, especially for small and medium-sized enterprises (SMEs). The aim of the study therefore is to analyze how cooperation today can be characterized, how this character has changed since the establishment of the term Industry 4.0 at the Hanover Fair in 2011, and which cooperation strategies have proven successful. The analysis consists of a quantitative secondary data analysis that includes country-specific data from 35 European countries for 2010 and 2016, collected by the European Commission and the OECD. The research, focusing on the secondary sector, shows that multinational enterprises (MNEs) still tend to cooperate more than SMEs, with a slight overall trend towards protectionism. Nevertheless, there is a clear tendency towards the opening-up of SMEs. In this regard, especially universities, competitors, and suppliers have become increasingly attractive as cooperation partners for SMEs.
Cities around the world are facing the implications of a changing climate as an increasingly pressing issue. The negative effects of climate change are already being felt today. Therefore, adaptation to these changes is a mission that every city must master. Leading practices worldwide demonstrate various urban efforts on climate change adaptation (CCA) which are already underway. Above all, the integration of climate data, remote sensing, and in situ data is key to a successful and measurable adaptation strategy. Furthermore, these data can act as a timely decision support tool for municipalities to develop an adaptation strategy, decide which actions to prioritize, and gain the necessary buy-in from local policymakers. The implementation of agile data workflows can facilitate the integration of climate data into climate-resilient urban planning. Due to local specificities, (supra)national, regional, and municipal policies and (by)laws, as well as geographic and related climatic differences worldwide, there is no single path to climate-resilient urban planning. Agile data workflows can support interdepartmental collaboration and, therefore, need to be integrated into existing management processes and government structures. Agile management, which has its origins in software development, can be a way to break down traditional management practices, such as static waterfall models and sluggish stage-gate processes, and enable the increased level of flexibility and agility required in urgent situations. This paper presents the findings of an empirical case study conducted in cooperation with the City of Constance in southern Germany, which is pursuing a transdisciplinary and trans-sectoral co-development approach to make management processes more agile in the context of climate change adaptation.
The aim is to present a possible way of integrating climate data into CCA planning by changing the management approach and implementing a toolbox for low-threshold access to climate data. The city administration, in collaboration with the University of Applied Sciences Constance, the Climate Service Center Germany (GERICS), and the University of Stuttgart, developed a co-creative and participatory project, CoKLIMAx, with the objective of integrating climate data into administrative processes in the form of a toolbox. One key element of CoKLIMAx is the involvement of the population, the city administration, and political decision-makers through targeted communication and regular feedback loops among all involved departments and stakeholder groups. Based on the results of a survey of 72 administrative staff members and a literature review on agile management in municipalities and city administrations, recommendations on a workflow and communication structure for cross-departmental strategies for resilient urban planning in the City of Constance were developed.
Sanctions encompass a wide set of policy instruments restricting cross‐border economic activities. In this paper, we study how different types of sanctions affect the export behavior of firms to the targeted countries. We combine Danish register data, including information on firm‐destination‐specific exports, with information on sanctions imposed by Denmark from the Global Sanctions Database. Our data allow us to study firms' export behavior in 62 sanctioned countries, amounting to a total of 453 country‐years with sanctions over the period 2000–2015. Methodologically, we apply a two‐stage estimation strategy to properly account for multilateral resistance terms. We find that, on average, sanctions lead to a significant reduction in firms' destination‐specific exports and a significant increase in firms' probability to exit the destination. Next, we study heterogeneity in the effects of sanctions across (i) sanction types and sanction packages, (ii) the objectives of sanctions, and (iii) countries subject to sanctions. Results confirm that the effects of sanctions on firms' export behavior vary considerably across these three dimensions.
Background: Polysomnography (PSG) is the gold standard for detecting obstructive sleep apnea (OSA). However, this technique has many disadvantages when used outside the hospital or for daily use. Portable monitors (PMs) aim to streamline the OSA detection process through deep learning (DL).
Materials and methods: We studied how to detect OSA events and calculate the apnea-hypopnea index (AHI) using deep learning models that aim to be implemented on PMs. Several deep learning models are presented after being trained on polysomnography data from the National Sleep Research Resource (NSRR) repository. The best hyperparameters for the DL architecture are presented. In addition, emphasis is placed on model explainability techniques, concretely on Gradient-weighted Class Activation Mapping (Grad-CAM).
Results: The results for the best DL model are presented and analyzed. The interpretability of the DL model is also analyzed by studying the regions of the signals that are most relevant for the model to make the decision. The model that yields the best result is a one-dimensional convolutional neural network (1D-CNN) with 84.3% accuracy.
Conclusion: The use of PMs using machine learning techniques for detecting OSA events still has a long way to go. However, our method for developing explainable DL models demonstrates that PMs appear to be a promising alternative to PSG in the future for the detection of obstructive apnea events and the automatic calculation of AHI.
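The Grad-CAM technique named in the methods can be illustrated on a minimal 1D-CNN (convolution, ReLU, global average pooling, linear classifier) in plain numpy: each feature map is weighted by the gradient of the class score with respect to that map, and the weighted sum is passed through a ReLU. For this particular architecture the gradient is constant over time, so the map weights reduce to the scaled classifier weights. The network below uses random weights purely to show the mechanics; it is not the paper's 1D-CNN.

```python
import numpy as np

rng = np.random.default_rng(0)

def grad_cam_1d(x, W_conv, W_fc, cls):
    """Grad-CAM for a toy 1D-CNN: conv -> ReLU -> global average pool
    -> linear. Returns the class-activation map over time and the
    class score."""
    k = W_conv.shape[1]                       # kernel width
    T = len(x) - k + 1                        # valid-convolution length
    # feature maps A[f, t] = ReLU(conv output)
    A = np.maximum(0.0, np.array(
        [[np.dot(W_conv[f], x[t:t + k]) for t in range(T)]
         for f in range(W_conv.shape[0])]))
    score = W_fc[cls] @ A.mean(axis=1)        # GAP then linear class score
    # For GAP + linear, d(score)/dA[f, t] = W_fc[cls, f] / T everywhere,
    # so the Grad-CAM weight alpha_f is simply W_fc[cls, f] / T.
    alpha = W_fc[cls] / T
    cam = np.maximum(0.0, alpha @ A)          # ReLU of weighted sum
    return cam, score

x = rng.normal(size=64)                       # toy 1-channel signal
W_conv = rng.normal(size=(4, 5))              # 4 filters, width 5
W_fc = rng.normal(size=(2, 4))                # 2 output classes
cam, score = grad_cam_1d(x, W_conv, W_fc, cls=0)
```

In the OSA setting, high values of `cam` would mark the regions of the respiratory signal that most drove the model's apnea decision, which is the interpretability analysis reported in the results.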
Increasing demand for sustainable, resilient, and low-carbon construction materials has highlighted the potential of Compacted Mineral Mixtures (CMMs), which are formulated from various soil types (sand, silt, clay) and recycled mineral waste. This paper presents a comprehensive inter- and transdisciplinary research concept that aims to industrialise and scale up the adoption of CMM-based construction materials and methods, thereby accelerating the construction industry’s systemic transition towards carbon neutrality. By drawing upon the latest advances in soil mechanics, rheology, and automation, we propose the development of a robust material properties database to inform the design and application of CMM-based materials, taking into account their complex, time-dependent behaviour. Advanced soil mechanical tests would be utilised to ensure optimal performance under various loading and ageing conditions. This research has also recognised the importance of context-specific strategies for CMM adoption. We have explored the implications and limitations of implementing the proposed framework in developing countries, particularly where resources may be constrained. We aim to shed light on socio-economic and regulatory aspects that could influence the adoption of these sustainable construction methods. The proposed concept explores how the automated production of CMM-based wall elements can become a fast, competitive, emission-free, and recyclable alternative to traditional masonry and concrete construction techniques. We advocate for the integration of open-source digital platform technologies to enhance data accessibility, processing, and knowledge acquisition; to boost confidence in CMM-based technologies; and to catalyse their widespread adoption. We believe that the transformative potential of this research necessitates a blend of basic and applied investigation using a comprehensive, holistic, and transfer-oriented methodology. 
Thus, this paper serves to highlight the viability and multiple benefits of CMMs in construction, emphasising their pivotal role in advancing sustainable development and resilience in the built environment.
This paper introduces the third update/release of the Global Sanctions Data Base (GSDB-R3). The GSDB-R3 extends the period of coverage from 1950–2019 to 1950–2022, which includes two special periods—COVID-19 and the new sanctions against Russia. This update of the GSDB contains a total of 1325 cases. In response to multiple inquiries and requests, the GSDB-R3 has been amended with a new variable that distinguishes between unilateral and multilateral sanctions. As before, the GSDB comes in two versions, case-specific and dyadic, which are freely available upon request at GSDB@drexel.edu. To highlight one of the new features of the GSDB, we estimate the heterogeneous effects of unilateral and multilateral sanctions on trade. We also obtain estimates of the effects on trade of the 2014 sanctions on Russia.
The scoring of sleep stages is one of the essential tasks in sleep analysis. Since a manual procedure requires considerable human and financial resources, and incorporates some subjectivity, an automated approach could result in several advantages. There have been many developments in this area, and in order to provide a comprehensive overview, it is essential to review relevant recent works and summarise the characteristics of the approaches, which is the main aim of this article. To achieve this, we examined articles published between 2018 and 2022 that dealt with the automated scoring of sleep stages. In the final selection for in-depth analysis, 125 articles were included after reviewing a total of 515 publications. The results revealed that automatic scoring demonstrates good quality (Cohen's kappa exceeding 0.80 and accuracy exceeding 90%) in analysing EEG/EEG + EOG + EMG signals. At the same time, it should be noted that there has been no breakthrough in the quality of results using these signals in recent years. Systems involving other signals that could potentially be acquired more conveniently for the user (e.g. respiratory, cardiac or movement signals) remain more challenging to implement with a high level of reliability but have considerable innovation capability. In general, automatic sleep stage scoring has excellent potential to assist medical professionals while providing an objective assessment.
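Agreement measures such as the Cohen's kappa values cited above can be computed directly from two raters' epoch-by-epoch labels. The sketch below uses invented toy labels, not study data:

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: chance-corrected agreement between two raters."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Expected agreement if the two raters labelled independently
    freq_a = Counter(rater_a)
    freq_b = Counter(rater_b)
    expected = sum(freq_a[k] * freq_b.get(k, 0) for k in freq_a) / (n * n)
    return (observed - expected) / (1 - expected)

# Toy 30-s epoch labels (W = wake, N = NREM, R = REM)
manual = ["W", "N", "N", "R", "N", "W", "N", "R"]
auto   = ["W", "N", "N", "R", "N", "N", "N", "R"]
print(round(cohens_kappa(manual, auto), 3))  # → 0.789
```

Values above 0.80, as reported for EEG-based scorers, indicate almost perfect agreement on the usual Landis-Koch scale.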
In order to ensure sufficient recovery of the human body and brain, healthy sleep is indispensable. For this purpose, appropriate therapy should be initiated at an early stage in the case of sleep disorders. For some sleep disorders (e.g., insomnia), a sleep diary is essential for diagnosis and therapy monitoring. However, subjective measurement with a sleep diary has several disadvantages, requiring regular action from the user and leading to decreased comfort and potential data loss. To automate sleep monitoring and increase user comfort, one could consider replacing a sleep diary with an automatic measurement, such as a smartwatch, which would not disturb sleep. To obtain accurate results on the evaluation of the possibility of such a replacement, a field study was conducted with a total of 166 overnight recordings, followed by an analysis of the results. In this evaluation, objective sleep measurement with a Samsung Galaxy Watch 4 was compared to a subjective approach with a sleep diary, which is a standard method in sleep medicine. The focus was on comparing four relevant sleep characteristics: falling asleep time, waking up time, total sleep time (TST), and sleep efficiency (SE). After evaluating the results, it was concluded that a smartwatch could replace subjective measurement to determine falling asleep and waking up time, considering some level of inaccuracy. In the case of SE, substitution was also shown to be possible. However, some individual recordings showed a higher discrepancy in results between the two approaches. For TST, however, the evaluation currently does not allow us to recommend substituting the measurement method for this sleep parameter. The appropriateness of replacing sleep diary measurement with a smartwatch depends on the acceptable levels of discrepancy. We propose four levels of similarity of results, defining ranges of absolute differences between objective and subjective measurements.
By considering the values in the provided table and knowing the required accuracy, it is possible to determine the suitability of substitution in each individual case. The introduction of a “similarity level” parameter increases the adaptability and reusability of study findings in individual practical cases.
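The "similarity level" idea can be sketched as a simple threshold classification of the absolute difference between diary and smartwatch values. The thresholds below are illustrative placeholders, not the values from the study's table:

```python
def similarity_level(subjective_min, objective_min, thresholds=(5, 15, 30)):
    """Assign a similarity level from the absolute difference in minutes.

    The three thresholds are invented for illustration; the study defines
    its own ranges in the provided table.
    """
    diff = abs(subjective_min - objective_min)
    for level, limit in enumerate(thresholds, start=1):
        if diff <= limit:
            return level
    return len(thresholds) + 1  # lowest similarity

# Diary: fell asleep at 23:40 (minute 1420); watch: 23:52 (minute 1432)
print(similarity_level(1420, 1432))  # difference of 12 min → level 2
```

Given an application's acceptable discrepancy, one would check whether the resulting level meets the required accuracy before substituting the diary.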
Characterization of NiTi Shape Memory Damping Elements designed for Automotive Safety Systems
(2014)
Actuator elements made of NiTi shape memory material are increasingly used in industry because of their unique properties. Due to the martensitic phase change, they can revert to their original shape by heating when subjected to an appropriate treatment. This thermal shape memory effect (SME) can show a significant shape change combined with a considerable force. Therefore, such elements can be used to solve many technical tasks in the field of actuating elements and mechatronics and will play an increasing role in the coming years, especially in automotive technology, energy management, power, and mechanical engineering as well as medical technology. Besides the thermal SME, these materials also show a mechanical SME, characterized by a superelastic plateau with reversible elongations in the range of 8%. This behavior is based on the formation of stress-induced martensite from loaded austenite at constant temperature and facilitates many applications, especially in the medical field. Both SMEs are accompanied by energy dissipation during the martensitic phase change. This paper describes the first results obtained on different actuator and superelastic NiTi wires concerning their use as damping elements in automotive safety systems. In a first step, the damping behavior of small NiTi wires up to 0.5 mm diameter was examined at testing speeds varying between 0.1 and 50 mm/s on an adapted tensile testing machine. In order to realize higher testing speeds, a drop impact testing machine was designed, which allows testing speeds up to 4000 mm/s. After introducing this new type of testing machine, the first results of vertical-shock tests of superelastic and electrically activated actuator wires are presented. The characterization of these high dynamic phase change parameters represents the basis for new applications for shape memory damping elements, especially in automotive safety systems.
In the automotive sector, many electromagnetically, pyrotechnically or mechanically driven actuators are integrated to run comfort systems and to control safety systems in modern passenger cars. Using shape memory alloys (SMA), the existing systems could be simplified, performing the same function through new mechanisms with reduced size, weight, and costs. A drawback for the use of SMA in safety systems is the lack of materials knowledge concerning the durability of the switching function (long-time stability of the shape memory effect). Pedestrian safety systems play a significant role in reducing injuries and fatal casualties caused by accidents. One automotive safety system for pedestrian protection is the bonnet lifting system. Based on such an application, this article gives an introduction to existing bonnet lifting systems for pedestrian protection, describes the use of quick changing shape memory actuators and the results of the study concerning the long-time stability of the tested NiTi wires. These wires were trained, exposed for up to 4 years at elevated temperatures (up to 140°C) and tested regarding their phase change temperatures, times, and strokes. For example, it was found that the A_P temperature is shifted toward higher temperatures with longer exposure periods and higher temperatures. However, in the functional testing plant a delay in the switching time could not be detected. This article gives some answers concerning the long-time stability of NiTi wires that were missing until now. With this knowledge, the number of future automotive applications using SMA can be increased. It can be concluded that the use of quick changing shape memory actuators in safety systems could simplify the mechanism, reduce maintenance and manufacturing costs, and should also be applicable to other automotive applications.
Network effects, economies of scale, and lock-in-effects increasingly lead to a concentration of digital resources and capabilities, hindering the free and equitable development of digital entrepreneurship, new skills, and jobs, especially in small communities and their small and medium-sized enterprises (“SMEs”). To ensure the affordability and accessibility of technologies, promote digital entrepreneurship and community well-being, and protect digital rights, we propose data cooperatives as a vehicle for secure, trusted, and sovereign data exchange. In post-pandemic times, community/SME-led cooperatives can play a vital role by ensuring that supply chains to support digital commons are uninterrupted, resilient, and decentralized. Digital commons and data sovereignty provide communities with affordable and easy access to information and the ability to collectively negotiate data-related decisions. Moreover, cooperative commons (a) provide access to the infrastructure that underpins the modern economy, (b) preserve property rights, and (c) ensure that privatization and monopolization do not further erode self-determination, especially in a world increasingly mediated by AI. Thus, governance plays a significant role in accelerating communities’/SMEs’ digital transformation and addressing their challenges. Cooperatives thrive on digital governance and standards such as open trusted application programming interfaces (“APIs”) that increase the efficiency, technological capabilities, and capacities of participants and, most importantly, integrate, enable, and accelerate the digital transformation of SMEs in the overall process. This review article analyses an array of transformative use cases that underline the potential of cooperative data governance. 
These case studies exemplify how data and platform cooperatives, through their innovative value creation mechanisms, can elevate digital commons and value chains to a new dimension of collaboration, thereby addressing pressing societal issues. Guided by our research aim, we propose a policy framework that supports the practical implementation of digital federation platforms and data cooperatives. This policy blueprint intends to facilitate sustainable development in both the Global South and North, fostering equitable and inclusive data governance strategies.
This study aims to investigate the utilization of Bayesian techniques for the calibration of micro-electro-mechanical system (MEMS) accelerometers. These devices have garnered substantial interest in various practical applications and typically require calibration through error-correcting functions. The parameters of these error-correcting functions are determined during a calibration process. However, due to various sources of noise, these parameters cannot be determined with precision, making it desirable to incorporate uncertainty in the calibration models. Bayesian modeling offers a natural and complete way of reflecting uncertainty by treating the model parameters as random variables rather than fixed values. In addition, Bayesian modeling enables the incorporation of prior knowledge, making it an ideal choice for calibration. Nevertheless, it is infrequently used in sensor calibration. This study introduces Bayesian methods for the calibration of MEMS accelerometer data in a straightforward manner using recent advances in probabilistic programming.
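As a minimal illustration of the Bayesian calibration idea (not the probabilistic-programming setup used in the study), the sketch below recovers a gain and offset for a simulated accelerometer with a random-walk Metropolis sampler; all data, priors, and parameter values are synthetic:

```python
import math, random

random.seed(0)

# Synthetic calibration data: known reference accelerations (in g) and
# noisy sensor readings generated with gain 1.05 and offset -0.02.
true_gain, true_offset, noise_sd = 1.05, -0.02, 0.01
ref = [-1.0, -0.5, 0.0, 0.5, 1.0] * 4
obs = [true_gain * a + true_offset + random.gauss(0, noise_sd) for a in ref]

def log_posterior(gain, offset):
    """Gaussian likelihood plus weakly informative N(1, 0.5) / N(0, 0.5) priors."""
    lp = -0.5 * ((gain - 1.0) / 0.5) ** 2 - 0.5 * (offset / 0.5) ** 2
    for a, y in zip(ref, obs):
        lp += -0.5 * ((y - (gain * a + offset)) / noise_sd) ** 2
    return lp

# Random-walk Metropolis sampling of the posterior over (gain, offset)
gain, offset = 1.0, 0.0
samples = []
for i in range(20000):
    g_prop = gain + random.gauss(0, 0.005)
    o_prop = offset + random.gauss(0, 0.005)
    if math.log(random.random()) < log_posterior(g_prop, o_prop) - log_posterior(gain, offset):
        gain, offset = g_prop, o_prop
    if i >= 5000:          # discard burn-in
        samples.append((gain, offset))

post_gain = sum(s[0] for s in samples) / len(samples)
post_offset = sum(s[1] for s in samples) / len(samples)
print(round(post_gain, 2), round(post_offset, 2))
```

The posterior spread of the samples, not shown here, is what expresses the calibration uncertainty that point estimates discard.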
Non-volatile NAND flash memories store information as an electrical charge. Different read reference voltages are applied to read the data. However, the threshold voltage distributions vary due to aging effects like program/erase cycling and data retention time. It is necessary to adapt the read reference voltages for different life-cycle conditions to minimize the error probability during readout. In the past, methods based on pilot data or high-resolution threshold voltage histograms were proposed to estimate the changes in voltage distributions. In this work, we propose a machine learning approach with neural networks to estimate the read reference voltages. The proposed method utilizes sparse histogram data for the threshold voltage distributions. For reading the information from triple-level cell (TLC) memories, several read reference voltages are applied in sequence. We consider two histogram resolutions. The simplest histogram consists of the zero-and-one ratios for the hard decision read operation, whereas a higher resolution is obtained by considering the quantization levels for soft-input decoding. This approach does not require pilot data for the voltage adaptation. Furthermore, only a few measurements of extreme points of the threshold voltage distributions are required as training data. Measurements under different conditions verify the proposed approach. The resulting neural networks perform well under different life-cycle conditions.
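The regression idea, mapping a sparse histogram feature such as the zero-and-one ratio to a read voltage correction, can be sketched with a toy single-hidden-layer network trained on synthetic data; the real method's architecture, features, and training data differ:

```python
import math, random

random.seed(1)

# Synthetic training pairs: fraction of cells read as zero at the current
# reference voltage vs. the voltage shift (arbitrary units) that would
# re-centre the read threshold. The monotone mapping is a stand-in for
# real characterisation measurements.
data = [(r / 20.0, 2.0 * (r / 20.0 - 0.5)) for r in range(21)]

# One-hidden-layer network with 4 tanh units, plain stochastic gradient descent
H = 4
w1 = [random.uniform(-1, 1) for _ in range(H)]
b1 = [0.0] * H
w2 = [random.uniform(-1, 1) for _ in range(H)]
b2 = 0.0

def forward(x):
    h = [math.tanh(w1[j] * x + b1[j]) for j in range(H)]
    return sum(w2[j] * h[j] for j in range(H)) + b2, h

def mse():
    return sum((forward(x)[0] - y) ** 2 for x, y in data) / len(data)

lr = 0.05
for _ in range(2000):
    for x, y in data:
        out, h = forward(x)
        err = out - y                       # gradient of 0.5 * err^2
        for j in range(H):
            grad_h = err * w2[j] * (1 - h[j] ** 2)   # backprop through tanh
            w2[j] -= lr * err * h[j]
            b1[j] -= lr * grad_h
            w1[j] -= lr * grad_h * x
        b2 -= lr * err

print(round(mse(), 4))
```

After training, evaluating the network on a freshly measured zero-ratio would yield the estimated voltage shift without any pilot data.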
Sleep is essential to physical and mental health. However, the traditional approach to sleep analysis—polysomnography (PSG)—is intrusive and expensive. Therefore, there is great interest in the development of non-contact, non-invasive, and non-intrusive sleep monitoring systems and technologies that can reliably and accurately measure cardiorespiratory parameters with minimal impact on the patient. This has led to the development of other relevant approaches, which are characterised, for example, by the fact that they allow greater freedom of movement and do not require direct contact with the body, i.e., they are non-contact. This systematic review discusses the relevant methods and technologies for non-contact monitoring of cardiorespiratory activity during sleep. Taking into account the current state of the art in non-intrusive technologies, we can identify the methods of non-intrusive monitoring of cardiac and respiratory activity, the technologies and types of sensors used, and the possible physiological parameters available for analysis. To do this, we conducted a literature review and summarised current research on the use of non-contact technologies for non-intrusive monitoring of cardiac and respiratory activity. The inclusion and exclusion criteria for the selection of publications were established prior to the start of the search. Publications were assessed using one main question and several specific questions. We obtained 3774 unique articles from four literature databases (Web of Science, IEEE Xplore, PubMed, and Scopus) and checked them for relevance, resulting in 54 articles that were analysed in a structured way using terminology. The result was 15 different types of sensors and devices (e.g., radar, temperature sensors, motion sensors, cameras) that can be installed in hospital wards and departments or in the environment. 
The ability to detect heart rate, respiratory rate, and sleep disorders such as apnoea was among the characteristics examined to investigate the overall effectiveness of the systems and technologies considered for cardiorespiratory monitoring. In addition, the advantages and disadvantages of the considered systems and technologies were identified by answering the identified research questions. The results obtained allow us to determine the current trends and the vector of development of medical technologies in sleep medicine for future researchers and research.
Insecurity Refactoring is a change to the internal structure of software to inject a vulnerability without changing the observable behavior in a normal use case scenario. We formally explain an implementation of Insecurity Refactoring that injects vulnerabilities into source code projects using static code analysis. It creates learning examples with source code patterns from known vulnerabilities.
Insecurity Refactoring is achieved by creating an Adversary Controlled Input Dataflow tree based on a Code Property Graph. The tree is used to find possible injection paths. Transformation of the possible injection paths allows vulnerabilities to be injected. Insertion of data flow patterns introduces different code patterns from related Common Vulnerabilities and Exposures (CVE) reports. The approach is evaluated on 307 open source projects. Additionally, insecurity-refactored projects are deployed in virtual machines to be used as learning examples. Different static code analysis tools, dynamic tools and manual inspections are used with modified projects to confirm the presence of vulnerabilities.
The results show that in 8.1% of the open source projects it is possible to inject vulnerabilities. Different inspected code patterns from CVE reports can be inserted using corresponding data flow patterns. Furthermore, the results reveal that the injected vulnerabilities are useful for a small sample size of attendees (n=16). Insecurity Refactoring is useful to automatically generate learning examples to improve software security training. It uses real projects as a base, whereas the injected vulnerabilities stem from real CVE reports. This makes the injected vulnerabilities unique and realistic.
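The core search over an adversary-controlled dataflow structure can be illustrated with a toy graph: a depth-first search from input sources to security-sensitive sinks that skips sanitizing nodes. Node names and edges below are invented for illustration and are not the paper's graph representation:

```python
# Toy data-flow edges of a code property graph: an edge means the value
# of one node can flow into another.
flows = {
    "http_param":   ["user_input"],
    "user_input":   ["sanitize", "query_concat"],
    "sanitize":     ["query_exec"],
    "query_concat": ["query_exec"],
}
sources = {"http_param"}           # adversary-controlled inputs
sinks = {"query_exec"}             # security-sensitive operations
sanitizers = {"sanitize"}          # nodes that neutralise tainted data

def injection_paths(node, path=()):
    """Depth-first search for source-to-sink paths that bypass sanitizers."""
    path = path + (node,)
    if node in sinks:
        yield path
    for nxt in flows.get(node, []):
        if nxt not in sanitizers and nxt not in path:
            yield from injection_paths(nxt, path)

for src in sources:
    for p in injection_paths(src):
        print(" -> ".join(p))
# → http_param -> user_input -> query_concat -> query_exec
```

Each reported path marks a location where a transformation could remove or weaken the sanitization to create a realistic learning example.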
Purpose
In order to combat climate change and safeguard a liveable future we need fundamental and rapid social change. Climate communication can play an important role to nurture the public engagement needed for this change, and higher education for sustainability can learn from climate communication.
Approach
The scientific evidence base on climate communication for effective public engagement is summarised into ten key principles, including ‘basing communication on people’s values’, ‘conscious use of framing’, and ‘turning concern into action’. Based on the author’s perspective and experience in the university context, implications are explored for sustainability in higher education.
Findings
The article provides suggestions for teaching (e.g. complement information with consistent behaviour by the lecturer, integrate local stories, and provide students with basic skills to communicate climate effectively), for research (e.g. make teaching for effective engagement the subject of applied research), for universities' third mission to contribute to sustainable development in society (e.g. provide climate communication trainings to empower local stakeholders), and for greening the campus (develop a proper engagement infrastructure, e.g. by a university storytelling exchange on climate action).
Originality
The article provides an up-to-date overview of climate communication research, which is in itself original. This evidence base holds interesting learnings for institutions of higher education, and the link between climate communication and universities has so far not been explored comprehensively.
Sleep disorders can impact daily life, affecting physical, emotional, and cognitive well-being. Due to the time-consuming, highly obtrusive, and expensive nature of standard approaches such as polysomnography, it is of great interest to develop a noninvasive and unobtrusive in-home sleep monitoring system that can reliably and accurately measure cardiorespiratory parameters while causing minimal discomfort to the user's sleep. We developed a low-cost Out of Center Sleep Testing (OCST) system with low complexity to measure cardiorespiratory parameters. We tested and validated two force-sensitive resistor strip sensors under the bed mattress covering the thoracic and abdominal regions. Twenty subjects were recruited, including 12 males and 8 females. The ballistocardiogram signal was processed using the 4th smooth level of the discrete wavelet transform and the 2nd order Butterworth bandpass filter to measure the heart rate and respiration rate, respectively. We reached a total error (with respect to the reference sensors) of 3.24 beats per minute for heart rate and 2.32 breaths per minute for respiration rate. For males and females, heart rate errors were 3.47 and 2.68, and respiration rate errors were 2.32 and 2.33, respectively. We developed and verified the reliability and applicability of the system. It showed only a minor dependency on sleeping position, one of the major confounding factors in sleep measurement. We identified the sensor under the thoracic region as the optimal configuration for cardiorespiratory measurement. Although testing the system with healthy subjects and regular patterns of cardiorespiratory parameters showed promising results, further investigation is required with the bandwidth frequency and validation of the system with larger groups of subjects, including patients.
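The separation of cardiac from respiratory activity can be illustrated in drastically simplified form: detrending a synthetic BCG-like trace to suppress the slow respiratory component, then estimating heart rate from peak intervals. The moving-average detrend below is a crude stand-in for the wavelet/Butterworth pipeline used in the study, and the signal is an idealised sinusoid, not sensor data:

```python
import math

fs = 100                      # sampling rate in Hz
hr_true = 72                  # beats per minute of the synthetic signal

# Synthetic BCG-like trace: a cardiac component riding on a slow
# respiratory baseline (both idealised sinusoids).
n = fs * 30
sig = [math.sin(2 * math.pi * hr_true / 60 * t / fs)
       + 2.0 * math.sin(2 * math.pi * 0.25 * t / fs) for t in range(n)]

# Detrend with a 1-s moving average to suppress the respiratory component
w = fs // 2
baseline = []
for i in range(n):
    win = sig[max(0, i - w): i + w]
    baseline.append(sum(win) / len(win))
card = [s - b for s, b in zip(sig, baseline)]

# Count prominent local maxima and convert the mean beat interval to BPM
peaks = [i for i in range(1, n - 1)
         if card[i] > 0.5 and card[i - 1] < card[i] > card[i + 1]]
intervals = [(b - a) / fs for a, b in zip(peaks, peaks[1:])]
hr_est = 60 / (sum(intervals) / len(intervals))
print(round(hr_est))
```

Real BCG traces are far noisier, which is why the study relies on wavelet smoothing and bandpass filtering rather than a plain moving average.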
Short-Term Density Forecasting of Low-Voltage Load using Bernstein-Polynomial Normalizing Flows
(2023)
The transition to a fully renewable energy grid requires better forecasting of demand at the low-voltage level to increase efficiency and ensure reliable control. However, high fluctuations and increasing electrification cause huge forecast variability, not reflected in traditional point estimates. Probabilistic load forecasts take uncertainties into account and thus allow more informed decision-making for the planning and operation of low-carbon energy systems. We propose an approach for flexible conditional density forecasting of short-term load based on Bernstein polynomial normalizing flows, where a neural network controls the parameters of the flow. In an empirical study with 3639 smart meter customers, our density predictions for 24h-ahead load forecasting compare favorably against Gaussian and Gaussian mixture densities. Furthermore, they outperform a non-parametric approach based on the pinball loss, especially in low-data scenarios.
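The building block of such flows is a monotone Bernstein polynomial transformation, h(z) = sum_k theta_k * B_{k,M}(z) with non-decreasing coefficients theta_k. A minimal sketch with hand-picked coefficients (in the approach above, a neural network predicts them from the conditioning inputs):

```python
from math import comb

def bernstein_transform(z, theta):
    """Evaluate h(z) = sum_k theta_k * B_{k,M}(z) for z in [0, 1].

    With non-decreasing coefficients theta the transformation is monotone,
    hence invertible - the property a normalizing flow needs.
    """
    M = len(theta) - 1
    return sum(t * comb(M, k) * z**k * (1 - z)**(M - k)
               for k, t in enumerate(theta))

# Non-decreasing coefficients chosen by hand for illustration
theta = [-2.0, -1.5, 0.0, 0.5, 3.0]

zs = [i / 10 for i in range(11)]
hs = [bernstein_transform(z, theta) for z in zs]
assert all(a < b for a, b in zip(hs, hs[1:]))   # strictly increasing
print(round(hs[0], 2), round(hs[-1], 2))        # → -2.0 3.0
```

Note the endpoint property h(0) = theta_0 and h(1) = theta_M, which makes the transformation's range easy to control.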
Background
This is a systematic review protocol to identify automated features, applied technologies, and algorithms in the electronic early warning/track and triage system (EW/TTS) developed to predict clinical deterioration (CD).
Methodology
This study will be conducted using PubMed, Scopus, and Web of Science databases to evaluate the features of EW/TTS in terms of their automated features, technologies, and algorithms. To this end, we will include any English articles reporting an EW/TTS without time limitation. Retrieved records will be independently screened by two authors and relevant data will be extracted from studies and abstracted for further analysis. The included articles will be evaluated independently using the JBI critical appraisal checklist by two researchers.
Discussion
This study is an effort to address the available automated features in the electronic version of the EW/TTS to shed light on the applied technologies, automation level of systems, and utilized algorithms in order to smooth the road toward the fully automated EW/TTS as one of the potential solutions for preventing CD and its adverse consequences.
Recognizing Human Activity of Daily Living Using a Flexible Wearable for 3D Spine Pose Tracking
(2023)
The World Health Organization recognizes physical activity as an influencing domain on quality of life. Monitoring, evaluating, and supervising it by wearable devices can contribute to the early detection and progress assessment of diseases such as Alzheimer’s, rehabilitation, and exercises in telehealth, as well as abrupt events such as a fall. In this work, we use a non-invasive and non-intrusive flexible wearable device for 3D spine pose measurement to monitor and classify physical activity. We develop a comprehensive protocol that consists of 10 indoor, 4 outdoor, and 8 transition states activities in three categories of static, dynamic, and transition in order to evaluate the applicability of the flexible wearable device in human activity recognition. We implement and compare the performance of three neural networks: long short-term memory (LSTM), convolutional neural network (CNN), and a hybrid model (CNN-LSTM). For ground truth, we use an accelerometer and strips data. LSTM reached an overall classification accuracy of 98% for all activities. The CNN model with accelerometer data delivered better performance in lying down (100%), static (standing = 82%, sitting = 75%), and dynamic (walking = 100%, running = 100%) positions. Data fusion improved the outputs in standing (92%) and sitting (94%), while LSTM with the strips data yielded a better performance in bending-related activities (bending forward = 49%, bending backward = 88%, bending right = 92%, and bending left = 100%). The combination of data fusion and principal component analysis further strengthened the output (bending forward = 100%, bending backward = 89%, bending right = 100%, and bending left = 100%). Moreover, the LSTM model detected the first transition state that is similar to a fall with an accuracy of 84%.
The results show that the wearable device can be used in a daily routine for activity monitoring, recognition, and exercise supervision, but still needs further improvement for fall detection.
Multi-faceted stresses of social, environmental, and economic nature are increasingly challenging the existence and sustainability of our societies. Cities in particular are disproportionately threatened by global issues such as climate change, urbanization, population growth, air pollution, etc. In addition, urban space is often too limited to effectively develop sustainable, nature-based solutions while accommodating growing populations. This research aims to provide new methodologies by proposing lightweight green bridges in inner-city areas as an effective land value capture mechanism. Geometry analysis was performed using geospatial and remote sensing data to provide geometrically feasible locations of green bridges. A multi-criteria decision analysis was applied to identify suitable locations for green bridges investigating Central European urban centers with a focus on German cities as representative examples. A cost-benefit analysis was performed to assess the economic feasibility using a case study. The results of the geometry analysis identified 3249 locations that were geometrically feasible to implement a green bridge in German cities. Sample locations from the geometry analysis were validated for their implementation potential. Multi-criteria decision analysis was used to select 287 sites that fall under the highest suitable class based on several criteria. The cost-benefit analysis of the case study showed that the market value of the property alone can easily outweigh the capital and maintenance costs of a green bridge, while the indirect (monetary) benefits of the green space continue to increase the overall value of the green bridge property including its neighborhood over time. Hence, we strongly recommend lightweight green bridges as financially sustainable and nature-based solutions in cities worldwide.
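The cost-benefit logic can be sketched as a net-present-value comparison; all figures below are invented for illustration and are not from the case study:

```python
def npv(rate, cashflows):
    """Net present value of yearly cash flows, with cashflows[0] at year 0."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

# Purely illustrative figures: capital cost at year 0, then 30 years of
# annual maintenance and annual land-value/benefit gains.
capital = -12_000_000
maintenance = -150_000
annual_benefit = 900_000
flows = [capital] + [annual_benefit + maintenance] * 30
print(npv(0.03, flows) > 0)  # → True at a 3% discount rate
```

The sign of the NPV is sensitive to the discount rate, which is why such analyses typically report results for several rate scenarios.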
Digital federated platforms and data cooperatives for secure, trusted and sovereign data exchange will play a central role in the construction industry of the future. With the help of platforms, cooperatives and their novel value creation, the digital transformation and the degree of organization of the construction value chain can be taken to a new level of collaboration. The goal of this research project was to develop an experimental prototype for a federated innovation data platform along with a suitable exemplary use case. The prototype is to serve the construction industry as a demonstrator for further developments and form the basis for an innovation platform. It exemplifies how an overall concept is concretely implemented along one or more use cases that address high-priority industry pain points. This concept will create a blueprint and a framework for further developments, which will then be further established in the market. The research project illuminates the perspective of various governance innovations to increase industry collaboration, productivity and capital project performance and transparency as well as the overall potential of possible platform business models. However, a comprehensive expert survey revealed that there are considerable obstacles to trust-based data exchange between the key stakeholders in the industry value network. The obstacles to cooperation are predominantly not of a technical nature but rather of a competitive, predominantly trust-related nature. To overcome these obstacles and create a pre-competitive space of trust, the authors therefore propose the governance structure of a data cooperative model, which is discussed in detail in this paper.
Due to their structure of crossed yarns embedded in coating, woven fabric membranes are characterised by a highly nonlinear stress-strain behaviour. In order to determine an accurate structural response of membrane structures, a suitable description of the material behaviour is required. Typical phenomenological material models like linear-elastic orthotropic models only allow a limited determination of the real material behaviour. A more accurate approach becomes evident by focusing on the meso-scale, which reveals an inhomogeneous however periodic structure of woven fabrics. The present work focuses on an established meso-scale model. The novelty of this work is an enhancement of this model with regard to the coating stiffness. By performing an inverse process of parameter identification using a state-of-the-art Levenberg-Marquardt algorithm, a close fit w.r.t. measured data from a common biaxial test is shown and compared to results applying established models. Subsequently, the enhanced meso-scale model is processed into a multi-scale model and is implemented as a material law into a finite element program. Within finite element analyses of an exemplary full scale membrane structure by using the implemented material model as well as by using established material models, the results are compared and discussed.
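The inverse parameter identification step can be illustrated with a minimal Levenberg-Marquardt loop fitting a two-parameter nonlinear law to synthetic data; the actual meso-scale model has far more parameters, and the law below is an invented stand-in for the membrane's stress-strain behaviour:

```python
import math

# Synthetic "stress-strain" samples from a stiffening law s = a*(e^(b*x) - 1)
a_true, b_true = 2.0, 3.0
xs = [i / 10 for i in range(11)]
ys = [a_true * (math.exp(b_true * x) - 1) for x in xs]

def residuals(p):
    a, b = p
    return [a * (math.exp(b * x) - 1) - y for x, y in zip(xs, ys)]

def levenberg_marquardt(p, n_iter=100, lam=1e-2):
    """Minimal 2-parameter Levenberg-Marquardt with a numeric Jacobian."""
    for _ in range(n_iter):
        r = residuals(p)
        eps = 1e-6
        # Forward-difference Jacobian, one column per parameter
        J = []
        for i in range(len(r)):
            row = []
            for j in range(2):
                q = list(p)
                q[j] += eps
                row.append((residuals(q)[i] - r[i]) / eps)
            J.append(row)
        # Damped normal equations (J^T J + lam*I) d = -J^T r, 2x2 system
        A = [[sum(J[i][m] * J[i][k] for i in range(len(r))) for k in range(2)]
             for m in range(2)]
        g = [-sum(J[i][m] * r[i] for i in range(len(r))) for m in range(2)]
        A[0][0] += lam; A[1][1] += lam
        det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
        d = [(A[1][1] * g[0] - A[0][1] * g[1]) / det,
             (A[0][0] * g[1] - A[1][0] * g[0]) / det]
        new_p = [p[0] + d[0], p[1] + d[1]]
        if sum(v * v for v in residuals(new_p)) < sum(v * v for v in r):
            p, lam = new_p, lam * 0.5       # accept step, relax damping
        else:
            lam *= 10                        # reject step, increase damping
    return p

a_fit, b_fit = levenberg_marquardt([1.0, 1.0])
print(round(a_fit, 3), round(b_fit, 3))
```

The accept/reject damping update is what distinguishes Levenberg-Marquardt from plain Gauss-Newton and keeps the iteration stable far from the optimum.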
Driver assistance systems are increasingly becoming part of the standard equipment of vehicles and thus contribute to road safety. However, as they become more widespread, the requirements for cost efficiency are also increasing, and so few and inexpensive sensors are used in these systems. Especially in challenging situations, this leads to the fact that target discrimination cannot be ensured, which may lead to false reactions of the driver assistance system. In this paper, the Boids flocking algorithm is used to generate semantic neighborhood information between tracked objects, which in turn can significantly improve the overall performance. Two different variants were developed: first, a free-moving flock, whereby a separate flock is generated per tracked object; and second, a formation-controlled flock, where boids of a single flock move along the future road course in a pre-defined formation. In the first approach, the interaction between the flocks as well as the interaction between the boids within a flock is used to generate additional information, which in turn can be used to improve, for example, lane change detection. For the latter approach, new behavioral rules have been developed, so that the boids can reliably identify control-relevant objects to a driver assistance system. Finally, the performance of the presented methods is verified through extensive simulations.
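The three classic Boids rules (cohesion, separation, alignment) that both variants build on can be sketched as follows; weights, distances, and flock size are arbitrary illustration values, and real implementations use proper Euclidean neighbourhoods:

```python
import random

random.seed(4)

class Boid:
    def __init__(self):
        self.x, self.y = random.uniform(0, 10), random.uniform(0, 10)
        self.vx, self.vy = random.uniform(-1, 1), random.uniform(-1, 1)

def step(flock, w_coh=0.01, w_sep=0.05, w_ali=0.05, sep_dist=1.0):
    """One update applying cohesion, separation, and alignment."""
    n = len(flock)
    cx = sum(b.x for b in flock) / n
    cy = sum(b.y for b in flock) / n
    avx = sum(b.vx for b in flock) / n
    avy = sum(b.vy for b in flock) / n
    for b in flock:
        # Cohesion: steer toward the flock centre
        b.vx += w_coh * (cx - b.x)
        b.vy += w_coh * (cy - b.y)
        # Separation: steer away from close neighbours (Manhattan distance)
        for o in flock:
            if o is not b and abs(o.x - b.x) + abs(o.y - b.y) < sep_dist:
                b.vx += w_sep * (b.x - o.x)
                b.vy += w_sep * (b.y - o.y)
        # Alignment: match the average heading
        b.vx += w_ali * (avx - b.vx)
        b.vy += w_ali * (avy - b.vy)
    for b in flock:
        b.x += b.vx
        b.y += b.vy

def vx_spread(flock):
    return max(b.vx for b in flock) - min(b.vx for b in flock)

flock = [Boid() for _ in range(10)]
before = vx_spread(flock)
for _ in range(100):
    step(flock)
after = vx_spread(flock)
print(after < before)   # headings align over time
```

In the paper's setting, additional rules on top of these three let the boids encode semantic relations between tracked objects rather than mere flocking.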
In tomato drying, degradation in final quality may occur based on the drying method used and predrying preparation. Hence, this research was conducted to evaluate the effect of different predrying treatments on physicochemical quality and drying kinetics of twin-layer-solar-tunnel-dried tomato slices. During the experimental work, tomato slices of var. Galilea were used. As predrying treatments, 0.5% calcium chloride (CaCl2), 0.5% ascorbic acid (C6H8O6), 0.5% citric acid (C6H8O7), and 0.5% sodium chloride (NaCl) were used. The tomato samples were sliced to 5 mm thickness, socked in the pretreatments for ten minutes, and dried in a twin layer solar tunnel dryer under the weather conditions of Jimma, Ethiopia. Untreated samples were used as control. The moisture losses from the samples were monitored by weighing samples at 2 h interval from each treatment. SAS statistical software version 9.2 was used for analyzing data on the physicochemical quality of tomato slices in CRD with three replications. From the experimental result, it was observed that dried tomato slices pretreated with 0.5% ascorbic acid gave the best retention of vitamin C and total phenolic content with a high sugar/acid ratio. Better retention of lycopene and fast drying were observed in dried tomato slices pretreated with 0.5% sodium chloride, and pretreating tomatoes with 0.5% citric acid resulted in better color values than the other treatments. Compared to the control, pretreating significantly preserved the overall quality of dried tomato slices and increased the moisture removal rate in the twin layer solar tunnel dryer.
As interest in the investigation of possible sources and environmental sinks of technology-critical elements (TCEs) continues to grow, the demand for reliable background level information of these elements in environmental matrices increases. In this study, a time series of ten years of sediment samples from two different regions of the German North Sea were analyzed for their mass fractions of Ga, Ge, Nb, In, REEs, and Ta (grain size fraction < 20 µm). Possible regional differences were investigated in order to determine preliminary reference values for these regions. Throughout the investigated time period, only minor variations in the mass fractions were observed and both regions did not show significant differences. Calculated local enrichment factors ranging from 0.6 to 2.3 for all TCEs indicate no or little pollution in the investigated areas. Consequently, reference values were calculated using two different approaches (Median + 2 median absolute deviation (M2MAD) and Tukey inner fence (TIF)). Both approaches resulted in consistent threshold values for the respective regions ranging from 158 µg kg−1 for In to 114 mg kg−1 for Ce. As none of the threshold values exceed the observed natural variation of TCEs in marine and freshwater sediments, they may be considered baseline values of the German Bight for future studies.
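The two threshold approaches named above are simple robust statistics and can be sketched in a few lines of plain Python (the sample values in the test are illustrative, not the sediment data):

```python
import statistics

def m2mad_threshold(values):
    """Upper threshold: median + 2 * median absolute deviation (M2MAD)."""
    med = statistics.median(values)
    mad = statistics.median(abs(v - med) for v in values)
    return med + 2.0 * mad

def tukey_inner_fence(values):
    """Upper Tukey inner fence (TIF): Q3 + 1.5 * IQR."""
    q1, _, q3 = statistics.quantiles(values, n=4, method="inclusive")
    return q3 + 1.5 * (q3 - q1)
```

Mass fractions above such a threshold would be flagged as enriched relative to the regional background.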
The present contribution proposes a novel method for the indirect measurement of the ground reaction forces (GRF) induced by a pedestrian walking on a vibrating structure. Its main idea is to formulate and solve an inverse problem in the time domain with the aim of finding the optimal time-dependent moving point force describing the GRF of a pedestrian (input data), which minimizes the difference between a set of computed and a set of measured structural responses (output data). The solution of the inverse problem is addressed by means of a gradient-based trust region optimization strategy. The moving force identification process uses output data from a set of acceleration and displacement time histories recorded at different locations on the structure. The practicability and accuracy of the proposed GRF identification method are first evaluated using simulated measurements, which revealed high accuracy, robustness, and stability of the results even at high noise levels. Subsequently, a comprehensive experimental validation using real measurement data recorded on the HUMVIB experimental footbridge on the campus of the Technical University of Darmstadt (Germany) was carried out. Besides conventional sensors for the acquisition of structural responses, an array of biomechanical force plates as well as classical load cells at the supports were used to measure the reference GRFs needed in the experimental validation. The results show that the proposed method delivers a very accurate estimation of the GRF induced by a subject walking on the experimental structure.
Uzbekistan is an emerging tourism destination that has experienced a strong increase in tourists since 2017. However, little research on tourism development in Uzbekistan exists to date. This study therefore analyzes possible research topics and proposes a tourism research agenda for Uzbekistan. A mix of methods was used, consisting of participant observation, semi-structured qualitative expert interviews, and qualitative content analysis. The results revealed a variety of research deficits in different areas, which could be synthesized into a total of ten research fields clustered into three overarching areas, namely market research, management, and culture & environment. The subordinate research fields identified are Demand, Statistics, Potentials, Governance, Products, Infrastructure & Development, Marketing, Heritage & Nation-building, Sustainability, as well as Peace & Conflict Prevention. A strategic research plan based on this tourism research agenda could help to foster a purposeful scientific debate. Tourism research in these fields has the potential both to investigate and compare theoretical issues in a unique context and to produce applied research results that can make a relevant contribution to tourism development in Uzbekistan.
The Black Forest offers renewable energy as a specific tourist destination in the form of bioenergy villages (BEV). Particularly expert tourists tend to visit them. The results of two quantitative surveys on the supply and demand side show that there is, up to now, an untapped potential among experience-oriented tourists for this type of niche tourism.
In this letter, we present an approach to building a new generalized multistream spatial modulation (GMSM) system, where the information is conveyed by the two active antennas with signal indices, using all possible active antenna combinations. The signal constellations associated with these antennas may have different sizes. In addition, four-dimensional hybrid frequency-phase modulated signals are utilized in GMSM. Examples of GMSM systems are given, and computer simulation results are presented for transmission over Rayleigh and deep Nakagami-m flat-fading channels when maximum-likelihood detection is used. The presented results indicate a significant improvement of characteristics compared to the best-known similar systems.
Reed-Muller (RM) codes have recently regained interest in the context of low-latency communications and due to their relation to polar codes. RM codes can be constructed based on the Plotkin construction. In this work, we consider concatenated codes based on the Plotkin construction, where extended Bose-Chaudhuri-Hocquenghem (BCH) codes are used as component codes. This leads to improved code parameters compared to RM codes. Moreover, this construction is more flexible concerning the attainable code rates. Additionally, new soft-input decoding algorithms are proposed that exploit the recursive structure of the concatenation and the cyclic structure of the component codes. First, we consider the decoding of the cyclic component codes and propose a low-complexity hybrid ordered statistics decoding algorithm. Next, this algorithm is applied to list decoding of the Plotkin construction. The proposed list decoding approach achieves near-maximum-likelihood performance for codes of medium length. The performance is comparable to state-of-the-art decoders, whereas the complexity is reduced.
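The Plotkin (u | u+v) construction referred to above combines two component codes into a code of twice the length, and can be sketched in a few lines (the two small component codes here are illustrative stand-ins for the extended BCH component codes of the paper; combining them reproduces the Reed-Muller code RM(1,2)):

```python
def plotkin(code_u, code_v):
    """Plotkin (u | u+v) construction: every pair of component codewords
    u in C_u, v in C_v yields the concatenated word (u, u XOR v)."""
    return [u + [a ^ b for a, b in zip(u, v)]
            for u in code_u for v in code_v]

# Illustrative length-2 component codes: RM(1,1) is the full binary space,
# RM(0,1) is the repetition code; their Plotkin combination is RM(1,2).
rm_1_1 = [[0, 0], [0, 1], [1, 0], [1, 1]]
rm_0_1 = [[0, 0], [1, 1]]
rm_1_2 = plotkin(rm_1_1, rm_0_1)
```

Applying the construction recursively in this way yields the whole RM family, which is the structure the proposed list decoder exploits.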
Large-scale quantum computers threaten the security of today's public-key cryptography. The McEliece cryptosystem is one of the most promising candidates for post-quantum cryptography. However, the McEliece system has the drawback of large public key sizes. Like other public-key cryptosystems, the McEliece system also has a comparably high computational complexity. Embedded devices often lack the computational resources required to run such systems with sufficiently low latency. Hence, these systems require hardware acceleration. Recently, a generalized concatenated code construction was proposed together with a restrictive channel model, which allows for much smaller public keys at comparable security levels. In this work, we propose a hardware decoder suitable for a McEliece system based on these generalized concatenated codes. The results show that those systems are suitable for resource-constrained embedded devices.
Evaluation of tech ventures’ evolving business models: rules for performance-related classification
(2022)
At the early stage of a successful tech venture's life cycle, it is assumed that the business model will evolve to higher quality over time. However, there are few empirical insights into business model evolution patterns for the performance-related classification of early-stage tech ventures. We created relevant variables evaluating the evolution of the venture-centric network and the technological proposition of both digital and non-digital ventures' business models using the text of submissions to the official business plan award in the German State of Baden-Württemberg between 2006 and 2012. Applying a principal component analysis/rough set theory mixed methodology, we explore performance-related business model classification rules in the heterogeneous sample of business plans. We find that ventures need to demonstrate real interactions with their customers' needs to survive. The distinguishing success rules are related to patent applications, risk capital, and scaling of the organisation. The rules help practitioners to classify business models in a way that allows them to prioritise action for performance.
As organizations struggle to cope with digital transformation in an innovation environment, partnerships between startups and established companies have become increasingly important. Building upon years of practical experience and empirical research, we present advantages, obstacles, and the keys to successful corporate-startup collaboration.
While managerial mobility is ubiquitously seen as an integral part of the success of firms' internationalization, discerning its empirical merits has been impaired by the paucity of quasi-experimental evidence or adequate instrumental variables. To overcome these limitations, this paper proposes a novel identification strategy, which uses a control function based on on-the-job search theory to correct estimates for the presence of self-selected mobility flows. Our analysis confirms the finding that managers' specific market experience matters for firms' internationalization, especially when it derives from longer tenures at former jobs. Regarding the attributes of managerial knowledge, our results reveal that experience earned on the job is at least as effective for firms' internationalization as inborn knowledge (i.e., origins) and that a manager's personal network of customers is an important asset in the fund of expertise for expansion into new markets.
Image novelty detection is a recurring task in computer vision and describes the detection of anomalous images based on a training dataset consisting solely of normal reference data. It has been found that neural networks, in particular, are well suited for the task. Our approach first transforms the training and test images into ensembles of patches, which enables the assessment of mean-shifts between normal data and outliers. As mean-shifts are only detectable when the outlier ensemble and the inlier distribution are spatially separate from each other, a rich feature space, such as a pre-trained neural network, needs to be chosen to represent the extracted patches. For mean-shift estimation, the Hotelling T2 test is used. The size of the patches turned out to be a crucial hyperparameter that requires additional domain knowledge about the spatial size of the expected anomalies (local vs. global). This also affects model selection and the chosen feature space, as commonly used Convolutional Neural Networks and Vision Transformers have very different receptive field sizes. To showcase the state-of-the-art capabilities of our approach, we compare results with classical and deep learning methods on the popular CIFAR-10 dataset and demonstrate its real-world applicability in a large-scale industrial inspection scenario using the MVTec dataset. Because of its inexpensive design, our method can be implemented with a single additional 2D-convolution and pooling layer and allows particularly fast prediction times while being very data-efficient.
Purpose
The goal of this research survey was to propose an entrepreneurship education model for students in higher education institutions.
Methodology
A questionnaire was distributed to 246 randomly sampled students at the Universitas Negeri Jakarta. The data was analyzed through Structural Equation Modeling to study the variables of entrepreneurship education for higher education students and to examine whether it can be predicted by the university leadership as a facilitator of entrepreneurial culture, university departments as promoters of entrepreneurial skills, and university research as an incubator of local business development.
Findings
The results show that university leadership acts as a facilitator of entrepreneurial culture by fostering a culture of entrepreneurial thinking. It was also evident that the university placed sufficient emphasis on entrepreneurial education and successfully motivated both lecturers and students to embrace entrepreneurship education. The results also indicated that university departments acted as promoters of entrepreneurial skills and stimulated students to attain sufficient entrepreneurial skills during their university education. Lastly, university research also proved to be an incubator of local business development, influenced by the university conducting research projects with local private sector businesses and supporting graduates planning to launch start-ups.
Implications to Research and Practice
The survey results will provide valuable policy insights for improving entrepreneurship education. University faculty and students would have opportunities to gain practical experience in local private sector businesses. The model of entrepreneurship education proposed herein can be applied to higher education students.
Code-based cryptosystems are promising candidates for post-quantum cryptography. Recently, generalized concatenated codes over Gaussian and Eisenstein integers were proposed for those systems. For a channel model with errors of restricted weight, those q-ary codes lead to high error correction capabilities. Hence, these codes achieve high work factors for information set decoding attacks. In this work, we adapt this concept to codes for the weight-one error channel, i.e., a binary channel model where at most one bit-error occurs in each block of m bits. We also propose a low complexity decoding algorithm for the proposed codes. Compared to codes over Gaussian and Eisenstein integers, these codes achieve higher minimum Hamming distances for the dual codes of the inner component codes. This property increases the work factor for a structural attack on concatenated codes leading to higher overall security. For comparable security, the key size for the proposed code construction is significantly smaller than for the classic McEliece scheme based on Goppa codes.
Mutual Information Analysis for Generalized Spatial Modulation Systems With Multilevel Coding
(2022)
Generalized Spatial Modulation (GSM) enables a trade-off between very high spectral efficiencies and low hardware costs for massive MIMO systems. This is achieved by transmitting information via the selection of active antennas from a set of available antennas, in addition to the transmission of conventional data symbols. GSM systems have been investigated concerning various aspects such as suitable signal constellations, efficient detection algorithms, hardware implementations, spatial precoding, and error control coding. On the other hand, determining the capacity of GSM is challenging because no closed-form expressions have been found so far. This paper investigates the mutual information for different GSM variants. We consider a multilevel coding approach, where the antenna selection and the IQ modulation are encoded independently. Combined with multistage decoding, such an approach enables low-complexity capacity-achieving coded modulation. The influence of the data symbols on the mutual information is illuminated. We analyze the portions of mutual information related to the antenna selection and IQ modulation processes, which depend on the GSM variant and the signal constellation. Moreover, the potential of spatial modulation for massive MIMO systems with many transmit antennas is investigated. Especially in systems with many transmit antennas, much information can be conveyed by antenna selection.
We call for a paradigm shift in engineering education. We are entering the era of the Fourth Industrial Revolution (“4IR”), accelerated by Artificial Intelligence (“AI”). Disruptive changes affect all industrial sectors and society, leading to increased uncertainty that makes it impossible to predict what lies ahead. Therefore, gradual cultural change in education is no longer an option to ease social pain. The vast majority of engineering education and training systems, which have remained largely static and underinvested for decades, are inadequate for the emerging 4IR and AI labour markets. Nevertheless, some positive developments can be observed in the reorientation of the engineering education sector. Novel approaches to engineering education are already providing distinctive, technology-enhanced, personalised, student-centred curriculum experiences within an integrated and unified education system. We need to educate engineering students for a future whose key characteristics are volatility, uncertainty, complexity and ambiguity (“VUCA”). Talent and skills gaps are expected to increase in all industries in the coming years. The authors argue for an engineering curriculum that combines timeless didactic traditions such as Socratic inquiry, mastery-based and project-based learning and first-principles thinking with novel elements, e.g., student-centred active and e-learning with a focus on case studies, as well as visualization/metaverse and gamification elements discussed in this paper, and a refocusing of engineering skills and knowledge enhanced by AI on human qualities such as creativity, empathy and dexterity. These skills strengthen engineering students’ perceptions of the world and the decisions they make as a result. This 4IR engineering curriculum will prepare engineering students to become curious engineers and excellent collaborators who navigate increasingly complex multistakeholder ecosystems.
Lidar sensors are widely used for environmental perception on autonomous robot vehicles (ARV). The field of view (FOV) of Lidar sensors can be reshaped by positioning plane mirrors in their vicinity. Mirror setups can especially improve the FOV for ground detection of ARVs with 2D-Lidar sensors. This paper presents an overview of several geometric designs and their strengths for certain vehicle types. Additionally, a new and easy-to-implement calibration procedure for setups of 2D-Lidar sensors with mirrors is presented that determines precise mirror orientations and positions using a single flat calibration object with a pre-aligned simple fiducial marker. Measurement data from a prototype vehicle with a 2 m range 2D-Lidar using this new calibration procedure are presented. We show that the calibrated mirror orientations are accurate to less than 0.6° in this short range, which is a significant improvement over the orientation angles taken directly from the CAD model. The accuracy of the point cloud data improved, and no significant increase in distance noise was introduced. We deduced general guidelines for successful calibration setups using our method. In conclusion, a 2D-Lidar sensor and two plane mirrors calibrated with this method are a cost-effective and accurate way for robot engineers to improve the environmental perception of ARVs.
The growing error rates of triple-level cell (TLC) and quadruple-level cell (QLC) NAND flash memories have led to the application of error correction coding with soft-input decoding techniques in flash-based storage systems. Typically, flash memory is organized in pages, where the individual bits per cell are assigned to different pages and different codewords of the error-correcting code. This page-wise encoding minimizes the read latency with hard-input decoding. To increase the decoding capability, soft-input decoding is eventually used as the cells age. This soft decoding requires multiple read operations. Hence, the soft-read operations reduce the achievable throughput and increase the read latency and power consumption. In this work, we investigate a different encoding and decoding approach that improves the error correction performance without increasing the number of reference voltages. We consider TLC and QLC flash memories where all bits of a cell are jointly encoded using a Gray labeling. This cell-wise encoding improves the achievable channel capacity compared with independent page-wise encoding. Errors in cell-wise read operations typically result in a single erroneous bit per cell. We present a coding approach based on generalized concatenated codes that utilizes this property.
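The Gray labeling mentioned above has the property that adjacent voltage levels differ in exactly one bit, so a misread into a neighboring level corrupts only a single bit of the cell. A minimal sketch using the standard binary-reflected Gray code (the abstract does not state whether the paper uses exactly this labeling):

```python
def gray_encode(level):
    """Map a voltage level index to its binary-reflected Gray label."""
    return level ^ (level >> 1)

def gray_decode(label):
    """Invert the Gray labeling back to the voltage level index."""
    level = label
    while label:
        label >>= 1
        level ^= label
    return level
```

For TLC (8 levels, 3 bits) and QLC (16 levels, 4 bits), every pair of neighboring levels then differs in exactly one bit position, which is the single-bit-error-per-cell property the concatenated coding approach exploits.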
In this paper, a novel feature-based sampling strategy for nonlinear Model Predictive Path Integral (MPPI) control is presented. Using the MPPI approach, the optimal feedback control is calculated by solving a stochastic optimal control (OCP) problem online through the weighted inference of sampled stochastic trajectories. While the MPPI algorithm can be excellently parallelized, the closed-loop performance strongly depends on the information quality of the sampled trajectories. To draw samples, a proposal density is used. The solver's, and thus the controller's, performance is high if the sampled trajectories drawn from this proposal density lie in low-cost regions of the state-space. In classical MPPI control, the explored state-space is strongly constrained by assumptions on the control value's covariance matrix, which are necessary for transforming the stochastic Hamilton–Jacobi–Bellman (HJB) equation into a linear second-order partial differential equation. To achieve excellent performance even with discontinuous cost functions, in this novel approach, knowledge-based features are introduced to constitute the proposal density and thus the low-cost region of the state-space for exploration. This paper addresses the question of how the performance of the MPPI algorithm can be improved using a feature-based mixture of base densities. Furthermore, the developed algorithm is applied to an autonomous vessel that follows a track and concurrently avoids collisions using an emergency braking feature. The presented feature-based MPPI algorithm is applied and analyzed in both simulation and full-scale experiments.
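The "weighted inference of sampled trajectories" at the core of MPPI reduces, per control update, to an exponential weighting of rollout costs. A minimal sketch of that weighting step (the inverse-temperature parameter lam and the cost values are illustrative, not the paper's tuning):

```python
import math

def mppi_weights(costs, lam=1.0):
    """Weight each sampled rollout by exp(-cost/lam), normalized to sum to 1.
    Subtracting the minimum cost keeps the exponentials numerically stable."""
    beta = min(costs)
    w = [math.exp(-(c - beta) / lam) for c in costs]
    total = sum(w)
    return [wi / total for wi in w]
```

The control update is then the weight-averaged control perturbation over all rollouts; in the feature-based variant described above, the rollouts themselves are drawn from a mixture of feature-induced base densities rather than a single Gaussian proposal.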
Reliability Assessment of an Unscented Kalman Filter by Using Ellipsoidal Enclosure Techniques
(2022)
The Unscented Kalman Filter (UKF) is widely used for state, disturbance, and parameter estimation of nonlinear dynamic systems, for which both process and measurement uncertainties are represented in probabilistic form. Although the UKF can often be shown to be more reliable for nonlinear processes than the linearization-based Extended Kalman Filter (EKF) due to the enhanced approximation capabilities of its underlying probability distribution, it is not a priori obvious whether its strategy for selecting sigma points is sufficiently accurate to handle nonlinearities in the system dynamics and output equations. Such inaccuracies may arise for sufficiently strong nonlinearities in combination with large state, disturbance, and parameter covariances. In such cases, computationally more demanding approaches, such as particle filters or the representation of (multi-modal) probability densities with the help of (Gaussian) mixture representations, are possible ways to resolve this issue. To systematically detect cases that are not reliably handled by a standard EKF or UKF, this paper proposes the computation of outer bounds for the state domains that are compatible with a certain percentage of confidence under the assumption of normally distributed states, with the help of a set-based ellipsoidal calculus. The practical applicability of this approach is demonstrated for the estimation of state variables and parameters for the nonlinear dynamics of an unmanned surface vessel (USV).
Experimental Validation of Ellipsoidal Techniques for State Estimation in Marine Applications
(2022)
A reliable quantification of the worst-case influence of model uncertainty and external disturbances is crucial for the localization of vessels in marine applications. This is especially true if uncertain GPS-based position measurements are used to update predicted vessel locations that are obtained from the evaluation of a ship’s state equation. To reflect real-life working conditions, these state equations need to account for uncertainty in the system model, such as imperfect actuation and external disturbances due to effects such as wind and currents. As an application scenario, the GPS-based localization of autonomous DDboat robots is considered in this paper. Using experimental data, the efficiency of an ellipsoidal approach, which exploits a bounded-error representation of disturbances and uncertainties, is demonstrated.
Extended Target Tracking With a Lidar Sensor Using Random Matrices and a Virtual Measurement Model
(2022)
Random matrices are widely used to estimate the extent of an elliptically contoured object. Usually, it is assumed that the measurements follow a normal distribution, with its standard deviation being proportional to the object’s extent. However, the random matrix approach can filter the center of gravity and the covariance matrix of measurements independently of the measurement model. This work considers the whole chain from data acquisition to the linear Kalman Filter with extension estimation as a reference plant. The input is the (unknown) ground truth (position and extent). The output is the filtered center of gravity and the filtered covariance matrix of the measurement distribution. A virtual measurement model emulates the behavior of the reference plant. The input of the virtual measurement model is adapted using the proposed algorithm until the output parameters of the virtual measurement model match the result of the reference plant. After the adaptation, the input to the virtual measurement model is considered an estimation for position and extent. The main contribution of this paper is the reference model concept and an adaptation algorithm to optimize the input of the virtual measurement model.
Outcomes with a natural order commonly occur in prediction problems and often the available input data are a mixture of complex data like images and tabular predictors. Deep Learning (DL) models are state-of-the-art for image classification tasks but frequently treat ordinal outcomes as unordered and lack interpretability. In contrast, classical ordinal regression models consider the outcome’s order and yield interpretable predictor effects but are limited to tabular data. We present ordinal neural network transformation models (ontrams), which unite DL with classical ordinal regression approaches. ontrams are a special case of transformation models and trade off flexibility and interpretability by additively decomposing the transformation function into terms for image and tabular data using jointly trained neural networks. The performance of the most flexible ontram is by definition equivalent to a standard multi-class DL model trained with cross-entropy while being faster in training when facing ordinal outcomes. Lastly, we discuss how to interpret model components for both tabular and image data on two publicly available datasets.
Lignin is a potentially rich natural source of biological aromatic substances. However, decomposition of the polymer has proven quite challenging, as its complex bonds are fairly difficult to break down chemically. This article provides an overview of various recent methods for the catalytic chemical depolymerization of the biopolymer lignin into chemical products. For this purpose, nickel-, zeolite-, and palladium-supported catalysts were examined in detail. To achieve this, various experiments of recent years were collected, and the efficiency of the individual catalysts was examined, including an evaluation of the reaction conditions under which the catalysts work most efficiently. The influence of co-catalysts and Lewis acidity was also investigated. The results show that the obtained product selectivity can be controlled very well by the choice of catalyst combined with the proper reaction conditions.
The scoring of sleep stages is an essential part of sleep studies. The main objective of this research is to provide an algorithm for the automatic classification of sleep stages using signals that may be obtained in a non-obtrusive way. After reviewing the relevant research, the authors selected multinomial logistic regression as the basis for their approach. Several parameters were derived from movement and breathing signals, and their combinations were investigated to develop an accurate and stable algorithm. The implemented algorithm produced successful results: the accuracy of recognizing the Wake/NREM/REM stages is 73%, with a Cohen's kappa of 0.44 for the 19,324 analyzed sleep epochs of 30 seconds each. This approach has the advantage of using only movement and breathing signals, which can be recorded with less effort than cardiac or brainwave signals, and of requiring only four derived parameters for the calculations. Therefore, the new system is a significant improvement for non-obtrusive sleep stage identification compared to existing approaches.
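A multinomial logistic regression of this kind maps the derived movement/breathing parameters of one epoch to stage probabilities via linear scores and a softmax. A minimal sketch in plain Python (the weights and biases below are illustrative placeholders, not the fitted model from the study):

```python
import math

STAGES = ("Wake", "NREM", "REM")

def softmax(scores):
    """Convert linear scores into a probability distribution."""
    m = max(scores)                      # shift for numerical stability
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def classify_epoch(features, weights, biases):
    """One 30-s epoch: linear score per stage, softmax, then argmax."""
    scores = [b + sum(w * f for w, f in zip(ws, features))
              for ws, b in zip(weights, biases)]
    probs = softmax(scores)
    return STAGES[probs.index(max(probs))], probs
```

In the study, the feature vector would hold the four derived movement and breathing parameters, and the coefficients would be fitted on scored training epochs.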
Ferromagnetism is of increasing importance in the growing field of electromobility and data storage. In stable austenitic steels, the occurrence of ferromagnetism is not expected and would also interfere with many applications. However, ferromagnetism in austenitic stainless steels after low-temperature nitriding has already been shown in the past. Herein, the presence of ferromagnetism in austenitic steels is discovered after low-temperature carburization (Kolsterizing), which represents a novel and unique finding. A zone of expanded austenite is established on various austenitic stainless steels by low-temperature carburization and the respective ferromagnetism is investigated in relation to the alloy composition. The ferromagnetism occurring is determined by means of a commercial magnetoinductive sensor (Feritscope). Ferromagnetic domains are visualized by magnetic force microscopy and a ferrofluid. X-ray diffraction measurements indicate a clear difference in the lattice expansion of the different alloys. Furthermore, a different appearance of the magnetizable microstructure regions (magnetic domain structure) is detected depending on the grain orientation determined by electron backscatter diffraction (EBSD). Strongly pronounced magnetic domains show no linear lattice defects, whereas in small magnetizable areas linear lattice defects are detected by electron channeling contrast imaging and EBSD.
SyNumSeS is a Python package for the numerical simulation of semiconductor devices. It uses the Scharfetter-Gummel discretization for solving the one-dimensional Van Roosbroeck system, which describes free electron and hole transport by the drift-diffusion model. As boundary conditions, voltages can be applied to Ohmic contacts. It is suited for the simulation of pn-diodes, MOS diodes, (heterojunction) LEDs, solar cells, and (hetero)bipolar transistors.
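The Scharfetter-Gummel discretization hinges on the Bernoulli function B(x) = x/(e^x − 1), which blends drift and diffusion in the flux between neighboring grid nodes. A minimal sketch in normalized units (sign conventions vary between implementations; this is an illustrative form, not necessarily SyNumSeS's exact code):

```python
import math

def bernoulli(x):
    """B(x) = x / (exp(x) - 1), handling the removable singularity at x = 0."""
    if abs(x) < 1e-10:
        return 1.0 - x / 2.0   # Taylor expansion near zero
    return x / math.expm1(x)   # expm1 avoids cancellation for small x

def sg_flux(n_left, n_right, dpsi):
    """Scharfetter-Gummel electron flux between two grid nodes, with the
    potential difference dpsi = psi_right - psi_left in units of kT/q."""
    return bernoulli(-dpsi) * n_right - bernoulli(dpsi) * n_left
```

For dpsi = 0 the flux reduces to pure diffusion (n_right − n_left); for large |dpsi| it approaches pure drift, which is what makes the scheme stable under strong fields.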
Error correction coding for optical communication and storage requires high rate codes that enable high data throughput and low residual errors. Recently, different concatenated coding schemes were proposed that are based on binary BCH codes with low error correcting capabilities. In this work, low-complexity hard- and soft-input decoding methods for such codes are investigated. We propose three concepts to reduce the complexity of the decoder. For the algebraic decoding we demonstrate that Peterson's algorithm can be more efficient than the Berlekamp-Massey algorithm for single, double, and triple error correcting BCH codes. We propose an inversion-less version of Peterson's algorithm and a corresponding decoding architecture. Furthermore, we propose a decoding approach that combines algebraic hard-input decoding with soft-input bit-flipping decoding. An acceptance criterion is utilized to determine the reliability of the estimated codewords. For many received codewords the stopping criterion indicates that the hard-decoding result is sufficiently reliable, and the costly soft-input decoding can be omitted. To reduce the memory size for the soft-values, we propose a bit-flipping decoder that stores only the positions and soft-values of a small number of code symbols. This method significantly reduces the memory requirements and has little adverse effect on the decoding performance.
Four-Dimensional Hurwitz Signal Constellations, Set Partitioning, Detection, and Multilevel Coding
(2021)
The Hurwitz lattice provides the densest four-dimensional packing. This fact has motivated research on four-dimensional Hurwitz signal constellations for optical and wireless communications. This work presents a new algebraic construction of finite sets of Hurwitz integers that is inherently accompanied by a respective modulo operation. These signal constellations are investigated for transmission over the additive white Gaussian noise (AWGN) channel. It is shown that these signal constellations have a better constellation figure of merit and hence a better asymptotic performance over an AWGN channel when compared with conventional signal constellations with algebraic structure, e.g., two-dimensional Gaussian-integer constellations or four-dimensional Lipschitz-integer constellations. We introduce two concepts for set partitioning of the Hurwitz integers. The first method is useful to reduce the computational complexity of the symbol detection. This suboptimum detection approach achieves near-maximum-likelihood performance. In the second case, the partitioning exploits the algebraic structure of the Hurwitz signal constellations. We partition the Hurwitz integers into additive subgroups in a manner that the minimum Euclidean distance of each subgroup is larger than in the original set. This enables multilevel code constructions for the new signal constellations.
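Hurwitz integers are quaternions whose four components are either all integers or all half-odd integers. A minimal sketch of the basic operations, using exact rational arithmetic (the helper names are our own illustration; the paper's constellation construction and modulo operation are not reproduced here):

```python
from fractions import Fraction

def qmul(a, b):
    """Hamilton product of quaternions a0 + a1*i + a2*j + a3*k."""
    a0, a1, a2, a3 = a
    b0, b1, b2, b3 = b
    return (a0*b0 - a1*b1 - a2*b2 - a3*b3,
            a0*b1 + a1*b0 + a2*b3 - a3*b2,
            a0*b2 - a1*b3 + a2*b0 + a3*b1,
            a0*b3 + a1*b2 - a2*b1 + a3*b0)

def qnorm(a):
    """Quaternion norm a0^2 + a1^2 + a2^2 + a3^2."""
    return sum(c * c for c in a)

def is_hurwitz(a):
    """A quaternion is a Hurwitz integer if all components are integers
    or all are half-odd integers."""
    doubled = [2 * Fraction(c) for c in a]
    if any(d.denominator != 1 for d in doubled):
        return False
    return len({int(d) % 2 for d in doubled}) == 1
```

The norm is multiplicative, which underlies the distance arguments for such constellations; the half-integer point omega = (1/2, 1/2, 1/2, 1/2) has norm 1 and is a unit.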
The performance and reliability of non-volatile NAND flash memories deteriorate as the number of program/erase cycles grows. The reliability also suffers from cell-to-cell interference, long data retention times, and read disturb. These processes affect the read threshold voltages. The aging of the cells causes voltage shifts which lead to high bit error rates (BER) with fixed pre-defined read thresholds. This work proposes two methods that aim at minimizing the BER by adjusting the read thresholds. Both methods utilize the number of errors detected in the codeword of an error correction code. It is demonstrated that the observed number of errors is a good measure for the voltage shifts and is utilized for the initial calibration of the read thresholds. The second approach is a gradual channel estimation method that utilizes the asymmetrical error probabilities for one-to-zero and zero-to-one errors caused by threshold calibration errors. Both methods are investigated utilizing the mutual information between the optimal read voltage and the measured error values.
Numerical results obtained from flash measurements show that these methods reduce the BER of NAND flash memories significantly.
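The initial calibration idea, choosing the read threshold that minimizes the number of errors observed through the error correction code, can be sketched as a toy simulation. All distributions and parameter values below are hypothetical, and the known data stands in for the ECC decoder's error count; this is not the estimator from the paper.

```python
import random

def read_errors(voltages, data, threshold):
    """Bit errors when a cell is read as 1 iff its voltage exceeds the
    threshold. In a real device this count would come from the ECC
    decoder rather than from known data."""
    return sum(int(v > threshold) != bit for v, bit in zip(voltages, data))

def calibrate_threshold(voltages, data, candidates):
    """Initial calibration: pick the candidate threshold minimizing the
    observed number of bit errors."""
    return min(candidates, key=lambda t: read_errors(voltages, data, t))

# Toy cell model: stored zeros around 1.0 V, ones around 2.0 V
# (hypothetical numbers; aging would shift both distributions).
random.seed(0)
data = [random.randint(0, 1) for _ in range(2000)]
voltages = [random.gauss(2.0 if b else 1.0, 0.15) for b in data]
best = calibrate_threshold(voltages, data, [0.5, 1.0, 1.5, 2.0, 2.5])
```

The midpoint candidate wins by a wide margin here; the paper's gradual estimation additionally exploits the asymmetry between one-to-zero and zero-to-one errors to track the drifting optimum.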
The McEliece cryptosystem is a promising candidate for post-quantum public-key encryption. In this work, we propose q-ary codes over Gaussian integers for the McEliece system together with a new channel model, the one-Mannheim-error channel, where errors are limited to Mannheim weight one. We investigate the channel capacity of this channel and discuss its relation to the McEliece system. The proposed codes are based on a simple product code construction and have a low-complexity decoding algorithm. For the one-Mannheim-error channel, these codes achieve a higher error correction capability than maximum distance separable codes with bounded minimum distance decoding. This improves the work factor regarding decoding attacks based on information-set decoding.
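Reduction modulo a Gaussian integer can be sketched in a few lines: divide in Q(i), round the quotient componentwise, and take the remainder of minimal norm. This is the standard construction for Gaussian-integer rings (errors of Mannheim weight one are then the values ±1 and ±i); the paper's specific code construction is not reproduced here.

```python
def gmod(a, m):
    """Reduce Gaussian integer a modulo m (both given as Python complex
    numbers with integer parts), returning a representative of minimal
    norm: a - round(a * conj(m) / |m|^2) * m."""
    n = m.real * m.real + m.imag * m.imag   # norm of the modulus
    q = a * m.conjugate() / n               # quotient in Q(i)
    q_rounded = complex(round(q.real), round(q.imag))
    return a - q_rounded * m
```

For example, modulo 2 + i (norm 5) the integer 3 reduces to i, since 3 - i is divisible by 2 + i in Z[i].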
Generalized Concatenated Codes over Gaussian and Eisenstein Integers for Code-Based Cryptography
(2021)
The code-based McEliece and Niederreiter cryptosystems are promising candidates for post-quantum public-key encryption. Recently, q-ary concatenated codes over Gaussian integers were proposed for the McEliece cryptosystem together with the one-Mannheim error channel, where the error values are limited to Mannheim weight one. Due to the limited error values, the codes over Gaussian integers achieve a higher error correction capability than maximum distance separable (MDS) codes with bounded minimum distance decoding. This higher error correction capability improves the work factor regarding decoding attacks based on information-set decoding. The codes also enable a low complexity decoding algorithm for decoding beyond the guaranteed error correction capability. In this work, we extend this coding scheme to codes over Eisenstein integers. These codes have advantages for the Niederreiter system. Additionally, we propose an improved code construction based on generalized concatenated codes. These codes extend the rate region where the work factor is beneficial compared to MDS codes. Moreover, generalized concatenated codes are more robust against structural attacks than ordinary concatenated codes.
The main objective of this paper is to revisit the Euro method in a critical and constructive way. We have analysed some arguments against the Euro method published recently in the literature, as well as some other relevant aspects of the SUT-Euro and SUT-RAS methods not covered before. Although the Euro method is not perfect, we believe that there is still room for its use in updating/regionalizing Supply and Use tables.
This paper examines the interdependencies of tourism, Buddhism and sustainability, combining in-depth interviews with Buddhism experts and non-participant observation in a mixed-method approach. The area under investigation is the Alpine region of Austria, Germany and Switzerland, since it is home to Asian and Western forms of Buddhism tourism alike. Results show that Buddhism tourism, as a value-based activity, is on the one hand not commercial; on the other hand, since demand is rising, tendencies towards more commercial forms can be observed. As a modest form of activity, Buddhism tourism does not shape the landscape of the Alpine area, and by its nature it incorporates sustainability.
Despite the increased attention dedicated to research on the antecedents and determinants of new venture survival in entrepreneurship, defining and capturing survival as an outcome represents a challenge in quantitative studies. This paper creates awareness for ventures being inactive while still classified as surviving based on the data available. We describe this as the ‘living dead’ phenomenon, arguing that it yields potential effects on the empirical results of survival studies. Based on a systematic literature review, we find that this issue of inactivity has not been sufficiently considered in previous new venture survival studies. Based on a sample of 501 New Technology-Based Firms, we empirically illustrate that the classification of living dead ventures into either survived or failed can impact the factors determining survival. On this basis, we contribute to an understanding of the issue by defining the ‘living dead’ phenomenon and by proposing recommendations for research practice to solve this issue in survival studies, taking the data source, the period under investigation and the sample size into account.
In this article, the collection of classes of matrices presented in [J. Garloff, M. Adm, and J. Titi, A survey of classes of matrices possessing the interval property and related properties, Reliab. Comput. 22:1-14, 2016] is continued. That is, given an interval of matrices with respect to a certain partial order, it is desired to know whether a special property of the entire matrix interval can be inferred from some of its element matrices lying on the vertices of the matrix interval. The interval property of some matrix classes found in the literature is presented, and the interval property of further matrix classes including the ultrametric, the conditionally positive semidefinite, and the infinitely divisible matrices is given for the first time. For the inverse M-matrices the cardinality of the required set of vertex matrices known so far is significantly reduced.
Positive systems play an important role in systems and control theory and have found applications in multiagent systems, neural networks, systems biology, and more. Positive systems map the nonnegative orthant to itself (and also the non-positive orthant to itself). In other words, they map the set of vectors with zero sign variation to itself. In this article, discrete-time linear systems that map the set of vectors with up to k-1 sign variations to itself are introduced. For the special case k = 1 these reduce to discrete-time positive linear systems. Properties of these systems are analyzed using tools from the theory of sign-regular matrices. In particular, it is shown that almost every solution of such systems converges to the set of vectors with up to k-1 sign variations. It is also shown that these systems induce a positive dynamics of k-dimensional parallelotopes.
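The sign-variation count that defines these sets is elementary to compute. A minimal helper (our own illustration) for s^-(v), the number of sign changes in a vector after discarding zero entries:

```python
def sign_variations(v):
    """Number of sign changes in v after discarding zero entries
    (the standard s^-(v) count)."""
    signs = [1 if x > 0 else -1 for x in v if x != 0]
    return sum(1 for a, b in zip(signs, signs[1:]) if a != b)
```

For k = 1 the invariant set {v : s^-(v) = 0} is exactly the union of the nonnegative and nonpositive orthants, recovering the positive-systems case described above.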
Matrix methods for the computation of bounds for the range of a complex polynomial and its modulus over a rectangular region in the complex plane are presented. The approach relies on the expansion of the given polynomial into Bernstein polynomials. The results are extended to multivariate complex polynomials and rational functions.
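For a univariate real polynomial on [0,1], the Bernstein coefficients immediately yield an enclosure of the range: the range lies between the smallest and largest coefficient. The sketch below shows only this standard real-valued property; it does not implement the matrix methods or the complex-polynomial and rational-function cases of the paper.

```python
from math import comb

def bernstein_coefficients(a):
    """Bernstein coefficients on [0,1] of p(x) = sum_j a[j] * x**j:
    b_i = sum_{j<=i} C(i,j)/C(n,j) * a[j]."""
    n = len(a) - 1
    return [sum(comb(i, j) / comb(n, j) * a[j] for j in range(i + 1))
            for i in range(n + 1)]

def range_bounds(a):
    """Enclosure of the range of p over [0,1]: min and max Bernstein
    coefficient. The bounds contain the true range but need not be tight."""
    b = bernstein_coefficients(a)
    return min(b), max(b)
```

For p(x) = x(1-x) the enclosure is [0, 0.5] while the true range is [0, 0.25], illustrating that the bounds are conservative; the endpoint coefficients b_0 = p(0) and b_n = p(1) are always attained.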
The class of square matrices of order n having a negative determinant and all their minors up to order n-1 nonnegative is considered. A characterization of these matrices is presented which provides an easy test based on the Cauchon algorithm for their recognition. Furthermore, the maximum allowable perturbation of the entry in position (2,2) such that the perturbed matrix remains in this class is given. Finally, it is shown that all matrices lying between two matrices of this class with respect to the checkerboard ordering are contained in this class, too.
In this paper, rectangular matrices whose minors of a given order have the same strict sign are considered and sufficient conditions for their recognition are presented. The results are extended to matrices whose minors of a given order have the same sign or are allowed to vanish. A matrix A is called oscillatory if all its minors are nonnegative and there exists a positive integer k such that A^k has all its minors positive. As a generalization, a new type of matrices, called oscillatory of a specific order, is introduced and some of their properties are investigated.
This paper describes the rationale and the development of a structured digital approach for measuring corporate environmental sustainability using performance metrics.
Preserving our environment has become indispensable in today's age, not least in the corporate world. Currently, sustainability is mostly considered only rudimentarily in companies, often amounting to little more than written phrases on the website. This will no longer be sufficient in the future, which is why companies should record sustainability on a numerical basis. Based on the development of a workable concept for companies, a small empirical study was carried out, which can be used to numerically measure the sustainability performance of companies. Two utility analyses were completed.
One of them was supplemented by expert interviews. Well-known practitioners from the business world were interviewed and asked for their assessment of ecological performance indicators. The result of the research is an indicator-based concept that can be applied in corporate practice to determine ecological sustainability performance.
Sustainable technologies are being increasingly used in various areas of human life. While they have a multitude of benefits, they are especially useful in health monitoring, particularly for certain groups of people, such as the elderly. However, several issues still need to be addressed before their use becomes widespread. This work aims to clarify the aspects that are of great importance for increasing the acceptance of the use of this type of technology in the elderly. In addition, we aim to clarify whether the technologies that are already available are able to ensure acceptable accuracy and whether they could replace some of the manual approaches that are currently being used. A two-week study with people 65 years of age and over was conducted to address the questions posed here, and the results were evaluated. It was demonstrated that simplicity of use and automatic functioning play a crucial role. It was also concluded that technology cannot yet completely replace traditional methods such as questionnaires in some areas. Although the technologies that were tested were classified as being “easy to use”, the elderly population in the current study indicated that they were not sure that they would use these technologies regularly in the long term because the added value is not always clear, among other issues. Therefore, awareness-raising must take place in parallel with the development of technologies and services.
Introduction. Despite its high accuracy, polysomnography (PSG) has several drawbacks for diagnosing obstructive sleep apnea (OSA). Consequently, multiple portable monitors (PMs) have been proposed. Objective. This systematic review aims to investigate the current literature to analyze the sets of physiological parameters captured by a PM to select the minimum number of such physiological signals while maintaining accurate results in OSA detection. Methods. Inclusion and exclusion criteria for the selection of publications were established prior to the search. The evaluation of the publications was made based on one central question and several specific questions. Results. The abilities to detect hypopneas, sleep time, or awakenings were some of the features studied to investigate the full functionality of the PMs to select the most relevant set of physiological signals. Based on the physiological parameters collected (one to six), the PMs were classified into sets according to the level of evidence. The advantages and the disadvantages of each possible set of signals were explained by answering the research questions proposed in the methods. Conclusions. The minimum number of physiological signals detected by PMs for the detection of OSA depends mainly on the purpose and context of the sleep study. The set of three physiological signals showed the best results in the detection of OSA.
Anthropologists’ arrival stories have long served to justify, naturalize, and domesticate—often through humor—the fraught moment of entering unasked into other people's lives. This textual convention has been thoroughly critiqued, but no comparable attention has been paid to the analogous moment of departure from the field. The digital age enables both sides to maintain contact, a shift that negates the finality of earlier departures. This article engages the changes wrought by digital media that allow us to remain connected to the field. While this seems a humane affordance, it also means that it is no longer feasible to cleanly sever ties established ‘there’. When anthropologists leave the field, the field will likely follow them—on Facebook or Instagram.
The State of Custom
(2021)
In our article, we engage with the anthropologist Gerd Spittler’s pathbreaking article “Dispute settlement in the shadow of Leviathan” (1980), in which he strives to integrate the existence of state courts (the eponymous Leviathan’s shadow) in (post-)colonial Africa into the analysis of non-state court legal practices. According to Spittler, it is because of undesirable characteristics inherent in state courts that the disputing parties tended to involve mediators rather than pursue a state court judgment. The less people liked state courts, the more likely they were to (re-)turn to dispute settlement procedures. How has this situation changed in the four decades since the article's publication? We relate his findings to contemporary debates in legal anthropology that investigate the relationship between disputing, law and the state. We also show through our own work in Africa and Asia, particularly in Southern Ethiopia and Kyrgyzstan, in what ways Spittler’s by now classical contribution to the field of legal anthropology can be made fruitful for a contemporary anthropology of the state at a time when not only (legal) anthropology has changed, but especially the way states deal with putatively “customary” forms of dispute settlement.
Electricity generation from renewable energies often fluctuates due to weather and other natural effects. The instrument of control energy (balancing energy) can compensate for these fluctuations and thus guarantee the system and supply security of the electricity grid. Luxury hotels on tourist islands could react to fluctuations in electricity generation and provide balancing energy. The purpose of this paper is to investigate the electricity consumption of luxury hotels to assess their potential as a source for providing control energy.
The main objective of this paper is to revisit Temursho’s (2020) article “On the Euro method” in a critical and constructive way. We have praised part of his work and, at the same time, we have analysed some of his arguments against the Euro method and against the work published by Valderas-Jaramillo et al. (2019). Moreover, we have analysed some other relevant aspects of the SUT-Euro and SUT-RAS methods not covered in Temursho (2020). Temursho (2020) seems to conclude that no one should use the Euro method again because of its limitations and drawbacks. However, although the Euro method is not perfect, we believe that there is still room for its use in updating/regionalizing supply and use tables.
Background:
One of the most promising health care development areas is introducing telemedicine services and creating solutions based on blockchain technology. The study of systems combining both these domains indicates the ongoing expansion of digital technologies in this market segment.
Objective:
This paper aims to review the feasibility of blockchain technology for telemedicine.
Methods:
The authors identified relevant studies via systematic searches of databases including PubMed, Scopus, Web of Science, IEEE Xplore, and Google Scholar. The suitability of each for inclusion in this review was assessed independently. Owing to the lack of publications, available blockchain-based tokens were discovered via conventional web search engines (Google, Yahoo, and Yandex).
Results:
Of the 40 discovered projects, only 18 met the selection criteria. The 5 most prevalent features of the available solutions (N=18) were medical data access (14/18, 78%), medical service processing (14/18, 78%), diagnostic support (10/18, 56%), payment transactions (10/18, 56%), and fundraising for telemedical instrument development (5/18, 28%).
Conclusions:
These different features (eg, medical data access, medical service processing, epidemiology reporting, diagnostic support, and treatment support) allow us to discuss the possibilities for integration of blockchain technology into telemedicine and health care on different levels. In this area, a wide range of tasks can be identified that could be accomplished based on digital technologies using blockchains.
Specific climate adaptation and resilience measures can be efficiently designed and implemented at regional and local levels. Climate and environmental databases are critical for achieving the sustainable development goals (SDGs) and for efficiently planning and implementing appropriate adaptation measures. Available federated and distributed databases can serve as necessary starting points for municipalities to identify needs, prioritize resources, and allocate investments, taking into account often tight budget constraints. High-quality geospatial, climate, and environmental data are now broadly available and remote sensing data, e.g., Copernicus services, will be critical. There are forward-looking approaches to use these datasets to derive forecasts for optimizing urban planning processes for local governments. On the municipal level, however, the existing data have only been used to a limited extent. There are no adequate tools for urban planning with which remote sensing data can be merged and meaningfully combined with local data and further processed and applied in municipal planning and decision-making. Therefore, our project CoKLIMAx aims at the development of new digital products, advanced urban services, and procedures, such as the development of practical technical tools that capture different remote sensing and in-situ data sets for validation and further processing. CoKLIMAx will be used to develop a scalable toolbox for urban planning to increase climate resilience. Focus areas of the project will be water (e.g., soil sealing, stormwater drainage, retention, and flood protection), urban (micro)climate (e.g., heat islands and air flows), and vegetation (e.g., greening strategy, vegetation monitoring/vitality). To this end, new digital process structures will be embedded in local government to enable better policy decisions for the future.
Twenty-first century infrastructure needs to respond to changing demographics, becoming climate neutral, resilient, and economically affordable, while remaining a driver for development and shared prosperity. However, the infrastructure sector remains one of the least innovative and digitalized, plagued by delays, cost overruns, and benefit shortfalls. The authors assessed trends and barriers in the planning and delivery of infrastructure based on secondary research, qualitative interviews with internationally leading experts, and expert workshops. The analysis concludes that the root cause of the industry’s problems is the prevailing fragmentation of the infrastructure value chain and the lack of a long-term vision for infrastructure. To help overcome these challenges, an integration of the value chain is needed. The authors propose that this could be achieved through a use-case-based as well as vision- and governance-driven creation of federated digital platforms applied to infrastructure projects, and outline a concept. Digital platforms enable full-lifecycle participation and responsible governance guided by a shared infrastructure vision. This paper has contributed as a policy recommendation to the Group of Twenty (G20) in 2021.
A nonlinear mathematical model for the dynamics of permanent magnet synchronous machines with interior magnets is discussed. The model of the current dynamics captures saturation and dependency on the rotor angle. Based on the model, a flatness-based field-oriented closed-loop controller and a feed-forward compensation of torque ripples are derived. Effectiveness and robustness of the proposed algorithms are demonstrated by simulation results.
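As a reference point, the standard dq-frame current dynamics of an interior permanent magnet synchronous machine with constant parameters can be written down directly; the paper's model extends this baseline with saturation and rotor-angle dependence. The parameter names below are our own illustrative choices, not the paper's notation.

```python
def pmsm_didt(i_d, i_q, v_d, v_q, omega, R, L_d, L_q, psi_f):
    """Time derivatives of the dq currents for an interior PMSM with
    constant inductances: electrical angular speed omega, stator
    resistance R, d/q inductances L_d/L_q, magnet flux linkage psi_f."""
    di_d = (v_d - R * i_d + omega * L_q * i_q) / L_d
    di_q = (v_q - R * i_q - omega * (L_d * i_d + psi_f)) / L_q
    return di_d, di_q
```

At steady state the voltages must cancel the resistive drop and the cross-coupling and back-EMF terms, which is exactly what a field-oriented feed-forward term compensates.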
The present work proposes the use of modern ICT technologies, such as smartphones, NFC, the internet, and web technologies, to help patients carry out their therapies. The implemented system provides a calendar with reminders for medication intake, ensures drug identification through NFC, and allows remote assistance from healthcare staff and family members to check and manage the therapy in real time. The system also provides centralized information on the patient's therapeutic situation, helpful in choosing new compatible therapies.
Modular arithmetic over integers is required for many cryptography systems. Montgomery reduction is an efficient algorithm for the modulo reduction after a multiplication. Typically, Montgomery reduction is used for rings of ordinary integers. In contrast, we investigate the modular reduction over rings of Gaussian integers. Gaussian integers are complex numbers where the real and imaginary parts are integers. Rings over Gaussian integers are isomorphic to ordinary integer rings. In this work, we show that Montgomery reduction can be applied to Gaussian integer rings. Two algorithms for the precision reduction are presented. We demonstrate that the proposed Montgomery reduction enables an efficient Gaussian integer arithmetic that is suitable for elliptic curve cryptography. In particular, we consider the elliptic curve point multiplication according to the randomized initial point method which is protected against side-channel attacks. The implementation of this protected point multiplication is significantly faster than comparable algorithms over ordinary prime fields.
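For ordinary odd moduli, Montgomery reduction (REDC) replaces the division of a modular reduction by shifts and multiplications; the Gaussian-integer variant of the paper builds on this principle. A minimal integer sketch of textbook REDC, not the proposed algorithms:

```python
def montgomery_setup(n, r_bits):
    """Precompute n' = -n^{-1} mod 2^r_bits (n must be odd).
    Uses Python 3.8+ modular inverse via pow(n, -1, R)."""
    R = 1 << r_bits
    return (-pow(n, -1, R)) % R

def montgomery_reduce(t, n, r_bits, n_prime):
    """REDC: given 0 <= t < n * 2^r_bits, return t * R^{-1} mod n,
    where R = 2^r_bits. Only shifts, masks, and multiplications."""
    R_mask = (1 << r_bits) - 1
    m = ((t & R_mask) * n_prime) & R_mask  # m = t * n' mod R
    u = (t + m * n) >> r_bits              # exact division by R
    return u - n if u >= n else u
```

To multiply modulo n, operands are first mapped into the Montgomery domain (a -> aR mod n); each product is then followed by one REDC instead of a costly division.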