Apnea is a sleep disorder characterized by breathing interruptions during sleep, impacting cardiorespiratory function and overall health. Traditional diagnostic methods, such as polysomnography (PSG), are obtrusive, motivating noninvasive monitoring approaches. This study aims to develop and validate a novel sleep monitoring system using noninvasive sensor technology to estimate cardiorespiratory parameters and detect sleep apnea. We designed a seamless monitoring system integrating noncontact force-sensitive resistor sensors to collect ballistocardiogram signals associated with cardiorespiratory activity. We enhanced the sensor's sensitivity and reduced noise by designing a new edge-measuring sensor concept that uses a hemispherical dome and a mechanical hanger to distribute the force and mechanically amplify the micromovements caused by cardiac and respiratory activity. In total, we deployed three edge-measuring sensors, two under the thoracic region and one under the abdominal region. The system is supported by onboard signal preprocessing in multiple physical layers deployed under the mattress. We collected data in four sleeping positions from 16 subjects and analyzed them using ensemble empirical mode decomposition (EEMD) to avoid frequency mixing. We also developed an adaptive thresholding method to identify sleep apnea. The error was reduced to 3.98 beats/min (BPM) in heart rate (HR) estimation and 1.43 breaths/min in respiration rate estimation. Apnea was detected with an accuracy of 87%. We optimized the system such that a single edge-measuring sensor can measure the cardiorespiratory parameters. This reduction in complexity and simplification of the instructions for use show excellent potential for continuous in-home monitoring.
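The abstract does not specify the adaptive thresholding method; as a purely illustrative sketch (the window length and threshold factor below are hypothetical assumptions, not the study's values), a moving-window scheme might flag segments whose respiration envelope collapses relative to its recent history:

```python
# Illustrative only: a minimal adaptive-threshold apnea detector on a
# synthetic respiration envelope. The window length and threshold factor
# are hypothetical choices, not parameters from the study.
from statistics import median

def detect_apnea(envelope, fs=1, window_s=30, factor=0.3):
    """Flag samples whose amplitude drops below `factor` times the
    median envelope over the preceding `window_s` seconds."""
    w = int(window_s * fs)
    flags = []
    for i, x in enumerate(envelope):
        past = envelope[max(0, i - w):i] or [x]  # bootstrap the first sample
        flags.append(x < factor * median(past))
    return flags

# Synthetic envelope: normal breathing (1.0) with an apneic stretch (0.05)
sig = [1.0] * 60 + [0.05] * 20 + [1.0] * 60
flags = detect_apnea(sig)
```

Because the threshold tracks the recent median, the same detector adapts to subjects with different baseline breathing amplitudes.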
We quantify the effects of GATT/WTO membership on trade and welfare. Using an extensive database covering manufacturing trade for 186 countries over the period 1980–2016, we find that the average partial equilibrium impact of GATT/WTO membership on trade among member countries is large, positive, and significant. We contribute to the literature by obtaining country-specific estimates, which vary widely across the countries in our sample, with poorer members benefitting more. Using these estimates, we simulate the general equilibrium effects of GATT/WTO membership on welfare, which are sizable and heterogeneous across members. We show that countries not experiencing positive trade effects from joining the GATT/WTO can still gain in terms of welfare, due to lower import prices and higher export demand.
Study design:
Retrospective, mono-centric cohort research study.
Objectives:
The purpose of this study is to validate a novel artificial intelligence (AI)-based algorithm against human-generated ground truth for radiographic parameters of adolescent idiopathic scoliosis (AIS).
Methods:
An AI-algorithm was developed that is capable of detecting anatomical structures of interest (clavicles, cervical, thoracic, and lumbar spine, and sacrum) and calculating essential radiographic parameters in AP spine X-rays fully automatically. The evaluated parameters included T1-tilt, clavicle angle (CA), coronal balance (CB), lumbar modifier, and Cobb angles in the proximal thoracic (C-PT), thoracic, and thoracolumbar regions. Measurements from 2 experienced physicians on 100 preoperative AP full spine X-rays of AIS patients were used as ground truth and to evaluate inter-rater and intra-rater reliability. The agreement between human raters and AI was compared by means of single-measure intra-class correlation coefficients (ICC; absolute agreement; values above .75 rated as excellent), mean error, and additional statistical metrics.
Results:
The comparison between human raters resulted in excellent ICC values for intra- (range: .97-1) and inter-rater (.85-.99) reliability. The algorithm was able to determine all parameters in 100% of images with excellent ICC values (.78-.98). Consistently with the human raters, ICC values were typically smallest for C-PT (eg, rater 1A vs AI: .78, mean error: 4.7°) and largest for CB (.96, -.5 mm) as well as CA (.98, .2°).
Conclusions:
The AI-algorithm shows excellent reliability and agreement with human raters for coronal parameters in preoperative full spine images. The reliability and speed offered by the AI-algorithm could contribute to the efficient analysis of large datasets (eg, registry studies) and measurements in clinical practice.
This paper introduces the third update/release of the Global Sanctions Data Base (GSDB-R3). The GSDB-R3 extends the period of coverage from 1950–2019 to 1950–2022, which includes two special periods—COVID-19 and the new sanctions against Russia. This update of the GSDB contains a total of 1325 cases. In response to multiple inquiries and requests, the GSDB-R3 has been amended with a new variable that distinguishes between unilateral and multilateral sanctions. As before, the GSDB comes in two versions, case-specific and dyadic, which are freely available upon request at GSDB@drexel.edu. To highlight one of the new features of the GSDB, we estimate the heterogeneous effects of unilateral and multilateral sanctions on trade. We also obtain estimates of the effects on trade of the 2014 sanctions on Russia.
This study aims to investigate the utilization of Bayesian techniques for the calibration of micro-electro-mechanical system (MEMS) accelerometers. These devices have garnered substantial interest in various practical applications and typically require calibration through error-correcting functions. The parameters of these error-correcting functions are determined during a calibration process. However, due to various sources of noise, these parameters cannot be determined with precision, making it desirable to incorporate uncertainty in the calibration models. Bayesian modeling offers a natural and complete way of reflecting uncertainty by treating the model parameters as variables rather than fixed values. In addition, Bayesian modeling enables the incorporation of prior knowledge, making it an ideal choice for calibration. Nevertheless, it is infrequently used in sensor calibration. This study introduces Bayesian methods for the calibration of MEMS accelerometer data in a straightforward manner using recent advances in probabilistic programming.
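As a hedged illustration of the core idea — not the paper's implementation — a conjugate Bayesian linear calibration with a Gaussian prior and known noise variance already treats the scale factor and bias as random variables with a full posterior; the probabilistic-programming formulation in the paper would replace this closed form with sampling. The model, prior, and noise level below are assumptions:

```python
# Hedged sketch: Bayesian calibration of an accelerometer scale factor s
# and bias b under the model  y = s*a + b + noise,  noise ~ N(0, sigma^2).
# With a zero-mean Gaussian prior on (s, b) and known sigma, the posterior
# is Gaussian and available in closed form (conjugacy).

def posterior(a, y, sigma2=0.01, prior_var=100.0):
    # Sufficient statistics X^T X and X^T y for design rows (a_i, 1)
    Saa = sum(ai * ai for ai in a)
    Sa = sum(a)
    n = len(a)
    Say = sum(ai * yi for ai, yi in zip(a, y))
    Sy = sum(y)
    # Posterior precision = prior precision + X^T X / sigma^2
    p = 1.0 / prior_var
    A = [[p + Saa / sigma2, Sa / sigma2],
         [Sa / sigma2, p + n / sigma2]]
    rhs = [Say / sigma2, Sy / sigma2]  # zero prior mean assumed
    # Solve the 2x2 system A m = rhs for the posterior mean (s, b)
    det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    s = (rhs[0] * A[1][1] - rhs[1] * A[0][1]) / det
    b = (A[0][0] * rhs[1] - A[1][0] * rhs[0]) / det
    return s, b

# Synthetic calibration data: true scale 1.02, true bias -0.05
a_true = [-1.0, -0.5, 0.0, 0.5, 1.0]
y_meas = [1.02 * ai - 0.05 for ai in a_true]  # noise-free for brevity
s_hat, b_hat = posterior(a_true, y_meas)
```

The posterior covariance (the inverse of `A`) is what carries the calibration uncertainty forward into subsequent measurements, which is the benefit the abstract highlights over point estimates.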
Insecurity Refactoring is a change to the internal structure of software that injects a vulnerability without changing the observable behavior in a normal use-case scenario. An implementation of Insecurity Refactoring is formally described that injects vulnerabilities into source code projects using static code analysis. It creates learning examples with source code patterns from known vulnerabilities.
Insecurity Refactoring is achieved by creating an Adversary Controlled Input Dataflow tree based on a Code Property Graph. The tree is used to find possible injection paths, which are then transformed to inject vulnerabilities. Inserting data flow patterns introduces different code patterns from related Common Vulnerabilities and Exposures (CVE) reports. The approach is evaluated on 307 open source projects. Additionally, insecurity-refactored projects are deployed in virtual machines to serve as learning examples. Different static code analysis tools, dynamic tools, and manual inspections are used on the modified projects to confirm the presence of vulnerabilities.
The results show that vulnerabilities can be injected in 8.1% of the open source projects. Different inspected code patterns from CVE reports can be inserted using corresponding data flow patterns. Furthermore, the results reveal that the injected vulnerabilities are useful for a small sample of attendees (n=16). Insecurity Refactoring is useful for automatically generating learning examples to improve software security training. It uses real projects as a base, while the injected vulnerabilities stem from real CVE reports. This makes the injected vulnerabilities unique and realistic.
Short-Term Density Forecasting of Low-Voltage Load using Bernstein-Polynomial Normalizing Flows
(2023)
The transition to a fully renewable energy grid requires better forecasting of demand at the low-voltage level to increase efficiency and ensure reliable control. However, high fluctuations and increasing electrification cause huge forecast variability, not reflected in traditional point estimates. Probabilistic load forecasts take uncertainties into account and thus allow more informed decision-making for the planning and operation of low-carbon energy systems. We propose an approach for flexible conditional density forecasting of short-term load based on Bernstein polynomial normalizing flows, where a neural network controls the parameters of the flow. In an empirical study with 3639 smart meter customers, our density predictions for 24h-ahead load forecasting compare favorably against Gaussian and Gaussian mixture densities. Furthermore, they outperform a non-parametric approach based on the pinball loss, especially in low-data scenarios.
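A minimal sketch of the building block named in the title, assuming the standard Bernstein basis on [0, 1]: a Bernstein polynomial with non-decreasing coefficients yields a monotone transformation, which is what makes it usable inside a normalizing flow. In the paper a neural network predicts the coefficients from covariates; here they are fixed by hand:

```python
# Hedged sketch: the monotone map at the heart of a Bernstein-polynomial
# normalizing flow. Non-decreasing coefficients theta guarantee that
# h(z) = sum_k theta_k * B_{k,M}(z) is monotone increasing on [0, 1].
from math import comb

def bernstein_flow(z, theta):
    """Evaluate h(z) using the Bernstein basis of degree M = len(theta)-1."""
    M = len(theta) - 1
    return sum(t * comb(M, k) * z**k * (1 - z)**(M - k)
               for k, t in enumerate(theta))

theta = [-3.0, -1.0, 0.5, 2.0, 4.0]  # hand-picked, non-decreasing
zs = [i / 10 for i in range(11)]
hs = [bernstein_flow(z, theta) for z in zs]
```

Note that h(0) equals the first coefficient and h(1) the last, so the coefficient range directly controls the support of the transformed variable.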
Background
This is a systematic review protocol to identify automated features, applied technologies, and algorithms in the electronic early warning/track and triage system (EW/TTS) developed to predict clinical deterioration (CD).
Methodology
This study will be conducted using PubMed, Scopus, and Web of Science databases to evaluate the features of EW/TTS in terms of their automated features, technologies, and algorithms. To this end, we will include any English articles reporting an EW/TTS without time limitation. Retrieved records will be independently screened by two authors and relevant data will be extracted from studies and abstracted for further analysis. The included articles will be evaluated independently using the JBI critical appraisal checklist by two researchers.
Discussion
This study is an effort to identify the automated features available in electronic EW/TTSs and to shed light on the applied technologies, the systems' level of automation, and the utilized algorithms, in order to smooth the road toward fully automated EW/TTSs as a potential solution for preventing CD and its adverse consequences.
Recognizing Human Activity of Daily Living Using a Flexible Wearable for 3D Spine Pose Tracking
(2023)
The World Health Organization recognizes physical activity as an influencing domain on quality of life. Monitoring, evaluating, and supervising it with wearable devices can contribute to the early detection and progress assessment of diseases such as Alzheimer's, to rehabilitation and exercises in telehealth, and to detecting abrupt events such as a fall. In this work, we use a non-invasive and non-intrusive flexible wearable device for 3D spine pose measurement to monitor and classify physical activity. We develop a comprehensive protocol that consists of 10 indoor, 4 outdoor, and 8 transition-state activities in the three categories of static, dynamic, and transition in order to evaluate the applicability of the flexible wearable device in human activity recognition. We implement and compare the performance of three neural networks: long short-term memory (LSTM), convolutional neural network (CNN), and a hybrid model (CNN-LSTM). For ground truth, we use accelerometer and strip data. LSTM reached an overall classification accuracy of 98% for all activities. The CNN model with accelerometer data delivered better performance in lying down (100%), static (standing = 82%, sitting = 75%), and dynamic (walking = 100%, running = 100%) positions. Data fusion improved the outputs in standing (92%) and sitting (94%), while LSTM with the strip data yielded a better performance in bending-related activities (bending forward = 49%, bending backward = 88%, bending right = 92%, and bending left = 100%). The combination of data fusion and principal component analysis further strengthened the output (bending forward = 100%, bending backward = 89%, bending right = 100%, and bending left = 100%). Moreover, the LSTM model detected the first transition state, which is similar to a fall, with an accuracy of 84%.
The results show that the wearable device can be used in a daily routine for activity monitoring, recognition, and exercise supervision, but still needs further improvement for fall detection.
Multi-faceted stresses of social, environmental, and economic nature are increasingly challenging the existence and sustainability of our societies. Cities in particular are disproportionately threatened by global issues such as climate change, urbanization, population growth, and air pollution. In addition, urban space is often too limited to effectively develop sustainable, nature-based solutions while accommodating growing populations. This research aims to provide new methodologies by proposing lightweight green bridges in inner-city areas as an effective land value capture mechanism. Geometry analysis was performed using geospatial and remote sensing data to identify geometrically feasible locations for green bridges. A multi-criteria decision analysis was applied to identify suitable locations for green bridges, investigating Central European urban centers with a focus on German cities as representative examples. A cost-benefit analysis was performed to assess the economic feasibility using a case study. The geometry analysis identified 3249 locations in German cities that were geometrically feasible for implementing a green bridge. Sample locations from the geometry analysis were validated for their implementation potential. Multi-criteria decision analysis was used to select 287 sites that fall into the highest suitability class based on several criteria. The cost-benefit analysis of the case study showed that the market value of the property alone can easily outweigh the capital and maintenance costs of a green bridge, while the indirect (monetary) benefits of the green space continue to increase the overall value of the green bridge property, including its neighborhood, over time. Hence, we strongly recommend lightweight green bridges as financially sustainable, nature-based solutions in cities worldwide.
In tomato drying, degradation in final quality may occur depending on the drying method used and the predrying preparation. Hence, this research was conducted to evaluate the effect of different predrying treatments on the physicochemical quality and drying kinetics of twin-layer solar-tunnel-dried tomato slices. During the experimental work, tomato slices of var. Galilea were used. As predrying treatments, 0.5% calcium chloride (CaCl2), 0.5% ascorbic acid (C6H8O6), 0.5% citric acid (C6H8O7), and 0.5% sodium chloride (NaCl) were used. The tomato samples were sliced to 5 mm thickness, soaked in the pretreatment solutions for ten minutes, and dried in a twin-layer solar tunnel dryer under the weather conditions of Jimma, Ethiopia. Untreated samples were used as a control. The moisture losses from the samples were monitored by weighing samples from each treatment at 2 h intervals. SAS statistical software version 9.2 was used to analyze the data on the physicochemical quality of tomato slices in a CRD with three replications. The experimental results showed that dried tomato slices pretreated with 0.5% ascorbic acid gave the best retention of vitamin C and total phenolic content with a high sugar/acid ratio. Better retention of lycopene and faster drying were observed in dried tomato slices pretreated with 0.5% sodium chloride, and pretreating tomatoes with 0.5% citric acid resulted in better color values than the other treatments. Compared to the control, pretreating significantly preserved the overall quality of dried tomato slices and increased the moisture removal rate in the twin-layer solar tunnel dryer.
As interest in the investigation of possible sources and environmental sinks of technology-critical elements (TCEs) continues to grow, the demand for reliable background level information of these elements in environmental matrices increases. In this study, a time series of ten years of sediment samples from two different regions of the German North Sea were analyzed for their mass fractions of Ga, Ge, Nb, In, REEs, and Ta (grain size fraction < 20 µm). Possible regional differences were investigated in order to determine preliminary reference values for these regions. Throughout the investigated time period, only minor variations in the mass fractions were observed and both regions did not show significant differences. Calculated local enrichment factors ranging from 0.6 to 2.3 for all TCEs indicate no or little pollution in the investigated areas. Consequently, reference values were calculated using two different approaches (Median + 2 median absolute deviation (M2MAD) and Tukey inner fence (TIF)). Both approaches resulted in consistent threshold values for the respective regions ranging from 158 µg kg−1 for In to 114 mg kg−1 for Ce. As none of the threshold values exceed the observed natural variation of TCEs in marine and freshwater sediments, they may be considered baseline values of the German Bight for future studies.
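The two threshold approaches named above can be sketched as follows; the data and the simple quartile picks are illustrative assumptions, not the study's sediment values or its exact quantile estimator:

```python
# Hedged sketch of the two threshold rules named in the text:
# M2MAD = median + 2 * median absolute deviation, and the
# Tukey inner fence TIF = Q3 + 1.5 * (Q3 - Q1).
from statistics import median

def m2mad(values):
    m = median(values)
    mad = median(abs(v - m) for v in values)
    return m + 2 * mad

def tukey_inner_fence(values):
    s = sorted(values)
    n = len(s)
    q1 = s[n // 4]          # simple quartile picks for illustration
    q3 = s[(3 * n) // 4]
    return q3 + 1.5 * (q3 - q1)

data = [10, 11, 12, 12, 13, 13, 14, 15]  # synthetic, not sediment data
t1 = m2mad(data)             # upper threshold via M2MAD
t2 = tukey_inner_fence(data) # upper threshold via TIF
```

Values above the chosen threshold would be treated as exceeding the regional background; the study reports that both rules gave consistent results.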
The present contribution proposes a novel method for the indirect measurement of the ground reaction forces (GRF) induced by a pedestrian walking on a vibrating structure. Its main idea is to formulate and solve an inverse problem in the time domain with the aim of finding the optimal time-dependent moving point force describing the GRF of a pedestrian (input data), which minimizes the difference between a set of computed and a set of measured structural responses (output data). The solution of the inverse problem is addressed by means of a gradient-based trust region optimization strategy. The moving force identification process uses output data from a set of acceleration and displacement time histories recorded at different locations on the structure. The practicability and accuracy of the proposed GRF identification method are first evaluated using simulated measurements, which reveal high accuracy, robustness, and stability of the results even at high noise levels. Subsequently, a comprehensive experimental validation using real measurement data recorded on the HUMVIB experimental footbridge on the campus of the Technical University of Darmstadt (Germany) was carried out. Besides the conventional sensors for the acquisition of structural responses, an array of biomechanical force plates as well as classical load cells at the supports were used to measure the reference GRFs needed in the experimental validation process. The results show that the proposed method delivers a very accurate estimation of the GRF induced by a subject walking on the experimental structure.
In this letter, we present an approach to building a new generalized multistream spatial modulation (GMSM) system, where the information is conveyed by two active antennas with signal indices, using all possible active antenna combinations. The signal constellations associated with these antennas may have different sizes. In addition, four-dimensional hybrid frequency-phase modulated signals are utilized in GMSM. Examples of GMSM systems are given, and computer simulation results are presented for transmission over Rayleigh and deep Nakagami-m flat-fading channels when maximum-likelihood detection is used. The presented results indicate a significant improvement in performance compared with the best-known similar systems.
Reed-Muller (RM) codes have recently regained some interest in the context of low latency communications and due to their relation to polar codes. RM codes can be constructed based on the Plotkin construction. In this work, we consider concatenated codes based on the Plotkin construction, where extended Bose-Chaudhuri-Hocquenghem (BCH) codes are used as component codes. This leads to improved code parameters compared to RM codes. Moreover, this construction is more flexible concerning the attainable code rates. Additionally, new soft-input decoding algorithms are proposed that exploit the recursive structure of the concatenation and the cyclic structure of the component codes. First, we consider the decoding of the cyclic component codes and propose a low complexity hybrid ordered statistics decoding algorithm. Next, this algorithm is applied to list decoding of the Plotkin construction. The proposed list decoding approach achieves near-maximum-likelihood performance for codes with medium lengths. The performance is comparable to state-of-the-art decoders, whereas the complexity is reduced.
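The Plotkin construction mentioned above can be sketched in a few lines: given component codes with minimum distances d1 and d2, the set of words (u | u+v) has minimum distance min(2·d1, d2). The toy example below uses tiny component codes (not the extended BCH codes of the paper) and recovers a first-order Reed-Muller code:

```python
# Hedged sketch of the Plotkin (u | u+v) construction over GF(2).
# Component codes here are toy examples, not the extended BCH codes
# used in the paper.

def plotkin(Cu, Cv):
    """All (u | u+v) concatenations for component codes Cu, Cv."""
    return [u + [a ^ b for a, b in zip(u, v)] for u in Cu for v in Cv]

# Length-2 components: Cu = RM(1,1) (all 2-bit words, d=1),
# Cv = RM(0,1) (repetition code, d=2)
Cu = [[0, 0], [0, 1], [1, 0], [1, 1]]
Cv = [[0, 0], [1, 1]]
C = plotkin(Cu, Cv)  # yields RM(1,2): length 4, 8 codewords, min distance 2
```

Applying the same recursion with stronger component codes in place of `Cu` and `Cv` is exactly the flexibility the paper exploits to improve on plain RM parameters.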
Technology-based startups make a substantial contribution to economic and societal development. In the course of their founding, they require support in the form of venture capital, which in the seed and early stage is provided primarily by business angels (BAs). However, the processes and evaluation criteria of the BA investment process have so far been insufficiently researched. This contribution uses expert interviews within a case study of the entrepreneurial ecosystem of Baden-Württemberg to identify how BAs proceed when evaluating and selecting technology-based startups. In addition, the criteria by which BAs distinguish promising from failing startups are derived. The contribution thus helps to open the "black box" of investment activities in the earliest founding phases.
Evaluation of tech ventures’ evolving business models: rules for performance-related classification
(2022)
At the early stage of a successful tech venture's life cycle, it is assumed that the business model will evolve to higher quality over time. However, there are few empirical insights into business model evolution patterns for the performance-related classification of early-stage tech ventures. We created relevant variables evaluating the evolution of the venture-centric network and the technological proposition of both digital and non-digital ventures' business models using the text of submissions to the official business plan award in the German State of Baden-Württemberg between 2006 and 2012. Applying a principal component analysis/rough set theory mixed methodology, we explore performance-related business model classification rules in the heterogeneous sample of business plans. We find that ventures need to demonstrate real interactions with their customers' needs to survive. The distinguishing success rules are related to patent applications, risk capital, and scaling of the organisation. The rules help practitioners to classify business models in a way that allows them to prioritise action for performance.
While managerial mobility is ubiquitously seen as an integral part of the success in firms’ internationalization, discerning its empirical merits has been impaired by the paucity of quasi-experimental evidence, or adequate instrumental variables. To overcome these objective limitations, this paper proposes a novel identification strategy, which uses a control function based on on-the-job search theory to correct estimates for the presence of self-selected mobility flows. Our analysis confirms the finding that managers’ specific market experience matters for firms’ internationalization, especially when it derives from longer tenures at the former jobs.
Regarding the attributes of managerial knowledge, our results reveal that experience earned on the job is at least as effective for firms' internationalization as inborn knowledge (i.e., origins), and that managers' personal networks of customers are an important asset in their fund of expertise for expansion into new markets.
Reliability Assessment of an Unscented Kalman Filter by Using Ellipsoidal Enclosure Techniques
(2022)
The Unscented Kalman Filter (UKF) is widely used for the state, disturbance, and parameter estimation of nonlinear dynamic systems, for which both process and measurement uncertainties are represented in a probabilistic form. Although the UKF can often be shown to be more reliable for nonlinear processes than the linearization-based Extended Kalman Filter (EKF) due to the enhanced approximation capabilities of its underlying probability distribution, it is not a priori obvious whether its strategy for selecting sigma points is sufficiently accurate to handle nonlinearities in the system dynamics and output equations. Such inaccuracies may arise for sufficiently strong nonlinearities in combination with large state, disturbance, and parameter covariances. Then, computationally more demanding approaches such as particle filters or the representation of (multi-modal) probability densities with the help of (Gaussian) mixture representations are possible ways to resolve this issue. To detect cases in a systematic manner that are not reliably handled by a standard EKF or UKF, this paper proposes the computation of outer bounds for state domains that are compatible with a certain percentage of confidence under the assumption of normally distributed states with the help of a set-based ellipsoidal calculus. The practical applicability of this approach is demonstrated for the estimation of state variables and parameters for the nonlinear dynamics of an unmanned surface vessel (USV).
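As a small illustration of the sigma-point strategy discussed above (the standard unscented transform for a scalar state; the scaling parameters are common defaults, not values from the paper), the 2n+1 weighted sigma points exactly reproduce the mean and variance they are drawn from:

```python
# Hedged sketch: UKF sigma points for a 1D state. The points and weights
# reconstruct the input mean and variance exactly; it is only after a
# nonlinear transformation that the approximation quality (the paper's
# concern) comes into play.
from math import sqrt

def sigma_points_1d(mean, var, alpha=1.0, kappa=0.0):
    n = 1
    lam = alpha**2 * (n + kappa) - n
    spread = sqrt((n + lam) * var)
    pts = [mean, mean + spread, mean - spread]
    w0 = lam / (n + lam)
    wi = 1.0 / (2 * (n + lam))
    return pts, [w0, wi, wi]

pts, w = sigma_points_1d(2.0, 4.0)
m = sum(wk * p for wk, p in zip(w, pts))         # reconstructed mean
v = sum(wk * (p - m)**2 for wk, p in zip(w, pts))  # reconstructed variance
```

Propagating `pts` through a strongly nonlinear function and re-weighting is where the inaccuracies arise that the paper's ellipsoidal bounds are designed to detect.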
Experimental Validation of Ellipsoidal Techniques for State Estimation in Marine Applications
(2022)
A reliable quantification of the worst-case influence of model uncertainty and external disturbances is crucial for the localization of vessels in marine applications. This is especially true if uncertain GPS-based position measurements are used to update predicted vessel locations that are obtained from the evaluation of a ship’s state equation. To reflect real-life working conditions, these state equations need to account for uncertainty in the system model, such as imperfect actuation and external disturbances due to effects such as wind and currents. As an application scenario, the GPS-based localization of autonomous DDboat robots is considered in this paper. Using experimental data, the efficiency of an ellipsoidal approach, which exploits a bounded-error representation of disturbances and uncertainties, is demonstrated.
Extended Target Tracking With a Lidar Sensor Using Random Matrices and a Virtual Measurement Model
(2022)
Random matrices are widely used to estimate the extent of an elliptically contoured object. Usually, it is assumed that the measurements follow a normal distribution, with its standard deviation being proportional to the object’s extent. However, the random matrix approach can filter the center of gravity and the covariance matrix of measurements independently of the measurement model. This work considers the whole chain from data acquisition to the linear Kalman Filter with extension estimation as a reference plant. The input is the (unknown) ground truth (position and extent). The output is the filtered center of gravity and the filtered covariance matrix of the measurement distribution. A virtual measurement model emulates the behavior of the reference plant. The input of the virtual measurement model is adapted using the proposed algorithm until the output parameters of the virtual measurement model match the result of the reference plant. After the adaptation, the input to the virtual measurement model is considered an estimation for position and extent. The main contribution of this paper is the reference model concept and an adaptation algorithm to optimize the input of the virtual measurement model.
Outcomes with a natural order commonly occur in prediction problems and often the available input data are a mixture of complex data like images and tabular predictors. Deep Learning (DL) models are state-of-the-art for image classification tasks but frequently treat ordinal outcomes as unordered and lack interpretability. In contrast, classical ordinal regression models consider the outcome’s order and yield interpretable predictor effects but are limited to tabular data. We present ordinal neural network transformation models (ontrams), which unite DL with classical ordinal regression approaches. ontrams are a special case of transformation models and trade off flexibility and interpretability by additively decomposing the transformation function into terms for image and tabular data using jointly trained neural networks. The performance of the most flexible ontram is by definition equivalent to a standard multi-class DL model trained with cross-entropy while being faster in training when facing ordinal outcomes. Lastly, we discuss how to interpret model components for both tabular and image data on two publicly available datasets.
Lignin is a potentially rich natural source of biological aromatic substances. However, decomposition of the polymer has proven quite challenging, as its complex bonds are fairly difficult to break down chemically. This article provides an overview of various recent methods for the catalytic chemical depolymerization of the biopolymer lignin into chemical products. For this purpose, nickel-, zeolite-, and palladium-supported catalysts were examined in detail. To achieve this, various experiments of recent years were collected, and the efficiency of the individual catalysts was examined, including the reaction conditions under which the catalysts work most efficiently. The influence of co-catalysts and Lewis acidity was also investigated. The results show that the product selectivity obtained can be controlled very well by the choice of catalyst combined with the proper reaction conditions.
The scoring of sleep stages is an essential part of sleep studies. The main objective of this research is to provide an algorithm for the automatic classification of sleep stages using signals that may be obtained in a non-obtrusive way. After reviewing the relevant research, the authors selected multinomial logistic regression as the basis for their approach. Several parameters were derived from movement and breathing signals, and their combinations were investigated to develop an accurate and stable algorithm. The implemented algorithm produced successful results: the accuracy of recognizing Wake/NREM/REM stages is 73%, with a Cohen's kappa of 0.44 for the 19,324 analyzed sleep epochs of 30 seconds each. This approach has the advantage of using only movement and breathing signals, which can be recorded with less effort than cardiac or brainwave signals, and of requiring only four derived parameters for the calculations. Therefore, the new system is a significant improvement for non-obtrusive sleep stage identification compared to existing approaches.
Ferromagnetism is of increasing importance in the growing field of electromobility and data storage. In stable austenitic steels, the occurrence of ferromagnetism is not expected and would also interfere with many applications. However, ferromagnetism in austenitic stainless steels after low-temperature nitriding has already been shown in the past. Herein, the presence of ferromagnetism in austenitic steels is discovered after low-temperature carburization (Kolsterizing), which represents a novel and unique finding. A zone of expanded austenite is established on various austenitic stainless steels by low-temperature carburization and the respective ferromagnetism is investigated in relation to the alloy composition. The ferromagnetism occurring is determined by means of a commercial magnetoinductive sensor (Feritscope). Ferromagnetic domains are visualized by magnetic force microscopy and a ferrofluid. X-ray diffraction measurements indicate a clear difference in the lattice expansion of the different alloys. Furthermore, a different appearance of the magnetizable microstructure regions (magnetic domain structure) is detected depending on the grain orientation determined by electron backscatter diffraction (EBSD). Strongly pronounced magnetic domains show no linear lattice defects, whereas in small magnetizable areas linear lattice defects are detected by electron channeling contrast imaging and EBSD.
Four-Dimensional Hurwitz Signal Constellations, Set Partitioning, Detection, and Multilevel Coding
(2021)
The Hurwitz lattice provides the densest four-dimensional packing. This fact has motivated research on four-dimensional Hurwitz signal constellations for optical and wireless communications. This work presents a new algebraic construction of finite sets of Hurwitz integers that is inherently accompanied by a respective modulo operation. These signal constellations are investigated for transmission over the additive white Gaussian noise (AWGN) channel. It is shown that these signal constellations have a better constellation figure of merit and hence a better asymptotic performance over an AWGN channel when compared with conventional signal constellations with algebraic structure, e.g., two-dimensional Gaussian-integer constellations or four-dimensional Lipschitz-integer constellations. We introduce two concepts for set partitioning of the Hurwitz integers. The first method is useful to reduce the computational complexity of the symbol detection. This suboptimum detection approach achieves near-maximum-likelihood performance. In the second case, the partitioning exploits the algebraic structure of the Hurwitz signal constellations. We partition the Hurwitz integers into additive subgroups in a manner that the minimum Euclidean distance of each subgroup is larger than in the original set. This enables multilevel code constructions for the new signal constellations.
Despite the increased attention dedicated to research on the antecedents and determinants of new venture survival in entrepreneurship, defining and capturing survival as an outcome represents a challenge in quantitative studies. This paper creates awareness for ventures being inactive while still classified as surviving based on the data available. We describe this as the ‘living dead’ phenomenon, arguing that it yields potential effects on the empirical results of survival studies. Based on a systematic literature review, we find that this issue of inactivity has not been sufficiently considered in previous new venture survival studies. Based on a sample of 501 New Technology-Based Firms, we empirically illustrate that the classification of living dead ventures into either survived or failed can impact the factors determining survival. On this basis, we contribute to an understanding of the issue by defining the ‘living dead’ phenomenon and by proposing recommendations for research practice to solve this issue in survival studies, taking the data source, the period under investigation and the sample size into account.
In this paper, rectangular matrices whose minors of a given order have the same strict sign are considered and sufficient conditions for their recognition are presented. The results are extended to matrices whose minors of a given order have the same sign or are allowed to vanish. A matrix A is called oscillatory if all its minors are nonnegative and there exists a positive integer k such that A^k has all its minors positive. As a generalization, a new type of matrices, called oscillatory of a specific order, is introduced and some of their properties are investigated.
Introduction. Despite its high accuracy, polysomnography (PSG) has several drawbacks for diagnosing obstructive sleep apnea (OSA). Consequently, multiple portable monitors (PMs) have been proposed. Objective. This systematic review aims to investigate the current literature to analyze the sets of physiological parameters captured by a PM to select the minimum number of such physiological signals while maintaining accurate results in OSA detection. Methods. Inclusion and exclusion criteria for the selection of publications were established prior to the search. The evaluation of the publications was made based on one central question and several specific questions. Results. The abilities to detect hypopneas, sleep time, or awakenings were some of the features studied to investigate the full functionality of the PMs to select the most relevant set of physiological signals. Based on the physiological parameters collected (one to six), the PMs were classified into sets according to the level of evidence. The advantages and the disadvantages of each possible set of signals were explained by answering the research questions proposed in the methods. Conclusions. The minimum number of physiological signals detected by PMs for the detection of OSA depends mainly on the purpose and context of the sleep study. The set of three physiological signals showed the best results in the detection of OSA.
Background:
One of the most promising health care development areas is introducing telemedicine services and creating solutions based on blockchain technology. The study of systems combining both these domains indicates the ongoing expansion of digital technologies in this market segment.
Objective:
This paper aims to review the feasibility of blockchain technology for telemedicine.
Methods:
The authors identified relevant studies via systematic searches of databases including PubMed, Scopus, Web of Science, IEEE Xplore, and Google Scholar. The suitability of each for inclusion in this review was assessed independently. Owing to the lack of publications, available blockchain-based tokens were discovered via conventional web search engines (Google, Yahoo, and Yandex).
Results:
Of the 40 discovered projects, only 18 met the selection criteria. The 5 most prevalent features of the available solutions (N=18) were medical data access (14/18, 78%), medical service processing (14/18, 78%), diagnostic support (10/18, 56%), payment transactions (10/18, 56%), and fundraising for telemedical instrument development (5/18, 28%).
Conclusions:
These different features (eg, medical data access, medical service processing, epidemiology reporting, diagnostic support, and treatment support) allow us to discuss the possibilities for integration of blockchain technology into telemedicine and health care on different levels. In this area, a wide range of tasks can be identified that could be accomplished based on digital technologies using blockchains.
A nonlinear mathematical model for the dynamics of permanent magnet synchronous machines with interior magnets is discussed. The model of the current dynamics captures saturation and dependency on the rotor angle. Based on the model, a flatness-based field-oriented closed-loop controller and a feed-forward compensation of torque ripples are derived. Effectiveness and robustness of the proposed algorithms are demonstrated by simulation results.
What drives entrepreneurial action to create a lasting impact? New ventures that aim at having an impact beyond their financial performance face additional challenges: achieving economic sustainability while at the same time addressing social or environmental issues. Little is known about how these new hybrid organizations, aiming for multiple impact dimensions, manage to be congruent with their blended values. A dataset of 4,125 early-stage ventures is used to gain insights into how blended values are converted into financial, social and environmental impacts, giving shape to different types of hybrid organizations. Our findings suggest new hybrid organizations might opt to sacrifice financial impact to achieve social impact, yet this is not the case when they aim to generate environmental or sustainable impact. Therefore, the tensions and sacrifices related to holding blended values are not homogeneous across all types of new hybrid organizations.
At present, the majority of the proposed Deep Learning (DL) methods provide point predictions without quantifying the model's uncertainty. However, a quantification of the reliability of automated image analysis is essential, in particular in medicine when physicians rely on the results for making critical treatment decisions. In this work, we provide an entire framework to diagnose ischemic stroke patients incorporating Bayesian uncertainty into the analysis procedure. We present a Bayesian Convolutional Neural Network (CNN) yielding a probability for a stroke lesion on 2D Magnetic Resonance (MR) images with corresponding uncertainty information about the reliability of the prediction. For patient-level diagnoses, different aggregation methods are proposed and evaluated, which combine the individual image-level predictions. Those methods take advantage of the uncertainty in the image predictions and report model uncertainty at the patient-level. In a cohort of 511 patients, our Bayesian CNN achieved an accuracy of 95.33% at the image-level representing a significant improvement of 2% over a non-Bayesian counterpart. The best patient aggregation method yielded 95.89% of accuracy. Integrating uncertainty information about image predictions in aggregation models resulted in higher uncertainty measures for false patient classifications, which made it possible to filter critical patient diagnoses that should be examined more closely by a medical doctor. We therefore recommend using Bayesian approaches not only for improved image-level prediction and uncertainty estimation but also for the detection of uncertain aggregations at the patient-level.
Mapping of tree seedlings is useful for tasks ranging from monitoring natural succession and regeneration to effective silvicultural management. Development of methods that are both accurate and cost-effective is especially important considering the dramatic increase in tree planting that is required globally to mitigate the impacts of climate change. The combination of high-resolution imagery from unmanned aerial vehicles and object detection by convolutional neural networks (CNNs) is one promising approach. However, unbiased assessments of these models and methods to integrate them into geospatial workflows are lacking. In this study, we present a method for rapid, large-scale mapping of young conifer seedlings using CNNs applied to RGB orthomosaic imagery. Importantly, we provide an unbiased assessment of model performance by using two well-characterised trial sites together containing over 30,000 seedlings to assemble datasets with a high level of completeness. Our results showed CNN-based models trained on two sites detected seedlings with sensitivities of 99.5% and 98.8%. False positives due to tall weeds at one site and naturally regenerating seedlings of the same species led to slightly lower precision of 98.5% and 96.7%. A model trained on examples from both sites had 99.4% sensitivity and precision of 97%, showing applicability across sites. Additional testing showed that the CNN model was able to detect 68.7% of obscured seedlings missed during the initial annotation of the imagery but present in the field data. Finally, we demonstrate the potential to use a form of weakly supervised training and a tile-based processing chain to enhance the accuracy and efficiency of CNNs applied to large, high-resolution orthomosaics.
A conceptual framework for indigenous ecotourism projects – a case study in Wayanad, Kerala, India
(2020)
This paper analyses indigenous ecotourism in the Indian district of Wayanad, Kerala, using a conceptual framework based on a PATA 2015 study on indigenous tourism that includes the criteria: human rights, participation, business and ecology. Detailed indicator sets for each criterion are applied to a case study of the Priyadarshini Tea Environs with a qualitative research approach addressing stakeholders from the public sector, non-governmental organisations, academia, tour operators and communities including Adivasi and non-Adivasi. In-depth interviews were supported by participant and non-participant observations. The authors adapted this framework to the needs of the case study and consider that this modified version is a useful tool for academics and practitioners wishing to evaluate and develop indigenous ecotourism projects. The results show that the Adivasi involved in the Priyadarshini Tea Environs project benefit from indigenous ecotourism. But they could profit more if they had more involvement in and control of the whole tourism value chain.
Totally nonnegative matrices, i.e., matrices having all their minors nonnegative, and matrix intervals with respect to the checkerboard partial order are considered. It is proven that if the two bound matrices of such a matrix interval are totally nonnegative and satisfy certain conditions, then all matrices from this interval are also totally nonnegative and satisfy the same conditions.
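For small matrices, total nonnegativity can be checked directly by enumerating all minors. The brute-force sketch below is for illustration only (its cost grows exponentially with the matrix size); efficient criteria of the kind studied in papers such as this one are precisely what make such exhaustive checks unnecessary.

```python
from itertools import combinations
import numpy as np

def is_totally_nonnegative(A, tol=1e-12):
    """Check that every minor (determinant of a square submatrix)
    of A is nonnegative, by enumerating all row/column subsets."""
    A = np.asarray(A, dtype=float)
    m, n = A.shape
    for k in range(1, min(m, n) + 1):
        for rows in combinations(range(m), k):
            for cols in combinations(range(n), k):
                if np.linalg.det(A[np.ix_(rows, cols)]) < -tol:
                    return False
    return True
```

For example, [[1, 1], [1, 2]] passes (all entries and the determinant are nonnegative), while [[1, 2], [3, 1]] fails because its determinant is negative.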
Let A = [a_ij] be a real symmetric matrix. If f:(0,oo)-->[0,oo) is a Bernstein function, a sufficient condition for the matrix [f(a_ij)] to have only one positive eigenvalue is presented. By using this result, new results for a symmetric matrix with exactly one positive eigenvalue, e.g., properties of its Hadamard powers, are derived.
For a long time, the use of intermediate products in production has been growing more rapidly than domestic production in most countries. This is a strong indication of more interdependency in production. The main purpose of input-output analysis is to study the interdependency of industries in an economy. Often the term interindustry analysis is also used. Therefore, the exchange of intermediate products is a key issue of input-output analysis. We will use input–output data for this study that the author prepared for the new ‘Handbook on Supply, Use and Input–Output Tables with Extensions and Applications’ of the United Nations. The supply, use and input–output tables contain separate valuation matrices for trade margins, transport margins, value added tax, other taxes on products and subsidies on products. For the study, two input–output models were developed to evaluate the impact of fuel subsidy and taxation reform on output, gross domestic product, inflation and trade. Six scenarios are discussed covering different aspects of the reform.
This article introduces the Global Sanctions Data Base (GSDB), a new dataset of economic sanctions that covers all bilateral, multilateral, and plurilateral sanctions in the world during the 1950–2016 period across three dimensions: type, political objective, and extent of success. The GSDB features by far the most cases amongst data bases that focus on effective sanctions (i.e., excluding threats) and is particularly useful for analysis of bilateral international transactional data (such as trade flows). We highlight five important stylized facts: (i) sanctions are increasingly used over time; (ii) European countries are the most frequent users and African countries the most frequent targets; (iii) sanctions are becoming more diverse, with the share of trade sanctions falling and that of financial or travel sanctions rising; (iv) the main objectives of sanctions are increasingly related to democracy or human rights; (v) the success rate of sanctions has gone up until 1995 and fallen since then. Using state-of-the-art gravity modeling, we highlight the usefulness of the GSDB in the realm of international trade. Trade sanctions have a negative but heterogeneous effect on trade, which is most pronounced for complete bilateral sanctions, followed by complete export sanctions.
Globalization has increased the number of road trips and vehicles. The result has been an intensification of traffic accidents, which are becoming one of the most important causes of death worldwide. Traffic accidents are often due to human error, the probability of which increases when the cognitive ability of the driver decreases. Cognitive capacity is closely related to the driver’s mental state, as well as other external factors such as the CO2 concentration inside the vehicle. The objective of this work is to analyze how these elements affect driving. We have conducted an experiment with 50 drivers who have driven for 25 min using a driving simulator. These drivers completed a survey at the start and end of the experiment to obtain information about their mental state. In addition, during the test, their stress level was monitored using biometric sensors and the state of the environment (temperature, humidity and CO2 level) was recorded. The results of the experiment show that the initial level of stress and tiredness of the driver can have a strong impact on stress, driving behavior and fatigue produced by the driving test. Other elements such as sadness and the conditions of the interior of the vehicle also cause impaired driving and affect compliance with traffic regulations.
This paper investigates the influence of the modeling of the basement storey on the shear force and the deformations of reinforced concrete shear walls that are restrained in the basement. The shear force distribution of the wall and the displacement at the top of the wall are compared with the corresponding values obtained from the simplified cantilever wall model, in which the wall is fully fixed at the level of the basement ceiling and the transfer of the internal forces within the basement is not considered further. To account for the basement, a hinged support provided by the basement ceiling and the base slab is considered first, which leads to an unrealistically large wall shear force in the basement. Subsequently, a refined model with partial fixity in the base slab and a flexible support by the basement ceiling is examined. Recommended values for the spring constants of the rotational spring at base-slab level and of the horizontal translational spring at basement-ceiling level are given, which can be applied in practice. Particular attention is paid to the question of how accounting for the basement affects the seismic design of the shear walls. Both the system stiffness, on which the equivalent seismic loads depend, and the achievable displacement ductility, on which the possible behavior factor q depends, are considered.
Rheumatoid arthritis is an autoimmune disease that causes chronic inflammation of synovial joints, often resulting in irreversible structural damage. The activity of the disease is evaluated by clinical examinations, laboratory tests, and patient self-assessment. The long-term course of the disease is assessed with radiographs of hands and feet. The evaluation of the X-ray images performed by trained medical staff requires several minutes per patient. We demonstrate that deep convolutional neural networks can be leveraged for a fully automated, fast, and reproducible scoring of X-ray images of patients with rheumatoid arthritis. A comparison of the predictions of different human experts and our deep learning system shows that there is no significant difference in the performance of human experts and our deep learning model.
When a country grants preferential tariffs to another, either reciprocally in a free trade agreement (FTA) or unilaterally, rules of origin (RoOs) are defined to determine whether a product is eligible for preferential treatment. RoOs exist to avoid that exports from third countries enter through the member with the lowest tariff (trade deflection). However, RoOs distort exporters' sourcing decisions and burden them with red tape. Using a global data set, we show that, for 86% of all bilateral product-level comparisons within FTAs, trade deflection is not profitable because external tariffs are rather similar and transportation costs are non-negligible; in the case of unilateral trade preferences extended by rich countries to poor ones that ratio is a striking 98%. The pervasive and unconditional use of RoOs is, therefore, hard to rationalize.
The aim of this paper is to portray the risks of climate change for low mountain range tourism and to develop sustainable business models as an adaptation strategy. A mixed-method approach is applied, combining secondary analysis, a quantitative survey, and qualitative in-depth interviews in a transdisciplinary setting. Results show that, until now, climate change impacts on the snow situation in the Black Forest – at least above 1,000 m – have been mild and compensated by artificial snowmaking, and up to now have not had measurable effects on tourism demand. In general, the Black Forest appears to be an attractive destination for more reasons than just snow. The climate issue seems to be regarded as a rather incidental occurrence with little importance to current business decisions. However, the authors present adaptation strategies as alternatives for snow tourism, e.g. the implementation of hiking hostels, since climate change will make winter tourism in the Black Forest impossible in the long run.
This paper presents a framework to assess the cultural sustainability of Aboriginal tourism in British Columbia, which must take into account the protection of human rights, good self-governance, identity, control of land, the tourism product’s authenticity, and a market-ready tourism product. These criteria are specified by two indicators each. The cultural sustainability framework was generated by triangulating qualitative research methods such as experts’ interviews, secondary research, and participant and non-participant observations. This paper is thus conceptual in nature and inductive in its approach. It partly leverages a collaborative approach, as it includes interviewees in an iterative research loop. Furthermore, the paper shows why cultural sustainability is a determinant of the success of Aboriginal tourism.
Purpose – The purpose of this paper is to examine visitor management in the German-Swiss border area of the Lake Constance region. Taking a customer perspective, it determines the requirements for an application with the ability to optimize personal mobility.
Design/methodology/approach – A quantitative study and a survey of focus groups were conducted to identify movement patterns of different types of visitors and their requirements concerning the development of a visitor management application.
Findings – Visitors want an application that provides real-time forecasts of issues such as traffic, parking and queues and, at the same time, enables them to create a personal activity schedule based on this information.
Research limitations/implications – Not every subsample reached a sufficient number of cases to yield representative results.
Practical implications – The results may lead to an optimization and separation of mobility flows in the research area and be helpful to municipal planners, destination marketing organizations and visitors.
Originality/value – The German border cities of Konstanz, Radolfzell and Singen in the Lake Constance region need improved visitor management, mainly because of a high level of shopping tourism by Swiss visitors to Germany. In the summer months, Lake Constance is also a popular destination for leisure tourists, which causes overtourism. For the first time, the results presented here offer possible solutions, in particular by showing how a mobile application for visitors could defuse the situation.
Thermochemical surface hardening is used to overcome the weak mechanical performance of austenitic and duplex stainless steels. Both low-temperature carburizing and nitrocarburizing can improve the hardness, wear, galling, and cavitation resistance, while maintaining their good corrosion resistance. Therefore, it is crucial to not form chromium-rich precipitates during hardening as these can deteriorate the passivity of the alloy. The hardening parameters, the chemical composition of the steel, and the manufacturing route of a component determine whether precipitates are formed. This article gives an overview of suitable alloys for low-temperature surface hardening and the performance under corrosive loading.
In this paper, multivariate polynomials in the Bernstein basis over a box (tensorial Bernstein representation) are considered. A new matrix method for the computation of the polynomial coefficients with respect to the Bernstein basis, the so-called Bernstein coefficients, is presented and compared with existing methods. Also matrix methods for the calculation of the Bernstein coefficients over subboxes generated by subdivision of the original box are proposed. All the methods solely use matrix operations such as multiplication, transposition and reshaping; some of them rely on the bidiagonal factorization of the lower triangular Pascal matrix or the factorization of this matrix by a Toeplitz matrix. In the case that the coefficients of the polynomial are due to uncertainties and can be represented in the form of intervals it is shown that the developed methods can be extended to compute the set of the Bernstein coefficients of all members of the polynomial family.
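In the univariate case over [0, 1], the matrix approach reduces to a single lower-triangular matrix–vector product with entries built from binomial coefficients. The sketch below illustrates only this special case; it is not the paper's tensorial multivariate construction or its Pascal/Toeplitz factorizations.

```python
import numpy as np
from math import comb

def bernstein_coeffs(a):
    """Map power-basis coefficients a = [a_0, ..., a_n] of
    p(x) = sum_i a_i x^i to its Bernstein coefficients b on [0, 1]
    via one lower-triangular matrix-vector product:
        b_j = sum_{i <= j} (C(j, i) / C(n, i)) * a_i.
    """
    a = np.asarray(a, dtype=float)
    n = len(a) - 1
    M = np.array([[comb(j, i) / comb(n, i) if i <= j else 0.0
                   for i in range(n + 1)] for j in range(n + 1)])
    return M @ a

def bernstein_eval(b, x):
    """Evaluate p in Bernstein form: sum_j b_j C(n,j) x^j (1-x)^(n-j)."""
    n = len(b) - 1
    return sum(b[j] * comb(n, j) * x**j * (1 - x)**(n - j)
               for j in range(n + 1))
```

For p(x) = 1 + 2x + 3x², this yields b = (1, 2, 6); evaluating the Bernstein form at any x in [0, 1] reproduces the power-basis value, and the endpoint coefficients b₀ and bₙ equal p(0) and p(1).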
The Lempel–Ziv–Welch (LZW) algorithm is an important dictionary-based data compression approach that is used in many communication and storage systems. The parallel dictionary LZW (PDLZW) algorithm speeds up the LZW encoding by using multiple dictionaries. This simplifies the parallel search in the dictionaries. However, the compression gain of the PDLZW depends on the partitioning of the address space, i.e. on the sizes of the parallel dictionaries. This work proposes an address space partitioning technique that optimises the compression rate of the PDLZW. Numerical results for address spaces with 512, 1024, and 2048 entries demonstrate that the proposed address partitioning improves the performance of the PDLZW compared with the original proposal. These address space sizes are suitable for flash storage systems. Moreover, the PDLZW has relatively high memory requirements which dominate the costs of a hardware implementation. This work proposes a recursive dictionary structure and a word partitioning technique that significantly reduce the memory size of the parallel dictionaries.
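For reference, the textbook single-dictionary LZW that the PDLZW parallelises can be sketched as follows; the parallel-dictionary address-space partitioning itself is not reproduced here.

```python
def lzw_encode(data: bytes):
    """Textbook single-dictionary LZW: grow the dictionary with every
    previously unseen phrase and emit its address. (The PDLZW splits
    this one dictionary across several smaller parallel ones.)"""
    dictionary = {bytes([i]): i for i in range(256)}
    w, out = b"", []
    for byte in data:
        wc = w + bytes([byte])
        if wc in dictionary:
            w = wc
        else:
            out.append(dictionary[w])
            dictionary[wc] = len(dictionary)   # allocate next free address
            w = bytes([byte])
    if w:
        out.append(dictionary[w])
    return out

def lzw_decode(codes):
    """Rebuild the dictionary on the fly while decoding."""
    dictionary = {i: bytes([i]) for i in range(256)}
    w = dictionary[codes[0]]
    out = [w]
    for k in codes[1:]:
        if k in dictionary:
            entry = dictionary[k]
        else:                                  # code not yet in dictionary
            entry = w + w[:1]
        out.append(entry)
        dictionary[len(dictionary)] = w + entry[:1]
        w = entry
    return b"".join(out)
```

On repetitive input the number of emitted codes is smaller than the number of input bytes, and decoding is the exact inverse of encoding.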
In this paper, we have introduced new variants of two methods for projecting Supply and Use Tables that are based on a distance minimisation approach (SUT-RAS) and the Leontief model (SUT-EURO). We have also compared them under similar and comparable exogenous information, i.e. with and without exogenous industry output, and with explicit consideration of taxes less subsidies on products. We have conducted an empirical assessment of all of these methods against a set of annual tables between 2000 and 2005 for Austria, Belgium, Spain and Italy. From the empirical assessment, we obtained three main conclusions: (a) the use of extra information (i.e. industry output) generally improves projected estimates in both methods; (b) whenever industry output is available, the SUT-RAS method should be used, and otherwise the SUT-EURO should be used instead; and (c) when the total industry output is not available, it is best estimated by the SUT-EURO method.
While existing resource extraction debates have contributed to a better understanding of national economic and political dilemmas and institutional responses, there are flaws in understanding the specific relevance of the various types of mining schemes for rural households to deal with the various problems they are confronted with. Our paper examines the perceptions of gold mining effects on households in Northern Burkina Faso. The findings of our survey across six districts representing different mining schemes (industrial, artisanal, no mining) highlight the fact that artisanal gold mining can generate job opportunities and cash income for local households; whereas industrial gold mining widely fails to do so. However, the general economic and environmental settings exert a much stronger influence on the household state. Gold mining effects are perceived as being less advantageous in districts where people are suffering from a lack of education, a higher vulnerability to drought and poor market access. Our findings provide empirical support for those who back the enhanced formalization of artisanal and small-scale mining (ASM) and policies that entail more rigorous state monitoring of mining concessions, especially in economic and environmentally disadvantaged contexts. Effectively addressing communal and pro-poor development requires greater attention to the political economy of ASM and corporate mining. It also calls for a greater inclusion of local mining stakeholders and a more effective alignment of international regulatory and advocacy efforts.
In this paper, the problem of controlling the dissolved oxygen level (DO) during an aerobic fermentation is considered. The proposed approach deals with three major difficulties: the nonlinear dynamics of the DO, the poor accuracy of the empirical models for the oxygen consumption rate, and the fact that only sampled measurements are available on-line. A nonlinear integral high-gain control law including a continuous-discrete time observer is designed to keep the DO in the neighborhood of a set point value without any knowledge of the dissolved oxygen consumption rate. The local stability of the control algorithm is proved using Lyapunov tools. The performance of the control scheme is first analyzed in simulation and then experimentally evaluated during a successful fermentation of the bacteria Pseudomonas putida mt-2 over a period of three days.
Input–Output modellers are often faced with the task of estimating missing Use tables at basic prices and also valuation matrices of the individual countries. This paper examines a selection of estimation methods applied to the European context where the analysts are not in possession of superior data. The estimation methods are restricted to the use of automated methods that would require more than just the row and column sums of the tables (as in projections) but less than a combination of various conflicting information (as in compilation). The results are assessed against the official Supply, Use and Input–Output tables of Belgium, Germany, Italy, Netherlands, Finland, Austria and Slovakia by using matrix difference metrics. The main conclusion is that using the structures of previous years usually performs better than any other approach.
In several organizations, business workgroups autonomously implement information technology (IT) outside the purview of the IT department. Shadow IT, evolving as a type of workaround from nontransparent and unapproved end-user computing (EUC), is a term used to refer to this phenomenon, which challenges norms relative to IT controllability. This report describes shadow IT based on case studies of three companies and investigates its management. In 62% of cases, companies decided to reengineer detected instances or reallocate related subtasks to their IT department. Considerations of risks and transaction cost economics with regard to specificity, uncertainty, and scope explain these actions and the resulting coordination of IT responsibilities between the business workgroups and IT departments. This turns shadow IT into controlled business-managed IT activities and enhances EUC management. The results contribute to the governance of IT task responsibilities and provide a way to formalize the role of workarounds in business workgroups.
To evaluate the quality of sleep, it is important to determine how much time was spent in each sleep stage during the night. The gold standard in this domain is an overnight polysomnography (PSG). But the recording of the necessary electrophysiological signals is extensive and complex, and the environment of the sleep laboratory, which is unfamiliar to the patient, might lead to distorted results. In this paper, a sleep stage detection algorithm is proposed that uses only the heart rate signal, derived from the electrocardiogram (ECG), as a discriminator. This would make it possible for sleep analysis to be performed at home, saving a lot of effort and money. From the heart rate, using the fast Fourier transformation (FFT), three parameters were calculated in order to distinguish between the different sleep stages. ECG data, along with a hypnogram scored by professionals, was taken from the PhysioNet database, making it easy to compare the results. With an agreement rate of 41.3%, this approach is a good foundation for future research.
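The abstract above does not specify the three FFT-derived parameters. As a generic illustration of the underlying technique, the sketch below computes the spectral power of a resampled heart-rate series inside a frequency band; the sampling rate, band limits, and synthetic signal are assumptions for illustration, not values from the paper.

```python
import numpy as np

def band_power(signal, fs, f_lo, f_hi):
    """Power of `signal` (sampled at fs Hz) inside [f_lo, f_hi) Hz,
    computed from the magnitude spectrum of the real FFT."""
    spectrum = np.abs(np.fft.rfft(signal - np.mean(signal))) ** 2
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    mask = (freqs >= f_lo) & (freqs < f_hi)
    return spectrum[mask].sum()

# Example: a 0.1 Hz oscillation in a uniformly resampled heart-rate series
fs = 4.0                        # assumed 4 Hz resampling rate
t = np.arange(0, 300, 1 / fs)   # 5 minutes of data
hr = 60 + 3 * np.sin(2 * np.pi * 0.1 * t)
lf = band_power(hr, fs, 0.04, 0.15)   # low-frequency band
hf = band_power(hr, fs, 0.15, 0.40)   # high-frequency band
```

For this synthetic signal, the oscillation falls entirely inside the low-frequency band, so the low-frequency power dominates the high-frequency power.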
This letter proposes two contributions to improve the performance of transmission with generalized multistream spatial modulation (SM). In particular, a modified suboptimal detection algorithm based on the Gaussian approximation method is proposed. The proposed modifications reduce the complexity of the Gaussian approximation method and improve the performance for high signal-to-noise ratios. Furthermore, this letter introduces signal constellations based on Hurwitz integers, i.e., a 4-D lattice. Simulation results demonstrate that these signal constellations are beneficial for generalized SM with two active antennas.
This work proposes a lossless data compression algorithm for short data blocks. The proposed compression scheme combines a modified move-to-front algorithm with Huffman coding. This algorithm is applicable in storage systems where the data compression is performed on block level with short block sizes, in particular, in non-volatile memories. For block sizes in the range of 1 kB, it provides a compression gain comparable to the Lempel–Ziv–Welch algorithm. Moreover, encoder and decoder architectures are proposed that have low memory requirements and provide fast data encoding and decoding.
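The move-to-front stage of such a scheme can be sketched in a few lines. The sketch below shows only the classic transform, not the modified variant or the Huffman stage from the paper; function names are illustrative.

```python
def move_to_front_encode(data: bytes) -> list[int]:
    """Encode a byte sequence with the classic move-to-front transform."""
    alphabet = list(range(256))
    out = []
    for b in data:
        idx = alphabet.index(b)
        out.append(idx)
        # Move the symbol to the front so recently seen symbols
        # map to small indices, which Huffman coding then exploits.
        alphabet.pop(idx)
        alphabet.insert(0, b)
    return out


def move_to_front_decode(indices: list[int]) -> bytes:
    """Invert the move-to-front transform."""
    alphabet = list(range(256))
    out = bytearray()
    for idx in indices:
        b = alphabet.pop(idx)
        out.append(b)
        alphabet.insert(0, b)
    return bytes(out)
```

On data with local symbol repetition, the index stream is skewed toward small values, giving the subsequent entropy coder shorter codewords.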
Ulrich Finsterwalder
(2017)
Sensorless position control of a hydraulic proportional directional valve by means of signal injection
(2017)
A method for sensorless position estimation in electromagnetically actuated devices is presented. Based on signal injection, the position-dependent parameters at the injected frequency are determined, and from these the position of the armature is obtained via a suitable model. The suitability of the method for sensorless position control is demonstrated in practical experiments on a bidirectional proportional valve.
A real matrix is called totally nonnegative if all of its minors are nonnegative. In this paper, the minors are determined from which the maximum allowable entry perturbation of a totally nonnegative matrix can be found, such that the perturbed matrix remains totally nonnegative. Also, the total nonnegativity of the first and second subdirect sum of two totally nonnegative matrices is considered.
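The defining property lends itself to a direct, if inefficient, check that enumerates all minors. The sketch below is a naive illustration for square matrices using exact rational arithmetic; the helper names are hypothetical and not from the paper, which instead determines a specific subset of minors.

```python
from fractions import Fraction
from itertools import combinations


def det(m):
    """Determinant by Laplace expansion along the first row (exact for Fractions)."""
    n = len(m)
    if n == 1:
        return m[0][0]
    total = Fraction(0)
    for j in range(n):
        sub = [row[:j] + row[j + 1:] for row in m[1:]]
        total += (-1) ** j * m[0][j] * det(sub)
    return total


def is_totally_nonnegative(a):
    """Return True if every minor of the square matrix a is nonnegative."""
    n = len(a)
    a = [[Fraction(x) for x in row] for row in a]
    for k in range(1, n + 1):
        for rows in combinations(range(n), k):
            for cols in combinations(range(n), k):
                minor = [[a[i][j] for j in cols] for i in rows]
                if det(minor) < 0:
                    return False
    return True
```

The number of minors grows combinatorially with n, which is precisely why efficient determinantal tests over a reduced set of minors, as studied in this line of work, matter.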
In this paper, totally nonnegative (positive) matrices are considered, i.e., matrices having all their minors nonnegative (positive); the almost totally positive matrices form a class between the totally nonnegative and the totally positive matrices. An efficient determinantal test based on the Cauchon algorithm for checking whether a given matrix falls into one of these three classes is applied to matrices related to roots of polynomials and poles of rational functions, specifically the Hankel matrix associated with the Laurent series at infinity of a rational function and matrices of Hurwitz type associated with polynomials. In both cases, it is concluded from properties of one or two finite sections of the infinite matrix that the infinite matrix itself has these or related properties. The results are then applied to derive a sufficient condition for the Hurwitz stability of an interval family of polynomials. Finally, interval problems for a subclass of rational functions, viz. R-functions, are investigated. These problems include the invariance of exclusively positive poles and exclusively negative roots under variation of the coefficients of the polynomials within given intervals.
A real matrix is called totally nonnegative if all of its minors are nonnegative. In this paper, the extended Perron complement of a principal submatrix in a matrix A is investigated. Extending known results, it is shown that if A is irreducible and totally nonnegative and the principal submatrix consists of certain specified consecutive rows, then the extended Perron complement is totally nonnegative. Inequalities between minors of the extended Perron complement and the Schur complement are also presented.
We consider classes of n-by-n sign regular matrices, i.e., of matrices with the property that all their minors of fixed order k have one specified sign or are also allowed to vanish, k = 1, ..., n. If the sign is nonpositive for all k, such a matrix is called totally nonpositive. The application of the Cauchon algorithm to nonsingular totally nonpositive matrices is investigated, and a new determinantal test for these matrices is derived. Matrix intervals with respect to the checkerboard partial ordering are also considered. This order is obtained from the usual entry-wise ordering on the set of n-by-n matrices by reversing the inequality sign for each entry in a checkerboard fashion. For some classes of sign regular matrices it is shown that if the two bound matrices of such a matrix interval both belong to the same class, then all matrices lying between these two bound matrices are in the same class, too.
A growing share of modern trade policy instruments is shaped by non-tariff barriers (NTBs). Based on a structural gravity equation and the recently updated Global Trade Alert database, we empirically investigate the effect of NTBs on imports. Our analysis reveals that the implementation of NTBs reduces imports of affected products by up to 12%. Their trade dampening effect is thus comparable to that of trade defence instruments such as anti-dumping duties. It is smaller for exporters that have a free trade agreement with the importing country. Different types of NTBs affect trade to a different extent. Finally, we investigate the effect of behind-the-border measures, showing that they significantly lower the importer’s market access.
Knot placement for curve approximation is a well-known yet open problem in geometric modeling. Selecting knot values that yield good approximations is a challenging task, based largely on heuristics and user experience. More advanced approaches range from parametric averaging to genetic algorithms.
In this paper, we propose to use Support Vector Machines (SVMs) to determine suitable knot vectors for B-spline curve approximation. The SVMs are trained to identify locations in a sequential point cloud where knot placement will improve the approximation error. After the training phase, the SVM can assign a so-called score to each point set location. This score is based on geometric and differential geometric features of the points and measures the quality of each location as a knot in the subsequent approximation. From these scores, the final knot vector can be constructed by exploring the topography of the score vector, without the need for iteration or optimization in the approximation process. Knot vectors computed with our approach outperform state-of-the-art methods and yield tighter approximations.
Know when you don't know
(2018)
Deep convolutional neural networks show outstanding performance in image-based phenotype classification given that all existing phenotypes are presented during the training of the network. However, in real-world high-content screening (HCS) experiments, it is often impossible to know all phenotypes in advance. Moreover, novel phenotype discovery itself can be an HCS outcome of interest. This aspect of HCS is not yet covered by classical deep learning approaches. When presenting an image with a novel phenotype to a trained network, it fails to indicate a novelty discovery but assigns the image to a wrong phenotype. To tackle this problem and address the need for novelty detection, we use a recently developed Bayesian approach for deep neural networks called Monte Carlo (MC) dropout to define different uncertainty measures for each phenotype prediction. With real HCS data, we show that these uncertainty measures allow us to identify novel or unclear phenotypes. In addition, we also found that the MC dropout method results in a significant improvement of classification accuracy. The proposed procedure used in our HCS case study can be easily transferred to any existing network architecture and will be beneficial in terms of accuracy and novelty detection.
A constructive nonlinear observer design for self-sensing of digital (ON/OFF) single coil electromagnetic actuators is studied. Self-sensing in this context means that solely the available energizing signals, i.e., coil current and driving voltage, are used to estimate the position and velocity trajectories of the moving plunger. A nonlinear sliding mode observer is considered, where the stability of the reduced error dynamics is analyzed by the equivalent control method. No simplifications are made regarding magnetic saturation and eddy currents in the underlying dynamical model. The observer gains are constructed by taking into account some generic properties of the system's nonlinearities. Two possible choices of the observer gains are discussed. Furthermore, an observer-based tracking control scheme to achieve sensorless soft landing is considered and its closed-loop stability is studied. Experimental results for observer-based soft landing of a fast-switching solenoid valve under dry conditions are presented to demonstrate the usefulness of the approach.
A constructive method for the design of nonlinear observers is discussed. To formulate conditions for the construction of the observer gains, stability results for nonlinear singularly perturbed systems are utilised. The nonlinear observer is designed directly in the given coordinates, where the error dynamics between the plant and the observer becomes singularly perturbed by a high-gain part of the observer injection, and the information of the slow manifold is exploited to construct the observer gains of the reduced-order dynamics. This is in contrast to typical high-gain observer approaches, where the observer gains are chosen such that the nonlinearities are dominated by a linear system. It is demonstrated that the considered approach is particularly suited for self-sensing electromechanical systems. Two variants of the proposed observer design are illustrated for a nonlinear electromagnetic actuator, where the mechanical quantities, i.e. the position and the velocity, are not measured.
Objective: This paper presents an algorithm for non-invasive sleep stage identification using respiratory, heart rate and movement signals. The algorithm is part of a system suitable for long-term monitoring in a home environment, which should support experts analysing sleep. Approach: As there is a strong correlation between bio-vital signals and sleep stages, multinomial logistic regression was chosen for categorical distribution of sleep stages. Several derived parameters of three signals (respiratory, heart rate and movement) are input for the proposed method. Sleep recordings of five subjects were used to train a machine learning model, and 30 overnight recordings collected from 30 individuals, comprising about 27 000 epochs of 30 s each, were evaluated. Main results: The achieved rate of accuracy is 72% for Wake, NREM and REM (with a Cohen's kappa value of 0.67) and 58% for Wake, Light (N1 and N2), Deep (N3) and REM stages (Cohen's kappa is 0.50). Our approach has confirmed the potential of this method and revealed several ways for its improvement. Significance: The results indicate that respiratory, heart rate and movement signals can be used for sleep studies with a reasonable level of accuracy. These inputs can be obtained in a non-invasive way, making the approach applicable in a home environment. The proposed system introduces a convenient approach for a long-term monitoring system which could support sleep laboratories. The developed algorithm allows for an easy adjustment of the input parameters depending on the available signals and could for this reason also be used with various hardware systems.
The business model canvas (BMC) and the lean start-up manifesto (LSM) have been changing both the entrepreneurial education and, on the practical side, the mindset in setting up innovative ventures since the burst of the dot-com bubble. However, few empirical insights on the business model implementation patterns that distinguish between digital and non-digital innovative ventures exist. Connecting practical management tools to network theory as well as to the theory of organizational learning, this paper investigates evolution patterns of digital and non-digital business models out of the deal flow of an innovation intermediary. For this purpose, a multi-dimensional quantitative content analysis research design is applied to 242 ventures' business plans. The measured strength of transaction relations to customers, suppliers, people, and financiers has been combined with performance indicators of the sampled ventures. The results indicate that in order to succeed, digital ventures iterate their business on the market early and search for investment afterwards. Contrariwise, non-digital ventures already need financial investments in the early stages to set up a product ready to be tested on the market. In both groups we found strong evidence that specific evolutionary patterns relate to higher rates of success.
Error correction coding based on soft-input decoding can significantly improve the reliability of non-volatile flash memories. This work proposes a soft-input decoder for generalized concatenated (GC) codes. GC codes are well suited for error correction in flash memories for high reliability data storage. We propose GC codes constructed from inner extended binary Bose-Chaudhuri-Hocquenghem (BCH) codes and outer Reed-Solomon codes. The extended BCH codes enable an efficient hard-input decoding. Furthermore, a low-complexity soft-input decoding method is proposed. This bit-flipping decoder uses a fixed number of test patterns and an algebraic decoder for soft-decoding. An acceptance criterion for the final candidate codeword is proposed. Combined with error and erasure decoding of the outer Reed-Solomon codes, this acceptance criterion can improve the decoding performance and reduce the decoding complexity. The presented simulation results show that the proposed bit-flipping decoder in combination with outer error and erasure decoding can outperform maximum likelihood decoding of the inner codes.
The Burrows–Wheeler transformation (BWT) is a reversible block sorting transform that is an integral part of many data compression algorithms. This work proposes a memory-efficient pipelined decoder for the BWT. In particular, the authors consider the limited context order BWT, which has low memory requirements and enables fast encoding. However, the decoding of the limited context order BWT is typically much slower than the encoding. The proposed decoder pipeline provides a fast inverse BWT by splitting the decoding into several processing stages which are executed in parallel.
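For reference, the unrestricted BWT on which the limited context order variant is based can be sketched naively; the paper's memory-efficient pipelined decoder is not reproduced here, and the sentinel-byte convention is an assumption of this sketch.

```python
def bwt_encode(s: bytes, sentinel: int = 0) -> bytes:
    """Naive Burrows-Wheeler transform: sort all rotations, keep the last column.
    Assumes the sentinel byte does not occur in s."""
    s = s + bytes([sentinel])
    rotations = sorted(s[i:] + s[:i] for i in range(len(s)))
    return bytes(rot[-1] for rot in rotations)


def bwt_decode(last: bytes, sentinel: int = 0) -> bytes:
    """Invert the BWT by repeatedly prepending the last column and re-sorting,
    reconstructing the full rotation table column by column."""
    table = [b""] * len(last)
    for _ in range(len(last)):
        table = sorted(bytes([c]) + row for c, row in zip(last, table))
    # The original string is the row that ends with the sentinel.
    row = next(r for r in table if r.endswith(bytes([sentinel])))
    return row[:-1]
```

Both directions here cost well above linear time and memory, which illustrates why practical block-level implementations resort to limited context orders and pipelined decoding.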
The introduction of multiple-level cell (MLC) and triple-level cell (TLC) technologies reduced the reliability of flash memories significantly compared with single-level cell flash. With MLC and TLC flash cells, the error probability varies for the different states. Hence, asymmetric models are required to characterize the flash channel, e.g., the binary asymmetric channel (BAC). This contribution presents a combined channel and source coding approach improving the reliability of MLC and TLC flash memories. With flash memories data compression has to be performed on block level considering short-data blocks. We present a coding scheme suitable for blocks of 1 kB of data. The objective of the data compression algorithm is to reduce the amount of user data such that the redundancy of the error correction coding can be increased in order to improve the reliability of the data storage system. Moreover, data compression can be utilized to exploit the asymmetry of the channel to reduce the error probability. With redundant data, the proposed combined coding scheme results in a significant improvement of the program/erase cycling endurance and the data retention time of flash memories.
Generalised concatenated (GC) codes are well suited for error correction in flash memories for high-reliability data storage. The GC codes are constructed from inner extended binary Bose–Chaudhuri–Hocquenghem (BCH) codes and outer Reed–Solomon codes. The extended BCH codes enable high-rate GC codes and low-complexity soft input decoding. This work proposes a decoder architecture for high-rate GC codes. For such codes, outer error and erasure decoding are mandatory. A pipelined decoder architecture is proposed that achieves a high data throughput with hard input decoding. In addition, a low-complexity soft input decoder is proposed. This soft decoding approach combines a bit-flipping strategy with algebraic decoding. The decoder components for the hard input decoding can be utilised which reduces the overhead for the soft input decoding. Nevertheless, the soft input decoding achieves a significant coding gain compared with hard input decoding.
This work studies a wind noise reduction approach for communication applications in a car environment. An endfire array consisting of two microphones is considered as a substitute for an ordinary cardioid microphone capsule of the same size. Using the decomposition of the multichannel Wiener filter (MWF), a suitable beamformer and a single-channel post filter are derived. Due to the known array geometry and the location of the speech source, assumptions about the signal properties can be made to simplify the MWF beamformer and to estimate the speech and noise power spectral densities required for the post filter. Even for closely spaced microphones, the different signal properties at the microphones can be exploited to achieve a significant reduction of wind noise. The proposed beamformer approach results in an improved speech signal regarding the signal-to-noise-ratio and keeps the linear speech distortion low. The derived post filter shows equal performance compared to known approaches but reduces the effort for noise estimation.
This article presents the basic procedure for calculating deflections in the cracked state, including the effects of creep and shrinkage. Building on the provisions of EC2, several methods with different levels of accuracy are explained for beams and one-way slabs. For simple systems, these methods can also be carried out by hand and offer a high degree of control over the calculation process. The determination of shrinkage deformations, which otherwise requires considerable computational effort, is then examined in more detail. For a selection of systems, diagrams with multiplication factors can be derived that greatly simplify the calculation. Furthermore, the deflection calculation in the cracked state (state II) for slab structures is addressed. Both calculations with FE slab programs, which often provide corresponding functions, and simplified manual methods are presented. The fundamentals of crack formation can be transferred from beam structures to slabs; however, new questions arise, for example regarding the procedure for stiffness reduction or the influence of internal force redistribution due to cracking. Further research is therefore needed in this area.
Business units are increasingly able to fuel the transformation that digitalization demands of organizations. Thereby, they can implement Shadow IT (SIT) without involving a central IT department to create flexible and innovative solutions. Self-reinforcing effects lead to an intertwinement of SIT with the organization. As a result, high complexities, redundancies, and sometimes even lock-ins occur. IT integration suggests itself as a way to meet these challenges. However, it can also eliminate the benefits that SIT provides. To help organizations in this area of conflict, we conduct a literature review including a systematic search and an analysis from a systemic viewpoint using path dependency and switching costs. Our resulting conceptual framework for SIT integration drawbacks classifies the drawbacks into three dimensions. The first dimension consists of switching costs that account for the financial, procedural, and emotional drawbacks and the drawbacks from a loss of SIT benefits. The second dimension includes organizational, technical, and level-spanning criteria. The third dimension classifies the drawbacks into the global level, the local level, and the interaction between them. We contribute to the scientific discussion by introducing a systemic viewpoint to the research on shadow IT. Practitioners can use the presented criteria to collect evidence to reach an IT integration decision.