Automotive computing applications like AI databases, ADAS, and advanced infotainment systems have a huge need for persistent memory. This trend requires NAND flash memories designed for extreme automotive environments. However, the error probability of NAND flash memories has increased in recent years due to higher memory density and production tolerances. Hence, strong error correction coding is needed to meet automotive storage requirements. Many errors can be corrected by soft decoding algorithms. However, soft decoding is very resource-intensive and should be avoided when possible. NAND flash memories are organized in pages, and the error correction codes are usually encoded page-wise to reduce the latency of random reads. This page-wise encoding does not reach the maximum achievable capacity. Reading soft information increases the channel capacity but at the cost of higher latency and power consumption. In this work, we consider cell-wise encoding, which also increases the capacity compared to page-wise encoding. We analyze the cell-wise processing of data in triple-level cell (TLC) NAND flash and show the performance gain when using Low-Density Parity-Check (LDPC) codes. In addition, we investigate a coding approach with page-wise encoding and cell-wise reading.
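A minimal sketch may clarify why cell-wise processing sees information that page-wise processing splits up: each TLC cell stores three bits, one per page, jointly encoded into one of eight voltage levels. The 8-level Gray mapping below is an assumption for illustration, not necessarily the mapping used in the paper.

```python
# Illustrative TLC bit-to-level mapping (assumed 3-bit Gray code).
# Each cell level carries one bit of the LSB, CSB, and MSB page.
GRAY3 = [0b000, 0b001, 0b011, 0b010, 0b110, 0b111, 0b101, 0b100]

def bits_to_level(lsb, csb, msb):
    """Map the three page bits of one TLC cell to a voltage level index."""
    word = (msb << 2) | (csb << 1) | lsb
    return GRAY3.index(word)

def level_to_bits(level):
    """Inverse mapping: recover (lsb, csb, msb) from a read level."""
    word = GRAY3[level]
    return word & 1, (word >> 1) & 1, (word >> 2) & 1
```

Page-wise decoding treats each of the three bits independently, while a cell-wise decoder can operate on the full 8-ary level, which is the source of the capacity gain discussed above.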
Large persistent memory is crucial for many applications in embedded systems and automotive computing, such as AI databases, ADAS, and cutting-edge infotainment systems. Such applications require reliable NAND flash memories made for harsh automotive conditions. However, due to high memory densities and production tolerances, the error probability of NAND flash memories has risen. As the number of program/erase cycles and the data retention times increase, the performance and dependability of non-volatile NAND flash memories suffer. The read reference voltages of the flash cells vary due to these aging processes. In this work, we consider the issue of reference voltage adaptation. The considered estimation procedure uses shallow neural networks to estimate the read reference voltages for different life-cycle conditions with the help of histogram measurements. We demonstrate that the training data for the neural networks can be enhanced by using shifted histograms, i.e., the neural networks can be trained based on a few measurements of some extreme points. The trained neural networks generalize well for other life-cycle conditions.
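The shifted-histogram augmentation described above can be sketched as follows; the function names and zero-padded shifting are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np

def shift_histogram(hist, shift_bins):
    """Shift a read-voltage histogram by a number of bins (zero-padded),
    emulating a shifted threshold-voltage distribution for augmentation."""
    out = np.zeros_like(hist)
    if shift_bins >= 0:
        out[shift_bins:] = hist[:len(hist) - shift_bins]
    else:
        out[:shift_bins] = hist[-shift_bins:]
    return out

def augment(hist, shifts):
    """Create extra training samples from one measured histogram."""
    return np.stack([shift_histogram(hist, s) for s in shifts])
```

Each measured histogram thus yields a family of plausible training inputs, which is how a few extreme-point measurements can cover many life-cycle conditions.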
The code-based McEliece cryptosystem is a promising candidate for post-quantum cryptography. The sender encodes a message, using a public scrambled generator matrix, and adds a random error vector. In this work, we consider q-ary codes and restrict the Lee weight of the added error symbols. This leads to an increased error correction capability and a larger work factor for information-set decoding attacks. In particular, we consider codes over an extension field and use the one-Lee error channel, which restricts the error values to Lee weight one. For this channel model, generalized concatenated codes can achieve high error correction capabilities. We discuss the decoding of those codes and the possible gain for decoding beyond the guaranteed error correction capability.
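The Lee weight restriction above is easy to state concretely: for a symbol in Z_q, the Lee weight is the minimum of the symbol and its additive inverse, so weight-one errors are exactly the values ±1 mod q. A minimal sketch (helper names are illustrative):

```python
def lee_weight(symbol, q):
    """Lee weight of a single symbol in Z_q: min(s, q - s)."""
    s = symbol % q
    return min(s, q - s)

def lee_weight_vector(vec, q):
    """Lee weight of a vector: sum of symbol-wise Lee weights."""
    return sum(lee_weight(s, q) for s in vec)
```

Under the one-Lee error channel, every non-zero error symbol has `lee_weight(symbol, q) == 1`, i.e., the error values are restricted to 1 and q-1.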
Multi-object tracking filters require a birth density to detect new objects from measurement data. If the initial positions of new objects are unknown, it may be useful to choose an adaptive birth density. In this paper, a circular birth density is proposed, which is placed like a band around the surveillance area. This allows for 360° coverage. The birth density is described in polar coordinates and considers all point-symmetric quantities such as radius, radial velocity and tangential velocity of objects entering the surveillance area. Since it is assumed that these quantities are unknown and may vary between different targets, detected trajectories, and in particular their initial states, are used to estimate the distribution of initial states. The adapted birth density is approximated as a Gaussian mixture, so that it can be used for filters operating on Cartesian coordinates.
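A toy sketch of the idea may help: birth states are drawn in polar coordinates on a band around the surveillance area and converted to Cartesian coordinates, e.g., to fit a Gaussian mixture for a Cartesian-coordinate filter. All parameter values below are assumptions for the example, not the paper's settings.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_birth(n, r_mean=100.0, r_std=5.0, v_rad=-3.0, v_tan=0.0, v_std=1.0):
    """Draw n birth states from a ring-shaped density and return
    Cartesian states [x, vx, y, vy]."""
    phi = rng.uniform(0.0, 2.0 * np.pi, n)   # uniform angle around the ring
    r = rng.normal(r_mean, r_std, n)         # radius of the band
    vr = rng.normal(v_rad, v_std, n)         # radial velocity (inward)
    vt = rng.normal(v_tan, v_std, n)         # tangential velocity
    x, y = r * np.cos(phi), r * np.sin(phi)
    vx = vr * np.cos(phi) - vt * np.sin(phi)
    vy = vr * np.sin(phi) + vt * np.cos(phi)
    return np.stack([x, vx, y, vy], axis=1)
```

Because the density is point-symmetric, only the distributions of radius, radial velocity, and tangential velocity need to be adapted from detected trajectories.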
Virtual measurement models (VMM) can be used to generate artificial measurements and emulate complex sensor models such as Lidar. The input of the VMM is an estimation, and the output is the set of measurements this estimation would cause. A Kalman filter with extension estimation based on random matrices is used to filter the mean and covariance of the real measurements. If these match the mean and covariance of the artificial measurements, then the given estimation is appropriate. The optimal input of the VMM is found using an adaptation algorithm. In this paper, the VMM approach is expanded to multi-extended object tracking, where objects can be occluded and are only partially visible. The occlusion can be compensated if the extension estimation is performed for all objects together. The VMM now receives an estimation of the multi-object state as input, and the output is the set of measurements that this multi-object state would cause.
Digitization and sustainability are the two big topics of our current time. As the usage of digital products like IoT devices continues to grow, it affects the energy consumption caused by the Internet. At the same time, more and more companies feel the need to become carbon neutral and sustainable. Determining the environmental impact of an IoT device is challenging, as both the production of the hardware components and the electricity consumption of the Internet, the primary communication medium of an IoT device, must be considered. Estimating the electricity consumption of the Internet itself is a complex task. We performed a life cycle assessment (LCA) to determine the environmental impact of an intelligent smoke detector sold in Germany, taking its whole life cycle from cradle to grave into account. We applied the impact assessment method ReCiPe 2016 Midpoint and compared its results with ILCD 2011 Midpoint+ to check the robustness of our results. The LCA results showed that electricity consumption during the use phase is the main contributor to environmental impacts. This contribution is caused by the mining of coal, which is part of the German electricity mix. Consequently, the smoke detector mainly contributes to the impact categories of freshwater and marine ecotoxicity, but only marginally to global warming.
An IT-GRC approach in SME
(2022)
The digital transformation of business processes and the integration of IT systems lead to opportunities and risks for small and medium-sized enterprises (SMEs), including risks that can result in a lack of IT compliance. The purpose of this research-in-progress paper is to present the current state of an IT-Governance-Risk-Compliance (IT-GRC) research project. First, the results of an already conducted literature review are discussed, combined with qualitative interviews (expert survey) of persons close to IT compliance. In the context of this paper, a first design approach is developed by selecting relevant existing frameworks and standards and identifying SME-specific conditions. The first design is intended to contribute to a further artefact conception of tailoring approaches and standards and to the creation of a guideline.
Low-Code Development Platforms (LCDPs) enable non-information technology (IT) personnel to develop applications and workflows independently of the IT department. Consequently, these digital platforms help to overcome the growing need for software development. However, science and practice warn of several barriers that slow down or hinder the usage of LCDPs. This publication scientifically identifies, analyzes, and discusses challenges during implementation and application of LCDPs from both perspectives in a holistic manner. Therefore, we conduct an exploratory study (data from scientific literature, expert interviews, and practical studies) and assign the challenges to the socio-technical system model. The results show that the scientific and practical communities recognize common challenges (especially knowledge transfer) but also perceive differences related to technological (science) and social (practice) aspects. This paper proposes future research directions for academia, such as governance, culture change, and value evaluation of LCDPs. Additionally, practitioners can prepare for possible challenges when using LCDPs.
In recent years, there has been a noticeable trend towards a general contractor strategy for plant engineering companies. Multiple disciplines and departments must be administered in a joint project. In the process, different work results are often managed in various systems without any associative relationship. A possible way to address this complexity is to implement a specifically tailored PLM strategy to gain a competitive advantage. Maturity models as well as methods to evaluate possible benefits constitute increasingly applied tools during this journey. Both methods have been theoretically described in previous publications. However, this paper provides insights into their practical application within the machinery industry. A medium-sized German plant engineering company serves as an example for determining the scope and value of a multi-national overarching Product Lifecycle Management architecture as the central piece of a future digitalization strategy. The company's current maturity levels for several digitalization capabilities are evaluated, prioritized and benchmarked against a set of similar companies. This allows suitable target states in terms of maturity levels as well as the technical specification of digitalization use cases to be derived. In order to provide sound data for cost justification, the resulting benefits are quantified.
The development of a new product can be accelerated by using an approach called crowdsourcing. Engineers compete and try their best to provide a solution for a given product requirement submitted to an online crowdsourcing platform. The one who submits the best solution gets a financial reward. This approach has been shown to be three times faster than the conventional one. However, the crowdsourcing process is usually not transparent to a new user, and the risk of executing a new product development project is difficult to calculate [1, 2]. We developed a method, InnoCrowd, to handle this problem, which a new user can apply during the planning of a new product development project. The system uses AI concepts to generate a knowledge base representing histories of successful product development projects and uses this knowledge to determine qualitative and quantitative risks of a new project. This paper describes the new method, the InnoCrowd design, and the results of a validation experiment based on data from a current crowdsourcing platform. Finally, we compare InnoCrowd to related methods and systems in terms of design and benefits.
Digitization extends to all areas of people's lives and processes, including public administration and government technology (GovTech for short). However, there are various problems here, such as the inappropriate development of new application systems, that are to be solved efficiently by combining two aspects: methodical digitization according to the process-driven approach and the idea of an app store for processes. This simultaneously fuels a process competition to advance methodical process digitization in the EU. Furthermore, this study explains the target-oriented use of this “firing” within the EU and concludes with a proposal of a new 3-schema architecture standard for successful process digitization within the EU.
As fish farming is becoming more and more important worldwide, this ongoing project aims at the simulation- and test-based analysis of highly stressed wire contacts, as found in offshore fish farm cages, in order to make them more reliable. The quasi-static tensile test of a wire mesh provides data for the construction of a finite element model to better understand the behavior of the high-strength stainless steel from which the cages are made. Fatigue tests provide new insights that are used to adjust the finite element model in order to predict the probability of possible damage caused by heavy mechanical loads (waves, storms, and predators such as sharks).
Dissipation of heat can be a major challenge when applying sensor systems outdoors under varying environmental conditions. Typically, complex software and expert knowledge are needed to optimize thermal management. In this paper, it is shown how the thermal optimization of a LiDAR (light detection and ranging) sensor can be performed efficiently. This approach uses standard CAD (computer-aided design) software, which is readily available, and saves time and cost, as the thermal design can be optimized before experimental realisation. A four-step process was developed and realised: (i) measurement of the thermal energy distribution of the current sensor design; (ii) simulation of the time-dependent thermal behaviour using standard CAD software; (iii) simulation of a thermally optimized design, which was compared quantitatively with the original design and also used to verify a sufficient increase in heat dissipation; (iv) experimental realisation and verification of the optimized design. The optimized prototype shows significantly improved thermal behaviour in accordance with the predictions from the simulations. The new LiDAR sensor shows lower heat generation and optimized dissipation of thermal energy, which proves the applicability of the approach to complex sensors.
Dynamic Real-Time Range Queries (DRRQ) are a common means to handle mobile clients in high-density areas where both the clients requested by the query and the inquirers are mobile. In contrast to the very well-known continuous range queries, only a few approaches, such as Adaptive Quad Streaming (AQS), address the mandatory scalability and real-time requirements of these so-called ad-hoc mobility challenges. In this paper, we present the highly decentralized solution Adaptive Quad Streaming Flexible (AQSflex) as an extension of the already existing, more theoretical AQS approach. Besides a highly distributed cell structure without data structures and a lightweight streaming communication, we use a multi-cell assignment on limited pool resources instead of an idealistic unlimited cell-per-server assignment. The described experimental results show the potential of our local capacity balancing scheme for cell handover in a strongly decentralized setting. Leaves of a cell hierarchy define a kind of self-optimizing fuzzy edge for the processing resources in high-density systems without any centralized controlling or cloud component.
Feature-Based Proposal Density Optimization for Nonlinear Model Predictive Path Integral Control
(2022)
This paper presents a novel feature-based sampling strategy for nonlinear Model Predictive Path Integral (MPPI) control. In MPPI control, the optimal control is calculated by solving a stochastic optimal control problem online using the weighted inference of stochastic trajectories. While the algorithm can be parallelized excellently, the closed-loop performance depends on the information quality of the drawn samples. Because these samples are drawn using a proposal density, its quality is crucial for the solver and thus for the controller performance. In classical MPPI control, the explored state-space is strongly constrained by assumptions that refer to the control value variance, which are necessary for transforming the Hamilton-Jacobi-Bellman (HJB) equation into a linear second-order partial differential equation. To achieve excellent performance even with discontinuous cost functions, in this novel approach, knowledge-based features are used to determine the proposal density and thus the region of state-space for exploration. This paper addresses the question of how the performance of the MPPI algorithm can be improved using a feature-based mixture of base densities. Further, the developed algorithm is applied to an autonomous vessel that follows a track and concurrently avoids collisions using an emergency braking feature.
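The weighted inference of stochastic trajectories mentioned above can be illustrated with a minimal sketch of the standard MPPI weighting step, where sampled trajectory costs are turned into importance weights via a softmin. The function name and temperature parameter are illustrative; this is the textbook step, not the paper's feature-based extension.

```python
import numpy as np

def mppi_weights(costs, lambda_=1.0):
    """Turn sampled trajectory costs into normalized importance weights:
    w_i proportional to exp(-(S_i - min S) / lambda)."""
    beta = np.min(costs)                      # shift for numerical stability
    w = np.exp(-(costs - beta) / lambda_)
    return w / np.sum(w)
```

The weighted average of the sampled control perturbations under these weights yields the control update; a better proposal density concentrates the samples where the weights are large.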
This paper presents a systematic comparison of different advanced approaches for motion prediction of vessels in docking scenarios. A conventional nonlinear gray-box model, its extension to a hybrid model using an additional regression neural network (RNN), and a black-box model based only on an RNN are compared. The optimal hyperparameters are found by grid search. The training and validation data for the different models are collected in full-scale experiments using the solar research vessel Solgenia. The performances of the different prediction models are compared in full-scale scenarios. These models can improve advanced control strategies, e.g., nonlinear model predictive control (NMPC) or reinforcement learning (RL). This paper explores the question of what the advantages and disadvantages of the different presented prediction approaches are and how they can be used to improve the docking behavior of a vessel.
This paper presents the swinging up and stabilization control of a Furuta pendulum using the recently published nonlinear Model Predictive Path Integral (MPPI) approach. This algorithm is based on a path integral over stochastic trajectories and can be parallelized easily. The controller parameters are tuned offline regarding the nonlinear system dynamics and simulations. Constraints in terms of state and input are taken into account in the cost function. The presented approach sequentially computes an optimal control sequence that minimizes this optimal control problem online. The control strategy has been tested in full-scale experiments using a pendulum prototype. The investigated MPPI controller has demonstrated excellent performance in simulation for the swinging up and stabilizing task. In order to also achieve outstanding performance in a real-world experiment using a controller with limited computing power, a linear quadratic controller (LQR) is designed for the stabilization task. In this paper, the determination of the controller parameters for the MPPI algorithm is described in detail. Further, a discussion treats the advantages of the nonlinear MPPI control.
Docking Control of a Fully-Actuated Autonomous Vessel using Model Predictive Path Integral Control
(2022)
This paper presents the docking control of an autonomous vessel using the nonlinear Model Predictive Path Integral (MPPI) approach. This algorithm is based on a path integral over stochastic trajectories and can be parallelized easily. The controller parameters are tuned offline using knowledge of the system and simulations, including nonlinear state and disturbance observer. The cost function implicitly contains information regarding the surrounding of the docking position. This approach allows continuous optimization of the trajectory with respect to the system state, disturbance state and actuator dynamics. The control strategy has been tested in full-scale experiments using the solar research vessel Solgenia. The investigated MPPI controller has demonstrated excellent performance in both, simulation and real-world experiments. This paper addresses the question of how the MPPI algorithm can be applied to dock a fully-actuated vessel and what benefits its application achieves.
In the last decade, both sustainability (Green & Blue Economies) and business models for sustainability (BMfS) have increased in importance. Social life cycle sustainability assessment has not fully achieved its goal, mainly because sustainability-oriented business is very complex and dynamic. System Dynamics (SD) is a powerful methodology and computer simulation modeling technique for framing, understanding and discussing complex issues and problems. This paper responds to the urgent need for a new business model by presenting a concept for dynamic business modeling for sustainability using system dynamics. The paper illustrates the key operating principles through an application from the smartphone industry with help from STELLA® software for simulation. Simulations suggest that dynamic business modeling for sustainability may contribute to sustainable business model research and practice by introducing a systemic design tool that frames environmental, social, and economic drivers of value generation into a dynamic business model causal feedback structure, therefore overcoming shortcomings of current business models when applied to complex systems.
A key objective of this research is to take a more detailed look at a central aspect of resilience in small and medium-sized enterprises (SMEs). A literature review and expert interviews were used to investigate which factors have an impact on the innovative capacity of start-ups and whether these can also be adapted by SMEs. First of all, it must be stated that there are considerable structural and process-related differences between start-ups and SMEs. These can considerably inhibit cooperation between the two forms of enterprise. However, in the same context, success factors and issues in the start-up sector could also be identified that can improve cooperation with SMEs. These and other findings are then discussed in both an economic and an academic context. This article was written as part of the research activities of the Smart Services Competence Centre (proper name: Kompetenzzentrum Smart Services), a central contact point for all questions in the area of smart service digitalization in Baden-Wuerttemberg. Here, companies can obtain information about various digital technologies and take advantage of various measures for the development of new ideas and innovative services (Kompetenzzentrum Smart Services BW: Über das Kompetenzzentrum, 2021).
Sleep is essential to existence, much like air, water, and food, as we spend nearly one-third of our time sleeping. Poor sleep quality or disturbed sleep causes daytime somnolence, which impairs the mental and physical quality of daytime activities and raises the risk of accidents. With advancements in sensor and communication technology, sleep monitoring is moving out of specialized clinics and into our everyday homes. It is possible to extract data from traditional overnight polysomnographic recordings using more basic tools and straightforward techniques. The ballistocardiogram is an unobtrusive, non-invasive, simple, and low-cost technique for measuring cardiorespiratory parameters. In this work, we present a sensor board interface that facilitates the communication between a force-sensitive resistor sensor and an embedded system to provide a high-performing prototype with an efficient signal-to-noise ratio. We have utilized a multi-physical-layer approach, locating each layer on top of another while supporting a low-cost, compact design with easy deployment under the bed frame.
Probabilistic Short-Term Low-Voltage Load Forecasting using Bernstein-Polynomial Normalizing Flows
(2021)
The transition to a fully renewable energy grid requires better forecasting of demand at the low-voltage level. However, high fluctuations and increasing electrification cause huge forecast errors with traditional point estimates. Probabilistic load forecasts take future uncertainties into account and thus enable various applications in low-carbon energy systems. We propose an approach for flexible conditional density forecasting of short-term load based on Bernstein-Polynomial Normalizing Flows, where a neural network controls the parameters of the flow. In an empirical study with 363 smart meter customers, our density predictions compare favorably against Gaussian and Gaussian mixture densities and also outperform a non-parametric approach based on the pinball loss for 24h-ahead load forecasting for two different neural network architectures.
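The pinball loss that the non-parametric baseline above optimizes is compact enough to sketch directly; the function name is illustrative.

```python
import numpy as np

def pinball_loss(y_true, y_pred, tau):
    """Average pinball (quantile) loss for quantile level tau in (0, 1):
    over-prediction and under-prediction are penalized asymmetrically."""
    diff = y_true - y_pred
    return np.mean(np.maximum(tau * diff, (tau - 1.0) * diff))
```

For tau = 0.9, under-predicting by one unit costs 0.9 while over-predicting by one unit costs only 0.1, which is what drives the predicted value toward the 90% quantile of the load distribution.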
Ballistocardiography (BCG) can be used to monitor heart rate activity. The accelerometer should have high sensitivity and minimal internal noise; in addition, a low-cost approach was taken into consideration. Several measurements were executed to determine the optimal positioning of a sensor under the mattress to obtain a signal strong enough for further analysis. A prototype of an unobtrusive accelerometer-based measurement system has been developed and tested in a conventional bed without any specific extras. The influence of the human sleep position on the output accelerometer data was tested. The obtained results indicate the potential to capture BCG signals using accelerometers. The measurement system can detect heart rate unobtrusively in the home environment.
We compared vulnerable and fixed versions of the source code of 50 different PHP open source projects based on CVE reports for SQL injection vulnerabilities. We scanned the source code with commercial and open source tools for static code analysis. Our results show that five current state-of-the-art tools have issues correctly marking vulnerable and safe code. We identify 25 code patterns that are not detected as a vulnerability by at least one of the tools and 6 code patterns that are mistakenly reported as a vulnerability that cannot be confirmed by manual code inspection. Knowledge of the patterns could help vendors of static code analysis tools, and software developers could be instructed to avoid patterns that confuse automated tools.
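A minimal illustration of the vulnerable versus fixed patterns discussed above, transposed from PHP into Python with the standard-library `sqlite3` module (the schema and function names are invented for the example):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, pw TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

def login_unsafe(name):
    # Vulnerable pattern: user input concatenated into the statement.
    return conn.execute(
        "SELECT * FROM users WHERE name = '" + name + "'").fetchall()

def login_safe(name):
    # Fixed pattern: placeholder, input passed as a bound parameter.
    return conn.execute(
        "SELECT * FROM users WHERE name = ?", (name,)).fetchall()
```

An input such as `' OR '1'='1` makes the concatenated query match every row, while the parameterized query treats it as a literal string; distinguishing these two shapes reliably is exactly what the evaluated static analysis tools struggled with.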
The last decades have shown that the volume of tourism, in general, is constantly increasing (with some justified exceptions). To offer the possibility of travel to all groups of people, it is necessary to pay attention to accessibility. One possibility for increasing accessibility is digital technologies, which could assist in the planning, implementation, and completion of trips. To make a selection of technologies, a study of barriers was first conducted and analyzed, and finally, some technologies were made available in a test setup. The focus was placed on two technologies: 360°-tours and a mobile app with travel information. The two technologies were implemented and presented to the test subjects.
The evaluation results showed that both technologies could increase accessibility if some essential aspects (such as usability, completeness, relevance, etc.) are considered during the implementation.
The development of home health systems can provide continuous and user-friendly monitoring of key health parameters. This project aims to create a concept for such a system, implement it on a test basis, and evaluate it. Three health areas were selected for this purpose: sleep, stress, and rehabilitation. Appropriate devices were installed in the homes of test subjects and used by them for two weeks. In addition, relevant questionnaires were completed to obtain a complete picture. Finally, the implemented system was evaluated, and the results of the conducted study showed that home health systems have great potential. However, it is necessary to consider some points to increase the usability of the system and the motivation of the users. Among others, ease of use of the equipment is of extreme importance.
Health monitoring in a home environment can have broader use, since it may provide continuous control of health parameters with relatively minor intrusiveness into regular life. This work aims to verify whether it is possible to replace the subjective questioning typical in some areas of sleep medicine with objective measurement using electronic devices. For this purpose, a study was conducted with ten subjects, in which objective and subjective measurements of relevant sleep parameters took place. The results of both measurement methods were evaluated and analyzed. They showed that while for some measures, such as total time in bed, there is high agreement between objective and subjective measurements, for others, such as sleep quality, there are significant differences. For this reason, a combination of both measurement methods may currently be beneficial and provide the most detailed results, while measurement through electronic devices can already reduce the number of questions required for the subjective assessment.
Identifikation von Schlaf- und Wachzuständen durch die Auswertung von Atem- und Bewegungssignalen
(2021)
In this paper, a novel measurement model based on spherical double Fourier series (DFS) for estimating the 3D shape of a target concurrently with its kinematic state is introduced. Here, the shape is represented as a star-convex radial function, decomposed as spherical DFS. In comparison to ordinary DFS, spherical DFS do not suffer from ambiguities at the poles. Details will be given in the paper. The shape representation is integrated into a Bayesian state estimator framework via a measurement equation. As range sensors only generate measurements from the target side facing the sensor, the shape representation is modified to enable application of shape symmetries during the estimation process. The model is analyzed in simulations and compared to a shape estimation procedure using spherical harmonics. Finally, shape estimation using spherical and ordinary DFS is compared to analyze the effect of the pole problem in extended object tracking (EOT) scenarios.
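The star-convex radial-function idea above can be made concrete with a toy truncated double Fourier series; the cosine-only basis and coefficient layout below are illustrative and do not reproduce the paper's spherical DFS construction, which is specifically designed to avoid pole ambiguities.

```python
import numpy as np

def radius(theta, phi, c):
    """Evaluate a toy star-convex radial function
    r(theta, phi) = sum_{m,n} c[m, n] * cos(m*theta) * cos(n*phi)."""
    M, N = c.shape
    r = 0.0
    for m in range(M):
        for n in range(N):
            r += c[m, n] * np.cos(m * theta) * np.cos(n * phi)
    return r
```

With only the constant coefficient set, the shape is a sphere; higher-order coefficients deform it, and a Bayesian filter estimates the coefficients jointly with the kinematic state.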
Twenty-first century infrastructure needs to respond to changing demographics, becoming climate neutral, resilient and economically affordable, while remaining a driver for development and shared prosperity. However, the infrastructure sector remains one of the least innovative and digitalised, plagued by delays, cost overruns and benefit shortfalls (Cantarelli et al. 2008; Flyvbjerg, 2007; Flyvbjerg et al., 2003; Flyvbjerg et al., 2004). The root cause is the prevailing fragmentation of the infrastructure sector (Fellows and Liu, 2012). To help overcome these challenges, integration of the value chain is needed. This could be achieved through a use-case-based creation of federated ecosystems connecting open and trusted data spaces and advanced services applied to infrastructure projects. Such digital platforms enable full-lifecycle participation and responsible governance guided by a shared infrastructure vision. Digital federation enables secure and sovereign data exchange and thus collaboration across the silos within the infrastructure sector and between industries as well as within and between countries. Such an approach to infrastructure technology policy would not rely on technological solutionism but proposes the development of open and trusted data alliances. Federated data spaces provide access to the emerging data economy, especially for SMEs, and can foster the innovation of new digital services. Such responsible digital governance can help make the infrastructure sector more resilient, efficient and aligned with the realisation of ambitious decarbonisation and environmental protection targets. The European Union and the United States have already developed architectures for sovereign and secure data exchange.
Preliminary results of homomorphic deconvolution application to surface EMG signals during walking
(2021)
Homomorphic deconvolution is applied to sEMG signals recorded during walking. Gastrocnemius lateralis and tibialis anterior signals were acquired according to the SENIAM recommendations. MUAP parameters such as amplitude and scale were estimated, while the MUAP shape parameter was kept fixed. This yields a useful time-frequency representation of the sEMG signal. The estimation of the MUAP scale parameter was verified by extracting the mean frequency of the filtered EMG signal, derived from the scale parameter estimated with two different MUAP shape values.
Normal breathing during sleep is essential for people's health and well-being. Therefore, it is crucial to diagnose apnoea events at an early stage and apply appropriate therapy. Detection of sleep apnoea is the central goal of the system design described in this article. To develop a correctly functioning system, it is first necessary to clearly define the requirements, which are outlined in this manuscript. Furthermore, the selection of an appropriate technology for the measurement of respiration is of great importance. Therefore, after performing an initial literature search, we analysed three different methods in detail and selected a suitable one according to the determined requirements. After considering all the advantages and disadvantages of the three approaches, we decided to use the one based on impedance measurement. As a next step, an initial conceptual design of the algorithm for detecting apnoea events was created. As a result, we developed an activity diagram that visually represents the main system components and data flows.
Respiratory diseases are leading causes of death and disability worldwide. The recent COVID-19 pandemic also affects the respiratory system. Detecting and diagnosing respiratory diseases requires both medical professionals and a clinical environment. Most of the techniques used to date are also invasive or expensive.
Some research groups are developing hardware devices and techniques to enable non-invasive or even remote respiratory sound acquisition. These sounds are then processed and analysed for clinical, scientific, or educational purposes.
We present a literature review of non-invasive sound acquisition devices and techniques.
The results cover a large number of digital tools, such as microphones, wearables, and Internet of Things devices, that can be used for this purpose.
Several interesting applications have been found. Some devices make sound acquisition easier in a clinical environment, while others enable daily monitoring outside it. We aim to use some of these devices and include the non-invasively recorded respiratory sounds in a Digital Twin system for personalized health.
Innovation Labs
(2021)
Today's increasing pace of change and intense competition place demands on organizations to use a different approach to innovation, going beyond the incremental innovation that is typically developed within the core of the organization. As an option to escape the existing beliefs of the core organization, innovation labs are used to develop more discontinuous innovation. Despite the abundance of these so-called innovation labs in practice, researchers have devoted little effort to scrutinizing the concept and providing managers with a framework for exploiting this form of innovation. In this paper, we aim to perform an empirical investigation and to create consensus around the concept of innovation labs. To do so, we conducted a multiple case study in large international organizations with a total of 31 interviews of an average length of 70 minutes. We offer a framework by identifying four innovation lab types and consider when each is most appropriate. Furthermore, we highlight the importance for managers and their organizations of aligning the strategic intent with the innovation lab type as well as with the interface between the innovation lab and the core business.
Text produced by entrepreneurs represents a data source in entrepreneurship research on venture performance and fund-raising success. Manual text coding of single variables is increasingly assisted or replaced by computer-aided text analysis. Yet, for the development of prediction models with several variables, such dictionary-based text analysis methods are less suitable. Natural language processing techniques are an alternative; however, the implementation is more complex and requires substantial programming skills. More work is required to understand how text analytics can advance entrepreneurship research. This study hence experiments with different artificial intelligence methods rooted in Natural Language Processing and deep learning. It uses 766 business plans to train a model for the automated measurement of transaction relations, a construct which is an indicator for new technology-based firm survival. Empirical findings show that the accuracy of construct measurement can be significantly increased with automated methods and improves with larger amounts of training data. Language complexity sets limits to the precision of automated construct measurement though. We therefore recommend a hybrid approach: making use of the inherent advantages of combining automated with human coding until the amount of training data is sufficiently large to substitute the human coding completely. The study provides insights into the applicability of different text analytics methods in entrepreneurship research and points at future research potential.
Cultural Mapping 4.0
(2021)
The Lake Constance region is one of the oldest cultural landscapes in Europe. Its regional cultural identity contributes to the region's image and to the population's identification with the Lake Constance region. Nevertheless, there is a lack of a holistic view, encompassing the entire Lake Constance region, of what constitutes its cultural identity. The research project «CultMap4.0» therefore aims to examine, from a spatial perspective, the interplay between regional identity, culture and mobility. Besides the guiding question of what cultural identity can be and achieve in a cross-border and diverse region such as Lake Constance, the following research questions are examined within four thematic focus areas: How do the local population, companies and tourists perceive the regional cultural identity (self-image) and the image (external perception) of the Lake Constance region? How can the approach of "cultural mapping" be digitally transformed through participatory mapping, and how can cultural identity and mobility be visualized with digital storytelling in story maps? What contribution can "Cultural Mapping 4.0" make as a participatory tool for regional planning and for communication with stakeholders in the Lake Constance region and elsewhere? The resulting story maps (interactive web content composed of texts, maps and other media) on the cultural identity of the Lake Constance region are to be published on the platform "Cultural Mapping Project Lake Constance" so that stakeholders can use them as a planning and decision-making tool and for location marketing.
Acoustic Echo Cancellation (AEC) plays a crucial role in speech communication devices to enable full-duplex communication. AEC algorithms have been studied extensively in the literature. However, device specific details like microphone or loudspeaker configurations are often neglected, despite their impact on the echo attenuation or near-end speech quality. In this work, we propose a method to investigate different loudspeaker-microphone configurations with respect to their contribution to the overall AEC performance. A generic AEC system consisting of an adaptive filter and a Wiener post filter is used for a fair comparison between different setups. We propose the near-end-to-residual-echo ratio (NRER) and the attenuation-of-near-end (AON) as quality measures for the full-duplex AEC performance.
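The proposed near-end-to-residual-echo ratio (NRER) can be illustrated with a minimal sketch. The paper's exact definition is not reproduced here, so the power-ratio formulation, the function names, and the toy signals below are assumptions.

```python
import numpy as np

def power_db(x):
    """Mean signal power in dB (small constant avoids log of zero)."""
    return 10.0 * np.log10(np.mean(np.square(x)) + 1e-12)

def nrer_db(near_end, residual_echo):
    """Assumed NRER definition: power of the near-end speech over the
    power of the echo remaining after cancellation, in dB. Higher
    values indicate better full-duplex AEC performance."""
    return power_db(near_end) - power_db(residual_echo)

# Toy signals: strong near-end speech, weak residual echo.
rng = np.random.default_rng(0)
near = rng.normal(0.0, 1.0, 16000)
residual = rng.normal(0.0, 0.1, 16000)
print(round(nrer_db(near, residual)))  # roughly 20 dB
```

With a variance ratio of 100 between near-end and residual signals, the measure evaluates to about 20 dB, matching the intuition that a quieter residual echo yields a higher score.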
Female entrepreneurship has gained interest over the last 20 years. Therefore, this paper analyzes 7,320 articles from the research field 'women in entrepreneurial context' published in 885 journals. The sample is analyzed using a methodological approach based on machine learning and text mining. Aiming to provide a broad overview of the research literature, 41 clusters and 11 superordinate topics were identified. Major developments of research attention are outlined by analyzing bibliometric data from the period 2000 to 2020. Overall growth in research attention, measured by the development of yearly citations per article, is most noticeable in the clusters 'corporate social responsibility', 'brand', and 'corporate (-governance)', and in the superordinate topics 'performance', 'education', and 'corporate (board/management)'. There are also indicators of an overall increase in research attention and cluster variety. The synthesis provides an insight into the most trending superordinate topics. This literature review therefore gives a comprehensive and descriptive overview as well as an insight into thematic trend developments of the research field.
The encoding of antenna patterns with generalized spatial modulation as well as other index modulation techniques require w-out-of-n encoding where all binary vectors of length n have the same weight w. This constant-weight property cannot be obtained by conventional linear coding schemes. In this work, we propose a new class of constant-weight codes that result from the concatenation of convolutional codes with constant-weight block codes. These constant-weight convolutional codes are nonlinear binary trellis codes that can be decoded with the Viterbi algorithm. Some constructed constant-weight convolutional codes are optimum free distance codes. Simulation results demonstrate that the decoding performance with Viterbi decoding is close to the performance of the best-known linear codes. Similarly, simulation results for spatial modulation with a simple on-off keying show a significant coding gain with the proposed coded index modulation scheme.
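As a small illustration of the constant-weight property, the sketch below enumerates all w-out-of-n words and shows how many index-modulation bits such a codebook can carry. This is generic bookkeeping for constant-weight codes, not the authors' convolutional construction.

```python
from itertools import combinations
from math import comb, floor, log2

def constant_weight_words(n, w):
    """All binary length-n vectors of weight w ('w-out-of-n' words):
    choose the w positions of the ones, set the rest to zero."""
    words = []
    for ones in combinations(range(n), w):
        v = [0] * n
        for i in ones:
            v[i] = 1
        words.append(tuple(v))
    return words

# For generalized spatial modulation, an index-modulation mapper can
# carry floor(log2(C(n, w))) bits per channel use with such words.
n, w = 8, 2
words = constant_weight_words(n, w)
print(len(words), floor(log2(comb(n, w))))  # 28 codewords -> 4 bits
```

The example makes the abstract's point concrete: no linear code can produce this codebook, since the all-zero word (weight 0) is always a codeword of a linear code.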
List decoding for concatenated codes based on the Plotkin construction with BCH component codes
(2021)
Reed-Muller codes are a popular code family based on the Plotkin construction. Recently, these codes have regained interest due to their close relation to polar codes and their low-complexity decoding. We consider a similar code family, namely the Plotkin concatenation with binary BCH component codes. This construction is more flexible regarding the attainable code parameters. In this work, we consider a list-based decoding algorithm for the Plotkin concatenation with BCH component codes. The proposed list decoding leads to a significant coding gain with only a small increase in computational complexity. Simulation results demonstrate that the Plotkin concatenation with the proposed decoding achieves near-maximum-likelihood decoding performance. This coding scheme can outperform polar codes for moderate code lengths.
Nowadays, inexpensive memory space promotes an accelerating growth of stored image data. To exploit the data using supervised machine or deep learning, it needs to be labeled. Manually labeling the vast amount of data is time-consuming and expensive, especially if human experts with specific domain knowledge are indispensable. Active learning addresses this shortcoming by querying the user for the labels of the most informative images first. One way to quantify this 'informativeness' is uncertainty sampling as a query strategy, where the system queries those images it is most uncertain about how to classify. In this paper, we present a web-based active learning framework that helps to accelerate the labeling process. After manually labeling some images, the user receives recommendations of further candidates that could potentially be labeled equally (bulk image folder shift). We aim to find the most efficient 'uncertainty' measure to improve the quality of the recommendations such that all images are sorted with a minimum number of user interactions (clicks). We conducted experiments using a manually labeled reference dataset to evaluate different combinations of classifiers and uncertainty measures. The results clearly show the effectiveness of uncertainty sampling with bulk image shift recommendations (our novel method), which can reduce the number of required clicks to only around 20% compared to manual labeling.
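Two standard uncertainty measures used in this kind of query strategy can be sketched as follows; the function names and toy probabilities are illustrative, not taken from the framework described above.

```python
import numpy as np

def least_confidence(probs):
    """1 minus the maximum class probability per sample;
    higher means more uncertain."""
    return 1.0 - probs.max(axis=1)

def entropy(probs):
    """Shannon entropy per sample; higher means more uncertain."""
    return -(probs * np.log(probs + 1e-12)).sum(axis=1)

def query_order(probs, measure=entropy):
    """Indices of samples sorted from most to least uncertain --
    the order in which an active learner would request labels."""
    return np.argsort(-measure(probs))

probs = np.array([[0.98, 0.01, 0.01],   # confident prediction
                  [0.40, 0.35, 0.25],   # very uncertain
                  [0.70, 0.20, 0.10]])  # moderately uncertain
print(query_order(probs))  # [1 2 0]
```

Both measures rank the near-uniform prediction first and the confident one last, so the user labels the most informative image first.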
Summary of the 9th workshop on metallization and interconnection for crystalline silicon solar cells
(2021)
The 9th edition of the Workshop on Metallization and Interconnection for Crystalline Silicon Solar Cells was held as an online event but nevertheless reached the workshop goals of knowledge sharing and networking. The technology of screen-printed contacts of high temperature pastes continues its fast progress enabled by better understanding of the phenomena taking place during printing and firing, and progress in materials. Great improvements were also achieved in low temperature paste printing and plated metallization. In the field of interconnection, progress was reported on multiwire approaches, electrically conductive adhesives and on foil-based approaches. Common to many contributions at the workshop was the use of advanced laser processes to improve performance or throughput.
Continuous range queries are a common means to handle mobile clients in high-density areas. Most existing approaches focus on settings in which the range queries for location-based services are mostly static whereas the mobile clients in the ranges move. We focus on a category called Dynamic Real-Time Range Queries (DRRQ), assuming that both the clients requested by the query and the inquirers are mobile. In consequence, the query parameters and results change continuously. This leads to two requirements: the ability to deal with an arbitrarily high number of mobile nodes (scalability) and the real-time delivery of range query results. In this paper we present Adaptive Quad Streaming (AQS), a highly decentralized solution for the requirements of DRRQs. AQS approximates the query results in favor of controlled real-time delivery and guaranteed scalability. While prior work commonly optimizes data structures on servers, AQS relies on a highly distributed cell structure that automatically adapts to changing client distributions, without server-side data structures. Instead of the commonly used request-response approach, we apply a lightweight streaming method in which no bidirectional communication and no storage or maintenance of queries are required at all.
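The idea of a cell structure that adapts to the client distribution can be illustrated with a quadtree-style subdivision; the capacity parameter and the recursion below are an illustrative sketch, not the authors' implementation.

```python
def split_cell(cell, clients, capacity=4):
    """Recursively split a square cell (x, y, size) into four quadrants
    until each leaf holds at most `capacity` clients -- an adaptive quad
    structure that produces small cells where clients cluster."""
    x, y, size = cell
    inside = [(cx, cy) for cx, cy in clients
              if x <= cx < x + size and y <= cy < y + size]
    if len(inside) <= capacity:
        return [(cell, inside)]
    half = size / 2.0
    leaves = []
    for qx, qy in [(x, y), (x + half, y), (x, y + half), (x + half, y + half)]:
        leaves.extend(split_cell((qx, qy, half), inside, capacity))
    return leaves

# Ten clients clustered along a line near the origin.
clients = [(i * 0.09, i * 0.05) for i in range(10)]
leaves = split_cell((0.0, 0.0, 1.0), clients)
print(len(leaves))  # more, smaller cells where clients cluster
```

Because the quadrants use half-open bounds, every client lands in exactly one leaf, and re-running the split as clients move yields a new cell layout without any server-side query state.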
Trajectory Tracking of a Fully-actuated Surface Vessel using Nonlinear Model Predictive Control
(2021)
The trajectory tracking problem for a fully-actuated real-scaled surface vessel is addressed in this paper. The unknown hydrodynamic and propulsion parameters of the vessel’s dynamic model were identified using an experimental maneuver-based identification process. Then, a nonlinear model predictive control (NMPC) scheme is designed and the controller’s performance is assessed through the variation of NMPC parameters and constraints tightening for tracking a curved trajectory.
This paper describes the development of a control system for an industrial heating application. In this process a moving substrate is passing through a heating zone with variable speed. Heat is applied by hot air to the substrate with the air flow rate being the manipulated variable. The aim is to control the substrate’s temperature at a specific location after passing the heating zone. First, a model is derived for a point attached to the moving substrate. This is modified to reflect the temperature of the moving substrate at the specified location. In order to regulate the temperature a nonlinear model predictive control approach is applied using an implicit Euler scheme to integrate the model and an augmented gradient based optimization approach. The performance of the controller has been validated both by simulations and experiments on the physical plant. The respective results are presented in this paper.
In multi-extended object tracking, parameters (e.g., extent) and trajectory are often determined independently. In this paper, we propose a joint parameter and trajectory (JPT) state and its integration into the Bayesian framework. This allows processing measurements that contain information about both parameters and states. Examples of such measurements are bounding boxes provided by an image processing algorithm. It is shown that this approach can consider correlations between states and parameters. In this paper, we present the JPT Bernoulli filter. Since parameters and state elements are considered in the weighting of the measurement data assignment hypotheses, the performance is higher than with the conventional Bernoulli filter. The JPT approach can also be used for other Bayes filters.
As part of the KONTEC Congress 2021 in Dresden, both a poster and a paper from the research project EKont were published. In addition to a description of the experimental setup, novel cutting processes and material removal principles are presented. Subsequently, four prototypes (co-rotating stepped milling cutter, counter-rotating stepped milling cutter, centrally counter-rotating stepped milling cutter with gearbox, and oscillating tool attachment) are described.
The main aim of the research presented in this manuscript is to compare the results of objective and subjective measurement of sleep quality for older adults (65+) in the home environment. A total of 73 nights was evaluated in this study. A device placed under the mattress was used to obtain the objective measurement data, and a common question on perceived sleep quality was asked to collect the subjective sleep quality level. The achieved results confirm the correlation between objective and subjective measurement of sleep quality, with an average standard deviation equal to 2 of 10 possible quality points.
Cultural Mapping 4.0
(2021)
Cultural mapping aims to capture and visualize tangible and intangible cultural assets. This extended abstract proposes the consequent extension of analogue forms of cultural mapping using digital technologies, and its contribution is two-fold. First, the necessary theoretical basis is provided by a literature review of the still-young field of cultural mapping and the complementary disciplines of participatory mapping and digital story-mapping. Second, we propose a digitally enhanced Cultural Mapping 4.0 vision based on a case study from an ongoing research project in the Lake Constance region. Digital participatory mapping approaches are applied to capture data, and story-mapping, a spatial form of digital storytelling, is used to validate and disseminate the results.
This paper presents a generic method to enhance performance and incorporate temporal information for cardiorespiratory-based sleep stage classification with a limited feature set and limited data. The classification algorithm relies on random forests and a feature set extracted from long-time home monitoring for sleep analysis. Employing temporal feature stacking, the system could be significantly improved in terms of Cohen’s κ and accuracy. The detection performance could be improved for three classes of sleep stages (Wake, REM, Non-REM sleep), four classes (Wake, Non-REM-Light sleep, Non-REM Deep sleep, REM sleep), and five classes (Wake, N1, N2, N3/4, REM sleep) from a κ of 0.44 to 0.58, 0.33 to 0.51, and 0.28 to 0.44 respectively by stacking features before and after the epoch to be classified. Further analysis was done for the optimal length and combination method for this stacking approach. Overall, three methods and a variable duration between 30 s and 30 min have been analyzed. Overnight recordings of 36 healthy subjects from the Interdisciplinary Center for Sleep Medicine at Charité-Universitätsmedizin Berlin and Leave-One-Out-Cross-Validation on a patient-level have been used to validate the method.
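The temporal feature stacking described above can be sketched in a few lines; the edge padding and array layout below are implementation assumptions, not taken from the paper.

```python
import numpy as np

def stack_epochs(features, before=2, after=2):
    """Concatenate each epoch's feature vector with those of the
    `before` preceding and `after` following epochs, so the classifier
    sees temporal context. Edge epochs are padded by repeating the
    first/last epoch."""
    n = len(features)
    padded = np.vstack([features[:1]] * before
                       + [features]
                       + [features[-1:]] * after)
    return np.hstack([padded[i:i + n] for i in range(before + after + 1)])

feats = np.arange(12, dtype=float).reshape(6, 2)   # 6 epochs, 2 features
stacked = stack_epochs(feats, before=1, after=1)
print(stacked.shape)  # (6, 6): each epoch now carries 3 epochs' features
```

A random forest trained on the stacked vectors then implicitly uses the sleep-stage context before and after each 30-s epoch, which is what drives the reported κ improvements.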
Since its first edition in 2008, the Workshop on Metallization and Interconnection for Crystalline Silicon Solar Cells has been a key event where knowledge in the critical fields of crystalline silicon solar cell metallization and interconnection is shared between experts from academia and industry. It has become a highly recognized event for the quality of the contributions, the lively Q&A sessions, and the exceptional networking opportunity. The situation with the Covid-19 pandemic made organizing the 9th edition as an in-person event impossible and forced us to reconsider the event format. The event took place virtually on October 5th and 6th 2020. We used an innovative online platform that enabled not only presentations followed by Q&A but also more informal interactions, where participants could see and talk directly to other participants. 120 experts from 22 countries took part and attended 21 contributions presented live. In spite of a few technical glitches, the workshop was successful and the goals of exchanging on the state-of-the-art in research/industry and connecting experts in the field were achieved. All presentations are available on www.miworkshop.info as .pdf documents. These proceedings contain a summary of the 9th edition (MIW2020) and peer-reviewed papers based on the workshop contributions. The organizers wish to thank the members of the Scientific Committee for the time spent reviewing the MIW2020 abstracts and proceedings. The organizers also wish to thank again the sponsors and supporters for their financial contributions which made the 9th Workshop on Metallization and Interconnection for Crystalline Silicon Solar Cells possible.
In this paper, a systematic comparison of three different advanced control strategies for automated docking of a vessel is presented. The controllers are automatically tuned offline by applying an optimization process using simulations of the whole system, including the trajectory planner and the state and disturbance observer. Then, investigations are conducted with respect to performance and robustness using Monte Carlo simulations with varying model parameters and disturbances. The control strategies have also been tested in full-scale experiments using the solar research vessel Solgenia. All investigated control strategies demonstrated very good performance in both simulation and real-world experiments. Videos are available at https://www.htwg-konstanz.de/forschung-und-transfer/institute-und-labore/isd/regelungstechnik/videos/
Guiding through the Fog
(2021)
Corporate Entrepreneurship (CE) programs are formalized efforts to realize entrepreneurial activities in established companies. Despite the growing and evolving landscape of CE programs, effectively managing them remains a challenging endeavor, which results in disappointing outcomes and oftentimes leads to the early termination of such programs. We unmask the differences in the goal setting of CE programs and highlight that setting appropriate goals is imperative for their desired outcomes. In practice, companies seem to struggle with goal setting, and scholars have not yet fully solved the puzzle of goal setting in the context of CE programs either. Therefore, we set out to explore the current state of goal setting in the context of CE programs, building upon 61 semi-structured interviews with CE program executives from cross-industry companies of different sizes. Our study contributes to a better understanding of goal setting in the context of CE programs by (1) characterizing the goal setting of CE programs based on goal attributes and goal types and (2) identifying differences among the goal settings of CE programs. We provide implications for practice for a more effective management of CE programs and conclude with a discussion of future research on the impact of the different goal settings.
Market research institutes forecast a growing relevance of Low-Code Development Platforms (LCDPs) for organizations. Moreover, the rising number of scientific publications in recent years shows the increasing interest of the academic community. However, an overview of current research focuses and fruitful future research topics is missing. This paper conducts a first scientific literature review on LCDPs to close this gap. The socio-technical system (STS) model, which categorizes information systems into a social and a technical system, serves to analyze the identified 32 publications. Most of current research focuses on the technical system (technology or task). In contrast, only three publications explicitly target the social system (structure or people). Hence, this paper enables future research to address the identified research gaps. Additionally, practitioners gain awareness of technical and social aspects involved in the development, implementation, and application of LCDPs.
Deep transformation models
(2021)
We present a deep transformation model for probabilistic regression. Deep learning is known for outstandingly accurate predictions on complex data, but in regression tasks it is predominantly used to predict just a single number. This ignores the non-deterministic character of most tasks. Especially if crucial decisions are based on the predictions, as in medical applications, it is essential to quantify the prediction uncertainty. The presented deep transformation model estimates the whole conditional probability distribution, which is the most thorough way to capture uncertainty about the outcome. We combine ideas from a statistical transformation model (most likely transformations) with recent transformation models from deep learning (normalizing flows) to predict complex outcome distributions. The core of the method is a parameterized transformation function which can be trained with the usual maximum likelihood framework using gradient descent. The method can be combined with existing deep learning architectures. On small machine learning benchmark datasets, we report state-of-the-art performance for most datasets and partly even outperform it. Our method works for complex input data, which we demonstrate by employing a CNN architecture on image data.
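The maximum likelihood training of a transformation model rests on the change-of-variables formula. The sketch below uses a simple affine transformation as a stand-in for the learned monotone transformation (the paper's actual transformation is far more flexible); the variable names and the toy data are assumptions.

```python
import numpy as np

def nll(y, a, b):
    """Negative log-likelihood of the model z = h(y) = a*y + b with
    z ~ N(0, 1). Change of variables: p(y) = phi(h(y)) * h'(y),
    so nll = -mean(log phi(a*y + b) + log a)."""
    z = a * y + b
    log_phi = -0.5 * (z ** 2 + np.log(2 * np.pi))
    return -(log_phi + np.log(a)).mean()

# Fit by plain gradient descent on data drawn from N(2, 0.5);
# the optimum maps y to standard normal: a = 1/0.5 = 2, b = -4.
rng = np.random.default_rng(1)
y = rng.normal(2.0, 0.5, 5000)
a, b, lr = 1.0, 0.0, 0.05
for _ in range(5000):
    z = a * y + b
    grad_a = (z * y).mean() - 1.0 / a   # d nll / d a
    grad_b = z.mean()                   # d nll / d b
    a -= lr * grad_a
    b -= lr * grad_b
print(a, b)                             # close to 2.0 and -4.0
print(nll(y, a, b) < nll(y, 1.0, 0.0))  # fitted model has lower NLL
```

Replacing the affine map with a flexible parameterized transformation (and conditioning its parameters on a neural network) gives the deep transformation model, trained with exactly this likelihood.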
The IETF, concerned with the evolution of the Internet architecture, nowadays also looks into industrial automation processes. The contributions of a variety of IETF activities, initiated during the last ten years, now enable the replacement of proprietary standards by an open standardized protocol stack. This stack, denoted in the following as the 6TiSCH stack, is tailored for the industrial internet of things (IIoT). The suitability of the 6TiSCH stack for Industry 4.0 is yet to be explored. In this paper, we identify four challenges that, in our opinion, may delay or hinder its adoption. As a prime example, we focus on the initial 6TiSCH network formation, highlighting the shortcomings of the default procedure and introducing our current work towards a fast and reliable formation of dense networks.
The recovery of our body and brain from fatigue directly depends on the quality of sleep, which can be determined from the results of a sleep study. The classification of sleep stages is the first step of this study and includes the measurement of vital data and their further processing. The non-invasive sleep analysis system is based on a hardware sensor network of 24 pressure sensors providing sleep phase detection. The pressure sensors are connected to an energy-efficient microcontroller via a system-wide bus. A significant difference between this system and other approaches is the innovative way in which the sensors are placed under the mattress. This feature facilitates the continuous use of the system without any noticeable influence on the sleeping person. The system was tested by conducting experiments that recorded the sleep of various healthy young people. Results indicate the potential to capture respiratory rate and body movement.
Polysomnography is the gold standard for a sleep study, and it provides very accurate results, but its costs (both personnel and material) are quite high. Therefore, the development of a low-cost system for overnight breathing and heartbeat monitoring, which provides more comfort while recording the data, is a well-motivated challenge. The system proposed in this manuscript is based on resistive pressure sensors installed under the mattress. These sensors can measure the slight pressure changes caused by breathing and heartbeat. The captured signal requires advanced processing, such as filtering and amplification, before the analog signal is ready for the next step. The output signal is then digitalized and further processed by an algorithm that performs custom filtering before recognizing breathing and heart rate in real time. The result can be directly visualized. Furthermore, a CSV file is created containing the raw data, timestamps, and unique IDs to facilitate further processing. The achieved results are promising, and the average deviation from a reference device is about 4 bpm.
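A breathing-rate estimate of the kind described above can be sketched with simple smoothing and zero-crossing counting; the window lengths, the refractory period, and the synthetic signal are illustrative assumptions, not the authors' pipeline.

```python
import numpy as np

def breathing_rate(signal, fs):
    """Estimate breaths per minute from a raw pressure-sensor signal:
    smooth with a 1-s moving average to suppress heartbeat and noise,
    remove the baseline, then count upward zero crossings with a
    minimum spacing of 2 s between detected breaths."""
    kernel = np.ones(fs) / fs
    smooth = np.convolve(signal, kernel, mode="same")
    smooth -= smooth.mean()
    crossings = np.where((smooth[:-1] < 0) & (smooth[1:] >= 0))[0]
    breaths, last = 0, -2 * fs          # refractory period of 2 s
    for c in crossings:
        if c - last >= 2 * fs:
            breaths += 1
            last = c
    return 60.0 * breaths / (len(signal) / fs)

# Synthetic 60-s recording: 0.25 Hz breathing plus sensor noise.
fs = 50
t = np.arange(0, 60, 1 / fs)
sig = np.sin(2 * np.pi * 0.25 * t) \
      + 0.3 * np.random.default_rng(2).normal(size=t.size)
print(breathing_rate(sig, fs))  # close to the true 15 breaths per minute
```

The moving average removes the faster heartbeat component, so the zero crossings of the remaining slow oscillation track the respiration cycle.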
Good sleep is crucial for the healthy life of every person. Unfortunately, its quality often decreases with aging. A common approach to measuring sleep characteristics is based on interviews with the subjects or letting them fill in a daily questionnaire and afterwards evaluating the obtained data. However, this method entails time and personnel costs for the interviewer and the evaluator of the responses. It would therefore be valuable to collect and evaluate sleep characteristics automatically. To do that, it is necessary to investigate the level of agreement between measurements performed in the traditional way using questionnaires and measurements obtained using electronic monitoring devices. The study presented in this manuscript performs this investigation, comparing sleep characteristics such as "time going to bed", "total time in bed", "total sleep time" and "sleep efficiency". A total of 106 night records of elderly persons (aged 65+) were analyzed. The results achieved so far reveal that the degree of agreement between the two measurement methods varies substantially between characteristics, from 31 minutes of mean difference for "time going to bed" to 77 minutes for "total sleep time". For this reason, a direct exchange of objective and subjective measuring methods is currently not possible.
The evaluation of the effectiveness of different machine learning algorithms on a publicly available database of signals derived from wearable devices is presented, with the goal of optimizing human activity recognition and classification. Among the wide range of body signals, we chose two signals, namely photoplethysmographic (optically detected subcutaneous blood volume) and tri-axis acceleration signals, that are easy to acquire simultaneously using widespread commercial devices (e.g. smartwatches) as well as custom wearable wireless devices designed for sport, healthcare, or clinical purposes. To this end, two widely used algorithms (decision tree and k-nearest neighbor) were tested, and their performance was compared to two recent algorithms (particle Bernstein and a Monte Carlo-based regression) in terms of both accuracy and processing time. A data preprocessing phase was also considered to improve the performance of the machine learning procedures and to reduce the problem size; a detailed analysis of the compression strategy and its results is also presented.
In this contribution, a machine learning method for sleep stage detection is developed. Common methods of sleep analysis are based on polysomnography (PSG). The presented approach relies on signals that can be measured exclusively non-invasively in a home environment. Movement, heartbeat and respiration signals are comparatively easy to acquire, but this makes sleep stage detection more difficult. The signals are structured as time series and converted into epochs. The performance of the machine learning approach is compared against polysomnography and evaluated.
Sleep apnea is a common sleep disorder with various effects on everyday life; for example, daytime sleepiness has been reported for about 25% of patients with obstructive sleep apnea (OSA). The aim of this work is the development of a system that enables non-invasive detection of sleep apnea in a home environment.
Non-invasive methods are particularly suitable for monitoring sleep at home. The signals most frequently monitored are heart rate and respiratory rate. Ballistocardiography (BCG) is a technique in which the heart rate is derived from the mechanical oscillations of the body during each cardiac cycle. Review articles on this topic have recently been published. As a first step, this study evaluates whether the heart rate can be detected by means of BCG. The essential constraints are whether this succeeds when the sensor is positioned under the mattress and when low-cost sensors are used.
This paper presents the implementation of deep learning methods for sleep stage detection using three signals that can be measured in a non-invasive way: a heartbeat signal, a respiratory signal, and a movement signal. Since the signals are measurements taken over time, the problem is treated as time-series classification. The deep learning methods chosen to solve the problem are a convolutional neural network and a long short-term memory network. The input data is structured as a time-series sequence of the mentioned signals representing a 30-second epoch, the standard interval for sleep analysis. The records used belong to 23 subjects in total, divided into two subsets: records from 18 subjects were used for training and records from 5 subjects for testing. For detecting four sleep stages, REM (Rapid Eye Movement), Wake, Light sleep (Stage 1 and Stage 2), and Deep sleep (Stage 3 and Stage 4), the accuracy of the model is 55% and the F1 score is 44%. For five stages, REM, Stage 1, Stage 2, Deep sleep (Stage 3 and 4), and Wake, the model gives an accuracy of 40% and an F1 score of 37%.
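The epoch structuring described above can be sketched as follows. This is a minimal numpy illustration with a hypothetical sampling rate, not the preprocessing pipeline used in the study:

```python
import numpy as np

def to_epochs(signal, fs, epoch_len_s=30):
    """Split a 1-D signal into non-overlapping epochs of epoch_len_s seconds.

    fs is the sampling rate in Hz; trailing samples that do not fill a
    complete epoch are discarded. Returns an array of shape
    (n_epochs, epoch_len_s * fs).
    """
    samples_per_epoch = int(epoch_len_s * fs)
    n_epochs = len(signal) // samples_per_epoch
    return np.asarray(signal[:n_epochs * samples_per_epoch]).reshape(
        n_epochs, samples_per_epoch)

# Example: 5 minutes of a heartbeat-like signal sampled at 4 Hz (toy values)
fs = 4
signal = np.sin(np.linspace(0, 100, 5 * 60 * fs))
epochs = to_epochs(signal, fs)
print(epochs.shape)  # ten 30-second epochs of 120 samples each
```

Each row of `epochs` would then be classified into one sleep stage by the network.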
This work is a comparative study of survey tools intended to help developers select a suitable tool for application in an AAL environment. The first step was to identify the basic functionality required of survey tools used for AAL technologies and to compare these tools by their functionality and intended applications. The comparative study was derived from the data obtained, previous literature studies, and further technical data. A list of requirements was drawn up and ordered by relevance to the target application domain. With the help of an integrated assessment method, a generalized estimate value was calculated and the result is explained. Finally, the planned application of the selected tool in a running project is described.
Cardiovascular diseases are directly or indirectly responsible for up to 38.5% of all deaths in Germany and thus represent the most frequent cause of death. At present, heart diseases are mainly discovered by chance during routine visits to the doctor or when acute symptoms occur. However, there is no practical method to proactively detect diseases or abnormalities of the heart in the daily environment and to take preventive measures for the person concerned. Long-term ECG devices, as currently used by physicians, are simply too expensive, impractical, and not widely available for everyday use. This work aims to develop an ECG device suitable for everyday use that can be worn directly on the body. For this purpose, an already existing hardware platform will be analyzed, and the corresponding potential for improvement will be identified. A precise picture of the existing data quality is obtained by metrological examination, and corresponding requirements are defined. Based on these identified optimization potentials, a new ECG device is developed. The revised ECG device is characterized by a high integration density and combines all components directly on one board except the battery and the ECG electrodes. The compact design allows the device to be attached directly to the chest. An integrated microcontroller allows digital signal processing without the need for an additional computer. Central features of the evaluation are a peak detection for detecting R-peaks and a calculation of the current heart rate based on the RR interval. To ensure the validity of the detected R-peaks, a model of the anatomical conditions is used. Thus, unrealistic RR-intervals can be excluded. The wireless interface allows continuous transmission of the calculated heart rate. Following the development of hardware and software, the results are verified, and appropriate conclusions about the data quality are drawn. 
As a result, a very compact and wearable ECG device with different wireless technologies, data storage, and evaluation of RR intervals was developed. Tests yielded runtimes of up to 24 hours with wireless LAN activated and streaming.
We present source code patterns that are difficult for modern static code analysis tools. Our study comprises 50 different open source projects in both a vulnerable and a fixed version for XSS vulnerabilities reported with CVE IDs over a period of seven years. We used three commercial and two open source static code analysis tools. Based on the reported vulnerabilities we discovered code patterns that appear to be difficult to classify by static analysis. The results show that code analysis tools are helpful, but still have problems with specific source code patterns. These patterns should be a focus in training for developers.
We propose and apply a requirements engineering approach that focuses on security and privacy properties and takes into account various stakeholder interests. The proposed methodology facilitates the integration of security and privacy by design into the requirements engineering process. Thus, specific, detailed security and privacy requirements can be implemented from the very beginning of a software project. The method is applied to an exemplary application scenario in the logistics industry. The approach includes the application of threat and risk rating methodologies, a technique to derive technical requirements from legal texts, as well as a matching process to avoid duplication and accumulate all essential requirements.
Overcoming the seam in university-level learning between the higher education context and professional practice is a major challenge due to the temporal, spatial, and organizational separation of the relevant actors (including teachers, learners, and company representatives) (Milrad et al., 2013). A seamless-learning-based course design built on agile values and methods (including incremental procedures, a focus on learner-centered sessions, and individualized learner feedback) can help overcome this significant seam. The poster discusses the basic design of such an agile SL concept based on an iterative, incremental approach within a semester cycle of 15 weeks organized in three learning sprints. In addition, it reports on the initial teaching experiences of the lecturers from both the university and industry, as well as the learning experiences of students over the past two years.
Seamless-Learning-Plattform
(2020)
Side Channel Attack Resistance of the Elliptic Curve Point Multiplication using Gaussian Integers
(2020)
Elliptic curve cryptography is a cornerstone of embedded security. However, hardware implementations of the elliptic curve point multiplication are prone to side channel attacks. In this work, we present a new key expansion algorithm which improves the resistance against timing and simple power analysis attacks. Furthermore, we consider a new concept for calculating the point multiplication, where the points of the curve are represented as Gaussian integers. Gaussian integers are a subset of the complex numbers, such that the real and imaginary parts are integers. Since Gaussian integer fields are isomorphic to prime fields, this concept is suitable for many elliptic curves. Representing the key by a Gaussian integer expansion is beneficial for reducing the computational complexity and the memory requirements of a secure hardware implementation.
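To illustrate the kind of arithmetic involved, and not the authors' implementation, here is a minimal sketch of modular reduction over Gaussian integers. The toy modulus is π = 2 + i, whose norm N(π) = 5 is prime, so the residues form a field isomorphic to GF(5):

```python
def gauss_mod(a, pi):
    """Reduce the Gaussian integer a modulo pi.

    Computes a - round(a * conj(pi) / N(pi)) * pi, where the rounding of
    real and imaginary parts picks the representative of smallest norm.
    """
    n = (pi * pi.conjugate()).real              # norm N(pi)
    q = a * pi.conjugate() / n                  # exact quotient in C
    q_rounded = complex(round(q.real), round(q.imag))
    return a - q_rounded * pi

pi = 2 + 1j                                     # toy modulus, N(pi) = 5
a, b = 1 + 1j, 2 + 0j
prod = gauss_mod(a * b, pi)                     # multiplication in the field
print(prod)
```

Since the field is isomorphic to GF(5), every residue class of an ordinary integer mod 5 has a Gaussian integer representative of small norm, which is what makes such constellations attractive.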
Multi-Dimensional Connectionist Classification (MDCC) is a method for weakly supervised training of Deep Neural Networks for segmentation-free multi-line offline handwriting recognition. MDCC applies Conditional Random Fields (CRFs) as an alignment function for this task. We discuss the structure and patterns of handwritten text that can be used for building a CRF. Since CRFs are cyclic graphical models, we have to resort to approximate inference when calculating the alignment of multi-line text during training, here in the form of Loopy Belief Propagation. This work concludes with experimental results for transcribing small multi-line samples from the IAM Offline Handwriting DB which show that MDCC is a competitive methodology.
A residual neural network was adapted and applied to the PhysioNet/Computing in Cardiology Challenge 2020 data to detect 24 different classes of cardiac abnormalities from 12-lead ECGs. Additive Gaussian noise, signal shifting, and the classification of signal sections of different lengths were applied to prevent the network from overfitting and to facilitate generalization. Due to the use of a global pooling layer after the feature extractor, the network is independent of the signal's length. On the hidden test set of the challenge, the model achieved a validation score of 0.656 and a full test score of 0.27, placing us 15th out of 41 officially ranked teams (team name: UC_Lab_Kn). These results show the potential of deep neural networks for application to raw data and a complex multi-class multi-label classification problem, even if the training data comes from diverse datasets and is of differing lengths.
We have analyzed a pool of 37,839 articles published in 4,404 business-related journals in the entrepreneurship research field using a novel literature review approach that is based on machine learning and text data mining. Most papers have been published in the journals ‘Small Business Economics’, ‘International Journal of Entrepreneurship and Small Business’, and ‘Sustainability’ (Switzerland), while the sum of citations is highest in the ‘Journal of Business Venturing’, ‘Entrepreneurship Theory and Practice’, and ‘Small Business Economics’. We derived 29 overarching themes based on 52 identified clusters. The social entrepreneurship, development, innovation, capital, and economy clusters represent the largest ones among those with high thematic clarity. The most discussed clusters, measured by the average number of citations per assigned paper, are research, orientation, capital, gender, and growth. Clusters with the highest average growth in publications per year are social entrepreneurship, innovation, development, entrepreneurship education, and (business-) models. Measured by the average yearly citation rate per paper, the thematic cluster ‘research’, mostly containing literature studies, received the most attention. This machine-learning-based review allows for the inclusion of a significantly higher number of publications than traditional reviews, thus providing a comprehensive, descriptive overview of the whole research field.
In today's volatile world, established companies must be capable of optimizing their core business with incremental innovations while simultaneously developing discontinuous innovations to maintain their long-term competitiveness. Balancing both is a major challenge for companies, since different types of innovation require different organizational structures, operational modes and management styles. Established companies tend to excel in improving their current business through incremental innovations which are closely related to their current knowledge base and competencies. However, this often goes hand in hand with challenges in the exploration of knowledge that is new to the company and that is essential for the development of discontinuous innovations. In this respect, the concept of corporate entrepreneurship is recognized as a way to strengthen the exploration of new knowledge and to support the development of discontinuous innovation. For managing corporate entrepreneurship more effectively, it is crucial to understand which types of knowledge can be created through corporate entrepreneurship and which organizational designs are more suited to gain certain types of knowledge. To answer these questions, this study analyzed 23 semi-structured interviews conducted with established companies that are running such entrepreneurial activities. The results show (1) that three general types of knowledge can be explored through corporate entrepreneurship and (2) that some organizational designs are more suited to explore certain knowledge types than others are.
The ageing infrastructure in ports requires regular inspection. Currently, this inspection is carried out manually by divers who sense the entire underwater infrastructure by hand. This process is cost-intensive as it involves a lot of time and human resources. To overcome these difficulties, we propose to scan the above- and underwater port structure with a multi-sensor system and, by a fully automated process, to classify the obtained point cloud into damaged and undamaged zones. We make use of simulated training data to test our approach, since not enough training data with corresponding class labels is available yet. To this aim, we build a rasterised heightfield of a point cloud of a sheet pile wall by cutting it into vertical slices. The distance from each slice to the corresponding line generates the heightfield. The latter is propagated through a convolutional neural network which detects anomalies. We use the VGG19 deep neural network model pretrained on natural images. This neural network has 19 layers and is often used for image recognition tasks. We show that our approach can achieve a fully automated, reproducible, quality-controlled damage detection which is able to analyse the whole structure, in contrast to the sample-wise manual method with divers. The mean true positive rate is 0.98, which means that we detected 98% of the damages in the simulated environment.
Side Channel Attack Resistance of the Elliptic Curve Point Multiplication using Eisenstein Integers
(2020)
Asymmetric cryptography empowers secure key exchange and digital signatures for message authentication. Nevertheless, consumer electronics and embedded systems often rely on symmetric cryptosystems because asymmetric cryptosystems are computationally intensive. Besides, implementations of cryptosystems are prone to side-channel attacks (SCA). Consequently, the secure and efficient implementation of asymmetric cryptography on resource-constrained systems is demanding. In this work, elliptic curve cryptography is considered. A new concept for an SCA resistant calculation of the elliptic curve point multiplication over Eisenstein integers is presented and an efficient arithmetic over Eisenstein integers is proposed. Representing the key by Eisenstein integer expansions is beneficial to reduce the computational complexity and the memory requirements of an SCA protected implementation.
The reliability of flash memories suffers from various error causes. Program/erase cycles, read disturb, and cell-to-cell interference impact the threshold voltages and cause bit errors during the read process. Hence, error correction is required to ensure reliable data storage. In this work, we investigate the bit-labeling of triple level cell (TLC) memories. This labeling determines the page capacities and the latency of the read process. The page capacity defines the redundancy that is required for error correction coding. Typically, Gray codes are used to encode the cell state such that the codes of adjacent states differ in a single digit. These Gray codes minimize the latency for random access reads but cannot balance the page capacities. Based on measured voltage distributions, we investigate the page capacities and propose a labeling that provides a better rate balancing than Gray labeling.
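The Gray labeling mentioned above can be illustrated with the standard reflected binary code, a toy sketch rather than the measurement-based labeling proposed in the paper:

```python
def gray(n):
    """Standard reflected binary Gray code of n."""
    return n ^ (n >> 1)

# Label the 8 threshold-voltage states of a TLC cell with a 3-bit Gray code;
# each of the three bit positions corresponds to one page of the cell.
labels = [format(gray(state), '03b') for state in range(8)]
print(labels)

# Adjacent states differ in exactly one bit, so a read error between
# neighbouring voltage states corrupts only a single page.
for s in range(7):
    differing = sum(a != b for a, b in zip(labels[s], labels[s + 1]))
    assert differing == 1
```

The single-bit-difference property is exactly what minimizes random-read latency, while the uneven distribution of bit transitions across the three positions is what unbalances the page capacities.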
Soft-input decoding of concatenated codes based on the Plotkin construction and BCH component codes
(2020)
Low latency communication requires soft-input decoding of binary block codes with small to medium block lengths.
In this work, we consider generalized multiple concatenated (GMC) codes based on the Plotkin construction. These codes are similar to Reed-Muller (RM) codes. In contrast to RM codes, BCH codes are employed as component codes. This leads to improved code parameters. Moreover, a decoding algorithm is proposed that exploits the recursive structure of the concatenation. This algorithm enables efficient soft-input decoding of binary block codes with small to medium lengths. The proposed codes and their decoding achieve significant performance gains compared with RM codes and recursive GMC decoding.
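The Plotkin construction underlying these codes can be sketched as the classic (u | u+v) combination of two component codes. This is a toy example with trivial components, not the BCH component codes used in the paper:

```python
import itertools

def plotkin(code_u, code_v):
    """(u | u+v) Plotkin construction from two binary codes of equal length."""
    return [u + [a ^ b for a, b in zip(u, v)]
            for u in code_u for v in code_v]

# Toy components of length 2: the full code and the repetition code.
code_u = [list(bits) for bits in itertools.product([0, 1], repeat=2)]
code_v = [[0, 0], [1, 1]]
code = plotkin(code_u, code_v)
print(len(code))  # 8 codewords of length 4
```

If the components have minimum distances d_u and d_v, the combined code has minimum distance min(2 d_u, d_v); recursing on this construction yields the RM-like code family, and choosing BCH component codes instead of trivial ones is what improves the code parameters.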
This paper proposes a novel transmission scheme for generalized multistream spatial modulation. The new approach uses Mannheim error-correcting codes over Gaussian or Eisenstein integers as multidimensional signal constellations. These codes enable a suboptimal decoding strategy with near-maximum-likelihood performance for transmission over the additive white Gaussian noise channel. In this contribution, this decoding algorithm is generalized to the detection for generalized multistream spatial modulation. The proposed method can outperform conventional generalized multistream spatial modulation with respect to decoding performance, detection complexity, and spectral efficiency.
Deep neural networks (DNNs) are known for their high prediction performance, especially in perceptual tasks such as object recognition or autonomous driving. Still, DNNs are prone to yield unreliable predictions when encountering completely new situations without indicating their uncertainty. Bayesian variants of DNNs (BDNNs), such as MC dropout BDNNs, do provide uncertainty measures. However, BDNNs are slow during test time because they rely on a sampling approach. Here we present a single shot MC dropout approximation that preserves the advantages of BDNNs without being slower than a DNN. Our approach is to analytically approximate for each layer in a fully connected network the expected value and the variance of the MC dropout signal. We evaluate our approach on different benchmark datasets and a simulated toy example. We demonstrate that our single shot MC dropout approximation resembles the point estimate and the uncertainty estimate of the predictive distribution that is achieved with an MC approach, while being fast enough for real-time deployments of BDNNs.
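The moment-propagation idea can be sketched for a single dense layer: under inverted dropout with drop probability p, each input is kept with probability 1 - p and scaled by 1/(1 - p), which yields a closed-form mean and variance of the layer output. This is a simplified one-layer illustration with toy values, not the authors' full network:

```python
import numpy as np

rng = np.random.default_rng(0)
p_drop = 0.2
W = rng.normal(size=(3, 5))   # weights of one dense layer (toy values)
x = rng.normal(size=5)        # layer input

# Analytic moments of the layer output under inverted MC dropout on the
# input: E[y] = W x and Var[y_i] = sum_j W_ij^2 x_j^2 * p / (1 - p).
mean_analytic = W @ x
var_analytic = (W ** 2) @ (x ** 2) * p_drop / (1 - p_drop)

# Monte Carlo reference: sample many dropout masks and push them through.
n = 200_000
masks = rng.random((n, 5)) >= p_drop
samples = (x * masks / (1 - p_drop)) @ W.T

print(np.abs(samples.mean(axis=0) - mean_analytic).max())
print(np.abs(samples.var(axis=0) - var_analytic).max())
```

The analytic moments match the sampled ones, but require a single forward pass instead of thousands, which is the source of the speed-up.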
Despite the importance of Social Life Cycle Sustainability Assessment (S-LCSA), little research has addressed its integration into Product Lifecycle Management (PLM) systems. This paper presents a structured review of relevant research and practice. Also, to address practical aspects in more detail, it focuses on challenges and potential for adoption of such an integrated system at an electronics company.
We began by reviewing literature on implementations of S-LCSA and identifying research needs. Then we investigated the status of S-LCSA within the electronics industry, both by reviewing literature and by interviewing decision makers, to identify challenges and the potential for adopting S-LCSA at an electronics company. We found low maturity of S-LCSA, in particular difficulty in quantifying social sustainability. Adoption of S-LCSA was less common among electronics industry suppliers, especially mining and smelting plants. Our results could provide a basis for conducting case studies that further clarify the issues involved in integrating S-LCSA into PLM systems.
Modeling a suitable birth density is a challenge when using Bernoulli filters such as the Labeled Multi-Bernoulli (LMB) filter. The birth density of newborn targets is unknown in most applications, but must be given as a prior to the filter. Usually the birth density stays unchanged or is designed based on the measurements from previous time steps.
In this paper, we assume that the true initial state of new objects is normally distributed. The expected value and covariance of the underlying density are unknown parameters. Using the estimated multi-object state of the LMB and the Rauch-Tung-Striebel (RTS) recursion, these parameters are recursively estimated and adapted after a target is detected.
The main contribution of this paper is an algorithm to estimate the parameters of the birth density and its integration into the LMB framework. Monte Carlo simulations are used to evaluate the detection driven adaptive birth density in two scenarios. The approach can also be applied to filters that are able to estimate trajectories.
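The RTS recursion used for the parameter adaptation can be illustrated with a minimal scalar Kalman filter followed by a backward smoothing pass. The model and measurement values below are hypothetical:

```python
# Model: x_k = x_{k-1} + w with w ~ N(0, Q), z_k = x_k + v with v ~ N(0, R).
F, Q, H, R = 1.0, 0.1, 1.0, 1.0
zs = [1.1, 0.9, 1.2, 1.0, 0.8]          # toy measurements

# Forward (Kalman filter) pass, keeping predicted and filtered moments.
m, P = 0.0, 10.0                         # diffuse prior
means, covs, pred_means, pred_covs = [], [], [], []
for z in zs:
    mp, Pp = F * m, F * P * F + Q        # predict
    pred_means.append(mp); pred_covs.append(Pp)
    K = Pp * H / (H * Pp * H + R)        # Kalman gain
    m, P = mp + K * (z - H * mp), (1 - K * H) * Pp
    means.append(m); covs.append(P)

# Backward (Rauch-Tung-Striebel) pass.
sm, sP = means[-1], covs[-1]
smoothed = [sm]
for k in range(len(zs) - 2, -1, -1):
    G = covs[k] * F / pred_covs[k + 1]   # smoother gain
    sm = means[k] + G * (sm - pred_means[k + 1])
    sP = covs[k] + G * (sP - pred_covs[k + 1]) * G
    smoothed.append(sm)
smoothed.reverse()
print(smoothed)
```

In the birth-density setting, the smoothed estimate at the detection time of a new target would be used to update the mean and covariance of the birth prior.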
Methods based exclusively on heart rate hardly allow differentiating between physical activity, stress, relaxation, and rest, which is why an additional sensor, such as an activity/movement sensor, is added for detection and classification. The response of the heart to physical activity, stress, relaxation, and inactivity can be very similar. In this study, we observe the influence of induced stress and analyze which metrics could be considered for its detection. Changes in the Root Mean Square of Successive Differences (RMSSD) provide information about physiological changes. A set of measurements collecting the RR intervals was taken, and the intervals are used as a parameter to distinguish four different stages. Parameters like skin conductivity or skin temperature were not used because the main aim is to keep the number of sensors and devices to a minimum and thereby increase wearability in the future.
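The RMSSD metric mentioned above is straightforward to compute from a sequence of RR intervals. The interval values below are hypothetical, chosen only to show that lower beat-to-beat variability yields a lower RMSSD:

```python
import math

def rmssd(rr_ms):
    """Root Mean Square of Successive Differences of RR intervals (in ms)."""
    diffs = [b - a for a, b in zip(rr_ms, rr_ms[1:])]
    return math.sqrt(sum(d * d for d in diffs) / len(diffs))

# Hypothetical RR intervals (ms): higher beat-to-beat variability at rest,
# reduced variability under induced stress.
rest = [800, 820, 790, 830, 805]
stress = [700, 702, 699, 701, 700]
print(rmssd(rest), rmssd(stress))
```

A drop in RMSSD relative to a resting baseline is one of the indicators that can be used to flag a stress stage.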
Spatial modulation is a low-complexity multiple-input/multiple-output transmission technique. The recently proposed spatial permutation modulation (SPM) extends the concept of spatial modulation. It is a coding approach where the symbols are dispersed in space and time. In the original proposal of SPM, short repetition codes and permutation codes were used to construct a space-time code. In this paper, we propose a similar coding scheme that combines permutation codes with codes over Gaussian integers. Short codes over Gaussian integers have good distance properties. Furthermore, the code alphabet can be applied directly as the signal constellation, hence no mapping is required. Simulation results demonstrate that the proposed coding approach outperforms SPM with repetition codes.
The Montgomery multiplication is an efficient method for modular arithmetic. Typically, it is used for modular arithmetic over integer rings to avoid the expensive inversion for the modulo reduction. In this work, we consider modular arithmetic over rings of Gaussian integers. Gaussian integers are a subset of the complex numbers such that the real and imaginary parts are integers. In many cases, Gaussian integer rings are isomorphic to ordinary integer rings. We demonstrate that the concept of the Montgomery multiplication can be extended to Gaussian integers. Due to the independent calculation of the real and imaginary parts, the computational complexity of the multiplication is reduced compared with ordinary integer modular arithmetic. This concept is suitable for coding applications as well as for asymmetric key cryptographic systems, such as elliptic curve cryptography or the Rivest-Shamir-Adleman system.
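For reference, the classical Montgomery reduction over ordinary integers, which the abstract describes extending to Gaussian integers, can be sketched with toy parameters (modulus 17, radix 32):

```python
def montgomery_setup(n, r):
    """n' with n * n' ≡ -1 (mod r); r is a power of two coprime to n."""
    return pow(-n, -1, r)   # modular inverse (Python 3.8+)

def mont_mul(a_bar, b_bar, n, r, n_prime):
    """Montgomery product a_bar * b_bar * r^-1 mod n, with no division by n."""
    t = a_bar * b_bar
    m = (t * n_prime) % r            # cheap: r is a power of two
    u = (t + m * n) // r             # exact division, again cheap
    return u - n if u >= n else u

n, r = 17, 32                        # toy modulus and Montgomery radix
n_prime = montgomery_setup(n, r)
a, b = 7, 11
a_bar, b_bar = (a * r) % n, (b * r) % n      # map into the Montgomery domain
c_bar = mont_mul(a_bar, b_bar, n, r, n_prime)
c = mont_mul(c_bar, 1, n, r, n_prime)        # map back to the ordinary domain
print(c, (a * b) % n)                        # both equal a*b mod n
```

All reductions involve only masks and shifts by the radix r; the paper's contribution is applying the same trick componentwise to the real and imaginary parts of Gaussian integers.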
The expansion of a given multivariate polynomial into Bernstein polynomials is considered. Matrix methods for the calculation of the Bernstein expansion of the product of two polynomials and of the Bernstein expansion of a polynomial from the expansion of one of its partial derivatives are provided which allow also a symbolic computation.
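For the univariate case on [0, 1], the Bernstein coefficients follow from the power-basis coefficients via a lower-triangular matrix with entries C(i,j)/C(n,j), which also admits exact symbolic evaluation. This is a minimal illustrative sketch, not the multivariate matrix method of the paper:

```python
from fractions import Fraction
from math import comb

def bernstein_coeffs(power_coeffs):
    """Bernstein coefficients on [0, 1] of a polynomial in the power basis.

    Uses b_i = sum_{j<=i} C(i,j)/C(n,j) * a_j, evaluated with exact
    fractions so the computation stays symbolic.
    """
    n = len(power_coeffs) - 1
    return [sum(Fraction(comb(i, j), comb(n, j)) * power_coeffs[j]
                for j in range(i + 1))
            for i in range(n + 1)]

# p(x) = 1 + 2x + 3x^2  ->  Bernstein coefficients 1, 2, 6
b = bernstein_coeffs([1, 2, 3])
print(b)

# Endpoint check: the last Bernstein coefficient equals p(1).
assert b[-1] == 1 + 2 + 3
```

The minimum and maximum Bernstein coefficients then enclose the range of the polynomial on [0, 1], which is the property these expansions are typically used for.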
Multi-dimensional spatial modulation is a multiple-input/multiple-output wireless transmission technique that uses only a few active antennas simultaneously. The computational complexity of the optimal maximum-likelihood (ML) detector at the receiver increases rapidly as more transmit antennas or larger modulation orders are employed. ML detection may be infeasible for higher bit rates. Many suboptimal detection algorithms for spatial modulation use two-stage detection schemes where the set of active antennas is detected in the first stage and the transmitted symbols in the second stage. Typically, these detection schemes use the ML strategy for the symbol detection. In this work, we consider a suboptimal detection algorithm for the second detection stage. This approach combines equalization and list decoding. We propose an algorithm for multi-dimensional signal constellations with a reduced search space in the second detection stage through set partitioning. In particular, we derive a set partitioning from the properties of Hurwitz integers. Simulation results demonstrate that the new algorithm achieves near-ML performance. It significantly reduces the complexity when compared with conventional two-stage detection schemes. Multi-dimensional constellations in combination with suboptimal detection can even outperform conventional signal constellations in combination with ML detection.
Many resource-constrained systems still rely on symmetric cryptography for verification and authentication. Asymmetric cryptographic systems provide higher security levels, but are computationally very intensive. Hence, embedded systems can benefit from hardware assistance, i.e., coprocessors optimized for the required public key operations. In this work, we propose an elliptic curve cryptographic coprocessor design for resource-constrained systems. Many such coprocessor designs consider only special (Solinas) prime fields, which enable a low-complexity modulo arithmetic. Other implementations support arbitrary prime curves using the Montgomery reduction. These implementations typically require more time for the point multiplication. We present a coprocessor design that has low area requirements and enables a trade-off between performance and flexibility. The point multiplication can be performed either using a fast arithmetic based on Solinas primes or using a slower, but flexible Montgomery modular arithmetic.