In the digital age, information technology (IT) is a strategic asset for organizations. As a result, IT costs are rising, and the cost-effective management of IT is crucial. Nevertheless, organizations still face major challenges, and former studies lack comprehensiveness and depth. The goal of this paper is to generate a deep and holistic view of current management challenges of IT costs. In 15 expert interviews, we identify 23 challenges divided into 7 categories. The main challenges are to ensure transparency on IT cost information, to demonstrate the business impact of IT, and to change the mindset regarding the value of IT; overcoming them requires attention to their interactions. Hence, this paper leads to a better understanding of the issues that IT cost management (ITCM) faces in the digital age and builds a base for future research.
Nowadays, organizations must invest strategically in information technology (IT) and choose the right digital initiatives to maximize their benefit. Nevertheless, Chief Information Officers still struggle to communicate IT costs and demonstrate the business value of IT. The goal of this paper is to support their effective communication. In focus groups, we analyzed how different stakeholders perceive IT costs and the business value of IT as the basis of communication. We identified 16 success factors to establish effective communication. Hence, this paper enables a better understanding of the perception and the operationalization of effective communication.
Nowadays, information technology (IT) is a strategic asset for organizations. As a result, IT costs are rising, and there is a need for transparency about their root causes. Cost drivers as an instrument of IT cost management enable better transparency and understanding of costs. However, there is a lack of IT cost driver research with a focus on the strategic position of IT within organizations. The goal of this paper is to develop a comprehensive overview of strategic drivers of IT costs. The Delphi study leads to the identification and validation of 17 strategic drivers. Hence, this paper builds a base for cost driver analysis and contributes to a better understanding of the causes of costs. It facilitates future research regarding cost behavior and the business value of IT. Additionally, practitioners gain awareness of levers to influence IT costs and of the consequences of managerial decisions on their IT spend.
The goal of this paper is to show how a bed system with an embedded sensor system is able to analyze a person's movement and breathing and to recognize the positions in which the subject lies on the bed during the night, without any additional physical contact. The measurements are performed with sensors placed between the mattress and the frame. An Intel Edison board was used as an endpoint that served as a communication node from the mesh network to an external service. Two nodes and the Intel Edison board are attached to the bottom of the bed frame and connected to the sensors.
Requirements Engineering in Business Analytics for Innovation and Product Lifecycle Management
(2014)
Considering Requirements Engineering (RE) in business analytics, involving market-oriented management, computer science, and statistics, may be valuable for managing innovation in Product Lifecycle Management (PLM). RE and business analytics can help maximize the value of corporate product information throughout the value chain, starting with innovation management. Innovation and PLM must address 1) big data, 2) development of well-defined business goals and principles, 3) cost/benefit analysis, 4) continuous change management, and 5) statistics and reporting. This paper is a positioning note that addresses some business case considerations for analytics projects involving PLM data, patents, and innovations. We describe a number of research challenges in RE that arise when large volumes of PLM data are to be turned into a successful market-oriented innovation management strategy. We provide a draft of how to address these research challenges.
Deep Learning-based EEG Detection of Mental Alertness States from Drivers under Ethical Aspects
(2022)
One of the most critical factors for a successful road trip is a high degree of alertness while driving. Even a split second of inattention or sleepiness at a crucial moment can make the difference between life and death. Several prestigious car manufacturers are currently pursuing the aim of automated drowsiness identification to resolve this problem. The path between neuroscientific research in connection with artificial intelligence and the preservation of the dignity and inviolability of the human individual is very narrow. The key contribution of this work is a system of data analysis for EEGs during a driving session, which draws on previous studies analyzing heart rate (ECG), brain waves (EEG), and eye function (EOG). The gathered data is treated as sensitively as possible, taking ethical regulations into consideration. Obtaining evaluable signs of evolving exhaustion involves techniques that extract sleep-stage frequencies; the correlated interferences in the signal are problematic here. This research focuses on a processing chain for EEG band splitting that involves band-pass filtering, principal component analysis (PCA), independent component analysis (ICA) with automatic artefact removal, and the fast Fourier transform (FFT). The classification is based on a step-by-step adaptive deep learning analysis that detects theta rhythms as a drowsiness predictor in the pre-processed data. It was possible to obtain an offline detection rate of 89% and an online detection rate of 73%. The method is linked to the simulated driving scenario for which it was developed. This leaves space for further optimization of laboratory methods and data collection during wakefulness-dependent operations.
This work proposes a construction for low-density parity-check (LDPC) codes over finite Gaussian integer fields. Furthermore, a new channel model for codes over Gaussian integers is introduced and its channel capacity is derived. This channel can be considered as a first order approximation of the additive white Gaussian noise channel with hard decision detection where only errors to nearest neighbors in the signal constellation are considered. For this channel, the proposed LDPC codes can be decoded with a simple non-probabilistic iterative decoding algorithm similar to Gallager's decoding algorithm A.
This paper proposes a novel transmission scheme for generalized multistream spatial modulation. This new approach uses one-Mannheim-error-correcting codes over Gaussian or Eisenstein integers as multidimensional signal constellations. These codes enable a suboptimal decoding strategy with near-maximum-likelihood performance for transmission over the additive white Gaussian noise channel. In this contribution, this decoding algorithm is generalized to the detection of generalized multistream spatial modulation. The proposed method can outperform conventional generalized multistream spatial modulation with respect to decoding performance, detection complexity, and spectral efficiency.
Spatial modulation is a low-complexity multiple-input/multiple-output transmission technique. The recently proposed spatial permutation modulation (SPM) extends the concept of spatial modulation. It is a coding approach where the symbols are dispersed in space and time. In the original proposal of SPM, short repetition codes and permutation codes were used to construct a space-time code. In this paper, we propose a similar coding scheme that combines permutation codes with codes over Gaussian integers. Short codes over Gaussian integers have good distance properties. Furthermore, the code alphabet can directly be applied as signal constellation, hence no mapping is required. Simulation results demonstrate that the proposed coding approach outperforms SPM with repetition codes.
Multi-dimensional spatial modulation is a multiple-input/multiple-output wireless transmission technique that uses only a few active antennas simultaneously. The computational complexity of the optimal maximum-likelihood (ML) detector at the receiver increases rapidly as more transmit antennas or larger modulation orders are employed. ML detection may be infeasible for higher bit rates. Many suboptimal detection algorithms for spatial modulation use two-stage detection schemes where the set of active antennas is detected in the first stage and the transmitted symbols in the second stage. Typically, these detection schemes use the ML strategy for the symbol detection. In this work, we consider a suboptimal detection algorithm for the second detection stage. This approach combines equalization and list decoding. We propose an algorithm for multi-dimensional signal constellations with a reduced search space in the second detection stage through set partitioning. In particular, we derive a set partitioning from the properties of Hurwitz integers. Simulation results demonstrate that the new algorithm achieves near-ML performance. It significantly reduces the complexity when compared with conventional two-stage detection schemes. Multi-dimensional constellations in combination with suboptimal detection can even outperform conventional signal constellations in combination with ML detection.
This work proposes a suboptimal detection algorithm for generalized multistream spatial modulation. Many suboptimal detection algorithms for spatial modulation use two-stage detection schemes where the set of active antennas is detected in the first stage and the transmitted symbols in the second stage. For multistream spatial modulation with large signal constellations the second detection step typically dominates the detection complexity. With the proposed detection scheme, the modified Gaussian approximation method is used for detecting the antenna pattern. In order to reduce the complexity for detecting the signal points, we propose a combined equalization and list decoding approach. Simulation results demonstrate that the new algorithm achieves near-maximum-likelihood performance with small list sizes. It significantly reduces the complexity when compared with conventional two-stage detection schemes.
Klimawandel
(2017)
CO2 compensation measures, in particular the compensation of flights, are becoming more and more popular. Carbon offsetting is defined as donation-financed measures that compensate for previously emitted greenhouse gases by saving emissions elsewhere through climate protection projects.
CO2 abatement costs are often low in developing countries. This is why most offset projects are implemented there. Nevertheless, this does not mean that the holiday resort and the project country are in any way related to each other.
By linking carbon offset projects with the destination country, the tourist is able to get an impression of the co-financed project. If such projects are realized in cooperation with the hotel, the hotel operator obtains a new tourist attraction and can demonstrate its commitment to climate protection in a PR-effective way.
Sleep is an important part of our life that significantly influences our health and well-being. Monitoring sleep can provide data on the basis of which sleep quality could be improved. This paper presents a system for heart rate detection during sleep. The data is collected from sensors underneath the test subjects. Since the data contains noise, it needs to be filtered. Due to the low strength of the signals, they need to be amplified after filtering. At some points of the signal, particular heartbeats may not be tracked by the sensors, due to the failure of a sensor or other reasons, which has to be taken into account. The heart rate is detected in intervals of 15 s. A tool is implemented that detects the heart rate and visualizes it. The preprocessing of the data is performed with several filters: a highpass filter, a band-reject filter, a lowpass filter, and a motion detector. After preprocessing, the quality of the signal is significantly increased, and detection is possible.
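As a rough illustration of the processing chain described above (filtering, then beat counting in 15 s intervals), the sketch below implements a simplified variant with moving-average filters and threshold-based beat detection. The filter design, threshold, and refractory period are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def moving_average(x, w):
    # boxcar FIR filter of width w samples
    return np.convolve(x, np.ones(w) / w, mode="same")

def preprocess(signal, fs):
    # crude highpass: subtract a 2 s moving average to remove baseline drift
    x = signal - moving_average(signal, int(2 * fs))
    # crude lowpass: 50 ms moving average to suppress high-frequency noise
    return moving_average(x, max(1, int(0.05 * fs)))

def heart_rates(signal, fs, interval_s=15):
    # count upward threshold crossings per interval and convert to bpm
    x = preprocess(signal, fs)
    thr = 0.5 * np.max(x)
    beats = np.where((x[:-1] < thr) & (x[1:] >= thr))[0]
    # refractory period: drop crossings closer than 250 ms to the last beat
    if beats.size:
        keep = np.insert(np.diff(beats) > 0.25 * fs, 0, True)
        beats = beats[keep]
    step = int(interval_s * fs)
    return [np.sum((beats >= s) & (beats < s + step)) * 60.0 / interval_s
            for s in range(0, len(x), step)]
```

On a synthetic pulse train with one beat per second, the function reports roughly 60 bpm per 15 s interval; a real ballistocardiographic signal would additionally need the band-reject filter and motion detector mentioned in the abstract.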
This paper examines the corporate organisational aspects of the implementation of Industry 4.0. Industry 4.0 builds on new technologies and appears as a disruptive innovation to manufacturing firms. Although we do have a good understanding of the technical components, the implementation of the management and organisational aspects of Industry 4.0 is under-researched. It is challenging to find qualitative empirical evidence which provides comprehensive insights about real implementation cases. Based on a case study in a German high value manufacturing firm, we explore the corporate organisation and implementation of Industry 4.0. By using the framework of Complex Adaptive Systems (CAS), we have identified three key factors which facilitate the implementation of Industry 4.0, namely 1.) organisational structure changes such as the foundation of a central department for digital transformation, 2.) the appointment of a Chief Digital Officer as a personnel change, and 3.) corporate opening up towards cooperating with partners as a cultural change. We have furthermore found that Lean Management is an important enabler that ensures readiness for the adoption of Industry 4.0.
Business models (BM) are the logic of a firm on how to create, deliver, and capture value. Business model innovation (BMI) is essential for organisations to keep a competitive advantage. However, the existence of barriers to BMI can impact the success of a corporate strategic alignment. Previous research has examined the internal barriers to business model innovation; however, there is a lack of research on the external barriers that could potentially inhibit it. Drawing on an in-depth case study in a medium-sized German engineering company in the equestrian sports industry, we explore both internal and external barriers to business model innovation. BMI is defined as any change in one or more of the nine building blocks of the Business Model Canvas: customer segments, value propositions, channels, customer relationships, revenue streams, key resources, key activities, key partners, and cost structure [1]. Our results show that barriers to business model innovation can be overcome by the deployment of organisational learning mechanisms and the development of an open network capability.
The cornerstone of cognitive systems is environment awareness, which enables agile and adaptive use of channel resources. Whitespace prediction based on learning the statistics of the wireless traffic has proven to be a powerful tool to achieve such awareness. In this paper, we propose a novel Hidden Markov Model (HMM) based spectrum learning and prediction approach which accurately estimates the exact length of the whitespace in WiFi channels within the shared industrial, scientific and medical (ISM) bands. We show that extending the number of hidden states and formulating the prediction problem as a maximum likelihood (ML) classification leads to a substantial increase in the prediction horizon compared to classical approaches that predict the immediate (short-term) future. We verify the proposed algorithm through simulations which utilize a model for WiFi traffic based on extensive measurement campaigns.
Realistic traffic modeling plays a key role in efficient Dynamic Spectrum Access (DSA), which is considered an enabler for the deployment of wireless technologies in critical industrial automation applications (IAA). The majority of models of spectrum usage are not suitable for this specific use case, as they are based on measurement campaigns conducted in urban or controlled laboratory environments. In this work, we present a time-domain traffic model for industrial communication in the 2.4 GHz industrial, scientific, medical (ISM) band based on measurements in an industrial automotive production site. As DSA is usually implemented on Software Defined Radios (SDR), our measurement campaign is based on SDR platforms rather than sophisticated spectrum analyzers. We show through the estimation of the Hurst parameter that industrial wireless traffic possesses inherent self-similarity that could be exploited for efficient DSA. We also show that wireless traffic can be modeled as a semi-Markov model with channel on and off durations that are log-normally and Pareto distributed, respectively. We finally estimate the parameters of the derived models using maximum likelihood estimation.
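The model fitting described above can be sketched numerically: generate synthetic on/off holding times from the assumed distributions and recover their parameters via maximum likelihood. The parameter values below are arbitrary illustrative choices, not the measured ones from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# synthetic semi-Markov channel: busy (on) and idle (off) holding times
on = rng.lognormal(mean=-1.0, sigma=0.5, size=5000)   # on ~ log-normal
off = (rng.pareto(2.5, size=5000) + 1.0) * 0.2        # off ~ Pareto(shape 2.5, scale 0.2)

# ML estimates for the log-normal on-durations: moments of log(x)
mu_hat = np.mean(np.log(on))
sigma_hat = np.std(np.log(on))

# ML estimates for the Pareto off-durations
xm_hat = off.min()                                    # scale: sample minimum
alpha_hat = off.size / np.sum(np.log(off / xm_hat))   # shape MLE
```

With 5000 samples, the estimates land close to the generating parameters, which is the sanity check one would run before fitting real spectrum measurements.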
Cognitive radio (CR) is a key enabler of wireless communication in industrial applications, especially those with strict quality-of-service (QoS) requirements. The cornerstone of CR is spectrum occupancy prediction, which enables agile and proactive spectrum access and efficient utilization of spectral resources. Hidden Markov Models (HMM) provide powerful and flexible tools for statistical spectrum prediction. In this paper, we introduce an HMM-based spectrum prediction algorithm for industrial applications that accurately predicts multiple slots into the future. Traditional HMM prediction approaches use two hidden states, enabling the prediction of only one step ahead. This one step is most often not enough due to internal hardware delays that render it outdated. We show in this work that extending the number of hidden states and formulating the prediction problem as a maximum likelihood (ML) classification approach enables a prediction span of multiple slots into the future even with fine spectrum sensing resolution. We verify the suitability of our approach to industrial wireless through extensive simulations that utilize a realistic measurement-based traffic model specifically tailored for industrial automotive settings.
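A stripped-down, fully observed analogue of the multi-slot ML prediction can be sketched with a two-state Markov chain over idle/busy slots: the ML class k slots ahead is the argmax of the k-step transition distribution. The transition matrix below is a made-up example; the paper's approach uses HMMs with an extended hidden state space rather than this observable chain.

```python
import numpy as np

# assumed slot-to-slot transition probabilities, states: 0 = idle, 1 = busy
P = np.array([[0.9, 0.1],
              [0.3, 0.7]])

def predict_ml(state, k, P):
    # distribution over states k slots ahead, given the current state,
    # via the k-th power of the transition matrix; ML class = argmax
    dist = np.linalg.matrix_power(P, k)[state]
    return int(np.argmax(dist)), dist
```

One slot ahead of a busy slot the ML prediction is still busy, but many slots ahead the distribution converges to the chain's stationary behavior (mostly idle for this matrix), which illustrates why the useful prediction horizon is bounded.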
In this work, we investigate a hybrid decoding approach that combines algebraic hard-input decoding of binary block codes with soft-input decoding. In particular, an acceptance criterion is proposed which determines the reliability of a candidate codeword. For many received codewords the stopping criterion indicates that the hard-decoding result is sufficiently reliable, and the costly soft-input decoding can be omitted. The proposed acceptance criterion significantly reduces the decoding complexity. For simulations we combine the algebraic hard-input decoding with ordered statistics decoding, which enables near maximum likelihood soft-input decoding for codes of small to medium block lengths.
The Montgomery multiplication is an efficient method for modular arithmetic. Typically, it is used for modular arithmetic over integer rings to avoid the expensive inversion for the modulo reduction. In this work, we consider modular arithmetic over rings of Gaussian integers. Gaussian integers are a subset of the complex numbers such that the real and imaginary parts are integers. In many cases, Gaussian integer rings are isomorphic to ordinary integer rings. We demonstrate that the concept of the Montgomery multiplication can be extended to Gaussian integers. Due to the independent calculation of the real and imaginary parts, the computational complexity of the multiplication is reduced compared with ordinary integer modular arithmetic. This concept is suitable for coding applications as well as for asymmetric-key cryptographic systems, such as elliptic curve cryptography or the Rivest-Shamir-Adleman system.
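For reference, the classical Montgomery reduction over ordinary integers that this work extends can be sketched as follows; the Gaussian integer variant would apply such a reduction independently to the real and imaginary parts. The moduli and operands in the demo are arbitrary small values.

```python
def mont_setup(n, r_bits):
    # n must be odd and smaller than R = 2^r_bits
    R = 1 << r_bits
    n_prime = (-pow(n, -1, R)) % R        # n * n_prime ≡ -1 (mod R)
    return R, n_prime

def redc(T, n, r_bits, n_prime):
    # Montgomery reduction: returns T * R^{-1} mod n using only
    # shifts, masks, multiplications, and one conditional subtraction
    mask = (1 << r_bits) - 1
    m = ((T & mask) * n_prime) & mask
    t = (T + m * n) >> r_bits
    return t - n if t >= n else t

def mont_mul(a_bar, b_bar, n, r_bits, n_prime):
    # product of two values in the Montgomery domain stays in the domain
    return redc(a_bar * b_bar, n, r_bits, n_prime)

# demo: compute 5 * 7 mod 97 in the Montgomery domain
n, r_bits = 97, 8
R, n_prime = mont_setup(n, r_bits)
a_bar, b_bar = (5 * R) % n, (7 * R) % n   # enter the Montgomery domain
res = redc(mont_mul(a_bar, b_bar, n, r_bits, n_prime), n, r_bits, n_prime)
```

The point of the construction is that no division by n is ever performed, which maps well to hardware; the same property is what the independent real/imaginary computation preserves.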
The Lempel-Ziv-Welch (LZW) algorithm is an important dictionary-based data compression approach that is used in many communication and storage systems. The parallel dictionary LZW (PDLZW) algorithm speeds up the LZW encoding by using multiple dictionaries. The PDLZW algorithm applies different dictionaries to store strings of different lengths, where each dictionary stores only strings of the same length. This simplifies the parallel search in the dictionaries for hardware implementations. The compression gain of the PDLZW depends on the partitioning of the address space, i.e. on the sizes of the parallel dictionaries. However, there is no universal partitioning that is optimal for all data sources. This work proposes an address space partitioning technique that optimizes the compression rate of the PDLZW using a Markov model for the data. Numerical results for address spaces with 512, 1024, and 2048 entries demonstrate that the proposed partitioning improves the performance of the PDLZW compared with the original proposal.
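As context for the PDLZW variant, the sequential LZW encoder it parallelizes can be sketched as below; PDLZW replaces the single growing dictionary with several fixed-size dictionaries, one per string length, searched in parallel. This sketch is the textbook algorithm, not the paper's hardware design or its partitioning optimization.

```python
def lzw_encode(data: bytes):
    # dictionary is initialized with all single-byte strings
    dictionary = {bytes([i]): i for i in range(256)}
    w, out = b"", []
    for b in data:
        wc = w + bytes([b])
        if wc in dictionary:
            w = wc                      # keep extending the current match
        else:
            out.append(dictionary[w])   # emit code for the longest match
            dictionary[wc] = len(dictionary)
            w = bytes([b])
    if w:
        out.append(dictionary[w])
    return out
```

In the PDLZW setting, the address-space partitioning decides how many entries each length-specific dictionary may hold, which is exactly the quantity the Markov-model optimization in the paper tunes.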
This contribution presents the hardware implementation of a data compression method on an FPGA. The method was developed specifically for the compression of short data blocks in flash memories. The source data are compressed with an encoder and losslessly decompressed with a decoder. In flash memories, the reduced data rate shortens the transfer time for reading and writing. Compressing the payload is also useful in order to insert additional redundancy for error protection without increasing the overall storage requirement.
Today, many resource-constrained systems, such as embedded systems, still rely on symmetric cryptography for authentication and digital signatures. Asymmetric cryptography provides a higher security level, but software implementations of public-key algorithms on small embedded systems are extremely slow. Hence, such embedded systems require hardware assistance, i.e., crypto coprocessors optimized for public-key operations. Many such coprocessor designs aim at high computational performance. In this work, an area-efficient elliptic curve cryptography (ECC) coprocessor is presented for applications in small embedded systems where high-performance coprocessors are too costly. We propose a simple control unit with a small instruction set that supports different ECC point multiplication (PM) algorithms. The control unit reduces the logic and the number of registers compared with other implementations of ECC point multiplication.
Side Channel Attack Resistance of the Elliptic Curve Point Multiplication using Gaussian Integers
(2020)
Elliptic curve cryptography is a cornerstone of embedded security. However, hardware implementations of the elliptic curve point multiplication are prone to side channel attacks. In this work, we present a new key expansion algorithm which improves the resistance against timing and simple power analysis attacks. Furthermore, we consider a new concept for calculating the point multiplication, where the points of the curve are represented as Gaussian integers. Gaussian integers are a subset of the complex numbers such that the real and imaginary parts are integers. Since Gaussian integer fields are isomorphic to prime fields, this concept is suitable for many elliptic curves. Representing the key by a Gaussian integer expansion is beneficial for reducing the computational complexity and the memory requirements of a secure hardware implementation.
Digital signatures for verifying the integrity of data, for example of software updates, are becoming increasingly important. In the field of embedded systems, symmetric encryption methods are currently still predominantly used to compute an authentication code because of their low complexity. Asymmetric cryptosystems are computationally more expensive but offer more security, because the key used for authentication does not have to be kept secret. Asymmetric signature schemes are typically computed in two stages: the key is not applied directly to the data but to their hash value, which is computed beforehand with a hash function. To use these schemes in embedded systems, the hash function must provide a sufficiently high data throughput. This contribution presents an efficient hardware implementation of the SHA-256 hash function.
A significant proportion of road traffic accidents are due to inattentiveness or fatigue at the wheel. Approaches to monitoring the driver's condition range from eye tracking and driving behavior analysis to yawn and blink detection and ECG measurement. This work describes the development of a mobile system for the measurement and processing of ECG data. The aim of the signal processing is to quantify the driver's fatigue via heart rate variability (HRV). The work covers the hardware and software design of the sensor. First, the development of low-noise electronics including AD conversion is described. Then, the software signal processing with QRS complex detection and a plotting front end is explained. The resulting sensor is compact, low-cost, and provides a good signal for HRV extraction.
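The HRV quantification mentioned above is typically computed from the RR intervals delivered by the QRS detector. Two standard time-domain metrics can be sketched as follows; these are the common SDNN/RMSSD definitions, not necessarily the exact metrics used in this work.

```python
import numpy as np

def hrv_metrics(rr_ms):
    # rr_ms: successive RR intervals in milliseconds from QRS detection
    rr = np.asarray(rr_ms, dtype=float)
    sdnn = rr.std(ddof=1)                        # overall variability
    rmssd = np.sqrt(np.mean(np.diff(rr) ** 2))   # beat-to-beat variability
    return sdnn, rmssd
```

Decreasing HRV over the course of a drive is one of the commonly used indicators of growing fatigue, which is why a clean QRS detection front end matters.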
The occurrence of wheelset torsional vibrations cannot be completely prevented because of the increasing exploitation of adhesion in the wheel-rail contact. While analytical investigations show that exceeding a certain damping value completely prevents the occurrence of wheelset torsional vibrations, simulations show that the vibration amplitude of the dynamic torsional moment is limited at higher sliding velocities as a result of the decreasing magnitude of the adhesion gradient and a certain amount of drivetrain damping. In contrast to the analytical formula (17), a maximum dynamic torsional moment can thus be calculated.
A holistic view of the overall system, consisting of the adhesion between wheel and rail, the wheelset including its bearings, and the drivetrain, allows various factors that influence wheelset torsional vibrations to be investigated. Because of the number of masses and degrees of freedom, the associated computational models can no longer be solved analytically. Using multibody dynamics software, the influencing factors can be identified, and their effect on the system's tendency to torsional vibration and on the dynamic torsional moment can be quantified.
Wheelset torsional vibrations cause torsional amplitudes that can exceed the quasi-static torsional moment many times over. This leads to enormous loads on the wheelset axle, press fits, and wheelset gearboxes. By evaluating and analyzing numerous measurement runs of different vehicles, several characteristics regarding the distribution of the torsional load within a motor bogie were identified. Since the measurement results strongly depend on the individual traction control and the prevailing adhesion conditions, vehicles with different drive configurations can only be compared to a limited extent. While the occurrence of wheelset torsional vibrations is to be avoided in the regular operation of rail vehicles, during certification runs in particular, driving conditions should be sought in which maximum torsional vibrations can be generated with active roll-oscillation ("Rollieren") and wheel-spin protection software. Based on the analyses carried out and the experience gained, the following conditions have proven effective for test runs: (1) establishing adhesion conditions that are as identical as possible for all wheelsets electrically coupled in a group drive, by jointly watering the rails in both directions of travel; (2) ensuring maximum utilization of tractive effort through high driving resistances, a suitable route with constant gradient, or the use of a braking locomotive.
Under certain contact conditions between wheel and rail, self-excited vibrations can be induced that lead to anti-phase rotational movements of the wheel discs and high torsional moments in the wheelset axle. Until now, determining the maximum torsional moment has required elaborate test runs, since no methods were known that allow a conservative calculation of the torsional moment [1]. In recent years, the following three calculation methods have been studied in depth to calculate the maximum dynamic torsional moment:
- Simulations of complex multibody systems (MBS)
- Systems of differential equations solved numerically
- Analytical calculation by reduction to a minimal model
This publication presents these calculation methods in more detail and, by comparing the calculated and measured results for each, highlights their capabilities as well as their limitations.
Algorithms for calculating the string edit distance are used, e.g., in information retrieval and document analysis systems or for the evaluation of text recognizers. Text recognition based on CTC-trained LSTM networks includes a decoding step to produce a string, possibly using a language model, and evaluation using the string edit distance. The decoded string can further be used as a query for database search, e.g., in document retrieval. We propose to closely integrate dictionary search with text recognition to train both combined in a continuous fashion. This work shows that LSTM networks are capable of calculating the string edit distance while allowing for an exchangeable dictionary that separates the learned algorithm from the data. This could be a step towards integrating text recognition and dictionary search in one deep network.
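The string edit distance that the network is trained to reproduce is conventionally computed with the Wagner-Fischer dynamic program, sketched here for reference:

```python
def edit_distance(a: str, b: str) -> int:
    # Wagner-Fischer dynamic program, keeping only the previous row
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                  # deletion
                           cur[j - 1] + 1,               # insertion
                           prev[j - 1] + (ca != cb)))    # substitution/match
        prev = cur
    return prev[-1]
```

This O(|a|·|b|) recurrence is the baseline against which a learned, differentiable approximation would be evaluated.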
Digital cameras are subject to physical, electronic, and optical effects that result in errors and noise in the image. These effects include, for example, a temperature-dependent dark current, read noise, optical vignetting, and different sensitivities of individual pixels. The task of a radiometric calibration is to reduce these errors in the image and thus improve the quality of the overall application. In this work, we present an algorithm for radiometric calibration based on Gaussian processes. Gaussian processes are a regression method widely used in machine learning that is particularly useful in our context: Gaussian process regression is used to learn a temperature- and exposure-time-dependent mapping from observed gray-scale values to true light intensities for each pixel. Regression models based on the characteristics of single pixels suffer from excessively high runtime and thus are unsuitable for many practical applications. In contrast, a single regression model for an entire image with high spatial resolution leads to a low-quality radiometric calibration, which also limits its practical use. The proposed algorithm is predicated on a partitioning of the pixels such that each pixel partition can be represented by one single regression model without quality loss. Partitioning is done by extracting features from the characteristic of each pixel and using them for lexicographic sorting. Splitting the sorted data into partitions of equal size yields the final partitions, each of which is represented by its partition center. An individual Gaussian process regression and model selection is done for each partition. Calibration is performed by interpolating the gray-scale value of each pixel with the regression model of the respective partition. The experimental comparison of the proposed approach to classical flat field calibration shows a consistently higher reconstruction quality for the same overall number of calibration frames.
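The partitioning step described above (feature extraction, lexicographic sorting, equal-size splits, partition centers) can be sketched with NumPy; the feature values and partition count below are placeholders, and the per-partition Gaussian process regression itself is omitted.

```python
import numpy as np

def partition_pixels(features, n_parts):
    # features: (n_pixels, n_features) array of per-pixel characteristics
    # lexicographic sort with the first feature as the primary key
    order = np.lexsort(features.T[::-1])
    # split the sorted pixel indices into (nearly) equal-size partitions
    parts = np.array_split(order, n_parts)
    # each partition is represented by its center in feature space
    centers = [features[p].mean(axis=0) for p in parts]
    return parts, centers
```

Each resulting index set would then be fitted with one regression model, so the runtime scales with the number of partitions rather than the number of pixels.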
Visualization-Assisted Development of Deep Learning Models in Offline Handwriting Recognition
(2018)
Deep learning is a field of machine learning that has been the focus of active research and successful applications in recent years. Offline handwriting recognition is one of the research fields and applications where deep neural networks have shown high accuracy. Deep learning models and their training pipeline involve a large number of hyper-parameters in data selection, transformation, network topology, and the training process that are sometimes interdependent. This increases the overall difficulty and time necessary for building and training a model for a specific data set and task at hand. This work proposes a novel visualization-assisted workflow that guides the model developer through the hyper-parameter search in order to identify relevant parameters and modify them in a meaningful way. This decreases the overall time necessary for building and training a model. The contributions of this work are a workflow for hyper-parameter search in offline handwriting recognition and a heat-map-based visualization technique for deep neural networks in multi-line offline handwriting recognition. This work applies to offline handwriting recognition, but the general workflow can possibly be adapted to other tasks as well.
Increasing robustness of handwriting recognition using character N-Gram decoding on large lexica
(2016)
Offline handwriting recognition systems often include a decoding step, that is, retrieving the most likely character sequence from the underlying machine learning algorithm. Decoding is sensitive to ranges of weakly predicted characters, caused e.g. by obstructions in the scanned document. We present a new algorithm for robust decoding of handwriting recognizer outputs using character n-grams. Multidimensional hierarchical subsampling artificial neural networks with Long Short-Term Memory cells have been successfully applied to offline handwriting recognition. Output activations from such networks, trained with Connectionist Temporal Classification, can be decoded with several different algorithms in order to retrieve the most likely literal string they represent. We present a new algorithm for decoding the network output while restricting the possible strings to a large lexicon. The index used for this work is an n-gram index, with tri-grams used for the experimental comparisons. N-grams are extracted from the network output using a backtracking algorithm, and each n-gram is assigned a mean probability. The decoding result is obtained by intersecting the n-gram hit lists while calculating the total probability for each matched lexicon entry. We conclude with an experimental comparison of different decoding algorithms on a large lexicon.
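The tri-gram index and hit-list intersection described above can be sketched as follows; the additive scoring is a simplified stand-in for the mean probabilities that the backtracking algorithm assigns to each extracted n-gram:

```python
from collections import defaultdict

def build_trigram_index(lexicon):
    """Map each character tri-gram to the lexicon entries containing it."""
    index = defaultdict(set)
    for word in lexicon:
        for i in range(len(word) - 2):
            index[word[i:i + 3]].add(word)
    return index

def decode(index, scored_trigrams):
    """Intersect the hit lists of recognized tri-grams and rank the
    surviving lexicon entries by accumulated score (illustrative)."""
    candidates = None
    scores = defaultdict(float)
    for trigram, prob in scored_trigrams:
        hits = index.get(trigram, set())
        candidates = hits if candidates is None else candidates & hits
        for word in hits:
            scores[word] += prob
    return sorted(candidates or [], key=lambda w: -scores[w])
```

Intersection keeps only entries consistent with every recognized tri-gram, which is what makes the decoding robust to weakly predicted ranges elsewhere in the line.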
Offline handwriting recognition systems often use LSTM networks trained with line or word images. Multi-line text makes it necessary to use segmentation to explicitly obtain these images. Skewed, curved, overlapping, or incorrectly written text, as well as noise, can lead to errors during segmentation of multi-line text and reduce the overall recognition capacity of the system. The last year has seen the introduction of deep learning methods capable of segmentation-free recognition of whole paragraphs. Our method uses Conditional Random Fields to represent text and align it with the network output to calculate a loss function for training. Experiments are promising and show that the technique is capable of training an LSTM multi-line text recognition system.
Multi-Dimensional Connectionist Classification is a method for weakly supervised training of Deep Neural Networks for segmentation-free multi-line offline handwriting recognition. MDCC applies Conditional Random Fields as an alignment function for this task. We discuss the structure and patterns of handwritten text that can be used for building a CRF. Since CRFs are cyclic graphical models, we have to resort to approximate inference when calculating the alignment of multi-line text during training, here in the form of Loopy Belief Propagation. This work concludes with experimental results for transcribing small multi-line samples from the IAM Offline Handwriting DB which show that MDCC is a competitive methodology.
Recent years have seen the proposal of several different gradient-based optimization methods for training artificial neural networks. Traditional methods include steepest descent with momentum; newer methods are based on per-parameter learning rates, and some approximate Newton-step updates. This work contains the results of several experiments comparing different optimization methods. The experiments were targeted at offline handwriting recognition using hierarchical subsampling networks with recurrent LSTM layers. We present an overview of the optimization methods used, the results that were achieved, and a discussion of why the methods lead to different results.
Nowadays, inexpensive memory space promotes an accelerating growth of stored image data. To exploit the data using supervised Machine or Deep Learning, it needs to be labeled. Manually labeling the vast amount of data is time-consuming and expensive, especially if human experts with specific domain knowledge are indispensable. Active learning addresses this shortcoming by querying the user for the labels of the most informative images first. One way to obtain the ‘informativeness’ is to use uncertainty sampling as a query strategy, where the system queries those images it is most uncertain about how to classify. In this paper, we present a web-based active learning framework that helps to accelerate the labeling process. After manually labeling some images, the user gets recommendations of further candidates that could potentially be labeled equally (bulk image folder shift). We aim to explore the most efficient ‘uncertainty’ measure to improve the quality of the recommendations such that all images are sorted with a minimum number of user interactions (clicks). We conducted experiments using a manually labeled reference dataset to evaluate different combinations of classifiers and uncertainty measures. The results clearly show the effectiveness of uncertainty sampling with bulk image shift recommendations (our novel method), which can reduce the number of required clicks to only around 20% compared to manual labeling.
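A common instantiation of uncertainty sampling is ranking images by the entropy of the classifier's predicted class distribution; the sketch below shows one such measure among those that could be compared, not necessarily the best-performing one in the paper:

```python
import numpy as np

def entropy_uncertainty(probs):
    """Shannon entropy of each row of class probabilities;
    higher entropy means the classifier is less certain."""
    p = np.clip(np.asarray(probs, dtype=float), 1e-12, 1.0)
    return -(p * np.log(p)).sum(axis=1)

def query_order(probs):
    """Indices of samples sorted most-uncertain first, i.e. the
    order in which an active learner would query the user."""
    return np.argsort(-entropy_uncertainty(probs))
```

Alternative measures such as least-confidence or margin sampling only change the scoring function, not the query loop.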
Cardiovascular diseases are directly or indirectly responsible for up to 38.5% of all deaths in Germany and thus represent the most frequent cause of death. At present, heart diseases are mainly discovered by chance during routine visits to the doctor or when acute symptoms occur. However, there is no practical method to proactively detect diseases or abnormalities of the heart in the daily environment and to take preventive measures for the person concerned. Long-term ECG devices, as currently used by physicians, are simply too expensive, impractical, and not widely available for everyday use. This work aims to develop an ECG device suitable for everyday use that can be worn directly on the body. For this purpose, an already existing hardware platform will be analyzed, and the corresponding potential for improvement will be identified. A precise picture of the existing data quality is obtained by metrological examination, and corresponding requirements are defined. Based on these identified optimization potentials, a new ECG device is developed. The revised ECG device is characterized by a high integration density and combines all components directly on one board except the battery and the ECG electrodes. The compact design allows the device to be attached directly to the chest. An integrated microcontroller allows digital signal processing without the need for an additional computer. Central features of the evaluation are a peak detection for detecting R-peaks and a calculation of the current heart rate based on the RR interval. To ensure the validity of the detected R-peaks, a model of the anatomical conditions is used. Thus, unrealistic RR-intervals can be excluded. The wireless interface allows continuous transmission of the calculated heart rate. Following the development of hardware and software, the results are verified, and appropriate conclusions about the data quality are drawn. 
As a result, a very compact and wearable ECG device with different wireless technologies, data storage, and evaluation of RR intervals was developed. Some tests yielded runtimes of up to 24 hours with wireless LAN activated and streaming.
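The plausibility check on RR intervals described above can be sketched as follows; the interval bounds are illustrative assumptions standing in for the anatomical model, not the device's actual parameters:

```python
def heart_rate_from_rr(peak_times_s, rr_min=0.3, rr_max=2.0):
    """Derive heart rate (bpm) from successive R-peak times, discarding
    physiologically implausible RR intervals (bounds are illustrative:
    0.3-2.0 s roughly corresponds to 30-200 bpm)."""
    rr = [b - a for a, b in zip(peak_times_s, peak_times_s[1:])
          if rr_min <= b - a <= rr_max]
    if not rr:
        return None  # no valid intervals in this window
    return 60.0 / (sum(rr) / len(rr))
```

On the actual device this computation runs on the integrated microcontroller, with only the resulting heart rate sent over the wireless interface.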
In previous studies, we used a method for detecting stress that was based exclusively on heart rate and ECG for differentiation between situations such as mental stress, physical activity, relaxation, and rest. As a response of the heart to these situations, we observed different behavior in the Root Mean Square of Successive Differences of heartbeats (RMSSD). This study aims to analyze virtual reality, via a virtual reality headset, as an effective stressor for future work. The RMSSD value is an important marker for the parasympathetic effect on the heart and can provide information about stress. For these measurements, the RR intervals were collected using a chest belt; additional sensors were not used for the analysis. We conducted experiments with ten subjects who had to drive a simulator for 25 minutes using monitors and 25 minutes using a virtual reality headset. Before starting and after finishing each simulation, the subjects had to complete a survey describing their mental state. The experimental results show that driving with a virtual reality headset has some influence on heart rate and RMSSD, but it does not significantly increase the stress of driving.
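The RMSSD marker used throughout these studies has a simple closed form over the recorded RR intervals; a minimal sketch:

```python
import math

def rmssd(rr_ms):
    """Root Mean Square of Successive Differences of RR intervals.
    Input: RR intervals in milliseconds; output in milliseconds.
    Higher RMSSD indicates stronger parasympathetic activity."""
    diffs = [b - a for a, b in zip(rr_ms, rr_ms[1:])]
    return math.sqrt(sum(d * d for d in diffs) / len(diffs))
```

In practice RMSSD is computed over a sliding window of RR intervals so that changes between rest, stress, and activity phases become visible over time.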
Stress is recognized as a predominant disease factor, and in the future the costs for treatment will increase. The presented approach tries to detect stress in a very basic and easy-to-implement way, so that the cost of the device and the effort to wear it remain low. The user should benefit from the fact that the system offers an easy interface reporting the status of their body in real time. In parallel, the system provides interfaces to pass the obtained data on for further processing and (professional) analyses, in case the user agrees. The system is designed to be used in everyday activities and is not restricted to laboratory use or environments. The implementation of the enhanced prototype shows that the detection of stress and the reporting can be managed using correlation plots and automatic pattern recognition even on a very lightweight microcontroller platform.
Stress and physical activities are important aspects of people's lives. The body's reactions to stress and to physical activities can be very similar, but long-term stress leads to diseases and damages the body. Currently, there is no method to differentiate easily and clearly between these two aspects within a time slot. We confronted this problem while developing a mobile system for the detection and analysis of stress. This paper presents an approach that uses a long-term monitor with ECG/EKG capabilities and analyzes the heart rate data extracted from the device. The focus of the work is to find characteristics that are useful for differentiating between physical activity and stress.
Stress is recognized as a predominant disease with rising costs for rehabilitation and treatment. Currently, there are several different approaches that can be used for determining and calculating stress levels. The methods for determining stress are usually divided into two categories. The first category does not require any special equipment for measuring stress; it uses the variation in behaviour patterns that occurs under stress. The core disadvantage of this category is its limitation to specific use cases. The second category uses laboratory instruments and biological sensors. This category allows stress to be measured precisely and proficiently, but at the same time the devices are neither mobile nor transportable and do not support real-time feedback. This work presents a mobile system that provides the calculation of stress. To achieve this, the data of a mobile ECG sensor is analysed, processed, and visualised on a mobile system such as a smartphone. This work also explains the stress measurement algorithm used. The result of this work is a portable system that can be used with a mobile system such as a smartphone as a visual interface for reporting the current stress level.
The perception of the amount of stress is subjective to every person, and this perception changes depending on many factors. One factor that has an impact on perceived stress is the emotional state. In this work, we compare the emotional state of 40 German driving students and present different partitions that can be advantageous for using artificial intelligence and classification. In this way, we evaluate the data quality and prepare it for the specific use. The Perceived Stress Questionnaire (PSQ20) was employed to assess the level of stress experienced by individuals while participating in a driving simulation for 5 and 25 min. As a result of our analysis, we present a categorisation of various emotional states into intervals, comparing different classifications and facilitating a more straightforward implementation of artificial intelligence for classification purposes.
The investigation of stress requires distinguishing between stress caused by physical activity and stress caused by psychosocial factors. The behaviour of the heart in response to stress and to physical activity is very similar if the set of monitored parameters is reduced to one. Currently, the differentiation remains difficult, and methods that only use the heart rate are not able to differentiate between stress and physical activity without additional sensor data input. The approach focuses on methods that generate signals providing characteristics useful for detecting stress, physical activity, no activity, and relaxation.
Methods based exclusively on heart rate hardly allow differentiation between physical activity, stress, relaxation, and rest, which is why an additional sensor, such as an activity/movement sensor, is added for detection and classification. The response of the heart to physical activity, stress, relaxation, and no activity can be very similar. In this study, we observe the influence of induced stress and analyze which metrics could be considered for its detection. The changes in the Root Mean Square of the Successive Differences provide us with information about physiological changes. A set of measurements collecting the RR intervals was taken. The intervals are used as a parameter to distinguish four different stages. Parameters like skin conductivity or skin temperature were not used because the main aim is to maintain a minimum number of sensors and devices and thereby increase wearability in the future.
The purpose of this paper is to examine the effects of perceived stress on traffic and road safety. One of the leading causes of stress among drivers is the feeling of having a lack of control during the driving process. Stress can result in more traffic accidents, an increase in driver errors, and an increase in traffic violations. To study this phenomenon, the Perceived Stress Questionnaire (PSQ) was used to evaluate the perceived stress while driving in a simulation. The study was conducted with participants from Germany, who were grouped into different categories based on their emotional stability. Each participant was monitored using wearable devices that measured their instantaneous heart rate (HR). The preference for wearable devices was due to their non-intrusive and portable nature. The results of this study provide an overview of how stress can affect traffic and road safety, which can be used for future research or to implement strategies to reduce road accidents and promote traffic safety.
The principal objective of this study is to investigate the impact of perceived stress on traffic and road safety. Therefore, we designed a study that allows the generation and collection of stress-relevant data. Drivers often experience stress due to their perception of a lack of control during the driving process. This can lead to an increased likelihood of traffic accidents, driver errors, and traffic violations. To explore this phenomenon, we used the Perceived Stress Questionnaire (PSQ) to evaluate perceived stress levels during driving simulations and the EPQR questionnaire to determine the personality of the driver. With the presented study, participants can be categorised based on their emotional stability and personality traits. Wearable devices were utilised to monitor each participant's instantaneous heart rate (HR) due to their non-intrusive and portable nature. The findings of this study deliver an overview of the link between stress and traffic and road safety. These findings can be utilised for future research and implementing strategies to reduce road accidents and promote traffic safety.
Stress is recognized as a predominant disease with growing costs of treatment. The approach presented here aims to detect stress using a lightweight, mobile, cheap, and easy-to-use system. The results show that stress can be detected even when a person's natural bio-vital data is out of the main range. The system enables storage of the measured data while maintaining communication channels for online and post-processing.
The magneto-mechanical behavior of magnetic shape memory (MSM) materials has been investigated by means of different simulation and modeling approaches by several research groups. The target of this paper is to simulate actuators driven by MSM alloys and to understand the MSM element behavior during actuation, which shall lead to an increased performance of the actuator. It is shown that internal and external stresses should be taken into consideration using numerical computation tools for magnetic fields in an efficient way.
Overcoming the seam in university learning between the higher-education context and professional practice is a major challenge due to the temporal, spatial, and organizational separation of the relevant actors (including teachers, learners, and company representatives) (Milrad et al., 2013). A seamless-learning-based course design grounded in agile values and methods (including incremental procedures, a focus on learner-centered courses, and individualized learner feedback) can help overcome this significant seam. The poster discusses the basic design of such an agile SL concept based on an iterative, incremental approach within a semester cycle of 15 weeks organized in three learning sprints. In addition, it reports on initial teaching experiences of lecturers from both the university and industry, as well as on learning experiences of students over the past two years.
Scenography is a universal discipline that uses space, light, graphics, sound, and digital media to create experiential spaces in which knowledge is conveyed and hidden relationships can be made intuitively accessible to visitors. Medial communication strategies play a central role here. One of these strategies is information on demand: contextualizing information is not offered permanently but only when the visitor retrieves it. This allows both an auratic presentation of objects and, equally, contextualization and the conveying of information. A further strategy is personalized devices, which additionally enable visitors to be addressed in a target-group-oriented way. Beyond that, VR and AR technologies offer the possibility of presenting complex scientific relationships in a way that is effective for audiences.
Think BIQ: Gender Differences, Entrepreneurship Support and the Quality of Business Idea Description
(2023)
Entrepreneurship support, its influencing factors, and female entrepreneurship are recently discussed topics with great relevance for society and politics. However, research on the subject has been divergent in its results and lacks a focus on the impact of support programs' characteristics with respect to different types of entrepreneurs. Thus, we conduct a fuzzy-set Qualitative Comparative Analysis of entrepreneurship support characteristics, aiming to shed light on possible gender differences occurring in such programs. We investigate the quality of business idea descriptions, as a predecessor of a high-potential business model, operationalized using inter alia causation and effectuation theory and social role theory as possible explanations. In our fuzzy-set Qualitative Comparative Analysis of a sample of 911 Norwegian ventures, we find a variety of differences related to the entrepreneurs' gender. For instance, financial support combined with a well-described key contribution or careful planning seems to be a more important antecedent for female entrepreneurs' business idea quality than for male entrepreneurs'. Moreover, a well-described key contribution seems to have a positive effect on the outcome variable in most cases. Another interesting finding concerns the entrepreneurs' network partners, where we found evident gender differences in our combinations: female entrepreneurs seemingly benefitted from rather small networks and males from big networks, although the former possess larger networks in the sample. In conclusion, we find that gender differences in combinations of entrepreneurship support for high business idea quality still occur, even in a country like Norway, calling for an adaptation of the provided support and environment.
The paper investigates an innovative actuator combination based on the magnetic shape memory technology. The actuator is composed of an electromagnet, which is activated to produce motion, and a magnetic shape memory element, which is used passively to yield multistability, i.e. the possibility of holding a position without input power. Based on the experimental open-loop frequency characterization of the actuator, a position controller is developed and tested in several experiments.
Corrosion
(2016)
This paper summarizes the trends in metallization and interconnection technology in the eyes of the participants of the 8th Metallization and Interconnection Workshop. Participants were asked in a questionnaire to share their views on the future development of metallization technology, the kind of metal used for front-side metallization, and the future development of interconnection technology. The continuous improvement of screen-printing technology is reflected in its high expected percentage share, decreasing from 88% in three years to still 70% in ten years. The dominating front-side metal in the view of the participants will be silver, with an expected percentage share of nearly 70% in 2029. Regarding interconnection technologies, the experts of the workshop expect new technologies to gain significant technology shares faster. Whereas soldering on busbars is expected to dominate in three years with a percentage share of 71%, it will drop to 35% in ten years in the eyes of the participants. Multiwire and shingling technologies are seen to have the highest potential, with expected percentage shares of 33% (multiwire) and 16% (shingling) in ten years.
To get a better understanding of Cross Site Scripting vulnerabilities, we investigated 50 randomly selected CVE reports which are related to open source projects. The vulnerable and patched source code was manually reviewed to find out what kind of source code patterns were used. Source code pattern categories were found for sources, concatenations, sinks, html context and fixes. Our resulting categories are compared to categories from CWE. A source code sample which might have led developers to believe that the data was already sanitized is described in detail. For the different html context categories, the necessary Cross Site Scripting prevention mechanisms are described.
We investigated 50 randomly selected buffer overflow vulnerabilities in Firefox. The source code of these vulnerabilities and the corresponding patches were manually reviewed and patterns were identified. Our main contribution are taxonomies of errors, sinks and fixes seen from a developer's point of view. The results are compared to the CWE taxonomy with an emphasis on vulnerability details. Additionally, some ideas are presented on how the taxonomy could be used to improve the software security education.
Many secure software development methods and tools are well-known and understood. Still, the same software security vulnerabilities keep occurring. To find out if new source code patterns evolved or the same patterns are reoccurring, we investigate SQL injections in PHP open source projects. SQL injections are well-known and a core part of software security education. For each common part of SQL injections, the source code patterns are analysed. Examples are pointed out showing that developers had software security in mind, but nevertheless created vulnerabilities. A comparison to earlier work shows that some categories are not found as often as expected. Our main contribution is the categorization of source code patterns.
We present source code patterns that are difficult for modern static code analysis tools. Our study comprises 50 different open source projects in both a vulnerable and a fixed version for XSS vulnerabilities reported with CVE IDs over a period of seven years. We used three commercial and two open source static code analysis tools. Based on the reported vulnerabilities we discovered code patterns that appear to be difficult to classify by static analysis. The results show that code analysis tools are helpful, but still have problems with specific source code patterns. These patterns should be a focus in training for developers.
We compared vulnerable and fixed versions of the source code of 50 different PHP open source projects based on CVE reports for SQL injection vulnerabilities. We scanned the source code with commercial and open source tools for static code analysis. Our results show that five current state-of-the-art tools have issues correctly marking vulnerable and safe code. We identify 25 code patterns that are not detected as a vulnerability by at least one of the tools and 6 code patterns that are mistakenly reported as a vulnerability that cannot be confirmed by manual code inspection. Knowledge of the patterns could help vendors of static code analysis tools, and software developers could be instructed to avoid patterns that confuse automated tools.
Systematic Generation of XSS and SQLi Vulnerabilities in PHP as Test Cases for Static Code Analysis
(2022)
Synthetic static code analysis test suites are important to test the basic functionality of tools. We present a framework that uses different source code patterns to generate Cross Site Scripting and SQL injection test cases. A decision tree is used to determine if the test cases are vulnerable. The test cases are split into two test suites. The first test suite contains 258,432 test cases that have influence on the decision trees. The second test suite contains 20 vulnerable test cases with different data flow patterns. The test cases are scanned with two commercial static code analysis tools to show that they can be used to benchmark and identify problems of static code analysis tools. Expert interviews confirm that the decision tree is a solid way to determine the vulnerable test cases and that the test suites are relevant.
Investigation of magnetic effects on austenitic stainless steels after low temperature carburization
(2018)
Talk given at the 10th International European Stainless Steel Conference and 6th European Duplex Stainless Steel Conference (ESSC & DUPLEX 2019), 30.09.–02.10.2019, Vienna, Austria
Magnetic effects on austenitic stainless steels, formed during low-temperature carburizing depending on the alloy composition, are discussed in this paper. Samples of different austenitic stainless steel alloys were subjected to multiple low-temperature carburization. Layer characterization with a light microscope and hardness profiles shows a growth of the layer thickness. The formation of an expanded austenite layer (lattice expansion) could be detected by X-ray diffraction (XRD). A Feritscope was used to determine the magnetizability, whereby not all austenitic alloys develop magnetizability after treatment. Furthermore, test procedures were developed to visualize the magnetizability. For this purpose, magnetic force microscope measurements and investigations with ferrofluid were carried out, and a fir-tree-like ferromagnetic layer structure could be demonstrated.
Small vessels or unmanned surface vehicles only have a limited amount of space and energy available. If these vessels require an active sensing collision avoidance system, it is often not possible to mount large sensor systems like X-band radars. Thus, in this paper, an energy-efficient automotive radar and a laser range sensor are evaluated for tracking surrounding vessels. For such targets, these types of sensors typically generate more than one detection per scan. Therefore, an extended target tracking problem has to be solved to estimate the state and extension of the vessels. In this paper, an extended version of the probabilistic data association filter that uses random matrices is applied. The performance of the tracking system using either radar or laser range data is demonstrated in real experiments.
Probabilistic data association for tracking extended targets under clutter using random matrices
(2015)
The use of random matrices for tracking extended objects has received considerable attention in recent years. It is an efficient approach for tracking objects that give rise to more than one measurement per time step. In this paper, the concept of random matrices is used to track surface vessels using high-resolution automotive radar sensors. Since the radar also receives a large number of clutter measurements from the water, a generalized probabilistic data association filter is applied to the data association problem. Additionally, a modification of the filter update step is proposed to incorporate the Doppler velocity measurements. The presented tracking algorithm is validated using Monte Carlo simulation, and some performance results with real radar data are shown as well.
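In random-matrix approaches, each scan's detections enter the update through their centroid and scatter matrix; the sketch below computes only these per-scan sufficient statistics, while the full filter with clutter handling and Doppler incorporation is beyond this sketch:

```python
import numpy as np

def detection_statistics(detections):
    """Per-scan statistics used in random-matrix extended-object tracking:
    the centroid drives the kinematic state update, the scatter matrix
    drives the extent (shape) estimate."""
    Z = np.asarray(detections, dtype=float)   # shape (m, 2): m detections
    z_bar = Z.mean(axis=0)                    # measurement centroid
    diff = Z - z_bar
    Z_scatter = diff.T @ diff                 # measurement spread matrix
    return z_bar, Z_scatter
```

The extent of the tracked vessel is then maintained as an inverse-Wishart-distributed random matrix that the scatter matrix updates at each scan.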
Ein neuzeitlicher Pfahlbau
(2016)
Bauhäusler am Bodensee
(2016)
This paper aims to apply the basics of the Service-Dominant Logic, especially the concept of creating benefits through serving, to the stationary retail industry. In the industrial context, the shift from a product-driven point of view to a service-driven perspective has been discussed widely. However, there are only few connections to how this can be applied to the retail sector on a B2C-level and how retailers can use smart services in order to enable customer engagement, loyalty and retention. The expectations of customers towards future stationary retail develop significantly as consumers got used to the comfort of online shopping. Especially the younger generation—the Generation Z—seems to have changed their priorities from the bare purchase of products to an experience- and service-driven approach when shopping over-the-counter. To stay successful long-term, companies from this sector need to adapt to the expectations of their future main customer group. Therefore, this paper will analyse the specific needs of Generation Z, explain how smart services contribute to creating benefit for this customer group and how this affects the economic sustainability of these firms.
Effect of cold working on the localized corrosion behavior of CrNi and CrNiMnN metastable austenites
(2015)
Differences in the pitting resistance between cold worked CrNi and CrNiMnN metastable austenites
(2015)
Fachvortrag auf dem Kongress CORROSION 2015, 15-19 March, Dallas, Texas, USA. NACE International
Measuring cardiorespiratory parameters in sleep using non-contact sensors and the ballistocardiography technique has received much attention because the method is low-cost, unobtrusive, and non-invasive. Designing a system that is user-friendly, simple to use, easy to deploy, and less error-prone remains an open challenge due to the complex morphology of the signal. In this work, using four force-sensitive resistor sensors, we conducted a study with four sensor distributions in order to simplify the system by identifying the region of interest for heartbeat and respiration measurement. The sensors are deployed under the mattress and attached to the bed frame without any interference with the subjects. The four distributions comprise two linear horizontal, one linear vertical, and one square arrangement, covering the region that influences cardiorespiratory activity. We recruited 4 subjects and acquired data in four regular sleeping positions, each for a duration of 80 seconds. The signal processing was performed using a discrete wavelet transform (bior3.9) with a smoothing level of 4 as well as bandpass filtering. The results indicate a mean absolute error of 2.35 and 4.34 for respiration and heartbeat, respectively. The results suggest the efficiency of a triangle-shaped arrangement of three sensors for measuring heartbeat and respiration parameters in all four regular sleeping positions.
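The rate-estimation step described above can be sketched in simplified form. The sketch below is a hypothetical stand-in for the paper's pipeline: it replaces the discrete wavelet transform (bior3.9) denoising with a plain moving average and counts local maxima to estimate a respiration or heartbeat rate; all function names and parameters are illustrative, not taken from the paper.

```python
import numpy as np

def estimate_rate(signal, fs, min_gap_s):
    """Estimate a periodic rate (e.g. breaths/min) by counting local
    maxima separated by at least `min_gap_s` seconds.

    fs : sampling frequency in Hz. Crude moving-average smoothing is
    used here in place of wavelet denoising (illustrative only).
    """
    win = max(1, int(0.2 * fs))
    kernel = np.ones(win) / win
    smooth = np.convolve(signal, kernel, mode="same")

    # Local maxima with a refractory gap to suppress double detections.
    peaks = []
    min_gap = int(min_gap_s * fs)
    for i in range(1, len(smooth) - 1):
        if smooth[i] > smooth[i - 1] and smooth[i] >= smooth[i + 1]:
            if not peaks or i - peaks[-1] >= min_gap:
                peaks.append(i)

    duration_min = len(signal) / fs / 60.0
    return len(peaks) / duration_min

def mean_absolute_error(est, ref):
    """MAE between estimated and reference rates, as reported above."""
    return float(np.mean(np.abs(np.asarray(est) - np.asarray(ref))))
```

On a clean synthetic breathing signal (a 0.25 Hz sine, i.e. 15 breaths/min) this recovers the correct rate; real BCG signals require the more careful denoising the paper applies.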
Non-invasive methods are particularly well suited for monitoring sleep at home. The signals most frequently monitored are heart rate and respiratory rate. Ballistocardiography (BCG) is a technique in which the heart rate is measured from the mechanical oscillations of the body during each cardiac cycle. Review articles on the topic have recently been published. In a first approach, this study assesses whether the heart rate can be detected using BCG. The essential boundary conditions are whether this succeeds when the sensor is positioned under the mattress and low-cost sensors are used.
Present demographic change and a growing population of elderly people lead to new medical needs. Meeting these with state-of-the-art technology is consequently a rapidly growing market. This work therefore aims at taking modern concepts of mobile and sensor technology and putting them into a medical context. By measuring a user's vital signs with sensors whose data are processed on an Android smartphone, the target system is able to determine the current health state of the user and to visualize the gathered information. The system also includes a weather forecasting functionality that alerts the user to potentially dangerous future meteorological events. All information is collected centrally and distributed to users based on their location. Further, the system can correlate the client-side measurement of vital signs with a server-side weather history. This enables personalized forecasting for each user individually. Finally, a portable and affordable application was developed that continuously monitors the health status via multiple vital-sign sensors, all united on a common smartphone.
Assistive environments are entering our homes faster than ever. However, there are still various barriers to be broken. One of the crucial points is the personalization of offered services and the integration of assistive technologies into common objects and therefore into the regular daily routine. Recognition of sleep patterns for a preliminary sleep study is one of the health services that could be performed in an unobtrusive way. This article proposes a hardware system for the non-obtrusive measurement of the bio-vital signals necessary for an initial sleep study. The first results confirm the potential of measuring breathing and movement signals with the proposed system.
Long-term sleep monitoring can be done primarily in the home environment. Good patient acceptance requires low user and installation barriers. The selection of parameters in this approach is significantly limited compared to a PSG session. The aim is a qualified selection of parameters, which on the one hand allow a sufficiently good classification of sleep phases and on the other hand can be detected by non-invasive methods.
The development of home health systems can provide continuous and user-friendly monitoring of key health parameters. This project aims to create a concept for such a system, implement it on a test basis, and evaluate it. Three health areas were selected for this purpose: sleep, stress, and rehabilitation. Appropriate devices were installed in the homes of test subjects and used by them for two weeks. In addition, relevant questionnaires were completed to obtain a complete picture. Finally, the implemented system was evaluated, and the results of the conducted study showed that home health systems have great potential. However, several points must be considered to increase the usability of the system and the motivation of the users. Among others, ease of use of the equipment is of extreme importance.
Sleep is a multi-dimensional influencing factor on physical health, cognitive function, emotional well-being, mental health, daily performance, and productivity. Barriers such as time consumption, invasiveness, and expense have caused a gradual shift in sleep monitoring from the traditional, standardized in-lab approach, e.g., polysomnography (PSG), to unobtrusive and non-invasive in-home sleep monitoring, yet further improvement is required. Despite increasing interest in fiber-optic-based methods for cardiorespiratory estimation, traditional mechanical sensors, consisting of force-sensitive resistors (FSR), lead zirconate titanate piezoelectric (PZT) elements, and accelerometers, still serve as the dominant approach. Part of their popularity lies in reduced system complexity, lower expense, easy maintenance, and user-friendliness. However, care must be taken regarding the performance of such sensors with respect to accuracy and calibration.
Autism spectrum disorders (ASD) affect a large number of children both in the Russian Federation and in Germany. Early diagnosis is key for these children, because the sooner parents notice such disorders in a child and a rehabilitation and treatment program starts, the higher the likelihood of the child's social adaptation. The difficulties in raising such a child lie in the complexity of teaching the child outside of children's groups and of providing medical care. In this regard, the development of digital applications that facilitate the medical care and education of such children at home is important and relevant. The purpose of the project is to improve the availability and quality of healthcare and the social adaptation at home of children with ASD through the use of digital technologies.
The citizen-centered health platform project is intended to provide a platform that can be used in EU cross-border regions, where social and economic exchange occurs across national borders. The overriding challenges are: (a) social: improving citizen-centered health and care provision; (b) technical: providing a digital platform for networking citizens, service providers, and municipal actors; (c) economic: developing sustainable, long-term successful business models and value chains. The platform should strengthen and expand existing networks and establish new regional networks. Each network addresses particular challenges in a region-specific manner. Here, national boundary conditions and interregional needs play an essential role. These objectives require sufficient participation of civil society representatives. Furthermore, the platform will establish an overarching, sustainable, and knowledge-based network of health experts. The platform is to be jointly developed and implemented in the regions and to follow an open-access approach. Synergies will therefore be shared more quickly, strengthening competencies and competitiveness. In addition to practice partners, scientific and municipal institutions and SMEs are involved. The actors thus contribute to scientific performance, innovative strength, and resilience.
In this paper, a machine learning method for sleep stage recognition is developed. Common methods of sleep analysis are based on polysomnography (PSG). The presented approach relies on signals that can be measured exclusively non-invasively in a home environment. Movement, heartbeat, and respiration signals are comparatively easy to acquire, but this makes the recognition of sleep stages more difficult. The signals are structured as time series and segmented into epochs. The performance of the machine learning approach is compared with polysomnography and evaluated.
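The epoch segmentation described above can be sketched minimally. The helpers below are a hypothetical illustration, assuming 30-second scoring epochs (the common convention in sleep analysis) and simple per-epoch statistics as classifier inputs; they are not the authors' implementation.

```python
import numpy as np

def to_epochs(signal, fs, epoch_s=30):
    """Split a 1-D signal into fixed-length epochs.

    fs is the sampling frequency in Hz; 30 s is the standard scoring
    window in sleep analysis. Trailing samples are dropped.
    """
    n = int(epoch_s * fs)
    k = len(signal) // n
    return np.asarray(signal[: k * n]).reshape(k, n)

def epoch_features(epochs):
    """Simple per-epoch features (mean, standard deviation) as a
    hypothetical input to a sleep-stage classifier."""
    return np.column_stack([epochs.mean(axis=1), epochs.std(axis=1)])
```

A feature matrix of this shape (one row per epoch) can then be passed to any standard classifier, mirroring the epoch-wise evaluation against PSG described above.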
Corporate entrepreneurship (CE) supports the strategic renewal of established companies. Corporate venturing represents one key concept of CE that helps companies strengthen their innovation capabilities. For the successful implementation of corporate ventures, dual structures are recommended. The question of how the interface should be designed plays a crucial role. Although it seems to be an important factor, this aspect requires further attention. One relevant element of the interface design is the set of different roles of the individuals interacting within the interface. This study is based on nine interviews representing six internal corporate ventures within one large German corporation from the ICT sector. The results, which were mirrored with short case studies of 25 additional companies from the data sample, contribute to a better understanding of the interface design by adding insights about roles in corporate entrepreneurship. This deeper understanding of roles allows conclusions to be drawn on the interface design from a structural point of view.
In today's volatile world, established companies must be capable of optimizing their core business with incremental innovations while simultaneously developing discontinuous innovations to maintain their long-term competitiveness. Balancing both is a major challenge for companies, since different types of innovation require different organizational structures, operational modes and management styles. Established companies tend to excel in improving their current business through incremental innovations which are closely related to their current knowledge base and competencies. However, this often goes hand in hand with challenges in the exploration of knowledge that is new to the company and that is essential for the development of discontinuous innovations. In this respect, the concept of corporate entrepreneurship is recognized as a way to strengthen the exploration of new knowledge and to support the development of discontinuous innovation. For managing corporate entrepreneurship more effectively, it is crucial to understand which types of knowledge can be created through corporate entrepreneurship and which organizational designs are more suited to gain certain types of knowledge. To answer these questions, this study analyzed 23 semi-structured interviews conducted with established companies that are running such entrepreneurial activities. The results show (1) that three general types of knowledge can be explored through corporate entrepreneurship and (2) that some organizational designs are more suited to explore certain knowledge types than others are.
Corporate entrepreneurship (CE) is experiencing continuously increasing interest from scholars and practitioners. One reason for this seems to be rooted in the organizational structures of established companies, which are cumbersome for implementing organizational agility and for developing radical innovations. In view of advancing digitalization, however, exactly this is required in order to be successful in the long term. CE is a promising managerial tool that offers a wide range of options to pursue the creation of new businesses and to support companies' transformation in order to adapt to changes in the environment. Even though CE offers a broad range of opportunities, its effective management is a challenge. One reason for this is the ambiguity when it comes to the differences between the various CE forms and the objectives that can be achieved by them. This study, which is based on 13 in-depth interviews from eight high-tech companies, contributes to a better understanding of CE by offering a first harmonized set of CE objectives that is suitable for comparing and differentiating across the different forms. In addition, three CE types, offering a new perspective on how to differentiate CE forms, are identified and give implications for more effective management.
Corporate venturing has gained much attention due to challenges and changes that occur because of discontinuous innovations, which seem to be promoted by digitalization. In this context, open innovation has become a promising tool for established companies to strengthen their innovation capabilities. While the external opening of the innovation process has gained much attention, the internal opening lacks investigation. Especially new organizational forms, such as Internal Corporate Accelerators, have not been investigated sufficiently. This study, which is based on 13 interviews from two German tech companies, contributes to a better understanding of this new form of corporate venturing and the resulting effects on organizational renewal.