Uncertainty about the future requires companies to create discontinuous innovations. Established companies, however, struggle to do so, whereas independent startups seem to cope with this better. Consequently, established companies set up entrepreneurial initiatives to make use of the benefits startups enjoy, which has led, among other things, to great interest in so-called corporate entrepreneurship (CE) programs and to the development and characterization of several different forms. Yet the processes by which these programs achieve their objectives are still rather ineffective. Examining the actions performed in preparation for and during CE programs could be one approach to improving this, but such analyses are still lacking today. Furthermore, the increasing use of several CE programs in parallel seems to offer potential for synergies and thus more efficient use of resources. Aiming to provide insights into both issues, this study analyzes the actions of CE programs by examining interviews with managers of seven corporate incubators and accelerator programs at five established German tech companies.
In today's volatile world, established companies must be capable of optimizing their core business with incremental innovations while simultaneously developing discontinuous innovations to maintain their long-term competitiveness. Balancing both is a major challenge for companies, since different types of innovation require different organizational structures, operational modes and management styles. Established companies tend to excel in improving their current business through incremental innovations which are closely related to their current knowledge base and competencies. However, this often goes hand in hand with challenges in the exploration of knowledge that is new to the company and that is essential for the development of discontinuous innovations. In this respect, the concept of corporate entrepreneurship is recognized as a way to strengthen the exploration of new knowledge and to support the development of discontinuous innovation. For managing corporate entrepreneurship more effectively, it is crucial to understand which types of knowledge can be created through corporate entrepreneurship and which organizational designs are more suited to gain certain types of knowledge. To answer these questions, this study analyzed 23 semi-structured interviews conducted with established companies that are running such entrepreneurial activities. The results show (1) that three general types of knowledge can be explored through corporate entrepreneurship and (2) that some organizational designs are more suited to explore certain knowledge types than others are.
We have analyzed a pool of 37,839 articles published in 4,404 business-related journals in the entrepreneurship research field using a novel literature review approach that is based on machine learning and text data mining. Most papers have been published in the journals ‘Small Business Economics’, ‘International Journal of Entrepreneurship and Small Business’, and ‘Sustainability’ (Switzerland), while the sum of citations is highest in the ‘Journal of Business Venturing’, ‘Entrepreneurship Theory and Practice’, and ‘Small Business Economics’. We derived 29 overarching themes based on 52 identified clusters. The social entrepreneurship, development, innovation, capital, and economy clusters represent the largest ones among those with high thematic clarity. The most discussed clusters measured by the average number of citations per assigned paper are research, orientation, capital, gender, and growth. Clusters with the highest average growth in publications per year are social entrepreneurship, innovation, development, entrepreneurship education, and (business-) models. Measured by the average yearly citation rate per paper, the thematic cluster ‘research’, mostly containing literature studies, received most attention. The MLR allows for an inclusion of a significantly higher number of publications compared to traditional reviews thus providing a comprehensive, descriptive overview of the whole research field.
This paper proposes a novel transmission scheme for generalized multistream spatial modulation. The new approach uses one-Mannheim-error-correcting codes over Gaussian or Eisenstein integers as multidimensional signal constellations. These codes enable a suboptimal decoding strategy with near maximum likelihood performance for transmission over the additive white Gaussian noise channel. In this contribution, this decoding algorithm is generalized to the detection of generalized multistream spatial modulation. The proposed method can outperform conventional generalized multistream spatial modulation with respect to decoding performance, detection complexity, and spectral efficiency.
Soft-input decoding of concatenated codes based on the Plotkin construction and BCH component codes
(2020)
Low latency communication requires soft-input decoding of binary block codes with small to medium block lengths.
In this work, we consider generalized multiple concatenated (GMC) codes based on the Plotkin construction. These codes are similar to Reed-Muller (RM) codes. In contrast to RM codes, BCH codes are employed as component codes. This leads to improved code parameters. Moreover, a decoding algorithm is proposed that exploits the recursive structure of the concatenation. This algorithm enables efficient soft-input decoding of binary block codes with small to medium lengths. The proposed codes and their decoding achieve significant performance gains compared with RM codes and recursive GMC decoding.
The reliability of flash memories suffers from various error causes. Program/erase cycles, read disturb, and cell to cell interference impact the threshold voltages and cause bit errors during the read process. Hence, error correction is required to ensure reliable data storage. In this work, we investigate the bit-labeling of triple level cell (TLC) memories. This labeling determines the page capacities and the latency of the read process. The page capacity defines the redundancy that is required for error correction coding. Typically, Gray codes are used to encode the cell state such that the codes of adjacent states differ in a single digit. These Gray codes minimize the latency for random access reads but cannot balance the page capacities. Based on measured voltage distributions, we investigate the page capacities and propose a labeling that provides a better rate balancing than Gray labeling.
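The trade-off described above can be illustrated with a small script (an illustrative sketch, not the labeling proposed in the paper): the binary-reflected Gray code for the eight TLC states minimizes the total number of bit transitions between adjacent states, but distributes them unevenly across the three pages, so the page read latencies and capacities are unbalanced.

```python
def gray(i):
    """Binary-reflected Gray code of the cell state index i."""
    return i ^ (i >> 1)

states = [gray(i) for i in range(8)]          # labels of the 8 TLC levels
print(["{:03b}".format(s) for s in states])

# Count, for every page (bit position), how often that bit flips between
# neighbouring states; this equals the number of read reference voltages
# needed to resolve that page.
for bit in range(3):
    flips = sum(((states[i] ^ states[i + 1]) >> bit) & 1 for i in range(7))
    print("page (bit {}): {} transitions".format(bit, flips))
```

For this labeling the three pages require four, two, and one read thresholds, respectively: the total of seven transitions is minimal, but it is unevenly distributed over the pages, which matches the observation above that Gray labeling cannot balance the page capacities.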
Side Channel Attack Resistance of the Elliptic Curve Point Multiplication using Eisenstein Integers
(2020)
Asymmetric cryptography empowers secure key exchange and digital signatures for message authentication. Nevertheless, consumer electronics and embedded systems often rely on symmetric cryptosystems because asymmetric cryptosystems are computationally intensive. Besides, implementations of cryptosystems are prone to side-channel attacks (SCA). Consequently, the secure and efficient implementation of asymmetric cryptography on resource-constrained systems is demanding. In this work, elliptic curve cryptography is considered. A new concept for an SCA resistant calculation of the elliptic curve point multiplication over Eisenstein integers is presented and an efficient arithmetic over Eisenstein integers is proposed. Representing the key by Eisenstein integer expansions is beneficial to reduce the computational complexity and the memory requirements of an SCA protected implementation.
Deep neural networks (DNNs) are known for their high prediction performance, especially in perceptual tasks such as object recognition or autonomous driving. Still, DNNs are prone to yield unreliable predictions when encountering completely new situations without indicating their uncertainty. Bayesian variants of DNNs (BDNNs), such as MC dropout BDNNs, do provide uncertainty measures. However, BDNNs are slow during test time because they rely on a sampling approach. Here we present a single shot MC dropout approximation that preserves the advantages of BDNNs without being slower than a DNN. Our approach is to analytically approximate for each layer in a fully connected network the expected value and the variance of the MC dropout signal. We evaluate our approach on different benchmark datasets and a simulated toy example. We demonstrate that our single shot MC dropout approximation resembles the point estimate and the uncertainty estimate of the predictive distribution that is achieved with an MC approach, while being fast enough for real-time deployments of BDNNs.
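A minimal sketch of the underlying idea (with hypothetical shapes and without the handling of the nonlinearities treated in the paper): instead of sampling dropout masks, the mean and variance of the dropout signal are propagated analytically through a fully connected layer, assuming independent units.

```python
import numpy as np

def dropout_linear_moments(mean_in, var_in, W, b, p):
    """Propagate mean and variance through inverted dropout (drop rate p)
    followed by a linear layer y = W x + b, assuming independent units."""
    q = 1.0 - p
    # For a mask m ~ Bernoulli(q)/q independent of x:
    #   E[m*x] = E[x],   Var(m*x) = Var(x)/q + (p/q) * E[x]^2
    mean_drop = mean_in
    var_drop = var_in / q + (p / q) * mean_in ** 2
    mean_out = W @ mean_drop + b
    var_out = (W ** 2) @ var_drop        # variances add under independence
    return mean_out, var_out
```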
We compared vulnerable and fixed versions of the source code of 50 different PHP open source projects based on CVE reports for SQL injection vulnerabilities. We scanned the source code with commercial and open source tools for static code analysis. Our results show that five current state-of-the-art tools have issues correctly marking vulnerable and safe code. We identify 25 code patterns that are not detected as a vulnerability by at least one of the tools and 6 code patterns that are mistakenly reported as a vulnerability that cannot be confirmed by manual code inspection. Knowledge of the patterns could help vendors of static code analysis tools, and software developers could be instructed to avoid patterns that confuse automated tools.
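The study itself concerns PHP code, but the kind of pattern involved can be illustrated with an analogous (hypothetical) Python example: the concatenated query is the sort of construct a scanner should flag, while the parameterized variant is safe.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
user_input = "alice' OR '1'='1"

# Vulnerable pattern: user input concatenated directly into the SQL string.
query = "SELECT role FROM users WHERE name = '" + user_input + "'"
conn.execute(query)

# Fixed pattern: the driver binds the value, so it cannot alter the query.
conn.execute("SELECT role FROM users WHERE name = ?", (user_input,))
```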
This paper presents the goals, the service design approach, and the results of the project “Accessible Tourism around Lake Constance”, which is currently run by different universities, industrial partners, and selected hotels in Switzerland, Germany, and Austria. In the first phase, interviews with persons with disabilities and elderly persons were conducted to identify the barriers and pains faced by tourists who want to spend their holidays in the Lake Constance region, as well as possible assistive technologies that help to overcome these barriers. The analysis of the interviews shows that one third of the pains and barriers are due to missing, insufficient, wrong, or inaccessible information about the accessibility of the accommodation, surroundings, and points of interest during the planning phase of the holidays. Digital assistive technologies hence play a major role in bridging this information gap. In the second phase, so-called Hotel-Living-Labs (HLL) have been established in which the identified assistive technologies can be evaluated. Based on these HLLs, an overall service for accessible holidays has been designed and developed. In the last phase, this service has been implemented based on the HLLs and the identified assistive technologies, and it is currently being field-tested with tourists with disabilities from the three participating countries.
The ageing infrastructure in ports requires regular inspection. This inspection is currently carried out manually by divers who sense the entire underwater infrastructure by hand. The process is cost-intensive, as it requires a lot of time and human resources. To overcome these difficulties, we propose to scan the above- and underwater port structure with a multi-sensor system and, by a fully automated process, to classify the obtained point cloud into damaged and undamaged zones. We make use of simulated training data to test our approach, since not enough training data with corresponding class labels are available yet. To that aim, we build a rasterised heightfield of a point cloud of a sheet pile wall by cutting it into vertical slices. The distance from each slice to the corresponding line generates the heightfield, which is propagated through a convolutional neural network that detects anomalies. We use the VGG19 deep neural network model pretrained on natural images. This neural network has 19 layers and is often used for image recognition tasks. We showed that our approach can achieve fully automated, reproducible, quality-controlled damage detection that analyses the whole structure instead of the sample-wise manual method with divers. The mean true positive rate is 0.98, which means that we detected 98 % of the damages in the simulated environment.
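A rough sketch of the rasterisation step, under simplified assumptions (the wall is taken to be roughly aligned with the x-z plane, so the out-of-plane coordinate y serves directly as the "height"; names and cell size are ours, not the project's):

```python
import numpy as np

def heightfield(points, cell=0.05):
    """Rasterise a sheet-pile-wall point cloud (N x 3 array with columns
    x, y, z) into a 2-D grid of mean out-of-plane distances per cell."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    xi = np.floor((x - x.min()) / cell).astype(int)
    zi = np.floor((z - z.min()) / cell).astype(int)
    total = np.zeros((xi.max() + 1, zi.max() + 1))
    count = np.zeros_like(total)
    np.add.at(total, (xi, zi), y)     # accumulate distances per cell
    np.add.at(count, (xi, zi), 1)     # number of points per cell
    return np.divide(total, count, out=np.zeros_like(total), where=count > 0)
```

Such a grid could then be treated as an image and passed to the pretrained network for anomaly detection.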
Modeling a suitable birth density is a challenge when using Bernoulli filters such as the Labeled Multi-Bernoulli (LMB) filter. The birth density of newborn targets is unknown in most applications, but must be given as a prior to the filter. Usually the birth density stays unchanged or is designed based on the measurements from previous time steps.
In this paper, we assume that the true initial state of new objects is normally distributed. The expected value and covariance of the underlying density are unknown parameters. Using the estimated multi-object state of the LMB and the Rauch-Tung-Striebel (RTS) recursion, these parameters are recursively estimated and adapted after a target is detected.
The main contribution of this paper is an algorithm to estimate the parameters of the birth density and its integration into the LMB framework. Monte Carlo simulations are used to evaluate the detection driven adaptive birth density in two scenarios. The approach can also be applied to filters that are able to estimate trajectories.
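The recursive parameter update can be pictured with Welford's running mean and covariance estimator (a generic sketch with hypothetical names; the paper obtains the initial states of confirmed targets from the LMB estimate and RTS smoothing before feeding them into such an update):

```python
import numpy as np

class BirthDensityEstimator:
    """Recursively estimate mean and covariance of the initial states of
    newly detected targets to adapt the birth density."""
    def __init__(self, dim):
        self.n = 0
        self.mean = np.zeros(dim)
        self.M2 = np.zeros((dim, dim))   # sum of outer products of deviations

    def update(self, x0):
        """Incorporate one smoothed initial state x0 (Welford update)."""
        self.n += 1
        delta = x0 - self.mean
        self.mean += delta / self.n
        self.M2 += np.outer(delta, x0 - self.mean)

    @property
    def cov(self):
        return self.M2 / max(self.n - 1, 1)
```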
The expansion of a given multivariate polynomial into Bernstein polynomials is considered. Matrix methods for the calculation of the Bernstein expansion of the product of two polynomials and of the Bernstein expansion of a polynomial from the expansion of one of its partial derivatives are provided which allow also a symbolic computation.
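For reference, the univariate case of the quantities involved (standard Bernstein-form relations; the paper treats the multivariate, matrix-based formulation):

```latex
% Bernstein expansion of a degree-n polynomial on [0,1]
p(x) = \sum_{i=0}^{n} b_i \, B_{i,n}(x), \qquad
B_{i,n}(x) = \binom{n}{i} x^{i} (1-x)^{n-i}.

% Bernstein coefficients (degree m+n) of the product of p (degree m,
% coefficients a_i) and q (degree n, coefficients b_j):
c_k = \sum_{i=\max(0,\,k-n)}^{\min(m,\,k)}
      \frac{\binom{m}{i}\binom{n}{k-i}}{\binom{m+n}{k}} \, a_i \, b_{k-i}.
```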
We propose and apply a requirements engineering approach that focuses on security and privacy properties and takes into account various stakeholder interests. The proposed methodology facilitates the integration of security and privacy by design into the requirements engineering process. Thus, specific, detailed security and privacy requirements can be implemented from the very beginning of a software project. The method is applied to an exemplary application scenario in the logistics industry. The approach includes the application of threat and risk rating methodologies, a technique to derive technical requirements from legal texts, as well as a matching process to avoid duplication and accumulate all essential requirements.
We present source code patterns that are difficult for modern static code analysis tools. Our study comprises 50 different open source projects in both a vulnerable and a fixed version for XSS vulnerabilities reported with CVE IDs over a period of seven years. We used three commercial and two open source static code analysis tools. Based on the reported vulnerabilities we discovered code patterns that appear to be difficult to classify by static analysis. The results show that code analysis tools are helpful, but still have problems with specific source code patterns. These patterns should be a focus in training for developers.
The evaluation of the effectiveness of different machine learning algorithms on a publicly available database of signals derived from wearable devices is presented, with the goal of optimizing human activity recognition and classification. Among the wide range of body signals, we chose two, namely the photoplethysmographic signal (optically detected subcutaneous blood volume) and the tri-axial acceleration signal, which are easy to acquire simultaneously using widespread commercial devices (e.g. smartwatches) as well as custom wearable wireless devices designed for sport, healthcare, or clinical purposes. To this end, two widely used algorithms (decision tree and k-nearest neighbor) were tested, and their performance was compared with two recent algorithms (particle Bernstein and a Monte Carlo-based regression) in terms of both accuracy and processing time. A data preprocessing phase was also considered to improve the performance of the machine learning procedures and to reduce the problem size, and a detailed analysis of the compression strategy and results is presented.
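A hedged sketch of how the two classical baselines could be evaluated with scikit-learn (the feature matrix and labels below are random placeholders, not the database or features used in the study):

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

# Placeholder data: windows of features extracted from PPG and tri-axial
# acceleration signals (X) with activity labels (y).
rng = np.random.default_rng(0)
X = rng.random((200, 12))
y = rng.integers(0, 4, size=200)

for clf in (DecisionTreeClassifier(max_depth=8),
            KNeighborsClassifier(n_neighbors=5)):
    scores = cross_val_score(clf, X, y, cv=5)   # 5-fold cross-validation
    print(type(clf).__name__, scores.mean())
```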
Good sleep is crucial for a healthy life of every person. Unfortunately, its quality often decreases with aging. A common approach to measuring the sleep characteristics is based on interviews with the subjects or letting them fill in a daily questionnaire and afterward evaluating the obtained data. However, this method has time and personal costs for the interviewer and evaluator of responses. Therefore, it would be important to execute the collection and evaluation of sleep characteristics automatically. To do that, it is necessary to investigate the level of agreement between measurements performed in a traditional way using questionnaires and measurements obtained using electronic monitoring devices. The study presented in this manuscript performs this investigation, comparing such sleep characteristics as "time going to bed", "total time in bed", "total sleep time" and "sleep efficiency". A total number of 106 night records of elderly persons (aged 65+) were analyzed. The results achieved so far reveal the fact that the degree of agreement between the two measurement methods varies substantially for different characteristics, from 31 minutes of mean difference for "time going to bed" to 77 minutes for "total sleep time". For this reason, a direct exchange of objective and subjective measuring methods is currently not possible.
Polysomnography is the gold standard for a sleep study and provides very accurate results, but its costs (both personnel and material) are quite high. Therefore, the development of a low-cost system for overnight breathing and heartbeat monitoring, which provides more comfort while recording the data, is a well-motivated challenge. The system proposed in this manuscript is based on resistive pressure sensors installed under the mattress. These sensors can measure the slight pressure changes provoked by breathing and heartbeat. The captured signal requires advanced processing, like applying filters and amplifiers, before the analog signal is ready for the next step. Then, the output signal is digitalized and further processed by an algorithm that performs custom filtering before it can recognize breathing and heart rate in real time. The result can be directly visualized. Furthermore, a CSV file is created containing the raw data, timestamps, and unique IDs to facilitate further processing. The achieved results are promising, and the average deviation from a reference device is about 4 bpm.
Cardiovascular diseases are directly or indirectly responsible for up to 38.5% of all deaths in Germany and thus represent the most frequent cause of death. At present, heart diseases are mainly discovered by chance during routine visits to the doctor or when acute symptoms occur. However, there is no practical method to proactively detect diseases or abnormalities of the heart in the daily environment and to take preventive measures for the person concerned. Long-term ECG devices, as currently used by physicians, are simply too expensive, impractical, and not widely available for everyday use. This work aims to develop an ECG device suitable for everyday use that can be worn directly on the body. For this purpose, an already existing hardware platform is analyzed, and the corresponding potential for improvement is identified. A precise picture of the existing data quality is obtained by metrological examination, and corresponding requirements are defined. Based on these identified optimization potentials, a new ECG device is developed. The revised ECG device is characterized by a high integration density and combines all components directly on one board except the battery and the ECG electrodes. The compact design allows the device to be attached directly to the chest. An integrated microcontroller allows digital signal processing without the need for an additional computer. Central features of the evaluation are a peak detection for detecting R-peaks and a calculation of the current heart rate based on the RR interval. To ensure the validity of the detected R-peaks, a model of the anatomical conditions is used. Thus, unrealistic RR intervals can be excluded. The wireless interface allows continuous transmission of the calculated heart rate. Following the development of hardware and software, the results are verified, and appropriate conclusions about the data quality are drawn. As a result, a very compact and wearable ECG device with different wireless technologies, data storage, and evaluation of RR intervals was developed. Some tests yielded runtimes of up to 24 hours with wireless LAN activated and streaming.
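The heart-rate calculation and the plausibility check on the RR interval can be sketched as follows (an illustration of the idea, not the device firmware; the thresholds are assumptions):

```python
def heart_rate_bpm(rr_ms, prev_bpm=None, max_jump=30.0):
    """Instantaneous heart rate from one R-R interval in milliseconds with a
    simple plausibility check: intervals outside a physiological window or
    implausible jumps relative to the previous rate are rejected."""
    if not 300.0 <= rr_ms <= 2000.0:       # roughly 30-200 bpm
        return prev_bpm
    bpm = 60000.0 / rr_ms
    if prev_bpm is not None and abs(bpm - prev_bpm) > max_jump:
        return prev_bpm
    return bpm
```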
In previous studies, we used a method for detecting stress that was based exclusively on heart rate and ECG to differentiate between situations such as mental stress, physical activity, relaxation, and rest. As a response of the heart to these situations, we observed different behavior in the root mean square of successive differences of heartbeats (RMSSD). This study aims to analyze virtual reality, experienced via a virtual reality headset, as an effective stressor for future work. The RMSSD value is an important marker for the parasympathetic effect on the heart and can provide information about stress. For these measurements, the RR intervals were collected using a chest belt; no additional sensors were used for the analysis. We conducted experiments with ten subjects who had to drive a simulator for 25 minutes using monitors and 25 minutes using a virtual reality headset. Before starting and after finishing each simulation, the subjects had to complete a survey describing their mental state. The experiment results show that driving with a virtual reality headset has some influence on the heart rate and RMSSD, but it does not significantly increase the stress of driving.
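The RMSSD marker referred to above is computed from the recorded RR intervals as the root mean square of their successive differences, for example:

```python
import numpy as np

def rmssd(rr_intervals_ms):
    """Root mean square of successive differences of R-R intervals (ms)."""
    diffs = np.diff(np.asarray(rr_intervals_ms, dtype=float))
    return float(np.sqrt(np.mean(diffs ** 2)))

print(rmssd([812, 798, 830, 845, 810]))   # small illustrative input
```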
This work is a comparative study of survey tools intended to help developers select a suitable tool for application in an AAL environment. The first step was to identify the basic required functionality of survey tools used with AAL technologies and to compare these tools by their functionality and intended assignments. The comparison was derived from the data obtained, previous literature studies, and further technical data. A list of requirements was compiled and ordered by relevance to the target application domain. With the help of an integrated assessment method, a generalized estimate value was calculated and the result is explained. Finally, the planned application of this tool in a running project is described.
This paper presents the implementation of deep learning methods for sleep stage detection using three signals that can be measured in a non-invasive way: the heartbeat signal, the respiratory signal, and the movement signal. Since the signals are measurements taken over time, the problem is treated as time-series classification. The deep learning methods chosen to solve the problem are a convolutional neural network and a long short-term memory network. The input data are structured as time-series sequences of the mentioned signals representing 30-second epochs, which is the standard interval for sleep analysis. The records used belong to 23 subjects overall, divided into two subsets: records from 18 subjects were used for training and records from 5 subjects for testing. For detecting four sleep stages, REM (rapid eye movement), Wake, Light sleep (Stage 1 and Stage 2), and Deep sleep (Stage 3 and Stage 4), the accuracy of the model is 55% and the F1 score is 44%. For five stages, REM, Stage 1, Stage 2, Deep sleep (Stage 3 and 4), and Wake, the model gives an accuracy of 40% and an F1 score of 37%.
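A minimal PyTorch sketch of a network of this kind (layer sizes, channel counts, and sampling rate are illustrative assumptions, not the architecture reported above):

```python
import torch
import torch.nn as nn

class SleepStageNet(nn.Module):
    """CNN + LSTM classifier for 30-s epochs of heartbeat, respiration and
    movement signals; input shape: (batch, 3 channels, samples)."""
    def __init__(self, n_classes=4):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv1d(3, 16, kernel_size=7, stride=2, padding=3), nn.ReLU(),
            nn.Conv1d(16, 32, kernel_size=7, stride=2, padding=3), nn.ReLU(),
        )
        self.lstm = nn.LSTM(input_size=32, hidden_size=64, batch_first=True)
        self.fc = nn.Linear(64, n_classes)

    def forward(self, x):
        z = self.cnn(x)              # (batch, 32, T')
        z = z.transpose(1, 2)        # (batch, T', 32) for the LSTM
        _, (h, _) = self.lstm(z)
        return self.fc(h[-1])        # logits per sleep stage

logits = SleepStageNet()(torch.randn(8, 3, 3000))   # 8 epochs of dummy data
```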
For monitoring sleep at home, non-invasive methods are particularly well suited. The signals that are most frequently monitored are heart rate and respiratory rate. Ballistocardiography (BCG) is a technique in which the heart rate is measured from the mechanical vibrations of the body during each cardiac cycle. Review articles on the topic have been published recently. In a first approach, this study evaluates whether the heart rate can be detected by means of BCG. The essential constraints are whether this succeeds when the sensor is positioned under the mattress and when low-cost sensors are used.
Sleep apnea is a common sleep disorder with various effects on everyday life; for example, daytime sleepiness has been reported for about 25 % of patients with obstructive sleep apnea (OSA). The aim of this work is the development of a system that enables non-invasive detection of sleep apnea in the home environment.
In this contribution, a machine learning method for sleep stage detection is developed. Common methods of sleep analysis are based on polysomnography (PSG). The presented approach is based on signals that can be measured exclusively non-invasively in a home environment. Movement, heartbeat, and respiration signals can be captured comparatively easily, but this makes the detection of sleep stages more difficult. The signals are structured as time series and divided into epochs. The performance of the machine learning approach is compared with polysomnography and evaluated.
Seamless Learning Platform
(2020)
Overcoming the seam in learning during a degree program between the university context and professional practice is a major challenge because the relevant actors (including lecturers, learners, and company representatives) are separated in time, space, and organization (Milrad et al., 2013). A seamless-learning-based design of a course built on agile values and methods (including an incremental approach, a focus on learner-centered sessions, and individualized learner feedback) can help to overcome this significant seam. The poster discusses the basic design of such an agile SL concept based on an iterative, incremental approach within a semester cycle of 15 weeks organized in three learning sprints. In addition, it reports on the first teaching experiences of lecturers from both the university and industry as well as the learning experiences of students over the past two years.
Montgomery multiplication is an efficient method for modular arithmetic. Typically, it is used for modular arithmetic over integer rings to avoid the expensive division required for the modulo reduction. In this work, we consider modular arithmetic over rings of Gaussian integers. Gaussian integers are a subset of the complex numbers such that the real and imaginary parts are integers. In many cases, Gaussian integer rings are isomorphic to ordinary integer rings. We demonstrate that the concept of Montgomery multiplication can be extended to Gaussian integers. Due to the independent calculation of the real and imaginary parts, the computational complexity of the multiplication is reduced compared with ordinary integer modular arithmetic. This concept is suitable for coding applications as well as for asymmetric cryptographic systems, such as elliptic curve cryptography or the Rivest-Shamir-Adleman system.
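For orientation, a sketch of the classical integer Montgomery multiplication that the paper extends to rings of Gaussian integers (requires Python 3.8+ for the modular inverse via pow; operands must first be mapped into the Montgomery domain, i.e. multiplied by r modulo n):

```python
def montgomery_setup(n, r_bits):
    """Precompute constants for Montgomery arithmetic modulo an odd n."""
    r = 1 << r_bits
    n_inv = pow(-n, -1, r)      # n' with n * n' = -1 (mod r)
    return r, n_inv

def mont_mul(a, b, n, r_bits, n_inv):
    """Return a*b*r^-1 mod n using only shifts and masks by r = 2^r_bits
    instead of a costly division by n."""
    r_mask = (1 << r_bits) - 1
    t = a * b
    m = ((t & r_mask) * n_inv) & r_mask
    u = (t + m * n) >> r_bits   # t + m*n is divisible by r by construction
    return u - n if u >= n else u
```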
In this work, we investigate a hybrid decoding approach that combines algebraic hard-input decoding of binary block codes with soft-input decoding. In particular, an acceptance criterion is proposed which determines the reliability of a candidate codeword. For many received codewords the stopping criterion indicates that the hard-decoding result is sufficiently reliable, and the costly soft-input decoding can be omitted. The proposed acceptance criterion significantly reduces the decoding complexity. For simulations we combine the algebraic hard-input decoding with ordered statistics decoding, which enables near maximum likelihood soft-input decoding for codes of small to medium block lengths.
Multi-dimensional spatial modulation is a multiple-input/multiple-output wireless transmission technique that uses only a few active antennas simultaneously. The computational complexity of the optimal maximum-likelihood (ML) detector at the receiver increases rapidly as more transmit antennas or larger modulation orders are employed. ML detection may be infeasible for higher bit rates. Many suboptimal detection algorithms for spatial modulation use two-stage detection schemes where the set of active antennas is detected in the first stage and the transmitted symbols in the second stage. Typically, these detection schemes use the ML strategy for the symbol detection. In this work, we consider a suboptimal detection algorithm for the second detection stage. This approach combines equalization and list decoding. We propose an algorithm for multi-dimensional signal constellations with a reduced search space in the second detection stage through set partitioning. In particular, we derive a set partitioning from the properties of Hurwitz integers. Simulation results demonstrate that the new algorithm achieves near-ML performance. It significantly reduces the complexity when compared with conventional two-stage detection schemes. Multi-dimensional constellations in combination with suboptimal detection can even outperform conventional signal constellations in combination with ML detection.
Spatial modulation is a low-complexity multiple-input/multiple-output transmission technique. The recently proposed spatial permutation modulation (SPM) extends the concept of spatial modulation. It is a coding approach where the symbols are dispersed in space and time. In the original proposal of SPM, short repetition codes and permutation codes were used to construct a space-time code. In this paper, we propose a similar coding scheme that combines permutation codes with codes over Gaussian integers. Short codes over Gaussian integers have good distance properties. Furthermore, the code alphabet can directly be applied as the signal constellation, hence no mapping is required. Simulation results demonstrate that the proposed coding approach outperforms SPM with repetition codes.
Many resource-constrained systems still rely on symmetric cryptography for verification and authentication. Asymmetric cryptographic systems provide higher security levels but are computationally very intensive. Hence, embedded systems can benefit from hardware assistance, i.e., coprocessors optimized for the required public key operations. In this work, we propose an elliptic curve cryptographic coprocessor design for resource-constrained systems. Many such coprocessor designs consider only special (Solinas) prime fields, which enable low-complexity modulo arithmetic. Other implementations support arbitrary prime curves using the Montgomery reduction. These implementations typically require more time for the point multiplication. We present a coprocessor design that has low area requirements and enables a trade-off between performance and flexibility. The point multiplication can be performed either using a fast arithmetic based on Solinas primes or using a slower, but flexible, Montgomery modular arithmetic.
Side Channel Attack Resistance of the Elliptic Curve Point Multiplication using Gaussian Integers
(2020)
Elliptic curve cryptography is a cornerstone of embedded security. However, hardware implementations of the elliptic curve point multiplication are prone to side channel attacks. In this work, we present a new key expansion algorithm which improves the resistance against timing and simple power analysis attacks. Furthermore, we consider a new concept for calculating the point multiplication, where the points of the curve are represented as Gaussian integers. Gaussian integers are a subset of the complex numbers, such that the real and imaginary parts are integers. Since Gaussian integer fields are isomorphic to prime fields, this concept is suitable for many elliptic curves. Representing the key by a Gaussian integer expansion is beneficial to reduce the computational complexity and the memory requirements of a secure hardware implementation.
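As a generic illustration of a countermeasure of this flavour (not the Gaussian-integer key expansion of the paper), a Montgomery-ladder point multiplication performs one addition and one doubling per processed key bit regardless of the bit value, so the operation sequence does not leak the key through timing or simple power analysis. The curve parameters below are toy values.

```python
A, B, P = 2, 3, 97            # toy curve y^2 = x^3 + 2x + 3 over GF(97)
INF = None                    # point at infinity

def ec_add(p1, p2):
    """Affine point addition/doubling on the toy curve."""
    if p1 is INF: return p2
    if p2 is INF: return p1
    (x1, y1), (x2, y2) = p1, p2
    if x1 == x2 and (y1 + y2) % P == 0:
        return INF
    if p1 == p2:
        if y1 == 0:
            return INF
        lam = (3 * x1 * x1 + A) * pow(2 * y1, -1, P) % P
    else:
        lam = (y2 - y1) * pow(x2 - x1, -1, P) % P
    x3 = (lam * lam - x1 - x2) % P
    y3 = (lam * (x1 - x3) - y1) % P
    return (x3, y3)

def ladder_mul(k, point, bits=8):
    """Montgomery ladder: computes k*point with one addition and one
    doubling per bit, independent of the key bit values."""
    r0, r1 = INF, point
    for i in reversed(range(bits)):
        if (k >> i) & 1:
            r0, r1 = ec_add(r0, r1), ec_add(r1, r1)
        else:
            r0, r1 = ec_add(r0, r0), ec_add(r0, r1)
    return r0

print(ladder_mul(5, (0, 10)))   # (0, 10) lies on the toy curve
```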
This document presents a new, complete standalone system for the recognition of sleep apnea using signals from pressure sensors placed under the mattress. The developed hardware part of the system is tuned to filter and amplify the signal. Its software part performs more accurate signal filtering and identification of apnea events. The overall achieved accuracy of recognizing apnea occurrences is 91%, with an average measured recognition delay of about 15 seconds, which confirms the suitability of the proposed method for future employment. The main aim of the presented approach is to support the healthcare system with a cost-efficient tool for the recognition of sleep apnea in the home environment.
Ballistocardiography is a technique that measures the heart rate from the mechanical vibrations of the body caused by the heart's movement. In this work, a novel non-invasive device placed under the mattress of a bed estimates the heart rate using ballistocardiography. Different algorithms for heart rate estimation have been developed.
Methods based exclusively on heart rate hardly allow differentiating between physical activity, stress, relaxation, and rest, which is why an additional sensor, such as an activity/movement sensor, was added for detection and classification. The response of the heart to physical activity, stress, relaxation, and no activity can be very similar. In this study, we observe the influence of induced stress and analyze which metrics could be considered for its detection. The changes in the root mean square of successive differences (RMSSD) provide information about physiological changes. A set of measurements collecting the RR intervals was taken, and these intervals are used as a parameter to distinguish four different stages. Parameters like skin conductivity or skin temperature were not used because the main aim is to keep the number of sensors and devices to a minimum and thereby increase wearability in the future.
The recovery of our body and brain from fatigue directly depends on the quality of sleep, which can be determined from the results of a sleep study. The classification of sleep stages is the first step of this study and includes the measurement of vital data and their further processing. The non-invasive sleep analysis system is based on a hardware sensor network of 24 pressure sensors providing sleep phase detection. The pressure sensors are connected to an energy-efficient microcontroller via a system-wide bus. A significant difference between this system and other approaches is the innovative way in which the sensors are placed under the mattress. This feature facilitates the continuous use of the system without any noticeable influence on the sleeping person. The system was tested by conducting experiments that recorded the sleep of various healthy young people. Results indicate the potential to capture respiratory rate and body movement.
Renders and joint mortars, especially in exposed masonry, fulfil essential technical, building-physics, and aesthetic functions for the facade and contribute significantly to the preservation of historic building fabric. Monuments frequently exhibit evidence of historic renders and joint mortars as well as processing methods and craft techniques that are no longer in common use or mastered today. Their preservation is therefore of particular importance. As the outermost 'point of attack' of the building envelope, renders and joint mortars are freely exposed to the weather and, in part, to chemical, biological, and mechanical loads, and in splash-water zones they are also stressed by de-icing salts. The resulting damage, which is also caused by ageing and fatigue, is described, and options for preserving and repairing the renders are presented.
Crushed-brick concretes of the post-war years and modern recycled-aggregate concretes: sustainability illustrated by example structures
(2018)
The construction sector is one of the largest consumers of natural resources and energy in the German economy. In many cases this is nevertheless ecologically and economically justifiable, because components and structures have a considerably longer technical service life than other "products" (in the case of concrete, between roughly 25 and 100 years) and because high recycling rates are achieved after demolition. With regard to savings potential, the mass construction material concrete plays a central role. In addition to saving the very energy-intensive cement and developing substitute binders, the aggregates, which make up the largest share of concrete, are also in focus. Based on the authors' own results from a DBU-funded research project on recycled-aggregate concretes, the article presents the sustainability of concrete using example structures and reports on the current state of the use of recycled concretes in Germany. The focus is on the value of mineral building materials made from comparatively few, predominantly natural components, as in concrete, on its repair options, and on its better preconditions for later recycling compared with many modern, plastic-containing composite materials.
What has apparently been state of the art for many years in municipalities in neighbouring countries such as Switzerland, Austria, or the Netherlands is a "special construction method" on Germany's municipal roads: concrete pavements. Although this construction method was frequently used, particularly in the new federal states, it is hardly applied today on larger areas such as entire streets, for example in residential areas (Fig. 1). The reasons apparently lie in poor experiences regarding a comfortable service life and in the need for "easier" access to the supply and disposal facilities located beneath the road surface. As a result of increasingly frequent damage such as rutting, deformations, and other defects, heavily loaded traffic areas such as bus stops, bus lanes, and roundabouts are more and more often built in concrete instead of asphalt or paving. Besides limited experience, the reasons for neglecting concrete construction in the municipal context certainly also include more complex planning, higher construction costs, and more demanding installation, especially in connection with embedded utilities, maintenance measures, and third-party works on supply and disposal facilities. Proof that concrete construction is more economical over its life cycle than, for example, asphalt construction is also still outstanding. The topic of "use of concrete surfaces in municipalities" is very extensive and cannot be covered in a single lecture. The following remarks therefore address the fundamentals of planning, construction, and economic efficiency of municipal traffic areas built in concrete. Unfortunately, not all special features and details, such as materials (glass fibre), can be considered. The aim of the lecture is to show general possibilities for the use of concrete surfaces in the municipal sector. Special thanks go to the Straßenbauamt Böblingen and to Baudirektor Andreas Klein, whose personal experience could be incorporated here.
Low temperature carburizing of a series of austenitic stainless steels with various combinations of chromium and nickel equivalents was performed. The investigation of the response to low temperature carburizing for three stainless steels with various Cr- and Ni-equivalents showed that the carbon uptake depends significantly on the chemical composition of the base material. The higher carbon content in the expanded austenite layer of specimen 6 (1.4565) and specimen 4 (1.4539/AISI 904L) compared to specimen 2 (1.4404/AISI 316L) is assumed to be mainly related to the difference in the specimens' chromium content. More chromium leads to more lattice expansion. Along with the higher carbon content, higher hardness values and higher compressive residual stresses are introduced in the expanded austenite zone than for low temperature carburized AISI 316L. The residual stresses obtained from the X-ray diffraction lattice strain investigation depend strongly on the chosen X-ray elastic constants. Presently, no values are known for carbon (or nitrogen) stabilized expanded austenite. Nevertheless, first-principles elastic constants for γ′-Fe4C appear to provide realistic residual stress values. Magnetic force microscopy and measurement with an eddy current probe indicate that austenitic stainless steels can become ferromagnetic upon carburizing, similar to low temperature nitriding. The apparent transition from para- to ferromagnetism cannot be attributed entirely to the interstitially dissolved carbon content in the formed expanded austenite layer but appears to depend also on the metallic composition of the alloy, in particular the Ni content.
When designing drying processes for sensitive biological foodstuffs like fruit or vegetables, energy and time efficiency as well as product quality are gaining more and more importance. All of these are greatly influenced by the different drying parameters (e.g. air temperature, air velocity, and dew point temperature) in the process. In the sterilization of food products, the cooking value is widely used as a cross-link between these parameters. In a similar way, the so-called cumulated thermal load (CTL) was introduced for drying processes. This was possible because most quality changes mainly depend on drying air temperature and drying time. In a first approach, the CTL was therefore defined as the time integral of the surface temperature of the agricultural product. When conducting experiments with mangoes and pineapples, however, it was found that the CTL as originally defined had to be adjusted to a more practical form. The definition of the CTL was therefore improved, and the behaviour of the adjusted CTL (CTLad) was investigated in the drying of pineapples and mangoes. On the basis of these experiments and the previous work on the cooking value, it was found that the CTLad needs further optimization to allow comparison of a wide variety of different products as well as different quality parameters.
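In the first approach mentioned above, the cumulated thermal load is simply the time integral of the product's surface temperature over the drying time:

```latex
\mathrm{CTL} = \int_{0}^{t_{\mathrm{dry}}} T_{\mathrm{surface}}(t)\,\mathrm{d}t .
```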
Flooded Edge Gateways
(2019)
Increasing numbers of internet-compatible devices, in particular in the context of IoT, usually cause increasing amounts of data. The processing and analysis of a continuously growing amount of data in real-time by means of cloud platforms cannot be guaranteed anymore. Approaches of Edge Computing decentralize parts of the data analysis logics towards the data sources in order to control the data transfer rate to the cloud through pre-processing with predefined quality-of-service parameters. In this paper, we present a solution for preventing overloaded gateways by optimizing the transfer of IoT data through a combination of Complex Event Processing and Machine Learning. The presented solution is completely based on open-source technologies and can therefore also be used in smaller companies.
With the increased deployment of biometric authentication systems, some security concerns have also arisen. In particular, presentation attacks directed to the capture device pose a severe threat. In order to prevent them, liveness features such as the blood flow can be utilised to develop presentation attack detection (PAD) mechanisms. In this context, laser speckle contrast imaging (LSCI) is a technology widely used in biomedical applications in order to visualise blood flow. We therefore propose a fingerprint PAD method based on textural information extracted from pre-processed LSCI images. Subsequently, a support vector machine is used for classification. In the experiments conducted on a database comprising 32 different artefacts, the results show that the proposed approach classifies correctly all bona fides. However, the LSCI technology experiences difficulties with thin and transparent overlay attacks.
Online-based business models, such as shopping platforms, have added new possibilities for consumers over the last two decades. Aside from basic differences to other distribution channels, customer reviews on such platforms have become a powerful tool, which bestows an additional source for gaining transparency to consumers. Related research has, for the most part, been labelled under the term electronic word-of-mouth (eWOM). An approach, providing a theoretical basis for this phenomenon, will be provided here. The approach is mainly based on work in the field of consumer culture theory (CCT) and on the concept of co-creation. The work of several authors in these streams of research is used to construct a culturally informed resource-based theory, as advocated by Arnould & Thompson and Algesheimer & Gurâu.
This article describes a research project that aims at investigating individual entrepreneurial founders concerning their shift tendencies of decision-making logics, especially during the respective phases of the venture creation process. Prior studies found that team founders show a hybrid perspective on strategic decision-making: they not only combine causation (planning-based) and effectuation (flexible) logics but also show logic shifts and re-shifts over time. Because founders' social identity shapes early structuring processes, this article argues for eliminating in-group influences of multi-founder ventures and focusing on individuals in order to make specific assessments of logic shifts and re-shifts. An extensive literature review, a pre-selection test, and a qualitative case study design form the empirical body of the paper. Accordingly, this study applies a qualitative, process-oriented research approach to investigate shifts of decision-making logics of individual founders in new venture creation over time.
The aim of the master's thesis was to characterize the moisture properties of screeds under different climates using sorption isotherms. The few literature references on sorption isotherms of mineral screeds essentially relate to calcium sulphate screeds and standardized cement screeds (without differentiating the cement type: Portland cements, blast-furnace cements, or CEM I, CEM II, CEM III, etc.) and, as a rule, only to one air temperature (20 °C). The thesis additionally set out to investigate the ternary rapid-setting cements that have been commercially available for about 20 years and to include the temperatures of 15 °C and 25 °C, which are of practical relevance on site. The effects of the climatic conditions on the construction site (season, humidity, temperature) on the hydration process of the screeds were also examined. In each case, not just one representative of the various binder systems but at least two different screeds from different manufacturers were included. In combination with the results of the microstructural investigations (including mercury intrusion porosimetry), it is demonstrated why the cement-based and rapid-cement-based screeds behave completely differently from the calcium sulphate-based screeds. This different behaviour is also one of the reasons why the moisture content of screeds cannot be assessed with the KRL method. For this reason, a comparison of the "KRL method" of material moisture measurement with the "CM method", which is customary in the trade and has proven itself in practice for decades, follows.