Refine
Document Type
- Conference Proceeding (43)
- Article (7)
- Master's Thesis (5)
- Report (4)
- Bachelor Thesis (1)
- Doctoral Thesis (1)
- Other Publications (1)
- Working Paper (1)
Language
- English (63)
Keywords
- 2D environment laser data (1)
- AAL (2)
- Accelerometers (1)
- Activity monitoring (1)
- Ambient assisted living (1)
- Analog-Digital-Umsetzer (analog-to-digital converter) (1)
- Analog-to-digital converter; SPICE simulation; analog integrated circuit design (1)
- Arbeitsablauf (workflow) (1)
- Assisted living (1)
- Auftragsabwicklung (order processing) (1)
- Automotive (1)
- Autonomous Mobile Indoor Robot (1)
- BCG (1)
- Ballistocardiography (1)
- Bio-vital data (1)
- Biomedical Signal Capturing (1)
- Biomedical signals (1)
- Biosignal analysis (1)
- Biosignal processing (1)
- Biovital signal (2)
- Blockchain (1)
- Breathing (1)
- Breathing rate (1)
- Client Management System; Haaland Internet Productions; Client Relationship Management; workflow (1)
- Codierung (coding) (1)
- Common Criteria (1)
- Convolution (1)
- Convolutional neural network (1)
- Correlation (1)
- Cross Site Scripting (1)
- Data fusion (1)
- Deep Convolutional Neural Network (1)
- Deep learning (2)
- Distributed ledger (1)
- Driving (1)
- Driving safety (1)
- Driving stress (1)
- Drowsiness (1)
- E-Health (1)
- ECG (5)
- EKG (1)
- EMG (1)
- Electrocardiography (2)
- Electromyography (1)
- Erkennung (recognition) (1)
- FACS (1)
- FSR sensor (1)
- Facial Action Coding System (1)
- Force resistor sensor (1)
- Freistellungssemesterbericht (sabbatical semester report) (4)
- GDPR (1)
- Health information exchange (1)
- Heart rate (5)
- Heart rate variability (1)
- I3M (1)
- Illuminance (1)
- Impedance measurement (1)
- Information Security Management (1)
- Internet of Things (1)
- Interpretability (1)
- IoT (1)
- Kundenmanagement (customer management) (1)
- Laser-Sensor (laser sensor) (1)
- Laserimpuls (laser pulse) (1)
- Lasermesstechnik (laser measurement technology) (1)
- Low-pass filters (1)
- Machine learning (2)
- Medical systems (1)
- Mimik (facial expressions) (1)
- Monitoring (1)
- Movement detection (3)
- Multinomial logistic regression (2)
- Mund-Kiefer-Gesichtsbereich (oral and maxillofacial region) (1)
- Navigation (1)
- Navigieren (navigating) (1)
- Non-invasive (1)
- Non-invasive sleep study (2)
- PHP (1)
- PPG (1)
- PSQI (1)
- Pattern recognition (1)
- Photoplethysmography (1)
- Physical activity (1)
- Physiognomik (physiognomy) (1)
- Posture tracking (1)
- Pressure sensor (1)
- Pressure sensors (2)
- Privacy by Design (1)
- Probabilistic modeling (1)
- Probability of Exploitation (1)
- Remediation Strategies (1)
- Requirements Engineering (1)
- Residual Neural Network (1)
- Respiration Rate (1)
- Respiration rate (1)
- Respiratory sounds (1)
- Roboter (robot) (1)
- SPICE (program) (1)
- SQL Injection (1)
- Scoring Systems (1)
- Security by Design (1)
- Sensor Bed (1)
- Sensor grid (1)
- Sensors (1)
- Signal acquisition (1)
- Signal processing (3)
- Skin (1)
- Sleep (2)
- Sleep Stages (1)
- Sleep apnea (1)
- Sleep apnoea (1)
- Sleep efficiency (1)
- Sleep latency (1)
- Sleep medicine (3)
- Sleep phase (1)
- Sleep positions (2)
- Sleep quality (3)
- Sleep stage classification (1)
- Sleep stages (2)
- Sleep study (10)
- Sleep tracking (1)
- Sleep/Wake states (1)
- Smart bed (1)
- Smart cushion (1)
- Smart home (1)
- Smart-care (1)
- Static Code Analysis (1)
- Statistics (1)
- Stethoscope (1)
- Stress (3)
- Stress detection (2)
- Stress measurement (1)
- System design (1)
- Telemedicine (2)
- Test Suite (1)
- Transistortechnologie (transistor technology) (1)
- Uncertainty (1)
- Vulnerability Prioritization (1)
- Wearable (1)
- Web Services (1)
- chat system (1)
- distance measurement (1)
- facial expression recognition; action unit recognition (1)
- instant messaging system (1)
- low-level feature extraction (1)
- support vector machines (1)
Institute
- Fakultät Informatik (63)
This thesis addresses problems that reports generated by vulnerability scanners impose on the process of vulnerability management, namely (a) an overwhelming amount of data and (b) insufficient prioritization of the scan results.
To support the development of countermeasures to these problems and to allow quantitative evaluation of proposed solutions, two metrics are introduced for their effectiveness and efficiency. These metrics imply a focus on higher-severity vulnerabilities and can be applied to any simplification process for vulnerability scan results, provided it relies on a severity score and an estimated remediation time for each vulnerability.
A priority score is introduced that aims to improve the widely used Common Vulnerability Scoring System (CVSS) base score of each vulnerability, depending on the vulnerability's ease of exploitation, its estimated probability of exploitation, and the probability of its existence.
Patterns between vulnerabilities are discovered within the reports generated by the Open Vulnerability Assessment System (OpenVAS) vulnerability scanner; these patterns identify criteria by which vulnerabilities can be categorized from a remediation actor's standpoint. The categories lay the groundwork of a final simplified report and consist of updates that need to be installed on a host, severe vulnerabilities, vulnerabilities that occur on multiple hosts, and vulnerabilities that will take a lot of time to remediate. The highest potential time savings are found among frequently occurring vulnerabilities and minor and major suggested updates.
Processing of the results provided by the vulnerability scanner and creation of the report are realized in the form of a Python script. The resulting reports are short and to the point and provide a top-down remediation process that should, in theory, minimize the institution's attack surface as quickly as possible. An evaluation of their practicality must follow, as the reports are yet to be introduced into the Information Security Management Lifecycle.
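The thesis does not spell out the exact formula for the priority score, so the following is only a minimal sketch of the idea: the CVSS base score is adjusted by the three factors named in the abstract. The multiplicative weighting and the factor ranges are assumptions made here for illustration.

```python
# Hypothetical sketch of a priority score: a CVSS base score (0-10) is scaled
# by three adjustment factors, each assumed to lie in [0, 1]. The averaging
# scheme is an illustrative assumption, not the thesis' actual formula.

def priority_score(cvss_base, ease_of_exploit, p_exploitation, p_existence):
    """Combine a CVSS base score with three adjustment factors in [0, 1]."""
    for factor in (ease_of_exploit, p_exploitation, p_existence):
        if not 0.0 <= factor <= 1.0:
            raise ValueError("adjustment factors must lie in [0, 1]")
    # Average the three factors and scale the base score with the result,
    # so fully exploitable, certainly present vulnerabilities keep their score.
    adjustment = (ease_of_exploit + p_exploitation + p_existence) / 3.0
    return round(cvss_base * adjustment, 2)
```

Sorting the scan results by such a score would directly yield the top-down remediation order the report aims for.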
Dynamic Real-Time Range Queries (DRRQ) are a common means to handle mobile clients in high-density areas where both the clients requested by the query and the inquirers are mobile. In contrast to the well-known continuous range queries, only a few approaches, such as Adaptive Quad Streaming (AQS), address the mandatory scalability and real-time requirements of these so-called ad-hoc mobility challenges. In this paper we present the highly decentralized solution Adaptive Quad Streaming Flexible (AQSflex) as an extension of the existing, more theoretical AQS approach. Besides a highly distributed cell structure without data structures and a lightweight streaming communication, we use a multi-cell assignment on limited pool resources instead of an idealistic unlimited cell-per-server assignment. The described experimental results show the potential of our local capacity balancing scheme for cell handover in a strongly decentralized setting. Leaves of a cell hierarchy define a kind of self-optimizing fuzzy edge for the processing resources in high-density systems without any centralized controlling or cloud component.
The influence of sleep on human life, including physiological, psychological, and mental aspects, is remarkable. It is therefore essential to apply appropriate therapy in the case of sleep disorders. For this, however, the irregularities must first be recognised, preferably in a way that is convenient for the person concerned. This dissertation, structured as a composition of research articles, presents the development of mathematically based algorithmic principles for a sleep analysis system. The particular focus is on the classification of sleep stages with a minimal set of physiological parameters. In addition, aspects of using the sleep analysis system as part of more complex healthcare systems are explored. The design of hardware for non-obtrusive measurement of relevant physiological parameters and the use of such systems to detect other sleep disorders, such as sleep apnoea, are also addressed. Based on the investigations carried out, multinomial logistic regression was selected as the basis for development. By following a methodical procedure, the number of physiological parameters necessary for the classification of sleep stages was successively reduced to two: respiratory and movement signals. These signals can be measured in a contactless way. A prototype implementation of the developed algorithms was created to validate the proposed method, and an evaluation of 19,324 sleep epochs was carried out. The results, with an achieved accuracy of 73% in the classification of Wake/NREM/REM stages and a Cohen's kappa of 0.44, outperform the state of the art and demonstrate the appropriateness of the selected approach. In the future, this method could enable convenient, cost-effective, and accurate sleep analysis, leading to the detection of sleep disorders at an early stage so that therapy can be initiated as soon as possible, thus improving the general population's health status and quality of life.
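The core of the described method, a multinomial logistic regression over two per-epoch features, can be sketched as follows. The synthetic data, feature names, and cluster centres below are invented for illustration and do not reflect the dissertation's recordings.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Synthetic 30 s epochs with two features each (hypothetically: respiratory
# variability and movement intensity). Centres and spreads are invented.
rng = np.random.default_rng(0)
centres = [(0.2, 0.8), (0.8, 0.1), (0.9, 0.5)]    # Wake, NREM, REM
X = np.vstack([rng.normal(c, 0.05, size=(100, 2)) for c in centres])
y = np.repeat([0, 1, 2], 100)                      # stage label per epoch

# Multinomial logistic regression: scikit-learn's LogisticRegression uses a
# multinomial (softmax) formulation by default for multi-class problems.
clf = LogisticRegression(max_iter=1000).fit(X, y)
```

On real data, each epoch's features would be extracted from the contactless respiratory and movement signals before fitting.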
Systematic Generation of XSS and SQLi Vulnerabilities in PHP as Test Cases for Static Code Analysis
(2022)
Synthetic static code analysis test suites are important for testing the basic functionality of tools. We present a framework that uses different source code patterns to generate Cross Site Scripting and SQL injection test cases. A decision tree is used to determine whether the test cases are vulnerable. The test cases are split into two test suites. The first test suite contains 258,432 test cases that have an influence on the decision trees. The second test suite contains 20 vulnerable test cases with different data flow patterns. The test cases are scanned with two commercial static code analysis tools to show that they can be used to benchmark static code analysis tools and identify their problems. Expert interviews confirm that the decision tree is a solid way to determine the vulnerable test cases and that the test suites are relevant.
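The combinatorial flavour of such a generator can be illustrated with a toy sketch: PHP snippets are assembled from source, sanitizer, and sink patterns, and a single decision rule (standing in for the paper's decision tree) marks the unsanitized flows as vulnerable. All pattern lists below are invented examples, not the framework's actual patterns.

```python
from itertools import product

# Toy illustration (not the paper's framework): combine source, sanitizer,
# and sink patterns into PHP snippets. "%s" as sanitizer means no sanitization.
sources = ['$_GET["p"]', '$_POST["p"]']
sanitizers = ['htmlspecialchars(%s)', '%s']
sinks = ['echo %s;']

def generate_cases():
    """Return (php_code, is_vulnerable) for every pattern combination."""
    cases = []
    for src, san, sink in product(sources, sanitizers, sinks):
        code = '<?php ' + sink % (san % src) + ' ?>'
        vulnerable = san == '%s'   # decision rule: unsanitized flow -> XSS
        cases.append((code, vulnerable))
    return cases
```

Scaling this cartesian product over many sources, data-flow constructs, sanitizers, and sinks is how a generator can reach six-digit test-case counts.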
Interpretability and uncertainty modeling are key factors for medical applications. Moreover, data in medicine are often available as a combination of unstructured data, such as images, and structured predictors, such as a patient's metadata. While deep learning (DL) models are state-of-the-art for image classification, they are often referred to as 'black boxes' because of their lack of interpretability. Moreover, DL models often yield point predictions and are too confident about parameter estimation and outcome predictions.
Statistical regression models, on the other hand, make it possible to obtain interpretable predictor effects and to capture parameter and model uncertainty based on the Bayesian approach. In this thesis, a publicly available melanoma dataset, consisting of skin lesion images and the patient's age, is used to predict melanoma types with a semi-structured model, while interpretable components and model uncertainty are quantified. For the Bayesian models, the transformation model-based variational inference (TM-VI) method is used to determine the posterior distribution of the parameters. Several model constellations consisting of the patient's age and/or skin lesion images were implemented and evaluated. Predictive performance was shown to be best for a combined model of image and patient's age, while providing the interpretable posterior distribution of the regression coefficient remains possible. In addition, integrating uncertainty into the image and tabular parts results in larger variability of the outputs, corresponding to high uncertainty of the single model components.
Background:
One of the most promising areas of health care development is the introduction of telemedicine services and the creation of solutions based on blockchain technology. The study of systems combining both of these domains indicates the ongoing expansion of digital technologies in this market segment.
Objective:
This paper aims to review the feasibility of blockchain technology for telemedicine.
Methods:
The authors identified relevant studies via systematic searches of databases including PubMed, Scopus, Web of Science, IEEE Xplore, and Google Scholar. The suitability of each for inclusion in this review was assessed independently. Owing to the lack of publications, available blockchain-based tokens were discovered via conventional web search engines (Google, Yahoo, and Yandex).
Results:
Of the 40 discovered projects, only 18 met the selection criteria. The 5 most prevalent features of the available solutions (N=18) were medical data access (14/18, 78%), medical service processing (14/18, 78%), diagnostic support (10/18, 56%), payment transactions (10/18, 56%), and fundraising for telemedical instrument development (5/18, 28%).
Conclusions:
These different features (eg, medical data access, medical service processing, epidemiology reporting, diagnostic support, and treatment support) allow us to discuss the possibilities for integration of blockchain technology into telemedicine and health care on different levels. In this area, a wide range of tasks can be identified that could be accomplished based on digital technologies using blockchains.
Preliminary results of homomorphic deconvolution application to surface EMG signals during walking
(2021)
Homomorphic deconvolution is applied to surface EMG (sEMG) signals recorded during walking. Gastrocnemius lateralis and tibialis anterior signals were acquired according to the SENIAM recommendations. MUAP parameters such as amplitude and scale were estimated, while the MUAP shape parameter was kept fixed. This yields a useful time-frequency representation of the sEMG signal. The estimation of the MUAP scale parameter was verified by extracting the mean frequency of the filtered EMG signal, derived from the scale parameter estimated with two different MUAP shape values.
Normal breathing during sleep is essential for people's health and well-being. It is therefore crucial to diagnose apnoea events at an early stage and to apply appropriate therapy. Detection of sleep apnoea is the central goal of the system design described in this article. To develop a correctly functioning system, it is first necessary to clearly define the requirements, as outlined in this manuscript. Furthermore, the selection of an appropriate technology for measuring respiration is of great importance. Therefore, after initial literature research, we analysed three different methods in detail and selected a suitable one according to the determined requirements. After considering all the advantages and disadvantages of the three approaches, we decided to use the impedance-measurement-based one. As a next step, an initial conceptual design of the algorithm for detecting apnoea events was created. As a result, we developed an activity diagram in which the main system components and data flows are visually represented.
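As a conceptual illustration of the detection step (not the algorithm from the article's activity diagram), an apnoea event can be flagged when the respiration-signal envelope stays below a threshold for at least 10 seconds. The threshold and minimum duration below are assumptions chosen for illustration.

```python
# Toy sketch of apnoea-event counting on a respiration-envelope signal.
# `threshold` and `min_duration_s` are illustrative assumptions, not values
# taken from the article.

def count_apnoea_events(envelope, fs, threshold=0.2, min_duration_s=10):
    """Count pauses in `envelope` (sampled at `fs` Hz) lasting >= min_duration_s."""
    min_samples = int(min_duration_s * fs)
    events, run = 0, 0
    for value in envelope:
        run = run + 1 if value < threshold else 0
        if run == min_samples:   # count each sufficiently long pause once
            events += 1
    return events
```

In the described system, the envelope would come from the impedance-based respiration measurement after preprocessing.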
Respiratory diseases are leading causes of death and disability in the world. The recent COVID-19 pandemic also affects the respiratory system. Detecting and diagnosing respiratory diseases requires both medical professionals and a clinical environment. Most of the techniques used to date have also been invasive or expensive.
Some research groups are developing hardware devices and techniques to make non-invasive or even remote respiratory sound acquisition possible. These sounds are then processed and analysed for clinical, scientific, or educational purposes.
We present a literature review of non-invasive sound acquisition devices and techniques.
The results cover a large number of digital tools, such as microphones, wearables, and Internet of Things devices, that can be used in this scope.
Some interesting applications have been found. Some devices ease sound acquisition in a clinical environment, while others enable daily monitoring outside it. We aim to use some of these devices and include the non-invasively recorded respiratory sounds in a Digital Twin system for personalized health.
A residual neural network was adapted and applied to the PhysioNet/Computing in Cardiology Challenge 2020 to detect 24 different classes of cardiac abnormalities from 12-lead ECGs. Additive Gaussian noise, signal shifting, and the classification of signal sections of different lengths were applied to prevent the network from overfitting and to facilitate generalization. Due to the use of a global pooling layer after the feature extractor, the network is independent of the signal's length. On the hidden test set of the challenge, the model achieved a validation score of 0.656 and a full test score of 0.27, placing us 15th out of 41 officially ranked teams (team name: UC_Lab_Kn). These results show the potential of deep neural networks for application to raw data and a complex multi-class, multi-label classification problem, even if the training data come from diverse datasets and are of differing lengths.
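Why a global pooling layer makes the network length-independent can be shown in isolation: whatever the temporal extent of the feature maps produced by the convolutional feature extractor, pooling reduces them to one value per channel. The channel count and segment lengths below are invented for illustration.

```python
import numpy as np

def global_average_pool(feature_maps):
    """Reduce (channels, time) feature maps to a fixed-size (channels,) vector."""
    return feature_maps.mean(axis=1)

# Stand-ins for feature maps from ECG segments of very different lengths
# (shapes are illustrative, not the network's actual dimensions).
short = np.random.rand(64, 250)
long = np.random.rand(64, 5000)
```

Because the pooled descriptor has the same shape for any input length, a single fully connected classification head can follow the feature extractor.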
Ballistocardiography (BCG) can be used to monitor heart rate activity. The accelerometer should have high sensitivity and minimal internal noise; in addition, a low-cost approach was taken into consideration. Several measurements were executed to determine the optimal positioning of a sensor under the mattress to obtain a signal strong enough for further analysis. A prototype of an unobtrusive accelerometer-based measurement system was developed and tested in a conventional bed without any specific extras. The influence of the human sleep position on the output accelerometer data was tested. The obtained results indicate the potential to capture BCG signals using accelerometers. The measurement system can detect heart rate unobtrusively in the home environment.
The main aim of the research presented in this manuscript is to compare the results of objective and subjective measurements of sleep quality for older adults (65+) in the home environment. A total of 73 nights was evaluated in this study. A device placed under the mattress was used to obtain objective measurement data, and a common question on perceived sleep quality was asked to collect the subjective sleep quality level. The achieved results confirm the correlation between objective and subjective measurements of sleep quality, with an average standard deviation equal to 2 of 10 possible quality points.
We compared vulnerable and fixed versions of the source code of 50 different PHP open source projects based on CVE reports for SQL injection vulnerabilities. We scanned the source code with commercial and open source tools for static code analysis. Our results show that five current state-of-the-art tools have issues correctly marking vulnerable and safe code. We identify 25 code patterns that are not detected as a vulnerability by at least one of the tools and 6 code patterns that are mistakenly reported as a vulnerability that cannot be confirmed by manual code inspection. Knowledge of the patterns could help vendors of static code analysis tools, and software developers could be instructed to avoid patterns that confuse automated tools.
We propose and apply a requirements engineering approach that focuses on security and privacy properties and takes into account various stakeholder interests. The proposed methodology facilitates the integration of security and privacy by design into the requirements engineering process. Thus, specific, detailed security and privacy requirements can be implemented from the very beginning of a software project. The method is applied to an exemplary application scenario in the logistics industry. The approach includes the application of threat and risk rating methodologies, a technique to derive technical requirements from legal texts, as well as a matching process to avoid duplication and accumulate all essential requirements.
We present source code patterns that are difficult for modern static code analysis tools. Our study comprises 50 different open source projects in both a vulnerable and a fixed version for XSS vulnerabilities reported with CVE IDs over a period of seven years. We used three commercial and two open source static code analysis tools. Based on the reported vulnerabilities we discovered code patterns that appear to be difficult to classify by static analysis. The results show that code analysis tools are helpful, but still have problems with specific source code patterns. These patterns should be a focus in training for developers.
An evaluation of the effectiveness of different machine learning algorithms on a publicly available database of signals derived from wearable devices is presented, with the goal of optimizing human activity recognition and classification. Among the wide range of body signals, we chose two, namely photoplethysmographic (optically detected subcutaneous blood volume) and tri-axis acceleration signals, which are easy to acquire simultaneously using widespread commercial devices (e.g. smartwatches) as well as custom wearable wireless devices designed for sport, healthcare, or clinical purposes. To this end, two widely used algorithms (decision tree and k-nearest neighbor) were tested, and their performance was compared to that of two recent algorithms (particle Bernstein and a Monte Carlo-based regression) in terms of both accuracy and processing time. A data preprocessing phase was also considered to improve the performance of the machine learning procedures and to reduce the problem size; a detailed analysis of the compression strategy and its results is also presented.
Good sleep is crucial for every person's healthy life. Unfortunately, its quality often decreases with aging. A common approach to measuring sleep characteristics is based on interviews with the subjects or letting them fill in a daily questionnaire and afterwards evaluating the obtained data. However, this method has time and personnel costs for the interviewer and the evaluator of the responses. It would therefore be important to collect and evaluate sleep characteristics automatically. To do that, it is necessary to investigate the level of agreement between measurements performed in the traditional way using questionnaires and measurements obtained using electronic monitoring devices. The study presented in this manuscript performs this investigation, comparing such sleep characteristics as "time going to bed", "total time in bed", "total sleep time", and "sleep efficiency". A total of 106 night records of elderly persons (aged 65+) were analyzed. The results achieved so far reveal that the degree of agreement between the two measurement methods varies substantially across characteristics, from a mean difference of 31 minutes for "time going to bed" to 77 minutes for "total sleep time". For this reason, directly substituting one measuring method for the other is currently not possible.
Polysomnography is the gold standard for a sleep study, and it provides very accurate results, but its costs (both personnel and material) are quite high. Therefore, the development of a low-cost system for overnight breathing and heartbeat monitoring, which provides more comfort while recording the data, is a well-motivated challenge. The system proposed in this manuscript is based on resistive pressure sensors installed under the mattress. These sensors can measure the slight pressure changes provoked by breathing and heartbeat. The captured signal requires advanced processing, such as filtering and amplification, before the analog signal is ready for the next step. The output signal is then digitalized and further processed by an algorithm that performs custom filtering before it can recognize breathing and heart rate in real time. The result can be visualized directly. Furthermore, a CSV file is created containing the raw data, timestamps, and unique IDs to facilitate further processing. The achieved results are promising, and the average deviation from a reference device is about 4 bpm.
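The rate-extraction idea behind such a system can be sketched with a toy spectral approach (not the authors' real-time algorithm): a pressure signal is modelled as a slow breathing component plus a faster heartbeat component, and the dominant frequency in each band is converted to a per-minute rate. The sampling rate, band limits, and component frequencies are assumptions for illustration.

```python
import numpy as np

# Synthetic 60 s pressure signal: a 0.25 Hz breathing component (15 breaths
# per minute) plus a weaker 1.2 Hz heartbeat component (72 bpm).
fs = 50.0                                  # sampling rate in Hz (assumed)
t = np.arange(0, 60, 1 / fs)
signal = np.sin(2 * np.pi * 0.25 * t) + 0.2 * np.sin(2 * np.pi * 1.2 * t)

def dominant_rate(x, fs, f_lo, f_hi):
    """Return the strongest frequency in [f_lo, f_hi] Hz as events per minute."""
    spectrum = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(len(x), 1 / fs)
    band = (freqs >= f_lo) & (freqs <= f_hi)
    return 60.0 * freqs[band][np.argmax(spectrum[band])]
```

Separating the two bands plays the role of the custom filtering stage; on real sensor data, amplification and noise suppression would precede this step.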