To learn from the past, we analyse 1,088 "computer as a target" judgements for evidential reasoning by extracting four case elements: decision, intent, fact, and evidence. Analysing the decision element is essential for studying the scale of sentence severity in cross-jurisdictional comparisons. Examining the intent element can facilitate future risk assessment. Analysing the fact element can enhance an organization's capability of analysing criminal activities for future offender profiling. Examining the evidence used against a defendant in previous judgements can facilitate the preparation of evidence for upcoming legal disclosure. Following the concepts of argumentation diagrams, we develop an automatic judgement summarizing system to enhance the accessibility of judgements and avoid repeating past mistakes. Inspired by the feasibility of extracting legal knowledge for argument construction and employing grounds of inadmissibility for probability assessment, we conduct evidential reasoning of kernel traces for forensic readiness. We integrate the narrative methods from attack graphs/languages for preventing confirmation bias, the argumentative methods from argumentation diagrams for constructing legal arguments, and the probabilistic methods from Bayesian networks for comparing hypotheses.
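The probabilistic step can be made concrete with a minimal sketch of posterior-odds hypothesis comparison; the hypotheses, priors, and likelihoods below are illustrative assumptions, not values from the study.

```python
# Minimal sketch of comparing two hypotheses about observed evidence
# (e.g. a kernel trace) via posterior odds. All numbers are
# illustrative placeholders, not values from the study.

def posterior_odds(prior_h1: float, prior_h2: float,
                   lik_e_h1: float, lik_e_h2: float) -> float:
    """Posterior odds of H1 versus H2 after observing evidence E."""
    return (prior_h1 * lik_e_h1) / (prior_h2 * lik_e_h2)

# H1: the suspicious kernel trace stems from an attack.
# H2: the trace stems from benign administrative activity.
odds = posterior_odds(prior_h1=0.1, prior_h2=0.9, lik_e_h1=0.8, lik_e_h2=0.05)
print(f"posterior odds H1:H2 = {odds:.1f}")  # > 1 favours the attack hypothesis
```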
Conducting a surveillance impact assessment is the first step toward solving the "Who monitors the monitor?" problem. Since the impacts of surveillance on different dimensions of privacy and society are constantly changing, measuring compliance and impact through metrics can ensure that negative consequences are minimized to acceptable levels. To develop metrics systematically for surveillance impact assessment, we follow the top-down process of the Goal/Question/Metric paradigm: 1) establish goals through the social impact model, 2) generate questions through the dimensions of surveillance activities, and 3) develop metrics through the scales of measure. With respect to the three factors of impact magnitude (the strength of sources, the immediacy of sources, and the number of sources), we generate questions concerning surveillance activities (by whom, for whom, why, when, where, of what, and how) and develop metrics with the scales of measure (nominal, ordinal, interval, and ratio). In addition to compliance assessment and impact assessment, the developed metrics have the potential to address the power imbalance problem through sousveillance, which employs surveillance to control and redirect the impact exposures.
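The three-step GQM hierarchy can be represented directly as a data structure; a minimal sketch follows, with an invented example goal, question, and metric rather than entries from the developed catalogue.

```python
# Sketch of the Goal/Question/Metric hierarchy. The example content
# is invented for illustration.
from dataclasses import dataclass, field

@dataclass
class Metric:
    name: str
    scale: str  # "nominal" | "ordinal" | "interval" | "ratio"

@dataclass
class Question:
    text: str  # one of: by whom, for whom, why, when, where, of what, how
    metrics: list[Metric] = field(default_factory=list)

@dataclass
class Goal:
    text: str
    questions: list[Question] = field(default_factory=list)

goal = Goal("Minimize the impact of a surveillance activity on privacy")
q = Question("By whom is the surveillance conducted?")
q.metrics.append(Metric("type of surveillance operator", scale="nominal"))
goal.questions.append(q)
```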
We identify 74 generic, reusable technical requirements based on the GDPR that can be applied to software products that process personal data. The requirements can be traced to the corresponding articles and recitals of the GDPR and fulfill the key principles of lawfulness and transparency. On this basis, we present an approach to requirements engineering for developing legally compliant software that satisfies the principles of privacy by design, privacy by default, and security by design.
We present an approach to reducing the complexity of adjusting privacy preferences across multiple online social networks. To achieve this, we quantify the privacy effect of the choices users make and simplify configuration by introducing privacy configuration as a service. We present an algorithm that measures privacy and adjusts privacy settings across social networks; the aim is to configure privacy with one click.
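A minimal sketch of the underlying idea, assuming a simple weighted exposure score and invented setting names; the paper's actual measurement algorithm may differ:

```python
# Hedged sketch: score the privacy effect of each setting choice and
# push one target level to all networks ("one click"). Setting names,
# weights, and the scoring rule are assumptions for illustration.

EXPOSURE_WEIGHTS = {"profile_public": 0.4, "posts_public": 0.4, "searchable": 0.2}

def privacy_score(settings: dict[str, bool]) -> float:
    """1.0 = most private, 0.0 = fully exposed (illustrative weighting)."""
    exposure = sum(w for k, w in EXPOSURE_WEIGHTS.items() if settings.get(k, False))
    return 1.0 - exposure

def apply_one_click(networks: dict[str, dict[str, bool]], target: float) -> None:
    """Tighten settings on every network until the target score is met."""
    for settings in networks.values():
        for key in sorted(EXPOSURE_WEIGHTS, key=EXPOSURE_WEIGHTS.get, reverse=True):
            if privacy_score(settings) >= target:
                break
            settings[key] = False  # disable the most exposing option first
```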
We present an analysis of how to determine security requirements for software that controls routing decisions in the distribution of discrete physical goods. Requirements are derived from stakeholder interests and threat scenarios. Three deployment scenarios are discussed: cloud and hybrid deployment as well as on-premise installation for legacy sites.
We propose and apply a requirements engineering approach that focuses on security and privacy properties and takes various stakeholder interests into account. The proposed methodology facilitates the integration of security and privacy by design into the requirements engineering process, so that specific, detailed security and privacy requirements can be implemented from the very beginning of a software project. The method is applied to an exemplary application scenario in the logistics industry. The approach includes the application of threat and risk rating methodologies, a technique to derive technical requirements from legal texts, and a matching process to avoid duplication and consolidate all essential requirements.
With the increased deployment of biometric authentication systems, security concerns have also arisen. In particular, presentation attacks directed at the capture device pose a severe threat. To prevent them, liveness features such as blood flow can be utilised to develop presentation attack detection (PAD) mechanisms. In this context, laser speckle contrast imaging (LSCI) is a technology widely used in biomedical applications to visualise blood flow. We therefore propose a fingerprint PAD method based on textural information extracted from pre-processed LSCI images. Subsequently, a support vector machine is used for classification. In experiments conducted on a database comprising 32 different artefacts, the proposed approach correctly classifies all bona fide samples. However, the LSCI technology experiences difficulties with thin and transparent overlay attacks.
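A minimal sketch of such a pipeline, assuming local binary patterns as the texture descriptor and an RBF-kernel SVM; both choices are placeholders rather than the method's actual configuration:

```python
# Sketch: textural features from pre-processed LSCI images, then an
# SVM classifier. Descriptor and kernel choice are assumptions.
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.svm import SVC

def texture_features(img: np.ndarray, p: int = 8, r: float = 1.0) -> np.ndarray:
    """Histogram of uniform local binary patterns as a texture descriptor."""
    lbp = local_binary_pattern(img, P=p, R=r, method="uniform")
    hist, _ = np.histogram(lbp, bins=p + 2, range=(0, p + 2), density=True)
    return hist

def train_pad(images: list[np.ndarray], labels: list[int]) -> SVC:
    """labels: 1 = bona fide, 0 = presentation attack."""
    X = np.array([texture_features(im) for im in images])
    return SVC(kernel="rbf").fit(X, labels)
```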
Insecurity Refactoring is a change to the internal structure of software that injects a vulnerability without changing the observable behavior in a normal use case scenario. We formally describe an implementation of Insecurity Refactoring that injects vulnerabilities into source code projects using static code analysis, creating learning examples with source code patterns from known vulnerabilities.
Insecurity Refactoring is achieved by creating an Adversary Controlled Input Dataflow tree based on a Code Property Graph. The tree is used to find possible injection paths, and transforming these paths allows vulnerabilities to be injected. Inserting data flow patterns introduces different code patterns from related Common Vulnerabilities and Exposures (CVE) reports. The approach is evaluated on 307 open source projects. Additionally, insecurity-refactored projects are deployed in virtual machines to be used as learning examples. Different static code analysis tools, dynamic tools, and manual inspections are used on the modified projects to confirm the presence of vulnerabilities.
The results show that vulnerabilities can be injected in 8.1% of the open source projects. The inspected code patterns from CVE reports can be inserted using corresponding data flow patterns. Furthermore, results from a small sample of attendees (n=16) indicate that the injected vulnerabilities are useful. Insecurity Refactoring can thus automatically generate learning examples to improve software security training. It uses real projects as a base, while the injected vulnerabilities stem from real CVE reports, which makes them unique and realistic.
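The path-finding step can be illustrated on a toy dataflow graph; the graph shape and node names below are invented, and the real approach operates on a Code Property Graph rather than a plain dictionary:

```python
# Sketch: find a path from an adversary-controlled source to a
# sensitive sink; such a path is a candidate location for injecting a
# vulnerability (e.g. by replacing a sanitizing edge with an identity
# transformation). Assumes an acyclic graph; all names are invented.

DATAFLOW = {  # node -> nodes its value flows into
    "request_param": ["sanitize"],
    "sanitize": ["query_string"],
    "query_string": ["sql_execute"],
}

def injection_paths(src: str, sink: str, graph: dict[str, list[str]], path=None):
    path = (path or []) + [src]
    if src == sink:
        yield path
        return
    for nxt in graph.get(src, []):
        yield from injection_paths(nxt, sink, graph, path)

for p in injection_paths("request_param", "sql_execute", DATAFLOW):
    print(" -> ".join(p))  # request_param -> sanitize -> query_string -> sql_execute
```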
We investigated 50 randomly selected buffer overflow vulnerabilities in Firefox. The source code of these vulnerabilities and the corresponding patches were manually reviewed, and patterns were identified. Our main contribution is a set of taxonomies of errors, sinks, and fixes from a developer's point of view. The results are compared to the CWE taxonomy with an emphasis on vulnerability details. Additionally, we present ideas on how the taxonomies could be used to improve software security education.
To better understand Cross Site Scripting vulnerabilities, we investigated 50 randomly selected CVE reports related to open source projects. The vulnerable and patched source code was manually reviewed to determine which source code patterns were used. Source code pattern categories were found for sources, concatenations, sinks, HTML contexts, and fixes. The resulting categories are compared to categories from CWE. A source code sample that might have led developers to believe the data was already sanitized is described in detail. For each HTML context category, the necessary Cross Site Scripting prevention mechanisms are described.
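A generic illustration of why the HTML context determines the necessary prevention mechanism; the payload is a textbook example, not one drawn from the inspected reports:

```python
# Sketch: the same input needs different handling depending on the
# HTML context it is rendered into.
import html

user_input = '" onmouseover="alert(1)'

# HTML body context: encoding <, > and & is sufficient.
body = f"<p>{html.escape(user_input, quote=False)}</p>"

# Attribute context: the attribute must be quoted AND quote characters
# must be encoded, otherwise the input breaks out of the attribute.
attr = f'<input value="{html.escape(user_input, quote=True)}">'
```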
Many secure software development methods and tools are well known and understood. Still, the same software security vulnerabilities keep occurring. To find out whether new source code patterns have evolved or the same patterns keep recurring, we investigate SQL injections in PHP open source projects. SQL injections are well known and a core part of software security education. For each common part of an SQL injection, the source code patterns are analysed. We point out examples showing that developers had software security in mind but nevertheless created vulnerabilities. A comparison to earlier work shows that some categories are not found as often as expected. Our main contribution is the categorization of source code patterns.
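For illustration, the core pattern class looks as follows; the sketch uses Python and sqlite3 for self-containment, while the studied projects are PHP:

```python
# Sketch of the recurring pattern class: string concatenation of user
# input into SQL versus a parameterized query.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")

user_input = "x' OR '1'='1"

# Vulnerable pattern: the input breaks out of the string literal and
# changes the query's logic.
vulnerable_query = f"SELECT * FROM users WHERE name = '{user_input}'"

# Fixed pattern: a placeholder keeps the data out of the SQL grammar.
rows = conn.execute("SELECT * FROM users WHERE name = ?", (user_input,))
```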
We compared vulnerable and fixed versions of the source code of 50 different PHP open source projects, based on CVE reports for SQL injection vulnerabilities. We scanned the source code with commercial and open source tools for static code analysis. Our results show that five current state-of-the-art tools have issues correctly marking vulnerable and safe code. We identify 25 code patterns that are not detected as vulnerabilities by at least one of the tools and six code patterns that are mistakenly reported as vulnerabilities but cannot be confirmed by manual code inspection. Knowledge of these patterns could help vendors of static code analysis tools, and software developers could be instructed to avoid patterns that confuse automated tools.
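A hypothetical example of a pattern class that can confuse tools: sanitization reached only through an indirect call. Whether a given tool resolves the function reference determines whether it reports a false positive or misses an unsanitized variant. All names are invented:

```python
# Sketch: sanitization hidden behind an indirect call. A tool that
# cannot resolve `cleaner` may flag the query as vulnerable even when
# sanitize() is always passed, or miss the case where it is not.

def sanitize(value: str) -> str:
    return value.replace("'", "''")

def build_query(value: str, cleaner=sanitize) -> str:
    cleaned = cleaner(value)  # indirect: cleaner may or may not sanitize
    return f"SELECT * FROM t WHERE c = '{cleaned}'"
```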
We present source code patterns that are difficult for modern static code analysis tools. Our study comprises 50 different open source projects, each in a vulnerable and a fixed version, for XSS vulnerabilities reported with CVE IDs over a period of seven years. We used three commercial and two open source static code analysis tools. Based on the reported vulnerabilities, we discovered code patterns that appear to be difficult to classify by static analysis. The results show that code analysis tools are helpful but still have problems with specific source code patterns. These patterns should be a focus in developer training.
Systematic Generation of XSS and SQLi Vulnerabilities in PHP as Test Cases for Static Code Analysis
(2022)
Synthetic static code analysis test suites are important for testing the basic functionality of tools. We present a framework that uses different source code patterns to generate Cross Site Scripting and SQL injection test cases. A decision tree is used to determine whether the test cases are vulnerable. The test cases are split into two test suites. The first test suite contains 258,432 test cases that influence the decision trees. The second test suite contains 20 vulnerable test cases with different data flow patterns. The test cases are scanned with two commercial static code analysis tools to show that they can be used to benchmark and identify problems of static code analysis tools. Expert interviews confirm that the decision tree is a solid way to determine the vulnerable test cases and that the test suites are relevant.
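A minimal sketch of the generation idea: combine source, filter, and sink patterns and label each combination with a decision rule. The patterns and the toy rule below are placeholders, not the framework's actual building blocks or decision tree:

```python
# Sketch: combinatorial generation of test cases from pattern lists,
# labelled by a toy decision rule standing in for the decision tree.
from itertools import product

SOURCES = ["$_GET['p']", "$_POST['p']"]
FILTERS = ["none", "htmlspecialchars", "intval"]
SINKS   = ["echo", "mysqli_query"]

def is_vulnerable(flt: str, sink: str) -> bool:
    """Toy rule: a filter neutralizes only the sink class it matches."""
    if flt == "intval":
        return False            # numeric coercion defuses both sinks
    if flt == "htmlspecialchars":
        return sink != "echo"   # defuses XSS sinks, not SQL sinks
    return True                 # no filter: every sink is exploitable

cases = [(s, f, k, is_vulnerable(f, k)) for s, f, k in product(SOURCES, FILTERS, SINKS)]
print(len(cases), "generated test cases")
```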