SyNumSeS is a Python package for the numerical simulation of semiconductor devices. It uses the Scharfetter-Gummel discretization to solve the one-dimensional Van Roosbroeck system, which describes free electron and hole transport via the drift-diffusion model. As boundary conditions, voltages can be applied to Ohmic contacts. It is suited for the simulation of pn-diodes, MOS-diodes, LEDs (heterojunction), solar cells, and (hetero) bipolar transistors.
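The Scharfetter-Gummel scheme replaces the naive finite-difference flux with one built from the Bernoulli function B(x) = x/(e^x - 1), which remains stable across large potential drops. The sketch below is a generic illustration of this discretization, not the SyNumSeS API; the function names and the sign convention are assumptions.

```python
import math

def bernoulli(x):
    """Bernoulli function B(x) = x / (exp(x) - 1), with a short series
    expansion near x = 0 to avoid catastrophic cancellation."""
    if abs(x) < 1e-8:
        return 1.0 - x / 2.0 + x * x / 12.0
    return x / math.expm1(x)

def sg_electron_flux(n_left, n_right, dpsi, D, h):
    """Scharfetter-Gummel electron flux between two neighbouring nodes.

    n_left, n_right : electron densities at the nodes
    dpsi            : potential drop (psi_right - psi_left) in units of
                      the thermal voltage V_T
    D, h            : diffusion coefficient and grid spacing
    """
    return (D / h) * (n_right * bernoulli(dpsi) - n_left * bernoulli(-dpsi))
```

For dpsi = 0 both Bernoulli factors equal 1 and the expression reduces to the plain diffusion flux D*(n_right - n_left)/h, a convenient sanity check.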
Further applications of the Cauchon algorithm to rank determination and bidiagonal factorization
(2018)
For a class of matrices connected with Cauchon diagrams, Cauchon matrices, and the Cauchon algorithm, a method for determining the rank, and for checking a set of consecutive row (or column) vectors for linear independence is presented. Cauchon diagrams are also linked to the elementary bidiagonal factorization of a matrix and to certain types of rank conditions associated with submatrices called descending rank conditions.
Totally nonnegative matrices, i.e., matrices having all their minors nonnegative, and matrix intervals with respect to the checkerboard partial order are considered. It is proven that if the two bound matrices of such a matrix interval are totally nonnegative and satisfy certain conditions, then all matrices from this interval are also totally nonnegative and satisfy the same conditions.
A real matrix is called totally nonnegative if all of its minors are nonnegative. In this paper the extended Perron complement of a principal submatrix in a matrix A is investigated. Extending known results, it is shown that if A is irreducible and totally nonnegative and the principal submatrix consists of certain specified consecutive rows, then the extended Perron complement is totally nonnegative. Inequalities between minors of the extended Perron complement and the Schur complement are also presented.
We consider classes of n-by-n sign regular matrices, i.e., matrices with the property that all their minors of fixed order k have one specified sign or are also allowed to vanish, k = 1, ..., n. If the sign is nonpositive for all k, such a matrix is called totally nonpositive. The application of the Cauchon algorithm to nonsingular totally nonpositive matrices is investigated and a new determinantal test for these matrices is derived. Also matrix intervals with respect to the checkerboard partial ordering are considered. This order is obtained from the usual entry-wise ordering on the set of the n-by-n matrices by reversing the inequality sign for each entry in a checkerboard fashion. For some classes of sign regular matrices it is shown that if the two bound matrices of such a matrix interval are both in the same class then all matrices lying between these two bound matrices are in the same class, too.
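The checkerboard partial order used in these interval results can be stated concretely: the usual entrywise order, with the inequality reversed at every position where the index sum is odd. A minimal sketch (the 0-based sign convention is an assumption; the literature also uses the 1-based variant):

```python
def checkerboard_leq(A, B):
    """A <= B in the checkerboard partial order: entrywise comparison with
    the inequality sign reversed wherever i + j is odd (0-based indices)."""
    return all((-1) ** (i + j) * (B[i][j] - A[i][j]) >= 0
               for i in range(len(A)) for j in range(len(A[0])))
```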
The class of square matrices of order n having a negative determinant and all their minors up to order n-1 nonnegative is considered. A characterization of these matrices is presented which provides an easy test based on the Cauchon algorithm for their recognition. Furthermore, the maximum allowable perturbation of the entry in position (2,2) such that the perturbed matrix remains in this class is given. Finally, it is shown that all matrices lying between two matrices of this class with respect to the checkerboard ordering are contained in this class, too.
A real matrix is called totally nonnegative if all of its minors are nonnegative. In this paper, the minors are determined from which the maximum allowable entry perturbation of a totally nonnegative matrix can be found, such that the perturbed matrix remains totally nonnegative. Also, the total nonnegativity of the first and second subdirect sum of two totally nonnegative matrices is considered.
In this paper totally nonnegative (positive) matrices are considered, i.e., matrices having all their minors nonnegative (positive); the almost totally positive matrices form a class between the totally nonnegative matrices and the totally positive ones. An efficient determinantal test based on the Cauchon algorithm for checking whether a given matrix falls into one of these three classes is applied to matrices related to roots of polynomials and poles of rational functions, specifically the Hankel matrix associated with the Laurent series at infinity of a rational function and matrices of Hurwitz type associated with polynomials. In both cases it is concluded from properties of one or two finite sections of the infinite matrix that the infinite matrix itself has these or related properties. The results are then applied to derive a sufficient condition for the Hurwitz stability of an interval family of polynomials. Finally, interval problems for a subclass of the rational functions, viz. R-functions, are investigated. These problems include invariance of exclusively positive poles and exclusively negative roots under variation of the coefficients of the polynomials within given intervals.
In 1970, B.A. Asner, Jr., proved that for a real quasi-stable polynomial, i.e., a polynomial whose zeros lie in the closed left half-plane of the complex plane, its finite Hurwitz matrix is totally nonnegative, i.e., all its minors are nonnegative, and that the converse statement is not true. In this work, we explain this phenomenon in detail, and provide necessary and sufficient conditions for a real polynomial to have a totally nonnegative finite Hurwitz matrix.
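Asner's observation can be checked numerically. The sketch below builds the finite Hurwitz matrix of a polynomial and tests every minor for nonnegativity with exact rational arithmetic; the brute-force minor enumeration has exponential cost and is purely didactic, not the efficient determinantal tests developed in this line of work.

```python
from fractions import Fraction
from itertools import combinations

def det(M):
    """Exact determinant by Laplace expansion (fine for small matrices)."""
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] * det([r[:j] + r[j + 1:] for r in M[1:]])
               for j in range(len(M)))

def all_minors_nonnegative(A):
    """Brute-force total-nonnegativity test: every minor of every order."""
    A = [[Fraction(x) for x in row] for row in A]
    m, n = len(A), len(A[0])
    return all(det([[A[i][j] for j in cols] for i in rows]) >= 0
               for k in range(1, min(m, n) + 1)
               for rows in combinations(range(m), k)
               for cols in combinations(range(n), k))

def hurwitz_matrix(a):
    """n-by-n Hurwitz matrix of p(s) = a[0]*s^n + a[1]*s^(n-1) + ... + a[n]."""
    n = len(a) - 1
    coeff = lambda k: a[k] if 0 <= k <= n else 0
    return [[coeff(2 * (j + 1) - (i + 1)) for j in range(n)] for i in range(n)]
```

For the stable polynomial (s+1)^3 the Hurwitz matrix [[3,1,0],[1,3,0],[0,3,1]] passes the test, while a polynomial with a negative coefficient fails immediately.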
Let A = [a_ij] be a real symmetric matrix. If f: (0, ∞) → [0, ∞) is a Bernstein function, a sufficient condition for the matrix [f(a_ij)] to have only one positive eigenvalue is presented. By using this result, new results for a symmetric matrix with exactly one positive eigenvalue, e.g., properties of its Hadamard powers, are derived.
In this paper, rectangular matrices whose minors of a given order have the same strict sign are considered and sufficient conditions for their recognition are presented. The results are extended to matrices whose minors of a given order have the same sign or are allowed to vanish. A matrix A is called oscillatory if all its minors are nonnegative and there exists a positive integer k such that A^k has all its minors positive. As a generalization, a new type of matrices, called oscillatory of a specific order, is introduced and some of their properties are investigated.
Positive systems play an important role in systems and control theory and have found applications in multiagent systems, neural networks, systems biology, and more. Positive systems map the nonnegative orthant to itself (and also the non-positive orthant to itself). In other words, they map the set of vectors with zero sign variation to itself. In this article, discrete-time linear systems that map the set of vectors with up to k-1 sign variations to itself are introduced. For the special case k = 1 these reduce to discrete-time positive linear systems. Properties of these systems are analyzed using tools from the theory of sign-regular matrices. In particular, it is shown that almost every solution of such systems converges to the set of vectors with up to k-1 sign variations. It is also shown that these systems induce a positive dynamics of k-dimensional parallelotopes.
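The sign-variation count that defines these invariant sets is elementary to compute. A minimal sketch (zeros are discarded, which is one common convention among several used for sign-regular matrices):

```python
def sign_variations(v):
    """Number of sign changes in the sequence v, ignoring zero entries."""
    signs = [1 if x > 0 else -1 for x in v if x != 0]
    return sum(1 for a, b in zip(signs, signs[1:]) if a != b)
```

A vector with zero sign variations lies in the nonnegative or nonpositive orthant, which recovers the k = 1 (positive-systems) special case described above.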
Short-Term Density Forecasting of Low-Voltage Load using Bernstein-Polynomial Normalizing Flows
(2023)
The transition to a fully renewable energy grid requires better forecasting of demand at the low-voltage level to increase efficiency and ensure reliable control. However, high fluctuations and increasing electrification cause huge forecast variability, not reflected in traditional point estimates. Probabilistic load forecasts take uncertainties into account and thus allow more informed decision-making for the planning and operation of low-carbon energy systems. We propose an approach for flexible conditional density forecasting of short-term load based on Bernstein polynomial normalizing flows, where a neural network controls the parameters of the flow. In an empirical study with 3639 smart meter customers, our density predictions for 24h-ahead load forecasting compare favorably against Gaussian and Gaussian mixture densities. Furthermore, they outperform a non-parametric approach based on the pinball loss, especially in low-data scenarios.
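The building block of such a flow is a Bernstein polynomial whose coefficients are constrained to be non-decreasing, which makes the map on [0, 1] monotone and hence invertible. A minimal sketch of the evaluation step (the neural network that produces the coefficients is omitted; names are illustrative):

```python
from math import comb

def bernstein_poly(theta, x):
    """Evaluate sum_i theta[i] * C(M, i) * x**i * (1 - x)**(M - i) for
    x in [0, 1]; non-decreasing theta makes the map monotone."""
    M = len(theta) - 1
    return sum(t * comb(M, i) * x ** i * (1 - x) ** (M - i)
               for i, t in enumerate(theta))
```

With theta[i] = i/M the polynomial reduces to the identity map, a convenient sanity check.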
This study investigates the application of Force Sensing Resistor (FSR) sensors and machine learning algorithms for non-invasive body position monitoring during sleep. Although reliable, traditional methods like Polysomnography (PSG) are invasive and unsuited for extended home-based monitoring. Our approach utilizes FSR sensors placed beneath the mattress to detect body positions effectively. We employed machine learning techniques, specifically Random Forest (RF), K-Nearest Neighbors (KNN), and XGBoost algorithms, to analyze the sensor data. The models were trained and tested using data from a controlled study with 15 subjects assuming various sleep positions. The performance of these models was evaluated based on accuracy and confusion matrices. The results indicate XGBoost as the most effective model for this application, followed by RF and KNN, offering promising avenues for home-based sleep monitoring systems.
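The two evaluation metrics named here are simple to state precisely. A minimal, library-free sketch (the class labels are illustrative, not the study's label set):

```python
from collections import Counter

def accuracy(y_true, y_pred):
    """Fraction of predictions that match the ground truth."""
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def confusion_matrix(y_true, y_pred, labels):
    """Count matrix with one row per true class and one column per
    predicted class, in the order given by labels."""
    counts = Counter(zip(y_true, y_pred))
    return [[counts[(t, p)] for p in labels] for t in labels]
```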
Cardiovascular diseases (CVD) are leading contributors to global mortality, necessitating advanced methods for vital sign monitoring. Heart Rate Variability (HRV) and Respiratory Rate, key indicators of cardiovascular health, are traditionally monitored via Electrocardiogram (ECG). However, ECG's obtrusiveness limits its practicality, prompting the exploration of Ballistocardiography (BCG) as a non-invasive alternative. BCG records the mechanical activity of the body with each heartbeat, offering a contactless method for HRV monitoring. Despite its benefits, BCG signals are susceptible to external interference and present a challenge in accurately detecting J-Peaks. This research uses advanced signal processing and deep learning techniques to overcome these limitations. Our approach integrates accelerometers for long-term BCG data collection during sleep, applying Discrete Wavelet Transforms (DWT) and Ensemble Empirical Mode Decomposition (EEMD) for feature extraction. The Bi-LSTM model, leveraging these features, enhances heartbeat detection, offering improved reliability over traditional methods. The study's findings indicate that the combined use of DWT, EEMD, and Bi-LSTM for J-Peak detection in BCG signals is effective, with potential applications in unobtrusive long-term cardiovascular monitoring. Our results suggest that this methodology could contribute to HRV monitoring, particularly in home settings, enhancing patient comfort and compliance.
Ein Holzhaus als Botschaft. Die erste diplomatische Vertretung des Deutschen Reiches in Ankara 1924
(2015)
The German Reich responded promptly to the founding of the Turkish Republic and the relocation of the capital to Ankara by erecting an embassy building in prefabricated construction in 1924. This prefabricated timber house was supplied by Christoph & Unmack AG of Niesky/Oberlausitz, a company of major importance for modern timber construction. After a larger, solid embassy building had been erected in Ankara, the timber house was relocated in 1934 to the grounds of the Atatürk Orman Çiftliği, where it still stands today. In 2010 it was subjected to a thorough building survey. The results of the investigations of this unusual as well as significant architectural monument are presented in three individual contributions.
Reed-Muller (RM) codes have recently regained some interest in the context of low latency communications and due to their relation to polar codes. RM codes can be constructed based on the Plotkin construction. In this work, we consider concatenated codes based on the Plotkin construction, where extended Bose-Chaudhuri-Hocquenghem (BCH) codes are used as component codes. This leads to improved code parameters compared to RM codes. Moreover, this construction is more flexible concerning the attainable code rates. Additionally, new soft-input decoding algorithms are proposed that exploit the recursive structure of the concatenation and the cyclic structure of the component codes. First, we consider the decoding of the cyclic component codes and propose a low complexity hybrid ordered statistics decoding algorithm. Next, this algorithm is applied to list decoding of the Plotkin construction. The proposed list decoding approach achieves near-maximum-likelihood performance for codes with medium lengths. The performance is comparable to state-of-the-art decoders, whereas the complexity is reduced.
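The Plotkin construction referenced here maps two component codes to the set of concatenations (u, u+v). A toy sketch with small illustrative binary component codes rather than the extended BCH codes of the paper:

```python
def plotkin(code_u, code_v):
    """(u | u+v) construction: all concatenations of u with u XOR v."""
    return [u + tuple(a ^ b for a, b in zip(u, v))
            for u in code_u for v in code_v]

def min_distance(code):
    """Minimum Hamming distance over all codeword pairs."""
    return min(sum(a != b for a, b in zip(c1, c2))
               for i, c1 in enumerate(code) for c2 in code[i + 1:])
```

Combining the full length-2 code (distance 1) with the length-2 repetition code (distance 2) yields a length-4 code with minimum distance min(2*1, 2) = 2, matching the classical Plotkin rule d = min(2*d_u, d_v).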
Non-volatile NAND flash memories store information as an electrical charge. Different read reference voltages are applied to read the data. However, the threshold voltage distributions vary due to aging effects like program erase cycling and data retention time. It is necessary to adapt the read reference voltages for different life-cycle conditions to minimize the error probability during readout. In the past, methods based on pilot data or high-resolution threshold voltage histograms were proposed to estimate the changes in voltage distributions. In this work, we propose a machine learning approach with neural networks to estimate the read reference voltages. The proposed method utilizes sparse histogram data for the threshold voltage distributions. For reading the information from triple-level cell (TLC) memories, several read reference voltages are applied in sequence. We consider two histogram resolutions. The simplest histogram consists of the zero-and-one ratios for the hard decision read operation, whereas a higher resolution is obtained by considering the quantization levels for soft-input decoding. This approach does not require pilot data for the voltage adaptation. Furthermore, only a few measurements of extreme points of the threshold voltage distributions are required as training data. Measurements with different conditions verify the proposed approach. The resulting neural networks perform well under other life-cycle conditions.
The growing error rates of triple-level cell (TLC) and quadruple-level cell (QLC) NAND flash memories have led to the application of error correction coding with soft-input decoding techniques in flash-based storage systems. Typically, flash memory is organized in pages where the individual bits per cell are assigned to different pages and different codewords of the error-correcting code. This page-wise encoding minimizes the read latency with hard-input decoding. To increase the decoding capability, soft-input decoding is used eventually due to the aging of the cells. This soft-decoding requires multiple read operations. Hence, the soft-read operations reduce the achievable throughput, and increase the read latency and power consumption. In this work, we investigate a different encoding and decoding approach that improves the error correction performance without increasing the number of reference voltages. We consider TLC and QLC flashes where all bits are jointly encoded using a Gray labeling. This cell-wise encoding improves the achievable channel capacity compared with independent page-wise encoding. Errors with cell-wise read operations typically result in a single erroneous bit per cell. We present a coding approach based on generalized concatenated codes that utilizes this property.
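A Gray labeling assigns bit patterns to the 2^b threshold-voltage levels so that adjacent levels differ in exactly one bit, which is why a cell-wise read error typically produces a single erroneous bit. A minimal sketch using the standard binary-reflected Gray code:

```python
def gray(i):
    """Binary-reflected Gray code of the integer i."""
    return i ^ (i >> 1)

def gray_labels(bits):
    """Bit labels for the 2**bits voltage levels of a multi-level cell."""
    return [format(gray(i), "0{}b".format(bits)) for i in range(2 ** bits)]
```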
Beidhändig gestalten
(2018)
Alles digital – was nun?
(2018)
Beidhändig zum Erfolg
(2018)
Study design:
Retrospective, monocentric cohort study.
Objectives:
The purpose of this study is to validate a novel artificial intelligence (AI)-based algorithm against human-generated ground truth for radiographic parameters of adolescent idiopathic scoliosis (AIS).
Methods:
An AI-algorithm was developed that is capable of detecting anatomical structures of interest (clavicles, cervical, thoracic and lumbar spine, sacrum) and of calculating essential radiographic parameters in AP spine X-rays fully automatically. The evaluated parameters included T1-tilt, clavicle angle (CA), coronal balance (CB), lumbar modifier, and Cobb angles in the proximal thoracic (C-PT), thoracic, and thoracolumbar regions. Measurements from 2 experienced physicians on 100 preoperative AP full spine X-rays of AIS patients were used as ground truth and to evaluate inter-rater and intra-rater reliability. The agreement between human raters and AI was compared by means of single-measure Intra-class Correlation Coefficients (ICC; absolute agreement; .75 rated as excellent), mean error, and additional statistical metrics.
Results:
The comparison between human raters resulted in excellent ICC values for intra- (range: .97-1) and inter-rater (.85-.99) reliability. The algorithm was able to determine all parameters in 100% of images with excellent ICC values (.78-.98). Consistent with the human raters, ICC values were typically smallest for C-PT (eg, rater 1A vs AI: .78, mean error: 4.7°) and largest for CB (.96, -.5 mm) as well as CA (.98, .2°).
Conclusions:
The AI-algorithm shows excellent reliability and agreement with human raters for coronal parameters in preoperative full spine images. The reliability and speed offered by the AI-algorithm could contribute to the efficient analysis of large datasets (eg, registry studies) and measurements in clinical practice.
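Several of the evaluated parameters reduce to the angle between two lines fitted through anatomical landmarks; the Cobb angle, for instance, is the angle between the endplate lines of the two end vertebrae. A geometric sketch (the landmark format is an assumption for illustration, not the paper's algorithm):

```python
import math

def cobb_angle(p1, p2, q1, q2):
    """Angle in degrees between the line through p1, p2 (upper endplate)
    and the line through q1, q2 (lower endplate); points are (x, y)."""
    a1 = math.atan2(p2[1] - p1[1], p2[0] - p1[0])
    a2 = math.atan2(q2[1] - q1[1], q2[0] - q1[0])
    deg = abs(math.degrees(a1 - a2)) % 180.0
    return min(deg, 180.0 - deg)
```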
For a long time, the use of intermediate products in production has been growing more rapidly in most countries than domestic production. This is a strong indication of increasing interdependency in production. The main purpose of input-output analysis is to study the interdependency of industries in an economy; the term interindustry analysis is also often used. The exchange of intermediate products is therefore a key issue of input-output analysis. For this study we use input–output data that the author prepared for the new 'Handbook on Supply, Use and Input–Output Tables with Extensions and Applications' of the United Nations. The supply, use and input–output tables contain separate valuation matrices for trade margins, transport margins, value added tax, other taxes on products and subsidies on products. For the study, two input–output models were developed to evaluate the impact of fuel subsidy and taxation reform on output, gross domestic product, inflation and trade. Six scenarios are discussed covering different aspects of the reform.
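At the core of such impact models is the open Leontief system (I - A)x = f: given the technical-coefficient matrix A and final demand f, solve for the total output x. A minimal sketch with illustrative numbers (not the data of the UN handbook study):

```python
def leontief_output(A, f):
    """Solve (I - A) x = f by Gauss-Jordan elimination with partial
    pivoting: the total output x needed to satisfy final demand f."""
    n = len(A)
    M = [[(1.0 if i == j else 0.0) - A[i][j] for j in range(n)] + [f[i]]
         for i in range(n)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(n):
            if r != col:
                fac = M[r][col] / M[col][col]
                M[r] = [a - fac * b for a, b in zip(M[r], M[col])]
    return [M[i][n] / M[i][i] for i in range(n)]
```

For A = [[0.2, 0.3], [0.1, 0.4]] and f = [10, 20], the required gross outputs are roughly x = [26.7, 37.8]; multiplying (I - A) by x reproduces f.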
The main objective of this paper is to revisit Temursho's (2020) article "On the Euro method" in a critical and constructive way. We praise part of his work and at the same time analyse some of his arguments against the Euro method and against the work published by Valderas-Jaramillo et al. (2019). Moreover, we analyse some other relevant aspects of the SUT-Euro and SUT-RAS methods not covered in Temursho (2020). Temursho (2020) seems to conclude that no one should use the Euro method again because of its limitations and drawbacks. However, although the Euro method is not perfect, we believe there is still room for its use in updating/regionalizing supply and use tables.
The State of Custom
(2021)
In our article, we engage with the anthropologist Gerd Spittler's pathbreaking article "Dispute settlement in the shadow of Leviathan" (1980), in which he strives to integrate the existence of state courts (the eponymous Leviathan's shadow) in (post-)colonial Africa into the analysis of non-state legal practices. According to Spittler, it is because of undesirable characteristics inherent in state courts that the disputing parties tended to involve mediators rather than pursue a state court judgment. The less people liked state courts, the more likely they were to (re-)turn to non-state dispute settlement procedures. How has this situation changed in the four decades since its publication? We relate his findings to contemporary debates in legal anthropology that investigate the relationship between disputing, law and the state. We also show through our own work in Africa and Asia, particularly in Southern Ethiopia and Kyrgyzstan, in what ways Spittler's by now classical contribution to the field of legal anthropology can be made fruitful for a contemporary anthropology of the state at a time when not only (legal) anthropology has changed, but especially the way states deal with putatively "customary" forms of dispute settlement.
The transformation to Industry 4.0, generally seen as a solution to increasing market challenges, is forcing companies to radically change their way of thinking and to be open to new forms of cooperation. In this context, the opening-up of the innovation process is widely seen as a necessity to meet these challenges, especially for small and medium-sized enterprises (SMEs). The aim of the study therefore is to analyze how cooperation can be characterized today, how this character has changed since the establishment of the term Industry 4.0 at the Hanover Fair in 2011, and which cooperation strategies have proven successful. The analysis consists of a quantitative secondary data analysis that includes country-specific data from 35 European countries for 2010 and 2016 collected by the European Commission and the OECD. The research, focusing on the secondary sector, shows that multinational enterprises (MNEs) still tend to cooperate more than SMEs, with a slight overall trend towards protectionism. Nevertheless, there is a clear tendency towards the opening-up of SMEs. In this regard, universities, competitors and suppliers in particular have become increasingly attractive as cooperation partners for SMEs.
Sleep is essential to physical and mental health. However, the traditional approach to sleep analysis, polysomnography (PSG), is intrusive and expensive. There is therefore great interest in non-contact, non-invasive, and non-intrusive sleep monitoring systems and technologies that can reliably and accurately measure cardiorespiratory parameters with minimal impact on the patient. This has led to the development of alternative approaches that allow greater freedom of movement and require no direct contact with the body. This systematic review discusses the relevant methods and technologies for non-contact monitoring of cardiorespiratory activity during sleep. Taking into account the current state of the art in non-intrusive technologies, we identify the methods of non-intrusive monitoring of cardiac and respiratory activity, the technologies and types of sensors used, and the physiological parameters available for analysis. To this end, we conducted a literature review and summarised current research on the use of non-contact technologies for non-intrusive monitoring of cardiac and respiratory activity. The inclusion and exclusion criteria for the selection of publications were established prior to the start of the search. Publications were assessed using one main question and several specific questions. We obtained 3774 unique articles from four literature databases (Web of Science, IEEE Xplore, PubMed, and Scopus) and checked them for relevance, resulting in 54 articles that were analysed in a structured way. The result was 15 different types of sensors and devices (e.g., radar, temperature sensors, motion sensors, cameras) that can be installed in hospital wards and departments or in the environment.
The ability to detect heart rate, respiratory rate, and sleep disorders such as apnoea was among the characteristics examined to investigate the overall effectiveness of the systems and technologies considered for cardiorespiratory monitoring. In addition, the advantages and disadvantages of the considered systems and technologies were identified by answering the identified research questions. The results obtained allow us to determine the current trends and the vector of development of medical technologies in sleep medicine for future researchers and research.
The few published sorption isotherms for mineral screeds essentially refer to calcium sulfate screeds and standardized cement screeds, and as a rule only to a single fixed air temperature (20 °C). The aim of the investigation described in this contribution was therefore to characterize the moisture properties of screeds under different climates by means of sorption isotherms. In addition, the ternary rapid cements that have been on the market for about 20 years were included in the investigation, as were the temperatures of 15 °C and 25 °C, which are of practical interest on site. The effects of the climate conditions on the construction site (season, humidity, temperature) on the hydration process of the screeds were also examined. In combination with the results of microstructural investigations (including mercury intrusion porosimetry), it is shown why cement-based and rapid-cement-based screeds behave completely differently from calcium sulfate-based screeds. This difference in behaviour is also one of the reasons why the moisture content of screeds cannot be assessed with the KRL method. A comparison of the "KRL method" of material moisture measurement with the "CM method", which is customary in the trade and has proven itself in practice for decades, therefore follows.
Sensorlose Positionsregelung eines hydraulischen Proportional-Wegeventils mittels Signalinjektion
(2017)
A method for sensorless position estimation in electromagnetically actuated devices is presented. Based on signal injection, the position-dependent parameters at the injected frequency are determined, and the position of the armature is derived from them via a suitable model. The suitability of the method for sensorless position control is demonstrated in practical experiments on a bidirectional proportional valve.
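The principle can be illustrated with a toy model: the injected high-frequency signal sees a coil impedance whose inductive part depends on the armature position, so measuring |Z| at the injection frequency and inverting a position-to-inductance model recovers the position. The gap model L(x) = L0/(1 + x/a) and all parameter values below are assumptions for illustration, not the valve model identified in the paper:

```python
import math

def inductance_from_impedance(z_mag, R, omega):
    """Small-signal inductance from the measured impedance magnitude
    |Z| = sqrt(R**2 + (omega * L)**2) at the injection frequency omega."""
    return math.sqrt(z_mag ** 2 - R ** 2) / omega

def position_from_inductance(L, L0=0.1, a=1e-3):
    """Invert the illustrative air-gap model L(x) = L0 / (1 + x / a)."""
    return a * (L0 / L - 1.0)
```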
A constructive method for the design of nonlinear observers is discussed. To formulate conditions for the construction of the observer gains, stability results for nonlinear singularly perturbed systems are utilised. The nonlinear observer is designed directly in the given coordinates, where the error dynamics between the plant and the observer becomes singularly perturbed by a high-gain part of the observer injection, and the information of the slow manifold is exploited to construct the observer gains of the reduced-order dynamics. This is in contrast to typical high-gain observer approaches, where the observer gains are chosen such that the nonlinearities are dominated by a linear system. It is demonstrated that the considered approach is particularly suited for self-sensing electromechanical systems. Two variants of the proposed observer design are illustrated for a nonlinear electromagnetic actuator, where the mechanical quantities, i.e. the position and the velocity, are not measured.
A constructive nonlinear observer design for self-sensing of digital (ON/OFF) single coil electromagnetic actuators is studied. Self-sensing in this context means that solely the available energizing signals, i.e., coil current and driving voltage, are used to estimate the position and velocity trajectories of the moving plunger. A nonlinear sliding mode observer is considered, where the stability of the reduced error dynamics is analyzed by the equivalent control method. No simplifications are made regarding magnetic saturation and eddy currents in the underlying dynamical model. The observer gains are constructed by taking into account some generic properties of the system's nonlinearities. Two possible choices of the observer gains are discussed. Furthermore, an observer-based tracking control scheme to achieve sensorless soft landing is considered and its closed-loop stability is studied. Experimental results for observer-based soft landing of a fast-switching solenoid valve under dry conditions are presented to demonstrate the usefulness of the approach.
Vertrauen durch Integrität
(2021)
In the debate on corporate responsibility, the question of the scope of that responsibility has gained intensity, driven by globalization and by societal, ecological and technological challenges. For many societal and ecological problems, companies are increasingly called upon to take responsibility, for example regarding respect for human rights in the value chain, compliance with social and environmental standards, sustainability and product safety. This contribution shows how companies can meet their responsibility through integrity and compliance management.
Which competencies do leaders need for the approach of compliance and integrity as a leadership task to take hold in organizations? And how can these competencies be systematically used and trained? This contribution presents the first building block of a research project based at the Konstanz Institut für Corporate Governance, which aims to make existing compliance systems in companies more practicable and to increase the effectiveness of the measures of a compliance management system (CMS).
Network effects, economies of scale, and lock-in-effects increasingly lead to a concentration of digital resources and capabilities, hindering the free and equitable development of digital entrepreneurship, new skills, and jobs, especially in small communities and their small and medium-sized enterprises (“SMEs”). To ensure the affordability and accessibility of technologies, promote digital entrepreneurship and community well-being, and protect digital rights, we propose data cooperatives as a vehicle for secure, trusted, and sovereign data exchange. In post-pandemic times, community/SME-led cooperatives can play a vital role by ensuring that supply chains to support digital commons are uninterrupted, resilient, and decentralized. Digital commons and data sovereignty provide communities with affordable and easy access to information and the ability to collectively negotiate data-related decisions. Moreover, cooperative commons (a) provide access to the infrastructure that underpins the modern economy, (b) preserve property rights, and (c) ensure that privatization and monopolization do not further erode self-determination, especially in a world increasingly mediated by AI. Thus, governance plays a significant role in accelerating communities’/SMEs’ digital transformation and addressing their challenges. Cooperatives thrive on digital governance and standards such as open trusted application programming interfaces (“APIs”) that increase the efficiency, technological capabilities, and capacities of participants and, most importantly, integrate, enable, and accelerate the digital transformation of SMEs in the overall process. This review article analyses an array of transformative use cases that underline the potential of cooperative data governance. 
These case studies exemplify how data and platform cooperatives, through their innovative value creation mechanisms, can elevate digital commons and value chains to a new dimension of collaboration, thereby addressing pressing societal issues. Guided by our research aim, we propose a policy framework that supports the practical implementation of digital federation platforms and data cooperatives. This policy blueprint intends to facilitate sustainable development in both the Global South and North, fostering equitable and inclusive data governance strategies.
Increasing demand for sustainable, resilient, and low-carbon construction materials has highlighted the potential of Compacted Mineral Mixtures (CMMs), which are formulated from various soil types (sand, silt, clay) and recycled mineral waste. This paper presents a comprehensive inter- and transdisciplinary research concept that aims to industrialise and scale up the adoption of CMM-based construction materials and methods, thereby accelerating the construction industry’s systemic transition towards carbon neutrality. By drawing upon the latest advances in soil mechanics, rheology, and automation, we propose the development of a robust material properties database to inform the design and application of CMM-based materials, taking into account their complex, time-dependent behaviour. Advanced soil mechanical tests would be utilised to ensure optimal performance under various loading and ageing conditions. This research has also recognised the importance of context-specific strategies for CMM adoption. We have explored the implications and limitations of implementing the proposed framework in developing countries, particularly where resources may be constrained. We aim to shed light on socio-economic and regulatory aspects that could influence the adoption of these sustainable construction methods. The proposed concept explores how the automated production of CMM-based wall elements can become a fast, competitive, emission-free, and recyclable alternative to traditional masonry and concrete construction techniques. We advocate for the integration of open-source digital platform technologies to enhance data accessibility, processing, and knowledge acquisition; to boost confidence in CMM-based technologies; and to catalyse their widespread adoption. We believe that the transformative potential of this research necessitates a blend of basic and applied investigation using a comprehensive, holistic, and transfer-oriented methodology. 
Thus, this paper serves to highlight the viability and multiple benefits of CMMs in construction, emphasising their pivotal role in advancing sustainable development and resilience in the built environment.
We call for a paradigm shift in engineering education. We are entering the era of the Fourth Industrial Revolution (“4IR”), accelerated by Artificial Intelligence (“AI”). Disruptive changes affect all industrial sectors and society, leading to increased uncertainty that makes it impossible to predict what lies ahead. Therefore, gradual cultural change in education is no longer an option to ease social pain. The vast majority of engineering education and training systems, which have remained largely static and underinvested for decades, are inadequate for the emerging 4IR and AI labour markets. Nevertheless, some positive developments can be observed in the reorientation of the engineering education sector. Novel approaches to engineering education are already providing distinctive, technology-enhanced, personalised, student-centred curriculum experiences within an integrated and unified education system. We need to educate engineering students for a future whose key characteristics are volatility, uncertainty, complexity and ambiguity (“VUCA”). Talent and skills gaps are expected to increase in all industries in the coming years. The authors argue for an engineering curriculum that combines timeless didactic traditions such as Socratic inquiry, mastery-based and project-based learning and first-principles thinking with novel elements, e.g., student-centred active and e-learning with a focus on case studies, as well as visualization/metaverse and gamification elements discussed in this paper, and a refocusing of engineering skills and knowledge enhanced by AI on human qualities such as creativity, empathy and dexterity. These skills strengthen engineering students’ perceptions of the world and the decisions they make as a result. This 4IR engineering curriculum will prepare engineering students to become curious engineers and excellent collaborators who navigate increasingly complex multistakeholder ecosystems.
Digital federated platforms and data cooperatives for secure, trusted and sovereign data exchange will play a central role in the construction industry of the future. With the help of platforms, cooperatives and their novel value creation, the digital transformation and the degree of organization of the construction value chain can be taken to a new level of collaboration. The goal of this research project was to develop an experimental prototype for a federated innovation data platform along with a suitable exemplary use case. The prototype is to serve the construction industry as a demonstrator for further developments and form the basis for an innovation platform. It exemplifies how an overall concept is concretely implemented along one or more use cases that address high-priority industry pain points. This concept will create a blueprint and a framework for further developments, which will then be further established in the market. The research project illuminates the perspective of various governance innovations to increase industry collaboration, productivity and capital project performance and transparency as well as the overall potential of possible platform business models. However, a comprehensive expert survey revealed that there are considerable obstacles to trust-based data exchange between the key stakeholders in the industry value network. The obstacles to cooperation are predominantly not of a technical nature but of a competitive and, above all, trust-related nature. To overcome these obstacles and create a pre-competitive space of trust, the authors therefore propose the governance structure of a data cooperative model, which is discussed in detail in this paper.
Specific climate adaptation and resilience measures can be efficiently designed and implemented at regional and local levels. Climate and environmental databases are critical for achieving the sustainable development goals (SDGs) and for efficiently planning and implementing appropriate adaptation measures. Available federated and distributed databases can serve as necessary starting points for municipalities to identify needs, prioritize resources, and allocate investments, taking into account often tight budget constraints. High-quality geospatial, climate, and environmental data are now broadly available and remote sensing data, e.g., Copernicus services, will be critical. There are forward-looking approaches to use these datasets to derive forecasts for optimizing urban planning processes for local governments. On the municipal level, however, the existing data have only been used to a limited extent. There are no adequate tools for urban planning with which remote sensing data can be merged and meaningfully combined with local data and further processed and applied in municipal planning and decision-making. Therefore, our project CoKLIMAx aims at the development of new digital products, advanced urban services, and procedures, such as the development of practical technical tools that capture different remote sensing and in-situ data sets for validation and further processing. CoKLIMAx will be used to develop a scalable toolbox for urban planning to increase climate resilience. Focus areas of the project will be water (e.g., soil sealing, stormwater drainage, retention, and flood protection), urban (micro)climate (e.g., heat islands and air flows), and vegetation (e.g., greening strategy, vegetation monitoring/vitality). To this end, new digital process structures will be embedded in local government to enable better policy decisions for the future.
Globalization has increased the number of road trips and vehicles, leading to a rise in traffic accidents, which have become one of the leading causes of death worldwide. Traffic accidents are often due to human error, the probability of which increases when the cognitive ability of the driver decreases. Cognitive capacity is closely related to the driver’s mental state, as well as to external factors such as the CO2 concentration inside the vehicle. The objective of this work is to analyze how these elements affect driving. We conducted an experiment with 50 drivers who each drove for 25 min using a driving simulator. These drivers completed a survey at the start and end of the experiment to obtain information about their mental state. In addition, during the test, their stress level was monitored using biometric sensors and the state of the environment (temperature, humidity and CO2 level) was recorded. The results of the experiment show that the driver’s initial level of stress and tiredness can have a strong impact on the stress, driving behavior and fatigue produced by the driving test. Other elements such as sadness and the conditions inside the vehicle also cause impaired driving and affect compliance with traffic regulations.
Sleep is an important aspect of every human being’s life. The average sleep duration for an adult is approximately 7 h per day. Sleep is necessary to regenerate a person’s physical and psychological state, and poor sleep quality has a major impact on health status and can lead to various diseases. This paper presents an approach that uses long-term monitoring of vital data, gathered by a body sensor during the day and night and supported by a mobile application connected to an analysis system, to estimate the sleep quality of its user and to give real-time recommendations for improving it. Actimetry and historical data are used to refine the individual recommendations, based on common techniques from machine learning and big data analysis.
The digital transformation of business processes and the tighter integration of IT systems create both opportunities and risks for small and medium-sized enterprises (SMEs). These risks can, in particular, result in a lack of IT governance, risk and compliance (GRC). The aim of this article is to present the design and evaluation phase of creating an artifact, following the design science research approach according to Hevner. The artifact is developed for the selection of standards by adapting SME-relevant characteristics and existing frameworks to the defined criteria.
Small and medium-sized enterprises (SMEs) are known for their innovative strength and form the backbone of the German economy. As studies show, however, they lag behind capital-market-oriented companies when it comes to compliance measures. These studies do not usually consider IT compliance separately. Even though there is hardly any research on the reasons and motives behind missing IT compliance structures in SMEs, the many publications dealing with partial aspects of compliance and SMEs show that there is a need for action. In particular, the current changes under the heading of digitalization point to an increased importance of IT compliance measures, especially in medium-sized companies. This paper therefore uses a literature review to analyze the topics currently discussed with regard to IT compliance and SMEs and to identify current focal themes.
The digital transformation of business processes and the tighter integration of IT systems create both opportunities and risks for small and medium-sized enterprises (SMEs). These risks can result, in particular, in a lack of IT compliance. As studies show, SMEs lag behind capital-market-oriented companies with regard to IT compliance measures [1]. Using expert interviews and a qualitative data analysis, this article investigates which measures are currently implemented and how the perceived compliance maturity level is assessed. Furthermore, the reasons and motives that have led to this state are discussed. Finally, drivers are identified that could lead to greater awareness in the future. The work offers interesting insights from practice, as the expert interviews provide a view of the current status quo with regard to IT compliance.
Wettbewerb und Wagnis
(2015)
Karl Bernhard
(2015)
Schicksal der Freybrücke
(2016)
Karl Bernhard is considered one of the most important civil engineers in Germany at the beginning of the 20th century. His two bridges along the Heerstraße in Berlin, the Stößenseebrücke and the Freybrücke over the Havel, have been of the highest importance for the city’s east-west traffic connection since 1909. The Freybrücke, however, was demolished in March 2015 to make way for a replacement structure. How could a structure of such historical and traffic importance, protected as a monument since 1971, be neglected for so long that it became ripe for demolition? This article is a homage to Bernhard’s Havel bridge and traces its eventful history.
Civil engineers are aware of their social responsibility, but find it unsatisfactory that their contribution to building culture is neither sufficiently known nor duly appreciated. This article gives a brief overview of this topic and attempts to identify the engineers’ own deficits that make solving the problem more difficult. Using the example of Ulrich Finsterwalder, it shows the attitude that one of the great master builders of the twentieth century took towards responsible and sustainable construction, towards aesthetics and design, and towards creative collaboration with architects.
Ulrich Finsterwalder
(2017)
The aim of the research project "Ekont" is to develop a hand-held device for removing concrete at inner edges and obstructions in nuclear power plants (NPPs). To reduce the reaction forces, the novel approach of a counter-rotating milling process is investigated. The result is a gear solution in which a middle milling disc rotates at approximately the same circumferential speed in the opposite direction to the other milling discs.
This article discusses the role of written error correction in German courses preparing for and accompanying university studies. It provides an overview of the state of research and presents a study on error correction with advanced learners, in which the effect of error correction on learners’ texts was examined. While error correction led to a reduction of errors in the revised text for the experimental group, it did not lead to a significant improvement in a new text compared with the control group. To find out how the learners dealt with the error correction, the study also included a qualitative analysis of individual papers. The article concludes with the question of what role a (not particularly effective) error correction should play in the teaching of academic language.
In this work, a storage study was conducted to find a suitable packaging material for tomato powder storage. Experiments were laid out in a single-factor completely randomized design (CRD) to study the effect of packaging materials on the lycopene, vitamin C, moisture content, and water activity of tomato powder. The factor (packaging material) had three levels (low-density polyethylene bag; polypropylene bottle; wrapped in aluminum foil and packed in a low-density polyethylene bag) and was replicated three times. Tomato slices of var. Galilea dried in a twin-layer solar tunnel dryer were used for the study. The dried tomato slices were ground, packed (40 g each) in the packaging materials, and stored at room temperature. Samples were drawn from the packages at 2-month intervals for quality analysis, and SAS (version 9.2) software was used for statistical analysis. Higher retention of lycopene (80.13%) and vitamin C (49.32%) and a nonsignificant increase in moisture content and water activity were observed for tomato powder packed in polypropylene bottles after 6 months of storage. For the low-density polyethylene packed samples and the samples wrapped in aluminum foil and packed in a low-density polyethylene bag, 57.06% and 60.45% lycopene retention and 42.9% and 49.23% vitamin C retention were observed, respectively, after 6 months of storage. It can therefore be concluded that the lycopene and vitamin C content of twin-layer solar tunnel dried tomato powder can be preserved for 6 months at ambient temperature, with moisture content and water activity remaining within a safe range, by packing it in a polypropylene bottle.
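For readers without SAS, the single-factor CRD analysis described above corresponds to a one-way ANOVA. The sketch below runs it in Python on invented lycopene-retention replicates (the numbers are illustrative placeholders, not the study’s data):

```python
from scipy.stats import f_oneway

# Hypothetical lycopene-retention values (%) after storage, three replicates
# per packaging material, mirroring the single-factor CRD with three levels.
ldpe = [56.8, 57.3, 57.1]          # low-density polyethylene bag
foil_ldpe = [60.2, 60.6, 60.5]     # aluminum foil + LDPE bag
pp_bottle = [79.9, 80.3, 80.2]     # polypropylene bottle

# One-way ANOVA tests whether mean retention differs between the materials.
f_stat, p_value = f_oneway(ldpe, foil_ldpe, pp_bottle)
print(f_stat, p_value)
```

With clearly separated group means and small within-group variance, the test rejects equality of means, which would then motivate pairwise comparisons between packaging materials.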
A flight-like absolute optical frequency reference based on iodine for laser systems at 1064 nm
(2017)
We present an absolute optical frequency reference based on precision spectroscopy of hyperfine transitions in molecular iodine 127I2 for laser systems operating at 1064 nm. A quasi-monolithic spectroscopy setup was developed, integrated, and tested with respect to potential deployment in space missions that require frequency stable laser systems. We report on environmental tests of the setup and its frequency stability and reproducibility before and after each test. Furthermore, we report on the first measurements of the frequency stability of the iodine reference with an unsaturated absorption cell which will greatly simplify its application in space missions. Our frequency reference fulfills the requirements on the frequency stability for planned space missions such as LISA or NGGM.
This study aims to investigate the utilization of Bayesian techniques for the calibration of micro-electro-mechanical system (MEMS) accelerometers. These devices have garnered substantial interest in various practical applications and typically require calibration through error-correcting functions. The parameters of these error-correcting functions are determined during a calibration process. However, due to various sources of noise, these parameters cannot be determined with precision, making it desirable to incorporate uncertainty in the calibration models. Bayesian modeling offers a natural and complete way of reflecting uncertainty by treating the model parameters as variables rather than fixed values. In addition, Bayesian modeling enables the incorporation of prior knowledge, making it an ideal choice for calibration. Nevertheless, it is infrequently used in sensor calibration. This study introduces Bayesian methods for the calibration of MEMS accelerometer data in a straightforward manner using recent advances in probabilistic programming.
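As a minimal sketch of the idea of treating calibration parameters as uncertain variables, the following grid-based example infers the scale and bias of a linear accelerometer error model under flat priors. All values (reference accelerations, noise level, parameter ranges) are invented for illustration; the study itself uses probabilistic programming rather than a grid:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical calibration data: known reference accelerations (m/s^2)
# and noisy raw sensor readings following raw = scale * a_ref + bias + noise.
a_ref = np.array([-9.81, -4.905, 0.0, 4.905, 9.81])
true_scale, true_bias, sigma = 1.02, 0.15, 0.05
raw = true_scale * a_ref + true_bias + sigma * rng.normal(size=a_ref.size)

# Grid-based Bayesian inference with flat priors over a plausible range.
scales = np.linspace(0.9, 1.1, 201)
biases = np.linspace(-0.5, 0.5, 201)
S, B = np.meshgrid(scales, biases, indexing="ij")

# Gaussian log-likelihood summed over all calibration points.
resid = raw[None, None, :] - (S[..., None] * a_ref + B[..., None])
log_post = -0.5 * np.sum(resid**2, axis=-1) / sigma**2
post = np.exp(log_post - log_post.max())
post /= post.sum()

# Posterior means and standard deviations express calibration uncertainty
# instead of a single fixed parameter estimate.
scale_mean = np.sum(post * S)
bias_mean = np.sum(post * B)
scale_std = np.sqrt(np.sum(post * (S - scale_mean) ** 2))
print(scale_mean, bias_mean, scale_std)
```

The posterior standard deviation is the quantity a point-estimate calibration discards; downstream processing can propagate it instead of assuming exact parameters.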
Know when you don't know
(2018)
Deep convolutional neural networks show outstanding performance in image-based phenotype classification given that all existing phenotypes are presented during the training of the network. However, in real-world high-content screening (HCS) experiments, it is often impossible to know all phenotypes in advance. Moreover, novel phenotype discovery itself can be an HCS outcome of interest. This aspect of HCS is not yet covered by classical deep learning approaches. When presenting an image with a novel phenotype to a trained network, it fails to indicate a novelty discovery but assigns the image to a wrong phenotype. To tackle this problem and address the need for novelty detection, we use a recently developed Bayesian approach for deep neural networks called Monte Carlo (MC) dropout to define different uncertainty measures for each phenotype prediction. With real HCS data, we show that these uncertainty measures allow us to identify novel or unclear phenotypes. In addition, we also found that the MC dropout method results in a significant improvement of classification accuracy. The proposed procedure used in our HCS case study can be easily transferred to any existing network architecture and will be beneficial in terms of accuracy and novelty detection.
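The MC dropout idea can be illustrated on a toy scale: average softmax outputs over many stochastic forward passes and use the predictive entropy as an uncertainty measure. Here a hand-set two-feature, two-class linear "network" stands in for a trained CNN, and both the weights and the inputs are hypothetical; an ambiguous, novelty-like input receives higher entropy than one resembling a known phenotype:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hand-set weights standing in for a trained classifier (illustrative only).
W = np.array([[8.0, 0.0],
              [0.0, 8.0]])

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def mc_dropout_predict(x, T=1000, keep=0.5):
    """Average softmax over T stochastic forward passes with input dropout."""
    probs = np.empty((T, 2))
    for t in range(T):
        mask = rng.random(x.size) < keep
        probs[t] = softmax(W @ (mask * x / keep))
    return probs.mean(axis=0)

def entropy(p):
    return -np.sum(p * np.log(p + 1e-12))

known = mc_dropout_predict(np.array([1.0, 0.0]))   # resembles a known phenotype
novel = mc_dropout_predict(np.array([1.0, 1.0]))   # ambiguous / novelty-like input

print(entropy(known), entropy(novel))  # novelty-like input yields higher entropy
```

Thresholding such an uncertainty measure is what lets a classifier flag "I don't know" instead of silently assigning a novel phenotype to the nearest known class.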
This article examines whether external interventions, in the form of research and/or science communication, can act as a mediator for innovation in the tourism industry in times of crisis. Based on three case studies, it discusses to what extent the coronavirus crisis was able to represent a window of opportunity for innovative business models in tourism. The project results indicate that crises in general, and science communication in particular, can act as push factors promoting innovation. Although the project partners did develop innovations during the project period, their implementation was frequently postponed to an indefinite future. With the associated return to the status quo, the initiated innovations largely remained at a conceptual level. This points to an attitude-behavior gap with regard to the creation and implementation of innovations in times of crisis.
This study analyzes the accessibility of the city of Konstanz with regard to offers for, and demand from, tourists. Data collection was based on a mix of methods comprising interviews and surveys of participants with disabilities and of the responsible actors in urban planning, as well as on-site inspections. The model of independence by Nosek and Fuhrer (1992) serves as the theoretical basis. The investigation shows that the need for accessible offers is very diverse and that implementation in the sense of universal design is becoming central due to increasing demand. The analysis of the Konstanz tourism area reveals weaknesses and strengths from which implications for other tourism regions can be derived.
Uzbekistan is an emerging tourism destination that has experienced a strong increase in tourists since 2017. However, little research on tourism development in Uzbekistan exists to date. This study therefore analyzes possible research topics and proposes a tourism research agenda for Uzbekistan. A mix of methods was used consisting of participant observation, semi-structured qualitative expert interviews and qualitative content analysis. The results revealed a variety of research deficits in different areas, which could be synthesized into a total of ten research fields, clustered into three overarching areas, namely market research, management, and culture & environment. The subordinate research fields identified are Demand, Statistics, Potentials, Governance, Products, Infrastructure & Development, Marketing, Heritage & Nation-building, Sustainability, as well as Peace & Conflict Prevention. A strategic research plan based on this tourism research agenda could help to foster a purposeful scientific debate. Tourism research in these fields has the potential both to investigate and compare theoretical issues in a unique context and to produce applied research results that can make a relevant contribution to tourism development in Uzbekistan.
In view of the current supreme court case law, an elderly or otherwise frail injured party suing for monetary compensation is forced to factor his or her approaching death into considerations about appeals as a considerable litigation risk. This leads to a substantial curtailment of the non-material components of the party’s right of personality. It is therefore urgently necessary to establish, by way of judicial development of the law, criteria for groups of exceptional cases in order to reliably exclude intolerable hardship in the future.
For some time now, the Maasai have been taking on internationally operating fashion houses. The accusation: improper cultural appropriation. The ethnic group now wants to beat the companies with their own weapons and plans to commodify its cultural signs. Whoever intends to do so is, upon entering the international market, necessarily subject to the same rules as all other market participants. This means that without solid exclusive rights, the Maasai will, as is to be expected, not be taken seriously as potential licensors by possible negotiating partners. The African ethnic group also runs the risk that the cultural and historical significance of its signs will be overwritten by commercial use or, in the worst case, even reinterpreted.
Sexist advertising that violates human dignity can be combated via the catch-all provision of § 3(1) UWG. Nevertheless, the courts have been reluctant, and the Deutscher Werberat has instead assumed the power of interpretation. Supreme court case law has so far been unable to develop in this respect. Especially in times of changing values, however, updated supreme court case law is indispensable, not least for legal peace.
"KI first" braucht Verlierer
(2023)
Hardly a week goes by at present without another company joining the battle for supremacy in the field of artificial intelligence (AI). Tech corporations also expect substantial profits from AI-driven image generators, which imitate style-defining artists with synthetic composite images. In doing so, they point to a legal situation that supposedly permits the reproduction of these artists’ works for training purposes without consent or remuneration. Yet resistance by artists against this is urgently needed from a societal perspective and would, moreover, also be legally covered.
We quantify the effects of GATT/WTO membership on trade and welfare. Using an extensive database covering manufacturing trade for 186 countries over the period 1980–2016, we find that the average partial equilibrium impact of GATT/WTO membership on trade among member countries is large, positive, and significant. We contribute to the literature by estimating country-specific effects and find that they vary widely across the countries in our sample, with poorer members benefitting more. Using these estimates, we simulate the general equilibrium effects of GATT/WTO on welfare, which are sizable and heterogeneous across members. We show that countries not experiencing positive trade effects from joining GATT/WTO can still gain in terms of welfare, due to lower import prices and higher export demand.
When a country grants preferential tariffs to another, either reciprocally in a free trade agreement (FTA) or unilaterally, rules of origin (RoOs) are defined to determine whether a product is eligible for preferential treatment. RoOs exist to avoid that exports from third countries enter through the member with the lowest tariff (trade deflection). However, RoOs distort exporters' sourcing decisions and burden them with red tape. Using a global data set, we show that, for 86% of all bilateral product-level comparisons within FTAs, trade deflection is not profitable because external tariffs are rather similar and transportation costs are non-negligible; in the case of unilateral trade preferences extended by rich countries to poor ones that ratio is a striking 98%. The pervasive and unconditional use of RoOs is, therefore, hard to rationalize.
The present contribution proposes a novel method for the indirect measurement of the ground reaction forces (GRF) induced by a pedestrian during walking on a vibrating structure. Its main idea is to formulate and solve an inverse problem in the time domain with the aim of finding the optimal time-dependent moving point force describing the GRF of a pedestrian (input data), which minimizes the difference between a set of computed and a set of measured structural responses (output data). The solution of the inverse problem is addressed by means of the gradient-based trust region optimization strategy. The moving force identification process uses output data from a set of acceleration and displacement time histories recorded at different locations on the structure. The practicability and the accuracy of the proposed GRF identification method were first evaluated using simulated measurements, which revealed a high accuracy, robustness and stability of the results even at high noise levels. Subsequently, a comprehensive experimental validation using real measurement data recorded on the HUMVIB experimental footbridge on the campus of the Technical University of Darmstadt (Germany) was carried out. Besides the conventional sensors for the acquisition of structural responses, an array of biomechanical force plates as well as classical load cells at the supports were used to measure the reference GRFs needed in the experimental validation. The results show that the proposed method delivers a very accurate estimation of the GRF induced by a subject during walking on the experimental structure.
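The inverse-problem idea can be caricatured on a single-degree-of-freedom oscillator: simulate "measured" accelerations from a known harmonic walking force, then recover the force parameters with a trust-region least-squares solver applied to the response misfit. All numbers (modal mass, stiffness, damping, force harmonics, noise) are invented for illustration, and the force is reduced to two harmonic amplitudes; the paper identifies a general time-dependent moving force, not this simple parameterization:

```python
import numpy as np
from scipy.optimize import least_squares

# Single-degree-of-freedom footbridge surrogate: m*x'' + c*x' + k*x = F(t).
# All numerical values are hypothetical.
m, c, k = 500.0, 50.0, 2.0e4
dt, n = 0.005, 800
t = np.arange(n) * dt

def simulate(a1, a2):
    """Response accelerations for a two-harmonic walking force (semi-implicit Euler)."""
    x = v = 0.0
    acc = np.empty(n)
    for i in range(n):
        F = a1 * np.sin(2 * np.pi * 2.0 * t[i]) + a2 * np.sin(2 * np.pi * 4.0 * t[i])
        a = (F - c * v - k * x) / m
        v += a * dt
        x += v * dt
        acc[i] = a
    return acc

# Synthetic "measured" output data: response to the true force plus sensor noise.
rng = np.random.default_rng(2)
true_p = (280.0, 70.0)
meas = simulate(*true_p) + 0.005 * rng.normal(size=n)

# Inverse problem: minimize the misfit between computed and measured responses
# with a trust-region least-squares method (method="trf").
res = least_squares(lambda p: simulate(p[0], p[1]) - meas, x0=[100.0, 10.0],
                    method="trf")
print(res.x)  # close to (280.0, 70.0)
```

The same pattern, a forward simulator inside a gradient-based optimizer acting on the output misfit, carries over to the multi-sensor, moving-force setting of the paper.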
Error correction coding for optical communication and storage requires high-rate codes that enable high data throughput and low residual errors. Recently, different concatenated coding schemes were proposed that are based on binary BCH codes with low error correcting capabilities. In this work, low-complexity hard- and soft-input decoding methods for such codes are investigated. We propose three concepts to reduce the complexity of the decoder. For the algebraic decoding we demonstrate that Peterson's algorithm can be more efficient than the Berlekamp-Massey algorithm for single, double, and triple error correcting BCH codes. We propose an inversion-less version of Peterson's algorithm and a corresponding decoding architecture. Furthermore, we propose a decoding approach that combines algebraic hard-input decoding with soft-input bit-flipping decoding. An acceptance criterion is utilized to determine the reliability of the estimated codewords. For many received codewords the acceptance criterion indicates that the hard-decoding result is sufficiently reliable, and the costly soft-input decoding can be omitted. To reduce the memory size for the soft values, we propose a bit-flipping decoder that stores only the positions and soft values of a small number of code symbols. This method significantly reduces the memory requirements and has little adverse effect on the decoding performance.
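The hard-first/soft-fallback strategy can be sketched with a (7,4) Hamming code standing in for the short BCH component codes. The acceptance threshold and LLR values below are arbitrary illustrations, and the soft stage is a small Chase-style bit-flipping search over the least reliable positions, not the proposed decoder architecture:

```python
import numpy as np
from itertools import product

# Parity-check matrix of a (7,4) Hamming code (corrects one bit error).
H = np.array([[1, 1, 0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1]])

def hard_decode(r):
    """Algebraic hard-input decoding via syndrome lookup."""
    s = H @ r % 2
    c = r.copy()
    if s.any():
        pos = np.flatnonzero((H.T == s).all(axis=1))
        if pos.size:
            c[pos[0]] ^= 1
    return c

def correlation(c, llr):
    """Acceptance metric: agreement between a codeword and the soft values."""
    return np.sum((1 - 2 * c) * llr)

def decode(llr, threshold=10.0, n_flip=2):
    """Hard decoding first; bit-flip the least reliable positions only when
    the acceptance criterion indicates an unreliable result."""
    r = (llr < 0).astype(int)
    best = hard_decode(r)
    if correlation(best, llr) >= threshold:
        return best                      # reliable: skip soft decoding
    weak = np.argsort(np.abs(llr))[:n_flip]
    for pattern in product([0, 1], repeat=n_flip):
        trial = r.copy()
        trial[weak] ^= np.array(pattern)
        cand = hard_decode(trial)
        if correlation(cand, llr) > correlation(best, llr):
            best = cand
    return best

# All-zero codeword sent (BPSK: bit 0 -> positive LLR).
print(decode(np.array([3.1, 2.8, -0.2, 2.5, 0.4, 2.9, 2.7])))   # one error: hard path suffices
print(decode(np.array([3.1, 2.8, -0.4, 2.5, -0.3, 2.9, 2.7])))  # two errors: bit flipping rescues
```

The first word is accepted after hard decoding alone, so the soft stage is skipped; the second word fails the acceptance criterion and is recovered by flipping a least reliable bit, mirroring the complexity-saving behaviour described above.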
This work proposes a lossless data compression algorithm for short data blocks. The proposed compression scheme combines a modified move-to-front algorithm with Huffman coding. This algorithm is applicable in storage systems where the data compression is performed on block level with short block sizes, in particular, in non-volatile memories. For block sizes in the range of 1 kB, it provides a compression gain comparable to the Lempel–Ziv–Welch algorithm. Moreover, encoder and decoder architectures are proposed that have low memory requirements and provide fast data encoding and decoding.
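A minimal sketch of the move-to-front plus Huffman combination (the plain textbook variants, without the paper's modifications or hardware architectures): MTF maps recently seen bytes to small indices, skewing the symbol distribution so that Huffman coding can shorten the block.

```python
import heapq
from collections import Counter

def mtf_encode(data):
    """Move-to-front: recently seen bytes map to small indices."""
    table = list(range(256))
    out = []
    for b in data:
        i = table.index(b)
        out.append(i)
        table.pop(i)
        table.insert(0, b)
    return out

def mtf_decode(indices):
    """Inverse move-to-front transform."""
    table = list(range(256))
    out = bytearray()
    for i in indices:
        b = table.pop(i)
        out.append(b)
        table.insert(0, b)
    return bytes(out)

def huffman_code(freqs):
    """Build a Huffman code (symbol -> bit string) from symbol frequencies."""
    heap = [(f, i, {s: ""}) for i, (s, f) in enumerate(freqs.items())]
    heapq.heapify(heap)
    n = len(heap)
    while len(heap) > 1:
        f1, _, c1 = heapq.heappop(heap)
        f2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + c for s, c in c1.items()}
        merged.update({s: "1" + c for s, c in c2.items()})
        heapq.heappush(heap, (f1 + f2, n, merged))
        n += 1
    return heap[0][2]

def compress(data):
    """MTF stage followed by Huffman coding of the MTF ranks."""
    ranks = mtf_encode(data)
    code = huffman_code(Counter(ranks))
    bits = "".join(code[r] for r in ranks)
    return bits, code

data = b"abababababcabababab"
bits, code = compress(data)
print(len(bits), "bits vs", 8 * len(data))
```

On repetitive input the MTF ranks are dominated by small values, so the Huffman stage compresses well even for short blocks, which is the regime targeted by the scheme above.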
The introduction of multiple-level cell (MLC) and triple-level cell (TLC) technologies reduced the reliability of flash memories significantly compared with single-level cell flash. With MLC and TLC flash cells, the error probability varies for the different states. Hence, asymmetric models are required to characterize the flash channel, e.g., the binary asymmetric channel (BAC). This contribution presents a combined channel and source coding approach improving the reliability of MLC and TLC flash memories. With flash memories, data compression has to be performed on block level for short data blocks. We present a coding scheme suitable for blocks of 1 kB of data. The objective of the data compression algorithm is to reduce the amount of user data such that the redundancy of the error correction coding can be increased in order to improve the reliability of the data storage system. Moreover, data compression can be utilized to exploit the asymmetry of the channel to reduce the error probability. With redundant data, the proposed combined coding scheme results in a significant improvement of the program/erase cycling endurance and the data retention time of flash memories.
This letter proposes two contributions to improve the performance of transmission with generalized multistream spatial modulation (SM). In particular, a modified suboptimal detection algorithm based on the Gaussian approximation method is proposed. The proposed modifications reduce the complexity of the Gaussian approximation method and improve the performance for high signal-to-noise ratios. Furthermore, this letter introduces signal constellations based on Hurwitz integers, i.e., a 4-D lattice. Simulation results demonstrate that these signal constellations are beneficial for generalized SM with two active antennas.
This letter introduces signal constellations based on multiplicative groups of Eisenstein integers, i.e., hexagonal lattices. These sets of Eisenstein integers are proposed as signal constellations for generalized spatial modulation. The algebraic properties of the new constellations are investigated and a set partitioning technique is developed. This technique can be used to design coded modulation schemes over hexagonal lattices.
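The set partitioning idea can be illustrated directly on the hexagonal lattice; the 25-point constellation below is an arbitrary example, not one of the letter's constellations:

```python
# Sketch of set partitioning over the hexagonal lattice of Eisenstein
# integers a + b*omega with omega = exp(2*pi*i/3). Restricting a
# constellation to the index-3 sublattice (1 - omega)*Z[omega], i.e. the
# points with a + b = 0 mod 3, increases the minimum Euclidean distance
# by a factor of sqrt(3). The constellation size is illustrative only.
import cmath, itertools, math

OMEGA = cmath.exp(2j * math.pi / 3)

def eisenstein(a, b):
    return a + b * OMEGA

def min_distance(points):
    return min(abs(p - q) for p, q in itertools.combinations(points, 2))

full = [eisenstein(a, b) for a in range(-2, 3) for b in range(-2, 3)]
subset = [eisenstein(a, b) for a in range(-2, 3) for b in range(-2, 3)
          if (a + b) % 3 == 0]

d_full, d_sub = min_distance(full), min_distance(subset)
assert abs(d_full - 1.0) < 1e-9
assert abs(d_sub - math.sqrt(3)) < 1e-9
```

Repeating this split inside each subset yields the nested partition chains used to label set-partitioned coded modulation.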
Codes over quotient rings of Lipschitz integers have recently attracted some attention. This work investigates the performance of Lipschitz integer constellations for transmission over the AWGN channel by means of the constellation figure of merit. A construction of sets of Lipschitz integers that leads to a better constellation figure of merit than ordinary Lipschitz integer constellations is presented. In particular, it is demonstrated that the concept of set partitioning can be applied to quotient rings of Lipschitz integers whose number of elements is not a prime number. It is shown that it is always possible to partition such quotient rings into additive subgroups such that the minimum Euclidean distance of each subgroup is strictly larger than that of the original set. The resulting signal constellations perform better for transmission over an additive white Gaussian noise channel than Gaussian integer constellations and ordinary Lipschitz integer constellations. In addition, we present multilevel code constructions for the new signal constellations.
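The subgroup partitioning can be illustrated with a small non-prime example, the 16 Lipschitz integer residues modulo 2, viewed as points of the 4-D unit hypercube (an assumption chosen for illustration; the constructions in the paper are more general):

```python
# Sketch: partitioning a quotient ring of Lipschitz (integer) quaternions
# into an additive subgroup with strictly larger minimum Euclidean
# distance. Example: the 16 residues mod 2, viewed as points of {0,1}^4;
# the even-weight residues form an additive subgroup with minimum
# distance sqrt(2) instead of 1. Illustrative example only.
import itertools, math

def min_distance(points):
    return min(math.dist(p, q) for p, q in itertools.combinations(points, 2))

full = list(itertools.product((0, 1), repeat=4))      # 16 residues
subgroup = [p for p in full if sum(p) % 2 == 0]       # 8 residues

assert len(subgroup) == 8
assert abs(min_distance(full) - 1.0) < 1e-9
assert abs(min_distance(subgroup) - math.sqrt(2)) < 1e-9
```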
The McEliece cryptosystem is a promising candidate for post-quantum public-key encryption. In this work, we propose q-ary codes over Gaussian integers for the McEliece system, together with a new channel model, the one Mannheim error channel, in which errors are limited to Mannheim weight one. We investigate the capacity of this channel and discuss its relation to the McEliece system. The proposed codes are based on a simple product code construction and have a low-complexity decoding algorithm. For the one Mannheim error channel, these codes achieve a higher error correction capability than maximum distance separable codes with bounded minimum distance decoding. This improves the work factor with respect to decoding attacks based on information-set decoding.
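A sketch of the error model, assuming the usual Mannheim reduction modulo a Gaussian prime (here pi = 2 + 3j, chosen purely for illustration):

```python
# Sketch of the one Mannheim error channel over Gaussian integers modulo
# pi = 2 + 3j (norm 13, so the residue field has 13 elements). Each
# symbol is disturbed by at most one error of Mannheim weight one, i.e.
# an error value from {1, -1, 1j, -1j}. Parameters are illustrative.

def mod_pi(z, pi=2 + 3j):
    """Reduce a Gaussian integer to its minimal residue modulo pi."""
    q = z * pi.conjugate() / (pi * pi.conjugate()).real
    q_round = complex(round(q.real), round(q.imag))
    return z - q_round * pi

def mannheim_weight(z):
    r = mod_pi(z)
    return abs(r.real) + abs(r.imag)

# the four unit errors all have Mannheim weight one
for e in (1, -1, 1j, -1j):
    assert mannheim_weight(e) == 1

# a weight-one error moves a symbol to one of its nearest residues
symbol = mod_pi(4)
disturbed = mod_pi(symbol + 1)
assert mannheim_weight(disturbed - symbol) == 1
```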
Zur Rhetorik der Technik
(2017)
Schreiben und Rhetorik an einer Hochschule für angewandte Wissenschaften - ein Erfahrungsbericht
(2018)
Zitate sind kein Selbstzweck
(2015)
Rhetorik-Wörterbuch
(2015)
Following the Tempcore process, the tensile strain-hardening behaviour of the core, the outer layer, and the entire bar was determined experimentally and numerically on heat-treated bar steel. It was shown that the strains in the core and at the outer edge are equal and that the core microstructure is decisive for the onset of necking in the outer layer. Improving the properties of the core microstructure can thus reduce the fracture susceptibility of the entire bar.
Mechanical properties after tensile testing were calculated and experimentally determined for the bar core, the bar surface, and the whole bar cross-section of Tempcore-processed bars. Experiments and numerical simulation showed that the deformations in the core and surface regions of a bar are equal, and that the structural parameters of the core region are therefore decisive for the initiation of necking in the surface region. The results showed that the fracture resistance of the martensitic surface layer has less effect on the overall bar properties than previous investigations suggested. It is concluded that improving the quality of the core microstructure can reduce the brittleness of the whole bar. The techniques used showed good agreement between calculated and experimental results. The rule of mixtures for the structural components can therefore be used to determine the properties of the whole bar, taking into account the thickness of the surface layer, which can easily be measured with a hardness sensor. This will simplify the quality control of products in practice.
Due to their structure of crossed yarns embedded in a coating, woven fabric membranes exhibit highly nonlinear stress-strain behaviour. In order to determine an accurate structural response of membrane structures, a suitable description of the material behaviour is required. Typical phenomenological material models, such as linear-elastic orthotropic models, capture the real material behaviour only to a limited extent. A more accurate approach becomes evident by focusing on the meso-scale, which reveals the inhomogeneous but periodic structure of woven fabrics. The present work builds on an established meso-scale model. The novelty of this work is an enhancement of this model with regard to the coating stiffness. By performing an inverse parameter identification using a state-of-the-art Levenberg-Marquardt algorithm, a close fit to measured data from a common biaxial test is demonstrated and compared with results of established models. Subsequently, the enhanced meso-scale model is processed into a multi-scale model and implemented as a material law in a finite element program. In finite element analyses of an exemplary full-scale membrane structure, the results obtained with the implemented material model and with established material models are compared and discussed.
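The inverse parameter identification step can be sketched with a minimal Levenberg-Marquardt loop; the model and data below are synthetic stand-ins, not the paper's meso-scale model or biaxial test data:

```python
# Sketch of inverse parameter identification: a minimal Levenberg-
# Marquardt loop fitting a two-parameter nonlinear model to measured
# data pairs. Model and data are synthetic stand-ins for the meso-scale
# model and the biaxial test measurements of the paper.
import math

def model(a, b, x):
    """Illustrative nonlinear stiffness law, not the paper's model."""
    return a * (math.exp(b * x) - 1.0)

def levenberg_marquardt(xs, ys, a, b, lam=1e-2, iters=100):
    def cost(pa, pb):
        return sum((y - model(pa, pb, x)) ** 2 for x, y in zip(xs, ys))
    c = cost(a, b)
    for _ in range(iters):
        # residuals and analytic Jacobian of the model
        r = [y - model(a, b, x) for x, y in zip(xs, ys)]
        Ja = [math.exp(b * x) - 1.0 for x in xs]       # d model / d a
        Jb = [a * x * math.exp(b * x) for x in xs]     # d model / d b
        # damped normal equations (J^T J + lam I) delta = J^T r, 2x2 solve
        A11 = sum(j * j for j in Ja) + lam
        A22 = sum(j * j for j in Jb) + lam
        A12 = sum(p * q for p, q in zip(Ja, Jb))
        g1 = sum(j * ri for j, ri in zip(Ja, r))
        g2 = sum(j * ri for j, ri in zip(Jb, r))
        det = A11 * A22 - A12 * A12
        da = (A22 * g1 - A12 * g2) / det
        db = (A11 * g2 - A12 * g1) / det
        if cost(a + da, b + db) < c:       # accept step, relax damping
            a, b = a + da, b + db
            c = cost(a, b)
            lam *= 0.5
        else:                              # reject step, increase damping
            lam *= 10.0
    return a, b

# synthetic "measurements" from known parameters a = 2, b = 3
xs = [i / 10.0 for i in range(1, 11)]
ys = [model(2.0, 3.0, x) for x in xs]
a_fit, b_fit = levenberg_marquardt(xs, ys, a=1.5, b=2.5)
assert abs(a_fit - 2.0) < 1e-2 and abs(b_fit - 3.0) < 1e-2
```

The adaptive damping parameter lam blends between Gauss-Newton steps (small lam) and gradient-descent-like steps (large lam), which is what makes the method robust for such nonlinear fits.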