Refine
Year of publication
- 2024 (20)
- 2023 (181)
- 2022 (202)
- 2021 (162)
- 2020 (139)
- 2019 (169)
- 2018 (197)
- 2017 (224)
- 2016 (215)
- 2015 (222)
- 2014 (33)
- 2013 (17)
- 2012 (17)
- 2011 (22)
- 2010 (18)
- 2009 (15)
- 2008 (14)
- 2007 (14)
- 2006 (17)
- 2005 (10)
- 2004 (23)
- 2003 (28)
- 2002 (19)
- 2001 (7)
- 2000 (2)
- 1999 (1)
- 1996 (1)
- 1995 (2)
- 1992 (2)
- 1991 (1)
- 1987 (1)
- 1980 (2)
- 1979 (1)
- 1973 (1)
- 1917 (2)
- 1916 (1)
Document Type
- Conference Proceeding (642)
- Article (426)
- Other Publications (143)
- Part of a Book (141)
- Working Paper (128)
- Book (118)
- Report (115)
- Journal (Complete Issue of a Journal) (85)
- Master's Thesis (77)
- Doctoral Thesis (58)
Language
- German (1113)
- English (882)
- Multiple languages (8)
Institute
- Fakultät Architektur und Gestaltung (41)
- Fakultät Bauingenieurwesen (104)
- Fakultät Elektrotechnik und Informationstechnik (34)
- Fakultät Informatik (121)
- Fakultät Maschinenbau (60)
- Fakultät Wirtschafts-, Kultur- und Rechtswissenschaften (106)
- Institut für Angewandte Forschung - IAF (115)
- Institut für Naturwissenschaften und Mathematik - INM (3)
- Institut für Optische Systeme - IOS (39)
- Institut für Strategische Innovation und Technologiemanagement - IST (60)
"KI first" braucht Verlierer
(2023)
Hardly a week goes by at the moment without another company joining the battle for supremacy in artificial intelligence (AI). Tech corporations also expect hefty profits from AI-driven image generators, which imitate style-defining artists with synthetic composite images. In doing so, they point to a legal situation that supposedly permits the reproduction of these artists' works for training purposes without consent or compensation. Yet resistance by artists against this is urgently needed for society, and it would, moreover, also be covered by law.
"Wem gehört der Kocher?"
(2016)
"Wem gehört die Murg?"
(2016)
100 Jahre Türkische Republik
(2023)
Current expert lectures from the international spectrum of civil engineering at HTWG Konstanz, on the occasion of the 100th anniversary of the Faculty of Civil Engineering (BI). Topics: building preservation and refurbishment, geotechnics, structural engineering, highlights of building construction in the 21st century, transportation, water management, industrial engineering. The event was sponsored by: ADK Modulraum GmbH ; Architektenkammer BW ; Bauen mit Stahl ; Beton Marketing Süd ; Bund Deutscher Architekten BDA ; Institut Feuerverzinken ; Form TL - Ingenieure für Tragwerk und Leichtbau GmbH ; OTT Ziegel Pfullendorf ; Stahlbauzentrum Schweiz ; Euro Poles Pfleiderer ; Ingenieurkammer Baden-Württemberg
3-Stufen-Pulswechselrichter
(2016)
33 Baukultur Rezepte
(2017)
This book was produced as part of the research project "Baukultur konkret" within the research programme Experimenteller Wohnungs- und Städtebau (ExWoSt) of the Federal Institute for Research on Building, Urban Affairs and Spatial Development (BBSR), commissioned by the Federal Ministry for the Environment, Nature Conservation, Building and Nuclear Safety (BMUB).
The magneto-mechanical behavior of magnetic shape memory (MSM) materials has been investigated by means of different simulation and modeling approaches by several research groups. The target of this paper is to simulate actuators driven by MSM alloys and to understand the MSM element behavior during actuation, which shall lead to an increased performance of the actuator. It is shown that internal and external stresses should be taken into consideration using numerical computation tools for magnetic fields in an efficient way.
40 Jahre Neuland des Denkens
(2020)
Frederic Vester's principal work "Neuland des Denkens" appeared 40 years ago. This article examines the central themes of this programmatic book with regard to Vester's biocybernetics and its application to numerous current questions in the sustainability debate, e.g. climate change and the energy transition.
The binary asymmetric channel (BAC) is a model for the error characterization of multi-level cell (MLC) flash memories. This contribution presents a joint channel and source coding approach that improves the reliability of MLC flash memories. The objective of the data compression algorithm is to reduce the amount of user data so that the redundancy of the error correction coding can be increased, thereby improving the reliability of the data storage system. Moreover, data compression can be utilized to exploit the asymmetry of the channel to reduce the error probability. With MLC flash memories, data compression has to be performed at block level on short data blocks. We present a coding scheme suitable for blocks of 1 kilobyte of data.
Multi-object tracking filters require a birth density to detect new objects from measurement data. If the initial positions of new objects are unknown, it may be useful to choose an adaptive birth density. In this paper, a circular birth density is proposed, which is placed like a band around the surveillance area. This allows for 360° coverage. The birth density is described in polar coordinates and considers all point-symmetric quantities such as radius, radial velocity and tangential velocity of objects entering the surveillance area. Since it is assumed that these quantities are unknown and may vary between different targets, detected trajectories, and in particular their initial states, are used to estimate the distribution of initial states. The adapted birth density is approximated as a Gaussian mixture, so that it can be used for filters operating on Cartesian coordinates.
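The ring-shaped birth density described above can be approximated as a Gaussian mixture in Cartesian coordinates by placing components evenly on a circle. A minimal sketch of this idea (the component count, radius, and standard deviations below are illustrative assumptions, not values from the paper):

```python
import numpy as np

def circular_birth_mixture(radius, n_components, sigma_radial, sigma_tangential):
    """Approximate a band-shaped birth density around the surveillance area
    by a Gaussian mixture: components evenly spaced on a circle, each with a
    covariance aligned to the radial/tangential directions at its position."""
    weights = np.full(n_components, 1.0 / n_components)
    means, covs = [], []
    for theta in np.linspace(0.0, 2.0 * np.pi, n_components, endpoint=False):
        c, s = np.cos(theta), np.sin(theta)
        means.append([radius * c, radius * s])
        R = np.array([[c, -s], [s, c]])           # rotate radial axis to theta
        D = np.diag([sigma_radial**2, sigma_tangential**2])
        covs.append(R @ D @ R.T)
    return weights, np.array(means), np.array(covs)
```

Filters operating on Cartesian coordinates can then consume the mixture directly; adapting the parameters from detected trajectories, as in the paper, would replace the fixed sigmas here.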
This work proposes a lossless data compression algorithm for short data blocks. The proposed compression scheme combines a modified move-to-front algorithm with Huffman coding. This algorithm is applicable in storage systems where the data compression is performed on block level with short block sizes, in particular, in non-volatile memories. For block sizes in the range of 1 kB, it provides a compression gain comparable to the Lempel–Ziv–Welch algorithm. Moreover, encoder and decoder architectures are proposed that have low memory requirements and provide fast data encoding and decoding.
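The paper's modification of the move-to-front stage is not detailed in the abstract; as a plain illustration of the classic transform it builds on (all names here are illustrative, and the Huffman stage is omitted):

```python
def mtf_encode(data):
    """Move-to-front transform: recently used symbols get small indices,
    which skews the output distribution in favour of entropy coding."""
    alphabet = list(range(256))
    out = []
    for b in data:
        i = alphabet.index(b)
        out.append(i)
        alphabet.pop(i)
        alphabet.insert(0, b)     # move the symbol to the front
    return out

def mtf_decode(indices):
    """Inverse transform: replay the same list updates."""
    alphabet = list(range(256))
    out = bytearray()
    for i in indices:
        b = alphabet.pop(i)
        out.append(b)
        alphabet.insert(0, b)
    return bytes(out)
```

Runs of repeated bytes map to runs of zeros, which a subsequent Huffman coder can represent with very short codewords.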
This work presents a new concept to implement the elliptic curve point multiplication (PM). This computation is based on a new modular arithmetic over Gaussian integer fields. Gaussian integers are a subset of the complex numbers such that the real and imaginary parts are integers. Since Gaussian integer fields are isomorphic to prime fields, this arithmetic is suitable for many elliptic curves. Representing the key by a Gaussian integer expansion is beneficial to reduce the computational complexity and the memory requirements of secure hardware implementations, which are robust against attacks. Furthermore, an area-efficient coprocessor design is proposed with an arithmetic unit that enables Montgomery modular arithmetic over Gaussian integers. The proposed architecture and the new arithmetic provide high flexibility, i.e., binary and non-binary key expansions as well as protected and unprotected PM calculations are supported. The proposed coprocessor is a competitive solution for a compact ECC processor suitable for applications in small embedded systems.
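The modular arithmetic over Gaussian integers can be sketched with rounded complex division; a minimal illustration, not the paper's Montgomery arithmetic or coprocessor design (the modulus 3+2j, a Gaussian prime with norm 13, is an illustrative choice):

```python
def gmod(x, pi):
    """Reduce a Gaussian integer x modulo pi via rounded division:
    x mod pi = x - round(x * conj(pi) / |pi|^2) * pi."""
    norm = pi.real**2 + pi.imag**2
    q = x * complex(pi.real, -pi.imag)   # x * conj(pi)
    q_re = round(q.real / norm)
    q_im = round(q.imag / norm)
    return x - complex(q_re, q_im) * pi

def gmul(a, b, pi):
    """Modular multiplication of Gaussian integers."""
    return gmod(a * b, pi)
```

Since 13 = 3² + 2², the residues modulo 3+2j form a field with 13 elements, isomorphic to the prime field GF(13), which is the property the paper exploits for elliptic curve arithmetic.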
The main aim of the research presented in this manuscript is to compare the results of objective and subjective measurement of sleep quality for older adults (65+) in the home environment. A total of 73 nights was evaluated in this study. A device placed under the mattress was used to obtain objective measurement data, and a common question on perceived sleep quality was asked to collect the subjective sleep quality level. The achieved results confirm the correlation between objective and subjective measurement of sleep quality, with an average standard deviation equal to 2 of 10 possible quality points.
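The agreement between the two measurement series can be quantified with a plain Pearson correlation; a minimal sketch (the abstract does not name the statistic used, and the helper below is an assumption for illustration):

```python
import math

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sx = math.sqrt(sum((v - mx) ** 2 for v in x))
    sy = math.sqrt(sum((v - my) ** 2 for v in y))
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    return cov / (sx * sy)
```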
Digitalization is one of the most frequently discussed topics in industry. New technologies, platform concepts and integrated data models enable disruptive business models and drive changes in organization, processes, and tools. The goal is to make a company more efficient, productive and ultimately profitable. However, many companies face the challenge of how to approach digital transformation in a structured way and realize these potential benefits. What they do realize is that Product Lifecycle Management plays a key role in digitalization initiatives, as object, structure and process management along the life cycle is a foundation for many digitalization use cases. The introduced maturity model for assessing a firm's capabilities along the product lifecycle has been used almost two hundred times. It allows a company to compare its performance with an industry-specific benchmark to reveal individual strengths and weaknesses. Furthermore, an empirical study produced multidimensional correlation coefficients, which identify dependencies between business model characteristics and the maturity level of capabilities.
One major realm of Condition Based Maintenance is finding features that reflect the current health state of the asset or component under observation. Most of the existing approaches are accompanied by high computational costs during the different feature processing phases, making them infeasible in a real-world scenario. In this paper a feature generation method is evaluated that compensates for two problems: (1) storing and handling large amounts of data and (2) computational complexity. Both problems arise, for example, when electromagnetic solenoids are artificially aged and health indicators have to be extracted, or when multiple identical solenoids have to be monitored. To overcome these problems, Compressed Sensing (CS), a research field that is constantly expanding into new applications, is employed. CS is a data compression technique that allows original signal reconstruction with far fewer samples than Shannon-Nyquist dictates, when certain criteria are met. By applying this method to the measured solenoid coil current, raw data vectors can be reduced to a much smaller set of samples that still contains enough information for proper reconstruction. The obtained CS vector is also assumed to contain enough relevant information about solenoid degradation and faults, allowing CS samples to be used as input to fault detection or remaining useful life estimation routines. The paper gives results demonstrating compression and reconstruction of coil current measurements and outlines the application of CS samples as condition monitoring data by determining deterioration and fault related features. Nevertheless, some unresolved issues remain regarding information loss during the compression stage, the design of the compression method itself and its influence on diagnostic/prognostic methods.
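Sparse reconstruction from compressed samples can be sketched with orthogonal matching pursuit, a standard CS recovery method; a toy illustration, not the paper's pipeline (the orthonormal measurement matrix below is chosen only so that exact recovery is guaranteed in the test, which real CS matrices do not provide):

```python
import numpy as np

def omp(Phi, y, k):
    """Orthogonal matching pursuit: estimate a k-sparse x from y = Phi @ x."""
    residual = y.astype(float).copy()
    support = []
    coef = np.zeros(0)
    for _ in range(k):
        # pick the column most correlated with the current residual
        j = int(np.argmax(np.abs(Phi.T @ residual)))
        if j not in support:
            support.append(j)
        # least-squares fit restricted to the selected columns
        coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        residual = y - Phi[:, support] @ coef
    x_hat = np.zeros(Phi.shape[1])
    x_hat[support] = coef
    return x_hat
```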
A conceptual framework for indigenous ecotourism projects – a case study in Wayanad, Kerala, India
(2020)
This paper analyses indigenous ecotourism in the Indian district of Wayanad, Kerala, using a conceptual framework based on a PATA 2015 study on indigenous tourism that includes the criteria: human rights, participation, business and ecology. Detailed indicator sets for each criterion are applied to a case study of the Priyadarshini Tea Environs with a qualitative research approach addressing stakeholders from the public sector, non-governmental organisations, academia, tour operators and communities including Adivasi and non-Adivasi. In-depth interviews were supported by participant and non-participant observations. The authors adapted this framework to the needs of the case study and consider that this modified version is a useful tool for academics and practitioners wishing to evaluate and develop indigenous ecotourism projects. The results show that the Adivasi involved in the Priyadarshini Tea Environs project benefit from indigenous ecotourism. But they could profit more if they had more involvement in and control of the whole tourism value chain.
Research credits corporate entrepreneurship (CE) with enabling established companies to create new types of innovation. Scholars have focused on the organizational design of CE activities, proposing specific organizational units. These semi-autonomous units create a tense management situation between the core organization and its CE activities. Management and organization research considers control a key managerial function that can help here. However, control has received limited research attention regarding CE units, leaving design issues for the appropriate control of CE units unanswered. In this study, we link management control and CE to illustrate how control is understood in the context of CE. For this, we scanned the CE literature to identify underlying attributes and characteristics that allow control for CE to be specified. In a first round, we identified 11 attributes to describe control for CE activities and to derive future research paths.
In many industrial applications a workpiece is continuously fed through a heating zone in order to reach a desired temperature to obtain specific material properties. Many examples of such distributed parameter systems exist in heavy industry, and such processes can also be found in furniture production. In this paper, a real-time capable model for a heating process with application to industrial furniture production is developed. As the model is intended to be used in a Model Predictive Control (MPC) application, the main focus is to achieve minimum computational runtime while maintaining a sufficient amount of accuracy. Thus, the governing Partial Differential Equation (PDE) is discretized using finite differences on a grid specifically tailored to this application. The grid is optimized to yield acceptable accuracy with a minimum number of grid nodes such that a relatively low order model is obtained. Subsequently, an explicit Runge-Kutta ODE (Ordinary Differential Equation) solver of fourth order is compared to the Crank-Nicolson integration scheme presented in Weiss et al. (2022) in terms of runtime and accuracy. Finally, the unknown thermal parameters of the process are estimated using real-world measurement data obtained from an experimental setup. The final model yields acceptable accuracy and at the same time shows promising computation time, which enables its use in an MPC controller.
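The combination of a finite-difference discretisation with an explicit fourth-order Runge-Kutta solver can be illustrated on the 1-D heat equation; a minimal sketch on a uniform grid with fixed (Dirichlet) boundaries (the paper's grid is non-uniform and application-tailored, which this toy example does not reproduce):

```python
import numpy as np

def heat_rhs(u, alpha, dx):
    """du/dt = alpha * d2u/dx2 via central differences; boundary nodes
    are held fixed (Dirichlet), so their rate of change is zero."""
    d = np.zeros_like(u)
    d[1:-1] = alpha * (u[2:] - 2.0 * u[1:-1] + u[:-2]) / dx**2
    return d

def rk4_step(u, dt, alpha, dx):
    """One classical fourth-order Runge-Kutta step of the semi-discrete ODE."""
    k1 = heat_rhs(u, alpha, dx)
    k2 = heat_rhs(u + dt / 2 * k1, alpha, dx)
    k3 = heat_rhs(u + dt / 2 * k2, alpha, dx)
    k4 = heat_rhs(u + dt * k3, alpha, dx)
    return u + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
```

For an explicit scheme the step size must respect the diffusion stability limit (roughly dt < dx²/(2·alpha)), which is the runtime/accuracy trade-off the paper weighs against the implicit Crank-Nicolson scheme.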
This paper describes an early lumping approach for generating a mathematical model of the heating process of a moving dual-layer substrate. The heat is supplied by convection and nonlinearly distributed over the whole considered spatial extent of the substrate. Using CFD simulations as a reference, two different modelling approaches have been investigated in order to identify the most suitable model type. It is shown that, due to the possibility of using the transition matrix for time discretization, an equivalent circuit model achieves superior results compared to the Crank-Nicolson method. In order to maintain a constant sampling time for the envisioned control strategies, the effect of variable speed is transformed into a system description in which the state vector has constant length but a variable number of non-zero entries. The handling of the variable transport speed during the heating process is considered the main contribution of this work. The result is a model suitable for use in future control strategies.
Online-based business models, such as shopping platforms, have added new possibilities for consumers over the last two decades. Aside from basic differences from other distribution channels, customer reviews on such platforms have become a powerful tool, providing consumers with an additional source of transparency. Related research has, for the most part, been labelled under the term electronic word-of-mouth (eWOM). An approach providing a theoretical basis for this phenomenon is presented here. The approach is mainly based on work in the field of consumer culture theory (CCT) and on the concept of co-creation. The work of several authors in these streams of research is used to construct a culturally informed resource-based theory, as advocated by Arnould & Thompson and Algesheimer & Gurâu.
This contribution presents a data compression scheme for applications in non-volatile flash memories. The objective of the data compression algorithm is to reduce the amount of user data such that the redundancy of the error correction coding can be increased in order to improve the reliability of the data storage system. The data compression is performed on block level considering data blocks of 1 kilobyte. We present an encoder architecture that has low memory requirements and provides a fast data encoding.
Large-scale quantum computers threaten the security of today's public-key cryptography. The McEliece cryptosystem is one of the most promising candidates for post-quantum cryptography. However, the McEliece system has the drawback of large key sizes for the public key. Similar to other public-key cryptosystems, the McEliece system has a comparably high computational complexity. Embedded devices often lack the required computational resources to compute those systems with sufficiently low latency. Hence, those systems require hardware acceleration. Lately, a generalized concatenated code construction was proposed together with a restrictive channel model, which allows for much smaller public keys for comparable security levels. In this work, we propose a hardware decoder suitable for a McEliece system based on these generalized concatenated codes. The results show that those systems are suitable for resource-constrained embedded devices.
This work proposes a decoder implementation for high-rate generalized concatenated (GC) codes. The proposed codes are well suited for error correction in flash memories for high reliability data storage. The GC codes are constructed from inner extended binary Bose-Chaudhuri-Hocquenghem (BCH) codes and outer Reed-Solomon (RS) codes. The extended BCH codes enable high-rate GC codes. Moreover, the decoder can take advantage of soft information. For the first three levels of inner codes we propose an optional Chase soft decoder. In this work, the code construction is explained and a decoder architecture is presented. Furthermore, area and throughput results are discussed.
This paper presents the implementation of deep learning methods for sleep stage detection using three signals that can be measured in a non-invasive way: heartbeat signal, respiratory signal, and movement signal. Since the signals are measurements taken over time, the problem is treated as time-series data classification. The deep learning methods chosen to solve the problem are a convolutional neural network and a long short-term memory network. Input data is structured as a time-series sequence of the mentioned signals representing a 30-second epoch, which is the standard interval for sleep analysis. The records used belong to 23 subjects in total, divided into two subsets: records from 18 subjects were used for training and records from 5 subjects for testing. For detecting four sleep stages, REM (Rapid Eye Movement), Wake, Light sleep (Stage 1 and Stage 2), and Deep sleep (Stage 3 and Stage 4), the accuracy of the model is 55% and the F1 score is 44%. For five stages, REM, Stage 1, Stage 2, Deep sleep (Stage 3 and 4), and Wake, the model gives an accuracy of 40% and an F1 score of 37%.
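The epoch structuring described above can be sketched as shaping the three signals into an (epochs, samples, channels) tensor, the usual input layout for such networks; a toy example (the sampling rate and function name are illustrative assumptions, not taken from the paper):

```python
import numpy as np

def to_epoch_tensor(heartbeat, respiration, movement, fs, epoch_s=30):
    """Stack three equally sampled 1-D signals into non-overlapping
    30-second epochs of shape (n_epochs, fs * epoch_s, 3)."""
    n = int(fs * epoch_s)
    k = min(len(heartbeat), len(respiration), len(movement)) // n
    channels = [np.asarray(s[:k * n], dtype=float).reshape(k, n)
                for s in (heartbeat, respiration, movement)]
    return np.stack(channels, axis=-1)
```

Each row of the resulting tensor is one 30-second epoch and would receive one sleep-stage label.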
Modeling a suitable birth density is a challenge when using Bernoulli filters such as the Labeled Multi-Bernoulli (LMB) filter. The birth density of newborn targets is unknown in most applications, but must be given as a prior to the filter. Usually the birth density stays unchanged or is designed based on the measurements from previous time steps.
In this paper, we assume that the true initial state of new objects is normally distributed. The expected value and covariance of the underlying density are unknown parameters. Using the estimated multi-object state of the LMB and the Rauch-Tung-Striebel (RTS) recursion, these parameters are recursively estimated and adapted after a target is detected.
The main contribution of this paper is an algorithm to estimate the parameters of the birth density and its integration into the LMB framework. Monte Carlo simulations are used to evaluate the detection driven adaptive birth density in two scenarios. The approach can also be applied to filters that are able to estimate trajectories.
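The recursive estimation of the birth density's expected value and covariance can be sketched with an online (Welford-style) update over detected initial states; a simplified stand-in for the RTS-based parameter adaptation described in the paper:

```python
import numpy as np

def update_birth_estimate(mean, cov, n, x_new):
    """Online update of the sample mean and (population) covariance of
    detected initial states after observing one more state x_new."""
    n1 = n + 1
    delta = x_new - mean
    mean1 = mean + delta / n1
    cov1 = cov + (np.outer(delta, x_new - mean1) - cov) / n1
    return mean1, cov1, n1
```

The updated mean and covariance would then parameterize the Gaussian birth density handed to the LMB filter at the next time step.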
A flight-like absolute optical frequency reference based on iodine for laser systems at 1064 nm
(2017)
We present an absolute optical frequency reference based on precision spectroscopy of hyperfine transitions in molecular iodine 127I2 for laser systems operating at 1064 nm. A quasi-monolithic spectroscopy setup was developed, integrated, and tested with respect to potential deployment in space missions that require frequency stable laser systems. We report on environmental tests of the setup and its frequency stability and reproducibility before and after each test. Furthermore, we report on the first measurements of the frequency stability of the iodine reference with an unsaturated absorption cell which will greatly simplify its application in space missions. Our frequency reference fulfills the requirements on the frequency stability for planned space missions such as LISA or NGGM.
This thesis deals with the problem of tracking multiple extended objects. For instance, this tracking problem occurs when a car with sensors drives on the road and detects multiple other cars in front of it. When the setup between the sensor and the other cars is such that multiple measurements are created by each single car, the cars are called extended objects. This can occur in real-world scenarios, mainly with the use of high-resolution sensors in near-field applications. In such a near-field scenario, a single object occupies several resolution cells of the sensor, so that multiple measurements are generated per scan. The measurements are additionally superimposed by the sensor's noise. Besides the object-generated measurements, false alarms occur, which are not caused by any object, and sometimes single objects are missed in a sensor scan, so that they do not generate any measurements.
To handle these scenarios, object tracking filters are needed to process the sensor measurements in order to obtain a stable and accurate estimate of the objects in each sensor scan. The scope of this thesis is to implement such a tracking filter that handles extended objects, i.e. a filter that estimates their positions and extents. In this context, the topic of measurement partitioning arises, which is a pre-processing of the measurement data. With partitioning, the measurements that were likely generated by one object are put into one cluster, also called a cell. The obtained cells are then processed by the tracking filter in the estimation process. The partitioning of measurement data is crucial for the performance of the tracking filter, because insufficient partitioning leads to poor tracking performance, i.e. inaccurate object estimates.
In this thesis, a Gaussian inverse Wishart Probability Hypothesis Density (GIW-PHD) filter was implemented to handle the multiple extended object tracking problem. Within this filter framework, the set of objects is modelled as a Random Finite Set (RFS) and the objects' extents as random matrices (RM). The partitioning methods used to cluster the measurement data are existing ones as well as a new approach based on likelihood sampling methods. The applied classical heuristic methods are Distance Partitioning (DP) and Sub-Partitioning (SP), whereas the proposed likelihood-based approach is called Stochastic Partitioning (StP). The latter was developed in this thesis based on the Stochastic Optimisation approach by Granström et al. An implementation, including the StP method and its integration into the filter framework, is provided within this thesis.
The implementations, using the different partitioning methods, were tested on simulated random multi-object scenarios and in a fixed parallel tracking scenario using Monte Carlo methods. Furthermore, a runtime analysis was carried out to provide insight into the computational effort of the different partitioning methods. It showed that the StP method outperforms the classical partitioning methods in scenarios where the objects move spatially close to each other. The filter using StP performs more stably and delivers more accurate estimates. However, this advantage comes with a higher computational effort compared to the classical heuristic partitioning methods.
Error correction coding (ECC) for optical communication and persistent storage systems requires high-rate codes that enable high data throughput and low residual error rates. Recently, different concatenated coding schemes were proposed that are based on binary Bose-Chaudhuri-Hocquenghem (BCH) codes with low error correcting capabilities. Commonly, hardware implementations for BCH decoding are based on the Berlekamp-Massey algorithm (BMA). However, for single, double, and triple error correcting BCH codes, Peterson's algorithm can be more efficient than the BMA. The known hardware architectures of Peterson's algorithm require Galois field inversion. This inversion dominates the hardware complexity and limits the decoding speed. This work proposes an inversion-less version of Peterson's algorithm. Moreover, a decoding architecture is presented that is faster than decoders that employ inversion or the fully parallel BMA at a comparable circuit size.
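The inversion-free idea can be illustrated for a double-error-correcting BCH code over GF(16): Peterson's solution σ1 = S1, σ2 = (S1³ + S3)/S1 is scaled through by S1, which removes the division without moving the error locator's roots. A small software sketch of this principle, not the paper's hardware architecture:

```python
def build_gf16():
    """exp/log tables for GF(16) with primitive polynomial x^4 + x + 1."""
    exp, log = [0] * 15, [0] * 16
    x = 1
    for i in range(15):
        exp[i], log[x] = x, i
        x <<= 1
        if x & 0x10:
            x ^= 0x13   # reduce modulo x^4 + x + 1
    return exp, log

EXP, LOG = build_gf16()

def gf_mul(a, b):
    return 0 if a == 0 or b == 0 else EXP[(LOG[a] + LOG[b]) % 15]

def peterson_no_inverse(S1, S3):
    """Scaled locator S1 + S1^2*x + (S1^3 + S3)*x^2: same roots, no division."""
    S1sq = gf_mul(S1, S1)
    return [S1, S1sq, gf_mul(S1sq, S1) ^ S3]

def chien_search(L, n=15):
    """Return the error positions i with L(alpha^-i) = 0."""
    pos = []
    for i in range(n):
        x = EXP[(n - i) % n]   # alpha^{-i}
        if L[0] ^ gf_mul(L[1], x) ^ gf_mul(L[2], gf_mul(x, x)) == 0:
            pos.append(i)
    return pos
```

Because scaling a polynomial by a nonzero constant leaves its roots unchanged, the Chien search finds the same error positions as the unscaled locator, which is what makes the inversion dispensable.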
The digital transformation of business processes and the integration of IT systems create opportunities and risks for small and medium-sized enterprises (SMEs): risks that can result from a lack of IT Governance, Risk and Compliance (IT-GRC). The purpose of this paper is to present the current state of the research project, which follows the Design Science Research approach according to Hevner. Building on the Problem Identification and Objectives phases, this paper deals with the development of an artefact and thus presents the draft of the Design phase. The artefact will be developed by selecting relevant existing frameworks and standards and by identifying SME-specific conditions.
Introduction. Despite its high accuracy, polysomnography (PSG) has several drawbacks for diagnosing obstructive sleep apnea (OSA). Consequently, multiple portable monitors (PMs) have been proposed. Objective. This systematic review aims to investigate the current literature to analyze the sets of physiological parameters captured by a PM to select the minimum number of such physiological signals while maintaining accurate results in OSA detection. Methods. Inclusion and exclusion criteria for the selection of publications were established prior to the search. The evaluation of the publications was made based on one central question and several specific questions. Results. The abilities to detect hypopneas, sleep time, or awakenings were some of the features studied to investigate the full functionality of the PMs to select the most relevant set of physiological signals. Based on the physiological parameters collected (one to six), the PMs were classified into sets according to the level of evidence. The advantages and the disadvantages of each possible set of signals were explained by answering the research questions proposed in the methods. Conclusions. The minimum number of physiological signals detected by PMs for the detection of OSA depends mainly on the purpose and context of the sleep study. The set of three physiological signals showed the best results in the detection of OSA.
In several organizations, business workgroups autonomously implement information technology (IT) outside the purview of the IT department. Shadow IT, evolving as a type of workaround from nontransparent and unapproved end-user computing (EUC), is a term used to refer to this phenomenon, which challenges norms relative to IT controllability. This report describes shadow IT based on case studies of three companies and investigates its management. In 62% of cases, companies decided to reengineer detected instances or reallocate related subtasks to their IT department. Considerations of risks and transaction cost economics with regard to specificity, uncertainty, and scope explain these actions and the resulting coordination of IT responsibilities between the business workgroups and IT departments. This turns shadow IT into controlled business-managed IT activities and enhances EUC management. The results contribute to the governance of IT task responsibilities and provide a way to formalize the role of workarounds in business workgroups.
The McEliece cryptosystem is a promising candidate for post-quantum public-key encryption. In this work, we propose q-ary codes over Gaussian integers for the McEliece system together with a new channel model, the one Mannheim error channel, in which errors are limited to Mannheim weight one. We investigate the channel capacity of this channel and discuss its relation to the McEliece system. The proposed codes are based on a simple product code construction and have a low complexity decoding algorithm. For the one Mannheim error channel, these codes achieve a higher error correction capability than maximum distance separable codes with bounded minimum distance decoding. This improves the work factor regarding decoding attacks based on information-set decoding.
In this thesis, a new framework has been proposed, designed and developed for creating efficient and cost-effective logistics chains for long items within the building industry. The building industry handles many long items, such as pipes and profiles. Handling these long items is complicated and difficult because they are bulky, unstable and heavy, so it is neither cost-effective nor efficient to handle them manually. Existing planning frameworks ignore the special requirements of such goods and are not designed for handling them, which means that many additional manual handling steps are currently required for long items. It is therefore important to develop a new framework for creating efficient and cost-effective logistics chains for long items. To propose such a framework, expert interviews were conducted to gain a full understanding of the customer requirements. Experts from all stages of the building industry supply chain were interviewed. The data collected from the expert interviews was analysed, and the findings about the customer requirements served as valuable inputs for the proposed framework. To gain full knowledge of current practices, all existing planning frameworks were analysed and evaluated using SWOT analysis: their strengths, weaknesses, opportunities and threats were comparatively analysed and evaluated, and the findings were used in proposing, designing and developing the new framework. Considerable effort went into the implementation stage, where six key parameters for a successful implementation were identified:
- Improvement process with employees
- Control of the improvements
- Gifts/money for the improvements and additional work
- KAIZEN workshops
- Motivation of the employees for improvements
- Presentation of the results
Among these six parameters, KAIZEN workshops were found to be a very effective way of creating an efficient and cost-effective logistics chain for long items. It is believed that the new framework can be used for planning logistics that handle long items and commercial goods, and for planning all kinds of in-house logistics processes, from incoming goods, storage, picking and delivery combination areas through to the outgoing goods area. The achievements of this project are as follows: (1) the new framework for creating efficient and cost-effective logistics chains for long items, (2) the data collection and evaluation in the preliminary planning, (3) the decision for one planning variant already at the end of the structure planning, (4) the analysis and evaluation of customer requirements, (5) the consideration and implementation of the customer requirements in the new framework, (6) the creation of figures and tables as a planning guideline, (7) the research and further development of Minomi with regard to long items, (8) the research on the information flow, (9) the classification of the improvements and the improvement handling during implementation, (10) the identification of key parameters for a successful implementation of the planning framework. The framework has been evaluated both theoretically and through a case study of planning a logistics system for handling long items and commercial goods. It was found to be theoretically sound and practically valuable, and it can be applied to creating logistics systems for long items, especially in the building industry.
A new thermal shock application-oriented testing method for ceramic components and refractories
(2019)
Ceramics and refractories are often used in high-temperature applications such as industrial furnaces, so the thermomechanical behaviour and heat resistance of ceramic and refractory materials are important. This material behaviour is described by the thermal stress resistance. Established material tests for determining thermal shock behaviour are complex and do not yield characteristic figures. The potential of application-oriented material testing combined with simulation, with a transfer from ceramics to refractories, is described below. The combination of model-based simulation with applied material testing offers numerous advantages. On the one hand, the design of the test setup is supported by the simulation, resulting in a goal- and application-oriented test setup. On the other hand, the iterative approach allows model verification with the help of the applied material testing. The simulation shows that the transfer from ceramics to refractory materials is possible and yields results consistent with the literature. The design reliability of the components is thereby improved, since different loads can first be simulated in the model in combination with a variety of materials and geometries, substituting complex and expensive preliminary tests. As a result, verified models offer great savings potential in terms of time to market, development expenses and use of raw materials. Importantly, the method is suitable for both technical ceramics and refractory materials.
Twenty-first century infrastructure needs to respond to changing demographics and become climate neutral, resilient and economically affordable, while remaining a driver for development and shared prosperity. However, the infrastructure sector remains one of the least innovative and digitalised, plagued by delays, cost overruns and benefit shortfalls (Cantarelli et al., 2008; Flyvbjerg, 2007; Flyvbjerg et al., 2003; Flyvbjerg et al., 2004). The root cause is the prevailing fragmentation of the infrastructure sector (Fellows and Liu, 2012). To help overcome these challenges, integration of the value chain is needed. This could be achieved through a use-case-based creation of federated ecosystems connecting open and trusted data spaces and advanced services applied to infrastructure projects. Such digital platforms enable full-lifecycle participation and responsible governance guided by a shared infrastructure vision. Digital federation enables secure and sovereign data exchange and thus collaboration across the silos within the infrastructure sector, between industries, and within and between countries. Such an approach to infrastructure technology policy does not rely on technological solutionism but proposes the development of open and trusted data alliances. Federated data spaces provide access to the emerging data economy, especially for SMEs, and can foster the innovation of new digital services. Such responsible digital governance can help make the infrastructure sector more resilient, efficient and aligned with the realisation of ambitious decarbonisation and environmental protection targets. The European Union and the United States have already developed architectures for sovereign and secure data exchange.
Observer-based self sensing for digital (on–off) single-coil solenoid valves is investigated. Self sensing refers to the case where merely the driving signals used to energize the actuator (voltage and coil current) are available to obtain estimates of both the position and velocity. A novel observer approach for estimating the position and velocity from the driving signals is presented, where the dynamics of the mechanical subsystem can be neglected in the model. Both the effect of eddy currents and saturation effects are taken into account in the observer model. Practical experimental results are shown and the new method is compared with a full-order sliding mode observer.
Cardiovascular diseases are directly or indirectly responsible for up to 38.5% of all deaths in Germany and thus represent the most frequent cause of death. At present, heart diseases are mainly discovered by chance during routine visits to the doctor or when acute symptoms occur. However, there is no practical method to proactively detect diseases or abnormalities of the heart in the daily environment and to take preventive measures for the person concerned. Long-term ECG devices, as currently used by physicians, are simply too expensive, impractical, and not widely available for everyday use. This work aims to develop an ECG device suitable for everyday use that can be worn directly on the body. For this purpose, an already existing hardware platform will be analyzed, and the corresponding potential for improvement will be identified. A precise picture of the existing data quality is obtained by metrological examination, and corresponding requirements are defined. Based on these identified optimization potentials, a new ECG device is developed. The revised ECG device is characterized by a high integration density and combines all components directly on one board except the battery and the ECG electrodes. The compact design allows the device to be attached directly to the chest. An integrated microcontroller allows digital signal processing without the need for an additional computer. Central features of the evaluation are a peak detection for detecting R-peaks and a calculation of the current heart rate based on the RR interval. To ensure the validity of the detected R-peaks, a model of the anatomical conditions is used. Thus, unrealistic RR-intervals can be excluded. The wireless interface allows continuous transmission of the calculated heart rate. Following the development of hardware and software, the results are verified, and appropriate conclusions about the data quality are drawn. 
As a result, a very compact and wearable ECG device with different wireless technologies, data storage, and evaluation of RR intervals was developed. Tests yielded runtimes of up to 24 hours with wireless LAN activated and streaming enabled.
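The RR-interval plausibility check described above can be sketched as follows. This is an illustrative stand-in, not the device firmware: the threshold ratio and the physiological RR bounds (0.3 s to 2.0 s, i.e. 30 to 200 bpm) are assumed values.

```python
import numpy as np

def detect_r_peaks(sig, fs, thresh_ratio=0.6, rr_min=0.3, rr_max=2.0):
    """Threshold-based R-peak candidates, then rejection of
    physiologically implausible RR intervals (assumed bounds)."""
    thresh = thresh_ratio * np.max(sig)
    peaks = []
    for i in range(1, len(sig) - 1):
        if sig[i] >= thresh and sig[i] > sig[i - 1] and sig[i] >= sig[i + 1]:
            if peaks and (i - peaks[-1]) / fs < rr_min:
                continue  # too close to the previous peak: refractory period
            peaks.append(i)
    rr = np.diff(peaks) / fs                      # RR intervals in seconds
    valid = (rr >= rr_min) & (rr <= rr_max)       # anatomical plausibility
    hr = 60.0 / rr[valid].mean() if valid.any() else None
    return peaks, hr
```

On a synthetic signal with one spike per second at 250 Hz sampling, this reports a heart rate of 60 bpm.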
We propose and apply a requirements engineering approach that focuses on security and privacy properties and takes into account various stakeholder interests. The proposed methodology facilitates the integration of security and privacy by design into the requirements engineering process. Thus, specific, detailed security and privacy requirements can be implemented from the very beginning of a software project. The method is applied to an exemplary application scenario in the logistics industry. The approach includes the application of threat and risk rating methodologies, a technique to derive technical requirements from legal texts, as well as a matching process to avoid duplication and accumulate all essential requirements.
Background
This is a systematic review protocol to identify automated features, applied technologies, and algorithms in the electronic early warning/track and triage system (EW/TTS) developed to predict clinical deterioration (CD).
Methodology
This study will be conducted using the PubMed, Scopus, and Web of Science databases to evaluate EW/TTSs in terms of their automated features, technologies, and algorithms. To this end, we will include any English-language articles reporting an EW/TTS, without time limitation. Retrieved records will be independently screened by two authors, and relevant data will be extracted and abstracted for further analysis. The included articles will be evaluated independently by two researchers using the JBI critical appraisal checklist.
Discussion
This study is an effort to catalogue the automated features available in electronic EW/TTSs and to shed light on the applied technologies, the systems' level of automation, and the utilized algorithms, in order to smooth the road toward a fully automated EW/TTS as a potential means of preventing CD and its adverse consequences.
This paper presents a modeling approach of an industrial heating process where a stripe-shaped workpiece is heated up to a specific temperature by applying hot air through a nozzle. The workpiece is moving through the heating zone and is considered to be of infinite length. The speed of the substrate is varying over time. The derived model is supposed to be computationally cheap to enable its use in a model-based control setting. We start by formulating the governing PDE and the corresponding boundary conditions. The PDE is then discretized on a spatial grid using finite differences and two different integration schemes, explicit and implicit, are derived. The two models are evaluated in terms of computational effort and accuracy. It turns out that the implicit approach is favorable for the regarded process. We optimize the grid of the model to achieve a low number of grid nodes while maintaining a sufficient amount of accuracy. Finally, the thermodynamical parameters are optimized in order to fit the model's output to real-world data that was obtained by experiments.
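The implicit integration scheme favoured above can be sketched for a one-dimensional advection-diffusion model of the moving strip. All parameter values are assumed for illustration, and the boundary handling is deliberately simplified; this is a sketch of the scheme, not the paper's optimized model.

```python
import numpy as np

def implicit_step(T, dt, dx, alpha, v, h, T_air):
    """One implicit-Euler step of dT/dt = alpha*T_xx - v*T_x + h*(T_air - T),
    with upwind convection (v >= 0) and simplified edge nodes."""
    n = len(T)
    A = np.zeros((n, n))
    for i in range(1, n - 1):
        A[i, i - 1] = alpha / dx**2 + v / dx   # upwind advection from the left
        A[i, i] = -2 * alpha / dx**2 - v / dx - h
        A[i, i + 1] = alpha / dx**2
    A[0, 0] = A[-1, -1] = -h                   # edge nodes: heat exchange only
    rhs = T + dt * h * T_air
    # Implicit Euler: (I - dt*A) @ T_next = T + dt*h*T_air
    return np.linalg.solve(np.eye(n) - dt * A, rhs)
```

The implicit solve costs one linear system per step but remains stable for large time steps, which is the property that makes it attractive for model-based control.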
A semilinear distributed parameter approach for solenoid valve control including saturation effects
(2015)
In this paper a semilinear parabolic PDE for the control of solenoid valves is presented. The distributed parameter model of the cylinder becomes nonlinear by the inclusion of saturation effects due to the material's B/H-curve. A flatness based solution of the semilinear PDE is shown as well as a convergence proof of its series solution. By numerical simulation results the adaptability of the approach is demonstrated, and differences between the linear and the nonlinear case are discussed. The major contribution of this paper is the inclusion of saturation effects into the magnetic field governing linear diffusion equation, and the development of a flatness based solution for the resulting semilinear PDE as an extension of previous works [1] and [2].
Sleep quality and, more generally, behaviour in bed can be assessed using sleep state analysis. The results can help a subject regulate sleep and recognize different sleeping disorders. In this work, a sensor grid for pressure and movement detection supporting sleep phase analysis is proposed. In comparison to polysomnography (PSG), the leading standard measuring system, the system proposed in this project is a non-invasive sleep monitoring device. For continuous analysis or home use, PSG and wearable actigraphy devices tend to be uncomfortable, and they are also very expensive. The system presented in this work classifies respiration and body movement with only one type of sensor, a low-cost pressure sensor suitable for commercial purposes, in a non-invasive way. The system was tested in an experiment recording the sleep process of a subject; the recordings showed the potential for classifying breathing rate and body movements. Although previous research has used pressure sensors to recognize posture and breathing, the sensors have mostly been positioned between the mattress and the bedsheet. This project, however, shows an innovative way to position the sensors under the mattress.
Creating cages that enclose a 3D model is part of many preprocessing pipelines in computational geometry. A cage of lower resolution than the original model is of special interest when performing an operation on the original model directly would be too costly: the desired operation can be applied to the cage first and then transferred to the enclosed model. In this paper the authors present a short survey of recent and well-known methods for cage computation, aiming to give the reader an insight into common methods and their differences.
A constructive method for the design of nonlinear observers is discussed. To formulate conditions for the construction of the observer gains, stability results for nonlinear singularly perturbed systems are utilised. The nonlinear observer is designed directly in the given coordinates, where the error dynamics between the plant and the observer becomes singularly perturbed by a high-gain part of the observer injection, and the information of the slow manifold is exploited to construct the observer gains of the reduced-order dynamics. This is in contrast to typical high-gain observer approaches, where the observer gains are chosen such that the nonlinearities are dominated by a linear system. It will be demonstrated that the considered approach is particularly suited for self-sensing electromechanical systems. Two variants of the proposed observer design are illustrated for a nonlinear electromagnetic actuator, where the mechanical quantities, i.e. the position and the velocity, are not measured.
This paper proposes a soft input decoding algorithm and a decoder architecture for generalized concatenated (GC) codes. The GC codes are constructed from inner nested binary Bose-Chaudhuri-Hocquenghem (BCH) codes and outer Reed-Solomon codes. In order to enable soft input decoding for the inner BCH block codes, a sequential stack decoding algorithm is used. Ordinary stack decoding of binary block codes requires the complete trellis of the code. In this paper, a representation of the block codes based on the trellises of supercodes is proposed in order to reduce the memory requirements for the representation of the BCH codes. This enables an efficient hardware implementation. The results for the decoding performance of the overall GC code are presented. Furthermore, a hardware architecture of the GC decoder is proposed. The proposed decoder is well suited for applications that require very low residual error rates.
Generalized concatenated (GC) codes with soft-input decoding were recently proposed for error correction in flash memories. This work proposes a soft-input decoder for GC codes that is based on a low-complexity bit-flipping procedure. This bit-flipping decoder uses a fixed number of test patterns and an algebraic decoder for soft-input decoding. An acceptance criterion for the final candidate codeword is proposed. Combined with error and erasure decoding of the outer Reed-Solomon codes, this bit-flipping decoder can improve the decoding performance and reduce the decoding complexity compared to the previously proposed sequential decoding. The bit-flipping decoder achieves a decoding performance similar to a maximum likelihood decoder for the inner codes.
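The bit-flipping idea with a fixed number of test patterns can be sketched as below. The single-parity-check "inner code" and the soft-distance metric are illustrative assumptions for the sketch; the paper's inner codes are nested BCH codes with an algebraic decoder.

```python
import numpy as np
from itertools import product

def bit_flip_decode(llr, hard_decode, n_flip=3):
    """Chase-style soft-input decoding: flip the n_flip least reliable
    bits in all combinations, hard-decode each test pattern, and keep
    the valid candidate with the smallest soft-distance metric."""
    hard = (np.asarray(llr) < 0).astype(int)   # hard decisions
    rel = np.abs(llr)                          # bit reliabilities
    weak = np.argsort(rel)[:n_flip]            # least reliable positions
    best, best_metric = None, np.inf
    for pattern in product([0, 1], repeat=n_flip):
        cand = hard.copy()
        cand[weak] ^= np.asarray(pattern)      # apply test pattern
        cw = hard_decode(cand)                 # algebraic inner decoder
        if cw is None:
            continue                           # decoding failure
        metric = rel[cw != hard].sum()         # penalty for flipped bits
        if metric < best_metric:
            best, best_metric = cw, metric
    return best
```

With 2^n_flip test patterns the search is fixed-size and hardware-friendly, which is the complexity advantage over sequential stack decoding mentioned above.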
The introduction of multi-level cell (MLC) and triple-level cell (TLC) technologies reduced the reliability of flash memories significantly compared with single-level cell flash. With MLC and TLC flash cells, the error probability varies between the different states. Hence, asymmetric models such as the binary asymmetric channel (BAC) are required to characterize the flash channel. This contribution presents a combined channel and source coding approach that improves the reliability of MLC and TLC flash memories. With flash memories, data compression has to be performed at the block level on short data blocks; we present a coding scheme suitable for blocks of 1 kB of data. The objective of the data compression algorithm is to reduce the amount of user data so that the redundancy of the error correction coding can be increased, improving the reliability of the data storage system. Moreover, data compression can be utilized to exploit the asymmetry of the channel and reduce the error probability. For redundant data, the proposed combined coding scheme results in a significant improvement of the program/erase cycling endurance and the data retention time of flash memories.
This paper considers intervals of real matrices with respect to partial orders and the problem to infer from some exposed matrices lying on the boundary of such an interval that all real matrices taken from the interval possess a certain property. In many cases such a property requires that the chosen matrices have an identically signed inverse. We also briefly survey related problems, e.g., the invariance of matrix properties under entry-wise perturbations.
The development of automatic solutions for the detection of physiological events of interest is booming. Improvements in the collection and storage of large amounts of healthcare data allow access to these data faster and more efficiently. This fact means that the development of artificial intelligence models for the detection and monitoring of a large number of pathologies is becoming increasingly common in the medical field. In particular, developing deep learning models for detecting obstructive apnea (OSA) events is at the forefront. Numerous scientific studies focus on the architecture of the models and the results that these models can provide in terms of OSA classification and Apnea-Hypopnea-Index (AHI) calculation. However, little focus is put on other aspects of great relevance that are crucial for the training and performance of the models. Among these aspects can be found the set of physiological signals used and the preprocessing tasks prior to model training. This paper covers the essential requirements that must be considered before training the deep learning model for obstructive sleep apnea detection, in addition to covering solutions that currently exist in the scientific literature by analyzing the preprocessing tasks prior to training.
The present contribution proposes a novel method for the indirect measurement of the ground reaction forces (GRF) induced by a pedestrian walking on a vibrating structure. Its main idea is to formulate and solve an inverse problem in the time domain with the aim of finding the optimal time-dependent moving point force describing the GRF of a pedestrian (input data) which minimizes the difference between a set of computed and a set of measured structural responses (output data). The inverse problem is solved by means of a gradient-based trust-region optimization strategy. The moving force identification process uses output data from a set of acceleration and displacement time histories recorded at different locations on the structure. The practicability and accuracy of the proposed GRF identification method were first evaluated using simulated measurements, which revealed high accuracy, robustness and stability of the results even at high noise levels. Subsequently, a comprehensive experimental validation using real measurement data recorded on the HUMVIB experimental footbridge on the campus of the Technical University of Darmstadt (Germany) was carried out. Besides the conventional sensors for acquiring structural responses, an array of biomechanical force plates as well as classical load cells at the supports were used to measure the reference GRFs needed in the experimental validation. The results show that the proposed method delivers a very accurate estimation of the GRF induced by a subject walking on the experimental structure.
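For a linearized version of such an inverse problem, the core least-squares step can be sketched as follows. The Tikhonov-regularized solve is an illustrative stand-in for the paper's gradient-based trust-region iteration, and the response matrix is a hypothetical discretized structure model.

```python
import numpy as np

def identify_force(A, measured, reg=1e-8):
    """Recover a sampled force history f from structural responses r,
    assuming a linear response model r = A @ f, via regularized least
    squares (Tikhonov): minimize ||A f - r||^2 + reg * ||f||^2."""
    n = A.shape[1]
    # Normal equations with a small regularization term for stability
    return np.linalg.solve(A.T @ A + reg * np.eye(n), A.T @ measured)
```

The regularization term plays the same role as the trust region in the paper's formulation: it keeps the identified force stable when the measured responses are noisy.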
Uzbekistan is an emerging tourism destination that has experienced a strong increase in tourists since 2017. However, little research on tourism development in Uzbekistan exists to date. This study therefore analyzes possible research topics and proposes a tourism research agenda for Uzbekistan. A mix of methods was used, consisting of participant observation, semi-structured qualitative expert interviews and qualitative content analysis. The results revealed a variety of research deficits in different areas, which could be synthesized into a total of ten research fields clustered into three overarching areas, namely market research, management, and culture & environment. The subordinate research fields identified are Demand, Statistics, Potentials, Governance, Products, Infrastructure & Development, Marketing, Heritage & Nation-building, Sustainability, and Peace & Conflict Prevention. A strategic research plan based on this tourism research agenda could help to foster a purposeful scientific debate. Tourism research in these fields has the potential both to investigate and compare theoretical issues in a unique context and to produce applied research results that can make a relevant contribution to tourism development in Uzbekistan.
If the process contains a delay (dead time), the Nyquist criterion is well suited to derive a PI or PID tuning rule because the delay is taken into account without approximation. The speed of the closed loop enters naturally via the crossover frequency, and the goals of robustness and performance are translated into the phase margin.
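As a sketch of how the delay enters the Nyquist-based tuning exactly, the loop's gain crossover and phase margin can be evaluated numerically. The first-order-plus-dead-time plant below is an assumed example, not taken from the text.

```python
import numpy as np

def phase_margin_pi(Kp, Ti, K, T, L):
    """Crossover frequency and phase margin of a PI controller
    Kp*(1 + 1/(Ti*s)) with plant K*exp(-L*s)/(T*s + 1); the dead
    time L contributes its exact phase -w*L, no approximation."""
    w = np.logspace(-3, 3, 20000)
    loop = Kp * (1 + 1 / (1j * w * Ti)) * K * np.exp(-1j * w * L) / (1 + 1j * w * T)
    i = np.argmin(np.abs(np.abs(loop) - 1.0))   # gain-crossover index
    pm = 180.0 + np.degrees(np.angle(loop[i]))  # phase margin in degrees
    return w[i], pm
```

With Kp = 1, Ti = 1 on the plant K = 1, T = 1, L = 0.2, the loop gain is exactly 1/w, so the crossover sits at w = 1 rad/s and the margin is 180° - 45° - 11.5° - 45° ≈ 78.5°.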
We present a 3D laser scan simulation in virtual reality for creating synthetic scans of CAD models. Consisting of the virtual reality head-mounted display Oculus Rift and the motion controller Razer Hydra, our system can be used like common hand-held 3D laser scanners. It supports scanning of triangular meshes as well as B-spline tensor product surfaces based on high-performance ray-casting algorithms. While point clouds produced by known scanning simulations lack the man-made structure, our approach overcomes this problem by imitating real scanning scenarios. Calculation speed, interactivity and the resulting realistic point clouds are the benefits of this system.
ABCdarium of a journey
(2017)
Even though immutability is a desirable property, especially in a multi-threaded environment, implementing immutable Java classes is surprisingly hard because of a lack of language support. We present a static analysis tool using abstract bytecode interpretation that checks Java classes for compliance with a set of rules that together constitute state-based immutability. Being realized as a FindBugs plug-in, the tool can easily be integrated into most IDEs and hence into the software development process. Our evaluation on a large, real-world codebase shows that the average run-time effort for a single class is in the range of a few milliseconds, with only a few statistical spikes.
Nowadays, inexpensive memory space promotes an accelerating growth of stored image data. To exploit the data using supervised machine or deep learning, it needs to be labeled. Manually labeling the vast amount of data is time-consuming and expensive, especially if human experts with specific domain knowledge are indispensable. Active learning addresses this shortcoming by querying the user for the labels of the most informative images first. One way to obtain this 'informativeness' is uncertainty sampling as a query strategy, where the system queries those images it is most uncertain how to classify. In this paper, we present a web-based active learning framework that helps to accelerate the labeling process. After manually labeling some images, the user receives recommendations of further candidates that could potentially be labeled equally (bulk image folder shift). We aim to explore the most efficient 'uncertainty' measure in order to improve the quality of the recommendations, such that all images are sorted with a minimum number of user interactions (clicks). We conducted experiments using a manually labeled reference dataset to evaluate different combinations of classifiers and uncertainty measures. The results clearly show the effectiveness of uncertainty sampling with bulk image shift recommendations (our novel method), which can reduce the number of required clicks to only around 20% compared to manual labeling.
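Uncertainty measures of the kind compared in such experiments can be sketched as follows. The three measures shown (entropy, least confidence, margin) are standard choices in the active learning literature and not necessarily the exact set used in the paper.

```python
import numpy as np

def rank_by_uncertainty(probs, measure="entropy"):
    """Rank samples most-uncertain-first from predicted class
    probabilities of shape (n_samples, n_classes)."""
    p = np.clip(probs, 1e-12, 1.0)
    if measure == "entropy":
        u = -(p * np.log(p)).sum(axis=1)          # Shannon entropy
    elif measure == "least_confidence":
        u = 1.0 - p.max(axis=1)                   # 1 - top probability
    elif measure == "margin":
        s = np.sort(p, axis=1)
        u = 1.0 - (s[:, -1] - s[:, -2])           # small margin = uncertain
    else:
        raise ValueError(measure)
    return np.argsort(-u)                          # most uncertain first
```

The images at the head of this ranking are queried first, so each click from the user resolves as much model uncertainty as possible.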
Sleep is an essential part of human existence, as we are in this state for approximately a third of our lives. Sleep disorders are common conditions that can affect many aspects of life. They are diagnosed in special laboratories with a polysomnography system, a costly procedure requiring much effort from the patient. Several systems have been proposed to address this situation by performing the examination and analysis at the patient's home, using sensors to detect physiological signals that are automatically analysed by algorithms. This work aims to evaluate a contactless respiratory recording system based on an accelerometer sensor for sleep apnea detection. For this purpose, an installation mounted under the bed mattress records the oscillations caused by chest movements during breathing. The presented processing algorithm filters the obtained signals and determines the presence of apnea events. The performance of the developed system and apnea event detection algorithm (average accuracy, specificity and sensitivity of 94.6%, 95.3% and 93.7%, respectively) confirms the suitability of the proposed method and system for further ambulatory and in-home use.
Using multi-camera matching techniques for 3D reconstruction, there is usually a trade-off between the quality of the computed depth map and the speed of the computation. Whereas high-quality matching methods take several seconds to several minutes to compute a depth map for one set of images, real-time methods achieve only low-quality results. In this paper we present a multi-camera matching method that runs in real time and yields high-resolution depth maps. Our method is based on a novel multi-level combination of normalized cross correlation, matching windows deformed according to the multi-level depth map information, and sub-pixel precise disparity maps. The whole process is implemented entirely on the GPU. With this approach we can process four 0.7-megapixel images into a full-resolution 3D depth map in 129 milliseconds. Our technique is tailored to the recognition of non-technical shapes, because our target application is face recognition.
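The core similarity measure, zero-mean normalized cross correlation between two matching windows, can be sketched as below for plain, equally sized patches; the paper additionally deforms the windows using multi-level depth information and runs the computation on the GPU.

```python
import numpy as np

def ncc(patch_a, patch_b):
    """Zero-mean normalized cross correlation of two equally sized
    patches: +1 for identical appearance, -1 for inverted contrast,
    invariant to brightness offset and gain."""
    a = patch_a - patch_a.mean()
    b = patch_b - patch_b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom > 0 else 0.0
```

The brightness and gain invariance is what makes NCC robust across cameras with slightly different exposure, at the cost of a normalization per window.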
The investigation of stress requires distinguishing between stress caused by physical activity and stress caused by psychosocial factors. The heart's response to stress and to physical activity is very similar when the set of monitored parameters is reduced to one. Currently, the differentiation remains difficult, and methods that use only the heart rate are not able to distinguish between stress and physical activity without additional sensor input. The approach focuses on methods that generate signals whose characteristics are useful for detecting stress, physical activity, inactivity and relaxation.
When designing drying processes for sensitive biological foodstuffs like fruit or vegetables, energy and time efficiency as well as product quality are gaining more and more importance. All of these are greatly influenced by the various drying parameters in the process (e.g. air temperature, air velocity and dew point temperature). In the sterilization of food products, the cooking value is widely used as a cross-link between these parameters. In a similar way, the so-called cumulated thermal load (CTL) was introduced for drying processes; this was possible because most quality changes depend mainly on drying air temperature and drying time. In a first approach, the CTL was therefore defined as the time integral of the surface temperature of the agricultural product. When conducting experiments with mangoes and pineapples, however, it was found that the CTL in this form had to be adjusted to a more practical one. The definition of the CTL was therefore improved, and the behaviour of the adjusted CTL (CTLad) was investigated in the drying of pineapples and mangoes. On the basis of these experiments and previous work on the cooking value, it was found that the CTLad requires further optimization before a great variety of different products as well as different quality parameters can be compared.
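In its first form, the CTL is simply the time integral of the measured surface temperature, which can be approximated from sampled data with the trapezoidal rule. The reference-temperature offset below is a hypothetical illustration of the kind of adjustment described, not the published CTLad definition.

```python
import numpy as np

def cumulated_thermal_load(t, T_surface, T_ref=0.0):
    """Trapezoidal approximation of CTL = integral over time of
    (T_surface - T_ref), clipped at zero; T_ref = 0 reproduces the
    original time-integral definition."""
    t = np.asarray(t, dtype=float)
    y = np.maximum(np.asarray(T_surface, dtype=float) - T_ref, 0.0)
    # Trapezoidal rule over the (possibly non-uniform) time samples
    return float(np.sum((t[1:] - t[:-1]) * (y[1:] + y[:-1]) / 2.0))
```

For a constant 60 °C surface over 2 h this gives 120 °C·h, and 40 °C·h with a hypothetical 40 °C reference level.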
An approach for an adaptive position-dependent friction estimation for linear electromagnetic actuators with altered characteristics is proposed in this paper. The objective is to obtain a friction model that can be used to describe different stages of aging of magnetic actuators. It is compared to a classical Stribeck friction model by means of model fit, sensitivity, and parameter correlation. The identifiability of the parameters in the friction model is of special interest since the model is supposed to be used for diagnostic and prognostic purposes. A method based on the Fisher information matrix is employed to analyze the quality of the model structure and the parameter estimates.
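A Fisher-information-based check of parameter identifiability can be sketched as follows for a model whose output sensitivities are collected in a Jacobian. This is a generic sketch under an assumed i.i.d. Gaussian noise model, not the paper's specific friction model.

```python
import numpy as np

def parameter_correlation(jacobian, sigma=1.0):
    """Fisher information F = J^T J / sigma^2 for i.i.d. Gaussian noise;
    its inverse lower-bounds the parameter covariance (Cramer-Rao),
    normalized here to a correlation matrix. Off-diagonal entries near
    +-1 indicate strongly correlated, poorly identifiable parameters."""
    F = jacobian.T @ jacobian / sigma**2
    cov = np.linalg.inv(F)
    d = np.sqrt(np.diag(cov))
    return cov / np.outer(d, d)
```

Inspecting this matrix before fitting shows whether the chosen excitation makes the friction parameters separable, which is exactly what matters for diagnostic use of the model.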
The Lempel-Ziv-Welch (LZW) algorithm is an important dictionary-based data compression approach that is used in many communication and storage systems. The parallel dictionary LZW (PDLZW) algorithm speeds up the LZW encoding by using multiple dictionaries. The PDLZW algorithm applies different dictionaries to store strings of different lengths, where each dictionary stores only strings of the same length. This simplifies the parallel search in the dictionaries for hardware implementations. The compression gain of the PDLZW depends on the partitioning of the address space, i.e. on the sizes of the parallel dictionaries. However, there is no universal partitioning that is optimal for all data sources. This work proposes an address space partitioning technique that optimizes the compression rate of the PDLZW using a Markov model for the data. Numerical results for address spaces with 512, 1024, and 2048 entries demonstrate that the proposed partitioning improves the performance of the PDLZW compared with the original proposal.
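For reference, the classic single-dictionary LZW encoding step that PDLZW parallelizes looks as follows; PDLZW would split this one dictionary into several, one per string length, so that all length classes can be searched concurrently in hardware.

```python
def lzw_encode(data: bytes):
    """Classic LZW: the dictionary starts with all single bytes and
    grows by one entry (current string + next byte) per emitted code."""
    dictionary = {bytes([i]): i for i in range(256)}
    w = b""
    out = []
    for byte in data:
        wc = w + bytes([byte])
        if wc in dictionary:
            w = wc                      # keep extending the match
        else:
            out.append(dictionary[w])   # emit code for longest match
            dictionary[wc] = len(dictionary)
            w = bytes([byte])
    if w:
        out.append(dictionary[w])
    return out
```

For example, `lzw_encode(b"ababab")` emits the codes for `a`, `b`, and then twice the newly created code 256 for `ab`.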
adidas and Reebok
(2016)
Advanced approaches for analysis and form finding of membrane structures with finite elements
(2018)
Part I deals with the material modelling of woven fabric membranes. Due to their structure of crossed yarns embedded in coating, woven fabric membranes are characterised by a highly nonlinear stress-strain behaviour. In order to determine an accurate structural response of membrane structures, a suitable description of the material behaviour is required. A linear elastic orthotropic model approach, which is current practice, only allows a relatively coarse approximation of the material behaviour. The present work focuses on two different material approaches. A first approach emerges by focusing on the meso-scale: the inhomogeneous but periodic structure of woven fabrics motivates microstructural modelling. An established microstructural model is considered and enhanced with regard to the coating stiffness. Secondly, an anisotropic hyperelastic material model for woven fabric membranes is considered. By performing inverse parameter identification, fits of the two material models with respect to measured data from a common biaxial test are shown. The results of the inversely parametrised material models are compared and discussed.
Part II presents an extended approach for a simultaneous form finding and cutting patterning computation of membrane structures. The approach is formulated as an optimisation problem in which both the geometries of the equilibrium and cutting patterning configuration are initially unknown. The design objectives are minimum deviations from prescribed stresses in warp and fill direction along with minimum shear deformation. The equilibrium equations are introduced into the optimisation problem as constraints. Additional design criteria can be formulated (for the geometry of seam lines etc.). Similar to the motivation for the Updated Reference Strategy [4] the described problem is singular in the tangent plane. In both the equilibrium and the cutting patterning configuration finite element nodes can move without changing stresses. Therefore, several approaches are presented to stabilise the algorithm. The overall result of the computation is a stressed equilibrium and an unstressed cutting patterning geometry. The interaction of both configurations is described in Total Lagrangian formulation.
The microstructural model that is the focus of Part I is applied. Based on this approach, information about fibre orientation as well as about fibres ending at cutting edges is available. As a result, more accurate results can be computed than with the simpler approaches commonly used in practice.
This chapter contains three advanced topics in model order reduction (MOR): nonlinear MOR, MOR for multi-terminals (or multi-ports) and finally an application in deriving a nonlinear macromodel covering phase shift when coupling oscillators. The sections are offered in a preferred order for reading, but can be read independently.
This research project has been awarded as part of the research competition organized by Connect2Recover, which is a global initiative by the International Telecommunication Union (ITU) with the priority of reinforcing and strengthening the digital infrastructure and ecosystems of developing countries. Carried out by an international and transdisciplinary research consortium, the project sets out to analyze the prospects of digital federation and data sharing within the context of Botswana. Considering the country’s stage of economic and digital development, the project team identified Botswana’s smallholder agricultural sector as the most important area of digital transformation given the development need of the country’s primary sector.
Drawing on semi-structured interviews, a focus group, and secondary research, the project team developed a digital transformation roadmap based on three development stages: (a) crowdfarming pilot, (b) crowdfarming marketplace, and (c) digital ecosystem for smallholder agriculture. Based on a detailed review of Botswana’s smallholder agriculture and the government’s digitalization strategy, the report envisions each phase, especially the pilot project, in terms of a minimum viable product. This accounts for the low level of digital penetration in Botswana’s primary sector while providing an incentive to connect smallholders with consumers, traders, and retailers.
The project team secured commitment from smallholder farmers, the farmers’ association, and the government, as well as support for the idea of developing a crowdfarming marketplace as a novel production model and, eventually, a digital agriculture ecosystem for smallholder farmers, livestock producers, and agricultural technology companies and start-ups. The report is a proposal for a phase-one pilot project with the objective of advancing smallholder agribusiness in Botswana.
Because process and product innovations are usually no longer sufficient to establish a company in the market or to generate a competitive advantage, Business Model Innovation is considered a powerful tool, especially for start-ups for which innovation is at the core of their business. Due to the complexity of this process, frameworks should help entrepreneurs with executing Business Model Innovation. However, theory and practice diverge. The aim of this paper is to identify the needs of a start-up regarding Business Model Innovation frameworks, underlining the importance of Business Model Innovation for start-ups as well as the relevance of a supporting framework. The research results aim to contribute to an ideal process for Business Model Innovation when applied to start-ups.
Acoustic Echo Cancellation (AEC) plays a crucial role in speech communication devices to enable full-duplex communication. AEC algorithms have been studied extensively in the literature. However, device specific details like microphone or loudspeaker configurations are often neglected, despite their impact on the echo attenuation or near-end speech quality. In this work, we propose a method to investigate different loudspeaker-microphone configurations with respect to their contribution to the overall AEC performance. A generic AEC system consisting of an adaptive filter and a Wiener post filter is used for a fair comparison between different setups. We propose the near-end-to-residual-echo ratio (NRER) and the attenuation-of-near-end (AON) as quality measures for the full-duplex AEC performance.
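The abstract names the NRER and AON measures but does not define them; as an assumption, a generic energy-ratio form in dB for the NRER might look as follows (the names `near_end` and `residual_echo` are hypothetical and denote the near-end speech and the echo remaining after cancellation):

```python
import math

def nrer_db(near_end, residual_echo):
    """Near-end-to-residual-echo ratio in dB: energy of the near-end
    speech relative to the energy of the residual echo (assumed form)."""
    p_near = sum(x * x for x in near_end)
    p_res = sum(x * x for x in residual_echo)
    return 10.0 * math.log10(p_near / p_res)
```

Under this reading, a higher NRER indicates stronger echo attenuation relative to the preserved near-end speech; for instance, near-end samples [1.0, 1.0] against residual samples [0.1, 0.1] give 20 dB.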
Successful project management (PM), one of the most important key competences in the western-oriented working world, is mainly shaped by experience and social skills. For PM training, the degree of practice and realism is therefore crucial if lessons learned are to be applied in a challenging everyday work life. This work presents a recursive approach that adapts well-known principles of PM itself for PM training. Over three years, we have developed a concept and an integrated software system that support our PM university courses. Step by step, it transfers theoretical PM knowledge into realistic project phases by automatically adjusting to the individual learning progress. Our study reveals predictors such as the degree of collaboration or weekend work as vital aspects of progress in PM training. The chosen granularity of project phases, with variance in several dimensions, makes our model a canonical incarnation of seamless learning.
The steadily increasing digitalisation of communication and interaction allows activities in business processes to be captured and executed ever more flexibly and quickly. Technological and organisational drivers such as cloud computing and Industrie 4.0 enable increasingly complex cross-organisational business processes. The effective and efficient involvement of all people concerned (e.g. IT experts, end users) is a decisive success factor here: only if all process participants know the current business processes can their adequate execution be ensured. Traditional business process management (BPM) methods guarantee the necessary balance between flexibility and stability only insufficiently; both current research and application-oriented studies point to the inadequate integration of all participants, their lack of understanding, and the low acceptance of BPM. This dissertation, written within the application-oriented research project „BPM@Cloud“, develops a new method for agile business process management based on modelling business processes in everyday and domain-specific language. The method comprises three components (procedure, modelling language, software tool) and thereby provides holistic support for carrying out BPM projects. By adapting and extending agile concepts from software development, a procedure for the iterative, incremental, and empirical management of business processes is described. Furthermore, a modelling language for business processes is developed that allows business processes to be captured intuitively in the users' own language.
In addition, the implementation of a software prototype enables feedback to be collected directly while business processes are executed. The three complementary components (procedure, language, and software prototype) form a novel basis for improved capture, enrichment, execution, and optimisation of business processes.
The business plan is one of the artifacts most frequently available to innovation intermediaries from technology-based ventures' presentations in their early stages [1]–[4]. Agreement on the evaluation of venturing projects based on business plans depends heavily on the individual perspective of the readers [5], [6]. One reason is that little empirical proof exists for descriptions in business plans that suggest survival of early-stage technology ventures [7]–[9]. We identified descriptions of transaction relations [10]–[13] as an anchor linking the snapshot model of the business plan to business reality [13]. In the early stage, surviving ventures build transaction relations to human resources, financial resources, and suppliers on the input side, and to customers on the output side of the business, towards a stronger ego-centric value network [10]–[13]. We conceptualized a multidimensional measurement instrument that evaluates the maturity of this ego-centric value network based on the transaction relations of different strength levels described in business plans of early-stage technology ventures [13]. In this paper, the research design and the instrument are purified to achieve high agreement in the evaluation of business plans [14]–[16]. As a result, we present an overall research design that can reach acceptable quality for quantitative research. The paper thus contributes to the literature on business analysis in the early stage of technology-based ventures and to the research technique of content analysis.
Aktive Solarenergienutzung
(2015)
BMBF programme "Application-Oriented Research and Development at Universities of Applied Sciences", final report, FKZ 17086 00. Initial situation: Draining settlement areas requires expensive measures to protect settlements and water bodies from flooding and pollutants. With the goals of saving costs and improving water protection, the consulting engineer Dipl.-Ing. (FH) Harald Güthler (1996) from Waldshut-Tiengen developed the "HydrOstyx" restrained-discharge method and filed a patent application in December 1995. The new technical device used in this method is the HydrOstyx flow brake. The HydrOstyx flow brake is a device whose simplicity makes it an inexpensive throttle element in the sewer; it operates largely maintenance-free and reliably without external energy, and it can be installed in existing as well as new sewers. Objectives: Among others, the following goals can be achieved with the HydrOstyx flow brake. Reduction of new construction by activating previously unused storage volume: since the guidelines for the design of stormwater overflow structures in combined sewers according to ATV-Arbeitsblatt A 128 permit such storage volumes to be credited, skilful use of the activated retention volume allows previously planned stormwater tanks to be downsized. Elimination of conflict points in the sewer network: activating retention volume reduces discharge peaks, so overloaded sewers can carry the delayed flow and the replacement or new construction of sewers can be avoided. Reduction of flooding frequencies and of the loads discharged to rivers and streams from stormwater overflows.
Impounding water in the sewer has a retention effect, which often substantially reduces discharge peaks at stormwater overflows into receiving waters. This improves flood protection and water ecology. Optimisation of the efficiency of wastewater treatment plants through better utilisation of the permissible loading: due to the discontinuous inflow of waste and combined water, treatment plants are subject to operational fluctuations that can be balanced by corresponding reserves in the tanks. Activating retention volume reduces shock loads, and in rainy weather a larger share of the polluted water than before can be treated at the plant. The following subtasks were carried out within the BMBF research project: hydraulic investigation of the HydrOstyx flow brake in the hydraulic engineering laboratory of HTWG Konstanz, with the aim of determining the shaft losses and the overflow and discharge coefficients of the HydrOstyx flow brake; evaluation and assessment of the discharge and pollutant-load measurements at the Hoppetenzell and Zizenhausen/Stampfwiese stormwater overflows before and after the installation of four HydrOstyx flow brakes; and pollutant-load calculations by long-term simulation with the KOSIM XL model of the Institut für technisch-wissenschaftliche Hydrologie in Hannover, performed with and without HydrOstyx flow brakes for the catchments of the two stormwater overflows. The hydraulic laboratory investigations and the evaluation of the field measurements were intended to establish the hydraulic and hydrological design principles for the new development. The results also provide an important basis for the water-management assessment of the new method.
Exploitation of the results: Overall, the project showed that HydrOstyx flow brakes offer considerable savings potential both in hydraulic rehabilitation and in stormwater treatment in combined sewer systems. The results of this project will certainly encourage the search for cost-effective alternatives in the further expansion of stormwater treatment and the rehabilitation of combined sewer systems, to which the HydrOstyx restrained-discharge method can make a substantial contribution.
Algorithms and Architectures for Cryptography and Source Coding in Non-Volatile Flash Memories
(2021)
In this work, algorithms and architectures for cryptography and source coding are developed that are suitable for many resource-constrained embedded systems such as non-volatile flash memories. A new concept for elliptic curve cryptography is presented that uses arithmetic over Gaussian integers. Gaussian integers are a subset of the complex numbers with integers as real and imaginary parts. Ordinary modular arithmetic over Gaussian integers is computationally expensive. To reduce the complexity, a new arithmetic based on the Montgomery reduction is presented. For the elliptic curve point multiplication, this arithmetic over Gaussian integers improves the computational efficiency and the resistance against side-channel attacks, and it reduces the memory requirements. Furthermore, an efficient variant of the Lempel-Ziv-Welch (LZW) algorithm for universal lossless data compression is investigated. Instead of one LZW dictionary, this algorithm applies several dictionaries to speed up the encoding process. Two dictionary partitioning techniques are introduced that improve the compression rate and reduce the memory size of this parallel dictionary LZW algorithm.
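For orientation, the textbook single-dictionary LZW encoder that the parallel-dictionary variant accelerates can be sketched as follows (this is the standard baseline algorithm, not the thesis's parallel variant):

```python
def lzw_encode(data: bytes) -> list:
    """Standard LZW: grow a dictionary of byte strings on the fly and
    emit the code of the longest dictionary match at each step."""
    dictionary = {bytes([i]): i for i in range(256)}  # all single bytes
    w, codes = b"", []
    for byte in data:
        wc = w + bytes([byte])
        if wc in dictionary:
            w = wc                        # keep extending the match
        else:
            codes.append(dictionary[w])   # emit longest match so far
            dictionary[wc] = len(dictionary)  # learn the new string
            w = bytes([byte])
    if w:
        codes.append(dictionary[w])
    return codes
```

For example, lzw_encode(b"ABABABA") yields [65, 66, 256, 258]: repeated patterns are replaced by freshly assigned codes. The parallel-dictionary variant described above splits this single growing dictionary into several smaller ones to speed up encoding.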
While existing resource extraction debates have contributed to a better understanding of national economic and political dilemmas and institutional responses, the specific relevance of the various types of mining schemes for rural households, and for the problems those households face, remains poorly understood. Our paper examines perceptions of the effects of gold mining on households in Northern Burkina Faso. The findings of our survey across six districts representing different mining schemes (industrial, artisanal, no mining) highlight that artisanal gold mining can generate job opportunities and cash income for local households, whereas industrial gold mining largely fails to do so. However, the general economic and environmental settings exert a much stronger influence on the state of households. Gold mining effects are perceived as less advantageous in districts where people suffer from a lack of education, higher vulnerability to drought, and poor market access. Our findings provide empirical support for those who back the enhanced formalization of artisanal and small-scale mining (ASM) and policies that entail more rigorous state monitoring of mining concessions, especially in economically and environmentally disadvantaged contexts. Effectively addressing communal and pro-poor development requires greater attention to the political economy of ASM and corporate mining. It also calls for greater inclusion of local mining stakeholders and a more effective alignment of international regulatory and advocacy efforts.
Alles digital – was nun?
(2018)
Alles fließt
(2016)
Allgemeine Geschäftsbedingungen als Instrument der Vereinfachung betrieblicher Vertragsgestaltung
(2018)