A constructive method for the design of nonlinear observers is discussed. To formulate conditions for the construction of the observer gains, stability results for nonlinear singularly perturbed systems are utilised. The nonlinear observer is designed directly in the given coordinates, where the error dynamics between the plant and the observer become singularly perturbed by a high-gain part of the observer injection, and information about the slow manifold is exploited to construct the observer gains of the reduced-order dynamics. This is in contrast to typical high-gain observer approaches, where the observer gains are chosen such that the nonlinearities are dominated by a linear system. It will be demonstrated that the considered approach is particularly suited for self-sensing electromechanical systems. Two variants of the proposed observer design are illustrated for a nonlinear electromagnetic actuator, where the mechanical quantities, i.e. the position and the velocity, are not measured.
This paper proposes a soft input decoding algorithm and a decoder architecture for generalized concatenated (GC) codes. The GC codes are constructed from inner nested binary Bose-Chaudhuri-Hocquenghem (BCH) codes and outer Reed-Solomon codes. In order to enable soft input decoding for the inner BCH block codes, a sequential stack decoding algorithm is used. Ordinary stack decoding of binary block codes requires the complete trellis of the code. In this paper, a representation of the block codes based on the trellises of supercodes is proposed in order to reduce the memory requirements for the representation of the BCH codes. This enables an efficient hardware implementation. The results for the decoding performance of the overall GC code are presented. Furthermore, a hardware architecture of the GC decoder is proposed. The proposed decoder is well suited for applications that require very low residual error rates.
Generalized concatenated (GC) codes with soft-input decoding were recently proposed for error correction in flash memories. This work proposes a soft-input decoder for GC codes that is based on a low-complexity bit-flipping procedure. This bit-flipping decoder uses a fixed number of test patterns and an algebraic decoder for soft-input decoding. An acceptance criterion for the final candidate codeword is proposed. Combined with error and erasure decoding of the outer Reed-Solomon codes, this bit-flipping decoder can improve the decoding performance and reduce the decoding complexity compared to the previously proposed sequential decoding. The bit-flipping decoder achieves a decoding performance similar to a maximum likelihood decoder for the inner codes.
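The fixed-test-pattern idea can be illustrated with a generic Chase-style decoding loop. This is a hedged sketch, not the paper's decoder: the correlation metric, the number of flipped positions, and the `hard_decode` callback (a majority vote for a length-3 repetition code in the test below) are illustrative assumptions.

```python
from itertools import product

def chase_decode(llrs, hard_decode, num_flip=2):
    """Soft-input decoding with a fixed number of test patterns:
    flip combinations of the `num_flip` least reliable bits, run an
    algebraic hard-input decoder on each pattern, and keep the
    candidate closest to the received soft values.

    Convention: positive LLR means bit 0 is more likely.
    """
    hard = [1 if l < 0 else 0 for l in llrs]  # hard decision from LLR signs
    # Indices of the least reliable positions (smallest |LLR|).
    weak = sorted(range(len(llrs)), key=lambda i: abs(llrs[i]))[:num_flip]
    best, best_metric = None, float("-inf")
    for flips in product([0, 1], repeat=num_flip):
        trial = hard[:]
        for idx, f in zip(weak, flips):
            trial[idx] ^= f
        cand = hard_decode(trial)
        if cand is None:
            continue  # decoder failure for this test pattern
        # Correlation between the candidate (mapped to +1/-1) and the LLRs.
        metric = sum((1 - 2 * b) * l for b, l in zip(cand, llrs))
        if metric > best_metric:
            best, best_metric = cand, metric
    return best
```

An acceptance criterion as proposed in the paper would additionally compare `best_metric` against a threshold before accepting the final candidate.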
The introduction of multiple-level cell (MLC) and triple-level cell (TLC) technologies reduced the reliability of flash memories significantly compared with single-level cell flash. With MLC and TLC flash cells, the error probability varies for the different states. Hence, asymmetric models are required to characterize the flash channel, e.g., the binary asymmetric channel (BAC). This contribution presents a combined channel and source coding approach improving the reliability of MLC and TLC flash memories. With flash memories, data compression has to be performed at the block level on short data blocks. We present a coding scheme suitable for blocks of 1 kB of data. The objective of the data compression algorithm is to reduce the amount of user data such that the redundancy of the error correction coding can be increased in order to improve the reliability of the data storage system. Moreover, data compression can be utilized to exploit the asymmetry of the channel to reduce the error probability. With redundant data, the proposed combined coding scheme results in a significant improvement of the program/erase cycling endurance and the data retention time of flash memories.
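As a minimal illustration of why the BAC's asymmetry can be exploited, the expected raw bit error rate depends on the fraction of stored ones. The sketch below uses made-up example probabilities, not measured flash data:

```python
def bac_error_rate(p01: float, p10: float, frac_ones: float) -> float:
    """Expected bit error probability on a binary asymmetric channel
    when a fraction `frac_ones` of the stored bits are 1.

    p01: probability a stored 0 is read as 1
    p10: probability a stored 1 is read as 0
    """
    return (1.0 - frac_ones) * p01 + frac_ones * p10
```

If the 1-to-0 crossover dominates (p10 >> p01), biasing the stored data toward zeros, which the compression headroom makes possible, lowers the expected error rate.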
This paper considers intervals of real matrices with respect to partial orders and the problem to infer from some exposed matrices lying on the boundary of such an interval that all real matrices taken from the interval possess a certain property. In many cases such a property requires that the chosen matrices have an identically signed inverse. We also briefly survey related problems, e.g., the invariance of matrix properties under entry-wise perturbations.
The development of automatic solutions for the detection of physiological events of interest is booming. Improvements in the collection and storage of large amounts of healthcare data allow access to these data faster and more efficiently. As a result, developing artificial intelligence models for the detection and monitoring of a large number of pathologies is becoming increasingly common in the medical field. In particular, the development of deep learning models for detecting obstructive sleep apnea (OSA) events is at the forefront. Numerous scientific studies focus on the architecture of the models and the results that these models can provide in terms of OSA classification and Apnea-Hypopnea Index (AHI) calculation. However, little focus is put on other aspects of great relevance that are crucial for the training and performance of the models, namely the set of physiological signals used and the preprocessing tasks performed prior to model training. This paper covers the essential requirements that must be considered before training a deep learning model for obstructive sleep apnea detection and analyzes the preprocessing solutions that currently exist in the scientific literature.
The present contribution proposes a novel method for the indirect measurement of the ground reaction forces (GRF) induced by a pedestrian during walking on a vibrating structure. Its main idea is to formulate and solve an inverse problem in the time domain with the aim of finding the optimal time-dependent moving point force describing the GRF of a pedestrian (input data), which minimizes the difference between a set of computed and a set of measured structural responses (output data). The solution of the inverse problem is addressed by means of a gradient-based trust region optimization strategy. The moving force identification process uses output data from a set of acceleration and displacement time histories recorded at different locations on the structure. The practicability and the accuracy of the proposed GRF identification method are first evaluated using simulated measurements, which revealed high accuracy, robustness and stability of the results even at high noise levels. Subsequently, a comprehensive experimental validation process using real measurement data recorded on the HUMVIB experimental footbridge on the campus of the Technical University of Darmstadt (Germany) was carried out. Besides the conventional sensors for the acquisition of structural responses, an array of biomechanical force plates as well as classical load cells at the supports were used to measure the reference GRFs needed in the experimental validation process. The results show that the proposed method delivers a very accurate estimation of the GRF induced by a subject during walking on the experimental structure.
Uzbekistan is an emerging tourism destination that has experienced a strong increase in tourists since 2017. However, little research on tourism development in Uzbekistan exists to date. This study therefore analyzes possible research topics and proposes a tourism research agenda for Uzbekistan. A mix of methods was used consisting of participant observation, semi-structured qualitative expert interviews and qualitative content analysis. The results revealed a variety of research deficits in different areas, which could be synthesized into a total of ten research fields clustered into three overarching areas, namely market research, management, and culture & environment. The subordinate research fields identified are Demand, Statistics, Potentials, Governance, Products, Infrastructure & Development, Marketing, Heritage & Nation-building, Sustainability as well as Peace & Conflict Prevention. A strategic research plan based on this tourism research agenda could help to foster a purposeful scientific debate. Tourism research in these fields has the potential both to investigate and compare theoretical issues in a unique context and to produce applied research results that can make a relevant contribution to tourism development in Uzbekistan.
We present a 3D laser scan simulation in virtual reality for creating synthetic scans of CAD models. Consisting of the virtual reality head-mounted display Oculus Rift and the motion controller Razer Hydra, our system can be used like common hand-held 3D laser scanners. It supports scanning of triangular meshes as well as B-spline tensor product surfaces based on high-performance ray-casting algorithms. While point clouds produced by existing scanning simulations lack the man-made structure of real scans, our approach overcomes this problem by imitating real scanning scenarios. Calculation speed, interactivity and the resulting realistic point clouds are the benefits of this system.
ABCdarium of a journey
(2017)
Even though immutability is a desirable property, especially in a multi-threaded environment, implementing immutable Java classes is surprisingly hard because of a lack of language support. We present a static analysis tool using abstract bytecode interpretation that checks Java classes for compliance with a set of rules that together constitute state-based immutability. Being realized as a FindBugs plug-in, the tool can easily be integrated into most IDEs and hence into the software development process. Our evaluation on a large, real-world codebase shows that the average run-time effort for a single class is in the range of a few milliseconds, with only a few statistical outliers.
Nowadays, inexpensive memory space promotes an accelerating growth of stored image data. To exploit the data using supervised machine or deep learning, it needs to be labeled. Manually labeling the vast amount of data is time-consuming and expensive, especially if human experts with specific domain knowledge are indispensable. Active learning addresses this shortcoming by querying the user for the labels of the most informative images first. One way to obtain the ‘informativeness’ is by using uncertainty sampling as a query strategy, where the system queries those images it is most uncertain about how to classify. In this paper, we present a web-based active learning framework that helps to accelerate the labeling process. After manually labeling some images, the user gets recommendations of further candidates that could potentially be labeled equally (bulk image folder shift). We aim to explore the most efficient ‘uncertainty’ measure to improve the quality of the recommendations such that all images are sorted with a minimum number of user interactions (clicks). We conducted experiments using a manually labeled reference dataset to evaluate different combinations of classifiers and uncertainty measures. The results clearly show the effectiveness of uncertainty sampling with bulk image shift recommendations (our novel method), which can reduce the number of required clicks to only around 20% compared to manual labeling.
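Uncertainty sampling as described can be sketched in a few lines. The two measures below (least confidence and entropy) are standard choices from the active-learning literature; the dictionary of per-image class probabilities is an assumed input format, not the paper's actual interface:

```python
import math

def least_confidence(probs: list) -> float:
    """Uncertainty as 1 minus the maximum class probability."""
    return 1.0 - max(probs)

def entropy(probs: list) -> float:
    """Shannon entropy of the predicted class distribution."""
    return -sum(p * math.log(p) for p in probs if p > 0)

def rank_by_uncertainty(predictions: dict, measure=least_confidence) -> list:
    """Return image ids sorted most-uncertain first, i.e. the order
    in which a labeling tool would query the user."""
    return sorted(predictions,
                  key=lambda k: measure(predictions[k]),
                  reverse=True)
```

A query strategy then simply presents the head of this ranking to the user for labeling.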
Sleep is an essential part of human existence, as we are in this state for approximately a third of our lives. Sleep disorders are common conditions that can affect many aspects of life. Sleep disorders are diagnosed in special laboratories with a polysomnography system, a costly procedure requiring much effort from the patient. Several systems have been proposed to address this situation, including performing the examination and analysis at the patient's home, using sensors to detect physiological signals that are automatically analysed by algorithms. This work aims to evaluate the use of a contactless respiratory recording system based on an accelerometer sensor for sleep apnea detection. For this purpose, an installation mounted under the bed mattress records the oscillations caused by the chest movements during the breathing process. The presented processing algorithm filters the obtained signals and detects the presence of apnea events. The performance of the developed system and apnea event detection algorithm (average values of accuracy, specificity and sensitivity are 94.6%, 95.3%, and 93.7%, respectively) confirms the suitability of the proposed method and system for further ambulatory and in-home use.
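The event-detection step can be illustrated with a simple amplitude-threshold sketch. This is an assumption for illustration only (the paper's actual filtering and detection algorithm is not specified in the abstract): breathing cessation shows up as a sustained low-amplitude run in the respiration signal, and clinically an apnea lasts at least 10 seconds.

```python
def detect_apnea_events(signal, fs, threshold, min_dur_s=10.0):
    """Flag candidate apnea events: runs where the rectified
    respiration amplitude stays below `threshold` for at least
    `min_dur_s` seconds. `fs` is the sampling rate in Hz.

    Returns a list of (start_index, end_index) pairs.
    """
    min_len = int(min_dur_s * fs)
    events, start = [], None
    for i, x in enumerate(signal):
        if abs(x) < threshold:
            if start is None:
                start = i  # a low-amplitude run begins
        else:
            if start is not None and i - start >= min_len:
                events.append((start, i))  # run was long enough
            start = None
    # Handle a run that continues to the end of the recording.
    if start is not None and len(signal) - start >= min_len:
        events.append((start, len(signal)))
    return events
```

In a real pipeline this would follow band-pass filtering of the accelerometer signal and amplitude normalization.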
The investigation of stress requires distinguishing between stress caused by physical activity and stress caused by psychosocial factors. The behaviour of the heart in response to stress and to physical activity is very similar when the set of monitored parameters is reduced to one. Currently, the differentiation remains difficult, and methods that only use the heart rate are not able to differentiate between stress and physical activity without additional sensor data. The approach focuses on methods which generate signals providing characteristics that are useful for detecting stress, physical activity, inactivity and relaxation.
When designing drying processes for sensitive biological foodstuffs like fruit or vegetables, energy and time efficiency as well as product quality are gaining more and more importance. All of these are greatly influenced by the various drying parameters (e.g. air temperature, air velocity and dew point temperature) in the process. In the sterilization of food products, the cooking value is widely used as a cross-link between these parameters. In a similar way, the so-called cumulated thermal load (CTL) was introduced for drying processes. This was possible because most quality changes mainly depend on drying air temperature and drying time. In a first approach, the CTL was therefore defined as the time integral of the surface temperature of agricultural products. When conducting experiments with mangoes and pineapples, however, it was found that the CTL as it was used had to be adjusted to a more practical form. So the definition of the CTL was improved and the behaviour of the adjusted CTL (CTLad) was investigated in the drying of pineapples and mangoes. On the basis of these experiments and the work that had been done on the cooking value, it was found that more optimization of the CTLad had to be done to be able to compare a great variety of different products as well as different quality parameters.
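The first, simple definition of the CTL (the time integral of the product's surface temperature) can be approximated from sampled measurements with the trapezoidal rule. This sketches only that initial definition; the adjusted CTLad studied in the paper differs from it:

```python
def cumulated_thermal_load(times_h, surface_temp_c):
    """Approximate the CTL as the time integral of the product's
    surface temperature, using the trapezoidal rule on sampled
    (time, temperature) pairs. Result is in degC * h.
    """
    ctl = 0.0
    for i in range(1, len(times_h)):
        dt = times_h[i] - times_h[i - 1]
        # Trapezoid: average of the two endpoint temperatures times dt.
        ctl += 0.5 * (surface_temp_c[i] + surface_temp_c[i - 1]) * dt
    return ctl
```

For a constant 60 degC surface temperature over 2 h, this yields a CTL of 120 degC h, matching the closed-form integral.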
An approach for an adaptive position-dependent friction estimation for linear electromagnetic actuators with altered characteristics is proposed in this paper. The objective is to obtain a friction model that can be used to describe different stages of aging of magnetic actuators. It is compared to a classical Stribeck friction model by means of model fit, sensitivity, and parameter correlation. The identifiability of the parameters in the friction model is of special interest since the model is supposed to be used for diagnostic and prognostic purposes. A method based on the Fisher information matrix is employed to analyze the quality of the model structure and the parameter estimates.
The Lempel-Ziv-Welch (LZW) algorithm is an important dictionary-based data compression approach that is used in many communication and storage systems. The parallel dictionary LZW (PDLZW) algorithm speeds up the LZW encoding by using multiple dictionaries. The PDLZW algorithm applies different dictionaries to store strings of different lengths, where each dictionary stores only strings of the same length. This simplifies the parallel search in the dictionaries for hardware implementations. The compression gain of the PDLZW depends on the partitioning of the address space, i.e. on the sizes of the parallel dictionaries. However, there is no universal partitioning that is optimal for all data sources. This work proposes an address space partitioning technique that optimizes the compression rate of the PDLZW using a Markov model for the data. Numerical results for address spaces with 512, 1024, and 2048 entries demonstrate that the proposed partitioning improves the performance of the PDLZW compared with the original proposal.
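For reference, the classic single-dictionary LZW encoder that the PDLZW parallelizes can be sketched as follows. The PDLZW variant would instead split `dictionary` into per-length tables searched in parallel; the partitioning of the address space among those tables is exactly what the paper optimizes:

```python
def lzw_encode(data: bytes) -> list:
    """Classic LZW encoding with a single growing dictionary."""
    # Initialize the dictionary with all single-byte strings.
    dictionary = {bytes([i]): i for i in range(256)}
    next_code = 256
    output = []
    current = b""
    for byte in data:
        candidate = current + bytes([byte])
        if candidate in dictionary:
            current = candidate  # extend the current match
        else:
            output.append(dictionary[current])
            dictionary[candidate] = next_code  # learn the new string
            next_code += 1
            current = bytes([byte])
    if current:
        output.append(dictionary[current])
    return output
```

On the input `b"ABABABA"` the encoder emits the codes 65, 66, 256, 258, reusing the learned strings "AB" and "ABA".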
adidas and Reebok
(2016)
Advanced approaches for analysis and form finding of membrane structures with finite elements
(2018)
Part I deals with material modelling of woven fabric membranes. Due to their structure of crossed yarns embedded in coating, woven fabric membranes are characterised by a highly nonlinear stress-strain behaviour. In order to determine an accurate structural response of membrane structures, a suitable description of the material behaviour is required. A linear elastic orthotropic model approach, which is current practice, only allows a relatively coarse approximation of the material behaviour. The present work focuses on two different material approaches: A first approach emerges when focusing on the meso-scale. The inhomogeneous, yet periodic structure of woven fabrics motivates microstructural modelling. An established microstructural model is considered and enhanced with regard to the coating stiffness. Secondly, an anisotropic hyperelastic material model for woven fabric membranes is considered. By performing inverse processes of parameter identification, fits of the two different material models w.r.t. measured data from a common biaxial test are shown. The results of the inversely parametrised material models are compared and discussed.
Part II presents an extended approach for a simultaneous form finding and cutting patterning computation of membrane structures. The approach is formulated as an optimisation problem in which both the geometries of the equilibrium and cutting patterning configuration are initially unknown. The design objectives are minimum deviations from prescribed stresses in warp and fill direction along with minimum shear deformation. The equilibrium equations are introduced into the optimisation problem as constraints. Additional design criteria can be formulated (for the geometry of seam lines etc.). Similar to the motivation for the Updated Reference Strategy [4] the described problem is singular in the tangent plane. In both the equilibrium and the cutting patterning configuration finite element nodes can move without changing stresses. Therefore, several approaches are presented to stabilise the algorithm. The overall result of the computation is a stressed equilibrium and an unstressed cutting patterning geometry. The interaction of both configurations is described in Total Lagrangian formulation.
The microstructural model, which is the focus of Part I, is applied. Based on this approach, information about fibre orientation as well as about the ending of fibres at cutting edges is available. As a result, more accurate results can be computed compared to the simpler approaches commonly used in practice.
This chapter contains three advanced topics in model order reduction (MOR): nonlinear MOR, MOR for multi-terminals (or multi-ports) and finally an application in deriving a nonlinear macromodel covering phase shift when coupling oscillators. The sections are offered in a preferred order for reading, but can be read independently.
Because process and product innovations are usually no longer sufficient to establish a company in the market or to generate a competitive advantage, Business Model Innovation is considered a powerful tool, especially for start-ups for which innovation is at the core of their business. Due to the complexity of this process, frameworks should help entrepreneurs with executing Business Model Innovation. However, theory and practice diverge. The aim of this paper is to identify the needs of a start-up regarding Business Model Innovation frameworks, underlining the importance of Business Model Innovation for start-ups as well as the relevance of a supporting framework. The research results aim to contribute to an ideal process for Business Model Innovation when applied to start-ups.
Acoustic Echo Cancellation (AEC) plays a crucial role in speech communication devices to enable full-duplex communication. AEC algorithms have been studied extensively in the literature. However, device-specific details such as microphone or loudspeaker configurations are often neglected, despite their impact on the echo attenuation or near-end speech quality. In this work, we propose a method to investigate different loudspeaker-microphone configurations with respect to their contribution to the overall AEC performance. A generic AEC system consisting of an adaptive filter and a Wiener post filter is used for a fair comparison between different setups. We propose the near-end-to-residual-echo ratio (NRER) and the attenuation-of-near-end (AON) as quality measures for the full-duplex AEC performance.
Successful project management (PM), as one of the most important key competences in the western-oriented working world, is mainly influenced by experience and social skills. As a direct impact on PM training, the degree of practice and reality is crucial for the application of lessons learned in a challenging everyday work life. This work presents a recursive approach that adapts well-known principles of PM itself for PM training. Over three years, we have developed a concept and an integrated software system that support our PM university courses. Stepwise, it transfers theoretical PM knowledge into realistic project phases by automatically adjusting to the individual learning progress. Our study reveals predictors such as degrees of collaboration or weekend work as vital aspects in the PM training progress. The chosen granularity of project phases with variances in different dimensions makes our model a canonical incarnation of seamless learning.
The steadily increasing digitalization of communication and interaction enables ever more flexible and faster capture and execution of activities in business processes. Technological and organizational drivers such as cloud computing and Industrie 4.0 enable increasingly complex cross-organizational business processes. The effective and efficient involvement of all people concerned (e.g. IT experts, end users) is a decisive success factor here. Only if all process participants have knowledge of the current business processes can their adequate execution be ensured. The necessary balance between flexibility and stability is only insufficiently guaranteed by the traditional methods of business process management (BPM). Both current research and application-oriented studies point to the insufficient integration of all participants, their lack of understanding, and the low acceptance of BPM. This dissertation, written within the application-oriented research project "BPM@Cloud", deals with the development of a new method for agile business process management based on vernacular (everyday-language, domain-language) modelling of business processes. The method comprises three components (procedure, modelling language, software tool), thereby ensuring holistic support in the implementation of BPM projects. By adapting and extending agile concepts from software development, the procedure for the iterative, incremental and empirical management of business processes is described. Furthermore, a modelling language for business processes is developed that can be used for the intuitive, vernacular capture of business processes.
The implementation of a software prototype furthermore enables the direct collection of feedback during the execution of business processes. The three complementary components (procedure, language and software prototype) form a novel basis for the improved capture, enrichment, execution and optimization of business processes.
The business plan is one of the most frequently available artifacts to innovation intermediaries of technology-based ventures' presentations in their early stages [1]–[4]. Agreement on the evaluations of venturing projects based on the business plans highly depends on the individual perspective of the readers [5], [6]. One reason is that little empirical proof exists for descriptions in business plans that suggest survival of early-stage technology ventures [7]–[9]. We identified descriptions of transaction relations [10]–[13] as an anchor linking the business plan, a snapshot model, to business reality [13]. In the early stage, surviving ventures are building transaction relations to human resources, financial resources, and suppliers on the input side, and customers on the output side of the business towards a stronger ego-centric value network [10]–[13]. We conceptualized a multidimensional measurement instrument that evaluates the maturity of this ego-centric value network based on the transaction relations of different strength levels that are described in business plans of early-stage technology ventures [13]. In this paper, the research design and the instrument are purified to achieve high agreement in the evaluation of business plans [14]–[16]. As a result, we present an overall research design that can reach acceptable quality for quantitative research. The paper thus contributes to the literature on business analysis in the early stage of technology-based ventures and to the research technique of content analysis.
Aktive Solarenergienutzung
(2015)
Algorithms and Architectures for Cryptography and Source Coding in Non-Volatile Flash Memories
(2021)
In this work, algorithms and architectures for cryptography and source coding are developed, which are suitable for many resource-constrained embedded systems such as non-volatile flash memories. A new concept for elliptic curve cryptography is presented, which uses an arithmetic over Gaussian integers. Gaussian integers are a subset of the complex numbers with integers as real and imaginary parts. Ordinary modular arithmetic over Gaussian integers is computationally expensive. To reduce the complexity, a new arithmetic based on the Montgomery reduction is presented. For the elliptic curve point multiplication, this arithmetic over Gaussian integers improves the computational efficiency and the resistance against side-channel attacks, and reduces the memory requirements. Furthermore, an efficient variant of the Lempel-Ziv-Welch (LZW) algorithm for universal lossless data compression is investigated. Instead of one LZW dictionary, this algorithm applies several dictionaries to speed up the encoding process. Two dictionary partitioning techniques are introduced that improve the compression rate and reduce the memory size of this parallel dictionary LZW algorithm.
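The Montgomery arithmetic itself is not spelled out in the abstract. As a minimal illustration of the ordinary modular arithmetic over Gaussian integers that it replaces (the expensive baseline), reduction can be done with a component-wise rounded quotient; the tuple representation below is an assumption for illustration:

```python
from fractions import Fraction

def gauss_mod(a, m):
    """Reduce the Gaussian integer a = (ar, ai) modulo m = (mr, mi).

    Uses rounded division: r = a - round(a/m) * m, where the exact
    quotient a * conj(m) / |m|^2 is rounded component-wise to the
    nearest Gaussian integer.
    """
    ar, ai = a
    mr, mi = m
    norm = mr * mr + mi * mi  # |m|^2, the norm of the modulus
    # Exact quotient components computed via a * conj(m) / |m|^2.
    qr = round(Fraction(ar * mr + ai * mi, norm))
    qi = round(Fraction(ai * mr - ar * mi, norm))
    # Remainder: a - q * m (Gaussian integer multiplication).
    return (ar - (qr * mr - qi * mi), ai - (qr * mi + qi * mr))
```

For example, since 5 = (2 + i)(2 - i), reducing 5 modulo 2 + i yields 0; the division step is what the Montgomery-based arithmetic avoids.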
While existing resource extraction debates have contributed to a better understanding of national economic and political dilemmas and institutional responses, there are flaws in understanding the specific relevance of the various types of mining schemes for rural households to deal with the various problems they are confronted with. Our paper examines the perceptions of gold mining effects on households in Northern Burkina Faso. The findings of our survey across six districts representing different mining schemes (industrial, artisanal, no mining) highlight the fact that artisanal gold mining can generate job opportunities and cash income for local households; whereas industrial gold mining widely fails to do so. However, the general economic and environmental settings exert a much stronger influence on the household state. Gold mining effects are perceived as being less advantageous in districts where people are suffering from a lack of education, a higher vulnerability to drought and poor market access. Our findings provide empirical support for those who back the enhanced formalization of artisanal and small-scale mining (ASM) and policies that entail more rigorous state monitoring of mining concessions, especially in economic and environmentally disadvantaged contexts. Effectively addressing communal and pro-poor development requires greater attention to the political economy of ASM and corporate mining. It also calls for a greater inclusion of local mining stakeholders and a more effective alignment of international regulatory and advocacy efforts.
Alles digital – was nun?
(2018)
Alles fließt
(2016)
Allgemeine Geschäftsbedingungen als Instrument der Vereinfachung betrieblicher Vertragsgestaltung
(2018)
In this work, we investigate a hybrid decoding approach that combines algebraic hard-input decoding of binary block codes with soft-input decoding. In particular, an acceptance criterion is proposed which determines the reliability of a candidate codeword. For many received codewords the stopping criterion indicates that the hard-decoding result is sufficiently reliable, and the costly soft-input decoding can be omitted. The proposed acceptance criterion significantly reduces the decoding complexity. For simulations we combine the algebraic hard-input decoding with ordered statistics decoding, which enables near maximum likelihood soft-input decoding for codes of small to medium block lengths.
This work proposes an efficient hardware implementation of sequential stack decoding of binary block codes. The decoder can be applied for soft input decoding of generalized concatenated (GC) codes. The GC codes are constructed from inner nested binary Bose-Chaudhuri-Hocquenghem (BCH) codes and outer Reed-Solomon (RS) codes. In order to enable soft input decoding for the inner BCH block codes, a sequential stack decoding algorithm is used.
Many resource-constrained systems still rely on symmetric cryptography for verification and authentication. Asymmetric cryptographic systems provide higher security levels, but are computationally intensive. Hence, embedded systems can benefit from hardware assistance, i.e., coprocessors optimized for the required public key operations. In this work, we propose an elliptic curve cryptographic coprocessor design for resource-constrained systems. Many such coprocessor designs consider only special (Solinas) prime fields, which enable a low-complexity modulo arithmetic. Other implementations support arbitrary prime curves using the Montgomery reduction. These implementations typically require more time for the point multiplication. We present a coprocessor design that has low area requirements and enables a trade-off between performance and flexibility. The point multiplication can be performed either using a fast arithmetic based on Solinas primes or using a slower, but flexible Montgomery modular arithmetic.
Business models (BM) describe the logic of how a firm creates, delivers, and captures value. Business model innovation (BMI) is essential for organisations to maintain competitive advantage. However, barriers to BMI can impact the success of corporate strategic alignment. Previous research has examined internal barriers to business model innovation; however, there is a lack of research on the external barriers that could inhibit it. Drawing on an in-depth case study of a German medium-sized engineering company in the equestrian sports industry, we explore both internal and external barriers to business model innovation. BMI is defined as any change in one or more of the nine building blocks of the Business Model Canvas: customer segments, value propositions, channels, customer relationships, revenue streams, key resources, key activities, key partners, and cost structure [1]. Our results show that barriers to business model innovation can be overcome by deploying organisational learning mechanisms and developing an open network capability.
In the digital age, information technology (IT) is a strategic asset for organizations. As a result, IT costs are rising, and the cost-effective management of IT is crucial. Nevertheless, organizations still face major challenges, and previous studies lack comprehensiveness and depth. The goal of this paper is to generate a deep and holistic view of the current management challenges of IT costs. In 15 expert interviews, we identify 23 challenges divided into 7 categories. The main challenges are to ensure transparency of IT cost information, to demonstrate the business impact of IT, and to change the mindset regarding the value of IT; overcoming them requires attention to their interactions. Hence, this paper leads to a better understanding of the issues that IT cost management (ITCM) faces in the digital age and builds a base for future research.
The paper investigates an innovative actuator combination based on magnetic shape memory technology. The actuator is composed of an electromagnet, which is activated to produce motion, and a magnetic shape memory element, which is used passively to yield multistability, i.e. the ability to hold a position without input power. Based on an experimental open-loop frequency characterization of the actuator, a position controller is developed and tested in several experiments.
An IT-GRC approach in SME
(2022)
The digital transformation of business processes and the integration of IT systems lead to opportunities and risks for small and medium-sized enterprises (SMEs), including risks that can result in a lack of IT compliance. The purpose of this research-in-progress paper is to present the current state of an IT Governance, Risk and Compliance (IT-GRC) research project. First, the results of a literature review that has already been conducted are discussed, combined with qualitative interviews (expert survey) of persons involved in IT compliance. In the context of this paper, a first design approach is developed by selecting relevant existing frameworks and standards and identifying SME-specific conditions. This first design is intended to contribute to a further artefact conception of tailoring approaches and standards and to the creation of a guideline.
Ceramics are often used in high-temperature applications. Therefore, the thermomechanical behaviour and heat resistance of ceramic and refractory materials are important. This material behaviour is characterized by the thermal stress resistance. Established material tests for determining thermal shock behaviour are complex. The potential of application-related material testing in combination with simulations is described below.
Regional economies clearly benefit from thriving entrepreneurial ecosystems. However, ecosystems are not yet entirely gender-inclusive and therefore do not tap their full potential. This is most critical with respect to technology-based entrepreneurship, which features the largest gender imbalance. Despite the considerable and growing body of literature in the two research fields of female entrepreneurship and entrepreneurial ecosystems, the intersection of the two areas has not yet been outlined. We depict the state of knowledge with a structured literature review highlighting bibliometric information, the methods used, and the main topics addressed in current articles. From there, recommendations for future research are derived.