If the process contains a delay (dead time), the Nyquist criterion is well suited for deriving a PI or PID tuning rule because the delay is taken into account without approximation. The desired speed of the closed loop enters naturally through the crossover frequency, and the goals of robustness and performance are translated into the phase margin.
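As a minimal illustrative sketch (not the publication's tuning rule), assume a first-order process with dead time, P(s) = K e^{-Ls}/(1 + Ts); PI gains can then be chosen so that the open loop crosses 0 dB at a specified crossover frequency with a specified phase margin, the delay phase -Lω entering exactly:

```python
import numpy as np

def pi_from_phase_margin(K, T, L, w_c, phi_m_deg):
    """Sketch of a PI tuning rule: choose Kp, Ti so that the open loop
    C(jw)P(jw) has magnitude 1 and phase -180 deg + phi_m at w = w_c.
    Process: P(s) = K * exp(-L*s) / (1 + T*s); the dead time L is exact."""
    phi_m = np.deg2rad(phi_m_deg)
    # Phase of the process at w_c (the delay contributes -L*w_c exactly).
    phase_P = -L * w_c - np.arctan(T * w_c)
    # Required controller phase so that the total phase is -pi + phi_m;
    # it must lie in (-pi/2, 0) for a PI controller to exist.
    phase_C = -np.pi + phi_m - phase_P
    Ti = 1.0 / (w_c * np.tan(-phase_C))          # from C(jw) = Kp*(1 - 1j/(Ti*w))
    # Magnitude condition |C(jw_c)| * |P(jw_c)| = 1.
    mag_P = K / np.sqrt(1.0 + (T * w_c) ** 2)
    mag_C = np.sqrt(1.0 + 1.0 / (Ti * w_c) ** 2)
    Kp = 1.0 / (mag_P * mag_C)
    return Kp, Ti

# Example: K=1, T=2 s, L=0.5 s, crossover 0.6 rad/s, 60 deg phase margin.
print(pi_from_phase_margin(1.0, 2.0, 0.5, 0.6, 60.0))
```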
Twenty-first-century infrastructure needs to respond to changing demographics and become climate-neutral, resilient, and economically affordable, while remaining a driver of development and shared prosperity. Yet the infrastructure sector remains one of the least innovative and least digitalized, plagued by delays, cost overruns, and benefit shortfalls. The authors assessed trends and barriers in the planning and delivery of infrastructure based on secondary research, qualitative interviews with internationally leading experts, and expert workshops. The analysis concludes that the root cause of the industry's problems is the prevailing fragmentation of the infrastructure value chain and the lack of a long-term vision for infrastructure. To help overcome these challenges, the value chain needs to be integrated. The authors propose that this could be achieved through the use-case-based as well as vision- and governance-driven creation of federated digital platforms applied to infrastructure projects, and they outline a concept. Digital platforms enable full-lifecycle participation and responsible governance guided by a shared infrastructure vision. This paper contributed as a policy recommendation to the Group of Twenty (G20) in 2021.
Thermochemical surface hardening is used to overcome the weak mechanical performance of austenitic and duplex stainless steels. Both low-temperature carburizing and nitrocarburizing can improve the hardness and the wear, galling, and cavitation resistance while maintaining the steels' good corrosion resistance. To preserve this corrosion resistance, it is crucial that no chromium-rich precipitates form during hardening, as these can deteriorate the passivity of the alloy. The hardening parameters, the chemical composition of the steel, and the manufacturing route of a component determine whether precipitates are formed. This article gives an overview of alloys suitable for low-temperature surface hardening and of their performance under corrosive loading.
Writing advisors at universities and authors of guidebooks on academic writing presumably have, for the most part, a background in the humanities and cultural studies. The disciplinary cultures in which they learned the craft of citation, however, differ fundamentally from those of the non-text-based sciences: the engineering disciplines, the (not purely theoretical) natural sciences, medicine, and also the empirical economic and social sciences and empirical psychology. The differences are particularly striking with regard to the obligation to cite adopted information, that is, the decision in each case whether a source must be given or not.
Ignorantia doctorum
(2022)
Since the turn of the millennium, many writing centers have been established at universities in the German-speaking world in order to support students in academic writing. This essay argues for offering subject-anchored directive guidance and for using scientific texts as a basis for model learning. It observes that the rhetorical tradition is hardly taken into account in these writing centers. Five arguments for this ignorance are discussed and, where possible, dispelled: the antiquity argument, which considers rhetoric outdated; the orality argument, which understands rhetoric as irrelevant to writing; the moral argument, which condemns rhetoric as a tool for demagogues; the positivist argument, which criticizes rhetoric as unempirical; and the didactic argument, which rejects rhetoric as a rigid doctrine. The discussion shows, however, that the rhetorical tradition, with its normative power and centuries of teaching practice, is a treasure trove of writing didactics that holds many resources, such as well-founded assessment criteria for the quality, appropriateness, and usefulness of texts.
"Bare facts are not enough to keep the reader engaged to the very end. [The] author of [technical and] non-fiction texts, too, has the right, if not even the duty, to tap the wonders of the narrative art in order to make his texts an interesting and enjoyable read." The talk examines whether this demand by the widely read writing guide author Sol Stein is justified and to what extent it can be met. To this end, stylistic and compositional devices from the areas of lexis, grammar, syntax, stylistics, narrative strategy, and text organization are presented using example formulations and examined with regard to their function in narrative literature or journalism. In a second step, the transferability of these linguistic devices to technical and scientific texts is considered. Using parallel examples, a possible shifted function in the scientific context is sketched. The intended insights of the talk concern both the writing of one's own publications and the teaching of writing skills to future technical authors, which is why the example formulations are drawn from a wide variety of domains, including technical and scientific ones.
(Reference: Stein, Sol: Über das Schreiben [1995], German translation by Waltraud Götting, 10th ed., Zweitausendeins, Frankfurt a. M., 2006, quoted from p. 49)
The use of plagiarism-detection software should be tailored to the target group. Since evaluating the digital test report requires expert knowledge, unaccompanied access does not appear sensible. Since plagiarism usually results not from an intent to deceive but from ignorance of the rules, restricting the response to blanket detection is no solution. Comprehensive knowledge of the rules, an understanding of the fundamental importance of intertextual references, and self-responsible conduct are the teaching goals of the HTWG writing advisory service. To this end, the HTWG model makes attendance of a writing course a prerequisite for admission to the voluntary plagiarism check and reserves the authority to interpret the test report.
Die GIGA-Adaptionsmethode
(2022)
This essay presents a writing-didactics teaching method based on a quotation from Cicero on the aptum, the appropriateness of style. After some psychological preliminary considerations on the difference between speaking and writing situations, the quotation is analyzed in university writing instruction with regard to the style factors it names. The resulting list serves as the basis of a method by which the stylistic fit of texts of all kinds can be greatly improved. The aim is to motivate a clientele that may be distant from writing to find their way to exemplary, purposeful, subject-appropriate, and audience-oriented writing.
Wissenschaftlich Schreiben
(2014)
With these practical materials, every writing course becomes a success! Targeted exercises teach students the standards of academic writing. On this basis, term papers and theses are written with stylistic confidence, soundly referenced, sensibly structured, and systematically optimized. Proven strategies, concrete advice, and detailed model solutions secure the learning process. Perfectly suited for seminars and workshops.
Female entrepreneurship has gained increasing interest over the last 20 years. This paper therefore analyses 7,320 articles in the research field 'women in entrepreneurial context' published in 885 journals. The sample is analyzed using a methodological approach based on machine learning and text mining. Aiming to provide a broad overview of the research literature, 41 clusters and 11 superordinate topics were identified. Major developments in research attention are outlined by analyzing bibliometric data for the period from 2000 to 2020. Overall growth in research attention, measured by the development of yearly citations per article, is most noticeable in the clusters 'corporate social responsibility', 'brand', and 'corporate (-governance)', and in the superordinate topics 'performance', 'education', and 'corporate (board/management)'. There are also indicators of an overall increase in research attention and cluster variety. The synthesis provides an insight into the most trending superordinate topics. This literature review thus gives a comprehensive, descriptive overview of the research field as well as an insight into its thematic trend developments.
Entrepreneurial motivations have become a frequently discussed topic in entrepreneurship research. However, few studies have investigated entrepreneurs' motivations across gender and venture types, and those that have tend to rely on surveys or case studies. Using a text mining approach, we investigate whether there are differences between male and female entrepreneurs' motivations and whether female entrepreneurs' motivations differ across venture types. This text mining approach, combined with a qualitative content analysis, was used to examine unique motivational data from 472 entrepreneurial projects from three different entrepreneurship support programs in Norway and Sweden. The findings suggest that the motivations of female and male entrepreneurs differ only slightly, while the motivations of female entrepreneurs differ across venture types. We thus contribute to a better understanding of entrepreneurial motivation and of why female entrepreneurs start a business. This can, for instance, benefit the improvement of future female entrepreneurship support programs.
We have analyzed a pool of 37,839 articles published in 4,404 business-related journals in the entrepreneurship research field using a novel literature review approach based on machine learning and text data mining. Most papers have been published in the journals 'Small Business Economics', 'International Journal of Entrepreneurship and Small Business', and 'Sustainability' (Switzerland), while the sum of citations is highest in the 'Journal of Business Venturing', 'Entrepreneurship Theory and Practice', and 'Small Business Economics'. We derived 29 overarching themes based on 52 identified clusters. The social entrepreneurship, development, innovation, capital, and economy clusters are the largest among those with high thematic clarity. The most discussed clusters, measured by the average number of citations per assigned paper, are research, orientation, capital, gender, and growth. The clusters with the highest average growth in publications per year are social entrepreneurship, innovation, development, entrepreneurship education, and (business) models. Measured by the average yearly citation rate per paper, the thematic cluster 'research', mostly containing literature studies, received the most attention. The machine-learning-based review (MLR) allows a significantly higher number of publications to be included than traditional reviews, thus providing a comprehensive, descriptive overview of the whole research field.
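For illustration only (this is not the authors' pipeline; the corpus and all parameters below are hypothetical), the core of such a machine-learning review can be sketched as TF-IDF vectorization of abstracts followed by k-means clustering:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

# Toy corpus standing in for article abstracts (hypothetical data).
abstracts = [
    "women entrepreneurship venture capital gender gap",
    "social entrepreneurship sustainable development innovation",
    "firm growth performance small business economics",
]

# Vectorize abstracts and group them into thematic clusters.
tfidf = TfidfVectorizer(stop_words="english")
X = tfidf.fit_transform(abstracts)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)

# Inspecting the top TF-IDF terms per cluster would then suggest theme names.
print(labels)
```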
Preliminary results of homomorphic deconvolution application to surface EMG signals during walking
(2021)
Homomorphic deconvolution is applied to sEMG signals recorded during walking. Gastrocnemius lateralis and tibialis anterior signals were acquired according to the SENIAM recommendations. MUAP parameters such as amplitude and scale were estimated, while the MUAP shape parameter was kept fixed. This yields a useful time-frequency representation of the sEMG signal. The estimation of the MUAP scale parameter was verified by extracting the mean frequency of the filtered EMG signal from the scale parameter estimated with two different MUAP shape values.
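A minimal sketch of the underlying technique (generic real-cepstrum liftering, not the authors' exact processing chain; the window and lifter length are assumptions):

```python
import numpy as np

def homomorphic_deconvolution(x, n_lifter=32):
    """Estimate the smooth (MUAP-like) spectral envelope of a windowed
    sEMG segment x via the real cepstrum and a low-pass lifter."""
    spectrum = np.fft.rfft(x * np.hanning(len(x)))
    log_mag = np.log(np.abs(spectrum) + 1e-12)     # log turns convolution into addition
    cepstrum = np.fft.irfft(log_mag)
    cepstrum[n_lifter:-n_lifter] = 0.0             # keep only low quefrencies
    envelope = np.exp(np.fft.rfft(cepstrum).real)  # back to a smooth magnitude spectrum
    return envelope

# Example with synthetic data standing in for a 1-s sEMG window at 1 kHz.
x = np.random.randn(1000)
env = homomorphic_deconvolution(x)
```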
The present work proposes the use of modern ICT technologies such as smartphones, NFC, the internet, and web technologies to help patients carry out their therapies. The implemented system provides a calendar with reminders for drug intake, ensures drug identification through NFC, and allows remote assistance so that healthcare staff and family members can check and manage the therapy in real time. The system also provides centralized information on the patient's therapeutic situation, helpful when choosing new compatible therapies.
Nowadays, the number of flexible and fast human-to-application-system interactions is increasing dramatically. For instance, citizens interact with the help of the internet to organize surveys or meetings spontaneously and in real time. These interactions are supported by technologies and application systems such as free wireless networks and web or mobile apps. Smart Cities aim at enabling their citizens to use these digital services, e.g., by providing enhanced networks and application infrastructures maintained by the public administration. However, looking beyond technology, there is still a significant lack of interaction and support between "normal" citizens and the public administration. For instance, democratic decision processes (e.g., how to allocate disposable public budgets) are often discussed by the public administration without citizen involvement. This paper introduces an approach to designing enhanced interactional web applications for Smart Cities based on dialogical-logic process patterns. We demonstrate the approach with the help of a budgeting scenario and close with a summary and an outlook on further research.
This paper presents the nature and basic features of the German wage tax (Lohnsteuer), including a brief outline of the development of the wage tax procedure. It is intended neither as a guide to filling out an application for the annual wage tax adjustment nor as a reference work. The reader is assumed to know the essential features of the income tax and its particular characteristics, such as "tax progression". A presentation of the income tax has therefore been omitted; for the same reason, the income tax assessment of employees has not been included either.
Identität versus Interessen
(2004)
Die Rückkehr des Individuums in die Governanceethik – Polylingualität als Einfallstor der Tugend
(2006)
Despite the many new tools and methodologies adopted in the road infrastructure sector, the performance of road infrastructure projects is not consistently improving. Considering that the volume of projects undertaken is forecast to increase every year, this is a substantial issue for the sector. Hence, this work focuses on the principles of Blockchain Technology, the road infrastructure sector, and information exchange, with the aim of using the advantages of Blockchain Technology to help overcome the various challenges along the life cycle of road infrastructure projects.
Within the scope of this paper, two studies were conducted. First, focus groups were used to explore where the road infrastructure sector stands in terms of Industry 4.0 and to gain a better understanding of whether and where the principles of Blockchain Technology can be used when managing projects in this sector. Second, semi-structured interviews were administered with experts from the road infrastructure sector and experts in Blockchain Technology to better understand the interrelation between these two areas. Based on the outcome of the two studies, technology barriers and enablers were explored with a view to improving information exchange within the road infrastructure sector.
The two studies revealed that there are significant and strong interrelations between the principles of Blockchain Technology, project management within the road infrastructure sector, and information exchange. These interrelations are complex and diverse, but overall it can be concluded that adopting the principles of Blockchain Technology in the field of information exchange improves the management of road infrastructure projects. Based on the two studies, a theoretical framework was developed.
In summary, this research showed that trust is an important factor and forms the foundation for communication and for ensuring proper information exchange. Within the scope of this thesis, it was demonstrated that the principles of Blockchain Technology can be used to increase the transparency, traceability, and immutability of information exchange during the life cycle of road infrastructure projects.
Mapping of tree seedlings is useful for tasks ranging from monitoring natural succession and regeneration to effective silvicultural management. Development of methods that are both accurate and cost-effective is especially important considering the dramatic increase in tree planting that is required globally to mitigate the impacts of climate change. The combination of high-resolution imagery from unmanned aerial vehicles and object detection by convolutional neural networks (CNNs) is one promising approach. However, unbiased assessments of these models and methods to integrate them into geospatial workflows are lacking. In this study, we present a method for rapid, large-scale mapping of young conifer seedlings using CNNs applied to RGB orthomosaic imagery. Importantly, we provide an unbiased assessment of model performance by using two well-characterised trial sites together containing over 30,000 seedlings to assemble datasets with a high level of completeness. Our results showed CNN-based models trained on two sites detected seedlings with sensitivities of 99.5% and 98.8%. False positives due to tall weeds at one site and naturally regenerating seedlings of the same species led to slightly lower precision of 98.5% and 96.7%. A model trained on examples from both sites had 99.4% sensitivity and precision of 97%, showing applicability across sites. Additional testing showed that the CNN model was able to detect 68.7% of obscured seedlings missed during the initial annotation of the imagery but present in the field data. Finally, we demonstrate the potential to use a form of weakly supervised training and a tile-based processing chain to enhance the accuracy and efficiency of CNNs applied to large, high-resolution orthomosaics.
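As an illustrative sketch of the tile-based processing idea mentioned above (the tile size, overlap, and `detector` function are assumptions, not details from the paper):

```python
import numpy as np

def iter_tiles(mosaic, tile=1024, overlap=128):
    """Yield overlapping tiles of an (H, W, 3) orthomosaic plus their offsets,
    so per-tile CNN detections can be mapped back to mosaic coordinates."""
    h, w = mosaic.shape[:2]
    step = tile - overlap
    for y in range(0, max(h - overlap, 1), step):
        for x in range(0, max(w - overlap, 1), step):
            patch = mosaic[y:y + tile, x:x + tile]
            yield (x, y), patch

# Hypothetical usage: detector(patch) returns per-tile boxes (bx, by, score);
# offsets shift them into mosaic coordinates before duplicate suppression.
# detections = [(x0 + bx, y0 + by, score)
#               for (x0, y0), patch in iter_tiles(mosaic)
#               for bx, by, score in detector(patch)]
```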
The trend towards lighter structures and larger spans makes it necessary to take the dynamic character of actions on the load-bearing safety and serviceability of structures into account more strongly than before; in addition to aerodynamic and seismic influences, these include actions from machinery, from road and railway traffic, and actions induced by people, and not least catastrophic load cases such as impact, aircraft crash, and others.
Starting from the fundamentals of dynamics, calculation and assessment procedures of varying rigor are presented and explained in a practice-oriented way using numerous examples. The mathematical methods are set out in a detailed appendix, and each chapter is supplemented by extensive references to the specialist literature.
In this paper, we provide a performance analysis framework for wireless industrial networks by deriving a service curve and a bound on the delay violation probability. For this purpose, we use the (min,×) stochastic network calculus as well as a recently presented recursive formula for an end-to-end delay bound of wireless heterogeneous networks. The derived results are mapped to WirelessHART networks used in process automation and were validated via simulations. Beyond WirelessHART, our results can be applied to any wireless network whose physical layer conforms to the IEEE 802.15.4 standard and whose MAC protocol incorporates TDMA and channel hopping, e.g., ISA100.11a or TSCH-based networks. The provided delay analysis is especially useful during the network design phase, offering further research potential towards optimal routing and power management in QoS-constrained wireless industrial networks.
By comparing prefabricated bathroom units with conventionally built bathrooms as holistically as possible, this thesis is intended to serve as a decision aid for future projects. Given the ever-increasing shortage of skilled workers, companies depend on methods that counteract this shortage. One option is the use of prefabricated bathroom modules, which offer many further advantages for both the user and the general contractor. Besides determining the minimum quantity for sensible application and comparing costs and construction times, the thesis also aims to provide an outlook on the increased use of prefabricated bathrooms in residential construction. Expert interviews were conducted to reinforce the experience gathered on construction sites as well as the information from internet and literature research with current and meaningful information. Finally, a guideline summarizes all important aspects that should be considered when deciding whether the use of prefabricated bathrooms makes sense. Among the most important findings is that, for the example project considered, significant cost savings could be determined through a shortened overall construction time and reduced coordination effort. Furthermore, the findings from the interviews indicate a stronger future orientation towards residential construction. This is reflected in the application figures as well as in measures to establish prefabricated bathrooms more firmly in residential construction. In summary, the use of prefabricated bathroom units can be recommended especially for projects with a high repetition rate, but residential construction should not be neglected in the future either, subject to certain premises.
In recent years, there has been a noticeable trend towards a general contractor strategy among plant engineering companies. Multiple disciplines and departments must be administered in a joint project, and different work results are often managed in various systems without any associative relationship. One possible way to address this complexity is to implement a specifically tailored PLM strategy to gain a competitive advantage. Maturity models as well as methods for evaluating potential benefits are increasingly applied tools on this journey. Both methods have been described theoretically in previous publications; this paper, however, provides insights into their practical application in the machinery industry. A medium-sized German plant engineering company serves as an example for determining the scope and value of a multi-national, overarching Product Lifecycle Management architecture as the centerpiece of a future digitalization strategy. The company's current maturity levels for several digitalization capabilities are evaluated, prioritized, and benchmarked against a set of similar companies. This allows suitable target states in terms of maturity levels as well as the technical specification of digitalization use cases to be derived. In order to provide sound data for cost justification, the resulting benefits are quantified.
Digitalization is one of the most frequently discussed topics in industry. New technologies, platform concepts, and integrated data models enable disruptive business models and drive changes in organization, processes, and tools. The goal is to make a company more efficient, more productive, and ultimately more profitable. However, many companies face the challenge of how to approach digital transformation in a structured way and realize these potential benefits. What they come to realize is that Product Lifecycle Management plays a key role in digitalization initiatives, as object, structure, and process management along the life cycle is a foundation for many digitalization use cases. The introduced maturity model for assessing a firm's capabilities along the product lifecycle has been used almost two hundred times. It allows a company to compare its performance with an industry-specific benchmark to reveal individual strengths and weaknesses. Furthermore, an empirical study produced multidimensional correlation coefficients, which identify dependencies between business model characteristics and the maturity level of capabilities.
Practice shows that most performance measurement systems for sales have conceptually reached their limits and can hardly reflect today's business realities. Worse, through their "excessive" focus on financial indicators, they even prove a hindrance in the competition for the "best customer". Within the project from which this thesis emerged, Sales Performance Analysis was designed and implemented as a powerful controlling instrument for the sales manager. It is based on Kaplan and Norton's well-known Balanced Scorecard approach and offers a balanced, strategy-oriented view of relevant sales processes across all areas. It thus enables controlling, i.e., measuring and steering, of the relevant success factors on the basis of four perspectives: finance, internal processes, customers, and learning & growth. To meet the predefined content-related and functional requirements, the Sales Performance Analysis was implemented in a landscape of SAP solutions: Enterprise Portal, BW (Business Warehouse), CRM (Customer Relationship Management), SEM (Strategic Enterprise Management), and HR (Human Resources). The project produced a working prototype that is currently used for presentation purposes at international industry events and in ongoing SAP solution sales.
While existing resource extraction debates have contributed to a better understanding of national economic and political dilemmas and institutional responses, the specific relevance of the various types of mining schemes for rural households dealing with the problems they are confronted with remains poorly understood. Our paper examines the perceived effects of gold mining on households in northern Burkina Faso. The findings of our survey across six districts representing different mining schemes (industrial, artisanal, no mining) highlight that artisanal gold mining can generate job opportunities and cash income for local households, whereas industrial gold mining widely fails to do so. However, the general economic and environmental settings exert a much stronger influence on the state of households. Gold mining effects are perceived as less advantageous in districts where people suffer from a lack of education, higher vulnerability to drought, and poor market access. Our findings provide empirical support for those who back the enhanced formalization of artisanal and small-scale mining (ASM) and policies that entail more rigorous state monitoring of mining concessions, especially in economically and environmentally disadvantaged contexts. Effectively addressing communal and pro-poor development requires greater attention to the political economy of ASM and corporate mining. It also calls for greater inclusion of local mining stakeholders and a more effective alignment of international regulatory and advocacy efforts.
Organisation und Sprache
(2004)
Tugend als social strings
(2006)
Low-Code Development Platforms (LCDPs) enable non-IT personnel to develop applications and workflows independently of the IT department. These digital platforms consequently help to meet the growing demand for software development. However, science and practice warn of several barriers that slow down or hinder the use of LCDPs. This publication identifies, analyzes, and discusses challenges during the implementation and application of LCDPs from both perspectives in a holistic manner. To this end, we conduct an exploratory study (drawing on scientific literature, expert interviews, and practical studies) and assign the challenges to the socio-technical system model. The results show that the scientific and practical communities recognize common challenges (especially knowledge transfer) but also perceive differences related to technological (science) and social (practice) aspects. This paper proposes future research directions for academia, such as governance, culture change, and value evaluation of LCDPs. Additionally, practitioners can prepare for possible challenges when using LCDPs.
Low-Code Development Platforms (LCDPs) foster the digital transformation of organizations by enabling application development by business-unit employees without deep programming skills (so-called citizen developers). Market research institutes forecast that more than half of all applications will be developed with LCDPs in the coming years. Nevertheless, organizations face the challenge of choosing the right approaches for implementing and applying LCDPs. This article therefore provides a comprehensive picture of the practical understanding and current approaches in various organizations and derives recommendations for action. To this end, 16 expert interviews were conducted and scientifically analyzed. The results show that practitioners share a broadly similar understanding of the term LCDP. The initiative for adoption usually comes from the business units, but the decision for or against LCDP implementation is mostly made by top management in cooperation with the IT department. Current application approaches differ: companies either use a self-service approach driven by the business units or integrate the decision about potential LCDP development by citizen developers into the IT department's existing demand management. Established and adaptive governance is an important prerequisite for both approaches. The findings contribute to the scientific discussion, as this article provides one of the first comprehensive, scientifically grounded qualitative analyses of current adoption approaches in practice. Practitioners also learn how other companies deal with current challenges and which approaches are promising.
Market research institutes forecast a growing relevance of Low-Code Development Platforms (LCDPs) for organizations. Moreover, the rising number of scientific publications in recent years shows the increasing interest of the academic community. However, an overview of current research foci and fruitful future research topics is missing. This paper conducts a first scientific literature review on LCDPs to close this gap. The socio-technical system (STS) model, which categorizes information systems into a social and a technical system, serves to analyze the 32 identified publications. Most current research focuses on the technical system (technology or task); in contrast, only three publications explicitly target the social system (structure or people). This paper hence enables future research to address the identified research gaps. Additionally, practitioners gain awareness of the technical and social aspects involved in the development, implementation, and application of LCDPs.
This paper examines the effectiveness and efficiency of Virtual Reality training for a commissioning process. To this end, 500 picking orders with more than 2,000 part-picking operations were conducted with 30 test persons and analyzed in the Modellfabrik Bodensee. The study points out the advantages and disadvantages of virtual training compared with the real execution of a picking process, with and without prior training.
NAND flash memory is widely used for data storage due to low power consumption, high throughput, short random access latency, and high density. The storage density of the NAND flash memory devices increases from one generation to the next, albeit at the expense of storage reliability.
Our objective in this dissertation is to improve the reliability of NAND flash memory at a low hardware implementation cost. We investigate the error characteristics, i.e., the various noise sources, of NAND flash memory. Based on the error behavior at different life-aging stages, we develop offset calibration techniques that minimize the bit error rate (BER).
Furthermore, we introduce data compression to reduce the write amplification effect and to support the error correction code (ECC) unit. In the first scenario, the numerical results show that data compression can reduce wear-out by minimizing the amount of data written to the flash. In the ECC scenario, the compression gain is used to improve the ECC capability. Based on the first scenario, the write amplification effect can be halved for the considered target flash and data model. By combining ECC and data compression, the NAND flash memory lifetime improves threefold compared with uncompressed data for the same data model.
To improve the data reliability of NAND flash memory, we investigate different ECC schemes based on concatenated codes such as product codes, half-product codes, and generalized concatenated codes (GCC). We propose a construction of high-rate GCC for hard-input decoding. ECC based on soft-input decoding can significantly improve the reliability of NAND flash memories; therefore, we also propose a low-complexity soft-input decoding algorithm for high-rate GCC.
Nowadays, computer analysis of ECG (electrocardiogram) signals is common. There are many real-time QRS recognition algorithms; one of them is the Pan-Tompkins algorithm, which detects the QRS complexes of ECG signals. The proposed algorithm analyzes the heartbeat data stream based on digital analysis of amplitude, bandwidth, and slope. In addition, after detecting the ECG signals, the stress algorithm compares whether the current heartbeat is similar to or different from the previous one, and it determines stress for the patient in real time from the RR interval. To implement the new algorithm with higher performance, the parallel programming language CUDA is used. The algorithm uses separate functions as beat detector and as beat classifier for stress.
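A simplified sketch of the classic Pan-Tompkins stages (sequential Python rather than the paper's CUDA implementation; the fixed threshold here replaces the original adaptive thresholds):

```python
import numpy as np
from scipy.signal import butter, filtfilt

def pan_tompkins_qrs(ecg, fs=200):
    """Simplified Pan-Tompkins chain: band-pass, derivative, squaring,
    moving-window integration, then a fixed-fraction threshold."""
    b, a = butter(2, [5 / (fs / 2), 15 / (fs / 2)], btype="band")
    filtered = filtfilt(b, a, ecg)                 # isolate QRS energy (5-15 Hz)
    derivative = np.diff(filtered, prepend=filtered[0])
    squared = derivative ** 2                      # emphasize steep slopes
    window = int(0.150 * fs)                       # ~150 ms integration window
    integrated = np.convolve(squared, np.ones(window) / window, mode="same")
    threshold = 0.4 * integrated.max()             # crude; Pan-Tompkins adapts it
    # Local maxima above the threshold; no refractory period applied here.
    peaks = np.flatnonzero((integrated[1:-1] > integrated[:-2])
                           & (integrated[1:-1] >= integrated[2:])
                           & (integrated[1:-1] > threshold)) + 1
    return peaks  # sample indices near QRS complexes; RR = np.diff(peaks) / fs
```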
Error correction coding based on soft-input decoding can significantly improve the reliability of non-volatile flash memories. This work proposes a soft-input decoder for generalized concatenated (GC) codes. GC codes are well suited for error correction in flash memories for high reliability data storage. We propose GC codes constructed from inner extended binary Bose-Chaudhuri-Hocquenghem (BCH) codes and outer Reed-Solomon codes. The extended BCH codes enable an efficient hard-input decoding. Furthermore, a low-complexity soft-input decoding method is proposed. This bit-flipping decoder uses a fixed number of test patterns and an algebraic decoder for soft-decoding. An acceptance criterion for the final candidate codeword is proposed. Combined with error and erasure decoding of the outer Reed-Solomon codes, this acceptance criterion can improve the decoding performance and reduce the decoding complexity. The presented simulation results show that the proposed bit-flipping decoder in combination with outer error and erasure decoding can outperform maximum likelihood decoding of the inner codes.
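The bit-flipping idea can be sketched generically as follows (a Chase-style illustration, not the proposed decoder itself; `algebraic_decode` stands for a hypothetical hard-input decoder of the inner code, and the paper's acceptance criterion is replaced here by a simple correlation metric):

```python
import numpy as np
from itertools import product

def bit_flip_decode(llr, algebraic_decode, t_patterns=4):
    """Soft-input decoding with a fixed number of test patterns: flip
    combinations of the least reliable bits, decode each pattern with a
    hard-input algebraic decoder, and keep the best candidate codeword."""
    hard = (llr < 0).astype(np.uint8)              # hard decisions from LLRs
    weak = np.argsort(np.abs(llr))[:t_patterns]    # least reliable positions
    best, best_metric = None, -np.inf
    for flips in product([0, 1], repeat=t_patterns):
        trial = hard.copy()
        trial[weak] ^= np.array(flips, dtype=np.uint8)
        cand = algebraic_decode(trial)             # returns a codeword or None
        if cand is None:
            continue
        # Correlation metric: candidates agreeing with strong LLRs score higher.
        metric = np.sum((1 - 2 * cand.astype(float)) * llr)
        if metric > best_metric:
            best, best_metric = cand, metric
    return best
```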
The introduction of multi-level cell (MLC) and triple-level cell (TLC) technologies reduced the reliability of flash memories significantly compared with single-level cell (SLC) flash. The reliability of flash memory suffers from various error causes: program/erase cycles, read disturb, and cell-to-cell interference all affect the threshold voltages. With pre-defined fixed read thresholds, a voltage shift increases the bit error rate (BER). This work proposes a read threshold calibration method that aims at minimizing the BER by adapting the read voltages. The adaptation of the read thresholds is based on the number of errors observed in the codeword protecting a small amount of meta-data. Simulations based on flash measurements demonstrate that this method can significantly reduce the BER of TLC memories.
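A minimal sketch of the calibration loop (the offset range and the `read_page`/`count_errors` interfaces are hypothetical, not taken from the paper):

```python
import numpy as np

def calibrate_threshold(read_page, count_errors, offsets=range(-8, 9)):
    """Pick the read-voltage offset that minimizes the number of errors
    observed in the small protected meta-data codeword."""
    best_offset, best_errors = 0, np.inf
    for off in offsets:
        page = read_page(off)          # re-read with a shifted read threshold
        errs = count_errors(page)      # errors found in the meta-data codeword
        if errs < best_errors:
            best_offset, best_errors = off, errs
    return best_offset
```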
As part of a large-scale project in the Lake Constance region, which borders Germany, Austria, and Switzerland, seamless learning prototypes are designed, implemented, and evaluated. A design-based research (DBR) approach is used for development, implementation, and evaluation. The project consists of one base project that brings together instructional design and instructional technology specialists as well as (educational) software architects, supporting seven subprojects in developing seamless learning lighthouse implementations, mostly within higher and further education.
Industrial partners are involved in all subprojects. The authors are unaware of similar seamless learning projects using a DBR approach that include evaluation and redesign and that allow experiences to be compared by applying DBR across subprojects. The project aims to generate novel seamless learning implementations, corresponding novel research, and novel software supporting different aspects of seamless learning. The poster will present the project design, delineate areas of research and development, and include initial empirical results.
Experimental Validation of Ellipsoidal Techniques for State Estimation in Marine Applications
(2022)
A reliable quantification of the worst-case influence of model uncertainty and external disturbances is crucial for the localization of vessels in marine applications. This is especially true if uncertain GPS-based position measurements are used to update predicted vessel locations that are obtained from the evaluation of a ship’s state equation. To reflect real-life working conditions, these state equations need to account for uncertainty in the system model, such as imperfect actuation and external disturbances due to effects such as wind and currents. As an application scenario, the GPS-based localization of autonomous DDboat robots is considered in this paper. Using experimental data, the efficiency of an ellipsoidal approach, which exploits a bounded-error representation of disturbances and uncertainties, is demonstrated.
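For intuition, one prediction step with ellipsoidal enclosures can be sketched as follows (a textbook trace-minimal outer approximation of the Minkowski sum for linear dynamics, not the paper's implementation):

```python
import numpy as np

def predict_ellipsoid(A, c, Q, Q_w):
    """Propagate the state ellipsoid (center c, shape matrix Q) through
    x+ = A x + w, where the bounded disturbance w lies in an ellipsoid
    with shape matrix Q_w centered at zero. Returns a guaranteed outer
    enclosure using the trace-minimal Minkowski-sum approximation."""
    c_new = A @ c
    Q1 = A @ Q @ A.T
    p = np.sqrt(np.trace(Q1) / np.trace(Q_w))      # trace-optimal mixing parameter
    Q_new = (1 + 1 / p) * Q1 + (1 + p) * Q_w       # outer bound of the sum
    return c_new, Q_new
```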
Reliability Assessment of an Unscented Kalman Filter by Using Ellipsoidal Enclosure Techniques
(2022)
The Unscented Kalman Filter (UKF) is widely used for the state, disturbance, and parameter estimation of nonlinear dynamic systems, for which both process and measurement uncertainties are represented in a probabilistic form. Although the UKF can often be shown to be more reliable for nonlinear processes than the linearization-based Extended Kalman Filter (EKF) due to the enhanced approximation capabilities of its underlying probability distribution, it is not a priori obvious whether its strategy for selecting sigma points is sufficiently accurate to handle nonlinearities in the system dynamics and output equations. Such inaccuracies may arise for sufficiently strong nonlinearities in combination with large state, disturbance, and parameter covariances. Then, computationally more demanding approaches such as particle filters or the representation of (multi-modal) probability densities with the help of (Gaussian) mixture representations are possible ways to resolve this issue. To detect cases in a systematic manner that are not reliably handled by a standard EKF or UKF, this paper proposes the computation of outer bounds for state domains that are compatible with a certain percentage of confidence under the assumption of normally distributed states with the help of a set-based ellipsoidal calculus. The practical applicability of this approach is demonstrated for the estimation of state variables and parameters for the nonlinear dynamics of an unmanned surface vessel (USV).
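For reference, the sigma-point selection whose accuracy is at issue can be sketched as the standard unscented transform (the scaling parameters below are common textbook defaults, not values from the paper):

```python
import numpy as np

def sigma_points(mu, P, alpha=1e-1, beta=2.0, kappa=0.0):
    """Generate the 2n+1 UKF sigma points and their weights for a state
    with mean mu and covariance P."""
    n = len(mu)
    lam = alpha**2 * (n + kappa) - n
    S = np.linalg.cholesky((n + lam) * P)          # matrix square root
    pts = np.vstack([mu, mu + S.T, mu - S.T])      # shape (2n+1, n)
    w_m = np.full(2 * n + 1, 1.0 / (2 * (n + lam)))
    w_c = w_m.copy()
    w_m[0] = lam / (n + lam)
    w_c[0] = w_m[0] + (1 - alpha**2 + beta)
    return pts, w_m, w_c

# Propagating pts through a strongly nonlinear model and comparing the
# reconstructed moments against set-based enclosures reveals cases where
# the sigma-point approximation becomes unreliable.
```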
Schatten-BI - Was tun?
(2015)
Shadow IT risk
(2015)
Schatten-IT
(2015)
IT-Governance
(2023)
Digital transformation considerably increases the influence of information technology on business success. This also raises the demands on the management system for IT in companies. Here a simple truth applies: an unsuitable management system usually brings poorer decisions with it.
Christopher Rentrop shows you, with great clarity, how to determine in a goal-oriented way who in the company should influence IT-relevant decisions, and how:
- Fundamental goals and success factors of IT governance
- Design elements of IT governance: structures and processes, decision rights, relational mechanisms, and more
- COBIT as a framework for IT governance
- Specific decision domains, fields of action, and responsibilities
- Management, further development, and success measurement of IT governance
A concise guide that leads you step by step to an organization-appropriate design of the IT management system.
Schatten-IT
(2015)
The topic of shadow IT has gained considerable importance in research and practice only in recent years. The term shadow IT refers to systems that business departments procure or build on their own initiative and that are not integrated into the company's IT service management. Shadow IT is neither new nor rare; despite this long history, however, little precise information has been available about how widespread it is. In an article in this journal in 2011, we described the risks of shadow IT and thereby derived the need to engage with the topic. In the shadow IT research project of the Konstanzer Institut für Prozesssteuerung, a method for managing shadow IT was subsequently developed and evaluated in several case studies. The aim of this article is to present the empirical results from these case studies. The data basis comprises case studies in four companies, in which a total of 386 shadow IT systems were found. Two of the companies come from the finance and insurance industry, the other two from the manufacturing industry. One departmental area in each company was examined. Drawing on these case studies, the second chapter presents the manifestations of the shadow IT that was uncovered. The third chapter describes the business relevance of shadow IT, and the fourth section addresses the quality of the shadow IT systems. A follow-up article will describe possible audit approaches.
Schatten-IT (Teil 2)
(2017)
The term shadow IT describes systems that are developed and operated autonomously by business departments outside the responsibility of the IT department. Although this phenomenon is widespread, shadow IT long went unnoticed in research and practice. For some time now, increased engagement with the phenomenon has been discernible, yet there are still few empirically supported findings on shadow IT. This article addresses that gap. On the basis of four case studies, the prevalence, use, and quality of shadow IT are determined. It becomes apparent that shadow IT can entail considerable risks, while its quality lags noticeably behind. It is also evident that shadow IT does not appear within the companies' control systems. Measures such as raising awareness of shadow IT within the company and developing shadow IT into a governed business-unit IT can effectively reduce the risks. In addition, the companies' control systems must be adapted so that shadow IT becomes visible in them.
According to the current state of the art, the decontamination of problem areas such as corners and inner edges largely relies on equipment from conventional refurbishment. Machines such as needle scalers and bush hammers expose workers to strong vibrations and high reaction forces. Correspondingly long break times are therefore required, which further reduces the already low removal rate. Besides this additional effort, the equipment can, owing to the lack of extraction systems, lead to contamination carry-over, in which contaminated particles are spread across areas that have already been decontaminated, partially undoing the progress achieved.
Because of the many disadvantages of the equipment used so far, the research project EKONT-1 on the "development of an innovative, semi-automated device for dry-mechanical decontamination of corners, edges, and problem areas in nuclear facilities" was initiated and carried out. Within this project, many new insights were gained and several functional prototypes were developed, built, and tested both in the laboratory and in practical use. Since the trials revealed further potential for improvement, the follow-up project EKONT-2, which is concerned with the further development of the existing prototypes, was launched on 1 July 2023.
Technology-based startups make a substantial contribution to economic and social development. In the course of their founding, they require support in the form of venture capital, which in the seed and early stage is primarily provided by business angels (BAs). However, the processes and evaluation criteria of the BA investment process have so far been insufficiently researched. This article uses expert interviews within a case study of the entrepreneurial ecosystem of Baden-Württemberg to identify how BAs evaluate and select technology-based startups. In addition, the criteria by which BAs distinguish promising from failing startups are derived. The article thus contributes to opening the "black box" of investment activities in the earliest founding phases.
Digitalisierung im Bauwesen
(2015)
In the digital age, information technology (IT) is a strategic asset for organizations. As a result, IT costs are rising, and the cost-effective management of IT is crucial. Nevertheless, organizations still face major challenges, and former studies lack comprehensiveness and depth. The goal of this paper is to generate a deep and holistic view of the current management challenges of IT costs. In 15 expert interviews, we identify 23 challenges divided into 7 categories. The main challenges are to ensure transparency of IT cost information, to demonstrate the business impact of IT, and to change the mindset regarding the value of IT; overcoming them requires attention to their interactions. Hence, this paper leads to a better understanding of the issues that IT cost management (ITCM) faces in the digital age and builds a base for future research.
Nowadays, organizations must invest strategically in information technology (IT) and choose the right digital initiatives to maximize their benefit. Nevertheless, Chief Information Officers still struggle to communicate IT costs and demonstrate the business value of IT. The goal of this paper is to support their effective communication. In focus groups, we analyzed how different stakeholders perceive IT costs and the business value of IT as the basis of communication. We identified 16 success factors to establish effective communication. Hence, this paper enables a better understanding of the perception and the operationalization of effective communication.
Nowadays, information technology (IT) is a strategic asset for organizations. As a result, the IT costs are rising and there is a need for transparency about their root causes. Cost drivers as an instrument in IT cost management enable a better transparency and understanding of costs. However, there is a lack of IT cost driver research with a focus on the strategic position of IT within organizations. The goal of this paper is to develop a comprehensive overview of strategic drivers of IT costs. The Delphi study leads to the identification and validation of 17 strategic drivers. Hence, this paper builds a base for cost driver analysis and contributes to a better understanding of the causes of costs. It facilitates future research regarding cost behavior and the business value of IT. Additionally, practitioners gain awareness of levers to influence IT costs and consequences of managerial decisions on their IT spend.
We identify 74 generic, reusable technical requirements based on the GDPR that can be applied to software products which process personal data. The requirements can be traced to corresponding articles and recitals of the GDPR and fulfill the key principles of lawfulness and transparency. Therefore, we present an approach to requirements engineering with regard to developing legally compliant software that satisfies the principles of privacy by design, privacy by default as well as security by design.
The overall goal of this work is to detect and analyze a person's movement, breathing, and heart rate during sleep in a common bed overnight without any additional physical contact. The measurement is performed with the help of sensors placed between the mattress and the frame. A two-stage pattern classification algorithm has been implemented that applies statistical analysis to recognize the position of patients. The system is implemented in a sensor network hosting several nodes and communication endpoints to support quick and efficient classification. The overall tests show convincing results for position recognition and a reasonable overlap in matching.
This paper shows how a bed system with embedded sensors is able to analyze a person's movement and breathing and to recognize the positions in which the subject lies on the bed during the night, without any additional physical contact. The measurements are performed with sensors placed between the mattress and the frame. An Intel Edison board was used as an endpoint serving as a communication node from the mesh network to an external service. Two nodes and the Intel Edison are attached to the bottom of the bed frame and connected to the sensors.
Research Report
(2024)
Requirements Engineering in Business Analytics for Innovation and Product Lifecycle Management
(2014)
Considering Requirements Engineering (RE) in business analytics, which involves market-oriented management, computer science, and statistics, may be valuable for managing innovation in Product Lifecycle Management (PLM). RE and business analytics can help maximize the value of corporate product information throughout the value chain, starting with innovation management. Innovation and PLM must address 1) big data, 2) the development of well-defined business goals and principles, 3) cost/benefit analysis, 4) continuous change management, and 5) statistics and reporting. This paper is a positioning note that addresses some business case considerations for analytics projects involving PLM data, patents, and innovations. We describe a number of research challenges in RE that arise when large volumes of PLM data should be turned into a successful market-oriented innovation management strategy, and we provide a draft of how to address these research challenges.
Deep Learning-based EEG Detection of Mental Alertness States from Drivers under Ethical Aspects
(2022)
One of the most critical factors for a successful road trip is a high degree of alertness while driving. Even a split second of inattention or sleepiness at a crucial moment can make the difference between life and death. Several prestigious car manufacturers are currently pursuing automated drowsiness detection to address this problem. The path between neuroscientific research combined with artificial intelligence and the preservation of human dignity and its inviolability is very narrow. The key contribution of this work is a system for analyzing EEG data recorded during a driving session, drawing on previous studies analyzing heart rate (ECG), brain waves (EEG), and eye function (EOG). The gathered data is treated as sensitively as possible, taking ethical regulations into consideration. Obtaining evaluable signs of evolving exhaustion involves techniques that extract sleep-stage frequencies; the correlated interferences in the signal are problematic here. This research focuses on a processing chain for EEG band splitting that involves band-pass filtering, principal component analysis (PCA), independent component analysis (ICA) with automatic artifact removal, and the fast Fourier transform (FFT). The classification is based on a step-by-step adaptive deep learning analysis that detects theta rhythms as a drowsiness predictor in the pre-processed data. An offline detection rate of 89% and an online detection rate of 73% were obtained. The method is tied to the simulated driving scenario for which it was developed, which leaves room for further optimization of laboratory methods and data collection during wakefulness-dependent operations.
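One step of such a chain can be sketched as follows (band-pass filtering plus FFT-based relative theta power; the sampling rate and band edges are assumptions, and the PCA/ICA artifact-removal stages are omitted):

```python
import numpy as np
from scipy.signal import butter, filtfilt

def theta_band_power(eeg, fs=256.0):
    """Relative theta (4-8 Hz) power of one EEG channel, a common
    drowsiness indicator, computed from the FFT of the filtered signal."""
    b, a = butter(4, [1 / (fs / 2), 30 / (fs / 2)], btype="band")
    x = filtfilt(b, a, eeg)                     # broadband pre-filter (1-30 Hz)
    spec = np.abs(np.fft.rfft(x * np.hanning(len(x)))) ** 2
    freqs = np.fft.rfftfreq(len(x), d=1 / fs)
    theta = spec[(freqs >= 4) & (freqs < 8)].sum()
    total = spec[(freqs >= 1) & (freqs < 30)].sum()
    return theta / total                        # tends to rise with drowsiness
```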
Rheumatoid arthritis is an autoimmune disease that causes chronic inflammation of synovial joints, often resulting in irreversible structural damage. The activity of the disease is evaluated by clinical examinations, laboratory tests, and patient self-assessment. The long-term course of the disease is assessed with radiographs of hands and feet. The evaluation of the X-ray images performed by trained medical staff requires several minutes per patient. We demonstrate that deep convolutional neural networks can be leveraged for a fully automated, fast, and reproducible scoring of X-ray images of patients with rheumatoid arthritis. A comparison of the predictions of different human experts and our deep learning system shows that there is no significant difference in the performance of human experts and our deep learning model.
Nowadays, most digital modulation schemes are based on conventional signal constellations that have no algebraic group, ring, or field properties, e.g., square quadrature-amplitude modulation constellations. Signal constellations with algebraic structure can enhance the system performance. For instance, multidimensional signal constellations based on dense lattices can achieve performance gains due to the dense packing, and the algebraic structure enables low-complexity decoding and detection schemes. In this work, signal constellations with algebraic properties and their application in spatial modulation transmission schemes are investigated. Several design approaches for two- and four-dimensional signal constellations based on Gaussian, Eisenstein, and Hurwitz integers are shown, and detection algorithms with reduced complexity are proposed. It is shown that the proposed Eisenstein and Hurwitz constellations combined with the proposed suboptimal detection can outperform conventional two-dimensional constellations with ML detection.
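For intuition, the modulo reduction behind finite Gaussian-integer constellations can be sketched as follows (the standard nearest-multiple reduction; the choice π = 2 + i is an example, not taken from the paper):

```python
def gauss_mod(z, pi):
    """Reduce a Gaussian integer z modulo pi: subtract the nearest
    Gaussian-integer multiple of pi, leaving a minimal-norm representative
    of the residue class z + (pi)."""
    q = z * pi.conjugate() / abs(pi) ** 2          # exact quotient in the plane
    q_rounded = complex(round(q.real), round(q.imag))
    return z - q_rounded * pi

# Example: pi = 2 + 1j (norm 5) yields the 5-point constellation
# {0, 1, -1, 1j, -1j}, closed under addition modulo pi.
points = {gauss_mod(complex(a, b), 2 + 1j)
          for a in range(-2, 3) for b in range(-2, 3)}
```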
This work proposes a construction for low-density parity-check (LDPC) codes over finite Gaussian integer fields. Furthermore, a new channel model for codes over Gaussian integers is introduced and its channel capacity is derived. This channel can be considered as a first order approximation of the additive white Gaussian noise channel with hard decision detection where only errors to nearest neighbors in the signal constellation are considered. For this channel, the proposed LDPC codes can be decoded with a simple non-probabilistic iterative decoding algorithm similar to Gallager's decoding algorithm A.
This paper proposes a novel transmission scheme for generalized multistream spatial modulation. The new approach uses codes over Gaussian or Eisenstein integers that correct one Mannheim error as multidimensional signal constellations. These codes enable a suboptimal decoding strategy with near-maximum-likelihood performance for transmission over the additive white Gaussian noise channel. In this contribution, this decoding algorithm is generalized to the detection for generalized multistream spatial modulation. The proposed method can outperform conventional generalized multistream spatial modulation with respect to decoding performance, detection complexity, and spectral efficiency.
Spatial modulation is a low-complexity multiple-input/multiple-output transmission technique. The recently proposed spatial permutation modulation (SPM) extends the concept of spatial modulation. It is a coding approach in which the symbols are dispersed in space and time. In the original proposal of SPM, short repetition codes and permutation codes were used to construct a space-time code. In this paper, we propose a similar coding scheme that combines permutation codes with codes over Gaussian integers. Short codes over Gaussian integers have good distance properties; furthermore, the code alphabet can be applied directly as the signal constellation, so no mapping is required. Simulation results demonstrate that the proposed coding approach outperforms SPM with repetition codes.
Four-Dimensional Hurwitz Signal Constellations, Set Partitioning, Detection, and Multilevel Coding
(2021)
The Hurwitz lattice provides the densest four-dimensional packing. This fact has motivated research on four-dimensional Hurwitz signal constellations for optical and wireless communications. This work presents a new algebraic construction of finite sets of Hurwitz integers that is inherently accompanied by a respective modulo operation. These signal constellations are investigated for transmission over the additive white Gaussian noise (AWGN) channel. It is shown that these signal constellations have a better constellation figure of merit and hence a better asymptotic performance over an AWGN channel when compared with conventional signal constellations with algebraic structure, e.g., two-dimensional Gaussian-integer constellations or four-dimensional Lipschitz-integer constellations. We introduce two concepts for set partitioning of the Hurwitz integers. The first method is useful to reduce the computational complexity of the symbol detection. This suboptimum detection approach achieves near-maximum-likelihood performance. In the second case, the partitioning exploits the algebraic structure of the Hurwitz signal constellations. We partition the Hurwitz integers into additive subgroups in a manner that the minimum Euclidean distance of each subgroup is larger than in the original set. This enables multilevel code constructions for the new signal constellations.
Multi-dimensional spatial modulation is a multiple-input/multiple-output wireless transmission technique that uses only a few active antennas simultaneously. The computational complexity of the optimal maximum-likelihood (ML) detector at the receiver increases rapidly as more transmit antennas or larger modulation orders are employed, and ML detection may be infeasible for higher bit rates. Many suboptimal detection algorithms for spatial modulation use two-stage detection schemes where the set of active antennas is detected in the first stage and the transmitted symbols in the second stage. Typically, these detection schemes use the ML strategy for the symbol detection. In this work, we consider a suboptimal detection algorithm for the second detection stage that combines equalization and list decoding. We propose an algorithm for multi-dimensional signal constellations with a search space reduced through set partitioning in the second detection stage. In particular, we derive a set partitioning from the properties of Hurwitz integers. Simulation results demonstrate that the new algorithm achieves near-ML performance while significantly reducing the complexity compared with conventional two-stage detection schemes. Multi-dimensional constellations in combination with suboptimal detection can even outperform conventional signal constellations in combination with ML detection.
This work proposes a suboptimal detection algorithm for generalized multistream spatial modulation. Many suboptimal detection algorithms for spatial modulation use two-stage detection schemes where the set of active antennas is detected in the first stage and the transmitted symbols in the second stage. For multistream spatial modulation with large signal constellations the second detection step typically dominates the detection complexity. With the proposed detection scheme, the modified Gaussian approximation method is used for detecting the antenna pattern. In order to reduce the complexity for detecting the signal points, we propose a combined equalization and list decoding approach. Simulation results demonstrate that the new algorithm achieves near-maximum-likelihood performance with small list sizes. It significantly reduces the complexity when compared with conventional two-stage detection schemes.