Digitalization is one of the most frequently discussed topics in industry. New technologies, platform concepts, and integrated data models enable disruptive business models and drive changes in organization, processes, and tools. The goal is to make a company more efficient, more productive, and ultimately more profitable. However, many companies face the challenge of how to approach digital transformation in a structured way and realize these potential benefits. What they recognize is that Product Lifecycle Management plays a key role in digitalization efforts, as object, structure, and process management along the life cycle is a foundation for many digitalization use cases. The introduced maturity model for assessing a firm's capabilities along the product lifecycle has been used almost two hundred times. It allows a company to compare its performance with an industry-specific benchmark to reveal individual strengths and weaknesses. Furthermore, an empirical study produced multidimensional correlation coefficients that identify dependencies between business model characteristics and the maturity level of capabilities.
The transformation towards Industry 4.0, which is generally seen as a solution to increasing market challenges, is forcing companies to radically change their way of thinking and to be open to new forms of cooperation. In this context, the opening-up of the innovation process is widely seen as a necessity to meet these challenges, especially for small and medium-sized enterprises (SMEs). The aim of this study therefore is to analyze how cooperation can be characterized today, how this character has changed since the establishment of the term Industry 4.0 at the Hanover Fair in 2011, and which cooperation strategies have proven successful. The analysis consists of a quantitative secondary data analysis that includes country-specific data for 2010 and 2016 from 35 European countries, collected by the European Commission and the OECD. The research, focusing on the secondary sector, shows that multinational enterprises (MNEs) still tend to cooperate more than SMEs, with a slight overall trend towards protectionism. Nevertheless, there is a clear tendency towards the opening-up of SMEs. In this regard, universities, competitors, and suppliers in particular have become increasingly attractive as cooperation partners for SMEs.
Die Nibelungenbrücke Worms
(2020)
Despite the importance of Social Life Cycle Sustainability Assessment (S-LCSA), little research has addressed its integration into Product Lifecycle Management (PLM) systems. This paper presents a structured review of relevant research and practice. Also, to address practical aspects in more detail, it focuses on challenges and potential for adoption of such an integrated system at an electronics company.
We began by reviewing literature on implementations of S-LCSA and identifying research needs. Then we investigated the status of S-LCSA within the electronics industry, both by reviewing the literature and by interviewing decision makers, to identify challenges and the potential for adopting S-LCSA at an electronics company. We found that S-LCSA has low maturity, with particular difficulty in quantifying social sustainability. Adoption of S-LCSA was less common among electronics industry suppliers, especially mining and smelting plants. Our results could provide a basis for case studies that further clarify the issues involved in integrating S-LCSA into PLM systems.
Jahresbericht 2020
(2020)
This paper examines the corporate organisational aspects of the implementation of Industry 4.0. Industry 4.0 builds on new technologies and appears as a disruptive innovation to manufacturing firms. Although we have a good understanding of the technical components, the implementation of the management and organisational aspects of Industry 4.0 is under-researched. It is challenging to find qualitative empirical evidence that provides comprehensive insights into real implementation cases. Based on a case study in a German high-value manufacturing firm, we explore the corporate organisation and implementation of Industry 4.0. Using the framework of Complex Adaptive Systems (CAS), we have identified three key factors that facilitate the implementation of Industry 4.0: (1) organisational structure changes, such as the foundation of a central department for digital transformation; (2) the appointment of a Chief Digital Officer as a personnel change; and (3) corporate opening-up towards cooperation with partners as a cultural change. We have furthermore found that Lean Management is an important enabler that ensures readiness for the adoption of Industry 4.0.
Forecasting is crucial for both system planning and operations in the energy sector. With the increasing penetration of renewable energy sources, growing fluctuations in power generation need to be taken into account. Probabilistic load forecasting is a young but emerging research topic focusing on the prediction of future uncertainties. However, the majority of publications so far focus on techniques like quantile regression, ensemble, or scenario-based methods, which generate discrete quantiles or sets of possible load curves. The conditioned probability distribution remains unknown and can only be estimated when the output is post-processed using a statistical method like kernel density estimation.
Instead, the proposed probabilistic deep learning model uses a cascade of transformation functions, known as a normalizing flow, to model the conditioned density function from a smart meter dataset containing electricity demand information for over 4,000 buildings in Ireland. Since the whole probability density function is tractable, the parameters of the model can be obtained by minimizing the negative log-likelihood through state-of-the-art gradient descent. This leads to the model with the best representation of the data distribution.
Two different deep learning models were compared: a simple three-layer fully connected neural network and a more advanced convolutional neural network for sequential data processing inspired by the WaveNet architecture. These models were used to parametrize three different probabilistic models: a simple normal distribution, a Gaussian mixture model, and the normalizing flow model. The prediction horizon is set to one day with a resolution of 30 minutes, hence the models predict 48 conditioned probability distributions.
The normalizing flow model outperforms the two other variants for both architectures and proves its ability to capture the complex structures and dependencies causing the variations in the data. Understanding the stochastic nature of the task in such detail makes the methodology applicable for other use cases apart from forecasting. It is shown how it can be used to detect anomalies in the power grid or generate synthetic scenarios for grid planning.
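The change-of-variables principle behind such a normalizing flow can be illustrated with a minimal sketch (a hypothetical one-dimensional affine flow in NumPy, not the WaveNet-based model described above): an invertible map sends the data onto a tractable base density, and the exact log-density of an observation is the base log-density of its image plus the log-determinant of the transformation, so the negative log-likelihood can be minimized directly.

```python
import numpy as np

# Toy change-of-variables density (assumed illustration, not the paper's
# model): the invertible affine map z = (x - mu) / sigma pushes data onto
# a standard normal base density, and the exact log-density of x is
# log N(z; 0, 1) + log |dz/dx|.

def affine_flow_logpdf(x, mu, sigma):
    z = (x - mu) / sigma                           # inverse transform
    log_base = -0.5 * (z**2 + np.log(2 * np.pi))   # standard normal log-pdf
    log_det = -np.log(sigma)                       # log |dz/dx|
    return log_base + log_det

# Because the density is tractable, fitting reduces to minimizing the
# negative log-likelihood; the true parameters score best.
rng = np.random.default_rng(0)
data = rng.normal(loc=3.0, scale=2.0, size=10_000)
nll = lambda mu, sigma: -affine_flow_logpdf(data, mu, sigma).mean()
assert nll(3.0, 2.0) < nll(0.0, 1.0)
```

A deep flow stacks many such invertible maps with learned parameters; the log-determinants of the individual layers simply add up, which is what keeps the full density tractable.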
This bachelor's thesis addresses the optimization of the sample-approval process (Bemusterung) using Lean and agile methods. The work is oriented towards the company Ed. Züblin AG and the resources available there. However, with the necessary adaptations, the concept can be transferred at least in part to other companies and projects. The current process was mapped using a value stream analysis and interviews. This revealed different ways of carrying out sample approvals within the company, each with its own problems. Among other things, missing or incorrect planning led to constant requirement changes, gaps in execution, and restricted communication. A restructuring of the process flow, adjustments to planning using sprints, and considerations regarding organization and staff are intended to bring these problems under control. This contribution is thus meant to shed light on the sample-approval process and to provide incentives and ideas for improvement.
In order to examine inflation and deflation tendencies arising from the COVID-19 pandemic and how central banks actively try to influence them, this paper compares news regarding the measures of central banks in Europe, the USA, and Japan. Factors affecting inflation are defined in conjunction with the typical measures of central banks, and conclusions are drawn with respect to differences in the most recent corrective behavior. The paper concludes by discussing how price levels might develop during and after the crisis.
Cities around the world are facing an increasing number of global and local challenges, such as climate change and the scarcity of raw materials. At the same time, trends like digitalization, globalization, and networking are gaining in importance. For this reason, cities have started implementing smart solutions within the urban structure in order to evolve towards a Smart City. In Botswana, the Maun Science Park is intended to provide a best-practice approach for a Botswanan Smart City. Since Smart City concepts have to be tailored specifically to local conditions, the first main goal of this thesis is to develop a synthesis concept for the Maun Science Park. A key problem in cities is the utilization of space, which is further intensified by increasing urbanization and population growth. Therefore, the second main goal is to develop approaches for (digitally) re-programmable space in order to use available areas intelligently and optimally.
Within the thesis, human-centered design has been applied as the structuring methodology. By clarifying relevant Smart City contents, considering reference examples, and identifying local challenges and requirements, an appropriate concept has been developed with a human focus. Furthermore, literature research and expert interviews have been used as input in the individual human-centered design phases. In combination with an innovation funnel, the human-centered design methodology forms the structure of the thesis.
In total, ten main solution areas and 37 sub-segments have been identified for the synthesis concept of the Maun Science Park. Additionally, a concept for Smart Buildings has been developed as part of the synthesis concept and as an essential infrastructure component of the Maun Science Park (three main segments, 16 sub-segments). Based on expert input, a prioritization has been determined by evaluating the impact and economic affordability of the individual sub-areas. Moreover, individual key areas have been highlighted by identifying direct interactions between sub-segments and on the basis of expert input; these are particularly related to the segments Smart Data and Smart People. Besides the synthesis concept, approaches for (digitally) re-programmable space have been created. Ten of these approaches refer to the conversion, reuse, or expansion of space within daily, weekly, or life cycles. In addition, the conventional idea of (digitally) re-programmable space has been extended by two new considerations: "multi-purpose use of built-up space" and "concept programming in the planning phase". Finally, within an overall consideration combining the synthesis concept with the approaches for (digitally) re-programmable space, the added value of the developed contents has been outlined, positive and negative aspects have been identified within a SWOT analysis, and the business model of the Maun Science Park approach has been verified in a Business Model Canvas.
Through explicit elaboration, classification and prioritization of solution areas, the developed concept can serve as a basis for further project steps. Based on the defined requirements of the sub-segments, solutions can be developed with regard to the entire Smart City context.
Throughout this thesis, the implementation of tools for knowledge management as a key factor for sustainable corporate development is presented. In industries with a high fluctuation rate, such as construction, efficient knowledge management is of particular importance. Companies feel the effects of negligent handling of this resource especially during the corona pandemic. Restructuring leads to experienced employees leaving the company, and with them the know-how and experience they have gained. With a systematic knowledge transfer, the most important insights in such situations remain within the organization. Thus, the company becomes crisis-proof and receives all the tools it needs to grow healthily again after the recession. Practical data from competitors indicates that knowledge management promises savings potential of several million euros per year for BAM. Further potentials in the areas of sustainability, customer and employee satisfaction, and occupational safety, which do not lead to direct savings, are also worth mentioning. This thesis determines the current maturity level of knowledge management at BAM before introducing processes and systems that successively drive improvement. The developed methods simultaneously help to prevent and solve problems and systematically promote the continuous improvement of all work processes in the company.
Detailed steps are presented for carrying out change management towards the successful introduction and further development of knowledge management at BAM. A major focus is on interpersonal factors. The related topic of knowledge culture was recently ranked by the German think tank Zukunftsinstitut as one of the top five megatrends for companies in the 2020s. The methods developed contribute to the creation of such a culture and to the transformation of BAM towards a learning organization. Knowledge management aligns with the BAM values. In the course of this thesis, it is shown how the system, by its very nature, helps to implement these values in the work of every employee.
The results of this elaboration were recently awarded the Digital Construction Award 2020 for Business Excellence at BAM Deutschland AG.
In Botswana, a new construction project, the Maun Science Park, is to be built with a focus on sustainability and to create new living space for the rapidly growing population of Africa. The project is intended to serve as a blueprint for future projects in Africa in terms of progress, technology, and sustainability. This thesis deals with its financial framework and serves as a basis for developing ways and means of financing such projects.
Traditional Western philosophy, cognitive science, and traditional HCI frameworks approach the term digital and its implications with an implicit dualism (nature/culture, theory/practice, body/mind, human/machine). What lies between is a feature of our postmodern times, in which different states, conditions, or positions merge and co-exist in a new, hybrid reality, a "continuous beta" (Mühlenbeck & Skibicki, 2007) version of becoming. Post-digitality involves the physical dimensions of spatio-temporal engagements. This new ontological paradigm reconceptualizes digital technology through the experience of the human body and its senses, thus emphasizing form-taking, situational engagement, and practice rather than symbolic, disembodied rationality. This raises two questions in particular: how to encourage curiosity, playfulness, serendipity, emergence, discourse, and collectivity? How to construct working methods without foregrounding and dividing the subject into an individual that already takes a position? This paper briefly outlines the rhizomatic framework that I developed within my PhD research. It attempts to overcome two prevailing tendencies: first, the one-sided view of scientific approaches to knowledge acquisition and the purely application-oriented handling of materials, technologies, and machines; second, the distanced perception of the world. In contrast, my work involves project-driven alchemic curiosity and doing research through artistic design practice. This means thinking through materials, technologies, and machinic interactions. Now, at the end of this PhD journey, ten interdisciplinary projects have emerged from this ontological queer-paradigm that is post-digital crafting 4.0. Below I illustrate this approach and its outcomes.
Pop-up Workshopreihe
(2020)
Production and marketing of cereal grains are some of the main activities in developing countries to ensure food security. However, the food gap is complicated further by high postharvest loss of grains during storage. This study aimed to compare low‐cost modified‐atmosphere hermetic storage structures with traditional practice to minimize quantitative and qualitative losses of grains during storage. The study was conducted in two phases: in the first phase, seven hermetic storage structures with or without smoke infusion were compared, and one selected structure was further validated at scaled‐up capacity in the second phase.
Von wegen Bauschutt
(2020)
Recycled-aggregate concretes (RC concretes) are not new developments, but for roughly 15 years they have been experiencing a renaissance in Germany, with material compositions that meet today's requirements for normal concretes. There have repeatedly been periods in (construction) history in which buildings were erected from crushed-brick concrete, such as the Max-Kade student residence in Stuttgart and the Technisches Rathaus in Tübingen. Both date from the post-war period and are in good condition. They are examples of the proven performance of "historical" crushed-brick concretes in construction practice and of their long technical service life.
Neither for modern recycled concretes compliant with current codes nor for crushed-brick concretes of the post-war years are there fundamental objections to their use or continued use in building construction. The authors hope for more acceptance of and trust in recycled building materials, and that "vintage" in construction will eventually attract interest similar to that in vintage furniture or used-look clothing, not only with regard to the reuse of doors and stairs but also for mineral bulk building materials such as concrete. Using successfully realized example buildings, the article illustrates how high-rise buildings (e.g. the Max-Kade-Haus student residence in Stuttgart, 1953, built with rubble concrete), sacred buildings (the Fatima Church in Kassel, exposed concrete with crushed brick, 60 years old), and administrative buildings (the Technisches Rathaus in Tübingen from the 1950s) were built successfully and sustainably with recycled materials.
In modern fruit processing technology, non-destructive quality measuring techniques are sought for determining and controlling changes in the optical, structural, and chemical properties of the products. In this context, changes inside the product can be measured during processing. Especially for industrial use, fast, precise, but robust methods are particularly important to obtain high-quality products. In this work, a newly developed multi-spectral imaging system was implemented and adapted for drying processes. Further, it was investigated whether the system could be used to link changes in the surface spectral reflectance during mango drying with changes in moisture content and contents of chemical components. This was achieved by recovering the spectral reflectance from multi-spectral image data and comparing the spectral changes with changes of the total soluble solids (TSS), pH value, and the relative moisture content x_wb of the products. In a first step, the camera was modified to be used in drying; then the changes in the spectra and quality criteria during mango drying were measured. For this, mango slices were dried at air temperatures of 40–80 °C and relative air humidities of 5%–30%. Samples were analyzed and pictures were taken with the multi-spectral imaging system. The quality criteria were then predicted from spectral data. It could be shown that the newly developed multi-spectral imaging system can be used for quality control in fruit drying. There are strong indications as well that it can be employed for the prediction of chemical quality criteria of mangoes during drying. This way, quality changes can be monitored inline during the process using only one single measuring device.
The few literature references on sorption isotherms of mineral screeds essentially relate to calcium sulfate screeds and standardized cement screeds, and as a rule only to a fixed air temperature (20 °C). The aim of the investigation described in this article was therefore to characterize the moisture properties of screeds under different climates by means of sorption isotherms. In addition, the ternary rapid cements that have been on the market for about 20 years were included, as were the temperatures of 15 °C and 25 °C, which are of practical interest on site. The effects of the climatic conditions on the construction site (season, air humidity, temperature) on the hydration process of the screeds were also examined. In combination with the results of microstructural investigations (including mercury porosimetry), it is shown why cement-bound and rapid-cement-bound screeds behave completely differently from calcium-sulfate-bound screeds. This different behavior is also one of the reasons why the moisture content of screeds cannot be assessed with the KRL method. A comparison of the "KRL method" of material moisture measurement with the "CM method", which is customary in the trade and has proven itself in practice for decades, therefore follows.
Location-aware mobile devices are becoming increasingly popular, and GPS sensors are built into nearly every portable unit with computational capabilities. At the same time, the emergence of location-aware virtual services and ideas calls for new, efficient spatial real-time queries. Communication latency in highly decentralized mobile environments, combined with the need for scalability in high-density systems with immense client counts, leads to major challenges. In this paper we describe a decentralized architecture for continuous range queries in settings in which both the requested and the requesting clients are mobile. While prior works commonly use a request-response approach, we provide a stream-based adaptive grid solution that deals with arbitrarily high client counts and improves communication latency to meet given hard real-time constraints.
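The core of a grid-based range query can be sketched in a few lines (a hypothetical toy in Python; the cell size, coordinates, and function names are illustrative and not taken from the paper): clients are hashed into cells, and a continuous range query only has to observe the cells intersecting its radius, so position updates from distant clients never reach it.

```python
import math

# Assumed tuning parameter: edge length of one grid cell.
CELL = 10.0

def cell_of(x, y):
    """Map a position to its grid cell index."""
    return (math.floor(x / CELL), math.floor(y / CELL))

def cells_in_range(x, y, r):
    """All cells whose bounding box intersects the query's radius r."""
    cx0, cy0 = cell_of(x - r, y - r)
    cx1, cy1 = cell_of(x + r, y + r)
    return {(cx, cy) for cx in range(cx0, cx1 + 1)
                     for cy in range(cy0, cy1 + 1)}

# A query of radius 5 around (12, 12) touches only the 4 surrounding cells.
assert cells_in_range(12, 12, 5) == {(0, 0), (0, 1), (1, 0), (1, 1)}
```

An adaptive variant would split or merge cells as the local client density changes, keeping the per-cell update streams bounded.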
A residual neural network was adapted and applied to the PhysioNet/Computing in Cardiology Challenge 2020 data to detect 24 different classes of cardiac abnormalities from 12-lead ECGs. Additive Gaussian noise, signal shifting, and the classification of signal sections of different lengths were applied to prevent the network from overfitting and to facilitate generalization. Due to the use of a global pooling layer after the feature extractor, the network is independent of the signal's length. On the hidden test set of the challenge, the model achieved a validation score of 0.656 and a full test score of 0.27, placing us 15th out of 41 officially ranked teams (team name: UC_Lab_Kn). These results show the potential of deep neural networks for application to raw data and a complex multi-class, multi-label classification problem, even if the training data come from diverse datasets and are of differing lengths.
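Why a global pooling layer makes such a classifier length-independent can be shown with a minimal NumPy sketch (an illustrative toy, not the challenge model; the channel count and segment lengths are assumptions): whatever the temporal length of the convolutional feature map, global average pooling collapses it to a fixed-size vector that the classification head can consume.

```python
import numpy as np

def global_avg_pool(features):
    """Collapse a (channels, T) feature map to a fixed (channels,) vector
    by averaging over the temporal axis, regardless of T."""
    return features.mean(axis=1)

# Feature maps from signals of very different lengths yield vectors of
# identical shape, so one dense head serves all input lengths.
rng = np.random.default_rng(1)
short = rng.normal(size=(64, 500))    # e.g. features of a short segment
long_ = rng.normal(size=(64, 3600))   # e.g. features of a long recording
assert global_avg_pool(short).shape == global_avg_pool(long_).shape == (64,)
```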
Research question: Which roles can be identified in corporate entrepreneurship? How do they differ in terms of various characteristics, and which skills appear particularly relevant for performing them successfully?
Methodology: Exploratory study based on 56 semi-structured interviews within corporate entrepreneurship activities in the DACH region
Practical implications: A precise understanding of the respective roles, their differences, and their requirements is necessary in order to staff the various corporate entrepreneurship activities with suitable personnel.
Der digitale Seilüberwacher
(2020)
In an exemplary collaboration between industry (Geobrugg AG, Romanshorn, Switzerland) and science (WITg Institut für Werkstoffsystemtechnik Thurgau at Hochschule Konstanz, Tägerwilen, Switzerland), and with support from the Swiss federal innovation funding agency (KTI, now Innosuisse), a new material and manufacturing concept for building fish-farming nets from high-strength stainless steel wires was developed.
In 2019, this development was awarded the Swiss innovation prize Prix Inox by Swiss Inox.
What drives entrepreneurial action to create a lasting impact? The creation of new ventures that aim at having an impact beyond their financial performance faces additional challenges: achieving economic sustainability while at the same time addressing social or environmental issues. Little is known about how these new hybrid organizations, aiming for multiple impact dimensions, manage to be congruent with their blended values. A dataset of 4,125 early-stage ventures is used to gain insights into how blended values are converted into financial, social, and environmental impacts, giving shape to different types of hybrid organizations. Our findings suggest that new hybrid organizations might opt to sacrifice financial impact to achieve social impact, yet this is not the case when they aim to generate environmental or sustainable impact. Therefore, the tensions and sacrifices related to holding blended values are not homogeneous across all types of new hybrid organizations.
Uncertainty about the future requires companies to create discontinuous innovations. Established companies, however, struggle to do so, whereas independent startups seem to cope with this better. Consequently, established companies set up entrepreneurial initiatives to make use of startups' benefits, which has led, amongst other things, to great interest in so-called corporate entrepreneurship (CE) programs and to the development and characterization of several different forms. Their processes for achieving certain objectives, however, are still rather ineffective. Considering the actions performed in preparation for and during CE programs could be one approach to improving this, but such considerations are still absent today. Furthermore, the increasing use of several CE programs in parallel seems to bear potential for synergies and thus for a more efficient use of resources. Aiming to provide insights into both issues, this study analyzes the actions of CE programs by looking at interviews with managers of seven corporate incubator and accelerator programs at five established German tech companies.
In today's volatile world, established companies must be capable of optimizing their core business with incremental innovations while simultaneously developing discontinuous innovations to maintain their long-term competitiveness. Balancing both is a major challenge for companies, since different types of innovation require different organizational structures, operational modes and management styles. Established companies tend to excel in improving their current business through incremental innovations which are closely related to their current knowledge base and competencies. However, this often goes hand in hand with challenges in the exploration of knowledge that is new to the company and that is essential for the development of discontinuous innovations. In this respect, the concept of corporate entrepreneurship is recognized as a way to strengthen the exploration of new knowledge and to support the development of discontinuous innovation. For managing corporate entrepreneurship more effectively, it is crucial to understand which types of knowledge can be created through corporate entrepreneurship and which organizational designs are more suited to gain certain types of knowledge. To answer these questions, this study analyzed 23 semi-structured interviews conducted with established companies that are running such entrepreneurial activities. The results show (1) that three general types of knowledge can be explored through corporate entrepreneurship and (2) that some organizational designs are more suited to explore certain knowledge types than others are.
We have analyzed a pool of 37,839 articles published in 4,404 business-related journals in the entrepreneurship research field using a novel literature review approach based on machine learning and text data mining. Most papers have been published in the journals 'Small Business Economics', 'International Journal of Entrepreneurship and Small Business', and 'Sustainability' (Switzerland), while the sum of citations is highest in the 'Journal of Business Venturing', 'Entrepreneurship Theory and Practice', and 'Small Business Economics'. We derived 29 overarching themes based on 52 identified clusters. The social entrepreneurship, development, innovation, capital, and economy clusters represent the largest ones among those with high thematic clarity. The most discussed clusters, measured by the average number of citations per assigned paper, are research, orientation, capital, gender, and growth. Clusters with the highest average growth in publications per year are social entrepreneurship, innovation, development, entrepreneurship education, and (business) models. Measured by the average yearly citation rate per paper, the thematic cluster 'research', mostly containing literature studies, received the most attention. The machine-learning-based review (MLR) allows for the inclusion of a significantly higher number of publications compared to traditional reviews, thus providing a comprehensive, descriptive overview of the whole research field.
In 1863, the popular author Wilhelm Busch wrote the story of Max and Moritz, two rascals who violate the rules and customs of the village community and severely disrupt communal life. Busch points to the core of the problem right in the introduction to his tale: "Ah, what one must often hear or read of wicked children! As, for example, of these two, who went by the names Max and Moritz;
who, instead of turning to the good through wise teachings, often laughed at them and secretly made fun of them. Yes, for mischief, yes, for that they were always ready!" Max and Moritz refuse to attend school and to behave properly and in accordance with the rules. Their obstinacy and recklessness end fatally for the two, and only with this ending does calm return to the village. The connection between Wilhelm Busch's well-known children's story "Max und Moritz" and current business and corporate scandals may seem somewhat far-fetched at first glance. As a "modern fairy tale", however, this story and its moral ultimately speak of exactly what troubles companies worldwide today. Without wishing to equate the familiar boyish pranks with variants of modern white-collar crime, the following can be derived from the seven pranks for our topic:
– Rule-violating behavior can appear anywhere
– Everyone suffers from the misconduct of individuals
– The functioning of the overarching system is lastingly disturbed
– And: classic "attempts at education" frequently prove ineffective
Due to the corona pandemic, the use of home office and teleworking increased markedly in 2020. Path dependencies exist both in the choice of workplace and in the communication technologies used. This contribution systematically addresses these path dependencies, in particular their causes and consequences as well as the options for breaking such paths.
40 Jahre Neuland des Denkens
(2020)
Frederic Vester's principal work "Neuland des Denkens" was published 40 years ago. This contribution examines the central themes of this programmatic book with regard to Vester's biocybernetics and its application to numerous current questions in the sustainability debate, such as climate change and the energy transition.
We provide an overview of the ongoing discussions on the objectives of the energy transition in the form of a conceptual framework, intending to facilitate the search for the most viable options for a successful transformation of the energy system. For this purpose, we examine the past and present development of energy policy goals in Germany, giving an overview of objectives and assessment approaches from politics, economics, and science. We then merge the different views into a common framework and analyze in more detail the central conflict between the wholeness of a hypothetical target circle and the simplification in favor of a hypothetical target point.
This paper proposes a novel transmission scheme for generalized multistream spatial modulation. The new approach uses Mannheim error-correcting codes over Gaussian or Eisenstein integers as multidimensional signal constellations. These codes enable a suboptimal decoding strategy with near-maximum-likelihood performance for transmission over the additive white Gaussian noise channel. In this contribution, this decoding algorithm is generalized to the detection of generalized multistream spatial modulation. The proposed method can outperform conventional generalized multistream spatial modulation with respect to decoding performance, detection complexity, and spectral efficiency.
Soft-input decoding of concatenated codes based on the Plotkin construction and BCH component codes
(2020)
Low latency communication requires soft-input decoding of binary block codes with small to medium block lengths.
In this work, we consider generalized multiple concatenated (GMC) codes based on the Plotkin construction. These codes are similar to Reed-Muller (RM) codes. In contrast to RM codes, BCH codes are employed as component codes. This leads to improved code parameters. Moreover, a decoding algorithm is proposed that exploits the recursive structure of the concatenation. This algorithm enables efficient soft-input decoding of binary block codes with small to medium lengths. The proposed codes and their decoding achieve significant performance gains compared with RM codes and recursive GMC decoding.
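The recursive (u, u+v) Plotkin construction underlying these codes can be sketched in a few lines. This is a generic illustration with arbitrary binary component codewords, not the paper's GMC construction with BCH components:

```python
def plotkin(u, v):
    # (u | u+v) construction: concatenate u with the bitwise XOR of u and v.
    # If u belongs to an (n, k1, d1) code and v to an (n, k2, d2) code, the
    # result belongs to a (2n, k1+k2, min(2*d1, d2)) code.
    assert len(u) == len(v)
    return u + [a ^ b for a, b in zip(u, v)]
```

Applying the construction recursively with repetition and parity-check codes as base cases yields the Reed-Muller family; replacing the component codes by BCH codes, as proposed here, improves the code parameters while keeping the recursive structure that the decoder exploits.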
The reliability of flash memories suffers from various error causes. Program/erase cycles, read disturb, and cell-to-cell interference impact the threshold voltages and cause bit errors during the read process. Hence, error correction is required to ensure reliable data storage. In this work, we investigate the bit-labeling of triple level cell (TLC) memories. This labeling determines the page capacities and the latency of the read process. The page capacity defines the redundancy that is required for error correction coding. Typically, Gray codes are used to encode the cell state such that the codes of adjacent states differ in a single digit. These Gray codes minimize the latency for random access reads but cannot balance the page capacities. Based on measured voltage distributions, we investigate the page capacities and propose a labeling that provides a better rate balancing than Gray labeling.
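The imbalance produced by Gray labeling can be seen directly from the standard reflected Gray code for the eight TLC states: counting, per bit position (page), how many of the seven state transitions flip that bit shows how unevenly the read thresholds are distributed. A minimal illustration, independent of the measured voltage distributions used in the paper:

```python
def gray(n):
    # reflected binary Gray code: adjacent codewords differ in exactly one bit
    return [i ^ (i >> 1) for i in range(2 ** n)]

states = gray(3)                        # the 8 TLC cell states
# number of state transitions that flip each bit position (i.e. per page)
flips = [sum(((a ^ b) >> k) & 1 for a, b in zip(states, states[1:]))
         for k in range(3)]
```

For the 3-bit Gray code the seven transitions split as 4/2/1 across the bit positions, so the pages need very different numbers of read thresholds; a balanced labeling trades some of this read-latency advantage for more even page capacities.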
Side Channel Attack Resistance of the Elliptic Curve Point Multiplication using Eisenstein Integers
(2020)
Asymmetric cryptography empowers secure key exchange and digital signatures for message authentication. Nevertheless, consumer electronics and embedded systems often rely on symmetric cryptosystems because asymmetric cryptosystems are computationally intensive. Besides, implementations of cryptosystems are prone to side-channel attacks (SCA). Consequently, the secure and efficient implementation of asymmetric cryptography on resource-constrained systems is demanding. In this work, elliptic curve cryptography is considered. A new concept for an SCA resistant calculation of the elliptic curve point multiplication over Eisenstein integers is presented and an efficient arithmetic over Eisenstein integers is proposed. Representing the key by Eisenstein integer expansions is beneficial to reduce the computational complexity and the memory requirements of an SCA protected implementation.
In this article, we give the construction of new four-dimensional signal constellations in the Euclidean space, which represent a certain combination of binary frequency-shift keying (BFSK) and M-ary amplitude-phase-shift keying (MAPSK). Descriptions of such signals and the formulas for calculating the minimum squared Euclidean distance are presented. We have developed an analytic construction method for even and odd values of M; hence, no computer search and no heuristic methods are required. The new optimized BFSK-MAPSK (M = 5,6,···,16) signal constructions are built for modulation indexes h = 0.1, 0.15,···, 0.5 and their parameters are given. The results of computer simulations are also provided. Based on the obtained results we conclude that BFSK-MAPSK systems outperform similar four-dimensional systems both in terms of minimum squared Euclidean distance and simulated symbol error rate.
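The key figure of merit here, the minimum squared Euclidean distance of a constellation, can always be checked by brute force for a finite point set. A generic helper, not the paper's analytic formulas:

```python
from itertools import combinations

def min_squared_distance(points):
    # smallest squared Euclidean distance over all pairs of constellation
    # points, each point given as a tuple of coordinates (any dimension)
    return min(sum((a - b) ** 2 for a, b in zip(p, q))
               for p, q in combinations(points, 2))
```

For unit-energy QPSK, `min_squared_distance([(1, 0), (0, 1), (-1, 0), (0, -1)])` gives 2; the same function applies unchanged to four-dimensional BFSK-MAPSK points.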
This work presents a new concept to implement the elliptic curve point multiplication (PM). This computation is based on a new modular arithmetic over Gaussian integer fields. Gaussian integers are a subset of the complex numbers such that the real and imaginary parts are integers. Since Gaussian integer fields are isomorphic to prime fields, this arithmetic is suitable for many elliptic curves. Representing the key by a Gaussian integer expansion is beneficial to reduce the computational complexity and the memory requirements of secure hardware implementations, which are robust against attacks. Furthermore, an area-efficient coprocessor design is proposed with an arithmetic unit that enables Montgomery modular arithmetic over Gaussian integers. The proposed architecture and the new arithmetic provide high flexibility, i.e., binary and non-binary key expansions as well as protected and unprotected PM calculations are supported. The proposed coprocessor is a competitive solution for a compact ECC processor suitable for applications in small embedded systems.
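Reduction modulo a Gaussian prime, the basic operation behind such an arithmetic, can be sketched with Python's complex type. Here π = 3 + 2i with norm 13 is a hypothetical example, and the rounding-based division is the textbook method, not the Montgomery arithmetic used in the proposed coprocessor:

```python
def gmod(z, pi):
    # remainder of the Gaussian integer z modulo pi: subtract the nearest
    # Gaussian-integer multiple of pi, found by rounding z * conj(pi) / norm(pi)
    n = pi.real ** 2 + pi.imag ** 2        # norm of pi, a rational prime
    q = z * pi.conjugate()                 # z / pi equals q / n
    q_rounded = complex(round(q.real / n), round(q.imag / n))
    return z - q_rounded * pi

pi = complex(3, 2)   # Gaussian prime of norm 13, so the residues form GF(13)
```

Because the residue field is isomorphic to GF(13), the usual modular identities carry over, e.g. `gmod(a * b, pi)` equals `gmod(gmod(a, pi) * gmod(b, pi), pi)`.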
Deep neural networks (DNNs) are known for their high prediction performance, especially in perceptual tasks such as object recognition or autonomous driving. Still, DNNs are prone to yield unreliable predictions when encountering completely new situations without indicating their uncertainty. Bayesian variants of DNNs (BDNNs), such as MC dropout BDNNs, do provide uncertainty measures. However, BDNNs are slow during test time because they rely on a sampling approach. Here we present a single shot MC dropout approximation that preserves the advantages of BDNNs without being slower than a DNN. Our approach is to analytically approximate for each layer in a fully connected network the expected value and the variance of the MC dropout signal. We evaluate our approach on different benchmark datasets and a simulated toy example. We demonstrate that our single shot MC dropout approximation resembles the point estimate and the uncertainty estimate of the predictive distribution that is achieved with an MC approach, while being fast enough for real-time deployments of BDNNs.
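The core idea, propagating the first two moments of the MC dropout signal through a fully connected layer analytically, can be sketched for a single linear layer. This is a simplified sketch assuming independent activations; the paper treats whole networks including non-linearities:

```python
import numpy as np

def dropout_moments(mean_x, var_x, W, p):
    # y = W @ (m * x) with m_i ~ Bernoulli(1 - p) i.i.d. (MC dropout).
    # Closed-form mean and variance of y, assuming independent inputs:
    #   E[m_i x_i]   = q * E[x_i]
    #   Var[m_i x_i] = q * Var[x_i] + q * (1 - q) * E[x_i]^2
    q = 1.0 - p
    mean_y = q * (W @ mean_x)
    var_y = (W ** 2) @ (q * var_x + q * (1.0 - q) * mean_x ** 2)
    return mean_y, var_y
```

Chaining this layer by layer replaces the sampling loop of MC dropout with a single forward pass, which is what makes the approximation "single shot".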
Probabilistic Deep Learning
(2020)
Probabilistic Deep Learning is a hands-on guide to the principles that support neural networks. Learn to improve network performance with the right distribution for different data types, and discover Bayesian variants that can state their own uncertainty to increase accuracy. This book provides easy-to-apply code and uses popular frameworks to keep you focused on practical applications.
At present, the majority of the proposed Deep Learning (DL) methods provide point predictions without quantifying the model's uncertainty. However, a quantification of the reliability of automated image analysis is essential, in particular in medicine, when physicians rely on the results for making critical treatment decisions. In this work, we provide an entire framework to diagnose ischemic stroke patients incorporating Bayesian uncertainty into the analysis procedure. We present a Bayesian Convolutional Neural Network (CNN) yielding a probability for a stroke lesion on 2D Magnetic Resonance (MR) images with corresponding uncertainty information about the reliability of the prediction. For patient-level diagnoses, different aggregation methods are proposed and evaluated, which combine the individual image-level predictions. Those methods take advantage of the uncertainty in the image predictions and report model uncertainty at the patient level. In a cohort of 511 patients, our Bayesian CNN achieved an accuracy of 95.33% at the image level, a significant improvement of 2% over a non-Bayesian counterpart. The best patient aggregation method yielded an accuracy of 95.89%. Integrating uncertainty information about image predictions into the aggregation models resulted in higher uncertainty measures for false patient classifications, which made it possible to filter out critical patient diagnoses for closer examination by a medical doctor. We therefore recommend using Bayesian approaches not only for improved image-level prediction and uncertainty estimation but also for the detection of uncertain aggregations at the patient level.
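One plausible way to combine image-level predictions into a patient-level diagnosis while exploiting the per-image uncertainty is a precision-weighted mean. This is a sketch of the general idea only; the paper evaluates several aggregation methods, and this is not claimed to be the best-performing one:

```python
def aggregate_patient(probs, variances, eps=1e-6):
    # precision-weighted mean of image-level lesion probabilities: images the
    # model is certain about (low variance) dominate the patient-level score
    weights = [1.0 / (v + eps) for v in variances]
    total = sum(weights)
    p = sum(w * pr for w, pr in zip(weights, probs)) / total
    uncertainty = sum(w * v for w, v in zip(weights, variances)) / total
    return p, uncertainty
```

A confident positive image then outweighs an uncertain negative one, and a high aggregate uncertainty flags the patient for manual review.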
Mapping of tree seedlings is useful for tasks ranging from monitoring natural succession and regeneration to effective silvicultural management. Development of methods that are both accurate and cost-effective is especially important considering the dramatic increase in tree planting that is required globally to mitigate the impacts of climate change. The combination of high-resolution imagery from unmanned aerial vehicles and object detection by convolutional neural networks (CNNs) is one promising approach. However, unbiased assessments of these models and methods to integrate them into geospatial workflows are lacking. In this study, we present a method for rapid, large-scale mapping of young conifer seedlings using CNNs applied to RGB orthomosaic imagery. Importantly, we provide an unbiased assessment of model performance by using two well-characterised trial sites together containing over 30,000 seedlings to assemble datasets with a high level of completeness. Our results showed CNN-based models trained on two sites detected seedlings with sensitivities of 99.5% and 98.8%. False positives due to tall weeds at one site and naturally regenerating seedlings of the same species led to slightly lower precision of 98.5% and 96.7%. A model trained on examples from both sites had 99.4% sensitivity and precision of 97%, showing applicability across sites. Additional testing showed that the CNN model was able to detect 68.7% of obscured seedlings missed during the initial annotation of the imagery but present in the field data. Finally, we demonstrate the potential to use a form of weakly supervised training and a tile-based processing chain to enhance the accuracy and efficiency of CNNs applied to large, high-resolution orthomosaics.
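The tile-based processing chain mentioned above amounts to covering the orthomosaic with overlapping windows so that every seedling appears whole in at least one tile. A minimal sketch; tile size and overlap here are arbitrary illustrative choices:

```python
def tiles(width, height, tile, overlap):
    # top-left corners of overlapping, in-bounds tiles covering the raster;
    # extra tiles are appended so the right and bottom edges are reached
    step = tile - overlap
    xs = list(range(0, max(width - tile, 0) + 1, step))
    ys = list(range(0, max(height - tile, 0) + 1, step))
    if xs[-1] + tile < width:
        xs.append(width - tile)
    if ys[-1] + tile < height:
        ys.append(height - tile)
    return [(x, y) for y in ys for x in xs]
```

Detections from neighbouring tiles can then be merged in the overlap zones, e.g. by non-maximum suppression, before mapping them back to geospatial coordinates.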
Three-dimensional ship localization with only one camera is a challenging task due to the loss of depth information caused by perspective projection. In this paper, we propose a method to measure distances based on the assumption that ships lie on a flat surface. This assumption makes it possible to recover depth from a single image using the principle of inverse perspective. For the 3D ship detection task, we use a hybrid approach that combines image detection with a convolutional neural network, camera geometry and inverse perspective. Furthermore, a novel calculation of object height is introduced. Experiments show that the monocular distance computation compares well with a Velodyne lidar. Due to its robustness, this could be an easy-to-use baseline method for detection tasks in navigation systems.
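Under the flat-surface assumption, the distance recovery reduces to similar triangles in a pinhole camera. A minimal sketch, assuming a level camera with focal length `fy` (in pixels), principal-point row `cy` and height `h` above the water; the paper's full model also accounts for camera orientation:

```python
def ground_distance(v, fy, cy, h):
    # inverse perspective on a flat surface: an image row v below the
    # horizon row cy maps to the ground distance Z = fy * h / (v - cy)
    if v <= cy:
        raise ValueError("row lies at or above the horizon")
    return fy * h / (v - cy)
```

Rows close to the horizon map to large distances, which also explains why depth resolution degrades rapidly for far-away ships.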
The task of the project was to modernise the current laboratory experiment so that the communication between the experimental setup and the laboratory computer no longer runs via converter cards, as before, but via EtherCAT and TwinCAT 3.
The installation of TwinCAT 3 with its extensions and required programs proved extensive and difficult, as the installation instructions show. There were also many sources of error that were not immediately apparent, such as updating to the current MATLAB version. Once the installation is complete, the communication between MATLAB and TwinCAT can be set up relatively easily.
In the project, the communication was first verified with several tests and optimisations were made; for example, the travel limits were adjusted. Difficulties arose in the operation via MATLAB and when MATLAB crashed: if MATLAB stops or crashes, the last value sent remains present at TwinCAT 3, so the actuator would continue to move. This very dangerous situation would be a serious disadvantage compared with the old communication via a converter card. To guarantee a safe stop, the MATLAB status is monitored with a toggle bit via a new TcCOM object; if the value of the bit no longer changes, the system stops safely.
To allow a comparison with the previous master experiment, the plant was examined with the new communication and a suitable controller was designed for it.
The evaluation of the impulse response and the spectrum analysis showed identical results when comparing the interfaces, so the experiments of the laboratory course can be carried out without restrictions. Contrary to the forecasts of the Beckhoff experts, the controller design showed very good results, and the communication via the interface caused no problems.
Limitations appeared, however, in the configurable sampling time, since a sampling time below 2 ms is not possible. A lower sampling time can be set, but the evaluation shows that the interface has problems with sampling times below 2 ms: the computation time increases considerably and the larger number of measurement points cannot be processed correctly. A controller cannot be implemented under these conditions.
The project was thus completed successfully, and apart from the laborious installation, the Beckhoff extensions are very reliable and easy to use. Since the first preliminary investigations were positive, a changeover of the interface can also be considered for further laboratory computers.
Creative industry and cultural tourism destination Lake Constance - a media discourse analysis
(2020)
The following media discourse analysis examines the news coverage of four regional online newspapers on the topics of “creative industries” and “cultural tourism” in the Lake Constance region in the period from 2006 to 2016. The results show that, apart from event-related reporting, there is currently no vibrant media discourse on the topics of “creative industries” and “cultural tourism”. Even though the image of the Lake Constance region is heavily influenced by tourism, “cultural tourism” also plays a secondary role in regional news reporting. Moreover, the discourses do not overlap, and thus no synergies within the local media discourse are formed. This result is relevant for regional tourism development, because cooperation between the “creative industries” and “cultural tourism” creates opportunities such as the expansion of the tourism offer and an extension of the tourist season. To activate unused opportunities at the different destinations of the region, supra-regional visibility of the “creative industries” sector should be developed and the sector's cooperation with local stakeholders of cultural tourism should be promoted.
This paper presents the goals, the service design approach, and the results of the project “Accessible Tourism around Lake Constance”, which is currently run by several universities, industrial partners and selected hotels in Switzerland, Germany and Austria. In the first phase, interviews with persons with disabilities and elderly persons were conducted to identify the barriers and pains faced by tourists who want to spend their holidays in the Lake Constance region, as well as possible assistive technologies that help to overcome these barriers. The analysis of the interviews shows that one third of the pains and barriers are due to missing, insufficient, wrong or inaccessible information about the accessibility of the accommodation, surroundings, and points of interest during the planning phase of the holidays. Digital assistive technologies therefore play a major role in bridging this information gap. In the second phase, so-called Hotel-Living-Labs (HLL) were established in which the identified assistive technologies can be evaluated. Based on these HLLs, an overall service for accessible holidays was designed and developed. In the last phase, this service was implemented based on the HLLs and the identified assistive technologies, and it is currently being field-tested with tourists with disabilities from the three participating countries.
Philosophie & Rhetorik
(2020)
Digital technology and architecture have become inseparable, with new approaches and methodologies not just affecting the workflows and practice of architects but shaping the very character of architecture.
This compendious work offers a wide-ranging orientation to the new landscape with its opportunities, its challenges, and its vast potential.
Shared Field, Divided Field
(2020)
A conceptual framework for indigenous ecotourism projects – a case study in Wayanad, Kerala, India
(2020)
This paper analyses indigenous ecotourism in the Indian district of Wayanad, Kerala, using a conceptual framework based on a PATA 2015 study on indigenous tourism that includes the criteria: human rights, participation, business and ecology. Detailed indicator sets for each criterion are applied to a case study of the Priyadarshini Tea Environs with a qualitative research approach addressing stakeholders from the public sector, non-governmental organisations, academia, tour operators and communities including Adivasi and non-Adivasi. In-depth interviews were supported by participant and non-participant observations. The authors adapted this framework to the needs of the case study and consider that this modified version is a useful tool for academics and practitioners wishing to evaluate and develop indigenous ecotourism projects. The results show that the Adivasi involved in the Priyadarshini Tea Environs project benefit from indigenous ecotourism. But they could profit more if they had more involvement in and control of the whole tourism value chain.
The ageing infrastructure in ports requires regular inspection. This inspection is currently carried out manually by divers who sense by hand the entire underwater infrastructure. This process is cost-intensive as it involves a lot of time and human resources. To overcome these difficulties, we propose to scan the above and underwater port structure with a Multi-Sensor-System and, in a fully automated process, to classify the obtained point cloud into damaged and undamaged zones. We make use of simulated training data to test our approach, since not enough training data with corresponding class labels are available yet. To that aim, we build a rasterised heightfield of a point cloud of a sheet pile wall by cutting it into vertical slices. The distance from each slice to the corresponding line generates the heightfield, which is propagated through a convolutional neural network that detects anomalies. We use the VGG19 deep neural network model pretrained on natural images; this network has 19 layers and is often used for image recognition tasks. We showed that our approach can achieve fully automated, reproducible, quality-controlled damage detection that analyses the whole structure, in contrast to the sample-wise manual method with divers. The mean true positive rate is 0.98, which means that we detected 98 % of the damages in the simulated environment.
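The slicing step can be pictured as a simple rasterisation: points are binned over a grid along the wall, and each cell stores the mean offset from the reference line. This is a toy version under assumed conventions (x along the wall, y the signed offset from the fitted line, z the depth); grid spacing and the fitted line are inputs the paper derives from the data:

```python
import numpy as np

def rasterise_heightfield(points, dx, dz):
    # points: (N, 3) array; returns a 2D grid with the mean offset per cell,
    # NaN marking empty cells
    xi = ((points[:, 0] - points[:, 0].min()) / dx).astype(int)
    zi = ((points[:, 2] - points[:, 2].min()) / dz).astype(int)
    sums = np.zeros((xi.max() + 1, zi.max() + 1))
    counts = np.zeros_like(sums)
    np.add.at(sums, (xi, zi), points[:, 1])   # unbuffered accumulation
    np.add.at(counts, (xi, zi), 1)
    with np.errstate(invalid="ignore"):
        return sums / counts
```

The resulting 2D raster is what can then be fed, like an image, into a convolutional network such as VGG19.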
Modeling a suitable birth density is a challenge when using Bernoulli filters such as the Labeled Multi-Bernoulli (LMB) filter. The birth density of newborn targets is unknown in most applications, but must be given as a prior to the filter. Usually the birth density stays unchanged or is designed based on the measurements from previous time steps.
In this paper, we assume that the true initial state of new objects is normally distributed. The expected value and covariance of the underlying density are unknown parameters. Using the estimated multi-object state of the LMB and the Rauch-Tung-Striebel (RTS) recursion, these parameters are recursively estimated and adapted after a target is detected.
The main contribution of this paper is an algorithm to estimate the parameters of the birth density and its integration into the LMB framework. Monte Carlo simulations are used to evaluate the detection driven adaptive birth density in two scenarios. The approach can also be applied to filters that are able to estimate trajectories.
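The recursive parameter estimation can be illustrated with a Welford-style running mean and covariance over the (smoothed) initial states of confirmed tracks. This is a standalone sketch of the idea only; the paper embeds the estimation in the LMB filter and obtains the initial states via the RTS recursion:

```python
import numpy as np

class AdaptiveBirthDensity:
    # recursively estimates the mean and covariance of the Gaussian
    # birth density from the initial states of detected targets
    def __init__(self, dim):
        self.n = 0
        self.mean = np.zeros(dim)
        self.M2 = np.zeros((dim, dim))

    def update(self, x0):
        # Welford's online update with a new initial state x0
        self.n += 1
        delta = x0 - self.mean
        self.mean += delta / self.n
        self.M2 += np.outer(delta, x0 - self.mean)

    def covariance(self):
        # unbiased sample covariance; undefined for fewer than two samples
        return self.M2 / (self.n - 1) if self.n > 1 else None
```

Each time a new target is confirmed, `update` refines the birth prior that is handed to the filter at the next time step.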
Due to increasing interconnection and the growth of deployed hardware and software, the complexity of companies' enterprise architectures has risen steadily over the years. The emergence of user-friendly information technology (IT) solutions also enables business departments to use IT innovatively. This increases heterogeneity and thus, once again, the complexity of the enterprise architecture. Moreover, this use of IT substantially drives digitalization in companies. This raises the question of whether companies still see any relevance in reducing complexity through IT integration, or whether, against the background of digitalization, this is already old hat. Expert interviews and a qualitative data analysis show that IT integration and digitalization are not disjoint phenomena but influence each other. The results emphasise how differently the term can be understood and that its consistent use is therefore essential. They further show that digitalization is, on the one hand, a driver of IT integration and, on the other hand, changes the options for its implementation. The integration decision itself is complex owing to the multitude of advantages and disadvantages, and departmental IT is rarely an explicit target of IT integration projects. The contribution identifies research needs concerning new technological options for IT integration and the balance between flexibility and IT integration in the enterprise architecture. It highlights that a common language is the basis for IT integration projects and that a culture in which business departments actively participate in IT integration decisions should be the goal of every company. Overall, the analyses show that IT integration is far from old hat; on the contrary, it is highly topical.
Small and medium-sized enterprises (SMEs) are known for their innovative strength and form the backbone of the German economy. However, as studies show, they lag behind capital-market-oriented companies with respect to compliance measures. These studies usually do not consider IT compliance separately. Even though hardly any research results exist on the reasons and motives for missing IT compliance structures in SMEs, the many publications dealing with partial aspects of compliance and SMEs show that there is a need for action. In particular, the current changes under the heading of digitalization point to an increased importance of IT compliance measures, especially in medium-sized companies. This paper therefore uses a literature review to analyse the topics currently discussed in relation to IT compliance and SMEs and to identify the current focal themes.
In this thesis, the recognition problem and the properties of eigenvalues and eigenvectors of matrices which are strictly sign-regular of a given order, i.e., matrices whose minors of a given order have the same strict sign, are considered. The results are extended to matrices which are sign-regular of a given order, i.e., matrices whose minors of a given order have the same sign or are allowed to vanish. As a generalization, a new type of matrices, called oscillatory of a specific order, is introduced, and the properties of this type are investigated. Some applications to dynamical systems are also given.
The expansion of a given multivariate polynomial into Bernstein polynomials is considered. Matrix methods for the calculation of the Bernstein expansion of the product of two polynomials and of the Bernstein expansion of a polynomial from the expansion of one of its partial derivatives are provided which allow also a symbolic computation.
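In the univariate case the Bernstein expansion is a fixed linear map on the monomial coefficients, which is what makes matrix methods natural. A minimal sketch over [0, 1]; the paper's multivariate and symbolic machinery is considerably more general:

```python
from math import comb

def bernstein_coeffs(a):
    # Bernstein coefficients on [0, 1] of p(x) = sum_k a[k] * x**k:
    #   b_i = sum_{k=0}^{i} C(i, k) / C(n, k) * a_k,  n = deg(p)
    n = len(a) - 1
    return [sum(comb(i, k) / comb(n, k) * a[k] for k in range(i + 1))
            for i in range(n + 1)]
```

The coefficients interpolate the polynomial at the endpoints (b_0 = p(0), b_n = p(1)) and enclose its range on [0, 1], which is the basis of Bernstein-based range bounding.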
Totally nonnegative matrices, i.e., matrices having all their minors nonnegative, and matrix intervals with respect to the checkerboard partial order are considered. It is proven that if the two bound matrices of such a matrix interval are totally nonnegative and satisfy certain conditions, then all matrices from this interval are also totally nonnegative and satisfy the same conditions.
Let A = [a_ij] be a real symmetric matrix. If f: (0,∞) → [0,∞) is a Bernstein function, a sufficient condition for the matrix [f(a_ij)] to have only one positive eigenvalue is presented. By using this result, new results for a symmetric matrix with exactly one positive eigenvalue, e.g., properties of its Hadamard powers, are derived.
We propose and apply a requirements engineering approach that focuses on security and privacy properties and takes into account various stakeholder interests. The proposed methodology facilitates the integration of security and privacy by design into the requirements engineering process. Thus, specific, detailed security and privacy requirements can be implemented from the very beginning of a software project. The method is applied to an exemplary application scenario in the logistics industry. The approach includes the application of threat and risk rating methodologies, a technique to derive technical requirements from legal texts, as well as a matching process to avoid duplication and accumulate all essential requirements.
We present source code patterns that are difficult for modern static code analysis tools. Our study comprises 50 different open source projects in both a vulnerable and a fixed version for XSS vulnerabilities reported with CVE IDs over a period of seven years. We used three commercial and two open source static code analysis tools. Based on the reported vulnerabilities we discovered code patterns that appear to be difficult to classify by static analysis. The results show that code analysis tools are helpful, but still have problems with specific source code patterns. These patterns should be a focus in training for developers.
Lean Digitalization
(2020)
Schreiben im Studium
(2020)
Should academic writing be taught as a general academic language, or should the linguistic requirements of the individual disciplines be taken into account? The answer to this question also depends on how strongly the academic languages differ. The present study examined discipline-specific language features on the basis of two corpora of German-language dissertations from business administration (BWL) and mechanical engineering (MB), comparing the use of multi-word units (e.g. 'im vergleich zu den'). The investigation highlights the different use of multi-word units across the disciplines: only fourteen of the 50 most frequent multi-word units were used in both corpora. The results are discussed with regard to the teaching of academic writing. Considerations for the further development of the method are presented, and it is discussed how corpus linguistics could support the teaching of writing at university.
Which testing formats are most appropriate for assessing academic language skills? Christian Krekeler addresses this question for informal language assessment in the classroom. The first part of the chapter examines three formats frequently used in standardised assessment (discrete items, isolated tests of writing, and integrated tests) and discusses their usefulness for classroom assessment, where diagnosis, achievement and learning are important test functions. Because academic language use is cognitively demanding, it is argued that the assessment of academic language skills should include authentic and complex tasks as well as subject-specific content. The resulting tensions between complex, authentic and context-sensitive tasks and the requirements of language assessment methodology are discussed in the second part of the chapter.
For a long time, the use of intermediate products in production has been growing more rapidly than domestic production in most countries. This is a strong indication of increasing interdependency in production. The main purpose of input–output analysis is to study the interdependency of industries in an economy; the term interindustry analysis is also often used. The exchange of intermediate products is therefore a key issue of input–output analysis. For this study we use input–output data that the author prepared for the new ‘Handbook on Supply, Use and Input–Output Tables with Extensions and Applications’ of the United Nations. The supply, use and input–output tables contain separate valuation matrices for trade margins, transport margins, value added tax, other taxes on products and subsidies on products. For the study, two input–output models were developed to evaluate the impact of fuel subsidy and taxation reform on output, gross domestic product, inflation and trade. Six scenarios covering different aspects of the reform are discussed.
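The quantity model behind such impact calculations is the Leontief system x = Ax + f. A minimal two-industry sketch with hypothetical coefficients; the paper's models additionally carry the separate valuation matrices:

```python
import numpy as np

# hypothetical input-coefficient matrix A (intermediate use per unit of
# output) and final-demand vector f for a two-industry economy
A = np.array([[0.2, 0.3],
              [0.1, 0.4]])
f = np.array([100.0, 50.0])

# total output solves x = A x + f, i.e. x = (I - A)^{-1} f
x = np.linalg.solve(np.eye(2) - A, f)
```

A subsidy or tax reform scenario enters such a model by changing the final demand f or, in the companion price model, the value-added coefficients, and the induced changes in output, GDP and prices follow from the same Leontief inverse.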
The effectiveness of different machine learning algorithms is evaluated on a publicly available database of signals derived from wearable devices, with the goal of optimizing human activity recognition and classification. Among the wide range of body signals, we choose two that are easy to acquire simultaneously with widespread commercial devices (e.g. smartwatches) as well as with custom wearable wireless devices designed for sport, healthcare, or clinical purposes: photoplethysmographic (optically detected subcutaneous blood volume) and tri-axis acceleration signals. To this end, two widely used algorithms (decision tree and k-nearest neighbor) were tested, and their performance was compared to that of two recent algorithms (particle Bernstein and a Monte Carlo-based regression), both in terms of accuracy and processing time. A data preprocessing phase was also considered to improve the performance of the machine learning procedures and to reduce the problem size, and a detailed analysis of the compression strategy and results is presented.
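Of the two baseline classifiers, k-nearest neighbour is simple enough to state completely. A toy sketch on a single hypothetical feature (e.g. mean acceleration magnitude per window), not the study's actual feature set:

```python
from collections import Counter

def knn_predict(train, labels, x, k=3):
    # plain k-nearest-neighbour majority vote on feature vectors
    nearest = sorted(range(len(train)),
                     key=lambda i: sum((a - b) ** 2 for a, b in zip(train[i], x)))
    vote = Counter(labels[i] for i in nearest[:k])
    return vote.most_common(1)[0][0]
```

The preprocessing and compression step discussed in the paper matters precisely because the cost of this classifier grows with both the number of training windows and the feature dimension.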
Good sleep is crucial for a healthy life. Unfortunately, its quality often decreases with aging. A common approach to measuring sleep characteristics is based on interviewing the subjects or letting them fill in a daily questionnaire and afterward evaluating the obtained data. However, this method incurs time and personnel costs for the interviewer and for the evaluator of the responses. It would therefore be valuable to collect and evaluate sleep characteristics automatically. To do that, it is necessary to investigate the level of agreement between measurements performed in the traditional way using questionnaires and measurements obtained using electronic monitoring devices. The study presented in this manuscript performs this investigation, comparing sleep characteristics such as "time going to bed", "total time in bed", "total sleep time" and "sleep efficiency". A total of 106 night records of elderly persons (aged 65+) were analyzed. The results achieved so far reveal that the degree of agreement between the two measurement methods varies substantially across characteristics, from a mean difference of 31 minutes for "time going to bed" to 77 minutes for "total sleep time". For this reason, the objective and subjective measuring methods cannot currently be used interchangeably.
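A standard way to quantify such method agreement is the mean difference of paired measurements together with Bland-Altman-style limits of agreement. A sketch with invented paired values (the numbers below are illustrative, not the study's data):

```python
import numpy as np

# Hypothetical paired measurements (minutes) for one sleep characteristic,
# e.g. "total sleep time": questionnaire (subjective) vs. device (objective).
subjective = np.array([420, 390, 450, 400, 430, 410], dtype=float)
objective  = np.array([350, 340, 380, 330, 360, 345], dtype=float)

diff = subjective - objective
mean_diff = diff.mean()            # systematic bias between the two methods
loa = 1.96 * diff.std(ddof=1)      # Bland-Altman limits of agreement

print(f"mean difference: {mean_diff:.1f} min, limits of agreement: +/-{loa:.1f} min")
```

A large mean difference or wide limits of agreement, as reported above for "total sleep time", indicates that one method cannot simply be substituted for the other.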
Polysomnography is the gold standard for sleep studies and provides very accurate results, but its costs (both personnel and material) are quite high. Therefore, the development of a low-cost system for overnight breathing and heartbeat monitoring that provides more comfort while recording the data is a well-motivated challenge. The system proposed in this manuscript is based on resistive pressure sensors installed under the mattress. These sensors can measure the slight pressure changes caused by breathing and heartbeat. The captured signal requires advanced processing, such as filtering and amplification, before the analog signal is ready for the next step. The output signal is then digitized and further processed by an algorithm that performs custom filtering before recognizing breathing and heart rate in real time. The result can be visualized directly. Furthermore, a CSV file is created containing the raw data, timestamps, and unique IDs to facilitate further processing. The achieved results are promising, with an average deviation from a reference device of about 4 bpm.
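The key signal-processing idea, separating the slow breathing component from the faster heartbeat component of one pressure signal, can be sketched with a synthetic signal and a simple spectral analysis. All frequencies, amplitudes, and the sampling rate below are illustrative assumptions, not measured values from the system:

```python
import numpy as np

# Synthetic under-mattress pressure signal: a slow breathing component
# (~15 breaths/min) plus a weaker, faster heartbeat component (~75 bpm).
fs = 50.0                                    # assumed sampling rate in Hz
t = np.arange(0, 32, 1 / fs)
signal = 1.0 * np.sin(2 * np.pi * 0.25 * t) + 0.1 * np.sin(2 * np.pi * 1.25 * t)

# Estimate both rates from the spectrum: breathing below 0.5 Hz,
# heartbeat between 0.7 and 3 Hz (plausible physiological bands).
spectrum = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(len(signal), 1 / fs)

breath_band = (freqs > 0.05) & (freqs < 0.5)
heart_band = (freqs > 0.7) & (freqs < 3.0)
breath_hz = freqs[breath_band][spectrum[breath_band].argmax()]
heart_hz = freqs[heart_band][spectrum[heart_band].argmax()]

print(f"breathing: {breath_hz * 60:.0f}/min, heart rate: {heart_hz * 60:.0f} bpm")
```

A real implementation would of course work on noisy sensor data with analog pre-filtering and amplification, as described above, rather than on clean sinusoids.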
Cardiovascular diseases are directly or indirectly responsible for up to 38.5% of all deaths in Germany and thus represent the most frequent cause of death. At present, heart diseases are mainly discovered by chance during routine visits to the doctor or when acute symptoms occur. However, there is no practical method to proactively detect diseases or abnormalities of the heart in the daily environment and to take preventive measures for the person concerned. Long-term ECG devices, as currently used by physicians, are simply too expensive, impractical, and not widely available for everyday use. This work aims to develop an ECG device suitable for everyday use that can be worn directly on the body. For this purpose, an already existing hardware platform will be analyzed, and the corresponding potential for improvement will be identified. A precise picture of the existing data quality is obtained by metrological examination, and corresponding requirements are defined. Based on these identified optimization potentials, a new ECG device is developed. The revised ECG device is characterized by a high integration density and combines all components directly on one board except the battery and the ECG electrodes. The compact design allows the device to be attached directly to the chest. An integrated microcontroller allows digital signal processing without the need for an additional computer. Central features of the evaluation are a peak detection for detecting R-peaks and a calculation of the current heart rate based on the RR interval. To ensure the validity of the detected R-peaks, a model of the anatomical conditions is used. Thus, unrealistic RR-intervals can be excluded. The wireless interface allows continuous transmission of the calculated heart rate. Following the development of hardware and software, the results are verified, and appropriate conclusions about the data quality are drawn. 
As a result, a very compact and wearable ECG device with different wireless technologies, data storage, and evaluation of RR intervals was developed. Some tests yielded runtimes of up to 24 hours with wireless LAN activated and streaming.
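The central evaluation steps, R-peak detection and heart-rate computation with a plausibility check on the RR intervals, can be sketched as follows. The threshold rule and the 30-220 bpm plausibility bounds are simplifying assumptions for illustration, and the synthetic "ECG" is just spikes on noise, not real electrode data:

```python
import numpy as np

fs = 250                                     # assumed sampling rate in Hz
ecg = np.random.default_rng(1).normal(0, 0.05, fs * 10)   # 10 s of baseline noise
ecg[np.arange(100, fs * 10, 200)] = 1.0      # synthetic R-peaks every 0.8 s (75 bpm)

# R-peak detection: samples above threshold that are local maxima.
threshold = 0.5
peaks = np.flatnonzero((ecg[1:-1] > threshold)
                       & (ecg[1:-1] >= ecg[:-2]) & (ecg[1:-1] >= ecg[2:])) + 1

rr = np.diff(peaks) / fs                     # RR intervals in seconds
# Exclude anatomically implausible intervals (heart rate outside 30-220 bpm),
# in the spirit of the model-based validity check described above.
rr = rr[(rr > 60 / 220) & (rr < 60 / 30)]
heart_rate = 60 / rr.mean()
print(f"{heart_rate:.0f} bpm")
```

On the device itself, this logic runs on the integrated microcontroller so that only the computed heart rate needs to be transmitted over the wireless interface.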
In previous studies, we used a method for detecting stress based exclusively on heart rate and ECG to differentiate between situations such as mental stress, physical activity, relaxation, and rest. As the heart's response to these situations, we observed different behavior in the Root Mean Square of Successive Differences (RMSSD) between heartbeats. This study aims to analyze virtual reality, experienced via a virtual reality headset, as an effective stressor for future work. The RMSSD value is an important marker of the parasympathetic effect on the heart and can provide information about stress. For these measurements, the RR intervals were collected using a chest belt; additional sensors were not used. We conducted experiments with ten subjects who had to drive a simulator for 25 minutes using monitors and 25 minutes using a virtual reality headset. Before starting and after finishing each simulation, the subjects completed a survey describing their mental state. The experiment results show that driving with a virtual reality headset has some influence on the heart rate and RMSSD, but it does not significantly increase the stress of driving.
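The RMSSD itself has a simple closed form: the square root of the mean of the squared successive differences of the RR intervals. A short sketch with invented RR series (the values below are illustrative, not the study's recordings; lower RMSSD loosely corresponds to reduced parasympathetic activity):

```python
import numpy as np

def rmssd(rr_ms):
    """Root Mean Square of Successive Differences of RR intervals (in ms)."""
    rr_ms = np.asarray(rr_ms, dtype=float)
    return np.sqrt(np.mean(np.diff(rr_ms) ** 2))

# Illustrative RR interval series (ms), e.g. from a chest-belt recording:
# higher beat-to-beat variability at rest, a flatter series under stress.
rest   = [800, 830, 790, 845, 805, 840]
stress = [700, 705, 698, 703, 701, 699]

print(round(rmssd(rest), 1), round(rmssd(stress), 1))
```

In the study design above, this value is computed per condition (monitor driving vs. headset driving) and compared across the ten subjects.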
This work is a comparative study of survey tools intended to help developers select a suitable tool for application in an AAL environment. The first step was to identify the basic functionality required of survey tools used with AAL technologies and to compare these tools by their functionality and assignments. The comparison was derived from the data obtained, previous literature studies, and further technical data. A list of requirements was compiled and ordered by relevance to the target application domain. Using an integrated assessment method, a generalized estimate value was calculated, and the result is explained. Finally, the planned application of the selected tool in a running project is described.
This paper presents the implementation of deep learning methods for sleep stage detection using three signals that can be measured non-invasively: the heartbeat signal, the respiratory signal, and the movement signal. Since the signals are measurements taken over time, the problem is treated as time-series classification. The deep learning methods chosen to solve the problem are a convolutional neural network and a long short-term memory network. The input data is structured as a time-series sequence of the mentioned signals representing a 30-second epoch, which is the standard interval for sleep analysis. The records used belong to 23 subjects in total, divided into two subsets: records from 18 subjects were used for training and records from 5 subjects for testing. For detecting four sleep stages, namely REM (Rapid Eye Movement), Wake, Light sleep (Stage 1 and Stage 2), and Deep sleep (Stage 3 and Stage 4), the accuracy of the model is 55% and the F1 score is 44%. For five stages, namely REM, Stage 1, Stage 2, Deep sleep (Stage 3 and 4), and Wake, the model achieves an accuracy of 40% and an F1 score of 37%.
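The epoch structuring described above amounts to reshaping the multichannel recording into a tensor of shape (epochs, timesteps, channels), the input layout that sequence models such as CNNs and LSTMs expect. A sketch with random placeholder data and an assumed common sampling rate (the rate and night length are illustrative, not the study's):

```python
import numpy as np

fs = 10                                      # assumed common sampling rate in Hz
epoch_len = 30 * fs                          # 30 s, the standard sleep-scoring epoch

n_samples = 8 * 3600 * fs                    # one 8-hour night of recording
heartbeat = np.random.default_rng(0).normal(size=n_samples)
respiration = np.random.default_rng(1).normal(size=n_samples)
movement = np.random.default_rng(2).normal(size=n_samples)

# Stack the three channels, trim to a whole number of epochs, and reshape
# to (epochs, timesteps, channels) for a CNN/LSTM classifier.
signals = np.stack([heartbeat, respiration, movement], axis=-1)
epochs = signals[: (n_samples // epoch_len) * epoch_len]
epochs = epochs.reshape(-1, epoch_len, 3)

print(epochs.shape)                          # 960 epochs of 300 samples x 3 channels
```

Each 30-second epoch then receives one sleep-stage label, turning the night into a sequence classification dataset.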
Non-invasive methods are particularly well suited for sleep monitoring at home. The signals most commonly monitored are heart rate and respiratory rate. Ballistocardiography (BCG) is a technique in which the heart rate is measured from the mechanical oscillations of the body during each cardiac cycle. Review articles on the topic have been published recently. In a first approach, this investigation assesses whether the heart rate can be detected by means of BCG. The essential constraints are whether this succeeds when the sensor is positioned under the mattress and low-cost sensors are used.
Sleep apnea is a common sleep disorder with various effects on everyday life; for example, daytime sleepiness has been reported in about 25% of patients with obstructive sleep apnea (OSA). The aim of this work is to develop a system that enables non-invasive detection of sleep apnea in a home environment.
This contribution develops a machine learning method for sleep stage detection. Common methods of sleep analysis are based on polysomnography (PSG). The presented approach is based on signals that can be measured exclusively non-invasively in a home environment. Movement, heartbeat, and respiration signals are comparatively easy to acquire, but this makes the detection of sleep stages more difficult. The signals are structured as time series and divided into epochs. The performance of the machine learning approach is compared against polysomnography and evaluated.
Owing to the economic size of the United States, its economic policy measures, in particular trade policies, have a far-reaching impact on global economic developments. This chapter quantifies the economic consequences of US protectionist trade aspirations. It focuses on trade policy scenarios that have been communicated by the current US administration as potential new trade policies. The chapter draws on the results of a study by the ifo Institute conducted on behalf of the Bertelsmann Foundation. The first simulation considers a withdrawal from the North American Free Trade Agreement. The chapter then illustrates the potential consequences of a “border tax adjustment” policy. It also simulates further measures to protect the US market by assuming an increase in American duties. The chapter presents robust quantitative results for what can be expected if an increasingly protectionist US trade policy were implemented.
This article introduces the Global Sanctions Data Base (GSDB), a new dataset of economic sanctions that covers all bilateral, multilateral, and plurilateral sanctions in the world during the 1950–2016 period across three dimensions: type, political objective, and extent of success. The GSDB features by far the most cases among databases that focus on effective sanctions (i.e., excluding threats) and is particularly useful for the analysis of bilateral international transactional data (such as trade flows). We highlight five important stylized facts: (i) sanctions are increasingly used over time; (ii) European countries are the most frequent users and African countries the most frequent targets; (iii) sanctions are becoming more diverse, with the share of trade sanctions falling and that of financial or travel sanctions rising; (iv) the main objectives of sanctions are increasingly related to democracy or human rights; (v) the success rate of sanctions rose until 1995 and has fallen since then. Using state-of-the-art gravity modeling, we highlight the usefulness of the GSDB in the realm of international trade. Trade sanctions have a negative but heterogeneous effect on trade, which is most pronounced for complete bilateral sanctions, followed by complete export sanctions.
Seamless Learning Platform
(2020)
Bridging the seam in learning between the university context and professional practice is a major challenge, because the relevant actors (including teachers, learners, and company representatives) are separated in time, space, and organization (Milrad et al., 2013). A seamless-learning-based course design built on agile values and methods (including incremental procedures, a focus on learner-centered sessions, and individualized learner feedback) can help to bridge this significant seam. The poster discusses the basic design of such an agile SL concept based on an iterative, incremental approach within a semester cycle of 15 weeks organized into three learning sprints. In addition, it reports on initial teaching experiences of the lecturers from both the university and the industrial environment, as well as on students' learning experiences from the past two years.
Montgomery multiplication is an efficient method for modular arithmetic. Typically, it is used for modular arithmetic over integer rings to avoid the expensive inversion in the modulo reduction. In this work, we consider modular arithmetic over rings of Gaussian integers. Gaussian integers are a subset of the complex numbers whose real and imaginary parts are integers. In many cases, Gaussian integer rings are isomorphic to ordinary integer rings. We demonstrate that the concept of Montgomery multiplication can be extended to Gaussian integers. Due to the independent calculation of the real and imaginary parts, the computational complexity of the multiplication is reduced compared with ordinary integer modular arithmetic. This concept is suitable for coding applications as well as for asymmetric cryptographic systems, such as elliptic curve cryptography or the Rivest-Shamir-Adleman system.
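As background for the extension, here is Montgomery multiplication over ordinary integers, the baseline that the paper generalizes to Gaussian integers. With R = 2^k and gcd(R, n) = 1, operands are kept in the Montgomery domain a·R mod n, so the reduction step needs only shifts and masks instead of a division by n (the small modulus below is for illustration only):

```python
def montgomery(n, k):
    """Build a Montgomery modular multiplier for odd modulus n with R = 2^k > n."""
    R = 1 << k
    n_inv = pow(n, -1, R)          # n^{-1} mod R (Python 3.8+)
    n_prime = (-n_inv) % R         # -n^{-1} mod R, the REDC constant

    def redc(t):
        """Compute t * R^{-1} mod n without dividing by n (requires t < n*R)."""
        m = ((t & (R - 1)) * n_prime) & (R - 1)   # t mod R times n' mod R
        u = (t + m * n) >> k                      # exact division by R
        return u - n if u >= n else u

    def mulmod(a, b):
        a_m, b_m = (a * R) % n, (b * R) % n       # map into the Montgomery domain
        return redc(redc(a_m * b_m))              # Montgomery product, then map back

    return mulmod

mulmod = montgomery(n=97, k=8)
print(mulmod(13, 57))              # equals (13 * 57) % 97
```

The Gaussian integer variant applies the same idea to the real and imaginary parts, which, as noted above, can be reduced independently.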
In this work, we investigate a hybrid decoding approach that combines algebraic hard-input decoding of binary block codes with soft-input decoding. In particular, an acceptance criterion is proposed that determines the reliability of a candidate codeword. For many received codewords, this criterion indicates that the hard-decoding result is sufficiently reliable, so the costly soft-input decoding can be omitted. The proposed acceptance criterion significantly reduces the decoding complexity. For the simulations, we combine algebraic hard-input decoding with ordered statistics decoding, which enables near-maximum-likelihood soft-input decoding for codes of small to medium block lengths.
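To illustrate the idea of such a criterion, here is a generic correlation-based reliability check, not the paper's exact rule: accept the hard-decoding candidate if its correlation with the received soft values is close to the maximum achievable, otherwise fall back to the costly soft-input (e.g. ordered statistics) decoder. The threshold and LLR values are illustrative assumptions:

```python
import numpy as np

def accept_hard_decision(llr, codeword, threshold):
    """Accept the hard-decoded candidate if its normalized correlation with
    the channel LLRs exceeds the threshold (generic sketch, not the paper's rule)."""
    bpsk = 1 - 2 * np.asarray(codeword)          # bit 0 -> +1, bit 1 -> -1
    correlation = np.dot(llr, bpsk)
    # The maximum achievable correlation is sum(|llr|); compare relative to it.
    return bool(correlation / np.abs(llr).sum() >= threshold)

llr = np.array([4.2, -3.1, 5.0, -0.4, 2.2])      # received channel LLRs
hard = (llr < 0).astype(int)                      # hard-decision word
print(accept_hard_decision(llr, hard, threshold=0.9))
```

If the check fails, the decoder would invoke ordered statistics decoding for that word only, which is what yields the average complexity reduction.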
Multi-dimensional spatial modulation is a multiple-input/multiple-output wireless transmission technique that uses only a few active antennas simultaneously. The computational complexity of the optimal maximum-likelihood (ML) detector at the receiver increases rapidly as more transmit antennas or larger modulation orders are employed, so ML detection may be infeasible for higher bit rates. Many suboptimal detection algorithms for spatial modulation use two-stage detection schemes in which the set of active antennas is detected in the first stage and the transmitted symbols in the second stage. Typically, these schemes use the ML strategy for symbol detection. In this work, we consider a suboptimal algorithm for the second detection stage that combines equalization and list decoding. We propose an algorithm for multi-dimensional signal constellations that reduces the search space in the second detection stage through set partitioning. In particular, we derive a set partitioning from the properties of Hurwitz integers. Simulation results demonstrate that the new algorithm achieves near-ML performance while significantly reducing the complexity compared with conventional two-stage detection schemes. Multi-dimensional constellations with suboptimal detection can even outperform conventional signal constellations with ML detection.
Spatial modulation is a low-complexity multiple-input/multiple-output transmission technique. The recently proposed spatial permutation modulation (SPM) extends the concept of spatial modulation: it is a coding approach in which the symbols are dispersed in space and time. In the original proposal of SPM, short repetition codes and permutation codes were used to construct a space-time code. In this paper, we propose a similar coding scheme that combines permutation codes with codes over Gaussian integers. Short codes over Gaussian integers have good distance properties. Furthermore, the code alphabet can be applied directly as the signal constellation, so no mapping is required. Simulation results demonstrate that the proposed coding approach outperforms SPM with repetition codes.
Many resource-constrained systems still rely on symmetric cryptography for verification and authentication. Asymmetric cryptographic systems provide higher security levels but are very computationally intensive. Hence, embedded systems can benefit from hardware assistance, i.e., coprocessors optimized for the required public-key operations. In this work, we propose an elliptic curve cryptographic coprocessor design for resource-constrained systems. Many such coprocessor designs consider only special (Solinas) prime fields, which enable low-complexity modulo arithmetic. Other implementations support arbitrary prime curves using the Montgomery reduction, but typically require more time for the point multiplication. We present a coprocessor design that has low area requirements and enables a trade-off between performance and flexibility. The point multiplication can be performed either with fast arithmetic based on Solinas primes or with a slower but flexible Montgomery modular arithmetic.
Side Channel Attack Resistance of the Elliptic Curve Point Multiplication using Gaussian Integers
(2020)
Elliptic curve cryptography is a cornerstone of embedded security. However, hardware implementations of the elliptic curve point multiplication are prone to side channel attacks. In this work, we present a new key expansion algorithm that improves the resistance against timing and simple power analysis attacks. Furthermore, we consider a new concept for calculating the point multiplication in which the points of the curve are represented as Gaussian integers. Gaussian integers are a subset of the complex numbers whose real and imaginary parts are integers. Since Gaussian integer fields are isomorphic to prime fields, this concept is suitable for many elliptic curves. Representing the key by a Gaussian integer expansion reduces the computational complexity and the memory requirements of a secure hardware implementation.
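The underlying arithmetic can be sketched with plain Gaussian integer operations: multiplication follows the complex product rule, and reduction modulo a Gaussian integer pi = c + di with norm p = c^2 + d^2 uses rounded division, giving residues that form a field isomorphic to GF(p). The tiny modulus below is purely illustrative, not a secure curve field:

```python
def gi_mul(x, y):
    """Multiply two Gaussian integers, represented as (real, imag) tuples."""
    (a, b), (c, d) = x, y
    return (a * c - b * d, a * d + b * c)

def gi_mod(x, m):
    """Reduce x modulo the Gaussian integer m via rounded division:
    q = round(x * conj(m) / norm(m)), remainder r = x - q * m."""
    (a, b), (c, d) = x, m
    n = c * c + d * d                           # norm of the modulus
    q_re = round((a * c + b * d) / n)           # real part of x * conj(m) / n
    q_im = round((b * c - a * d) / n)           # imaginary part
    qm = gi_mul((q_re, q_im), m)
    return (a - qm[0], b - qm[1])

# Example: the modulus 2 + i has norm 5, so the residues form a field
# isomorphic to GF(5). (1 + i)(1 + 2i) = -1 + 3i, which reduces to i.
print(gi_mod(gi_mul((1, 1), (1, 2)), (2, 1)))
```

In the coprocessor context described above, performing the key expansion and point multiplication over such residues is what enables the reduced complexity and memory footprint.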
This distinctive textbook introduces the calculation of internal combustion engines, starting from current questions concerning, for example, vehicle dynamics or engine thermodynamics, and provides the necessary theory along with the solutions. So that newcomers to the field can also succeed, an index lists which theoretical knowledge is needed to solve each exercise and in which section of the book it is derived. All calculations are carried out in Excel.
This edition adds current examples on engine efficiency, the engine map, load point shifting, the WLTC driving cycle, and real driving cycles (RDE).
Contents:
- Driving resistance and engine power
- Fuels and stoichiometry
- Engine power and mean pressure
- Engine thermodynamics
- Engine mechanics
- Vehicle dynamics
- Hybrid and electric vehicles
- Supercharging of internal combustion engines
Target groups:
Students of mechanical engineering and automotive engineering in bachelor's and master's programs; engine engineers dealing with combustion engine questions; graduates of continuing education courses at master craftsman level
The author, Dr.-Ing. Klaus Schreiner, is a professor at HTWG Konstanz and head of the Laboratory for Internal Combustion Engines in the Faculty of Mechanical Engineering. For many years he was his university's didactics officer. In 2008 he received the teaching award of the state of Baden-Württemberg.