One major realm of condition-based maintenance is finding features that reflect the current health state of the asset or component under observation. Most existing approaches are accompanied by high computational costs during the different feature processing phases, making them infeasible in real-world scenarios. In this paper a feature generation method is evaluated that compensates for two problems: (1) storing and handling large amounts of data and (2) computational complexity. Both problems arise, for example, when electromagnetic solenoids are artificially aged and health indicators have to be extracted, or when multiple identical solenoids have to be monitored. To overcome them, Compressed Sensing (CS), a research field that is constantly expanding into new applications, is employed. CS is a data compression technique that allows reconstruction of the original signal from far fewer samples than Shannon-Nyquist dictates, provided certain criteria are met. By applying this method to measured solenoid coil current, raw data vectors can be reduced to a much smaller set of samples that still contains enough information for proper reconstruction. The obtained CS vector is also assumed to contain enough relevant information about solenoid degradation and faults, allowing CS samples to be used as input to fault detection or remaining-useful-life estimation routines. The paper gives results demonstrating compression and reconstruction of coil current measurements and outlines the use of CS samples as condition monitoring data by determining deterioration- and fault-related features. Nevertheless, some unresolved issues remain regarding information loss during the compression stage, the design of the compression method itself, and its influence on diagnostic/prognostic methods.
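To make the compress-then-reconstruct idea concrete, the sketch below compresses a synthetic sparse signal with a random Gaussian measurement matrix and recovers it with orthogonal matching pursuit. The dimensions, sparsity level, and solver choice are illustrative assumptions, not the paper's actual coil-current pipeline.

```python
import numpy as np

def omp(Phi, y, k):
    """Orthogonal matching pursuit: recover a k-sparse x from y = Phi @ x."""
    residual = y.copy()
    support = []
    coef = np.zeros(0)
    for _ in range(k):
        # greedily pick the column most correlated with the residual
        j = int(np.argmax(np.abs(Phi.T @ residual)))
        if j not in support:
            support.append(j)
        # least-squares refit of the coefficients on the current support
        coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        residual = y - Phi[:, support] @ coef
    x_hat = np.zeros(Phi.shape[1])
    x_hat[support] = coef
    return x_hat

rng = np.random.default_rng(0)
n, m, k = 256, 64, 5                            # signal length, measurements, sparsity
x = np.zeros(n)
x[rng.choice(n, size=k, replace=False)] = rng.standard_normal(k)
Phi = rng.standard_normal((m, n)) / np.sqrt(m)  # random measurement matrix
y = Phi @ x                                     # 4x fewer samples than the raw signal
x_hat = omp(Phi, y, k)
```

With m well above the rough threshold of 2k·log(n/k) measurements, reconstruction typically succeeds to numerical precision. Real coil-current signals are sparse only in a transform basis (e.g. wavelets), which adds a dictionary to the measurement model.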
Research on Shadow IT faces a conceptual dilemma in cases where previously "covert" systems developed by business entities (individual users, business workgroups, or business units) are integrated into organizational IT management. These systems become visible, are therefore no longer "in the shadows", and consequently do not fit existing definitions of Shadow IT. Practice shows that some information systems share characteristics of Shadow IT but are created openly in alignment with the IT department. This paper therefore proposes the term "Business-managed IT" to describe "overt" information systems developed or managed by business entities. We distinguish Business-managed IT from Shadow IT by means of illustrative case vignettes. Accordingly, our contribution is to suggest a concept and its delineation against other concepts. In this way, IS researchers interested in IT originating from or maintained by business entities can construct theories with a wider scope of application that are at the same time more specific to practical problems. In addition, value-laden terminology is complemented by a vocabulary that values potentially innovative developments by business entities more adequately. From a practical point of view, the distinction can be used to discuss the distribution of task responsibilities for information systems.
Generating synthetic data is an important topic in the machine learning community. Because accessible data is limited, synthetic data generation plays a significant role in protecting patients' privacy and in providing more data for training models for classification or other machine learning tasks. In this work, several generative adversarial network (GAN) variants are discussed, and an overview is given of how GANs can be used for data generation in different fields. In addition, some common problems of GANs and ways to avoid them are shown, and different methods for evaluating the generated data are described.
The dynamic load from people jumping is one of the governing human-induced actions in the serviceability verification of floors carrying vibration-sensitive equipment. It occurs, for example, in gait laboratories and other research facilities during biomechanical investigations in jump-force diagnostics. The jump types typical in this context, namely the countermovement jump (a two-legged vertical jump with a preparatory downward movement) and the drop jump, were analyzed on the basis of more than 200 force-time measurements, and the corresponding idealized load models were derived from them. On this basis, a simplified verification procedure for estimating the vibration response of a floor was developed. As a result, the acceleration of the floor can be given as a function of the modal mass and of the mode-shape values at the load application point and at the output point.
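The structure of such a single-mode response estimate can be sketched as follows. All numbers are invented illustrative values, and the formula is the generic single-degree-of-freedom impulse response, not the paper's calibrated load model.

```python
import numpy as np

# generic single-mode (SDOF) estimate of a floor's response to a jump impulse;
# all values are illustrative, not the paper's calibrated load model
f_n = 6.0                      # natural frequency of the governing floor mode [Hz]
m_modal = 5000.0               # modal mass [kg]
phi_load, phi_out = 1.0, 0.8   # mode-shape values at load and output points
impulse = 250.0                # effective impulse of the jump landing [N*s]

omega = 2 * np.pi * f_n
v0 = phi_load * impulse / m_modal   # peak modal velocity right after impact
a_peak = phi_out * omega * v0       # peak acceleration at the output point [m/s^2]
```

The estimate shows the dependence stated in the abstract: the response scales inversely with the modal mass and linearly with the mode-shape values at the load and output points.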
For some years, universities in countries where the first language is not English have been choosing English as the medium of instruction (EMI). In German universities, instruction in German is still the dominant form, which makes university study in Germany less accessible to international students. To attract international students and to improve career prospects for home students, many German universities offer programmes taught in English or in a combination of German and English. It is widely expected that the implementation of EMI programmes leads to improvements in English language proficiency (ELP). However, it has emerged that substantial gains in ELP in EMI programmes will only occur as a result of content and language integrated learning.
Language requirements in different subjects, teaching concepts, and course organization
(2016)
The concept of "everyday academic language" ("alltägliche Wissenschaftssprache"), introduced by Ehlich, has strongly shaped the discussion of language teaching for academic study. The abbreviation "AWS" has also become quite common. It refers to linguistic elements that, apart from subject-specific terminology, are largely similar across disciplines. It is assumed that these elements are used in several disciplines, which provides an argument for cross-disciplinary language teaching. This contribution subjects the AWS concept to a fresh examination.
To this end, the following questions are addressed:
1. Is a cross-disciplinary approach helpful for course organization? Part of the AWS concept's appeal lies in its practical value for planning language courses, since it is often difficult to offer separate language courses for different disciplines.
2. Is the AWS concept accurate from a linguistic point of view? On the question of how much the linguistic demands of individual academic disciplines differ, the literature offers diverging views. Some research approaches and findings are outlined in this contribution.
3. Is the AWS concept convincing as a basis for teaching? A look at teaching materials illustrates how academic language can be taught across disciplines and which problems arise.
The considerations presented here ultimately lead to an appeal to take the differing linguistic requirements of the various subjects into account wherever possible.
When integrating task-based learning into LSP courses, assessment procedures have to reflect the communicative goals of such tasks. However, performances on authentic and complex tasks are often difficult to score, because they involve not only specific language ability but also field-specific knowledge. Moreover, the emphasis of tasks on meaning suggests that the communicative outcome should be regarded as the main criterion of success. This paper focuses on task design and scoring. It is argued that tasks have to be adapted to assessment purposes. However, to avoid a misalignment between teaching/learning and assessment, they should comprise the main features of LSP tasks in the sense of task-based language learning.
Assessment Literacy
(2015)
Scoring LSP tasks
(2016)
I4 Production
(2017)
Smart factory and education
(2016)
The introduction of cyber-physical systems into production companies is profoundly changing working conditions and processes as well as business models. In practice, a growing discrepancy between large companies and small and medium-sized enterprises can be observed. To bridge that gap, a university smart factory is introduced that gives these companies a platform for trials, employee training, and access to consultancy. To realize the smart factory, a highly integrated, open, and standardized automation concept is presented, spanning single devices and production lines up to a higher-level automation system that sustains a community and new business models.
Lernfabrik (Learning Factory)
(2016)
The introduction of cyber-physical systems into manufacturing will profoundly change working conditions and processes as well as business models. In practice, a growing discrepancy between large enterprises and SMEs can be observed. The learning factory presented in the following is intended to bridge exactly this discrepancy: it offers companies a platform for experimentation, provides training opportunities for students and employees, and makes consultancy services available. For its realization, an integrated, open, and standardized automation concept is presented that spans single devices and entire production lines up to higher-level automation systems, provides a community, and serves the implementation of new business models.
The introduction of cyber-physical systems in production is changing working conditions and processes as well as business models. In practice, a growing discrepancy between large companies and small and medium-sized enterprises can be observed. To bridge this discrepancy, a model factory is presented that offers companies a platform for experimentation, provides training opportunities for students and employees, and makes consultancy services available. Using a decentralized demonstration factory as an example, a highly integrated, open, and standardized automation concept is presented, from the shop floor up to a business ecosystem. A search engine serves as the basis for a manufacturing execution system (MES). Information is processed according to the pull principle, which is already applied in lean-optimized processes.
Industry 4.0
(2017)
The development of a new product can be accelerated by crowdsourcing. Engineers compete to provide the best solution for a product requirement submitted to an online crowdsourcing platform, and the engineer who submits the best solution receives a financial reward. This approach has proven to be three times faster than the conventional one. However, the crowdsourcing process is usually not transparent to a new user, and the risk of executing a new product development project is not easy to calculate [1, 2]. We developed a method, InnoCrowd, to address this problem, which a new user can apply when planning a new product development project. The system uses AI concepts to generate a knowledge base representing histories of successful product development projects and uses this knowledge to determine the qualitative and quantitative risks of a new project. This paper describes the new method, the InnoCrowd design, and the results of a validation experiment based on data from a current crowdsourcing platform. Finally, we compare InnoCrowd to related methods and systems in terms of design and benefits.
Product development and product manufacturing are entering a new era, namely an era in which engineering tasks are executed in collaboration with all involved parties. Engineers and potential customers work together, mainly in a virtual world, on the design and realization of the product. We address this so-called "crowdsourcing" trend in the automotive industry, which lowers cost and accelerates the production of new cars. Current practice and prior studies fail to handle data management and collaboration aspects in sufficient detail. We propose a PLM-based crowdsourcing platform that applies best practices to the established approach and adapts it with new methods for handling specific requirements. Our work provides a basis for establishing an improved collaboration platform to support a gig economy in the automotive industry.
First-year students in the technical degree programs of universities of applied sciences have very heterogeneous prior knowledge not only in mathematics but also in physics. Although these subjects are of great importance for a fundamental understanding of technical processes, instruction in these areas cannot start from zero, given the limited time slots available for them during the course of study. For mathematics, the cosh working group therefore compiled a catalogue of minimum requirements, published in 2014. It describes the knowledge and skills that first-year students need to successfully begin a WiMINT degree program (economics, mathematics, computer science, natural sciences, technology) at a university. In the meantime, a working group of physicists at universities in Baden-Württemberg has formed with the goal of creating an analogous catalogue of minimum requirements for physics. The current state of this work is presented here.
Technology-based ventures provide an important route for successful technology transfer [1], [2]. Their founders are supported in successful technology commercialization by innovation intermediaries [3]. Accordingly, the performance of an innovation system depends, at least to some extent, on the efficiency of these intermediaries, in terms of the impact of their scarce resources on the survival and growth of technology-based ventures. To increase their efficiency, intermediaries typically optimize their "intake" by requesting a formal business plan, as a hygiene factor, on which to base their selection [4]-[7]. Some scholars therefore argue that written business plans show significant distortion, as they are produced only to attract support from innovation intermediaries [6], [8]. Accordingly, they rarely serve these addressees as a source of information for analyzing the strengths and weaknesses of ventures in order to derive actionable conclusions and support ventures more effectively [9], [10]. Addressees search for different indicators in business plans for their evaluation [11], yet the descriptions of these indicators offer little empirical proof of the performance of technology-based ventures [8], [12]. This gap is striking because the most frequently produced artifact of early-stage technology ventures is, at the same time, the written business plan [10], [13]. This paper addresses the gap by conceptualizing the transaction relations described in the written business plan as a means of working around the inevitable inaccuracies and uncertainties that limit the explanatory abilities [14] of the snapshot model [10] presented by a business plan. Using a qualitative content analysis, we derive from the descriptions of transaction relations in a written business plan valid indicators for the maturity of the venture's value network in different dimensions [15].
To this end, this paper presents the findings of a pre-study conducted on a sample of forty business plans from an overall population of 800 business plans in a longitudinal sample from one of Europe's most active innovation systems, the State of Baden-Württemberg. Such findings may be used by innovation intermediaries to enhance their efficiency, enabling them not only to derive individual support strategies for business acceleration but also to analyze the impact of support measures by reliably monitoring maturity progress in venture activities.
Prior quantitative research identified in the text of technology-based ventures' business plans distinctive performance patterns of evolving business models. Accordingly, interactions with customers, financiers, and people and the patenting strategy's status evolved and served as indicators of early-stage tech ventures' performance. With longitudinal data from five venture cases, this research sheds light on the evolving business model by validating the performance patterns, and elucidating how and why the ventures' business models evolved. Based on a generic systems theory framework for the indicators, the explanatory case studies re-contextualize the performance patterns taken from the snapshot perspective of business plans to the longitudinal perspective of technology-based ventures' life-cycle. This research confirms the relation of business model patterns of digital and non-digital ventures to the performance groups of failure, survival, or success and suggests a broader systems perspective for further research.
The validity of the business model is a key indicator for buying into ventures at the early stage. Business models of early-stage ventures are still of limited validity; the ventures validate them, while developing the business over the progressing stages of the business life-cycle, by building transaction relationships to the surrounding value network. In prior research, we developed a research design, based on existing business innovation proposals (one-pagers, pitch decks, business plans), that is assumed to evaluate the status of business model validation. The core hypothesis of the research design is that transaction relations represent a strong anchor between the business model and business reality, thus providing information on the business model's validity. In this research, we test this hypothesis by designing and analyzing a survey directed at founders taking part in a business plan competition. We compared the relationships described in the submitted business plans to the relations explicitly stated in the follow-up questionnaire. The described relations to customers, investors, and people (human resources) match the relationships expressed in the questionnaires quite well. A significant disagreement, however, exists in the relationships to suppliers. We conclude that there is still a theoretical and empirical gap that leads to disagreement between business plans and reality in the group of suppliers.
The business plan is one of the artifacts most frequently available to innovation intermediaries from technology-based ventures' presentations in their early stages [1]–[4]. Agreement on evaluations of venturing projects based on business plans depends highly on the individual perspective of the readers [5], [6]. One reason is that little empirical proof exists for descriptions in business plans that suggest survival of early-stage technology ventures [7]–[9]. We identified descriptions of transaction relations [10]–[13] as an anchor tying the snapshot model of the business plan to business reality [13]. In the early stage, surviving ventures build transaction relations to human resources, financial resources, and suppliers on the input side, and to customers on the output side of the business, towards a stronger ego-centric value network [10]–[13]. We conceptualized a multidimensional measurement instrument that evaluates the maturity of this ego-centric value network based on the transaction relations of different strength levels described in business plans of early-stage technology ventures [13]. In this paper, the research design and the instrument are purified to achieve high agreement in the evaluation of business plans [14]–[16]. As a result, we present an overall research design that can reach acceptable quality for quantitative research. The paper thus contributes to the literature on business analysis in the early stage of technology-based ventures and to the research technique of content analysis.
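Inter-coder agreement of the kind the purified instrument targets is conventionally quantified with a chance-corrected statistic such as Cohen's kappa. The sketch below is a generic implementation with invented codings, not the paper's actual data or agreement measure.

```python
import numpy as np

def cohens_kappa(r1, r2):
    """Cohen's kappa for two raters' categorical codings of the same items."""
    r1, r2 = np.asarray(r1), np.asarray(r2)
    cats = np.union1d(r1, r2)
    po = np.mean(r1 == r2)                                        # observed agreement
    pe = sum(np.mean(r1 == c) * np.mean(r2 == c) for c in cats)   # chance agreement
    return (po - pe) / (1 - pe)

# two raters coding the strength (0/1/2) of ten transaction relations (illustrative)
a = [2, 1, 0, 2, 1, 1, 0, 2, 2, 1]
b = [2, 1, 0, 1, 1, 1, 0, 2, 2, 0]
kappa = cohens_kappa(a, b)
```

Here observed agreement is 0.8 and chance agreement is 0.34, giving kappa of about 0.70, which common benchmarks would call substantial agreement.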
Entrepreneurial employees
(2019)
Volatile markets and accelerating innovation cycles increasingly force established companies to adopt alternative innovation strategies such as entrepreneurship. Due to the key role entrepreneurial employees play in strengthening a company's abilities for innovation and change, various concepts have emerged, such as corporate entrepreneurship and intrapreneurship. While the extant literature has increasingly examined specific issues of entrepreneurial employees, an overall view is still lacking. The purpose of this paper is therefore to structure current research on entrepreneurial employees by conducting a broad systematic literature review. The resulting research streams contribute to a clearer justification for future research and are a first step towards a comprehensive research view of intrapreneurship.
Research credits corporate entrepreneurship (CE) with enabling established companies to create new types of innovation. Scholars have focused on the organizational design of CE activities, proposing specific organizational units. These semi-autonomous units create a tense management situation between the core organization and its CE activities. Management and organization research considers control a key managerial function that can help here. However, control has received limited research attention regarding CE units, leaving design issues for the appropriate control of CE units unanswered. In this study, we link management control and CE to illustrate how control is understood in the context of CE. For this, we scanned the CE literature to identify underlying attributes and characteristics that allow control to be specified for CE. In a first round, we identified 11 attributes that describe control for CE activities and derived future research paths.
Corporate entrepreneurship (CE) has now evolved into an imperative innovation practice of established companies. Despite organizational design models for CE activities and companies' frequent initiation of new activities, effectively managing them remains a challenging endeavor, which results in disappointment about the outcomes of CE and its early termination. We assume that specific types of goals for CE are one element of this unresolved management issue. While both practice and the literature address goals in different contexts, no uniform picture has emerged so far. Although goals are commonly used to categorize CE activities, they seldom seem to be the core subject of investigation. Based on this preliminary analysis and consolidation, we put the goals of CE in focus. In a systematic literature review, we reveal aspects of goals to unmask the different types of goals and their underlying dimensions and characteristics. Our review contributes to a better understanding of goals by (1) organizing the relevant literature on goals of CE in a specific classification process, (2) describing dimensions and attributes for a systematic classification of CE goals, and (3) providing a framework showing differences of goals in the CE context. We conclude with a discussion and hints at future research paths.
Guiding through the Fog
(2021)
Corporate entrepreneurship (CE) programs are formalized efforts to realize entrepreneurial activities in established companies. Despite the growing and evolving landscape of CE programs, effectively managing them remains a challenging endeavor, which results in disappointing outcomes and oftentimes leads to the early termination of such programs. We unmask the differences in the goal setting of CE programs and highlight that setting appropriate goals is imperative for their desired outcomes. In practice, companies seem to struggle with goal setting, and scholars have not yet fully solved the puzzle of goal setting in the context of CE programs either. We therefore set out to explore the current state of goal setting in the context of CE programs, building upon 61 semi-structured interviews with CE program executives from cross-industry companies of different sizes. Our study contributes to a better understanding of goal setting in the context of CE programs by (1) characterizing the goal setting of CE programs based on goal attributes and goal types and (2) identifying differences among the goal settings of CE programs. We provide implications for practice for a more effective management of CE programs and conclude with a discussion of future research on the impact of the different goal settings.
We present an analysis of how to determine security requirements for software that controls routing decisions in the distribution of discrete physical goods. Requirements are derived from stakeholder interests and threat scenarios. Three deployment scenarios are discussed: cloud and hybrid deployment as well as on-premise installation for legacy sites.
Deep neural networks have been successfully applied to problems such as image segmentation, image super-resolution, colorization, and image inpainting. In this work we propose the use of convolutional neural networks (CNNs) for image inpainting of large regions in high-resolution textures. Due to limited computational resources, processing high-resolution images with neural networks is still an open problem. Existing methods separate the inpainting of global structure from the transfer of details, which leads to blurry results and a loss of global coherence in the detail transfer step. Building on advances in texture synthesis using CNNs, we propose patch-based image inpainting with a single network topology that is able to optimize for global as well as detail texture statistics. Our method is capable of filling large inpainting regions, oftentimes exceeding the quality of comparable methods on high-resolution images (2048x2048 px). For reference patch look-up we propose to use the same summary statistics that are used in the inpainting process.
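The summary statistics used in CNN-based texture synthesis are Gram matrices of feature activations. A minimal sketch of Gram-based reference-patch look-up follows, with random stand-in feature maps, since the actual network features are beyond the scope of this abstract.

```python
import numpy as np

def gram(features):
    """Gram matrix of a feature map of shape (C, H, W): the texture summary statistic."""
    c, h, w = features.shape
    f = features.reshape(c, h * w)
    return f @ f.T / (h * w)

# patch look-up: choose the candidate patch whose Gram matrix is closest to the
# target region's statistics (features here are random placeholders)
rng = np.random.default_rng(0)
target = rng.standard_normal((8, 16, 16))        # feature map of the region to fill
patches = rng.standard_normal((5, 8, 16, 16))    # feature maps of candidate patches
g_t = gram(target)
dists = [np.linalg.norm(gram(p) - g_t) for p in patches]
best = int(np.argmin(dists))                     # index of the best-matching patch
```

Because the Gram matrix discards spatial arrangement and keeps only feature co-occurrence, the same statistic can serve both as an optimization target and as a cheap similarity measure for look-up.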
In this paper we present a method using deep learning to compute parametrizations for B-spline curve approximation. Existing methods consider the computation of parametric values and a knot vector as separate problems. We propose to train interdependent deep neural networks to predict parametric values and knots. We show that it is possible to include B-spline curve approximation directly into the neural network architecture. The resulting parametrizations yield tight approximations and are able to outperform state-of-the-art methods.
Deep 3D
(2017)
In the reverse engineering process, parts of point clouds have to be classified with the correct type of geometric primitive. Features based on different geometric properties such as point relations, normals, and curvature information can be used to train classifiers like Support Vector Machines (SVMs). These geometric features are estimated in the local neighborhood of a point of the point cloud. The multitude of different features makes an in-depth comparison necessary. In this work we evaluate 23 features for the classification of geometric primitives in point clouds. Their performance is evaluated with SVMs used to classify geometric primitives in simulated and real laser-scanned point clouds. We also introduce a normalization of point cloud density to improve classification generalization.
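The feature-then-SVM pipeline can be illustrated with the classic eigenvalue-based neighborhood features (linearity, planarity, sphericity). The synthetic linear, planar, and volumetric point sets below are a toy stand-in for the paper's 23 features and its simulated and scanned data.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors
from sklearn.svm import SVC

def eigen_features(points, k=20):
    """Linearity, planarity, and sphericity from the covariance eigenvalues
    of each point's k-nearest-neighbor patch."""
    idx = NearestNeighbors(n_neighbors=k).fit(points).kneighbors(points)[1]
    feats = []
    for nb in idx:
        lam = np.sort(np.linalg.eigvalsh(np.cov(points[nb].T)))[::-1]  # l1 >= l2 >= l3
        feats.append([(lam[0] - lam[1]) / lam[0],   # linearity
                      (lam[1] - lam[2]) / lam[0],   # planarity
                      lam[2] / lam[0]])             # sphericity
    return np.array(feats)

rng = np.random.default_rng(0)
noise = lambda: 0.01 * rng.standard_normal((400, 3))
line = np.outer(rng.uniform(-1, 1, 400), [1.0, 1.0, 1.0]) + noise()
plane = np.column_stack([rng.uniform(-1, 1, (400, 2)), np.zeros(400)]) + noise()
blob = rng.standard_normal((400, 3))               # volumetric scatter

X = np.vstack([eigen_features(p) for p in (line, plane, blob)])
y = np.repeat([0, 1, 2], 400)
clf = SVC(kernel="rbf").fit(X, y)                  # primitive-type classifier
```

Each feature is a function of the local covariance spectrum only, so the same pipeline transfers directly to laser-scanned data once the neighborhood size and density normalization are chosen.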
Creating cages that enclose a 3D model is part of many preprocessing pipelines in computational geometry. A cage of lower resolution than the original model is of special interest when performing an operation on the original model directly would be too costly: the desired operation can be applied to the cage first and then transferred to the enclosed model. In this paper the authors present a short survey of recent and well-known methods for cage computation, giving the reader an insight into common methods and their differences.
Cultural Mapping 4.0
(2021)
The Lake Constance region is one of the oldest cultural landscapes in Europe. Its regional cultural identity contributes to the region's image and to the population's identification with it. Nevertheless, a holistic examination of what constitutes the cultural identity of the entire Lake Constance region is lacking. The research project "CultMap4.0" therefore aims to investigate, from a spatial perspective, the interplay between regional identity, culture, and mobility. Besides the guiding question of what cultural identity can be and achieve in a cross-border and diverse region such as Lake Constance, the following research questions are examined within four thematic areas: How do the local population, companies, and tourists perceive the regional cultural identity (self-image) and the image (external perception) of the Lake Constance region? How can the cultural mapping approach be digitally transformed through participatory mapping, and how can cultural identity and mobility be visualized with digital storytelling in story maps? What contribution can "Cultural Mapping 4.0" make as a participatory tool for regional planning and for communication with stakeholders in the Lake Constance region and elsewhere? The resulting story maps, interactive web content combining texts, maps, and other media, on the cultural identity of the Lake Constance region are to be published on the platform "Cultural Mapping Project Lake Constance" so that stakeholders can use them as a planning and decision-making tool and for place marketing.
Due to the rising need for palliative care in Russia, it is crucial to provide timely and high-quality solutions for patients, relatives, and caregivers. A methodology for the remote monitoring of patients in need of palliative care will be developed, together with the requirements for a hardware-software system for monitoring patients' health at home.
Cultural Mapping 4.0
(2021)
Cultural mapping aims to capture and visualize tangible and intangible cultural assets. This extended abstract proposes the consequent extension of analogue forms of cultural mapping using digital technologies, and its contribution is two-fold. First, the necessary theoretical basis is provided by a literature review of the still-young field of cultural mapping and the complementary disciplines of participatory mapping and digital story-mapping. Second, we propose a digitally enhanced Cultural Mapping 4.0 vision based on a case study from an ongoing research project in the Lake Constance region. Digital participatory mapping approaches are applied to capture data, and story-mapping, a spatial form of digital storytelling, is used to validate and disseminate the results.
Conducting a surveillance impact assessment is the first step toward solving the "Who monitors the monitor?" problem. Since the impacts of surveillance on the different dimensions of privacy and society are constantly changing, measuring compliance and impact through metrics can ensure that the negative consequences are minimized to acceptable levels. To develop metrics for surveillance impact assessment systematically, we follow the top-down process of the Goal/Question/Metric paradigm: 1) establish goals through the social impact model, 2) generate questions through the dimensions of surveillance activities, and 3) develop metrics through the scales of measure. With respect to the three factors of impact magnitude (the strength, the immediacy, and the number of sources), we generate questions concerning surveillance activities (by whom, for whom, why, when, where, of what, and how) and develop metrics with the scales of measure (nominal, ordinal, interval, and ratio). In addition to compliance assessment and impact assessment, the developed metrics have the potential to address the power imbalance problem through sousveillance, which employs surveillance to control and redirect impact exposures.
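The goal-question-metric hierarchy described above maps directly onto a simple tree structure. The example branch and all names below are illustrative, not taken from the paper.

```python
from dataclasses import dataclass, field

@dataclass
class Metric:
    name: str
    scale: str          # "nominal" | "ordinal" | "interval" | "ratio"

@dataclass
class Question:
    text: str
    metrics: list = field(default_factory=list)

@dataclass
class Goal:
    purpose: str
    questions: list = field(default_factory=list)

# one illustrative branch of a GQM tree for a surveillance impact assessment
goal = Goal("Minimize the negative impact of a camera deployment")
where = Question("Where does the surveillance take place?")
where.metrics.append(Metric("type of location", "nominal"))
where.metrics.append(Metric("covered area in square meters", "ratio"))
goal.questions.append(where)
```

Each of the seven question dimensions (by whom, for whom, why, when, where, of what, how) would form such a branch, with metrics attached at the appropriate scale of measure.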
It is widely recognized that sustainability is a new challenge for many manufacturing companies. In this paper, we tackle this issue by presenting an approach that deals with material and substance compliance within Product Lifecycle Management in a complex value chain. Our analysis explains why, how, and when sustainable manufacturing arises, and it identifies, quantifies, and evaluates the environmental impact of a new product. We propose (I) a Life Cycle Assessment (LCA) tool and (II) a model to validate this approach and evaluate the risk of noncompliance in the supply chain. Our LCA approach provides comprehensive information on the environmental impacts of a product.
Product and material cycles run in parallel and intersect, which makes it challenging to integrate the material selection process across Product Lifecycle Management and to integrate LCA with PLM. We provide only a foundation; further research in systems engineering is necessary. LCA is sensitive to data quality: outsourcing production and problems in supplier cooperation can result in material mismatches (such as mismatched properties or compositions) in the production process, which may lead to misleading LCA results. This paper also describes research challenges using risk-based due diligence.
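The quantification core of an LCA of the kind described above is an aggregation of the inventory against characterization factors. A minimal sketch, with an invented inventory and IPCC AR5 GWP100 factors:

```python
import numpy as np

# illustrative life-cycle inventory for one product unit: kg emitted per substance
substances = ["CO2", "CH4", "N2O"]
inventory = np.array([12.0, 0.30, 0.02])

# GWP100 characterization factors in kg CO2-eq per kg (IPCC AR5 values)
gwp100 = np.array([1.0, 28.0, 265.0])

# single-score climate impact in kg CO2-eq: inventory weighted by factors
climate_impact = inventory @ gwp100
```

A full LCA repeats this weighting per impact category (acidification, eutrophication, etc.) and per life-cycle stage, which is where the data-quality sensitivity noted above enters: a mismatched material silently shifts the inventory vector.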