In several organizations, business workgroups autonomously implement information technology (IT) outside the purview of the IT department. This phenomenon, which challenges norms of IT controllability, is referred to as shadow IT: a type of workaround that evolves from nontransparent and unapproved end-user computing (EUC). This report describes shadow IT based on case studies of three companies and investigates its management. In 62% of cases, companies decided to reengineer detected instances or reallocate related subtasks to their IT department. Considerations of risks and transaction cost economics with regard to specificity, uncertainty, and scope explain these actions and the resulting coordination of IT responsibilities between the business workgroups and IT departments. This turns shadow IT into controlled business-managed IT activities and enhances EUC management. The results contribute to the governance of IT task responsibilities and provide a way to formalize the role of workarounds in business workgroups.
Traditional Western philosophy, cognitive science and traditional HCI frameworks approach the term digital and its implications with an implicit dualism (nature/culture, theory/practice, body/mind, human/machine). What lies between is a feature of our postmodern times, in which different states, conditions or positions merge and co-exist in a new, hybrid reality, a “continuous beta” (Mühlenbeck & Skibicki, 2007) version of becoming. Post-digitality involves the physical dimensions of spatio-temporal engagements. This new ontological paradigm reconceptualizes digital technology through the experience of the human body and its senses, thus emphasizing form-taking, situational engagement and practice rather than symbolic, disembodied rationality. This raises two questions in particular: how to encourage curiosity, playfulness, serendipity, emergence, discourse and collectivity? How to construct working methods without foregrounding and dividing the subject into an individual that already takes position? This paper briefly outlines the rhizomatic framework that I developed within my PhD research. This attempts to overcome two prevailing tendencies: first, the one-sided view of scientific approaches to knowledge acquisition and the purely application-oriented handling of materials, technologies and machines; second, the distanced perception of the world. In contrast, my work involves project-driven alchemic curiosity and doing research through artistic design practice. This means thinking through materials, technologies and machinic interactions. Now, at the end of this PhD journey, 10 interdisciplinary projects have emerged from this ontological queer-paradigm that is post-digital–crafting 4.0. Below I illustrate this approach and its outcomes.
A central task of electrocardiographic examination is to increase the reliability of diagnosing the condition of the heart. Within this task, an important direction is the solution of the inverse problem of electrocardiography, based on the processing of electrocardiographic signals from multichannel cardio leads at known electrode coordinates in these leads (Titomir et al., Noninvasive Electrocardiotopography, 2003; Macfarlane et al., Comprehensive Electrocardiology, 2nd ed., Chapter 9, 2011).
Many countries offer state credit guarantees to support credit-constrained exporters. The policy instrument is commonly justified by governments as a means of mitigating adverse outcomes of financial market frictions for exporting firms. Accumulated returns to the German state credit guarantee scheme deriving from risk-compensating premia have outweighed accumulated losses over the past 60 years. Why do private financial agents not step in and provide insurance given that the state-run program yields positive returns? We argue that costs of risk diversification, liquidity management, and coordination among creditors limit the ability of private financial agents to offer comparable insurance products. Moreover, we suggest that the government’s greater effectiveness in recovering claims in foreign countries endows the state with a cost advantage in dealing with the risks involved in large export projects. We test these hypotheses using monthly firm-level data combined with official transaction-level data on covered exports of German firms and find suggestive evidence that positive effects on trade are due to mitigated financial constraints: State credit guarantees benefit firms that are dependent on external finance, if the value at risk which they seek to cover is large, and at times when refinancing conditions on the private financial market are tight.
This article introduces the Global Sanctions Data Base (GSDB), a new dataset of economic sanctions that covers all bilateral, multilateral, and plurilateral sanctions in the world during the 1950–2016 period across three dimensions: type, political objective, and extent of success. The GSDB features by far the most cases among databases that focus on effective sanctions (i.e., excluding threats) and is particularly useful for the analysis of bilateral international transactional data (such as trade flows). We highlight five important stylized facts: (i) sanctions are increasingly used over time; (ii) European countries are the most frequent users and African countries the most frequent targets; (iii) sanctions are becoming more diverse, with the share of trade sanctions falling and that of financial or travel sanctions rising; (iv) the main objectives of sanctions are increasingly related to democracy or human rights; (v) the success rate of sanctions rose until 1995 and has fallen since then. Using state-of-the-art gravity modeling, we highlight the usefulness of the GSDB in the realm of international trade. Trade sanctions have a negative but heterogeneous effect on trade, which is most pronounced for complete bilateral sanctions, followed by complete export sanctions.
Multi-faceted stresses of social, environmental, and economic nature are increasingly challenging the existence and sustainability of our societies. Cities in particular are disproportionately threatened by global issues such as climate change, urbanization, population growth, and air pollution. In addition, urban space is often too limited to effectively develop sustainable, nature-based solutions while accommodating growing populations. This research aims to provide new methodologies by proposing lightweight green bridges in inner-city areas as an effective land value capture mechanism. Geometry analysis was performed using geospatial and remote sensing data to identify geometrically feasible locations for green bridges. A multi-criteria decision analysis was applied to identify suitable locations for green bridges, investigating Central European urban centers with a focus on German cities as representative examples. A cost-benefit analysis was performed to assess the economic feasibility using a case study. The geometry analysis identified 3249 locations in German cities where implementing a green bridge is geometrically feasible, and the sample locations were validated for their implementation potential. Multi-criteria decision analysis was used to select 287 sites that fall into the highest suitability class based on several criteria. The cost-benefit analysis of the case study showed that the market value of the property alone can easily outweigh the capital and maintenance costs of a green bridge, while the indirect (monetary) benefits of the green space continue to increase the overall value of the green bridge property, including its neighborhood, over time. Hence, we strongly recommend lightweight green bridges as financially sustainable, nature-based solutions in cities worldwide.
“Crowd contamination”?
(2023)
Misconduct allegations have been found to not only affect the alleged firm but also other, unalleged firms in form of reputational and financial spillover effects. It has remained unexplored, however, how the number of prior allegations against other firms matters for an individual firm currently facing an allegation. Building on behavioral decision theory, we argue that the relationship between allegation prevalence among other firms and investor reaction to a focal allegation is inverted U-shaped. The inverted U-shaped effect is theorized to emerge from the combination of two effects: In the absence of prior allegations against other firms, investors fail to anticipate the focal allegation, and hence react particularly negatively (“anticipation effect”). In the case of many prior allegations against other firms, investors also react particularly negatively because investors perceive the focal allegation as more warranted (“evaluation effect”). The multi-industry, empirical analysis of 8,802 misconduct allegations against US firms between 2007 and 2017 provides support for our predicted, inverted U-shaped effect. Our study complements recent misconduct research on spillover effects by highlighting that not only a current allegation against an individual firm can “contaminate” other, unalleged firms but that also prior allegations against other firms can “contaminate” investor reaction to a focal allegation against an individual firm.
Cities around the world are facing the implications of a changing climate as an increasingly pressing issue. The negative effects of climate change are already being felt today. Therefore, adaptation to these changes is a mission that every city must master. Leading practices worldwide demonstrate various urban efforts on climate change adaptation (CCA) which are already underway. Above all, the integration of climate data, remote sensing, and in situ data is key to a successful and measurable adaptation strategy. Furthermore, these data can act as a timely decision support tool for municipalities to develop an adaptation strategy, decide which actions to prioritize, and gain the necessary buy-in from local policymakers. The implementation of agile data workflows can facilitate the integration of climate data into climate-resilient urban planning. Due to local specificities, (supra)national, regional, and municipal policies and (by-)laws, as well as geographic and related climatic differences worldwide, there is no single path to climate-resilient urban planning. Agile data workflows can support interdepartmental collaboration and, therefore, need to be integrated into existing management processes and government structures. Agile management, which has its origins in software development, can be a way to break down traditional management practices, such as static waterfall models and sluggish stage-gate processes, and enable the increased flexibility and agility required in urgent situations. This paper presents the findings of an empirical case study conducted in cooperation with the City of Constance in southern Germany, which is pursuing a transdisciplinary and trans-sectoral co-development approach to make management processes more agile in the context of climate change adaptation.
The aim is to present a possible way of integrating climate data into CCA planning by changing the management approach and implementing a toolbox for low-threshold access to climate data. The city administration, in collaboration with the University of Applied Sciences Constance, the Climate Service Center Germany (GERICS), and the University of Stuttgart, developed a co-creative and participatory project, CoKLIMAx, with the objective of integrating climate data into administrative processes in the form of a toolbox. One key element of CoKLIMAx is the involvement of the population, the city administration, and political decision-makers through targeted communication and regular feedback loops among all involved departments and stakeholder groups. Based on the results of a survey of 72 administrative staff members and a literature review on agile management in municipalities and city administrations, recommendations on a workflow and communication structure for cross-departmental strategies for resilient urban planning in the City of Constance were developed.
Healthy and good sleep is a prerequisite for a rested mind and body. Both form the basis for physical and mental health. Healthy sleep is hindered by sleep disorders, the medically diagnosed frequency of which increases sharply from the age of 40. This chapter describes the formal specification of a practical implementation, currently in progress, of a non-invasive system based on biomedical signal processing to support the diagnosis and treatment of sleep-related diseases. The system aims to continuously monitor vital data during sleep in a patient’s home environment over long periods by using non-invasive technologies. At the center of the development is the MORPHEUS Box (MoBo), which consists of five main conceptualizations: the MoBo core, the MoBo-HW, the MoBo algorithm, the MoBo API, and the MoBo app. These synergistic elements aim to support the diagnosis and treatment of sleep-related diseases. Although there are related developments in individual aspects of the system, no comparable approach is known that offers a similar scope of functionality, deployment flexibility, extensibility, or usability by multiple user groups. With the specification provided in this chapter, the MORPHEUS project establishes a solid platform, data model, and transmission strategies to bring forward an innovative proposal for measuring sleep quality and detecting sleep diseases from non-invasive sensors.
This paper compares two popular scripting implementations for hardware prototyping: Python scripts executed from user space and C-based Linux driver processes executed from kernel space, providing information for researchers weighing one against the other for their implementations. The conclusions show that deploying software scripts in kernel space makes it possible to guarantee a certain quality of sensor information using a Raspberry Pi without the need for advanced real-time operating systems.
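As a hypothetical sketch of the user-space side (not the benchmark code from the paper), the lateness of a fixed-rate Python polling loop can be measured directly; this is the kind of sensor-timing quality the comparison is about:

```python
import time

def sampling_jitter(period_s=0.01, n=100):
    """Run a fixed-rate polling loop in user space and report how far each
    wake-up drifts past its deadline (max and mean lateness, in seconds)."""
    deadline = time.monotonic()
    lateness = []
    for _ in range(n):
        deadline += period_s
        # Sleep until the next deadline; if we are already late, don't sleep.
        time.sleep(max(0.0, deadline - time.monotonic()))
        lateness.append(time.monotonic() - deadline)
    return max(lateness), sum(lateness) / len(lateness)
```

On a stock (non-real-time) kernel, the maximum lateness typically grows under system load; a kernel-space driver aims to bound exactly this effect.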
A nonlinear mathematical model for the dynamics of permanent magnet synchronous machines with interior magnets is discussed. The model of the current dynamics captures saturation and dependency on the rotor angle. Based on the model, a flatness-based field-oriented closed-loop controller and a feed-forward compensation of torque ripples are derived. Effectiveness and robustness of the proposed algorithms are demonstrated by simulation results.
In this paper, we have introduced new variants of two methods for projecting Supply and Use Tables, one based on a distance-minimisation approach (SUT-RAS) and one based on the Leontief model (SUT-EURO). We have also compared them under similar and comparable exogenous information, i.e. with and without exogenous industry output, and with explicit consideration of taxes less subsidies on products. We have conducted an empirical assessment of all of these methods against a set of annual tables between 2000 and 2005 for Austria, Belgium, Spain and Italy. From the empirical assessment, we obtained three main conclusions: (a) the use of extra information (i.e. industry output) generally improves projected estimates in both methods; (b) whenever industry output is available, the SUT-RAS method should be used, and otherwise the SUT-EURO method should be used instead; and (c) when total industry output is not available, it is best estimated by the SUT-EURO method.
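As a generic illustration of the biproportional scaling underlying RAS-type projection methods (a plain RAS update, not the authors' exact SUT-RAS algorithm), a non-negative base matrix can be rescaled so that its margins match new row and column totals:

```python
import numpy as np

def ras(A0, row_targets, col_targets, tol=1e-10, max_iter=1000):
    """Biproportional (RAS) adjustment: alternately rescale the rows and
    columns of a non-negative base matrix A0 until its margins match the
    target row and column sums (which must add up to the same total)."""
    A = np.asarray(A0, dtype=float).copy()
    r_t = np.asarray(row_targets, dtype=float)
    c_t = np.asarray(col_targets, dtype=float)
    for _ in range(max_iter):
        A *= (r_t / A.sum(axis=1))[:, None]   # row scaling step ("R")
        A *= (c_t / A.sum(axis=0))[None, :]   # column scaling step ("S")
        if np.abs(A.sum(axis=1) - r_t).max() < tol:
            break
    return A
```

SUT-RAS generalizes this idea to joint Supply and Use tables with additional blocks such as taxes less subsidies on products, but the alternating row/column scaling loop is the basic mechanism.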
The main objective of this paper is to revisit the Euro method in a critical and constructive way. We have analysed some arguments against the Euro method published recently in the literature, as well as some other relevant aspects of the SUT-EURO and SUT-RAS methods not covered before. Although the Euro method is not perfect, we believe that there is still space for its use in updating and regionalizing Supply and Use tables.
Software startups
(2016)
Software startup companies develop innovative, software-intensive products within limited time frames and with few resources, searching for sustainable and scalable business models. Software startups are quite distinct from traditional mature software companies, but also from micro-, small-, and medium-sized enterprises, introducing new challenges relevant for software engineering research. This paper’s research agenda focuses on software engineering in startups, identifying, in particular, 70+ research questions in the areas of supporting startup engineering activities, startup evolution models and patterns, ecosystems and innovation hubs, human aspects in software startups, applying startup concepts in non-startup environments, and methodologies and theories for startup research. We connect and motivate this research agenda with past studies in software startup research, while pointing out possible future directions. While all authors of this research agenda have their main background in Software Engineering or Computer Science, their interest in software startups broadens the perspective to the challenges, but also to the opportunities that emerge from multi-disciplinary research. Our audience is therefore primarily software engineering researchers, even though we aim at stimulating collaborations and research that crosses disciplinary boundaries. We believe that with this research agenda we cover a wide spectrum of the software startup industry’s current needs.
Despite the increased attention dedicated to research on the antecedents and determinants of new venture survival in entrepreneurship, defining and capturing survival as an outcome represents a challenge in quantitative studies. This paper creates awareness for ventures being inactive while still classified as surviving based on the data available. We describe this as the ‘living dead’ phenomenon, arguing that it yields potential effects on the empirical results of survival studies. Based on a systematic literature review, we find that this issue of inactivity has not been sufficiently considered in previous new venture survival studies. Based on a sample of 501 New Technology-Based Firms, we empirically illustrate that the classification of living dead ventures into either survived or failed can impact the factors determining survival. On this basis, we contribute to an understanding of the issue by defining the ‘living dead’ phenomenon and by proposing recommendations for research practice to solve this issue in survival studies, taking the data source, the period under investigation and the sample size into account.
This paper builds upon the widely-used resource-based approach to explaining survival of new technology-based firms (NTBFs). However, instead of looking at the NTBF's initial resource configuration, a process-oriented perspective is taken by focusing on the entrepreneur's ability to transform resources in response to triggers resulting from market interactions. Transaction relations reflect these interactions and are thus operationalized with a suggested method for measuring the status of venture emergence (VE) applicable to early-stage NTBFs. NTBFs' value network maturity is reflected in the number and strength of their transaction relations in the four market dimensions customer, investor, partner, and human resource. Business plans of NTBFs represent the artifact that contains this data in the form of transaction relation descriptions. Using content analysis, a multi-step combined human and computer coding process has been developed to annotate and classify transaction relations from business plans in order to empirically determine NTBFs' status of VE. Results of the business plan analysis suggest that the level of transaction relations allows conclusions to be drawn about the VE status. Moreover, applying the developed process, a first analysis of a business plan coding test shows that the transaction relation based VE status significantly relates to NTBF survival capability.
In this article, we give the construction of new four-dimensional signal constellations in Euclidean space, which represent a certain combination of binary frequency-shift keying (BFSK) and M-ary amplitude-phase-shift keying (MAPSK). A description of such signals and formulas for calculating the minimum squared Euclidean distance are presented. We have developed an analytic construction method for even and odd values of M; hence, no computer search and no heuristic methods are required. The new optimized BFSK-MAPSK (M = 5, 6, ···, 16) signal constructions are built for the modulation indexes h = 0.1, 0.15, ···, 0.5, and their parameters are given. The results of computer simulations are also provided. Based on the obtained results, we can conclude that BFSK-MAPSK systems outperform similar four-dimensional systems both in terms of minimum squared Euclidean distance and simulated symbol error rate.
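The figure of merit here, the minimum squared Euclidean distance of a constellation, can be checked by brute force for any candidate set of signal points (a generic sketch; the 4-D BFSK-MAPSK points themselves are not reproduced here):

```python
from itertools import combinations

def min_squared_distance(constellation):
    """Minimum squared Euclidean distance over all distinct pairs of
    signal points, each given as a tuple of coordinates (any dimension)."""
    return min(sum((pi - qi) ** 2 for pi, qi in zip(p, q))
               for p, q in combinations(constellation, 2))
```

Because the function only iterates over coordinate tuples, it applies unchanged to four-dimensional points; for unit-energy QPSK, for example, it returns the familiar value 2.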
In this letter, we present an approach to building a new generalized multistream spatial modulation (GMSM) system, where the information is conveyed by the two active antennas with signal indices, using all possible active antenna combinations. The signal constellations associated with these antennas may have different sizes. In addition, four-dimensional hybrid frequency-phase modulated signals are utilized in GMSM. Examples of GMSM systems are given, and computer simulation results are presented for transmission over Rayleigh and deep Nakagami-m flat-fading channels when maximum-likelihood detection is used. The presented results indicate a significant improvement in characteristics compared to the best-known similar systems.
Production and marketing of cereal grains are some of the main activities in developing countries to ensure food security. However, the food gap is complicated further by high postharvest loss of grains during storage. This study aimed to compare low‐cost modified‐atmosphere hermetic storage structures with traditional practice to minimize quantitative and qualitative losses of grains during storage. The study was conducted in two phases: in the first phase, seven hermetic storage structures with or without smoke infusion were compared, and one selected structure was further validated at scaled‐up capacity in the second phase.
Bernstein polynomials on a simplex V are considered. The expansion of a given polynomial p into these polynomials provides bounds for range of p over V. Bounds for the range of a rational function over V can easily be obtained from the Bernstein expansions of the numerator and denominator polynomials of this function. In this paper it is shown that these bounds converge monotonically and linearly to the range of the rational function if the degree of the Bernstein expansion is elevated. If V is subdivided then the convergence is quadratic with respect to the maximum of the diameters of the subsimplices.
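For intuition, the bounding idea can be sketched in the simplest univariate case over [0,1] (a one-dimensional simplex): the range of p/q is enclosed by the minimum and maximum of the ratios of the Bernstein coefficients of numerator and denominator, provided the denominator's coefficients all have one sign. A minimal sketch, not the paper's multivariate implementation:

```python
from math import comb

def bernstein_coeffs(a):
    """Bernstein coefficients on [0,1] of p(x) = sum a[i] * x**i."""
    n = len(a) - 1
    return [sum(comb(j, i) / comb(n, i) * a[i] for i in range(j + 1))
            for j in range(n + 1)]

def rational_bounds(p, q):
    """Enclosure of p(x)/q(x) over [0,1] from coefficient ratios.
    p and q are power-basis coefficient lists, zero-padded to a common
    degree; valid only when all Bernstein coefficients of q share one sign."""
    n = max(len(p), len(q))
    bp = bernstein_coeffs(p + [0.0] * (n - len(p)))
    bq = bernstein_coeffs(q + [0.0] * (n - len(q)))
    if not (all(c > 0 for c in bq) or all(c < 0 for c in bq)):
        raise ValueError("denominator Bernstein coefficients change sign")
    ratios = [x / y for x, y in zip(bp, bq)]
    return min(ratios), max(ratios)
```

For p(x) = x and q(x) = 1 + x the computed enclosure [0, 0.5] happens to coincide with the exact range on [0,1]; in general the enclosure converges to the range under degree elevation or subdivision, which is the behavior analyzed here.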
Matrix methods for the computation of bounds for the range of a complex polynomial and its modulus over a rectangular region in the complex plane are presented. The approach relies on the expansion of the given polynomial into Bernstein polynomials. The results are extended to multivariate complex polynomials and rational functions.
Tests for speeding up the determination of the Bernstein enclosure of the range of a multivariate polynomial and a rational function over a box and a simplex are presented. In the polynomial case, this enclosure is the interval spanned by the minimum and the maximum of the Bernstein coefficients which are the coefficients of the polynomial with respect to the tensorial or simplicial Bernstein basis. The methods exploit monotonicity properties of the Bernstein coefficients of monomials as well as a recently developed matrix method for the computation of the Bernstein coefficients of a polynomial over a box.
In this paper, multivariate polynomials in the Bernstein basis over a simplex (simplicial Bernstein representation) are considered. Two matrix methods for the computation of the polynomial coefficients with respect to the Bernstein basis, the so-called Bernstein coefficients, are presented. Also matrix methods for the calculation of the Bernstein coefficients over subsimplices generated by subdivision of the standard simplex are proposed and compared with the use of the de Casteljau algorithm. The evaluation of a multivariate polynomial in the power and in the Bernstein basis is considered as well. All the methods solely use matrix operations such as multiplication, transposition, and reshaping; some of them rely also on the bidiagonal factorization of the lower triangular Pascal matrix or the factorization of this matrix by a Toeplitz matrix. The latter one enables the use of the Fast Fourier Transform hereby reducing the amount of arithmetic operations.
In this paper, multivariate polynomials in the Bernstein basis over a box (tensorial Bernstein representation) are considered. A new matrix method for the computation of the polynomial coefficients with respect to the Bernstein basis, the so-called Bernstein coefficients, is presented and compared with existing methods. Also matrix methods for the calculation of the Bernstein coefficients over subboxes generated by subdivision of the original box are proposed. All the methods solely use matrix operations such as multiplication, transposition and reshaping; some of them rely on the bidiagonal factorization of the lower triangular Pascal matrix or the factorization of this matrix by a Toeplitz matrix. In the case that the coefficients of the polynomial are due to uncertainties and can be represented in the form of intervals it is shown that the developed methods can be extended to compute the set of the Bernstein coefficients of all members of the polynomial family.
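In the univariate case, the matrix viewpoint reduces to a single matrix-vector product involving the lower triangular Pascal matrix; this sketch (an illustration, not the paper's multivariate tensorial method) computes the Bernstein coefficients from power-basis coefficients:

```python
import numpy as np
from math import comb

def bernstein_via_pascal(a):
    """Bernstein coefficients of p(x) = sum a[i] * x**i on [0,1], written
    as one matrix-vector product b = P_L @ (a / c), where P_L is the lower
    triangular Pascal matrix (P_L[j, i] = C(j, i)) and c[i] = C(n, i)."""
    a = np.asarray(a, dtype=float)
    n = a.size - 1
    # math.comb(j, i) is 0 for i > j, so P_L is lower triangular by construction.
    P_L = np.array([[comb(j, i) for i in range(n + 1)]
                    for j in range(n + 1)], dtype=float)
    c = np.array([comb(n, i) for i in range(n + 1)], dtype=float)
    return P_L @ (a / c)
```

The minimum and maximum of the returned coefficients then enclose the range of p over [0,1]; the multivariate methods described above apply analogous matrix products along each coordinate direction of the box.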
In tomato drying, degradation in final quality may occur based on the drying method used and predrying preparation. Hence, this research was conducted to evaluate the effect of different predrying treatments on physicochemical quality and drying kinetics of twin-layer-solar-tunnel-dried tomato slices. During the experimental work, tomato slices of var. Galilea were used. As predrying treatments, 0.5% calcium chloride (CaCl2), 0.5% ascorbic acid (C6H8O6), 0.5% citric acid (C6H8O7), and 0.5% sodium chloride (NaCl) were used. The tomato samples were sliced to 5 mm thickness, soaked in the pretreatments for ten minutes, and dried in a twin layer solar tunnel dryer under the weather conditions of Jimma, Ethiopia. Untreated samples were used as control. The moisture losses from the samples were monitored by weighing samples at 2 h intervals from each treatment. SAS statistical software version 9.2 was used for analyzing data on the physicochemical quality of tomato slices in a completely randomized design (CRD) with three replications. From the experimental result, it was observed that dried tomato slices pretreated with 0.5% ascorbic acid gave the best retention of vitamin C and total phenolic content with a high sugar/acid ratio. Better retention of lycopene and fast drying were observed in dried tomato slices pretreated with 0.5% sodium chloride, and pretreating tomatoes with 0.5% citric acid resulted in better color values than the other treatments. Compared to the control, pretreating significantly preserved the overall quality of dried tomato slices and increased the moisture removal rate in the twin layer solar tunnel dryer.
Border issues continue to be of interest in tourism literature, most significantly that which focusses on cross-border shopping (e.g., currency values, taxation,
security). Borders as destinations are recognized in this area but the notion of shopping as a destination is perhaps less acknowledged. Following a review of the relevant literature, including the presentation of a table summarizing key areas of cross-border tourism research around the world, this paper presents a unique example of a border region with two-way traffic for cross-border shopping tourism: the border between Germany and Switzerland.
The particular case is where two cities meet at the border: Konstanz, Germany and Kreuzlingen, Switzerland. An intercept survey and key informant interviews were conducted in both communities in the spring of 2015. The results indicate high levels of traffic for various products and services. While residents are generally satisfied with cross-border shopping in their communities, there are emerging issues related to visitor volume: too many visitors in Konstanz and not enough in Kreuzlingen.
The paper concludes with a discussion that includes the development of a model of cross-border shopping tourism that recognizes the multiple layers of space and destination.
The paper concludes with a proposal to further investigate the particular issues related to the volume on both sides of borders where cross-border shopping is the destination.
A conceptual framework for indigenous ecotourism projects – a case study in Wayanad, Kerala, India
(2020)
This paper analyses indigenous ecotourism in the Indian district of Wayanad, Kerala, using a conceptual framework based on a PATA 2015 study on indigenous tourism that includes the criteria: human rights, participation, business and ecology. Detailed indicator sets for each criterion are applied to a case study of the Priyadarshini Tea Environs with a qualitative research approach addressing stakeholders from the public sector, non-governmental organisations, academia, tour operators and communities including Adivasi and non-Adivasi. In-depth interviews were supported by participant and non-participant observations. The authors adapted this framework to the needs of the case study and consider that this modified version is a useful tool for academics and practitioners wishing to evaluate and develop indigenous ecotourism projects. The results show that the Adivasi involved in the Priyadarshini Tea Environs project benefit from indigenous ecotourism. But they could profit more if they had more involvement in and control of the whole tourism value chain.
Purpose – The purpose of this paper is to examine visitor management in the German-Swiss border area of the Lake Constance region. Taking a customer perspective, it determines the requirements for an application with the ability to optimize personal mobility.
Design/methodology/approach – A quantitative study and a survey of focus groups were conducted to identify movement patterns of different types of visitors and their requirements concerning the development of a visitor management application.
Findings – Visitors want an application that provides real-time forecasts of issues such as traffic, parking and queues and, at the same time, enables them to create a personal activity schedule based on this information.
Research limitations/implications – Not every subsample reached a sufficient number of cases to yield representative results.
Practical implications – The results may lead to an optimization and management separation of mobility flows in the research area and be helpful to municipal planners, destination marketing organizations and visitors.
Originality/value – The German border cities of Konstanz, Radolfzell and Singen in the Lake Constance region need improved visitor management, mainly because of a high level of shopping tourism by Swiss visitors to Germany. In the summer months, Lake Constance is also a popular destination for leisure tourists, which causes overtourism. For the first time, the results presented here offer possible solutions, in particular by showing how a mobile application for visitors could defuse the situation.
The aim of this paper is to portray the risks of climate change for low mountain range tourism and to develop sustainable business models as an adaptation strategy. A mixed-method approach is applied, combining secondary analysis, a quantitative survey, and qualitative in-depth interviews in a transdisciplinary setting. Results show that, until now, climate change impacts on the snow situation in the Black Forest – at least above 1,000 m – have been mild, have been compensated for by artificial snowmaking, and have not had measurable effects on tourism demand. In general, the Black Forest appears to be an attractive destination for more reasons than just snow. The climate issue seems to be regarded as a rather incidental occurrence with little importance to current business decisions. However, the authors present adaptation strategies as alternatives for snow tourism, e.g. the implementation of hiking hostels, since climate change will make winter tourism in the Black Forest impossible in the long run.
The Black Forest offers renewable energy as a specific tourist destination in the form of bioenergy villages (BEV). Particularly expert tourists tend to visit them. The results of two quantitative surveys on the supply and demand side show that there is, up to now, an untapped potential among experience-oriented tourists for this type of niche tourism.
The Kerala tourism model
(2017)
Sustainable tourism in Kerala is on the rise. Therefore, this South Indian state is assessed according to the sustainable tourism criteria of the Strasdas et al. (2007) framework. Kerala as a state does not qualify as a sustainable tourism destination, although individual success stories at the NGO and government level exist. This conceptual paper delivers a detailed analysis of the three dimensions of sustainability, i.e. ecology, economy and socio-cultural aspects, of the ‘Kerala tourism model’ and discusses the question of whether this model can be transferred to other developing countries. Copyright © 2016 John Wiley & Sons, Ltd and ERP Environment
Purpose The purpose of this paper is to identify tourism movement patterns by tracking tourists with the help of positioning systems such as GPS in the rural Lake Constance destination in Germany. In doing so, the past, present and future of tourist tracking are illustrated. Design/methodology/approach The tracking is realized via common smartphones extended by an app, via dedicated sensors such as position loggers, and via a survey. The three different approaches are applied in order to compare and cross-check results (triangulation of data and methods). Findings Movement patterns turned out to be diverse and individualistic within the rural destination of Lake Constance, while following an ant-trail pattern in sub-destinations such as the city of Constance. Repeat visitors and first-time visitors alike always visit the bigger cities and main day-trip destinations of the lake. A possible prediction tool opens new avenues for governing tourism movement patterns. Research limitations/implications The tracking techniques can be developed further in the direction of the “quantified self”, using gamification to make the tracking app even more attractive. Practical implications An algorithm-based prediction tool would offer new perspectives for the management of tourism movements. Social implications Further research is needed to overcome the feeling of invasiveness of the app in order to allow tracking with this approach. Originality/value This study is original and innovative because of the first-time use of a smartphone app in tourist tracking, the application to a rural destination and the conceptual description of a prediction tool.
This paper presents a framework to assess the cultural sustainability of Aboriginal tourism in British Columbia, which must take into account the protection of human rights, good self-governance, identity, control of land, the tourism product’s authenticity, and a market-ready tourism product. These criteria are each specified by two indicators. The cultural sustainability framework was generated by triangulating qualitative research methods such as expert interviews, secondary research, and participant and non-participant observations. This paper is thus conceptual in nature and inductive in its approach. It partly leverages a collaborative approach, as it includes interviewees in an iterative research loop. Furthermore, the paper shows why cultural sustainability is a determinant of the success of Aboriginal tourism.
This paper applies the concept of Soja’s Thirdspace to the phenomenon of Lazgi dance and tourism in Uzbekistan. In doing so it analyses the different levels of perception (including Firstspace and Secondspace) of Lazgi and tourism via an autoethnographic lens. Complemented by expert interviews, the interaction of Lazgi and tourism is examined and characteristics of the Lazgisphere (world of Lazgi) in Uzbekistan are distilled. The results show that Lazgi is often directly or indirectly connected with tourism in Uzbekistan, but even more so serves to reaffirm national identity.
E-mobility in Tourism
(2018)
This article examines chances for and obstacles to e-mobility in tourism in the cross-border region of Lake Constance, Germany. Using secondary internet research, a database of key e-mobility supply factors was generated and visualized utilizing a geographical information system. The results show that fragmentation in infrastructure and information due to the cross-border situation of the four-country region is the main obstacle to e-mobility in tourism in the Lake Constance region. Cooperation and coordination on the supply side of e-mobility in the Lake Constance region turned out to be weak. To improve the chances of e-mobility in cross-border tourism, a more client-oriented approach regarding information, accessibility, and conditions of use is necessary.
A post-growth economy is a comparatively new paradigm in the tourism discourse. The aim of this article is to identify the commonalities between this concept and Māori tourism and the ways in which the latter can contribute to a post-growth economy. A qualitative mixed-method approach, including in-depth interviews, participant observation, and secondary analysis, is applied. The results show that there is considerable overlap between Māori tourism and a post-growth economy. Differences are visible as well, regarding the value approach of Māori tourism and the indicator approach of a post-growth economy. In particular, the social innovation of granting legal personhood to parts of nature, created in Aotearoa New Zealand at the instigation of Māori groups, may serve as a driver for a form of tourism that is in line with the idea of a post-growth economy.
The aim of this paper is to find out how accommodation providers in the Seychelles perceive climate change and what mitigation and adaptation measures they can provide. In order to answer these questions, a qualitative mixed-method approach, comprising twenty semi-structured interviews, an online survey and participant observation, was used. Results show that accommodation providers especially perceive those effects of climate change that directly affect their business, and that they have already partly implemented some mitigation and adaptation measures. However, strategies and regulations are needed at the level of the Seychelles’ government and on a global level to actually achieve CO2-neutral travel.
This paper examines the interdependencies of tourism, Buddhism and sustainability combining in-depth-interviews with Buddhism experts and non-participant observation in a mixed-method approach. The area under investigation is the Alpine region of Austria, Germany and Switzerland, since it is home to Asian and Western forms of Buddhism tourism alike. Results show that Buddhism tourism as a value-based activity on the one hand is not commercial, but since demand is rising, on the other hand tendencies towards more commercial forms can be observed. As a modest form of activity Buddhism tourism does not shape the landscape of the Alpine area and by its nature it incorporates sustainability.
Code-based cryptosystems are promising candidates for post-quantum cryptography. Recently, generalized concatenated codes over Gaussian and Eisenstein integers were proposed for those systems. For a channel model with errors of restricted weight, those q-ary codes lead to high error correction capabilities. Hence, these codes achieve high work factors for information set decoding attacks. In this work, we adapt this concept to codes for the weight-one error channel, i.e., a binary channel model where at most one bit-error occurs in each block of m bits. We also propose a low complexity decoding algorithm for the proposed codes. Compared to codes over Gaussian and Eisenstein integers, these codes achieve higher minimum Hamming distances for the dual codes of the inner component codes. This property increases the work factor for a structural attack on concatenated codes leading to higher overall security. For comparable security, the key size for the proposed code construction is significantly smaller than for the classic McEliece scheme based on Goppa codes.
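To make the channel model concrete, here is a minimal Python sketch of a weight-one error channel, i.e. a binary channel in which at most one bit error occurs in each block of m bits. The function name and the per-block error probability `p_err` are assumptions of this sketch, not taken from the paper:

```python
import random

def weight_one_error_channel(bits, m, p_err=0.5, rng=None):
    """Flip at most one bit in each m-bit block.

    Per block of m bits, either no error occurs (probability 1 - p_err)
    or exactly one uniformly chosen bit position is flipped.  The error
    word therefore has Hamming weight at most one in every block, which
    is the restriction the proposed codes are designed for.
    """
    rng = rng or random.Random()
    out = list(bits)
    for start in range(0, len(bits), m):
        block = out[start:start + m]
        if rng.random() < p_err:
            pos = rng.randrange(len(block))  # at most one flip per block
            block[pos] ^= 1
        out[start:start + m] = block
    return out
```

Because the error weight per block is bounded, a decoder only has to locate one flipped position per block, which is what keeps the decoding complexity of the proposed codes low.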
Generalized Concatenated Codes over Gaussian and Eisenstein Integers for Code-Based Cryptography
(2021)
The code-based McEliece and Niederreiter cryptosystems are promising candidates for post-quantum public-key encryption. Recently, q-ary concatenated codes over Gaussian integers were proposed for the McEliece cryptosystem together with the one-Mannheim error channel, where the error values are limited to Mannheim weight one. Due to the limited error values, the codes over Gaussian integers achieve a higher error correction capability than maximum distance separable (MDS) codes with bounded minimum distance decoding. This higher error correction capability improves the work factor regarding decoding attacks based on information-set decoding. The codes also enable a low complexity decoding algorithm for decoding beyond the guaranteed error correction capability. In this work, we extend this coding scheme to codes over Eisenstein integers. These codes have advantages for the Niederreiter system. Additionally, we propose an improved code construction based on generalized concatenated codes. These codes extend the rate region where the work factor is beneficial compared to MDS codes. Moreover, generalized concatenated codes are more robust against structural attacks than ordinary concatenated codes.
Large-scale quantum computers threaten the security of today's public-key cryptography. The McEliece cryptosystem is one of the most promising candidates for post-quantum cryptography. However, the McEliece system has the drawback of large key sizes for the public key. Similar to other public-key cryptosystems, the McEliece system has a comparably high computational complexity. Embedded devices often lack the required computational resources to compute those systems with sufficiently low latency. Hence, those systems require hardware acceleration. Lately, a generalized concatenated code construction was proposed together with a restrictive channel model, which allows for much smaller public keys for comparable security levels. In this work, we propose a hardware decoder suitable for a McEliece system based on these generalized concatenated codes. The results show that those systems are suitable for resource-constrained embedded devices.
The performance and reliability of non-volatile NAND flash memories deteriorate as the number of program/erase cycles grows. The reliability also suffers from cell-to-cell interference, long data retention times, and read disturb. These processes affect the read threshold voltages. The aging of the cells causes voltage shifts which lead to high bit error rates (BER) with fixed pre-defined read thresholds. This work proposes two methods that aim at minimizing the BER by adjusting the read thresholds. Both methods utilize the number of errors detected in the codeword of an error correction code. It is demonstrated that the observed number of errors is a good measure of the voltage shifts and is utilized for the initial calibration of the read thresholds. The second approach is a gradual channel estimation method that utilizes the asymmetrical error probabilities for the one-to-zero and zero-to-one errors that are caused by threshold calibration errors. Both methods are investigated utilizing the mutual information between the optimal read voltage and the measured error values.
Numerical results obtained from flash measurements show that these methods reduce the BER of NAND flash memories significantly.
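The gradual estimation idea above can be sketched in a few lines of Python: after ECC decoding, the raw read word is compared with the corrected codeword, and an imbalance between one-to-zero and zero-to-one errors indicates in which direction the read threshold is miscalibrated. The function name, step size, and sign convention are assumptions of this sketch, not the paper's actual method:

```python
def threshold_step(read_bits, corrected_bits, step=0.01):
    """Suggest a read-threshold adjustment from asymmetric error counts.

    e10 counts bits read as 0 that the ECC corrected to 1; e01 counts
    the opposite.  A surplus of one error type suggests the threshold
    sits off-center between the two voltage distributions, so the
    threshold is nudged a small step toward balancing both error types.
    """
    e10 = sum(1 for r, c in zip(read_bits, corrected_bits) if r == 0 and c == 1)
    e01 = sum(1 for r, c in zip(read_bits, corrected_bits) if r == 1 and c == 0)
    if e10 > e01:
        return -step   # too many 1->0 errors: lower the threshold
    if e01 > e10:
        return +step   # too many 0->1 errors: raise the threshold
    return 0.0         # balanced errors: threshold near optimum
```

Iterating this rule over successive reads drives the threshold toward the point where both error types are equally likely, which is where the BER is minimized for symmetric noise.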
Infrastructure-making in interwar India was a dynamic, multilayered process involving roads and vehicles in urban and rural sites. One of their strongest playgrounds was Bombay Presidency and the Central Provinces in central and western India. Focusing on this region in the interwar period, this paper analyzes the varied relationship between peasant households and town-centred modernizing agents in the making of road transport infrastructures. The central argument of this paper is about the persistence of bullock carts over motor cars in the region. This persistence was grounded in the specific regional environment, the effects of the 1930s economic depression, and the priorities of social classes. Pinpointing these connections, the paper highlights that “modernization” of infrastructure was not a simple, linear process of progressivist change, nor did it mean the survival of apparently “old” technologies in the modern era. Instead, the paper pays attention to conflicting social complexities, implications, and meanings of the connection between infrastructure and modernity that modernization assumptions often overlook. Here, the paper shows how technological change occurred as a result of real, material class interests pulling infrastructural technology in different directions. This was where and why arguments of road-motor lobbyists and cart advocates eventually clashed, and Gandhian social workers resisted motor transport in defense of peasant interests.
This paper introduces the third update/release of the Global Sanctions Data Base (GSDB-R3). The GSDB-R3 extends the period of coverage from 1950–2019 to 1950–2022, which includes two special periods—COVID-19 and the new sanctions against Russia. This update of the GSDB contains a total of 1325 cases. In response to multiple inquiries and requests, the GSDB-R3 has been amended with a new variable that distinguishes between unilateral and multilateral sanctions. As before, the GSDB comes in two versions, case-specific and dyadic, which are freely available upon request at GSDB@drexel.edu. To highlight one of the new features of the GSDB, we estimate the heterogeneous effects of unilateral and multilateral sanctions on trade. We also obtain estimates of the effects on trade of the 2014 sanctions on Russia.
Although the Hospice Foundation in Constance knew they had a personnel
problem, they were unsure how to begin to fix it. In addition to difficulties in
finding and keeping employees, the Hospice Foundation’s employees were
often on sick leave, adding pressure on remaining staff. Twelve communication
design students in the master’s program at the University of Applied
Sciences in Constance (HTWG Konstanz) conducted a study aimed at
identifying the causes for these problems and, more generally, understanding
how the employees work and feel. Even though the methods in this
study are well known, it presents an important prototype for designers and
design researchers because of its success in finding useful insights. It also
serves as a pre-design project briefing for both management and designers.
It demonstrates the usefulness of qualitative methods in providing a deeper
understanding of a complex situation and its usefulness as a strategic tool
and for defining a project’s focus and scope. Ideally, it also provides insights
into health care for the elderly.
R concretes with a proportion of recycled aggregates are standardized normal concretes which are allowed for use in Germany up to strength class C30/37. Because of the good technical properties and the ecological advantages, the article presents possible applications in the field of concrete products and precast concrete elements. Read part 2 of the paper.
R concretes with a proportion of recycled aggregates are standardized normal concretes which are allowed for use in Germany up to strength class C30/37. Because of the good technical properties and the ecological advantages, the article presents possible applications in the field of concrete products and precast concrete elements. Read part 1 of the paper.
Thermal shape memory alloys show extraordinary material properties and can be used as actuators, dampers and sensors. Since their discovery in the middle of the last century they have been investigated and further developed. The majority of the industrial applications with the highest material sales can still be found in the medical industry, where they are used due to their superelastic and thermal shape memory effect, e.g. as stents or as guidewires and tools in minimally invasive surgery. Particularly in recent years, more and more applications have been developed for other industrial fields, e.g. the household goods, civil engineering and automotive sectors. In this context it is worth mentioning that in the latter sector, million-seller series applications have found their way into the vehicles of some European automobile manufacturers. The German VDI guideline for shape memory alloys introduced in 2017 will give the material a further boost in application. Last but not least, the new production technologies of additive manufacturing with metal laser sintering plants open up additional applications for these multifunctional materials. This paper gives an overview of the extraordinary material properties of shape memory components, shows examples of different applications and discusses European trends against the background of the most recent standard and new production technologies.
In the automotive sector, many electromagnetically, pyrotechnically or mechanically driven actuators are integrated to run comfort systems and to control safety systems in modern passenger cars. Using shape memory alloys (SMA), the existing systems could be simplified, performing the same function through new mechanisms with reduced size, weight, and costs. A drawback for the use of SMA in safety systems is the lack of materials knowledge concerning the durability of the switching function (long-term stability of the shape memory effect). Pedestrian safety systems play a significant role in reducing injuries and fatal casualties caused by accidents. One automotive safety system for pedestrian protection is the bonnet lifting system. Based on such an application, this article gives an introduction to existing bonnet lifting systems for pedestrian protection, describes the use of quick-changing shape memory actuators, and presents the results of a study concerning the long-term stability of the tested NiTi wires. These wires were trained, exposed for up to 4 years at elevated temperatures (up to 140°C) and tested regarding their phase change temperatures, times, and strokes. For example, it was found that the A_P temperature is shifted toward higher temperatures with longer exposure periods and higher temperatures. However, in the functional testing plant a delay in the switching time could not be detected. This article provides some answers concerning the long-term stability of NiTi wires that were missing until now. With this knowledge, the number of future automotive applications using SMA can be increased. It can be concluded that the use of quick-changing shape memory actuators in safety systems could simplify the mechanism, reduce maintenance and manufacturing costs, and should also be applicable to other automotive applications.
Characterization of NiTi Shape Memory Damping Elements designed for Automotive Safety Systems
(2014)
Actuator elements made of NiTi shape memory material are becoming more and more known in industry because of their unique properties. Due to the martensitic phase change, they can revert to their original shape upon heating when subjected to an appropriate treatment. This thermal shape memory effect (SME) can show a significant shape change combined with a considerable force. Therefore such elements can be used to solve many technical tasks in the field of actuating elements and mechatronics and will play an increasing role in the coming years, especially within automotive technology, energy management, power and mechanical engineering, as well as medical technology. Besides this thermal SME, these materials also show a mechanical SME, characterized by a superelastic plateau with reversible elongations in the range of 8%. This behavior is based on the formation of stress-induced martensite from loaded austenite material at constant temperature and facilitates many applications, especially in the medical field. Both SMEs are accompanied by energy dissipation during the martensitic phase change. This paper describes the first results obtained on different actuator and superelastic NiTi wires concerning their use as damping elements in automotive safety systems. In a first step, the damping behavior of small NiTi wires up to 0.5 mm diameter was examined at testing speeds varying between 0.1 and 50 mm/s on an adapted tensile testing machine. In order to realize higher testing speeds, a drop impact testing machine was designed, which allows testing speeds up to 4000 mm/s. After introducing this new type of testing machine, the first results of vertical-shock tests of superelastic and electrically activated actuator wires are presented. The characterization of these high-dynamic phase change parameters represents the basis for new applications for shape memory damping elements, especially in automotive safety systems.
This paper describes the rationale and the development of a structured digital approach for measuring corporate environmental sustainability using performance metrics.
Preserving our environment is indispensable in today's age, and the corporate world is no exception. Currently, sustainability is mostly considered only rudimentarily in companies, often reduced to written statements on the website. This will no longer be sufficient in the future, which is why companies should record sustainability on a numerical basis. Building on the development of a workable concept for companies, a small empirical study was carried out, which can be used to numerically measure the sustainability performance of companies. Two utility analyses were completed.
One of them was supplemented by expert interviews. Well-known practitioners from the business world were interviewed and asked for their assessment of ecological performance indicators. The result of the research is an indicator-based concept that can be applied in corporate practice to determine ecological sustainability performance.
Generalised concatenated (GC) codes are well suited for error correction in flash memories for high-reliability data storage. The GC codes are constructed from inner extended binary Bose–Chaudhuri–Hocquenghem (BCH) codes and outer Reed–Solomon codes. The extended BCH codes enable high-rate GC codes and low-complexity soft input decoding. This work proposes a decoder architecture for high-rate GC codes. For such codes, outer error and erasure decoding are mandatory. A pipelined decoder architecture is proposed that achieves a high data throughput with hard input decoding. In addition, a low-complexity soft input decoder is proposed. This soft decoding approach combines a bit-flipping strategy with algebraic decoding. The decoder components for the hard input decoding can be utilised which reduces the overhead for the soft input decoding. Nevertheless, the soft input decoding achieves a significant coding gain compared with hard input decoding.
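The bit-flipping strategy combined with algebraic decoding, as described above, can be illustrated with a generic Chase-style sketch: the least reliable bit positions (smallest |LLR|) are flipped in turn, each test pattern is handed to the existing hard-input decoder, and the surviving candidate closest to the soft input wins. This is a simplified, hypothetical sketch, not the paper's exact algorithm; `hard_decode` stands in for the algebraic BCH decoder and returns a codeword or `None` on failure:

```python
from itertools import combinations

def chase_like_decode(soft_llrs, hard_decode, max_flips=2):
    """Soft input decoding via bit flipping around a hard-input decoder.

    Flips subsets of the max_flips least reliable positions, reuses the
    hard decoder on each test pattern, and keeps the candidate with the
    smallest soft-decision penalty (sum of |LLR| over disagreeing bits).
    """
    hard = [0 if l >= 0 else 1 for l in soft_llrs]
    # indices of the least reliable bits
    unreliable = sorted(range(len(soft_llrs)),
                        key=lambda i: abs(soft_llrs[i]))[:max_flips]
    patterns = [()] + [c for k in range(1, max_flips + 1)
                       for c in combinations(unreliable, k)]
    best, best_metric = None, float("inf")
    for pat in patterns:
        trial = list(hard)
        for i in pat:
            trial[i] ^= 1
        cand = hard_decode(trial)
        if cand is None:
            continue  # hard decoder failed on this test pattern
        # penalty for each bit of the candidate disagreeing with the LLR sign
        metric = sum(abs(l) for l, b in zip(soft_llrs, cand)
                     if (0 if l >= 0 else 1) != b)
        if metric < best_metric:
            best, best_metric = cand, metric
    return best
```

Reusing the hard decoder for every test pattern is what allows the hard input decoder components to be shared, keeping the hardware overhead of soft input decoding small.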
This paper proposes a soft input decoding algorithm and a decoder architecture for generalized concatenated (GC) codes. The GC codes are constructed from inner nested binary Bose-Chaudhuri-Hocquenghem (BCH) codes and outer Reed-Solomon codes. In order to enable soft input decoding for the inner BCH block codes, a sequential stack decoding algorithm is used. Ordinary stack decoding of binary block codes requires the complete trellis of the code. In this paper, a representation of the block codes based on the trellises of supercodes is proposed in order to reduce the memory requirements for the representation of the BCH codes. This enables an efficient hardware implementation. The results for the decoding performance of the overall GC code are presented. Furthermore, a hardware architecture of the GC decoder is proposed. The proposed decoder is well suited for applications that require very low residual error rates.
This paper proposes a pipelined decoder architecture for generalised concatenated (GC) codes. These codes are constructed from inner binary Bose-Chaudhuri-Hocquenghem (BCH) and outer Reed-Solomon codes. The decoding of the component codes is based on hard decision syndrome decoding algorithms. The concatenated code consists of several small BCH codes. This enables a hardware architecture where the decoding of the component codes is pipelined. A hardware implementation of a GC decoder is presented and the cell area, cycle counts as well as the timing constraints are investigated. The results are compared to a decoder for long BCH codes with similar error correction performance. In comparison, the pipelined GC decoder achieves a higher throughput and has lower area consumption.
Purpose
In order to combat climate change and safeguard a liveable future we need fundamental and rapid social change. Climate communication can play an important role to nurture the public engagement needed for this change, and higher education for sustainability can learn from climate communication.
Approach
The scientific evidence base on climate communication for effective public engagement is summarised into ten key principles, including ‘basing communication on people’s values’, ‘conscious use of framing’, and ‘turning concern into action’. Based on the author’s perspective and experience in the university context, implications are explored for sustainability in higher education.
Findings
The article provides suggestions for teaching (e.g. complement information with consistent behaviour by the lecturer, integrate local stories, and provide students with basic skills to communicate climate effectively), for research (e.g. make teaching for effective engagement the subject of applied research), for universities’ third mission to contribute to sustainable development in society (e.g. provide climate communication trainings to empower local stakeholders), and for greening the campus (develop a proper engagement infrastructure, e.g. by a university storytelling exchange on climate action).
Originality
The article provides an up-to-date overview of climate communication research, which is in itself original. This evidence base holds interesting learnings for institutions of higher education, and the link between climate communication and universities has so far not been explored comprehensively.
Background: Polysomnography (PSG) is the gold standard for detecting obstructive sleep apnea (OSA). However, this technique has many disadvantages when used outside the hospital or for daily use. Portable monitors (PMs) aim to streamline the OSA detection process through deep learning (DL).
Materials and methods: We studied how to detect OSA events and calculate the apnea-hypopnea index (AHI) by using deep learning models that aim to be implemented on PMs. Several deep learning models are presented after being trained on polysomnography data from the National Sleep Research Resource (NSRR) repository. The best hyperparameters for the DL architecture are presented. In addition, emphasis is focused on model explainability techniques, concretely on Gradient-weighted Class Activation Mapping (Grad-CAM).
Results: The results for the best DL model are presented and analyzed. The interpretability of the DL model is also analyzed by studying the regions of the signals that are most relevant for the model to make the decision. The model that yields the best result is a one-dimensional convolutional neural network (1D-CNN) with 84.3% accuracy.
Conclusion: The use of PMs using machine learning techniques for detecting OSA events still has a long way to go. However, our method for developing explainable DL models demonstrates that PMs appear to be a promising alternative to PSG in the future for the detection of obstructive apnea events and the automatic calculation of AHI.
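The apnea-hypopnea index that the models above are meant to compute automatically has a simple definition: respiratory events per hour of sleep. A minimal sketch (the helper names are illustrative; the severity bands follow the conventional clinical thresholds):

```python
def apnea_hypopnea_index(n_apneas, n_hypopneas, total_sleep_seconds):
    """AHI = (apneas + hypopneas) per hour of total sleep time."""
    hours = total_sleep_seconds / 3600.0
    return (n_apneas + n_hypopneas) / hours

def ahi_severity(ahi):
    """Conventional severity bands: <5 normal, 5-15 mild,
    15-30 moderate, >=30 severe."""
    if ahi < 5:
        return "normal"
    if ahi < 15:
        return "mild"
    if ahi < 30:
        return "moderate"
    return "severe"
```

In a DL pipeline, the event counts come from the classifier's per-segment predictions, so the accuracy of the derived AHI depends directly on the event detection accuracy reported above.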
Introduction. Despite its high accuracy, polysomnography (PSG) has several drawbacks for diagnosing obstructive sleep apnea (OSA). Consequently, multiple portable monitors (PMs) have been proposed. Objective. This systematic review aims to investigate the current literature to analyze the sets of physiological parameters captured by a PM to select the minimum number of such physiological signals while maintaining accurate results in OSA detection. Methods. Inclusion and exclusion criteria for the selection of publications were established prior to the search. The evaluation of the publications was made based on one central question and several specific questions. Results. The abilities to detect hypopneas, sleep time, or awakenings were some of the features studied to investigate the full functionality of the PMs to select the most relevant set of physiological signals. Based on the physiological parameters collected (one to six), the PMs were classified into sets according to the level of evidence. The advantages and the disadvantages of each possible set of signals were explained by answering the research questions proposed in the methods. Conclusions. The minimum number of physiological signals detected by PMs for the detection of OSA depends mainly on the purpose and context of the sleep study. The set of three physiological signals showed the best results in the detection of OSA.
The massive use of patient data for the training of artificial intelligence algorithms is common nowadays in medicine. In this scientific work, a statistical analysis is performed of one of the most frequently used datasets for training artificial intelligence models for the detection of sleep disorders: the Sleep Heart Health Study 2. This study focuses on determining whether the gender and age of the patients have an influence relevant enough to consider working with differentiated datasets based on these variables for the training of artificial intelligence models.
The goal of the presented project is to develop the concept of home e-health centers for barrier-free and cross-border telemedicine. AAL technologies are already present on the market, but there is still a gap to close before they can be used for ordinary patient needs. The general idea needs to be accompanied by new services, which should be brought together in order to provide full service coverage for the users. Sleep and stress were chosen as predominant conditions for a detailed study within this project because of their widespread prevalence in the population. The scientific study of available home devices for analyzing sleep provided the information necessary to select appropriate devices. The first choice for the project implementation is the EMFIT QS+ device. This equipment forms part of a complete system that a home telemedical hospital can provide, offering a suitable level of precision and communication with internal and/or external health services.
In extended object tracking, a target is capable of generating more than one measurement per scan. Assuming the target is of elliptical shape and given a point cloud of measurements, the Random Matrix framework can be applied to concurrently estimate the target’s dynamic state and extension. If the point cloud also contains clutter measurements or originates from more than one target, the data association problem has to be solved as well. However, the well-known joint probabilistic data association method assumes that a target can generate at most one detection. In this article, this constraint is relaxed, and a multi-detection version of joint integrated probabilistic data association is proposed. The data association method is then combined with the Random Matrix framework to track targets with elliptical shape. The final filter is evaluated in the context of tracking smaller vessels using a high-resolution radar sensor. The performance of the filter is shown in simulation and in several experiments.
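In the Random Matrix framework, the kinematic state is updated with the mean of the associated point cloud and the extension estimate with its scatter matrix. The following sketch computes just those two sufficient statistics from a 2-D point cloud; the full filter recursion and the multi-detection association weights are omitted, and the helper name is illustrative:

```python
import numpy as np

def measurement_mean_and_scatter(points):
    """Return the centroid and scatter matrix of a 2-D point cloud.

    The centroid drives the kinematic (position) update, while the
    scatter matrix sum((z_i - mean)(z_i - mean)^T) carries the shape
    information used to update the elliptical extension estimate.
    """
    z = np.asarray(points, dtype=float)   # shape (n, 2)
    mean = z.mean(axis=0)
    diff = z - mean
    scatter = diff.T @ diff               # 2x2 symmetric scatter matrix
    return mean, scatter
```

The eigenvectors and eigenvalues of the scatter matrix correspond to the orientation and axis lengths of the estimated ellipse, which is why a single matrix suffices to describe the target extent.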
Ferromagnetism is of increasing importance in the growing fields of electromobility and data storage. In stable austenitic steels, the occurrence of ferromagnetism is not expected and would also interfere with many applications. However, ferromagnetism in austenitic stainless steels after low-temperature nitriding has already been shown in the past. Herein, the presence of ferromagnetism in austenitic steels after low-temperature carburization (Kolsterizing) is reported, which represents a novel finding. A zone of expanded austenite is established on various austenitic stainless steels by low-temperature carburization, and the respective ferromagnetism is investigated in relation to the alloy composition. The occurring ferromagnetism is quantified by means of a commercial magnetoinductive sensor (Feritscope). Ferromagnetic domains are visualized by magnetic force microscopy and a ferrofluid. X-ray diffraction measurements indicate a clear difference in the lattice expansion of the different alloys. Furthermore, a different appearance of the magnetizable microstructure regions (magnetic domain structure) is detected depending on the grain orientation determined by electron backscatter diffraction (EBSD). Strongly pronounced magnetic domains show no linear lattice defects, whereas in small magnetizable areas linear lattice defects are detected by electron channeling contrast imaging and EBSD.
Investigation of magnetic effects on austenitic stainless steels after low temperature carburization
(2018)
This work investigates the magnetic effects that can occur in austenitic stainless steels after low-temperature carburisation, depending on the alloy. Samples of different alloys were prepared and subjected to multiple low-temperature carburisation cycles to obtain different treatment conditions for each alloy. The layer characterisation was carried out by light microscopy and hardness profiles and shows that the layer grows with each additional treatment cycle. A lattice expansion was detected in all treated samples by X-ray diffraction. Magnetisability was measured using Feritscope and SQUID measurements. Not all alloys showed magnetisability after treatment. In addition to MFM measurements, experiments with ferrofluid were used to visualise the magnetic areas. These studies show that only about half of the formed layer becomes magnetisable and has a domain-like structure.
Atom interferometers have a multitude of proposed applications in space including precise measurements of the Earth's gravitational field, in navigation & ranging, and in fundamental physics such as tests of the weak equivalence principle (WEP) and gravitational wave detection. While atom interferometers are realized routinely in ground-based laboratories, current efforts aim at the development of a space compatible design optimized with respect to dimensions, weight, power consumption, mechanical robustness and radiation hardness. In this paper, we present a design of a high-sensitivity differential dual species 85Rb/87Rb atom interferometer for space, including physics package, laser system, electronics and software. The physics package comprises the atom source consisting of dispensers and a 2D magneto-optical trap (MOT), the science chamber with a 3D-MOT, a magnetic trap based on an atom chip and an optical dipole trap (ODT) used for Bose-Einstein condensate (BEC) creation and interferometry, the detection unit, the vacuum system for 10-11 mbar ultra-high vacuum generation, and the high-suppression factor magnetic shielding as well as the thermal control system.
The laser system is based on a hybrid approach using fiber-based telecom components and high-power laser diode technology and includes all laser sources for 2D-MOT, 3D-MOT, ODT, interferometry and detection. Manipulation and switching of the laser beams is carried out on an optical bench using Zerodur bonding technology. The instrument consists of 9 units with an overall mass of 221 kg, an average power consumption of 608 W (819 W peak), and a volume of 470 liters; as system studies have shown, it would fit well on a satellite to be launched with a Soyuz rocket.
Insecurity Refactoring is a change to the internal structure of software that injects a vulnerability without changing the observable behavior in a normal use case scenario. An implementation of Insecurity Refactoring, based on static code analysis, is formally described for injecting vulnerabilities into source code projects. It creates learning examples with source code patterns from known vulnerabilities.
Insecurity Refactoring is achieved by building an Adversary Controlled Input Dataflow tree on top of a Code Property Graph. The tree is used to find possible injection paths, and transforming these paths allows vulnerabilities to be injected. Inserting data flow patterns introduces different code patterns from related Common Vulnerabilities and Exposures (CVE) reports. The approach is evaluated on 307 open source projects. Additionally, insecurity-refactored projects are deployed in virtual machines to be used as learning examples. Different static code analysis tools, dynamic tools, and manual inspections are used on the modified projects to confirm the presence of vulnerabilities.
The results show that vulnerabilities can be injected in 8.1% of the open source projects. Different inspected code patterns from CVE reports can be inserted using corresponding data flow patterns. Furthermore, the results reveal that the injected vulnerabilities are useful for a small sample of attendees (n=16). Insecurity Refactoring is useful for automatically generating learning examples to improve software security training. It uses real projects as a base, while the injected vulnerabilities stem from real CVE reports. This makes the injected vulnerabilities unique and realistic.
The estimation of the holding periods of financial products has to be done in a dynamic process in which the size of the observation time interval influences the result: small intervals produce smaller average holding periods than larger ones. The approach developed in this paper makes it possible to estimate this average independently of the size of the time interval. The method is demonstrated for two distributions, based on the exponential and the geometric probability functions, and the estimate is obtained by maximizing the likelihood function.
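As a minimal illustration of the likelihood-maximization step, the following sketch fits an exponential holding-time distribution to a few hypothetical durations. The data and the closed-form estimator are textbook material, not taken from the paper, which additionally corrects for the observation-interval effect:

```python
import math

def exp_mle_rate(durations):
    """Closed-form ML estimate of the rate of an exponential
    holding-time distribution: lambda_hat = n / sum(x_i)."""
    return len(durations) / sum(durations)

def log_likelihood(lam, durations):
    """Exponential log-likelihood: n*log(lam) - lam*sum(x_i)."""
    return len(durations) * math.log(lam) - lam * sum(durations)

durations = [0.5, 1.2, 0.8, 2.0, 1.5]   # hypothetical holding periods (years)
lam = exp_mle_rate(durations)
# The MLE maximizes the likelihood, so nearby rates score lower:
assert log_likelihood(lam, durations) > log_likelihood(lam * 0.9, durations)
assert log_likelihood(lam, durations) > log_likelihood(lam * 1.1, durations)
print(f"estimated rate: {lam:.3f}, mean holding period: {1/lam:.3f} years")
```

The estimated mean holding period is simply the reciprocal of the fitted rate; the paper's contribution is that its estimator does not change when the observation window is resized.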
The effect of restrictions on a variable on the mean-variance space is investigated in this paper. One restriction is the placing of upper and lower bounds on a variable; another is the loss of the variable's continuity. Average examination marks are considered as an application of this limited mean-variance space. In this case, the bounds are given by the highest and lowest possible marks (e.g. 1.0 and 5.0). The limitation of the mean-variance space depends on the number of students who participate in the examination. The restriction from the loss of continuity is shown by the use of discrete marks (e.g. 1.0, 1.3, 1.7, 2.0, …). Furthermore, the Target-Shortfall-Probability lines are integrated into the mean-variance space. These lines indicate the proportion of students with good or very good marks in the examination; in financial markets, the Target-Shortfall-Probability is used as a risk criterion.
To evaluate sleep quality, it is important to determine how much time was spent in each sleep stage during the night. The gold standard in this domain is overnight polysomnography (PSG). However, recording the necessary electrophysiological signals is extensive and complex, and the unfamiliar environment of the sleep laboratory might distort the results. In this paper, a sleep stage detection algorithm is proposed that uses only the heart rate signal, derived from the electrocardiogram (ECG), as a discriminator. This would make it possible to perform sleep analysis at home, saving considerable effort and money. From the heart rate, three parameters were calculated using the fast Fourier transform (FFT) in order to distinguish between the different sleep stages. ECG data along with hypnograms scored by professionals were taken from the PhysioNet database, making the results easy to compare. With an agreement rate of 41.3%, this approach is a foundation for future research.
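The spectral step can be sketched as follows. The synthetic heart-rate series, the plain DFT, and the LF/HF band limits (standard heart-rate-variability conventions) are illustrative assumptions; the paper does not specify its three parameters here:

```python
import math

def band_power(signal, fs, f_lo, f_hi):
    """Power of a uniformly sampled signal in [f_lo, f_hi) Hz via a plain DFT."""
    n = len(signal)
    mean = sum(signal) / n
    centered = [x - mean for x in signal]
    power = 0.0
    for k in range(1, n // 2):
        f = k * fs / n
        if f_lo <= f < f_hi:
            re = sum(centered[t] * math.cos(2 * math.pi * k * t / n) for t in range(n))
            im = sum(centered[t] * math.sin(2 * math.pi * k * t / n) for t in range(n))
            power += (re * re + im * im) / n
    return power

# Synthetic heart-rate series sampled at 4 Hz with a dominant 0.25 Hz
# (high-frequency, respiration-linked) oscillation:
fs = 4.0
hr = [60 + 5 * math.sin(2 * math.pi * 0.25 * t / fs) for t in range(256)]
lf = band_power(hr, fs, 0.04, 0.15)   # low-frequency HRV band
hf = band_power(hr, fs, 0.15, 0.40)   # high-frequency HRV band
assert hf > lf   # the synthetic rhythm lies in the HF band
```

Ratios of such band powers are commonly used to separate REM, light, and deep sleep; the concrete discriminators in the paper may differ.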
While driving, stress arises in situations in which drivers judge their ability to meet the driving demands as insufficient or lose the capability to handle the situation. This leads to more driver mistakes and traffic violations. Additional stressors are time pressure, road conditions, and dislike of driving. Stress therefore affects the driver and road safety. Stress is classified into two categories depending on its duration and its effects on body and psyche: short-term eustress, and constantly present distress, which causes degenerative effects. In this work, we focus on distress. Wearable sensors are handy tools for collecting biosignals such as heart rate and activity; their easy installation and non-intrusive nature make them convenient for estimating stress. This study investigates stress and its implications within a group of individuals from Spain and Germany. The primary objective is to examine the influence of recognized psychological factors, namely personality traits such as neuroticism, extroversion, and psychoticism, on stress and road safety. Stress levels were estimated from physiological parameters (R-R intervals) collected with a Polar H10 chest strap. We observed that personality traits such as extroversion exhibited similar trends during relaxation, with an average heart rate 6% higher in Spain and 3% higher in Germany. While driving, however, introverts on average experienced more stress, with rates 4% and 1% lower than extroverts in Spain and Germany, respectively.
So is About Urbanity
(2016)
Südstadt Tübingen is a successful military land redevelopment project in which functional mix and urban diversity were identified as the main goals. Throughout the development process, the joint building venture has been used as an important urban development instrument. On the one hand, this instrument helps citizens develop adaptive and affordable living space and build their own identity with the new quarter; on the other hand, it cultivates urban diversity, a suitable spatial feeling, and a mixed relationship between living and working conditions, so that the quarters have a very flexible structure able to accommodate diverse urban lifestyles.
The number of home office workers sitting for many hours is increasing. The sensor chair tracks users' sitting behavior with the help of pressure sensors and tries to prevent incorrect postures, which may cause disease. The system provides live monitoring of the pressure distribution via a web interface, as well as real-time sitting posture prediction. Posture analysis is realized through a machine learning algorithm using a decision tree classifier, which is compared to a random forest. Data acquisition and aggregation for the learning process happen in a mobile app, which adds the user's biometric data and the sitting posture taken as the label. The sensor chair is able to differentiate between an arched back, a neutral posture, and a laid-back position taken on the chair. The classifier achieves an accuracy of 97.4% on our test set, comparable to the random forest's 98.9%.
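A hand-coded stump in the spirit of a learned decision tree shows how aggregated pressure readings can map to the three postures. The feature names and thresholds below are invented for illustration and are not the trained model from the paper:

```python
def classify_posture(front, back):
    """Toy decision-tree-style rule over two aggregated pressure
    readings (normalized load on the front of the seat and on the
    backrest area). Thresholds are illustrative assumptions."""
    if back < 0.2:                    # almost no load against the backrest
        return "arched back"
    if back > 0.7 and front < 0.4:    # heavy backrest load, light seat front
        return "laid back"
    return "neutral"

assert classify_posture(front=0.5, back=0.1) == "arched back"
assert classify_posture(front=0.3, back=0.8) == "laid back"
assert classify_posture(front=0.5, back=0.5) == "neutral"
```

A real decision tree learns such thresholds from labeled sensor data; the hand-written splits only illustrate the shape of the resulting classifier.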
This paper examines how varying antidumping methodologies applied within the World Trade Organization differ in the extent to which they reduce targeted exports. We show that antidumping duties, on average, hit Chinese exporters harder than those of other targeted countries. This difference can be traced back in part to China's non-market economy status, which affects the way antidumping duties are calculated. Furthermore, we show that the type of imposed duty matters, as ad-valorem duties affect exports differently compared to specific duties or duties conditional on the export price. Overall, however, antidumping duties remain effective in reducing imports independent of market economy status.
This work presents a new concept to implement the elliptic curve point multiplication (PM). This computation is based on a new modular arithmetic over Gaussian integer fields. Gaussian integers are a subset of the complex numbers such that the real and imaginary parts are integers. Since Gaussian integer fields are isomorphic to prime fields, this arithmetic is suitable for many elliptic curves. Representing the key by a Gaussian integer expansion is beneficial to reduce the computational complexity and the memory requirements of secure hardware implementations, which are robust against attacks. Furthermore, an area-efficient coprocessor design is proposed with an arithmetic unit that enables Montgomery modular arithmetic over Gaussian integers. The proposed architecture and the new arithmetic provide high flexibility, i.e., binary and non-binary key expansions as well as protected and unprotected PM calculations are supported. The proposed coprocessor is a competitive solution for a compact ECC processor suitable for applications in small embedded systems.
Modular arithmetic over integers is required for many cryptography systems. Montgomery reduction is an efficient algorithm for the modulo reduction after a multiplication. Typically, Montgomery reduction is used for rings of ordinary integers. In contrast, we investigate the modular reduction over rings of Gaussian integers. Gaussian integers are complex numbers where the real and imaginary parts are integers. Rings over Gaussian integers are isomorphic to ordinary integer rings. In this work, we show that Montgomery reduction can be applied to Gaussian integer rings. Two algorithms for the precision reduction are presented. We demonstrate that the proposed Montgomery reduction enables an efficient Gaussian integer arithmetic that is suitable for elliptic curve cryptography. In particular, we consider the elliptic curve point multiplication according to the randomized initial point method, which is protected against side-channel attacks. The implementation of this protected point multiplication is significantly faster than comparable algorithms over ordinary prime fields.
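For readers unfamiliar with the classical building block, a sketch of Montgomery reduction over ordinary integers follows. The paper's extension to Gaussian integer rings is not reproduced here; the modulus and operands are arbitrary examples:

```python
def montgomery_reduce(t, n, r_bits, n_prime):
    """Classical Montgomery reduction over ordinary integers:
    returns t * R^{-1} mod n for R = 2**r_bits, assuming n is odd,
    t < n * R, and n_prime = -n^{-1} mod R."""
    r_mask = (1 << r_bits) - 1
    m = ((t & r_mask) * n_prime) & r_mask   # m = t * n' mod R
    u = (t + m * n) >> r_bits               # exact division by R
    return u - n if u >= n else u

n = 97                          # odd modulus
r_bits = 8                      # R = 256 > n
R = 1 << r_bits
n_prime = (-pow(n, -1, R)) % R  # precomputed -n^{-1} mod R
x, y = 55, 81
# Multiply in the Montgomery domain: map in, reduce the product, map out.
x_bar, y_bar = (x * R) % n, (y * R) % n
prod_bar = montgomery_reduce(x_bar * y_bar, n, r_bits, n_prime)
result = montgomery_reduce(prod_bar, n, r_bits, n_prime)
assert result == (x * y) % n
```

The appeal of the method is that the reduction uses only shifts, masks, and multiplications, never a trial division by n; the paper carries the same idea over to complex-valued Gaussian integer moduli.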
The Lempel–Ziv–Welch (LZW) algorithm is an important dictionary-based data compression approach that is used in many communication and storage systems. The parallel dictionary LZW (PDLZW) algorithm speeds up LZW encoding by using multiple dictionaries, which simplifies the parallel search. However, the compression gain of the PDLZW depends on the partitioning of the address space, i.e. on the sizes of the parallel dictionaries. This work proposes an address space partitioning technique that optimises the compression rate of the PDLZW. Numerical results for address spaces with 512, 1024, and 2048 entries demonstrate that the proposed address partitioning improves the performance of the PDLZW compared with the original proposal. These address space sizes are suitable for flash storage systems. Moreover, the PDLZW has relatively high memory requirements, which dominate the costs of a hardware implementation. This work proposes a recursive dictionary structure and a word partitioning technique that significantly reduce the memory size of the parallel dictionaries.
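As background, the baseline that the PDLZW parallelises is plain single-dictionary LZW, sketched below. This is generic textbook LZW, not the paper's hardware design, in which the single growing dictionary is replaced by several smaller ones sharing a partitioned address space:

```python
def lzw_encode(data):
    """Plain single-dictionary LZW over bytes: emit the code of the
    longest known prefix, then extend the dictionary by one entry."""
    dictionary = {bytes([i]): i for i in range(256)}  # single-byte seeds
    next_code = 256
    w = b""
    out = []
    for byte in data:
        wc = w + bytes([byte])
        if wc in dictionary:
            w = wc                        # grow the current match
        else:
            out.append(dictionary[w])     # emit code for longest match
            dictionary[wc] = next_code    # learn the new phrase
            next_code += 1
            w = bytes([byte])
    if w:
        out.append(dictionary[w])
    return out

codes = lzw_encode(b"abababab")
assert len(codes) < len(b"abababab")      # repeated phrases compress
```

In the PDLZW, phrases of different lengths go to different dictionaries, so the address-space split (how many codes each dictionary gets) directly decides the compression rate, which is the quantity the paper optimises.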
The Burrows–Wheeler transformation (BWT) is a reversible block sorting transform that is an integral part of many data compression algorithms. This work proposes a memory-efficient pipelined decoder for the BWT. In particular, the authors consider the limited context order BWT, which has low memory requirements and enables fast encoding. However, decoding the limited context order BWT is typically much slower than encoding. The proposed decoder pipeline provides a fast inverse BWT by splitting the decoding into several processing stages that are executed in parallel.
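A naive full-context BWT with a sentinel character, together with the classical (slow) table-based inversion, illustrates the transform being decoded. The paper's limited-context variant and its pipelined decoder are not reproduced here:

```python
def bwt_encode(s):
    """Full-context Burrows-Wheeler transform using a '\\0' sentinel:
    sort all rotations and keep the last column."""
    s = s + "\0"
    rotations = sorted(s[i:] + s[:i] for i in range(len(s)))
    return "".join(rot[-1] for rot in rotations)

def bwt_decode(last):
    """Invert the BWT the textbook way: repeatedly prepend the last
    column to the sorted table until full rotations are rebuilt."""
    table = [""] * len(last)
    for _ in range(len(last)):
        table = sorted(last[i] + table[i] for i in range(len(last)))
    row = next(r for r in table if r.endswith("\0"))
    return row.rstrip("\0")

assert bwt_decode(bwt_encode("banana")) == "banana"
```

The quadratic-time inversion above makes the paper's point concrete: encoding is a single sort, while straightforward decoding is far more expensive, which motivates a pipelined hardware decoder.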
Input–Output modellers are often faced with the task of estimating missing Use tables at basic prices and also valuation matrices of the individual countries. This paper examines a selection of estimation methods applied to the European context where the analysts are not in possession of superior data. The estimation methods are restricted to the use of automated methods that would require more than just the row and column sums of the tables (as in projections) but less than a combination of various conflicting information (as in compilation). The results are assessed against the official Supply, Use and Input–Output tables of Belgium, Germany, Italy, Netherlands, Finland, Austria and Slovakia by using matrix difference metrics. The main conclusion is that using the structures of previous years usually performs better than any other approach.
Climate protection in Seychelles through tourism: the advantages of a small-sized destination
(2020)
CO2 abatement costs are often low in developing countries. This is why most carbon offset projects are being implemented there. Nevertheless, this does not mean that the holiday resort and the project country are in any way related to each other. Linking compensation projects with the destination country could increase the willingness of air travellers to finance voluntary CO2 compensation measures.
This paper describes how a possible combination of CO2 compensation projects in the Seychelles could affect the voluntary carbon offset behaviour of Seychelles tourists. It examines, on the one hand, whether the voluntary willingness of Seychelles travellers to compensate can be increased and, on the other, whether tourists would be willing to visit a co-financed project in the Seychelles.
As a result, tourists' willingness to offset air-travel carbon emissions can be increased. Important factors include adequate information for all persons and a high degree of transparency on the part of the carbon offset providers. In addition, a broad interest in visiting the projects in the Seychelles during the holiday was expressed. An important condition for this is spatial proximity to the project. Due to their small size, the Seychelles are an ideal location for fulfilling this premise.
Electricity generation from renewable energies often fluctuates due to weather and other natural effects. The instrument of control energy (balancing energy) can compensate for these fluctuations and thus guarantee the system and supply security of the electricity grid. Luxury hotels on tourist islands could react to fluctuations in electricity generation and provide balancing energy. The purpose of this paper is to investigate the electricity consumption of luxury hotels to assess their potential as a source for providing control energy.
Four-Dimensional Hurwitz Signal Constellations, Set Partitioning, Detection, and Multilevel Coding
(2021)
The Hurwitz lattice provides the densest four-dimensional packing. This fact has motivated research on four-dimensional Hurwitz signal constellations for optical and wireless communications. This work presents a new algebraic construction of finite sets of Hurwitz integers that is inherently accompanied by a respective modulo operation. These signal constellations are investigated for transmission over the additive white Gaussian noise (AWGN) channel. It is shown that these signal constellations have a better constellation figure of merit and hence a better asymptotic performance over an AWGN channel when compared with conventional signal constellations with algebraic structure, e.g., two-dimensional Gaussian-integer constellations or four-dimensional Lipschitz-integer constellations. We introduce two concepts for set partitioning of the Hurwitz integers. The first method is useful to reduce the computational complexity of the symbol detection. This suboptimum detection approach achieves near-maximum-likelihood performance. In the second case, the partitioning exploits the algebraic structure of the Hurwitz signal constellations. We partition the Hurwitz integers into additive subgroups in a manner that the minimum Euclidean distance of each subgroup is larger than in the original set. This enables multilevel code constructions for the new signal constellations.
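The algebraic setting can be illustrated with a small enumeration of Hurwitz integers, using exact rational coordinates so the half-integer components stay precise. The membership test and the minimum-norm check below are standard facts about the Hurwitz lattice, not the paper's constellation construction:

```python
from fractions import Fraction
from itertools import product

def is_hurwitz(q):
    """A quaternion is a Hurwitz integer if all four components are
    integers, or all four are half-integers (odd multiples of 1/2)."""
    all_int = all(c.denominator == 1 for c in q)
    all_half = all(c.denominator == 2 for c in q)
    return all_int or all_half

def norm(q):
    """Squared Euclidean norm a^2 + b^2 + c^2 + d^2 of a quaternion."""
    return sum(c * c for c in q)

# Enumerate Hurwitz integers with components in {-1, -1/2, 0, 1/2, 1}:
coords = [Fraction(n, 2) for n in range(-2, 3)]
points = [q for q in product(coords, repeat=4) if is_hurwitz(q)]
nonzero_norms = {norm(q) for q in points if any(q)}
# The minimum squared norm (hence the minimum squared distance between
# lattice points) is 1, reflecting the dense D4/Hurwitz packing:
assert min(nonzero_norms) == 1
```

Finite constellations as in the paper are obtained by reducing this lattice modulo a Hurwitz integer; the set-partitioning schemes then select subgroups whose minimum squared distance exceeds 1.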
Rheumatoid arthritis is an autoimmune disease that causes chronic inflammation of synovial joints, often resulting in irreversible structural damage. The activity of the disease is evaluated by clinical examinations, laboratory tests, and patient self-assessment. The long-term course of the disease is assessed with radiographs of hands and feet. The evaluation of the X-ray images performed by trained medical staff requires several minutes per patient. We demonstrate that deep convolutional neural networks can be leveraged for a fully automated, fast, and reproducible scoring of X-ray images of patients with rheumatoid arthritis. A comparison of the predictions of different human experts and our deep learning system shows that there is no significant difference in the performance of human experts and our deep learning model.
The overall goal of this work is to detect and analyze a person's movement, breathing, and heart rate during sleep in a common bed overnight, without any additional physical contact. The measurement is performed with the help of sensors placed between the mattress and the frame. A two-stage pattern classification algorithm has been implemented that applies statistical analysis to recognize the patient's position. The system is implemented in a sensor network hosting several nodes and communication endpoints to support quick and efficient classification. The overall tests show convincing results for position recognition and a reasonable overlap in matching.
We identify 74 generic, reusable technical requirements based on the GDPR that can be applied to software products which process personal data. The requirements can be traced to corresponding articles and recitals of the GDPR and fulfill the key principles of lawfulness and transparency. Therefore, we present an approach to requirements engineering with regard to developing legally compliant software that satisfies the principles of privacy by design, privacy by default as well as security by design.
Reliability Assessment of an Unscented Kalman Filter by Using Ellipsoidal Enclosure Techniques
(2022)
The Unscented Kalman Filter (UKF) is widely used for the state, disturbance, and parameter estimation of nonlinear dynamic systems, for which both process and measurement uncertainties are represented in a probabilistic form. Although the UKF can often be shown to be more reliable for nonlinear processes than the linearization-based Extended Kalman Filter (EKF) due to the enhanced approximation capabilities of its underlying probability distribution, it is not a priori obvious whether its strategy for selecting sigma points is sufficiently accurate to handle nonlinearities in the system dynamics and output equations. Such inaccuracies may arise for sufficiently strong nonlinearities in combination with large state, disturbance, and parameter covariances. Then, computationally more demanding approaches such as particle filters or the representation of (multi-modal) probability densities with the help of (Gaussian) mixture representations are possible ways to resolve this issue. To detect cases in a systematic manner that are not reliably handled by a standard EKF or UKF, this paper proposes the computation of outer bounds for state domains that are compatible with a certain percentage of confidence under the assumption of normally distributed states with the help of a set-based ellipsoidal calculus. The practical applicability of this approach is demonstrated for the estimation of state variables and parameters for the nonlinear dynamics of an unmanned surface vessel (USV).
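The sigma-point selection that the paper scrutinises can be sketched for the scalar case using the standard scaled unscented transform; for a scalar variance the matrix square root reduces to a plain square root. Parameter values are conventional defaults, not from the paper:

```python
import math

def sigma_points_1d(mean, var, alpha=1e-3, beta=2.0, kappa=0.0):
    """Scaled unscented-transform sigma points for a scalar state (n = 1),
    with the standard mean and covariance weights."""
    n = 1
    lam = alpha ** 2 * (n + kappa) - n
    spread = math.sqrt((n + lam) * var)
    points = [mean, mean + spread, mean - spread]
    w_mean = [lam / (n + lam)] + [1 / (2 * (n + lam))] * 2
    w_cov = [lam / (n + lam) + (1 - alpha ** 2 + beta)] + [1 / (2 * (n + lam))] * 2
    return points, w_mean, w_cov

pts, wm, wc = sigma_points_1d(mean=2.0, var=0.5)
# The weighted sigma points reproduce the original mean and variance:
est_mean = sum(w * p for w, p in zip(wm, pts))
est_var = sum(w * (p - est_mean) ** 2 for w, p in zip(wc, pts))
assert abs(est_mean - 2.0) < 1e-6
assert abs(est_var - 0.5) < 1e-6
```

The consistency check above only holds exactly for the identity map; the paper's ellipsoidal enclosures bound how far this moment matching can degrade once the points are pushed through a strongly nonlinear dynamics or output equation.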
Experimental Validation of Ellipsoidal Techniques for State Estimation in Marine Applications
(2022)
A reliable quantification of the worst-case influence of model uncertainty and external disturbances is crucial for the localization of vessels in marine applications. This is especially true if uncertain GPS-based position measurements are used to update predicted vessel locations that are obtained from the evaluation of a ship’s state equation. To reflect real-life working conditions, these state equations need to account for uncertainty in the system model, such as imperfect actuation and external disturbances due to effects such as wind and currents. As an application scenario, the GPS-based localization of autonomous DDboat robots is considered in this paper. Using experimental data, the efficiency of an ellipsoidal approach, which exploits a bounded-error representation of disturbances and uncertainties, is demonstrated.
As part of a large-scale project in the Lake Constance region, which borders Germany, Austria and Switzerland, seamless learning prototypes are designed, implemented, and evaluated. A design-based research (DBR) approach is used for the development, implementation and evaluation. The project consists of one base project that brings together instructional design and instructional technology specialists as well as (educational) software architects, supporting seven subprojects in developing seamless-learning lighthouse implementations, mostly within higher and further education.
Industrial partners are involved in all subprojects. The authors are unaware of similar seamless learning projects using a DBR approach that include evaluation and redesign and allow for comparison of experiences by applying DBR across subprojects. The project aims to generate novel seamless learning implementations, respective novel research and novel software supporting different aspects of seamless learning. The poster will present the project design, delineate areas of research and development, and include initial empirical results.
Error correction coding based on soft-input decoding can significantly improve the reliability of non-volatile flash memories. This work proposes a soft-input decoder for generalized concatenated (GC) codes. GC codes are well suited for error correction in flash memories for high reliability data storage. We propose GC codes constructed from inner extended binary Bose-Chaudhuri-Hocquenghem (BCH) codes and outer Reed-Solomon codes. The extended BCH codes enable an efficient hard-input decoding. Furthermore, a low-complexity soft-input decoding method is proposed. This bit-flipping decoder uses a fixed number of test patterns and an algebraic decoder for soft-decoding. An acceptance criterion for the final candidate codeword is proposed. Combined with error and erasure decoding of the outer Reed-Solomon codes, this acceptance criterion can improve the decoding performance and reduce the decoding complexity. The presented simulation results show that the proposed bit-flipping decoder in combination with outer error and erasure decoding can outperform maximum likelihood decoding of the inner codes.
While existing resource extraction debates have contributed to a better understanding of national economic and political dilemmas and institutional responses, there are flaws in understanding the specific relevance of the various types of mining schemes for rural households to deal with the various problems they are confronted with. Our paper examines the perceptions of gold mining effects on households in Northern Burkina Faso. The findings of our survey across six districts representing different mining schemes (industrial, artisanal, no mining) highlight the fact that artisanal gold mining can generate job opportunities and cash income for local households; whereas industrial gold mining widely fails to do so. However, the general economic and environmental settings exert a much stronger influence on the household state. Gold mining effects are perceived as being less advantageous in districts where people are suffering from a lack of education, a higher vulnerability to drought and poor market access. Our findings provide empirical support for those who back the enhanced formalization of artisanal and small-scale mining (ASM) and policies that entail more rigorous state monitoring of mining concessions, especially in economic and environmentally disadvantaged contexts. Effectively addressing communal and pro-poor development requires greater attention to the political economy of ASM and corporate mining. It also calls for a greater inclusion of local mining stakeholders and a more effective alignment of international regulatory and advocacy efforts.
Mapping of tree seedlings is useful for tasks ranging from monitoring natural succession and regeneration to effective silvicultural management. Development of methods that are both accurate and cost-effective is especially important considering the dramatic increase in tree planting that is required globally to mitigate the impacts of climate change. The combination of high-resolution imagery from unmanned aerial vehicles and object detection by convolutional neural networks (CNNs) is one promising approach. However, unbiased assessments of these models and methods to integrate them into geospatial workflows are lacking. In this study, we present a method for rapid, large-scale mapping of young conifer seedlings using CNNs applied to RGB orthomosaic imagery. Importantly, we provide an unbiased assessment of model performance by using two well-characterised trial sites together containing over 30,000 seedlings to assemble datasets with a high level of completeness. Our results showed CNN-based models trained on two sites detected seedlings with sensitivities of 99.5% and 98.8%. False positives due to tall weeds at one site and naturally regenerating seedlings of the same species led to slightly lower precision of 98.5% and 96.7%. A model trained on examples from both sites had 99.4% sensitivity and precision of 97%, showing applicability across sites. Additional testing showed that the CNN model was able to detect 68.7% of obscured seedlings missed during the initial annotation of the imagery but present in the field data. Finally, we demonstrate the potential to use a form of weakly supervised training and a tile-based processing chain to enhance the accuracy and efficiency of CNNs applied to large, high-resolution orthomosaics.
The present work proposes the use of modern ICT technologies, such as smartphones, NFC, the internet, and web technologies, to help patients carry out their therapies. The implemented system provides a calendar with intake reminders, ensures drug identification through NFC, and allows remote assistance from healthcare staff and family members to check and manage the therapy in real time. The system also provides centralized information on the patient's therapeutic situation, helpful in choosing new compatible therapies.
Thermochemical surface hardening is used to overcome the weak mechanical performance of austenitic and duplex stainless steels. Both low-temperature carburizing and nitrocarburizing can improve the hardness, wear, galling, and cavitation resistance, while maintaining their good corrosion resistance. Therefore, it is crucial to not form chromium-rich precipitates during hardening as these can deteriorate the passivity of the alloy. The hardening parameters, the chemical composition of the steel, and the manufacturing route of a component determine whether precipitates are formed. This article gives an overview of suitable alloys for low-temperature surface hardening and the performance under corrosive loading.
Twenty-first century infrastructure needs to respond to changing demographics and become climate neutral, resilient, and economically affordable, while remaining a driver for development and shared prosperity. However, the infrastructure sector remains one of the least innovative and digitalized, plagued by delays, cost overruns, and benefit shortfalls. The authors assessed trends and barriers in the planning and delivery of infrastructure based on secondary research, qualitative interviews with internationally leading experts, and expert workshops. The analysis concludes that the root cause of the industry's problems is the prevailing fragmentation of the infrastructure value chain and a lacking long-term vision for infrastructure. To help overcome these challenges, an integration of the value chain is needed. The authors propose that this could be achieved through a use-case-based as well as vision- and governance-driven creation of federated digital platforms applied to infrastructure projects, and they outline a concept. Digital platforms enable full-lifecycle participation and responsible governance guided by a shared infrastructure vision. This paper contributed a policy recommendation to the Group of Twenty (G20) in 2021.
If the process contains a delay (dead time), the Nyquist criterion is well suited to derive a PI or PID tuning rule, because the delay is taken into account without approximation. The speed of the closed loop is tuned naturally through the crossover frequency, and the robustness and performance goals are translated into the phase margin.
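The relationship between crossover frequency and phase margin can be evaluated numerically without approximating the delay. A minimal sketch for a hypothetical first-order plant with dead time under PI control (all parameter values below are illustrative assumptions, not taken from the article):

```python
import cmath
import math

# Hypothetical plant: G(s) = K * exp(-theta*s) / (T*s + 1)
# PI controller:      C(s) = Kp * (1 + 1/(Ti*s))
K, T, theta = 1.0, 2.0, 0.5   # plant gain, time constant, dead time
Kp, Ti = 1.5, 2.0             # illustrative PI settings

def open_loop(w):
    """Open-loop frequency response L(jw) = C(jw) * G(jw); the delay
    exp(-j*w*theta) is evaluated exactly, without approximation."""
    s = 1j * w
    plant = K * cmath.exp(-theta * s) / (T * s + 1)
    ctrl = Kp * (1 + 1 / (Ti * s))
    return ctrl * plant

# Gain-crossover frequency w_c where |L(jw)| = 1, found by bisection
# (|L| decreases monotonically here, so bisection is sufficient).
lo, hi = 1e-4, 100.0
for _ in range(200):
    mid = (lo + hi) / 2
    if abs(open_loop(mid)) > 1:
        lo = mid
    else:
        hi = mid
w_c = (lo + hi) / 2

# Phase margin in degrees: 180 deg + arg L(j*w_c)
pm = 180 + math.degrees(cmath.phase(open_loop(w_c)))
print(f"crossover frequency ~ {w_c:.3f} rad/s, phase margin ~ {pm:.1f} deg")
```

A tuning rule of the kind described would invert this computation: fix a desired crossover frequency and phase margin, then solve for the controller parameters.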
Background
This is a systematic review protocol to identify automated features, applied technologies, and algorithms in the electronic early warning/track and triage system (EW/TTS) developed to predict clinical deterioration (CD).
Methodology
This study will be conducted using the PubMed, Scopus, and Web of Science databases to evaluate EW/TTS in terms of their automated features, technologies, and algorithms. To this end, we will include any English-language articles reporting an EW/TTS, without time limitation. Retrieved records will be independently screened by two authors, and relevant data will be extracted from the included studies and abstracted for further analysis. The included articles will be evaluated independently by two researchers using the JBI critical appraisal checklist.
Discussion
This study is an effort to catalogue the automated features available in electronic EW/TTS and to shed light on the applied technologies, the level of automation, and the utilized algorithms, in order to smooth the road toward fully automated EW/TTS as a potential solution for preventing CD and its adverse consequences.
This study aims to adapt the CEFR in developing an integrative approach-based teaching material model for a pre-basic BISOL class. The study followed the development research design of Borg and Gall, whose stages are problem identification; formulation of a hypothetical draft model; feasibility testing by experts; product revision; and product effectiveness testing. The data were collected through surveys, interviews, and documentation. The needs identification yielded 10 themes, 5 tasks per theme, and diverse evaluations comprising theory, in-class practice, and real-world field assignments, both individual and group-based. These identified needs require alignment with CEFR level A1 for the development of BISOL learning. The findings were subsequently incorporated into the design of the teaching material model, and the results indicated that tailoring the CEFR to BISOL as an integrative language teaching material model was feasible for classroom application, as assessed by experts. The implications suggest that integrating the CEFR into BISOL is highly feasible for the development of teaching materials, and teachers can leverage this instructional model to enhance students' proficiency in the Indonesian language.