The research group AIST offers students the opportunity to work on their degree theses (Bachelor's, Master's) within ongoing research projects, allowing them to engage with the respective topic in a practical and more intensive way. Practical supervision is provided by a research assistant, in addition to the assessor from the University of Applied Sciences.


In her master’s thesis, Sophie Bauernfeind dealt with tasks in the context of the PICA research project.


The global impact of the COVID-19 pandemic highlighted the need for effective healthcare-system planning and resource allocation, and showed how much efficient planning depends on accurate simulations of healthcare systems. However, public medical datasets often lack specificity to individual hospitals or healthcare providers, leading to potential differences in patient demographics and needs.

This master's thesis combines a literature review focusing on the Synthea tool with the implementation of a tool built on the Synthea system. The tool creates standardized patient information, incorporating the statistical properties of the provided patient data. The goal is to improve Synthea's patient generation by feeding in additional information from the provided data.

This tool was used to adapt Synthea's ground truth. The analysis indicates that this influences Synthea's generation process, making it possible to generate synthetic patients that more closely resemble the provided patient population.
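As a rough illustration of the kind of statistics such a tool can derive from provided patient data, the following Python sketch computes age-band and gender proportions that could replace census-based ground-truth weights. The record format and the weight representation are assumptions for illustration, not Synthea's actual file layout:

```python
from collections import Counter

def demographic_weights(patients):
    """Derive relative age-band and gender frequencies from local
    patient records (list of dicts with 'age' and 'gender' keys).
    These proportions could then stand in for census-based
    ground-truth weights. Illustrative only, not Synthea's format."""
    bands = Counter()
    genders = Counter()
    for p in patients:
        decade = (p['age'] // 10) * 10
        bands[f"{decade}-{decade + 9}"] += 1
        genders[p['gender']] += 1
    n = len(patients)
    return ({band: count / n for band, count in bands.items()},
            {gender: count / n for gender, count in genders.items()})

# Hypothetical hospital-specific records
records = [{'age': 72, 'gender': 'F'}, {'age': 68, 'gender': 'M'},
           {'age': 75, 'gender': 'F'}, {'age': 34, 'gender': 'M'}]
age_w, gender_w = demographic_weights(records)
```

A generator seeded with these weights would then draw synthetic patients whose demographics follow the local population rather than public census data.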

In her master’s thesis, Elisabeth Mayrhuber dealt with tasks in cooperation with STIWA Holding GmbH.


Analysing log data is a widely adopted technique in industry, known as Process Mining (PM), to assess the performance and development of systems. However, traditional analysis approaches often overlook the valuable semantic information that can be extracted from log data. By incorporating semantic metadata, provided by domain experts or extracted from the dataset itself, the quality of insights can be enhanced, offering new possibilities to draw meaningful conclusions from the data.
This master's thesis aims to create a semantic header from domain knowledge, represented as an ontology. The ontology captures significant process semantics, including entities and relationships between features. By integrating this semantic header into event data, new opportunities for data analysis arise. The primary advantage lies in the ability to shift the perspective through which the data is analysed by creating a new event log with the same activities but a different case identifier. This perspective shift enables analysing a process from the viewpoint of different objects, while still using familiar formats like eXtensible Event Stream (XES) without requiring complex data exchange protocols.
Analysing processes from diverse perspectives is crucial for gaining insights into bottlenecks and identifying areas where performance enhancements can be implemented. For example, in a laboratory setting, events can be analysed from the perspective of the laboratory, an employee, or even an individual patient. By dynamically changing the perspective of the data based on semantic metadata, analysts can uncover hidden patterns and make informed decisions. Furthermore, this thesis analyses common data and exchange formats used in PM, such as Mining eXtensible Markup Language (MXML), XES, Object-Centric Event Log (OCEL) and Object-Centric Event Data (OCED), and outlines the main requirements that need to be addressed within data exchange formats.
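At its core, the perspective shift amounts to regrouping the same events under a different case identifier. A minimal Python sketch (the attribute names are illustrative, not taken from the XES standard):

```python
def traces(event_log, case_key):
    """Group events into traces using the given attribute as case
    identifier, yielding the same activities from a new perspective."""
    cases = {}
    for event in sorted(event_log, key=lambda e: e['timestamp']):
        cases.setdefault(event[case_key], []).append(event['activity'])
    return cases

# Toy laboratory event log; each event carries several candidate
# case identifiers (patient, device) besides its activity.
log = [
    {'timestamp': 1, 'activity': 'take sample', 'patient': 'P1', 'device': 'D1'},
    {'timestamp': 2, 'activity': 'take sample', 'patient': 'P2', 'device': 'D1'},
    {'timestamp': 3, 'activity': 'analyse',     'patient': 'P1', 'device': 'D2'},
    {'timestamp': 4, 'activity': 'analyse',     'patient': 'P2', 'device': 'D2'},
]

by_patient = traces(log, 'patient')   # patient perspective
by_device = traces(log, 'device')     # device perspective
```

The same four events yield patient traces ('take sample' then 'analyse' per patient) or device traces (two 'take sample' events on one device), depending only on which attribute serves as the case identifier.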

Andreas Erhard dealt with tasks from the PICA project in his master's thesis.


In the hospital and radiology sector, the growing number of patients and rising costs are an increasing problem. It is becoming ever more important to use resources efficiently in order to meet both the needs of patients and the requirements of the organization. In this context, it is crucial to collect data in a more targeted manner and to make it measurable in order to identify and improve weak points in the processes. Process mining offers a promising approach: by analyzing and optimizing processes on the basis of data collected in information systems, it can help to shorten waiting times for patients and at the same time reduce organizational costs.
In radiology in particular, process mining offers great potential for improving process quality and efficiency. By analyzing workflows, bottlenecks and inefficient processes can be identified and workflows optimized. However, a major hurdle for the use of process mining in radiology is insufficient data quality. It is therefore necessary to meet certain organizational and data-specific criteria to improve data quality and enable the use of process mining in radiology.
Maturity models have been developed to capture and evaluate these criteria, allowing a systematic assessment of data quality and of the organizational requirements for the use of process mining in radiology. The aim of this master's thesis is to create a canonical model, based on literature research and expert interviews on maturity models, that facilitates the use of process mining in radiology and improves the quality and efficiency of the processes. Such a model can enable hospitals and radiology departments to improve their data quality.


Martin Hanreich wrote his master’s thesis on aspects of the GEMINI project.


Clustering is a widely used machine learning technique for sorting elements into groups according to their similarity to each other. For the algorithm to decide which elements belong in the same cluster, it needs information about the similarity between the elements. For numerical values, this similarity is easy to calculate, since simple mathematical operations can be applied. For so-called categorical values like 'coarse', 'Data Scientist', or 'Google Chrome', this calculation is much more difficult. To use them for the clustering task, several methods can be applied, such as encoding the values or using the distribution of the values in the dataset to quantify their similarity. Another way of looking at categorical data values is to interpret them as a textual representation of a real-world object or concept; the similarity between them is then calculated based on their real-world properties and function. For example, for the word 'cat', the words 'animal', 'pet', or 'four-legged' can come to mind. This task seems straightforward at first, but if it is to be used regularly in a practical setting, many hurdles need to be overcome. In recent years, a new deep learning model type called the transformer has come to prominence, setting the benchmark for a variety of natural language tasks. In particular, GPT-3, a member of the transformer family, has received a lot of attention for its near human-like text generation. This thesis explores how a model like GPT-3 can be used for the task of semantic clustering. Several semantic similarity measures involving the GPT-3 model are discussed and then evaluated on benchmark datasets using human judgement as reference. It is then described how the similarity measures can be incorporated into the clustering task, along with practical considerations and possible issues that could hinder practical application.
The general approach and these problems are showcased based on a created prototype for semantic clustering written in the programming language Python. This thesis offers an overview of this broad topic while diving deeper into certain aspects which need to be considered to make it more practical for real-world applications.
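The overall idea can be sketched without the actual GPT-3 model: given vectors standing in for model-derived representations of the categorical values, pairwise cosine similarity drives a simple single-link grouping. The vectors below are hand-made toy values, not model output:

```python
import math

def cosine(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def cluster(items, vectors, threshold):
    """Single-link grouping via union-find: items whose similarity
    reaches the threshold end up in the same cluster."""
    parent = list(range(len(items)))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path compression
            i = parent[i]
        return i

    for i in range(len(items)):
        for j in range(i + 1, len(items)):
            if cosine(vectors[i], vectors[j]) >= threshold:
                parent[find(i)] = find(j)

    groups = {}
    for i, item in enumerate(items):
        groups.setdefault(find(i), []).append(item)
    return sorted(groups.values())

# Hand-made toy vectors standing in for model-derived semantics
words = ['cat', 'dog', 'Chrome', 'Firefox']
vecs = [[0.9, 0.1], [0.8, 0.2], [0.1, 0.9], [0.2, 0.8]]
result = cluster(words, vecs, threshold=0.95)
```

With these toy vectors, the animals and the browsers fall into separate clusters; in the thesis setting, the similarity scores would instead come from the GPT-3-based measures.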

Konstantin Papesh wrote his master’s thesis on aspects of the SOCToolkit project.


Security analysts use Security Orchestration, Automation, and Response (SOAR) platforms to dissect and analyse possibly malicious data. These platforms rely on external services to enrich the given data. However, the data quality of new services is often unknown. To mitigate this issue, metrics can be established to assess different parameters related to data quality.

This thesis analyses Cyber Threat Intelligence (CTI) metrics currently proposed in the literature on their viability within Structured Threat Information Expression (STIX)-based SOAR platforms and implements the first version of such a measuring framework. Multiple metrics are compared against a list of requirements set by the nature of SOAR platforms. After viable metrics are identified, they are implemented within a framework which hooks into an existing SOAR platform. Finally, the framework is tested, and the calculated metrics are discussed.

The conclusion is that there are metrics available that can be altered to work with SOAR platforms. However, some metrics rely on parameters not readily accessible from SOAR platforms. That means the design of these platforms also needs to consider the requirements for data quality frameworks.
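As an illustration of the kind of metric such a framework computes, the following sketch scores the completeness of a STIX-like indicator; the expected-field list is an assumption for illustration, not a STIX requirement:

```python
def completeness(indicator, expected_fields):
    """Fraction of expected fields that are present and non-empty,
    one of the simpler data-quality metrics discussed in the CTI
    literature. The field list is illustrative only."""
    present = sum(1 for field in expected_fields if indicator.get(field))
    return present / len(expected_fields)

# Hypothetical enriched indicator returned by an external service
fields = ['pattern', 'valid_from', 'labels', 'created_by_ref']
ioc = {'pattern': "[ipv4-addr:value = '203.0.113.5']",
       'valid_from': '2023-01-01T00:00:00Z',
       'labels': ['malicious-activity']}
score = completeness(ioc, fields)
```

A framework hooked into a SOAR platform would compute such scores per enrichment service and aggregate them over time, exactly the kind of parameter the thesis notes is not always readily accessible from the platform itself.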

Clara Diesenreiter wrote her bachelor’s thesis on aspects of the PICA project.


As a consequence of demographic change towards an ageing society, more attention is being paid to the care sector. Due to population trends and socio-economic constraints, there is increased specialization and concentration of resources in the health care system. In order to guarantee high-quality health care, a continuous transfer of information between service providers is indispensable. Advancing digitalization facilitates the structured exchange of care data, through digital documents, across the boundaries of individual IT systems. This requires interoperability of communication channels and data. International terminology systems come into play here, aiming to provide precise names and identifiers for clinically relevant data, such as diagnoses or actions. The aim of this thesis is to evaluate suitable communication standards and terminology systems to advance the digital documentation of care services. Since there are no harmonized nationwide specifications for care concepts and care services, the present work refers only to the system of mobile care in Upper Austria. To achieve the research objective, the fields of action of mobile services were analysed. The care acts derived from the analysis could be encoded using two terminology systems, the International Classification for Nursing Practice (ICNP) and the Systematized Nomenclature of Medicine Clinical Terms (SNOMED CT). Based on the coded care acts, an interoperable example document was created that corresponds to the Clinical Document Architecture (CDA).
The thesis shows that it is possible to encode care acts. On one hand, this was done using expressions that were already predefined by the terminology systems. On the other hand, expressions were proposed that correspond to the rules of the terminology systems. Thus, it could be shown that both ICNP and SNOMED CT can be sufficiently extended to satisfy the needs of the applied nursing concept as well as the care services. For the Austria-wide digital documentation of care activities, uniform care concepts are needed to create a consistent database that can be transferred into suitable terminology systems.

Anthony Alessi (PXL University, Belgium) continued the results of the research project EpiMon within his bachelor's thesis during his semester abroad in Hagenberg.


Advanced Information Systems and Technology, or AIST for short, is a research group of the University of Applied Sciences Upper Austria, located at Campus Hagenberg. Its main focus lies on Machine Learning & Data Mining, Computer Vision and eHealth [1]. For this internship, the task was to work on the EpiMon project.

The goal of the EpiMon project is to detect oncoming epileptic seizures in infantile to juvenile patients while they are sleeping. One sign of an oncoming epileptic seizure is the patient waking up and staring fixedly, after which a seizure may occur; this is called the Prévost sign. There are other signs, such as muscle contractions, but technology to monitor those symptoms already exists. This project focuses specifically on the eyes and consists of two hardware components: a Raspberry Pi with night-vision cameras and a smartphone.

This solution uses night-vision cameras to monitor the patient while asleep. These cameras are connected to a Raspberry Pi, which acts as the main system connecting all the other components. The images from the cameras are transferred to the Raspberry Pi, which runs face detection to locate the face in the image and crop it. Afterwards, it feeds the cropped face to a model that recognizes whether the patient's eyes are open or closed. The alarm rings when open eyes are detected for a longer period of time. The mobile application on the smartphone is used to control when the Raspberry Pi starts monitoring, along with several settings.
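The alarm decision can be sketched as a simple run-length check over the per-frame outputs of the eye classifier (the consecutive-frame threshold is an assumption for illustration, not the project's calibrated value):

```python
def should_alarm(eye_states, min_consecutive):
    """Trigger the alarm once 'open' has been observed for a given
    number of consecutive frames. Per-frame states would come from
    the open/closed-eye classifier running on the Raspberry Pi."""
    run = 0
    for state in eye_states:
        run = run + 1 if state == 'open' else 0
        if run >= min_consecutive:
            return True
    return False

# Simulated classifier output for a sequence of camera frames
frames = ['closed', 'closed', 'open', 'closed', 'open', 'open', 'open']
alarm = should_alarm(frames, min_consecutive=3)
```

Requiring several consecutive 'open' frames filters out single misclassified frames and brief eye openings that do not match the sustained gazing of the Prévost sign.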

The research topic of this thesis is face recognition. A patient might be sleeping in the same bed as a partner or family member, so the main system must know which face it has to monitor. By taking several pictures of the patient and their family members from different angles, the necessary data for the face recognition model can be provided.


Simone Sandler wrote her master’s thesis on one aspect of the VOIGAS project regarding the classification of restaurant articles into a taxonomy.


This thesis deals with the classification of articles from different restaurants into a taxonomy. The available features are the name of the article and a restaurant-internal category. Both features are strings provided by the restaurant owner and are therefore error-prone. Methods are developed to classify this kind of data into hierarchically structured categories. In this thesis, the categories are represented by a food and drink taxonomy, which is stored as a tree. The methods can be divided into preclassification and classification methods: preclassification methods attempt to find the best subtree inside the category tree for an article, and classification methods classify the article inside this subtree. In total, three preclassification and two classification methods are developed. The first preclassification method, Category Similarity Preclassification, works by comparing the name of the internal category with the category names inside the taxonomy. The Common Ancestor Preclassification searches for the common ancestor of already classified items with the same internal category, and the Substring Preclassification compares the internal category with substrings that are unique to one category. The classification methods are called String Similarity Classification and Substring Classification: the first compares the article name with the names of already classified articles, and the second compares the article name with substrings unique to a category inside the preclassified subtree. These methods are part of a semi-automatic classification system that classifies the articles using the developed methods and offers the possibility to extend the food and drink taxonomy when needed.
Due to the error-prone data, the percentage of classifiable articles is estimated to be 72%. The classification system is able to classify these articles with an accuracy of 83% for finding the best possible category and 90% for finding a fitting category.
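The String Similarity Classification step can be sketched with a standard string-similarity ratio over already classified article names; the article names and categories below are hypothetical, and the thesis's actual similarity measure may differ:

```python
from difflib import SequenceMatcher

def classify(article_name, classified):
    """Assign the category of the most similar already-classified
    article name (a simplified String Similarity Classification).
    `classified` is a list of (name, category) pairs taken from the
    preclassified subtree."""
    best_category, best_score = None, 0.0
    for name, category in classified:
        score = SequenceMatcher(None, article_name.lower(), name.lower()).ratio()
        if score > best_score:
            best_category, best_score = category, score
    return best_category, best_score

# Hypothetical already-classified articles inside a preclassified subtree
known = [('Wiener Schnitzel', 'main dishes'),
         ('Apfelstrudel', 'desserts'),
         ('Mineralwasser', 'soft drinks')]
category, score = classify('Schnitzel Wiener Art', known)
```

Because owner-provided names are error-prone, a fuzzy ratio like this tolerates reordered words and typos better than exact matching would.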

Sophie Bauernfeind worked as part of her bachelor's thesis – with the title “FHIR-Tooling: An Interactive Editor for Designing and Writing FHIR-Shorthand Specifications” – on tool support for creating FHIR specifications in Microsoft Visual Studio Code during her summer internship in the Oppa project.


Standardization and interoperability of software systems in medicine is a particularly important topic. The latest Health Level 7 (HL7®) standard – Fast Healthcare Interoperability Resources (FHIR®) – is very present in the community. FHIR maps medically relevant use cases, such as an immunization record, with implementation guides. An implementation guide is composed of several medical components, which means that many lines of code come together and the guide takes on a large size. To facilitate the creation of such implementation guides, the FHIR-Shorthand (FSH) language was developed. These FSH files can be created and edited in any editor. However, there is little software support for this language.
The goal of this work is to implement such tool support. Syntax highlighting, automatic completion of structures and keywords, as well as validation of the created file are the functions to be provided. The created support should be freely available and actively developed further by the HL7 FHIR community.

Clara Kainz worked on her bachelor thesis during her summer internship at project Flink. She was concerned with data augmentation methods for training data sets in the context of deep learning with the soccer simulation game FIFA20.


Within the EDEN project Lukas Reithmeier worked in the field of neural networks on his master thesis “Influence of depth data on the performance of instance segmentation utilizing Mask R-CNN and RGB-D images”.


Elevators are a vital means of transportation in modern urban life. If an emergency happens in an elevator, a person must press an emergency button to call for help. Since pressing the emergency button is not always possible, the research group AIST is developing a system that detects emergencies in elevators using RGB-D camera footage, comprising both RGB images and depth images. To detect persons in various poses and various different objects, and to create masks of classified persons and objects in order to track them over the course of several camera frames, an instance segmentation module using the Mask R-CNN algorithm is to be added to this system.

Due to metal walls and mirrors, elevator environments are highly reflective, which makes camera data noisy, especially the depth images provided by RGB-D cameras. Instance segmentation algorithms like Mask R-CNN are well understood when using RGB images. This raises the question of whether instance segmentation can be improved by additionally using the highly reflective and noisy depth images provided by the RGB-D cameras.

In this thesis, four model versions that all use the Mask R-CNN algorithm but differ in their input and backbone network are compared. The first uses solely RGB images, the second solely depth images and the third RGB-D images; these three model versions use a ResNet-FPN backbone network. The fourth model version uses RGB-D images and a FuseNet-FPN backbone network. This thesis also introduces the Elevator RGB-D dataset, which contains RGB-D images of elevator scenes. To provide a fair comparison, the hyperparameters of the four model versions are optimized using Tree-structured Parzen Estimators. To improve generalization, the models are pre-trained on the SUN RGB-D dataset; transfer learning is used to initialize them with the pre-trained weights before training on the Elevator RGB-D dataset.

This thesis showed that combining RGB and depth images performs on par with using RGB images alone for instance segmentation in highly reflective and noisy elevator environments. Both approaches lead to better results than relying on depth images only. Using a backbone convolutional neural network specialized for RGB-D images improves the performance of the Mask R-CNN.

Johann Aichberger wrote his master thesis on “Mining Software Repositories for the Effects of Design Patterns on Software Quality”.


Design patterns are reusable solutions for commonly occurring problems in software design. First described in 1994 by the Gang of Four, they have gained widespread adoption in many areas of software development throughout the years. Furthermore, design patterns have also garnered an active research community around them, which investigates the effects that design patterns have on different software quality attributes. However, a common shortcoming of existing studies is that they only analyze the quality effects of design patterns on a relatively small scale, covering no more than a few hundred projects per case study. This calls into question how generalizable the results of these small-scale case studies are. Pursuing more generalizable results, this thesis conducts a much larger-scale analysis of the quality effects of design patterns. To accomplish this, software metric and design pattern data for 90,000 projects from the Maven Central repository is collected using the metrics calculation tool CKJM extended and the design pattern detection tool SSA. Correlations between design patterns and software quality attributes are then analyzed using software metrics as proxies for software quality by following the methodology described by the QMOOD quality model. The results of the analysis suggest that design patterns are positively correlated with functionality and reusability, but negatively correlated with understandability, which is consistent with the results of existing smaller-scale case studies.
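The core of such a correlation analysis can be sketched in a few lines: per project, a design-pattern count is correlated with a software metric acting as quality proxy. The per-project values below are toy data, not results from the study:

```python
def pearson(xs, ys):
    """Pearson correlation coefficient between two equally long
    series, e.g. per-project pattern counts vs. a software metric."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    std_x = sum((x - mean_x) ** 2 for x in xs) ** 0.5
    std_y = sum((y - mean_y) ** 2 for y in ys) ** 0.5
    return cov / (std_x * std_y)

# Hypothetical per-project values: pattern instances vs. a metric
pattern_counts = [0, 1, 2, 3, 4]
metric_values = [2.0, 2.5, 3.1, 3.4, 4.0]
r = pearson(pattern_counts, metric_values)
```

At the scale of 90,000 projects, such coefficients (the study may well use a rank-based variant instead) summarize whether a pattern's presence tends to move a quality proxy up or down.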

As part of the Kimiku Project, Eva-Maria Spitzer worked on her master thesis on “An Exploratory Approach for Finding Similarities Within Heterogeneous Data Sets of Small and Medium-Sized Enterprises”.


Customer loyalty programs are an essential tool for enterprises to get a better understanding of their customers' needs and to take appropriate actions to increase their satisfaction. A successful realisation of such programs requires detailed analyses, presupposing a lot of customer data. However, many Small and Medium-Sized Enterprises have only small amounts of data. Thus, they can only use small data sets for analyses, which can lead to poor models and inaccurate results.

The idea is to find enterprises that are similar in their data characteristics in order to use a trained model from one enterprise for another similar enterprise. If an enterprise has enough data to build a good model, this model can be applied to the data of similar enterprises with less data.

The thesis describes a possible approach to identify similar enterprises, as well as the identification of features and algorithms that lead to good performing models with the respective data sets. For this purpose, data sets of six Small and Medium-Sized Enterprises are used, which were recorded by an Austrian software company via a customer loyalty app.

Extracted data characteristics identify similar enterprises. These characteristics are related to the specific use case and aim to represent the data of the respective enterprises in the best possible way. For the determination of the performance of different features and algorithms (e.g. Random Forest, Support Vector Regression) for different data sets, a regression model is trained and evaluated for each feature/algorithm combination. The particular combinations of features and algorithms are clustered together with the data characteristics using Agglomerative Hierarchical Clustering. Error metrics derived from the runs of the regression models evaluate the performance of the respective feature/algorithm combinations.

This work shows some challenges in detecting enterprises with similar data sets as well as feature and algorithm combinations that work best for specific data sets. Despite the small amount of available data, it is shown that it is feasible to find similar enterprises by using data characteristics. Although the results do not indicate features and algorithms that influence the regression task across enterprises, it was possible to observe the influence of features on specific data sets. The results of this thesis provide further research opportunities, such as detailed analyses of particular features or the prediction of the Normalised Root Mean Square Error for given features, algorithms and data characteristics for a regression task. In summary, this work provides the foundations for applying a trained model to other data sets.
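The Normalised Root Mean Square Error mentioned above can be computed as the RMSE divided by the range of the observed values; normalising by the range is one common convention (the thesis may normalise by the mean instead), and it is what makes errors comparable across the differently scaled data sets of the enterprises:

```python
import math

def nrmse(actual, predicted):
    """Normalised Root Mean Square Error: RMSE divided by the range
    of the observed values. Range-normalisation is an assumption;
    other conventions divide by the mean."""
    mse = sum((a - p) ** 2 for a, p in zip(actual, predicted)) / len(actual)
    return math.sqrt(mse) / (max(actual) - min(actual))

# Toy regression outcome for one feature/algorithm combination
y_true = [10.0, 20.0, 30.0, 40.0]
y_pred = [12.0, 18.0, 33.0, 39.0]
error = nrmse(y_true, y_pred)
```

Each feature/algorithm combination then contributes one such score per data set, and these scores are what gets clustered together with the data characteristics.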



David Baumgartner worked on his master's thesis within the EDEN project.


Elevators are an essential part of the lives of millions of people every day. Most people assume that elevators work around the clock, but what about human emergencies such as heart attacks that leave a person unable to press the emergency button? There are real cases in which people died in elevators, which could have been prevented if an autonomous system had detected people on the floor in need of help. An additional challenge for this project is GDPR compliance, i.e. not monitoring the people using the elevators. Automatic emergency detection in elevators is of interest because it enables exactly this scenario of detecting human emergencies. The project in which this is realized consists of a client that runs autonomously in the elevator and tracks its emergency state, and a self-optimizing background service. This work proposes a system that aims to solve the self-optimization of the classification in the elevator system as the background service. Such a system faces more than one issue: first, how to extract the correct label from newly arriving samples of a real emergency or action; second, what the most efficient parameter setting for a classifier is, and when a classifier is thoroughly tested and can be deployed to the client running in the elevator. The goal of this work is to present a prototype that addresses these main issues, one example being the dynamic emergence of new classes at runtime without wasting resources on creating a new flat classifier. The results further showcase, based on two different datasets, the amount of time required to find a better solution than searching for one manually, and its downsides. One of the most relevant results is the overall structure of the solution, which combines state-of-the-art technologies into one system and demonstrates a solution that is extendable in the future.

Rainer Meindl worked on his master's thesis within the EDEN project.


Detecting and reacting to emergency situations in daily life has been a topic of research for the past few years, but work has focused either on individuals prone to emergency situations, such as the elderly, or on a broader scale, like detecting activity in large crowds or among pedestrians. Due to the rapidly increasing availability and power of external sensors, as well as the introduction of more reliable data, emergency detection and management should also be possible in confined spaces, such as elevators. Together with VIEW – Elevator as domain experts and partner of the FH-OOE AIST research group, this thesis focuses on introducing activity recognition, and subsequently emergency detection, into the elevator domain. It builds on top of a stateless system for object and person detection, which was implemented as part of the research project, and uses the stateless data produced by that system. Instead of using stochastic methods or artificial intelligence, this thesis aims to solve the problem of emergency detection by introducing a generic state-oriented component, as it allows for easy, deterministic and human-readable actions and results.
While introducing the reader to the basics of activity recognition and state tracking techniques, reflections are made on which approaches are the most feasible and why. It compares multiple state machine designs and ultimately suggests a new state machine definition based on coloured Petri nets, the dynamic non-deterministic Petri net (DNPN).
Based on the DNPN, a state machine is designed that allows tracking the elevator state and all its occupants, including people and their objects. Furthermore, an auxiliary system is designed, the system preparedness, which connects with the state machine and allows for further evaluation of the current elevator scenario and possibly predicting an emergency event.
Handling and evaluating these emergency scenarios is shown in a prototype implemented as part of the research project. It implements the defined DNPN and defines data structures in order to aggregate the stateless data for the state machine. Ultimately, the prototype as a whole is used to evaluate the state machine by feeding it previously generated test data, which has a fixed script and outcome. In the conclusion, the results of the state machine are discussed with reference to the expected outcome defined in the script, and continued work is suggested to improve the DNPN definition and, in fact, the whole system.
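A much simplified stand-in for such a state tracker, fed with stateless detection events, might look as follows; the event vocabulary and the emergency rule are assumptions for illustration, not the thesis's DNPN definition:

```python
class ElevatorState:
    """Minimal deterministic state tracker fed with stateless
    detection events. A toy stand-in for the DNPN-based state
    machine, illustrating the state-oriented idea only."""

    def __init__(self):
        self.occupants = set()
        self.door = 'closed'

    def feed(self, event):
        """Consume one (kind, payload) detection event."""
        kind, payload = event
        if kind == 'door':
            self.door = payload                      # 'open' / 'closed'
        elif kind == 'enter' and self.door == 'open':
            self.occupants.add(payload)
        elif kind == 'leave' and self.door == 'open':
            self.occupants.discard(payload)

    def emergency(self, person_down):
        # Toy rule: a single occupant lying down behind closed doors
        # could signal an emergency scenario.
        return person_down and self.door == 'closed' and len(self.occupants) == 1

sm = ElevatorState()
for event in [('door', 'open'), ('enter', 'p1'), ('door', 'closed')]:
    sm.feed(event)
```

Because every transition is an explicit, deterministic rule, the tracked state and any triggered emergency remain human-readable and reproducible against scripted test data.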

Andreas Pointner worked on ‘Graph-based transformations for model-driven software development’ within the project PASS.


Model transformation is a relevant part of modern software development, with several interesting new topics arising, especially in the area of Model-Driven Development (MDD). This thesis focuses on the development of a graph transformation framework. Part of the thesis is a theoretical analysis of the basics of model transformation, covering not only different types of model transformation but also the definition of models and meta models. Moreover, theoretical concepts such as Triple Graph Grammar (TGG) are analysed, and major frameworks like Atlas Transformation Language (ATL) and Epsilon Transformation Language (ETL) are explained. Apart from the already known concepts, a new concept is introduced for how a graph database like Neo4j can be combined with model transformation or with a model transformation framework. The core of the thesis is the design and implementation of the graph transformation framework; its concepts, the design patterns used, and the analogies to the previously analysed frameworks are discussed. The framework is evaluated in two different scenarios. The first is the transformation of a graph into an XML representation. The second is part of a research project in which a transformation between a 2D representation and a 3D representation of a building plan was created. Finally, the advantages and disadvantages of the framework are discussed and an outlook on further work is given. The evaluation shows that independence from and loose coupling to other frameworks is one of the framework's major advantages, but also that much of the functionality provided by other frameworks is missing, even though some of that functionality comes with considerable overhead.
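The first evaluation scenario, transforming a graph into an XML representation, can be sketched without any transformation-framework machinery; the node and edge schema below is an assumption for illustration:

```python
import xml.etree.ElementTree as ET

def graph_to_xml(nodes, edges):
    """Transform a simple directed graph (node-id -> label mapping
    plus (source, target) edge pairs) into an XML string."""
    root = ET.Element('graph')
    for node_id, label in nodes.items():
        ET.SubElement(root, 'node', id=node_id, label=label)
    for source, target in edges:
        ET.SubElement(root, 'edge', source=source, target=target)
    return ET.tostring(root, encoding='unicode')

# Toy graph, e.g. two elements of a building plan
xml = graph_to_xml({'a': 'Room', 'b': 'Door'}, [('a', 'b')])
```

A transformation framework generalizes exactly this mapping: declarative rules describe how source-graph elements become target-model elements, instead of hard-coding the traversal as above.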

Within the research project PASS, Christoph Praschl dealt with the ‘detection of information loss in model transformation’.


The term “model” refers to a simplified representation of objects, processes or other subjects, and is used in the discipline of software engineering to represent a condensed excerpt of reality. Model transformation extends this area by transferring information between several models and is an integral part of modern software development, especially in the field of model-driven software development. This thesis deals with various possibilities for detecting information loss in the field of model transformation. This is necessary to ensure that information is transferred correctly from a source to a target model, as well as to detect semantic differences between the affected models. The focus of this thesis lies on two research questions: “Where does a model lose information when it is transferred to another model?” and “Has the semantics of a data set been changed by the transformation?”. The first research question concerns the preservation of information: data should not be corrupted and should reach the correct position in the target model. The second question focuses on recognizing model characteristics in which the affected models differ, i.e. information that exists in the target model but not in the source model. To answer the two questions, fundamentals of modeling as well as theoretical concepts and approaches in the field of model transformation and verification are presented. Furthermore, two graph-based implementations are introduced which allow the identification of model characteristics affected by information loss: the approach of a graph-based constraint solver and a method for recognizing node patterns using a Neo4j graph database. In addition, the verification component of the transformation framework used is explained, which enables rudimentary model checks. Finally, the presented practical methods are evaluated using two examples. This evaluation compares the verification methods, discusses their respective advantages and disadvantages, and demonstrates the basic applicability of the implementations for the detection of information loss.
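The core comparison behind the two research questions can be illustrated with a small Python sketch. It treats each model as a plain mapping from node identifiers to attribute sets and reports attributes lost or added during a transformation; the model structure and names are purely illustrative and much simpler than the graph-based implementations used in the thesis.

```python
# Toy sketch: detect information loss and semantic additions between a
# source and a target model, each represented as a mapping from node
# identifiers to attribute sets. Illustration only; the thesis uses a
# graph-based constraint solver and Neo4j node patterns instead.

def diff_models(source, target):
    """Return (lost, added): attributes missing in the target model and
    attributes that exist only in the target model."""
    lost, added = {}, {}
    for node, attrs in source.items():
        missing = attrs - target.get(node, set())
        if missing:
            lost[node] = missing
    for node, attrs in target.items():
        extra = attrs - source.get(node, set())
        if extra:
            added[node] = extra
    return lost, added

source = {"Person": {"name", "age"}, "Address": {"city"}}
target = {"Person": {"name"}, "Address": {"city", "zip"}}
lost, added = diff_models(source, target)
# lost  -> {"Person": {"age"}}   (information loss)
# added -> {"Address": {"zip"}}  (semantic difference)
```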

Ignace Jordens worked on his bachelor thesis within his Erasmus internship in the EDEN project.


EDEN, which stands for emergency detection in elevator networks and is a project of the AIST
research group at the University of Applied Sciences Upper Austria, aims to use sensors and cameras
to automatically detect emergencies in elevators, evaluate them, put them into context and take
appropriate actions. This bachelor’s thesis tackles the classification of the status of an elevator door.
In order to classify certain emergencies correctly, it is vitally important that the classification system
knows all the involved parameters, since a certain situation can be interpreted differently depending on which parameters are considered. The current state of the door is a very important parameter in this case.
The EDEN project uses an Intel RealSense D435 camera as a device to capture images and depth
information. These images and their corresponding depth information are analysed by the project software, which is written in C++ and uses the OpenCV framework for computer vision. The first part of this thesis investigates how the depth and RGB information provided by the camera can be used to detect the status of an elevator door. Different possible approaches are discussed, and the most feasible ones are elaborated in a proof of concept. The first step in detecting the door status is localising the door itself using depth and RGB information. This is followed by the extraction of the floor, which is achieved using edge detection and extraction techniques. With the locations of the door and the floor known to the application, the status of the door can be determined. In order to classify this status correctly, the research focuses on different detection methods and on strategies to reduce noise interference caused either by the recording equipment or by objects blocking the view of the door. The second part of the thesis focuses on testing the capabilities of the Intel RealSense D435 camera, more specifically the accuracy of the depth information it can provide. To comply with certain ISO standards, an elevator car must not have a height difference of more than 20 millimetres compared to the adjoining floor. The research investigates whether the D435 camera can detect such a small height difference while still maintaining a visual overview of the entire elevator car, so that the application can still detect occurring emergencies.
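The ISO-related height check can be sketched in a few lines of Python. All names and values are illustrative, and the segmentation of the two floor regions is assumed to have happened beforehand; this is not the C++ implementation from the project.

```python
from statistics import median

# Hypothetical sketch: given depth readings (in millimetres) for the
# elevator car floor and for the landing floor, decide whether the
# height difference stays within the 20 mm limit mentioned above.

def height_difference_ok(car_depth_mm, landing_depth_mm, limit_mm=20.0):
    # Use the median to be robust against depth sensor noise.
    return abs(median(car_depth_mm) - median(landing_depth_mm)) <= limit_mm

car = [1500.0, 1501.0, 1499.5]      # depth samples on the car floor
landing = [1475.0, 1474.0, 1476.0]  # depth samples on the landing
height_difference_ok(car, landing)  # -> False (about 25 mm difference)
```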

Simone Sandler wrote her bachelor thesis within the MoxUP project.


This bachelor thesis deals with the transformation of building models so that they can be prepared for 3D printing. The basis for the transformation is OBJ files that correspond to the so-called “o3D” standard, which facilitates the programmatic processing of 3D files. The standard was created and is used by the company moxVR, a start-up based in Linz that offers its customers the option of having their future home 3D printed, making it easier to visualize. The 3D files come from the architect of the building. In order to print the files in 3D, all furniture must first be removed from the building, and doors and windows must be replaced with openings. Afterwards, the model is divided into the individual floors and provided with holding elements so that the parts of the model hold together. Finally, it must be possible to convert the model into a valid STL file, as this is required for 3D printing.

This thesis was written by Jacqueline Schwebach as part of her work on the REPO project.


With the introduction of the electronic health record in Austria (ELGA), a first step has already been taken towards better networking of the various healthcare providers (GDA). It contributes significantly to improving the quality of patient care and the Austrian e-health infrastructure. Radiological findings, medical and nursing discharge letters and e-medication can already be viewed in ELGA. The electronic health record is being expanded step by step and continuously supplemented with new applications.

In the course of the practical training, a prototype is being further developed to facilitate the cross-institutional cooperation of radiologists in private practice and in hospitals with the help of the Austrian e-health infrastructure. An already completed project is also being revised for use in other projects: code duplicates will be removed and documentation of the interfaces will be provided.


Anna Lackerbauer worked on her master’s thesis as part of the cooperative research project eConsent at the Centre for Global eHealth Innovation of the University Health Network Toronto.


One of the key elements for protecting human subjects of research studies and patients who receive medical treatment is obtaining informed consent. This is not only about giving the subject or patient freedom of choice; it is a whole process that provides sufficient information, ensures comprehension and later documents the decision made. Currently, this is very often achieved by oral information sessions, in some cases supported by printed material, and the later signature of a paper-based consent form. Transforming this into a digital process of obtaining consent electronically (eConsent) has the potential to increase comprehension, data quality and patient empowerment while at the same time reducing costs. This thesis identifies eight requirements for an eConsent architecture for research studies as well as for medical treatment. Subsequently, a backend model for this architecture based on the HL7 FHIR standard is proposed and implemented as part of an open-source prototype. The thesis was realised in cooperation with two stakeholders in Toronto, Canada: the Centre for Global eHealth Innovation and Dr. Alvin Lin. The proposed concept makes use of the existing consent model of HL7 FHIR, which has been implemented for the privacy consent use case. Moreover, some extensions of the standard help meet the requirements while focusing on the capability to auto-generate a user interface (UI). To enable semantic interoperability with other health information systems, SNOMED CT is used as an internationally standardised terminology for selected predefined parts of the information. The proposed eConsent architecture meets most of the identified requirements. That said, the system is limited by the low maturity of the implemented FHIR resources and the fact that the terminology for the use case is currently not exhaustive. Additional custom extensions of the used FHIR resources, or switching to a digital source of information other than the proposed FHIR QuestionnaireResponse, must be considered.
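To give an impression of what such a backend works with, the following minimal sketch builds a privacy-consent record based on the HL7 FHIR Consent resource. The field selection is a simplified reading of FHIR R4; the actual model in the thesis additionally uses custom extensions and SNOMED CT codings, and all reference values here are illustrative.

```python
import json

# Minimal, simplified privacy-consent record modelled on the HL7 FHIR
# Consent resource (FHIR R4 field names; values are illustrative).
consent = {
    "resourceType": "Consent",
    "status": "active",
    "scope": {
        "coding": [{
            "system": "http://terminology.hl7.org/CodeSystem/consentscope",
            "code": "patient-privacy",
        }]
    },
    "patient": {"reference": "Patient/example"},
    "dateTime": "2019-05-01T10:00:00Z",
    "provision": {"type": "permit"},
}
print(json.dumps(consent, indent=2))
```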

Johann Aichberger has designed and implemented a system architecture for mixed reality board games for the I2F research project.


Over the last few years, the introduction of many new Augmented Reality (AR) devices like the Microsoft HoloLens has greatly contributed to the popularity and more widespread usage of AR. Supported by recent improvements in hardware and software technologies, even common smartphones have now reached the point where they can be used as a viable alternative for some AR features that were limited to more specialized devices not too long ago.

Rudy Games, an Austrian company based in Linz, wants to utilize this surge of smartphones with AR capabilities by developing a Mixed Reality board game which improves upon the classic board game formula by adding an AR dimension to it. To accomplish this, the company started the interface2face Mixed Reality Game research project in cooperation with two research groups of the University of Applied Sciences Upper Austria, Campus Hagenberg, in November 2017. The goal of this project is to produce a working prototype of such a game which can be used by the company as the basis for a commercially viable product. This bachelor thesis contributes to this goal by defining some fundamental parts of the system architecture of the project. Concerning the hardware architecture, the main question that should be answered was whether typical board game elements like tokens, cards and pieces of a modular board can be used as anchor points for AR content. The results of this evaluation showed that the angle between board pieces (which can potentially be quite far away from a player) and the smartphone held by a player sitting at the same table was too shallow to make a reliable detection possible. Since the detection was otherwise quite good, this problem was solved by introducing Personal Interaction Spaces (PIS). These are placed directly in front of the player and replace the board pieces as primary anchors for AR content. Because they are so close to the player, it is far easier to hold the smartphone directly above them, which alleviates any problems that would be caused by shallow angles.

In terms of software architecture, the Model-View-Controller (MVC) pattern, Reactive Programming and the Entity-Component-System (ECS) pattern were evaluated as potential candidates for the foundation of the architecture. The MVC pattern, which is mostly used for graphical user interfaces, did not show promising results during the implementation of a first minigame. While Reactive Programming made a much better impression, it did not seem suitable as the main component of the software architecture either. The results achieved with the ECS pattern, however, were convincing in all aspects, making it the best choice for the foundation of the architecture.
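The ECS idea itself can be illustrated in a few lines of Python (a toy sketch; the project's actual implementation is built on Unity). Entities are plain identifiers, components are data stored per entity, and systems are functions that iterate over entities holding the required components.

```python
# Toy Entity-Component-System sketch (illustrative names and data).
positions = {}   # entity id -> (x, y) component
velocities = {}  # entity id -> (dx, dy) component

def create_entity(eid, pos=None, vel=None):
    if pos is not None:
        positions[eid] = pos
    if vel is not None:
        velocities[eid] = vel

def movement_system():
    # Operates on every entity that has both a position and a velocity.
    for eid in positions.keys() & velocities.keys():
        x, y = positions[eid]
        dx, dy = velocities[eid]
        positions[eid] = (x + dx, y + dy)

create_entity(1, pos=(0, 0), vel=(1, 2))
create_entity(2, pos=(5, 5))          # no velocity: ignored by the system
movement_system()
# positions -> {1: (1, 2), 2: (5, 5)}
```

The appeal of the pattern for games is visible even in this sketch: new behaviour is added by introducing a new component dictionary and a new system function, without touching existing entities.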

Daniel Stigler has worked on his bachelor thesis within the EDEN project.


In this thesis, different machine learning algorithms are evaluated for retrieving information from image data for later emergency detection in elevator systems. For this purpose, a classification prototype was created, which is divided into three parts. In the first step, images of segmented objects are analyzed to determine whether the segmented object is a person or an inanimate object. In the second part, objects are classified into further object categories, which allows a statement to be made about an existing dangerous situation. In the third part, objects classified as human beings are subjected to a posture classification based on their silhouette shape, which can later trigger an emergency signal if a person is lying on the ground for a longer period of time. The tested algorithms are provided by OpenCV and are limited to K-Nearest Neighbor, Support Vector Machine, Random Forest and neural networks. The results show that Support Vector Machines using HOG descriptors, with a hit rate of over 98%, are best suited for categorizing objects into person and object. Compared to the other classifiers, this combination also provided the best results for the further grouping of objects, but with a classification rate of 73% it is not very satisfactory. When classifying posture using silhouette features, the neural network proved to be the most suitable classifier, correctly classifying 92% of all test data.
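As an illustration of one of the evaluated classifier families, the following toy Python sketch shows the k-nearest-neighbour idea on small two-dimensional feature vectors. The real prototype works with the OpenCV implementations on HOG descriptors and silhouette features; all data here is made up.

```python
from collections import Counter

# Toy k-nearest-neighbour classifier (illustrative feature vectors).
def knn_classify(train, query, k=3):
    """train: list of (feature_vector, label); returns the majority label
    among the k training samples closest to the query."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    nearest = sorted(train, key=lambda item: dist(item[0], query))[:k]
    return Counter(label for _, label in nearest).most_common(1)[0][0]

train = [((0.1, 0.2), "person"), ((0.2, 0.1), "person"),
         ((0.9, 0.8), "object"), ((0.8, 0.9), "object")]
knn_classify(train, (0.15, 0.15))  # -> "person"
```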

Lukas Reithmeier did research for his bachelor thesis on “Analyses of building plans” as part of the PASS project.


The analysis of building plans with regard to accessibility or problems of escape routes is a difficult task. The PASS project (Plan Analytics using Self-learning Solutions) therefore develops analyses of building plans that have previously been transferred from a 2D building plan to an interim model. These analyses include a validation of accessibility, a simulation-based analysis of escape routes, and the optimal placement of furniture in rooms using machine learning algorithms.


This bachelor thesis was created within the research project Drive for Knowledge.


Modern simulators are inefficient in terms of cost as well as space. Furthermore, they do not reach their full potential, since many characteristics of the real world, like a proper field of view, are implemented badly or not at all.

The trending technology Virtual Reality creates a model of the real world that can be interacted with through new, more intuitive control schemes. In this way, virtual reality creates a much stronger immersion, which in turn increases the effectiveness of any simulation. On the one hand, the technology should make it possible to create cheaper alternatives to more traditional hardware by minimizing the real hardware in the simulation. On the other hand, virtual reality can be used in existing simulators, usually as an output device, to increase their effectiveness.

Currently there are two state-of-the-art devices on the market, the HTC Vive and the Oculus Rift. Both are so-called head-mounted displays, which help create the necessary immersion for the technology. Both of these devices are analyzed based on their hardware specifications, APIs and control schemes.

During this bachelor thesis, a prototype was created to properly present the features of this technology. This prototype relies solely on virtual components, except for the HTC Vive, which is utilized as an output and input device. This example should clarify that a quite effective simulation can still be created with minimal cost and effort.

This thesis was done by Rainer Meindl during his internship in the VREHA research project.


The main question of this thesis is whether and how electromyography can be used as an input system in an Android application. This is researched as part of a research project with the company Psii.Rehab, which is closely related to topics such as mobile virtual reality and finger tracking. Thus, hardware has to be evaluated, including, but not limited to, mobile virtual reality goggles, finger-tracking sensors and electromyographs, with the overall goal of achieving platform independence. Since the application uses aspects of virtual reality and should be mostly platform independent, it stands to reason that the Unity framework should be used.

The main focus of the thesis lies on the electromyograph Thalmic Myo, since it is one of the few devices that, on the one hand, allows access to the raw electromyographic signal and the use of that data in Unity and, on the other hand, is widely available. This also causes problems, because a lot of the requested features are incomplete or missing completely when using the device on Android. That is why the software provided by the manufacturer has to be extended and the missing functionality on Android has to be reimplemented.

The data generated by the Thalmic Myo has to be normalized and, because of the requested functionality, abstracted. Only after this processing can the data be used in the application. In this case, application rather means a small showcase, which demonstrates the implementation of an input system based on an electromyographic signal.
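The kind of preprocessing described above can be sketched as follows: rectify the raw signal, smooth it with a moving average to obtain an envelope, and normalise it to [0, 1]. The window size and sample values are made up; the actual Myo pipeline in the thesis differs.

```python
# Illustrative EMG preprocessing sketch (not the thesis implementation).
def emg_envelope(raw, window=3):
    # Rectify: EMG is a zero-centred signal, so take absolute values.
    rectified = [abs(v) for v in raw]
    # Smooth with a trailing moving average to obtain an envelope.
    smoothed = [
        sum(rectified[max(0, i - window + 1):i + 1])
        / len(rectified[max(0, i - window + 1):i + 1])
        for i in range(len(rectified))
    ]
    # Normalise so the strongest activation maps to 1.0.
    peak = max(smoothed) or 1.0
    return [v / peak for v in smoothed]

raw = [0, -20, 40, -80, 60, -10]
env = emg_envelope(raw)
# env values lie in [0, 1], with the maximum equal to 1.0
```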

As part of the Drive for Knowledge research project, Andreas Pointner worked on his theoretical bachelor thesis on ‘Edge-Detection for window detection in vehicles’.


In the last couple of years, the terms Virtual Reality and Augmented Reality have increased in importance. For most of the provided functionality, a basic VR/AR device is enough. On the other hand, there are some areas of application where additional technology is mandatory. An example where additional technology is needed is the training of emergency drivers. For this scenario, an exact position of the windscreen is needed, so that additional hazards can be displayed. This thesis focuses on this problem and explores the possibilities of using edge detection algorithms to detect windscreens in vehicles. There are two central questions: “Is it possible to detect windows purely by means of edge detection?” and “Which are the most common edge detection algorithms and what are their main advantages and disadvantages?”. To answer these two questions, this thesis is split into two parts: a theoretical part on the one hand and the elaboration of prototypes on the other. The theoretical part describes the basic functionality of different edge detection algorithms and examines whether they can be used for detecting windows. Essentially, the following operators and procedures were considered: Roberts, Prewitt, Sobel, compass gradient, Kirsch, Marr-Hildreth and Canny. Three prototypes were developed to compare the algorithms in their functionality. Marr-Hildreth and Canny were selected because they can be parameterized and are therefore well suited for this scenario; the third is Prewitt, which serves as a non-parameterized baseline for comparison. With the results of the prototypes, it is now possible to determine which of them solves which scenario best, and which settings and parameters are needed to achieve this result. The results achieved by the prototypes were very different. In some scenarios, the edge detection algorithms had trouble dealing with the situation, whereas in others the results were good. Generally, the prototypes showed the typical problems of edge detection algorithms: disturbing factors such as noise or blurring were a common problem. The use of edge detection algorithms to detect windows is possible for some standard scenarios, especially when there are very few interfering factors. To enable detection in other scenarios, additional procedures such as object detection are required.
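As an illustration of one of the considered operators, the following Python sketch applies the Prewitt operator to a tiny synthetic image with a vertical edge. This is illustrative only; the prototypes in the thesis work on real camera images.

```python
# Prewitt gradient magnitude on a grayscale image given as a list of
# rows (toy example; border pixels are left at zero).
PREWITT_X = [[-1, 0, 1], [-1, 0, 1], [-1, 0, 1]]
PREWITT_Y = [[-1, -1, -1], [0, 0, 0], [1, 1, 1]]

def prewitt_magnitude(img):
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = gy = 0
            for j in range(3):
                for i in range(3):
                    p = img[y + j - 1][x + i - 1]
                    gx += PREWITT_X[j][i] * p
                    gy += PREWITT_Y[j][i] * p
            out[y][x] = (gx * gx + gy * gy) ** 0.5
    return out

# A vertical edge: dark left half, bright right half.
img = [[0, 0, 0, 255, 255, 255] for _ in range(5)]
mag = prewitt_magnitude(img)
# Strong response at the edge columns, zero in the flat regions.
```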

The practical bachelor thesis ‘Image processing methods for personal identification’ by Andreas Pointner was developed in the course of the research project GUIDE.


In this thesis, the applicability of image processing algorithms for robust person identification is evaluated. First, the state of the art in person identification is analyzed in detail, and possibilities for automation are identified. The first step is to analyze the process and represent it in BPMN; the key areas are modeled in order to achieve a legally compliant implementation. The three main components of the process implemented in this thesis are: the recognition of the identity card, the validation of safety features, and the reading of the machine-readable areas. The process starts with detecting an identity card inside a picture. The algorithmic foundations are explained for this purpose, and an implementation of Hough lines, which is used for the detection, is provided. After a successful detection and rectification of the identity card, the next step is to validate the holograms. For this purpose, binary and color segmentation is used to analyze the ID card. After that, the binary image can be compared to a template: with a simple pixel comparison, a confidence value can be calculated. This value is analyzed over multiple images to make a clear statement about the validity of the hologram. Later on, the thesis describes the reading of the machine-readable part using optical character recognition (OCR). In this process step, mostly preprocessing algorithms are implemented; the actual OCR is provided by an OCR framework called Tesseract. Finally, the different results of each step are evaluated. The results show that the hologram detection has some deficits in its current implementation, which is why a final chapter describes possibilities for improvement and optimization.
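The Hough-line detection used to locate the identity card can be illustrated with a toy Python sketch of the underlying voting scheme: every edge point votes for all (θ, ρ) line parameters it could lie on, and the strongest bin wins. The thesis uses an OpenCV-based implementation on real edge images; everything here is illustrative.

```python
import math

# Toy Hough transform for straight lines (illustration only).
def hough_lines(points, width, height, theta_steps=180):
    """Accumulate votes in (theta, rho) space and return the peak bin
    as (theta index, rho)."""
    acc = {}
    for x, y in points:
        for t in range(theta_steps):
            theta = math.pi * t / theta_steps
            rho = round(x * math.cos(theta) + y * math.sin(theta))
            acc[(t, rho)] = acc.get((t, rho), 0) + 1
    return max(acc, key=acc.get)

# Edge points lying on the vertical line x = 5.
points = [(5, y) for y in range(0, 100, 10)]
t, rho = hough_lines(points, 100, 100)
# Peak at theta index 0 (a vertical line) with rho = 5.
```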

This theoretical bachelor thesis with the title ‘Augmented Reality Frameworks’ is part of the Drive for Knowledge project.


For the currently omnipresent technology Augmented Reality, there are different frameworks which support developers in creating applications in this domain with various functionalities suitable for this area. This thesis deals with the definition and the fundamentals of Augmented Reality. Afterwards, it is differentiated from the related technology Virtual Reality, followed by an explanation of the technical basis in terms of software as well as hardware and of well-known problems associated with the use of Augmented Reality systems. Based on this knowledge, Augmented Reality frameworks are defined, along with their basic components and a small selection of design patterns implemented by several software modules. Furthermore, after discussing typical functionalities provided by several of these frameworks, an overview of existing solutions from various manufacturers is given. Finally, based on predefined premises, the frameworks Vuforia and Kudan AR Engine are selected, explained with the aid of prototypes and compared in a final chapter in the disciplines of marker-based and marker-less tracking. With the central question of this thesis (Which AR frameworks exist and which functionality do they cover?) as the ubiquitous problem statement, attention should be drawn to differences which may be important in the implementation of applications in the field of Augmented Reality, in order to create a technical fundament as well as a basis for conceptual understanding.

This practical bachelor thesis on ‘Image-based orientation in the outdoor area’ was developed within the Drive for Knowledge research project.


Especially in the field of augmented reality, spatial orientation is of high relevance for applications. Apart from the position in three-dimensional space, it is one of the most important factors for representing virtual objects in space precisely and correctly. In the case of so-called head-mounted displays, such as the Microsoft HoloLens, the orientation is equal to the direction of view, and this data decides which information is displayed in the user’s field of view and which is not. In the current state of the art, this is primarily achieved marker-based, with visual indications in the environment and/or with the help of sensors. Based on the augmented reality frameworks Vuforia and Kudan AR, this thesis discusses two common possibilities for the marker-based determination of one’s own orientation and describes the problem of this feature-based approach: applications depend on a single fixed position or on an enormous number of reference images for each virtual object. This lack of practicality leads to the necessity of an alternative approach, from which the aim of this discourse is derived: to find a feature-independent, image-based approach that determines the orientation based on the calculation of the image shift. This approach is based on the hypothesis that the rotation around one’s own axis can be determined in a simplified manner from a translation of the input images. The implementation of this hypothesis is evaluated on the basis of several test scenarios. Based on these results, the question of this thesis, “Is it possible to determine the precise orientation in space outdoors at a constant starting point with image-based processes?”, is answered and considered to be supported, given the evaluation result of a median difference of 0.00015° between the real and the calculated rotation.
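The hypothesis that a rotation around one's own axis shows up as a horizontal image shift can be sketched as a simple conversion from pixel shift to angle. This assumes an approximately linear pixel-to-angle mapping, which holds for small rotations; all parameter names and values are illustrative, not taken from the thesis.

```python
# Simplified sketch: estimate the rotation angle from the horizontal
# image shift, given the camera's horizontal field of view.
def rotation_from_shift(shift_px, image_width_px, horizontal_fov_deg):
    # Linear pixel-to-angle approximation (valid for small rotations).
    return shift_px * horizontal_fov_deg / image_width_px

rotation_from_shift(shift_px=64, image_width_px=1280,
                    horizontal_fov_deg=90.0)
# -> 4.5 degrees
```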


This practical bachelor thesis was written by Anna Lackerbauer as part of the Kimbo Project.


It is increasingly popular to connect mobile devices to an existing healthcare system by making them sources and consumers of documents. However, this still faces many challenges. That is why Integrating the Healthcare Enterprise (IHE) is developing a new profile for mobile document exchange. This thesis documents the approach followed during an internship of the author, in which research was done on the challenges that occurred when implementing the profile. Since the mobile document exchange profile and the FHIR standard were still under development during this period, several contradictory versions came up, which is why some decisions had to be made. These decisions are also reasoned and recorded in this thesis. Furthermore, it is described how the integration into the existing system took place. This thesis serves in particular to document the implementation of the currently used version of that profile, named Mobile access to Health Documents (MHD).

The theoretical bachelor thesis entitled “City routing system for people with reduced mobility” was written by Anna Lackerbauer as part of the Gallneukirchen special exhibition.


For people with reduced mobility, it is exhausting and most of the time not trivial to find an accessible route. That is why a system needs to be in place that meets their demands and finds the most helpful route. The geoinformatics sector is many-faceted; this, coupled with the restrictions imposed by accessibility requirements, means there are some remarkable challenges to meet for software developers who build such a system. Based on existing or in-progress projects, interviews and literature research, as well as independent practical work concerning height data, this thesis provides an overview of the challenges a developer has to deal with. Furthermore, it proposes solutions and a prototype to evaluate the accuracy of interpolation algorithms, providing test results. This work can serve as a rationale for a prototypical implementation of a routing system for people with reduced mobility, or supply decision support on which component to use, based on various comparisons of algorithms and maps.
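As an example of the kind of interpolation algorithm such a comparison might include, the following Python sketch implements inverse-distance weighting (IDW) for scattered height samples. It is illustrative only and not the prototype from the thesis; sample coordinates and heights are made up.

```python
import math

# Inverse-distance-weighted interpolation of scattered height samples.
def idw_height(samples, x, y, power=2):
    """samples: list of (x, y, height). Returns the interpolated height
    at (x, y), weighting nearby samples more strongly."""
    num = den = 0.0
    for sx, sy, h in samples:
        d = math.hypot(x - sx, y - sy)
        if d == 0:
            return h  # query point coincides with a sample
        w = 1.0 / d ** power
        num += w * h
        den += w
    return num / den

samples = [(0, 0, 100.0), (10, 0, 110.0), (0, 10, 120.0)]
idw_height(samples, 0, 0)  # -> 100.0 (exact sample point)
```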