Research is the core task of STI Innsbruck. Our motto is "Enabling Semantics". Find out more about our current research directions!


This page lists the current projects at STI Innsbruck. Please see the archive and the historical projects pages for more on completed projects.

Large Knowledge Collider (LarKC)

The aim of the EU FP 7 Large-Scale Integrating Project LarKC is to develop the Large Knowledge Collider (LarKC, for short, pronounced "lark"), a platform for massive distributed incomplete reasoning that will remove the scalability barriers of currently existing reasoning systems for the Semantic Web. This will be achieved by:  

  • Enriching the current logic-based Semantic Web reasoning methods with methods from information retrieval, machine learning, information theory, databases, and probabilistic reasoning.
  • Employing cognitively inspired approaches and techniques such as spreading activation, focus of attention, reinforcement, habituation, relevance reasoning, and bounded rationality.
  • Building a distributed reasoning platform and realizing it both on a high-performance computing cluster and via "computing at home".
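As a toy illustration of one of the cognitively inspired techniques named above, the following sketch spreads activation from a seed concept through a small weighted graph. The graph, weights, decay, and threshold are all invented for illustration and are not LarKC's actual model:

```python
# Hypothetical spreading-activation sketch over a tiny concept graph.
# Adjacency list with edge weights: node -> [(neighbour, weight), ...]
GRAPH = {
    "Innsbruck": [("Austria", 0.9), ("Alps", 0.7)],
    "Austria":   [("Europe", 0.8), ("Vienna", 0.6)],
    "Alps":      [("Skiing", 0.8)],
    "Europe":    [],
    "Vienna":    [("Austria", 0.6)],
    "Skiing":    [],
}

def spread_activation(seeds, decay=0.8, threshold=0.1, max_steps=5):
    """Propagate activation from seed nodes until it falls below threshold."""
    activation = dict(seeds)          # node -> current activation level
    frontier = dict(seeds)
    for _ in range(max_steps):
        next_frontier = {}
        for node, level in frontier.items():
            for neighbour, weight in GRAPH.get(node, []):
                passed = level * weight * decay   # attenuate along the edge
                if passed < threshold:
                    continue                      # bounded: weak signals stop here
                if passed > activation.get(neighbour, 0.0):
                    activation[neighbour] = passed
                    next_frontier[neighbour] = passed
        if not next_frontier:
            break
        frontier = next_frontier
    return activation

result = spread_activation({"Innsbruck": 1.0})
```

The threshold cut-off is what makes the process a form of bounded, incomplete reasoning: distant, weakly connected concepts are simply never visited.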

The consortium is an interdisciplinary team of engineers and researchers in Computing Science, Web Science and Cognitive Science, well qualified to realize this ambitious vision. The Large Knowledge Collider will be an open architecture: researchers and practitioners from outside the consortium will be encouraged to develop and plug in their own components to drive parts of the system. This will make the Large Knowledge Collider a generic platform, not just a single reasoning engine.

The success of the Large Knowledge Collider will be demonstrated in three end-user case studies. The first one is from the telecommunication sector. It aims at real-time aggregation and analysis of location data obtained from mobile phones carried by the population of a city, in order to regulate city infrastructure functions such as public transport and to provide context-sensitive navigation information. The other two case studies are in the life-sciences domain, related respectively to drug discovery and carcinogenesis research. Both will demonstrate that the capabilities of the Large Knowledge Collider go well beyond what is possible with current Semantic Web infrastructure.


LBSCult

NIWA and Wandermann designed the LBSCult project, which initiated the construction of a network of competency for the development of an electronic culture guide for Vienna. Based on the data models of Vienna's Culture Goods Cadaster and the picture archive of the Austrian National Library, the concept of an open platform (the LBSCult Service) was created that connects multiple culture databases, further content databases, and GIS systems, and improves user queries with the help of Semantic Web technologies. It enables users to receive touristic, cultural, and historical information contextually related to their local position and personal interests on mobile devices such as PDAs or smartphones.
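To illustrate the kind of location- and interest-aware filtering such a guide performs, here is a hypothetical sketch. The POI records, coordinates, and tags are invented; a real LBSCult deployment would query the connected culture databases and GIS systems instead:

```python
# Illustrative sketch (not LBSCult code): filter cultural points of interest
# by distance from the user's position and by declared interests.
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two WGS84 coordinates, in kilometres."""
    dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 2 * 6371.0 * asin(sqrt(a))

# Invented POI records standing in for the connected culture databases.
POIS = [
    {"name": "Stephansdom", "lat": 48.2086, "lon": 16.3731, "tags": {"church", "gothic"}},
    {"name": "Albertina",   "lat": 48.2047, "lon": 16.3683, "tags": {"museum", "art"}},
    {"name": "Schoenbrunn", "lat": 48.1845, "lon": 16.3122, "tags": {"palace", "garden"}},
]

def nearby(lat, lon, interests, radius_km=1.0):
    """Return POIs within radius whose tags overlap the user's interests."""
    return [p["name"] for p in POIS
            if haversine_km(lat, lon, p["lat"], p["lon"]) <= radius_km
            and p["tags"] & interests]

# A user standing near St. Stephen's Cathedral, interested in art and churches:
hits = nearby(48.2082, 16.3738, {"art", "church"})
```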

Linked Data Benchmark Council (LDBC)

Non-relational data management is emerging as a critical need for the new data economy based on large, distributed, heterogeneous, and complexly structured data sets. This new data management paradigm also gives young innovative companies working on new RDF and graph data management technologies an opportunity to play a significant role in this data economy. Standards and benchmarking are two of the most important factors in the development of new information technology, yet there is still no comprehensive suite of benchmarks and benchmarking practices for RDF and graph databases, nor an authority for setting benchmark definitions and auditing official results. Without them, the future development and uptake of these technologies is at risk, since industry lacks clear, user-driven targets for performance and functionality.

The goal of the Linked Data Benchmark Council (LDBC) project is to create the first comprehensive suite of open, fair, and vendor-neutral benchmarks for RDF/graph databases, together with the LDBC foundation, which will define processes for obtaining, auditing, and publishing results. The core scientific innovation of LDBC is therefore to define meaningful benchmarks derived from actual usage scenarios combined with the technical insight of top database systems researchers and architects into the choke points of current technology. LDBC will bring together a broad community of researchers and RDF and graph database vendors to establish an independent authority, the LDBC foundation, responsible for specifying benchmarks and benchmarking procedures and for verifying and publishing results. The forum created will become a long-lived, industry-supported association similar to the TPC. Vendors and user organisations will participate in order to influence benchmark design and to make use of the obvious marketing opportunities.
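The benchmarking discipline LDBC standardises can be pictured, in miniature, as a harness that drives a query workload against a system under test and reports audited latency statistics. Everything below, including the `run_benchmark` harness and the stand-in engine, is an illustrative assumption, not LDBC code:

```python
# Minimal benchmark-harness sketch: time a workload repeatedly and
# report latency statistics in milliseconds.
import statistics
import time

def run_benchmark(query_fn, workload, repetitions=5):
    """Execute each query `repetitions` times and summarise latencies."""
    latencies = []
    for _ in range(repetitions):
        for query in workload:
            start = time.perf_counter()
            query_fn(query)                       # drive the system under test
            latencies.append((time.perf_counter() - start) * 1000.0)
    latencies.sort()
    return {
        "runs": len(latencies),
        "mean_ms": statistics.mean(latencies),
        "p95_ms": latencies[int(0.95 * (len(latencies) - 1))],
    }

# Stand-in "engine": pretend each query costs time proportional to its length.
def fake_engine(query):
    time.sleep(len(query) * 1e-5)

stats = run_benchmark(fake_engine, ["SELECT * WHERE { ?s ?p ?o }"])
```

A real benchmark additionally fixes the data generator, the workload mix, and the audit rules, which is precisely what makes results comparable across vendors.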

Linked Data for Cultural and Touristic domains (LDCT)

The Linked Data for Cultural and Touristic domains (LDCT) project will focus on bringing semantic technologies into the cultural and touristic market, enabling intelligent solutions that integrate and analyze data from various sources, including touristic service providers, touristic associations and organizations, related public data sources, social media, and online news. It will provide a set of integrated tools that optimize the visibility of touristic and cultural content, both for touristic aggregators and for individual touristic service providers, which will be able to integrate and align touristic offers with related cultural events as well as effectively disseminate their various offers along with related events. Data and content will be not only semantically searchable but also aligned, in order to foster synergies between different touristic service providers. Finally, the effective dissemination of the aligned content through various social media channels will increase the market reach of touristic service providers as well as the engagement with their potential customers.

Manufacturing Service Ecosystem (MSEE)

By 2015, novel service-oriented management methodologies and the Future Internet universal business infrastructure will enable European virtual factories and enterprises to self-organize in distributed, autonomous, interoperable, non-hierarchical innovation ecosystems of tangible and intangible manufacturing assets, to be virtually described, composed on the fly, and dynamically delivered as a Service, end-to-end along the globalised value chain.

The first Grand Challenge for the MSEE project is to make SSME (Service Science Management and Engineering) evolve towards Manufacturing Systems and Factories of the Future: from a methodological viewpoint, to adapt, modify, and extend SSME concepts so that they become applicable to traditionally product-oriented enterprises; from an implementation viewpoint, to instantiate Future Internet service-oriented architectures and platforms for global manufacturing service systems. The second Grand Challenge for the MSEE project is to transform current hierarchical manufacturing supply chains into open manufacturing ecosystems: on the one side, to define and implement business processes and policies that support collaborative innovation in a secure industrial environment; on the other side, to define a new collaborative architecture for ESA (Enterprise Software and Applications) that supports business-IT interaction and distributed decision making in virtual factories and enterprises.

The synthesis of these two Grand Challenges in industrial business scenarios and their full adoption in several European test cases will result in new Virtual Factory Industrial Models, in which service orientation and collaborative innovation will support a new renaissance of Europe in the global manufacturing context.


MindLab

MindLab is a cooperative research project with the goal of developing methods and software tools for modeling and implementing scalable knowledge graphs.

Feratel and Onlim provide dialogue-based access to touristic information, products, and services. Meaningful dialogues, however, require large amounts of knowledge to be available in a machine-processable way. For this purpose, the MindLab project develops knowledge graph technologies. In MindLab, we will develop methods and tools that allow information providers to construct a knowledge graph relevant to their content. In detail:

  • Semantic annotations: Methods and tools for the manual, semi-automatic, and automated generation of semantic annotations and their integration into a knowledge graph.
  • Quality control of knowledge graphs: Methods and tools for semi-automated and automated quality control and improvement of knowledge graphs.
  • Connecting knowledge graphs: Methods and tools for the semi-automated and automated extension of a knowledge graph with other heterogeneous and dynamic information sources and knowledge graphs.
  • Life cycle of knowledge graphs: Methods and tools for managing knowledge graphs throughout their life cycle, from creation and curation to deployment and maintenance.
  • Mapping: Methods and tools for the manual, semi-automatic, and automatic mapping of unstructured data into machine-processable form.
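As a small illustration of the semantic-annotations item above, an annotation can be generated as schema.org JSON-LD markup ready to embed in a web page. The event data is invented and `annotate_event` is a hypothetical helper, not a MindLab tool:

```python
# Sketch: produce a schema.org Event annotation as JSON-LD.
import json

def annotate_event(name, start, location):
    """Build a schema.org Event annotation dictionary."""
    return {
        "@context": "https://schema.org",
        "@type": "Event",
        "name": name,
        "startDate": start,
        "location": {"@type": "Place", "name": location},
    }

# Invented example event:
annotation = annotate_event("Bergsilvester", "2019-12-31T22:00", "Innsbruck")
jsonld = json.dumps(annotation, indent=2)
```

Embedding such a block in a `<script type="application/ld+json">` element is the usual way to make the content machine-processable for crawlers and knowledge-graph construction pipelines.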

Talking Knowledge Graphs:

Presentation: Talking Knowledge Graphs, MindLab New York, 2019




MUSING

MUSING aims at developing a new generation of Business Intelligence (BI) tools and modules founded on semantic-based knowledge and content systems. MUSING will integrate Semantic Web and Human Language technologies and combine declarative rule-based methods with statistical approaches to enhance the technological foundations of knowledge acquisition and reasoning in BI applications. The breakthrough impact of MUSING on semantic-based BI will be measured in three strategic vertical domains:

  • Finance, through the development and validation of next-generation (Basel II and beyond) semantic-based BI solutions, with particular reference to Credit Risk Management;
  • Internationalisation, through the development and validation of next-generation semantic-based internationalisation platforms;
  • Operational Risk Management, through the development and validation of semantic-driven knowledge systems for measurement and mitigation tools, with particular reference to IT operational risks faced by IT-intensive organisations.

Next Generation Communication Services - Mobile / Fixed Line Integration Applications (MOFA)

The increasing convergence of telecommunication providers' services with services from the Internet domain led eTel to start a research project evaluating the possibilities of fusing classical enterprise communication with mobile terminals. Within the project, DERI's tasks included, among other things, the development of a software solution based on the Office PBX Swyx, the design of the software architecture and specification of the APIs, and the setup of a pilot environment (SIP-capable terminals, Office PBX Swyx).

On-demand Data-driven Production of Touristic Service Packages (TourPack)

While touristic service offers have become abundantly available and bookable through ICT communication channels, TourPack aims to build a linked-data-empowered system for touristic service packaging. By integrating information from multiple sources and systems, employing linked data as a global information integration platform, and mining the depths of otherwise "closed" data, the touristic service package production system will be able to create an optimal travel experience for the traveler. Further, the service packages will be efficiently published and made bookable to end consumers via intelligently selected, best-suited communication and booking channels: especially ICT channels with rapidly growing user audiences, such as social media and mobile apps.
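The integration step can be pictured as a join over shared destination identifiers. The following sketch uses invented example.org URIs and data; it illustrates the linked-data join idea, not TourPack's actual implementation:

```python
# Sketch: assemble touristic packages by joining bookable offers with
# cultural events through a shared destination URI (invented data).
OFFERS = [
    {"offer": "Hotel am Dom, 2 nights", "destination": "http://example.org/place/Innsbruck"},
    {"offer": "City bike rental",       "destination": "http://example.org/place/Salzburg"},
]
EVENTS = [
    {"event": "New Year mountain fireworks", "destination": "http://example.org/place/Innsbruck"},
]

def build_packages(offers, events):
    """Pair every offer with the events held at the same destination URI."""
    by_place = {}
    for e in events:
        by_place.setdefault(e["destination"], []).append(e["event"])
    return [{"offer": o["offer"], "events": by_place.get(o["destination"], [])}
            for o in offers]

packages = build_packages(OFFERS, EVENTS)
```

Using URIs rather than free-text place names as join keys is what lets data from independent providers align without manual reconciliation.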

OntoHealth - Problem aware Semantic eHealth Services

Healthcare nowadays faces a huge number of patients worldwide, mainly due to the growing aging population and the prevalence of chronic non-communicable diseases in western countries. This increase in patients implies, among other things, a growing need for diagnostic and treatment techniques and medical knowledge, which in turn leads to a serious increase in healthcare expenses. Hence, to stay financially feasible, costs must be kept low while the level of quality in patient care is guaranteed.

Information systems, and specifically the EHR (electronic health record), play an important role in achieving such an ambitious goal. The EHR closes the gap between institution-specific patient data silos by collecting a patient's health data, which can also be exchanged between various healthcare providers. Supporting interoperability between EHR systems is therefore an important challenge, and different standards have already been developed to address it. However, one problem that has not been tackled sufficiently is the functionality level of the EHR: from a functionality perspective, EHR systems are still monolithic. Hence, based on service orientation, OntoHealth aims to establish functionally flexible, standards-based EHR systems that support answering and solving clinical problems on an individual, case-specific basis.



We are hiring!

Check out our researcher job positions at