This project deals broadly with problems in the analysis of how quantificational material is integrated with non-quantificational material in noun phrases in natural language. It studies these interactions with particular reference to the behavior of comparative, superlative, and quantificational adjectives (‘much/little’), and with a particular focus on Arabic, which has cross-linguistically rare properties that shed new light on these issues.
The project addresses the question of what it means to develop good and trustworthy AI applications. Which ethical, legal, and technical constraints must be taken into account, and how can concrete ethics-by-design principles be developed and implemented in a technical platform aiming for truly explainable and accountable AI? The research project brings together experts from ethics, AI ethics, computer science, AI, natural language processing, and data science.
Typology of Vowel and Consonant Quantity in Southern German Varieties: Acoustic, Perceptual, and Articulatory Analyses of Adult and Child Speakers
An extended investigation of the (in)stability of phonemic quantity in vowel plus consonant (VC) sequences in southern German varieties has provided new evidence (1) that the Bavarian VC timing system is currently changing in both Austria and Germany, primarily due to dialect levelling, (2) that VC patterns are cross-generationally stable in Swiss dialects, and (3) that aspirated stops are emerging among younger speakers of Bavarian and Swiss dialects.
The project researches human-robot interaction from a collaborative point of view, where the mutual ability of human and robot to understand each other’s signals is core to the successful completion of joint tasks and also plays a major role in human acceptance of, and trust in, the robot as a workmate. We study close collaboration between humans and cobots, employing both Virtual Reality scenarios and real-world encounters with a collaborative robot in industry-relevant settings.
In a series of co-operation projects with LEGO, OFAI has developed technology that generates verbal building instructions from the same abstract representations that underlie LEGO's visual building instructions. With these verbal instructions, which can be accessed via speech or Braille, blind and visually impaired people are enabled to construct and experience LEGO models on their own.
The main goal of STADIEM/Aiconix is to foster language technology for dialects and regional varieties. Using the AI-based platform provided by Aiconix, clients will be able to access and interact with large amounts of digital media content; the principal aim of this project is to significantly enhance ASR for speech data from regional standards and dialects. In this 'develop phase' we focus primarily on Austrian varieties, with plans to scale the methods up to other varieties of German and to other languages.
The project analyses the time users spend on web pages of derStandard.at and how this relates to what people say, and in what tone, in forum postings. A major aim is to investigate whether and how various linguistic and semantic aspects of the user-generated content influence the time users spend in a forum as readers, and what motivates users to become active posters. A particular focus lies on female posters. While the percentages of female and male readers of derStandard.at are nearly equal (45% to 55%), there is a large gender mismatch among active posters: only 20% of those who write in forums are female. Important goals for the project are therefore to find out the reasons for this gender disparity and to take measures to encourage female contributions. Methods of data science and natural language processing are employed to identify correlations between forum dwell time and linguistic and semantic properties of forum contributions, including gender-fair language use. Deep learning approaches are applied to identify misogynistic content. All this helps forum moderators to counteract discrimination against women on the web.
The project supports the automatic analysis of conversations between doctors and patients, such as explanations about treatments, surgeries, or aftercare. It combines Automatic Speech Recognition (ASR) with Natural Language Processing (NLP). A system is being developed that records doctor-patient conversations, analyzes and assesses what has been said contextually and semantically, and transfers this information into written documentation.
As the sensor systems of industrial robots continuously improve, more and more companies take the step towards cobots (collaborative robots) and integrate human-robot collaboration into their manufacturing processes. In the project Human Tutoring of Robots in Industry, we investigate the requirements of industrial companies and research pathways for transferring insights from basic research on self-learning robots to industrial application. We pursue this endeavour on the basis of an implemented model, integrated into a robot, that learns new actions and objects by observing and listening to its human tutor, and we conduct interviews and an online survey with stakeholders from industry.