Doctoral Consortium

Our 5 accepted doctoral consortium participants will present their PhD work to a panel of experts on the first morning of the conference. This session is closed to all but the consortium participants and panel members. However, general TVX attendees will have an opportunity to hear about the outcomes of the session and the featured work at 17:00-17:30 on Day 1 (5th June) in the Quays Theatre.

Audio-Visual Analysis for Predicting Engaging Conversational Videos and Engaged Audiences in Online Settings

  • Chinchu Thomas – Multimodal Perception Lab, International Institute of Information Technology, Bangalore, Karnataka, India

Abstract: Automatic analysis of online video for understanding engagement can be useful for various applications such as recommender systems, and can be applied in different online learning environments and multimedia systems. Unfortunately, predicting 'how engaging conversational videos are' has received little study in the literature, and existing works have used naive methods and features. This thesis works towards developing a stronger methodology to understand and predict the engagement of conversational videos and their audiences in online settings using audio-visual analysis. The relation between the engagement of a conversational video and its overall effectiveness is studied, and the dependence of engagement on the popularity of the video is also explored.

Design of an Application for Collaboration and Interaction with Animated Content for Children in a Television Ecosystem

  • Jorge Teixeira Marques – Universidade de Aveiro, Aveiro, Portugal

Abstract: The ongoing research project presented in this paper aims to propose and evaluate models of interaction with audiovisual animated content in an interactive television ecosystem. Through these models we aim to understand the extent to which an interactive animation application can encourage primary-school children to participate together and take an active role while watching TV animation. Here we present the conceptual and empirical methodology adopted to develop this research and its current state.

AI Assisted Video Workflows: Exploring UIs for Human-AI Collaboration in Video Production

  • Than Htut Soe – University of Bergen, Bergen, Hordaland, Norway

Abstract: Video production and distribution have become both very affordable and accessible. A large body of machine learning research is available for audio, visual and language processing and, more recently, for the generation of multimedia content. Machine learning thus provides material for designing innovative video production workflows. However, there is a lack of studies and expertise on how video editors would receive and use machine learning in their work. As part of an ongoing university-industry joint innovation project, I aim to explore the challenges of integrating machine learning into video editing workflows. By developing AI-embedded prototypes for video production and using them to run studies, we aim to explore the design space of AI in video editing interfaces and the potential of human-AI collaboration in creative design.

Augmented Reality Television

  • Pejman Saeghe – School of Computer Science/Interaction Analysis and Modelling Lab, University of Manchester, Manchester, Lancashire, United Kingdom; Research and Development, BBC, Salford, Lancashire, United Kingdom

Abstract: Augmented reality (AR) has shown potential in creating engaging entertainment experiences for the general public. In this paper we take a user-centred design approach to a specific case of AR entertainment: the hybrid AR-TV experience. We first investigate the passive AR TV viewing experience by adding AR artefacts to an existing TV programme. A prototype was implemented that augments a popular nature documentary, with synchronised content delivered using a Microsoft HoloLens and a TV. We evaluated the prototype in a user study (n=12). Our results suggest that adding AR artefacts to an existing TV programme can create an engaging user experience. We propose a hackathon and subsequent prototyping to explore stakeholder expectations, in particular those of content creators and early adopters. Findings from this body of work will help TV content creators produce engaging experiences that leverage AR’s affordances.

Values-Led Intergenerational Participatory Design of Interactive Media to Enable Playful Interaction Between Preschool Children and Older Users

  • Veronica Pialorsi – School of Arts & Media, The University of Salford, Manchester, United Kingdom

Abstract: This research aims to explore how to engage preschool children and older users in values-led participatory design processes. The project will result in a set of methodological recommendations and guidelines on how to design interactive media aimed at an intergenerational audience.