{"id":936,"date":"2019-04-26T14:03:35","date_gmt":"2019-04-26T14:03:35","guid":{"rendered":"https:\/\/tvx.acm.org\/2019\/?page_id=936"},"modified":"2019-04-26T14:03:35","modified_gmt":"2019-04-26T14:03:35","slug":"doctoral-consortium-2","status":"publish","type":"page","link":"https:\/\/tvx.acm.org\/2019\/doctoral-consortium-2\/","title":{"rendered":"Doctoral Consortium"},"content":{"rendered":"<div class=\"flex_column av_one_full  flex_column_div av-zero-column-padding first  \" style='border-radius:0px; '><section class=\"av_textblock_section \"  itemscope=\"itemscope\" itemtype=\"https:\/\/schema.org\/CreativeWork\" ><div class='avia_textblock  av_inherit_color '  style='color:#83a846; '  itemprop=\"text\" ><h1>Doctoral Consortium<\/h1>\n<\/div><\/section><br \/>\n<section class=\"av_textblock_section \"  itemscope=\"itemscope\" itemtype=\"https:\/\/schema.org\/CreativeWork\" ><div class='avia_textblock  '   itemprop=\"text\" ><p>Our 5 accepted doctoral consortium participants will present their PhD work to a panel of experts on the first morning of the conference. This session is closed to participants and panel members. 
However, there will be an opportunity for general TVX attendees to hear about the outcomes of the session and the featured work at 17:00-17:30 on Day 1 (5th June) in the Quays Theatre.<\/p>\n<\/div><\/section><br \/>\n<div style='height:20px' class='hr hr-invisible  '><span class='hr-inner ' ><span class='hr-inner-style'><\/span><\/span><\/div><br \/>\n<article class=\"iconbox iconbox_left    \"  itemscope=\"itemscope\" itemtype=\"https:\/\/schema.org\/CreativeWork\" ><div class=\"iconbox_content\"><header class=\"entry-content-header\"><div class=\"iconbox_icon heading-color \" aria-hidden='true' data-av_icon='\ue8c9' data-av_iconfont='entypo-fontello'  style='color:#83a846; ' ><\/div><h3 class='iconbox_content_title '  itemprop=\"headline\"   style='color:#83a846; '>Audio-Visual Analysis for Predicting Engaging Conversational Videos and Engaged Audiences in Online settings<\/h3><\/header><div class='iconbox_content_container  '  itemprop=\"text\"  ><ul>\n<li>Chinchu Thomas &#8211; Multimodal Perception Lab, International Institute of Information Technology, Bangalore, Karnataka, India<\/li>\n<\/ul>\n<p><strong>Abstract:\u00a0<\/strong>Automatic analysis of online video for understanding engagement can be useful for applications such as recommender systems, online learning environments and multimedia systems. Unfortunately, predicting &#8216;how engaging conversational videos are&#8217; is understudied in the literature, and existing works have relied on naive methods and features. This thesis develops a stronger methodology to understand and predict the engagement of conversational videos and their audiences in online settings using audio-visual analysis. The relation between the engagement of a conversational video and its overall effectiveness is studied. 
The dependency of engagement on the popularity of the video is also explored in this study.<\/p>\n<\/div><\/div><footer class=\"entry-footer\"><\/footer><\/article><br \/>\n<article class=\"iconbox iconbox_left    \"  itemscope=\"itemscope\" itemtype=\"https:\/\/schema.org\/CreativeWork\" ><div class=\"iconbox_content\"><header class=\"entry-content-header\"><div class=\"iconbox_icon heading-color \" aria-hidden='true' data-av_icon='\ue8c9' data-av_iconfont='entypo-fontello'  style='color:#83a846; ' ><\/div><h3 class='iconbox_content_title '  itemprop=\"headline\"   style='color:#83a846; '>Design of an Application for collaboration and interaction with animated content for children in a television ecosystem<\/h3><\/header><div class='iconbox_content_container  '  itemprop=\"text\"  ><ul>\n<li>Jorge Teixeira Marques &#8211; Universidade de Aveiro, Aveiro, Portugal<\/li>\n<\/ul>\n<p><strong>Abstract:\u00a0<\/strong>The ongoing research project presented in this paper aims to propose and evaluate models of interaction with audiovisual animated content in an interactive television ecosystem. Through these models we aim to understand the extent to which an interactive animation application can encourage primary-school children to take a shared, active role while watching TV animation. 
Here we present the conceptual and empirical methodology adopted to develop the research work and the current state of the research.<\/p>\n<\/div><\/div><footer class=\"entry-footer\"><\/footer><\/article><br \/>\n<article class=\"iconbox iconbox_left    \"  itemscope=\"itemscope\" itemtype=\"https:\/\/schema.org\/CreativeWork\" ><div class=\"iconbox_content\"><header class=\"entry-content-header\"><div class=\"iconbox_icon heading-color \" aria-hidden='true' data-av_icon='\ue8c9' data-av_iconfont='entypo-fontello'  style='color:#83a846; ' ><\/div><h3 class='iconbox_content_title '  itemprop=\"headline\"   style='color:#83a846; '>AI Assisted Video Workflows: Exploring UIs for Human-AI Collaboration in Video Production<\/h3><\/header><div class='iconbox_content_container  '  itemprop=\"text\"  ><ul>\n<li>Than Htut Soe &#8211; University of Bergen, Bergen, Hordaland, Norway<\/li>\n<\/ul>\n<p><strong>Abstract:\u00a0<\/strong>Video production and distribution have become both affordable and accessible. A large body of machine learning research is available for audio, visual and language processing and, more recently, for the generation of multimedia content. Machine learning thus provides material for designing innovative video production workflows. However, there is a lack of studies and expertise on how video editors would receive and use machine learning in their work. As part of an ongoing joint university and industry innovation project, I aim to explore the challenges of integrating machine learning into video editing workflows. 
By developing AI-embedded prototypes for video production and using them to run studies, we aim to explore the design space of AI in video editing interfaces and the potential of human-AI collaboration in creative design.<\/p>\n<\/div><\/div><footer class=\"entry-footer\"><\/footer><\/article><br \/>\n<article class=\"iconbox iconbox_left    \"  itemscope=\"itemscope\" itemtype=\"https:\/\/schema.org\/CreativeWork\" ><div class=\"iconbox_content\"><header class=\"entry-content-header\"><div class=\"iconbox_icon heading-color \" aria-hidden='true' data-av_icon='\ue8c9' data-av_iconfont='entypo-fontello'  style='color:#83a846; ' ><\/div><h3 class='iconbox_content_title '  itemprop=\"headline\"   style='color:#83a846; '>Augmented Reality Television<\/h3><\/header><div class='iconbox_content_container  '  itemprop=\"text\"  ><ul>\n<li>Pejman Saeghe &#8211; School of Computer Science\/Interaction Analysis and Modelling Lab, University of Manchester, Manchester, Lancashire, United Kingdom; Research and Development, BBC, Salford, Lancashire, United Kingdom<\/li>\n<\/ul>\n<p><strong>Abstract:\u00a0<\/strong>Augmented reality (AR) has shown potential in creating engaging entertainment experiences for the general public. In this paper we take a user-centred design approach to a specific case of AR entertainment: a hybrid AR TV experience. We first investigate the passive AR TV viewing experience by adding AR artefacts to an existing TV programme. A prototype was implemented by augmenting a popular nature documentary, with synchronised content delivered using a Microsoft HoloLens and a TV. We evaluated the prototype in a user study (n=12). Our results suggest that adding AR artefacts to an existing TV programme can create an engaging user experience. We propose a hackathon and subsequent prototyping to explore the expectations of stakeholders, in particular content creators and early adopters. 
Findings from this body of work will help TV content creators in producing engaging experiences that leverage AR&#8217;s affordances.<\/p>\n<\/div><\/div><footer class=\"entry-footer\"><\/footer><\/article><br \/>\n<article class=\"iconbox iconbox_left    \"  itemscope=\"itemscope\" itemtype=\"https:\/\/schema.org\/CreativeWork\" ><div class=\"iconbox_content\"><header class=\"entry-content-header\"><div class=\"iconbox_icon heading-color \" aria-hidden='true' data-av_icon='\ue8c9' data-av_iconfont='entypo-fontello'  style='color:#83a846; ' ><\/div><h3 class='iconbox_content_title '  itemprop=\"headline\"   style='color:#83a846; '>Values-Led Intergenerational Participatory Design of Interactive Media to Enable Playful Interaction Between Preschool Children and Older Users<\/h3><\/header><div class='iconbox_content_container  '  itemprop=\"text\"  ><ul>\n<li>Veronica Pialorsi &#8211; School of Arts &amp; Media, The University of Salford, Manchester, United Kingdom<\/li>\n<\/ul>\n<p><strong>Abstract:\u00a0<\/strong>This research aims to explore how to engage with preschool children and older users in values-led participatory design processes. 
The project would result in a set of methodological recommendations and guidelines on how to design interactive media aimed at an intergenerational audience.<\/p>\n<\/div><\/div><footer class=\"entry-footer\"><\/footer><\/article><\/p><\/div>\n","protected":false},"excerpt":{"rendered":"","protected":false},"author":2,"featured_media":0,"parent":0,"menu_order":0,"comment_status":"closed","ping_status":"closed","template":"","meta":{"footnotes":""},"class_list":["post-936","page","type-page","status-publish","hentry"],"_links":{"self":[{"href":"https:\/\/tvx.acm.org\/2019\/wp-json\/wp\/v2\/pages\/936","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/tvx.acm.org\/2019\/wp-json\/wp\/v2\/pages"}],"about":[{"href":"https:\/\/tvx.acm.org\/2019\/wp-json\/wp\/v2\/types\/page"}],"author":[{"embeddable":true,"href":"https:\/\/tvx.acm.org\/2019\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/tvx.acm.org\/2019\/wp-json\/wp\/v2\/comments?post=936"}],"version-history":[{"count":5,"href":"https:\/\/tvx.acm.org\/2019\/wp-json\/wp\/v2\/pages\/936\/revisions"}],"predecessor-version":[{"id":955,"href":"https:\/\/tvx.acm.org\/2019\/wp-json\/wp\/v2\/pages\/936\/revisions\/955"}],"wp:attachment":[{"href":"https:\/\/tvx.acm.org\/2019\/wp-json\/wp\/v2\/media?parent=936"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}