{"id":920,"date":"2019-04-26T13:53:48","date_gmt":"2019-04-26T13:53:48","guid":{"rendered":"https:\/\/tvx.acm.org\/2019\/?page_id=920"},"modified":"2019-05-08T08:49:10","modified_gmt":"2019-05-08T08:49:10","slug":"work-in-progress","status":"publish","type":"page","link":"https:\/\/tvx.acm.org\/2019\/work-in-progress\/","title":{"rendered":"Work in Progress"},"content":{"rendered":"<div class=\"flex_column av_one_full  flex_column_div av-zero-column-padding first  \" style='border-radius:0px; '><section class=\"av_textblock_section \"  itemscope=\"itemscope\" itemtype=\"https:\/\/schema.org\/CreativeWork\" ><div class='avia_textblock  av_inherit_color '  style='color:#83a846; '  itemprop=\"text\" ><h1>Work in Progress<\/h1>\n<\/div><\/section><br \/>\n<section class=\"av_textblock_section \"  itemscope=\"itemscope\" itemtype=\"https:\/\/schema.org\/CreativeWork\" ><div class='avia_textblock  '   itemprop=\"text\" ><section class=\"av_textblock_section \">\n<div class=\"avia_textblock \">\n<p>The following work in progress submissions will be presented as posters with an optional demo. 
These poster presentations will take place on the third day of the conference (June 7th) as part of our interactive conference \u2018bazaar\u2019 in BBC\u2019s Quay House.\u00a0There will also be a series of lightning talks introducing the work in progress submissions at 17:00-17:30 on Day 2 (June 6th) in the Quays Theatre.<\/p>\n<\/div>\n<\/section>\n<div class=\"hr hr-invisible avia-builder-el-3 el_after_av_textblock el_before_av_icon_box \"><\/div>\n<\/div><\/section><br \/>\n<div style='height:20px' class='hr hr-invisible  '><span class='hr-inner ' ><span class='hr-inner-style'><\/span><\/span><\/div><br \/>\n<article class=\"iconbox iconbox_left    \"  itemscope=\"itemscope\" itemtype=\"https:\/\/schema.org\/CreativeWork\" ><div class=\"iconbox_content\"><header class=\"entry-content-header\"><div class=\"iconbox_icon heading-color \" aria-hidden='true' data-av_icon='\ue864' data-av_iconfont='entypo-fontello'  style='color:#83a846; ' ><\/div><h3 class='iconbox_content_title '  itemprop=\"headline\"   style='color:#83a846; '>Towards Automatic Cinematography and Annotation for 360\u00b0 Video<\/h3><\/header><div class='iconbox_content_container  '  itemprop=\"text\"  ><ul>\n<li>Hannes Fassold &#8211; JOANNEUM RESEARCH, DIGITAL, Graz, Austria<\/li>\n<li>Barnabas Takacs &#8211; Digital Elite \/ PanoCAST, Los Angeles, California, United States<\/li>\n<\/ul>\n<p><strong>Abstract:\u00a0<\/strong>Omnidirectional (360\u00b0) video is a novel media format, rapidly becoming adopted in media production and consumption as part of today&#8217;s ongoing virtual reality revolution. Due to its novelty, there is a lack of tools for producing highly engaging 360\u00b0 video for consumption on a multitude of platforms (VR headsets, smartphones or conventional TV sets). In this work, we describe our preliminary work on tools for automating several tasks in the production of 360\u00b0 video, which are tedious and time-consuming when done manually. 
We propose tools for automated cinematography (generating a lean-back experience without user interaction for conventional TV sets) and automated annotation of 360\u00b0 video to simplify linking to other resources like text or 2D images\/videos. Both tools employ deep-learning-based methods for extracting information about the objects in the scene. We will discuss the current state of these tools and ways to improve them in the future.<\/p>\n<\/div><\/div><footer class=\"entry-footer\"><\/footer><\/article><br \/>\n<article class=\"iconbox iconbox_left    \"  itemscope=\"itemscope\" itemtype=\"https:\/\/schema.org\/CreativeWork\" ><div class=\"iconbox_content\"><header class=\"entry-content-header\"><div class=\"iconbox_icon heading-color \" aria-hidden='true' data-av_icon='\ue864' data-av_iconfont='entypo-fontello'  style='color:#83a846; ' ><\/div><h3 class='iconbox_content_title '  itemprop=\"headline\"   style='color:#83a846; '>Understanding User Attention In VR Using Gaze Controlled Games<\/h3><\/header><div class='iconbox_content_container  '  itemprop=\"text\"  ><ul>\n<li>Mr Murtada Dohan &#8211; Faculty of Arts, Science &amp; Technology, University of Northampton, Northampton, United Kingdom<\/li>\n<li>Dr Mu Mu &#8211; The University of Northampton, Northampton, United Kingdom<\/li>\n<\/ul>\n<p><strong>Abstract:\u00a0<\/strong>Understanding the user&#8217;s intent plays a pivotal role in developing immersive and personalised media applications. This paper introduces our recent research and user experiments towards interpreting user attention in virtual reality (VR). We designed a gaze-controlled Unity VR game for this study and implemented additional libraries to bridge raw eye-tracking data with game elements and mechanics. 
The experimental data show distinctive patterns of fixation spans, which are paired with user interviews to help us explore characteristics of user attention.<\/p>\n<\/div><\/div><footer class=\"entry-footer\"><\/footer><\/article><br \/>\n<article class=\"iconbox iconbox_left    \"  itemscope=\"itemscope\" itemtype=\"https:\/\/schema.org\/CreativeWork\" ><div class=\"iconbox_content\"><header class=\"entry-content-header\"><div class=\"iconbox_icon heading-color \" aria-hidden='true' data-av_icon='\ue864' data-av_iconfont='entypo-fontello'  style='color:#83a846; ' ><\/div><h3 class='iconbox_content_title '  itemprop=\"headline\"   style='color:#83a846; '>TV Channels in Your Pocket! Linking Smart Pockets to Smart TVs<\/h3><\/header><div class='iconbox_content_container  '  itemprop=\"text\"  ><ul>\n<li>Irina Popovici &#8211; MintViz Lab, University Stefan cel Mare of Suceava, Suceava, Romania<\/li>\n<li>Radu-Daniel Vatavu &#8211; MintViz Lab, University Stefan cel Mare of Suceava, Suceava, Romania<\/li>\n<li>Wenjun Wu &#8211; Beihang University, Beijing, China<\/li>\n<\/ul>\n<p><strong>Abstract:\u00a0<\/strong>We present a gesture-based user interface for smart TVs that employs deictic gestures to control the content displayed on the TV screen. Our interface implements an instance of the &#8220;Smart-Pockets&#8221; interaction technique, where links to digital content, in our case to users&#8217; preferred television channels and shows, are stored inside users&#8217; pockets and readily accessed with a mere pointing of the hand to those pockets. Pointing gestures to the pockets and towards the TV screen are detected using the Inertial Measurement Unit embedded in Myo, a smart armband. 
We discuss the ways in which our prototype opens new opportunities for hybrid, gesture- and pointing-based interactions for smart TVs, and also opportunities for designing interactions that take place at the periphery of user attention.<\/p>\n<\/div><\/div><footer class=\"entry-footer\"><\/footer><\/article><br \/>\n<article class=\"iconbox iconbox_left    \"  itemscope=\"itemscope\" itemtype=\"https:\/\/schema.org\/CreativeWork\" ><div class=\"iconbox_content\"><header class=\"entry-content-header\"><div class=\"iconbox_icon heading-color \" aria-hidden='true' data-av_icon='\ue864' data-av_iconfont='entypo-fontello'  style='color:#83a846; ' ><\/div><h3 class='iconbox_content_title '  itemprop=\"headline\"   style='color:#83a846; '>How VR 360\u00ba Impacts the Immersion of the Viewer of Suspense AV Content<\/h3><\/header><div class='iconbox_content_container  '  itemprop=\"text\"  ><ul>\n<li>Tiffany Marques &#8211; Department of Communication and Art, Aveiro University, Oliveira do Bairro, Portugal<\/li>\n<li>M\u00e1rio Vairinhos &#8211; Communication and Art, University of Aveiro, Aveiro, Aveiro, Portugal<\/li>\n<li>Pedro Almeida &#8211; Digimedia, University of Aveiro, Aveiro, Aveiro, Portugal<\/li>\n<\/ul>\n<p><strong>Abstract:\u00a0<\/strong>Virtual Reality (VR) is increasingly a tempting option for creators seeking more immersive VR audiovisual experiences. In the suspense genre, VR promises to deliver a more impactful experience to the viewers. Nevertheless, this promise still needs to be confirmed. This study focused on the creation of suspense genre content to understand the impact on the viewer\u2019s immersive experience when presented in stereoscopic VR 360\u00ba. An evaluation was conducted with a convenience sample of 36 participants. Differences in immersion were evaluated when viewing the same suspense audiovisual content in three formats: VR 360\u00ba, 360\u00ba, and 2D non-panoramic. 
The results showed that VR 360\u00ba intensifies perceptual immersion but diminishes narrative immersion, a consequence of the 360\u00ba format.<\/p>\n<\/div><\/div><footer class=\"entry-footer\"><\/footer><\/article><br \/>\n<article class=\"iconbox iconbox_left    \"  itemscope=\"itemscope\" itemtype=\"https:\/\/schema.org\/CreativeWork\" ><div class=\"iconbox_content\"><header class=\"entry-content-header\"><div class=\"iconbox_icon heading-color \" aria-hidden='true' data-av_icon='\ue864' data-av_iconfont='entypo-fontello'  style='color:#83a846; ' ><\/div><h3 class='iconbox_content_title '  itemprop=\"headline\"   style='color:#83a846; '>Visual Augmentation of the Television Watching Experience: Manifesto and Agenda<\/h3><\/header><div class='iconbox_content_container  '  itemprop=\"text\"  ><ul>\n<li>Irina Popovici &#8211; MintViz Lab, University Stefan cel Mare of Suceava, Suceava, Romania<\/li>\n<li>Radu-Daniel Vatavu &#8211; MintViz Lab, University Stefan cel Mare of Suceava, Suceava, Romania<\/li>\n<\/ul>\n<p><strong>Abstract:\u00a0<\/strong>We present an agenda for the visual augmentation of television watching based on recently booming technologies, such as smart wearables and Augmented\/Mixed Reality. Our agenda goes beyond second-screen viewing trends to explore the opportunities offered by wearable devices and gadgets, such as smartglasses and head-mounted displays, to deliver rich visual experiences to viewers. 
While still a work in progress, we hope that our contribution will inspire the TVX community and, consequently, foster critical and constructive discussions towards new devices, application opportunities, and tools to visually augment the television watching experience.<\/p>\n<\/div><\/div><footer class=\"entry-footer\"><\/footer><\/article><br \/>\n<article class=\"iconbox iconbox_left    \"  itemscope=\"itemscope\" itemtype=\"https:\/\/schema.org\/CreativeWork\" ><div class=\"iconbox_content\"><header class=\"entry-content-header\"><div class=\"iconbox_icon heading-color \" aria-hidden='true' data-av_icon='\ue864' data-av_iconfont='entypo-fontello'  style='color:#83a846; ' ><\/div><h3 class='iconbox_content_title '  itemprop=\"headline\"   style='color:#83a846; '>Augmented Fast-Forwarding: Can we Improve Advertising Impact by Enriching Fast-forwarded Commercials?<\/h3><\/header><div class='iconbox_content_container  '  itemprop=\"text\"  ><ul>\n<li>PhD Saar Bossuyt &#8211; University College Leuven Limburg, Leuven, Belgium<\/li>\n<li>Roos Voorend &#8211; Meaningful Interactions Lab, KU Leuven &#8211; imec, Leuven, Belgium<\/li>\n<li>David Geerts &#8211; Meaningful Interactions Lab (mintlab), KU Leuven, Leuven, Belgium<\/li>\n<\/ul>\n<p><strong>Abstract:\u00a0<\/strong>The trend of time-shifted viewing worries television networks and advertisers, as time-shifting viewers often fast-forward through commercials, resulting in lower advertising impact. The present research tests an alternative solution to this problem by augmenting fast-forwarded commercials with brand logos placed in the center of the screen. We tested the potential of augmented fast-forwarding in an experiment in which participants watched a television show interrupted by a commercial break under three experimental conditions: commercials played at regular speed, at fast-forwarded speed, or at fast-forwarded speed but enriched with logos. 
Advertising impact was measured during the commercial break (using eye-tracking glasses), right after the TV show (brand recognition), and the day after the TV show (day-after brand recognition). Interestingly, the results showed that augmented fast-forwarding performed as well as regular-speed viewing on two out of three advertising impact measures.<\/p>\n<\/div><\/div><footer class=\"entry-footer\"><\/footer><\/article><br \/>\n<article class=\"iconbox iconbox_left    \"  itemscope=\"itemscope\" itemtype=\"https:\/\/schema.org\/CreativeWork\" ><div class=\"iconbox_content\"><header class=\"entry-content-header\"><div class=\"iconbox_icon heading-color \" aria-hidden='true' data-av_icon='\ue864' data-av_iconfont='entypo-fontello'  style='color:#83a846; ' ><\/div><h3 class='iconbox_content_title '  itemprop=\"headline\"   style='color:#83a846; '>Disruptive Approaches for Subtitling in Immersive Environments<\/h3><\/header><div class='iconbox_content_container  '  itemprop=\"text\"  ><ul>\n<li>Dr Chris J Hughes &#8211; University of Salford, Manchester, United Kingdom<\/li>\n<li>Mario Montagud Climent &#8211; i2CAT Foundation, Barcelona, Spain<\/li>\n<li>Mr. Peter tho Pesch &#8211; Institut f\u00fcr Rundfunktechnik GmbH, Munich, Germany<\/li>\n<\/ul>\n<p><strong>Abstract:<\/strong> The Immersive Accessibility Project (ImAc) explores how accessibility services can be integrated with 360-degree video as well as new methods for enabling universal access to immersive content. ImAc is focused on inclusivity and addresses the needs of users of all ages, including those with sensory or learning disabilities, and considers language and user preferences. The project focuses on moving away from the constraints of existing technologies and explores new methods for creating a personal experience for each consumer. 
It is not good enough to simply retrofit subtitles into immersive content: this paper attempts to disrupt the industry with new and often controversial methods. It provides an overview of the ImAc project and proposes guiding methods for subtitling in immersive environments. We discuss the current state-of-the-art for subtitling in immersive environments and the rendering of subtitles in the user interface within the ImAc project. We then discuss new experimental rendering modes that have been implemented, including a responsive subtitle approach, which dynamically re-blocks subtitles to fit the available space, and explore alternative rendering techniques where the subtitles are attached to the scene.<\/p>\n<\/div><\/div><footer class=\"entry-footer\"><\/footer><\/article><br \/>\n<article class=\"iconbox iconbox_left    \"  itemscope=\"itemscope\" itemtype=\"https:\/\/schema.org\/CreativeWork\" ><div class=\"iconbox_content\"><header class=\"entry-content-header\"><div class=\"iconbox_icon heading-color \" aria-hidden='true' data-av_icon='\ue864' data-av_iconfont='entypo-fontello'  style='color:#83a846; ' ><\/div><h3 class='iconbox_content_title '  itemprop=\"headline\"   style='color:#83a846; '>Bandersnatch, Yea or Nay? Reception and User Experience of an Interactive Digital Narrative Video<\/h3><\/header><div class='iconbox_content_container  '  itemprop=\"text\"  ><ul>\n<li>Dr. Christian Roth &#8211; Professorship Interactive Narrative Design, HKU University of the Arts, Utrecht, Netherlands<\/li>\n<li>Hartmut Koenitz &#8211; HKU University of the Arts Utrecht, Utrecht, Netherlands<\/li>\n<\/ul>\n<p><strong>Abstract:<\/strong>\u00a0The Netflix production Bandersnatch represents a potentially crucial step for interactive digital narrative videos, due to the platform\u2019s reach, popularity, and ability to finance costly experimental productions. 
Indeed, Netflix has announced that it will invest more into interactive narratives \u2013 moving into romance and other genres \u2013 which makes Bandersnatch even more important as a first step and harbinger of things yet to come. For us, the question was therefore how audiences react to Bandersnatch. What are the factors driving users\u2019 enjoyment, and what factors might diminish the experience? For example, the novelty value of an interactive experience on Netflix, or its combination with the successful series Black Mirror, might be a crucial aspect. We approach these questions from two angles \u2013 with a critical analysis of the work itself, including audience reactions, and an initial user study using Roth\u2019s measurement toolbox (N = 32).<\/p>\n<\/div><\/div><footer class=\"entry-footer\"><\/footer><\/article><br \/>\n<article class=\"iconbox iconbox_left    \"  itemscope=\"itemscope\" itemtype=\"https:\/\/schema.org\/CreativeWork\" ><div class=\"iconbox_content\"><header class=\"entry-content-header\"><div class=\"iconbox_icon heading-color \" aria-hidden='true' data-av_icon='\ue864' data-av_iconfont='entypo-fontello'  style='color:#83a846; ' ><\/div><h3 class='iconbox_content_title '  itemprop=\"headline\"   style='color:#83a846; '>EmoJar: Collecting and Reliving Happy and Memorable Media Moments<\/h3><\/header><div class='iconbox_content_container  '  itemprop=\"text\"  ><ul>\n<li>Pedro Carvalho &#8211; LASIGE, Faculdade de Ci\u00eancias, Universidade de Lisboa, Lisboa, Portugal<\/li>\n<li>Prof. Teresa Chambel &#8211; LASIGE, Faculdade de Ci\u00eancias, Universidade de Lisboa, Lisboa, Portugal<\/li>\n<\/ul>\n<p><strong>Abstract:\u00a0<\/strong>This paper explores the potential of media and how it can be leveraged to create a tool to help individuals become more aware of their emotions and promote their psychological wellbeing. 
It discusses the main motivation and background and presents EmoJar, an interactive application being designed and developed to allow users to collect and review media that have a significant impact and remind them of the good things they experience over time. EmoJar is based on the Happiness Jar concept, enriched with media and their emotional impact, as an extension to Media4WellBeing, aligning with the goals and approaches of Positive Psychology and Positive Computing.<\/p>\n<\/div><\/div><footer class=\"entry-footer\"><\/footer><\/article><br \/>\n<article class=\"iconbox iconbox_left    \"  itemscope=\"itemscope\" itemtype=\"https:\/\/schema.org\/CreativeWork\" ><div class=\"iconbox_content\"><header class=\"entry-content-header\"><div class=\"iconbox_icon heading-color \" aria-hidden='true' data-av_icon='\ue864' data-av_iconfont='entypo-fontello'  style='color:#83a846; ' ><\/div><h3 class='iconbox_content_title '  itemprop=\"headline\"   style='color:#83a846; '>PokeRepo Go++: One-man Live Reporting System with a Commentator Function<\/h3><\/header><div class='iconbox_content_container  '  itemprop=\"text\"  ><ul>\n<li>Yoshinari Takegawa &#8211; Future University, Hakodate, Japan<\/li>\n<li>Kohei Matsumura &#8211; Ritsumeikan University, Shiga, Japan<\/li>\n<li>Hiroyuki Manabe &#8211; Information Science and Engineering, Shibaura Institute of Technology, Koto-ku, Tokyo, Japan<\/li>\n<\/ul>\n<p><strong>Abstract:\u00a0<\/strong>In this paper, we propose PokeRepo Go++, our one-man live reporting system (PokeRepo Go) with an added commentator function that enables outside experts to make comments. As a consequence of the spread of live broadcast streaming services, anybody is now able to broadcast his\/her own interests, concerns and everyday occurrences. To support reports made by a single person in the form of a live broadcast, we have developed and actually operated PokeRepo Go. 
PokeRepo Go could only transmit video to viewers in one direction, i.e. non-interactively. Because an interviewee or reporter received no reaction from the audience, there was uncertainty as to whether the broadcast content was being conveyed to the audience as he\/she intended. This exposed the importance of two-directional, i.e. interactive, communication. In PokeRepo Go++, we provide a commentator function by which commentators can seamlessly participate in live broadcast content and communicate naturally with an interviewee. Additionally, we designed the UI so that the commentator function remains compatible with the pre-existing PokeRepo Go functions, such as filming operations (including acquisition of video\/audio and control of lighting) and editing operations. At a demo session at a domestic conference, we operated the prototype system of PokeRepo Go++ and evaluated its usefulness.<\/p>\n<\/div><\/div><footer class=\"entry-footer\"><\/footer><\/article><br \/>\n<article class=\"iconbox iconbox_left    \"  itemscope=\"itemscope\" itemtype=\"https:\/\/schema.org\/CreativeWork\" ><div class=\"iconbox_content\"><header class=\"entry-content-header\"><div class=\"iconbox_icon heading-color \" aria-hidden='true' data-av_icon='\ue864' data-av_iconfont='entypo-fontello'  style='color:#83a846; ' ><\/div><h3 class='iconbox_content_title '  itemprop=\"headline\"   style='color:#83a846; '>Measuring Audience Appreciation via Viewing Pattern Analysis<\/h3><\/header><div class='iconbox_content_container  '  itemprop=\"text\"  ><ul>\n<li>Amaya Herranz Donnan &#8211; British Broadcasting Corporation, London, United Kingdom<\/li>\n<\/ul>\n<p><strong>Abstract:\u00a0<\/strong>Accurately quantifying audience appreciation poses significant technical challenges, privacy concerns and difficulties in scaling the results to realistic audience sizes. 
This paper presents a new approach to appreciation measurement based on the analysis of BBC iPlayer on-demand viewing pattern data, such as the timeline of the user\u2019s interactions with the play button, combined with appreciation scores from traditional feedback surveys. This methodology infers implicit viewer appreciation automatically, without adding significant cost or time overheads and without requiring additional input from the participant or the use of intrusive methods, such as facial recognition. The results obtained, based on data from a sample of over 27,000 iPlayer users, show accuracy scores above 90% for predictions generated using computationally efficient models, including Decision Trees and Random Forests. The analysis suggests that the user\u2019s appreciation of a programme can be predicted based on their online viewing behaviour, potentially improving our understanding of the audience.<\/p>\n<\/div><\/div><footer class=\"entry-footer\"><\/footer><\/article><br \/>\n<article class=\"iconbox iconbox_left    \"  itemscope=\"itemscope\" itemtype=\"https:\/\/schema.org\/CreativeWork\" ><div class=\"iconbox_content\"><header class=\"entry-content-header\"><div class=\"iconbox_icon heading-color \" aria-hidden='true' data-av_icon='\ue864' data-av_iconfont='entypo-fontello'  style='color:#83a846; ' ><\/div><h3 class='iconbox_content_title '  itemprop=\"headline\"   style='color:#83a846; '>Augmenting Public Reading Experience to Promote Care Home Residents\u2019 Social Interaction<\/h3><\/header><div class='iconbox_content_container  '  itemprop=\"text\"  ><ul>\n<li>Kai Kang &#8211; Industrial Design Department, Eindhoven University of Technology, Eindhoven, Netherlands<\/li>\n<li>dr. Jun Hu &#8211; Industrial Design, Eindhoven University of Technology, Eindhoven, Netherlands<\/li>\n<li>dr.ir. 
Bart Hengeveld &#8211; Industrial Design, Eindhoven University of Technology, Eindhoven, Netherlands<\/li>\n<li>Caroline Hummels &#8211; Industrial Design, Eindhoven University of Technology, Eindhoven, Netherlands<\/li>\n<\/ul>\n<p><strong>Abstract:\u00a0<\/strong>Institutional care settings are often described as places where residents suffer from social isolation. Although sharing media preferences, consumption patterns and practices is believed to be effective in triggering communication and developing friendships among older adults, it rarely happens in care homes. Our research explores the potential to promote residents\u2019 social interaction by augmenting public print media. In this work-in-progress, we started with newspapers as an example to understand residents\u2019 information sources, media habits and preferences. We were also interested in their perceptions of the attractiveness and sociability of augmented print media. The findings showed that the participants held positive attitudes towards such technologies. 
Preliminary design requirements were summarized to inform the future development of related social technologies in public caring environments.<\/p>\n<\/div><\/div><footer class=\"entry-footer\"><\/footer><\/article><br \/>\n<article class=\"iconbox iconbox_left    \"  itemscope=\"itemscope\" itemtype=\"https:\/\/schema.org\/CreativeWork\" ><div class=\"iconbox_content\"><header class=\"entry-content-header\"><div class=\"iconbox_icon heading-color \" aria-hidden='true' data-av_icon='\ue864' data-av_iconfont='entypo-fontello'  style='color:#83a846; ' ><\/div><h3 class='iconbox_content_title '  itemprop=\"headline\"   style='color:#83a846; '>Augmenting Television With Augmented Reality<\/h3><\/header><div class='iconbox_content_container  '  itemprop=\"text\"  ><ul>\n<li>Pejman Saeghe &#8211; School of Computer Science\/Interaction Analysis and Modelling Lab, University of Manchester, Manchester, Lancashire, United Kingdom<br \/>\nResearch and Development, BBC, Salford, Lancashire, United Kingdom<\/li>\n<li>Sarah Clinch &#8211; School of Computer Science, University of Manchester, Manchester, United Kingdom<\/li>\n<li>Bruce Weir &#8211; Research and Development, The British Broadcasting Corporation, Salford, Lancashire, United Kingdom<\/li>\n<li>Maxine Glancy &#8211; BBC Research &amp; Development, Media City UK, Manchester, United Kingdom<\/li>\n<li>Dr. Vinoba Vinayagamoorthy &#8211; BBC, London, United Kingdom<\/li>\n<li>Ollie Pattinson &#8211; Research and Development, The British Broadcasting Corporation, Salford, Lancashire, United Kingdom<\/li>\n<li>Professor Stephen Robert Pettifer &#8211; School of Computer Science, University of Manchester, Manchester, Lancashire, United Kingdom<\/li>\n<li>Robert Stevens &#8211; School of Computer Science, University of Manchester, Manchester, Lancashire, United Kingdom<\/li>\n<\/ul>\n<p><strong>Abstract:\u00a0<\/strong>This paper explores the effects of adding augmented reality (AR) artefacts to an existing TV programme. 
A prototype was implemented augmenting a popular nature documentary. Synchronised content was delivered over a Microsoft HoloLens and a TV. Our preliminary findings suggest that the addition of AR to an existing TV programme can result in the creation of engaging experiences. However, presenting content outside the traditional TV window challenges established storytelling conventions and viewer expectations. Further research is required to understand the risks and opportunities presented when adding AR artefacts to TV.<\/p>\n<\/div><\/div><footer class=\"entry-footer\"><\/footer><\/article><\/p><\/div>\n","protected":false},"excerpt":{"rendered":"","protected":false},"author":2,"featured_media":0,"parent":0,"menu_order":0,"comment_status":"closed","ping_status":"closed","template":"","meta":{"footnotes":""},"class_list":["post-920","page","type-page","status-publish","hentry"],"_links":{"self":[{"href":"https:\/\/tvx.acm.org\/2019\/wp-json\/wp\/v2\/pages\/920","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/tvx.acm.org\/2019\/wp-json\/wp\/v2\/pages"}],"about":[{"href":"https:\/\/tvx.acm.org\/2019\/wp-json\/wp\/v2\/types\/page"}],"author":[{"embeddable":true,"href":"https:\/\/tvx.acm.org\/2019\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/tvx.acm.org\/2019\/wp-json\/wp\/v2\/comments?post=920"}],"version-history":[{"count":17,"href":"https:\/\/tvx.acm.org\/2019\/wp-json\/wp\/v2\/pages\/920\/revisions"}],"predecessor-version":[{"id":1023,"href":"https:\/\/tvx.acm.org\/2019\/wp-json\/wp\/v2\/pages\/920\/revisions\/1023"}],"wp:attachment":[{"href":"https:\/\/tvx.acm.org\/2019\/wp-json\/wp\/v2\/media?parent=920"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}