{"id":910,"date":"2019-04-26T13:17:54","date_gmt":"2019-04-26T13:17:54","guid":{"rendered":"https:\/\/tvx.acm.org\/2019\/?page_id=910"},"modified":"2019-04-29T08:21:51","modified_gmt":"2019-04-29T08:21:51","slug":"demos-programme","status":"publish","type":"page","link":"https:\/\/tvx.acm.org\/2019\/demos-programme\/","title":{"rendered":"Demos"},"content":{"rendered":"<div class=\"flex_column av_one_full  flex_column_div av-zero-column-padding first  \" style='border-radius:0px; '><section class=\"av_textblock_section \"  itemscope=\"itemscope\" itemtype=\"https:\/\/schema.org\/CreativeWork\" ><div class='avia_textblock  av_inherit_color '  style='color:#83a846; '  itemprop=\"text\" ><h1>Demos<\/h1>\n<\/div><\/section><br \/>\n<section class=\"av_textblock_section \"  itemscope=\"itemscope\" itemtype=\"https:\/\/schema.org\/CreativeWork\" ><div class='avia_textblock  '   itemprop=\"text\" ><p>The following demonstrations will be presented on the third day of the conference (June 7th) as part of our interactive conference &#8216;bazaar&#8217; in BBC&#8217;s Quay House. 
There will also be a series of lightning talks introducing the demos at 17:00-17:30 on Day 2 (June 6th) in the Quays Theatre.<\/p>\n<\/div><\/section><br \/>\n<div style='height:20px' class='hr hr-invisible  '><span class='hr-inner ' ><span class='hr-inner-style'><\/span><\/span><\/div><br \/>\n<article class=\"iconbox iconbox_left    \"  itemscope=\"itemscope\" itemtype=\"https:\/\/schema.org\/CreativeWork\" ><div class=\"iconbox_content\"><header class=\"entry-content-header\"><div class=\"iconbox_icon heading-color \" aria-hidden='true' data-av_icon='\ue849' data-av_iconfont='entypo-fontello'  style='color:#83a846; ' ><\/div><h3 class='iconbox_content_title '  itemprop=\"headline\"   style='color:#83a846; '>ImAc Player: Enabling a Personalized Consumption of Accessible Immersive Contents<\/h3><\/header><div class='iconbox_content_container  '  itemprop=\"text\"  ><ul>\n<li>Mario Montagud Climent &#8211; Media &amp; Internet Area, i2cat Foundation, Barcelona, Spain; Department of Informatics, University of Valencia, Valencia, Spain<\/li>\n<li>Isaac Fraile &#8211; i2CAT, Barcelona, Spain<\/li>\n<li>Einar Meyerson &#8211; Fundaci\u00f3 i2CAT, Barcelona, Spain<\/li>\n<li>Mar\u00eda Gen\u00eds &#8211; i2CAT Foundation, Barcelona, Spain<\/li>\n<li>Sergi Fern\u00e1ndez &#8211; i2CAT, Barcelona, Spain<\/li>\n<\/ul>\n<p><strong>Abstract:\u00a0<\/strong>Accessibility is a fundamental requirement for every (multimedia) service. Although immersive media services are on the rise, they still lack accessibility features. This paper presents a web-based player that enables the presentation of immersive 360\u00b0 contents augmented by a set of access services, such as subtitles, (spatial) audio description and sign language. The paper initially provides an overview of the end-to-end broadcast platform in which the player is integrated. Then, the key components that make up the player and its appearance are briefly introduced. 
Finally, the different accessibility, personalisation and interaction features implemented in the player are described. The player is being tested in a series of pilot actions involving users with accessibility needs, is being used as a proof of concept in different standardization activities, and is envisioned to be integrated into the services provided by European broadcasters.<\/p>\n<\/div><\/div><footer class=\"entry-footer\"><\/footer><\/article><br \/>\n<article class=\"iconbox iconbox_left    \"  itemscope=\"itemscope\" itemtype=\"https:\/\/schema.org\/CreativeWork\" ><div class=\"iconbox_content\"><header class=\"entry-content-header\"><div class=\"iconbox_icon heading-color \" aria-hidden='true' data-av_icon='\ue849' data-av_iconfont='entypo-fontello'  style='color:#83a846; ' ><\/div><h3 class='iconbox_content_title '  itemprop=\"headline\"   style='color:#83a846; '>Abstract Painting Practice: Expanding in a Virtual World<\/h3><\/header><div class='iconbox_content_container  '  itemprop=\"text\"  ><ul>\n<li>Alison Goodyear &#8211; Faculty of Arts, Science and Technology, University of Northampton, Northampton, United Kingdom<\/li>\n<li>Dr Mu Mu &#8211; The University of Northampton, Northampton, United Kingdom<\/li>\n<\/ul>\n<p><strong>Abstract:\u00a0<\/strong>This paper sets out to describe, through a demo for the TVX Conference, how virtual reality (VR) painting software is beginning to open up as a new medium for visual artists working in the field of abstract painting. The demo achieves this by describing how an artist who usually makes abstract paintings with paint and canvas in a studio, that is, those existing as physical objects in the world, encounters and negotiates the process of making abstract paintings in VR using Tilt Brush software and Head-Mounted Displays (HMD). 
This paper also indicates potential future avenues for content creation in this emerging field and what this might mean not only for the artist and the viewer, but also for art institutions trying to provide effective methods of delivery for innovative content in order to develop and grow new audiences.<\/p>\n<\/div><\/div><footer class=\"entry-footer\"><\/footer><\/article><br \/>\n<article class=\"iconbox iconbox_left    \"  itemscope=\"itemscope\" itemtype=\"https:\/\/schema.org\/CreativeWork\" ><div class=\"iconbox_content\"><header class=\"entry-content-header\"><div class=\"iconbox_icon heading-color \" aria-hidden='true' data-av_icon='\ue849' data-av_iconfont='entypo-fontello'  style='color:#83a846; ' ><\/div><h3 class='iconbox_content_title '  itemprop=\"headline\"   style='color:#83a846; '>Deb8: A Tool for Collaborative Analysis of Video<\/h3><\/header><div class='iconbox_content_container  '  itemprop=\"text\"  ><ul>\n<li>Guilherme Carneiro &#8211; School of Computer Science, University of St Andrews, St Andrews, United Kingdom<\/li>\n<\/ul>\n<p><strong>Abstract:\u00a0<\/strong>Public, parliamentary and television debates are commonplace in modern democracies. However, developing an understanding and communicating with others is often limited to passive viewing or, at best, textual discussion on social media. To address this, we present the design and implementation of Deb8, a tool that allows collaborative analysis of video-based TV debates. The tool provides a novel UI designed to enable and capture rich synchronous collaborative discussion of videos based on argumentation graphs that link quotes from the video, opinions, questions, and external evidence. Deb8 supports the creation of rich idea structures based on argumentation theory as well as collaborative tagging of the relevance, support and trustworthiness of the different elements. 
We report an evaluation of the tool design and a reflection on the challenges involved.<\/p>\n<\/div><\/div><footer class=\"entry-footer\"><\/footer><\/article><br \/>\n<article class=\"iconbox iconbox_left    \"  itemscope=\"itemscope\" itemtype=\"https:\/\/schema.org\/CreativeWork\" ><div class=\"iconbox_content\"><header class=\"entry-content-header\"><div class=\"iconbox_icon heading-color \" aria-hidden='true' data-av_icon='\ue849' data-av_iconfont='entypo-fontello'  style='color:#83a846; ' ><\/div><h3 class='iconbox_content_title '  itemprop=\"headline\"   style='color:#83a846; '>Framework for Web Delivery of Immersive Audio Experiences Using Device Orchestration<\/h3><\/header><div class='iconbox_content_container  '  itemprop=\"text\"  ><ul>\n<li>Kristian Hentschel &#8211; BBC R&amp;D, Salford, United Kingdom<\/li>\n<li>Jon Francombe &#8211; BBC R&amp;D, Salford, United Kingdom<\/li>\n<\/ul>\n<p><strong>Abstract:\u00a0<\/strong>This demonstration introduces the use of orchestrated media devices and object-based broadcasting to create immersive spatial audio experiences. Mobile phones, tablets, and laptops are synchronised to a common media timeline and contribute one or more individually delivered audio objects to the overall mix. A rule set for assigning objects to devices was developed through a trial production&#8212;a 13-minute audio drama called The Vostok-K Incident. The same timing model as in HbbTV 2.0 media synchronisation is used, and future work could augment linear television broadcasts or create novel interactive audio-visual experiences for multiple users. The demonstration will allow delegates to connect their mobile phones to the system. 
A unique mix is created based on the number and selected locations of connected devices.<\/p>\n<\/div><\/div><footer class=\"entry-footer\"><\/footer><\/article><br \/>\n<article class=\"iconbox iconbox_left    \"  itemscope=\"itemscope\" itemtype=\"https:\/\/schema.org\/CreativeWork\" ><div class=\"iconbox_content\"><header class=\"entry-content-header\"><div class=\"iconbox_icon heading-color \" aria-hidden='true' data-av_icon='\ue849' data-av_iconfont='entypo-fontello'  style='color:#83a846; ' ><\/div><h3 class='iconbox_content_title '  itemprop=\"headline\"   style='color:#83a846; '>Snapscreen Clip Share: Utilizing Computer Vision to Bridge TV and Social Media<\/h3><\/header><div class='iconbox_content_container  '  itemprop=\"text\"  ><ul>\n<li>Mr Thomas Willomitzer &#8211; Snapscreen, Vienna, Austria<\/li>\n<li>Tyler Tracy &#8211; Snapscreen, Vienna, Austria<\/li>\n<li>Markus Rumler &#8211; Snapscreen, Vienna, Austria<\/li>\n<\/ul>\n<p><strong>Abstract:\u00a0<\/strong>This demonstration showcases Snapscreen Clip Share: a second-screen technology for seamless identification and social sharing of live or recorded TV content. With Clip Share, app users take a snapshot of their viewing screen to generate a broadcast-quality clip of the current program instantly on their mobile device; then, users rewind through the retrieved segment, trim the beginning and end of their clip, add a personal message to kick off discussion, and share the clip through a range of messaging apps and social media platforms. Where existing clip solutions allow broadcasters and rights-holders to produce clips from broadcast content, Clip Share facilitates fast and easy clipping for app users in order to drive content distribution and recirculation by viewers themselves. 
Leveraging computer vision to streamline clip creation and sharing provides an intuitive bridge between TV content and social media interactions.<\/p>\n<\/div><\/div><footer class=\"entry-footer\"><\/footer><\/article><br \/>\n<article class=\"iconbox iconbox_left    \"  itemscope=\"itemscope\" itemtype=\"https:\/\/schema.org\/CreativeWork\" ><div class=\"iconbox_content\"><header class=\"entry-content-header\"><div class=\"iconbox_icon heading-color \" aria-hidden='true' data-av_icon='\ue849' data-av_iconfont='entypo-fontello'  style='color:#83a846; ' ><\/div><h3 class='iconbox_content_title '  itemprop=\"headline\"   style='color:#83a846; '>Visualizing Gaze Presence for 360\u00b0 Camera<\/h3><\/header><div class='iconbox_content_container  '  itemprop=\"text\"  ><ul>\n<li>David A. Shamma &#8211; FXPAL, Palo Alto, California, United States<\/li>\n<li>Tony Dunnigan &#8211; FXPAL, Palo Alto, California, United States<\/li>\n<li>Yulius Tjahjadi &#8211; FXPAL, Palo Alto, California, United States<\/li>\n<li>John J Doherty &#8211; FXPAL, Palo Alto, California, United States<\/li>\n<\/ul>\n<p><strong>Abstract:\u00a0<\/strong>Advancements in 360\u00b0 cameras have led to an increase in related livestreams. In the case of video conferencing, 360\u00b0 cameras provide almost unrestricted visibility into a conference room for a remote viewer without the need for an articulating camera. However, local participants are left wondering if someone is connected and where remote participants might be looking. To address this, we fabricated a prototype device that shows the gaze and presence of remote 360\u00b0 viewers using a ring of LEDs that match the remote viewports. 
We discuss the long-term use of one of the prototypes in a lecture hall and present future directions for visualizing gaze presence in 360\u00b0 video streams.<\/p>\n<\/div><\/div><footer class=\"entry-footer\"><\/footer><\/article><br \/>\n<article class=\"iconbox iconbox_left    \"  itemscope=\"itemscope\" itemtype=\"https:\/\/schema.org\/CreativeWork\" ><div class=\"iconbox_content\"><header class=\"entry-content-header\"><div class=\"iconbox_icon heading-color \" aria-hidden='true' data-av_icon='\ue849' data-av_iconfont='entypo-fontello'  style='color:#83a846; ' ><\/div><h3 class='iconbox_content_title '  itemprop=\"headline\"   style='color:#83a846; '>Situated Immersion: The Living Room of the Future<\/h3><\/header><div class='iconbox_content_container  '  itemprop=\"text\"  ><ul>\n<li>Adrian Gradinar &#8211; Imagination Lancaster, Lancaster University, Lancaster, Lancashire, United Kingdom<\/li>\n<li>Joseph Lindley &#8211; Lancaster University, Lancaster, Lancashire, United Kingdom<\/li>\n<li>Paul Coulton &#8211; LICA, Lancaster University, Lancaster, United Kingdom<\/li>\n<li>Mr Ian Forrester &#8211; BBC R&amp;D, Manchester, United Kingdom<\/li>\n<li>Phil Stenton &#8211; BBC Research &amp; Development, BBC, Salford, Manchester, United Kingdom<\/li>\n<\/ul>\n<p><strong>Abstract:\u00a0<\/strong>This paper presents the Living Room of the Future, which explores new forms of immersive experience that utilise Object Based Media to provision media that is personalised, adaptable, dynamic, and responsive. It builds upon work on Perceptive Media, Internet of Things Storytelling, and Experiential Futures and, in contrast to approaches that simply conflate immersion with increased visual fidelity, proposes subtle and nuanced ways to immerse audiences in a situated context. 
The room-sized prototype demonstrates this approach to immersion and includes connected devices that provide contextual data to personalise the media, as well as physical elements that enhance the immersive experience.<\/p>\n<\/div><\/div><footer class=\"entry-footer\"><\/footer><\/article><\/div>\n","protected":false},"excerpt":{"rendered":"","protected":false},"author":2,"featured_media":0,"parent":0,"menu_order":0,"comment_status":"closed","ping_status":"closed","template":"","meta":{"footnotes":""},"class_list":["post-910","page","type-page","status-publish","hentry"],"_links":{"self":[{"href":"https:\/\/tvx.acm.org\/2019\/wp-json\/wp\/v2\/pages\/910","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/tvx.acm.org\/2019\/wp-json\/wp\/v2\/pages"}],"about":[{"href":"https:\/\/tvx.acm.org\/2019\/wp-json\/wp\/v2\/types\/page"}],"author":[{"embeddable":true,"href":"https:\/\/tvx.acm.org\/2019\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/tvx.acm.org\/2019\/wp-json\/wp\/v2\/comments?post=910"}],"version-history":[{"count":10,"href":"https:\/\/tvx.acm.org\/2019\/wp-json\/wp\/v2\/pages\/910\/revisions"}],"predecessor-version":[{"id":951,"href":"https:\/\/tvx.acm.org\/2019\/wp-json\/wp\/v2\/pages\/910\/revisions\/951"}],"wp:attachment":[{"href":"https:\/\/tvx.acm.org\/2019\/wp-json\/wp\/v2\/media?parent=910"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}