{"id":1653,"date":"2015-04-07T12:12:42","date_gmt":"2015-04-07T12:12:42","guid":{"rendered":"http:\/\/tvx2015.com\/?page_id=1653"},"modified":"2017-05-18T11:38:35","modified_gmt":"2017-05-18T11:38:35","slug":"demos","status":"publish","type":"page","link":"https:\/\/tvx.acm.org\/2017\/program-2\/demos\/","title":{"rendered":"Demos"},"content":{"rendered":"<div style='height:1px; margin-top:-85px'  class='hr hr-invisible '><span class='hr-inner ' ><span class='hr-inner-style'><\/span><\/span><\/div>\n<div class=\"flex_column av_one_full  flex_column_div av-zero-column-padding first  \" style='border-radius:0px; '><section class=\"av_textblock_section\"  itemscope=\"itemscope\" itemtype=\"https:\/\/schema.org\/CreativeWork\" ><div class='avia_textblock '   itemprop=\"text\" ><h3>DEMO SESSION<\/h3>\n<p>The following demos were accepted for presentation at the conference. Demos will be presented during two dedicated demo sessions at the main conference. The final program with the exact timing will be made available in May.<\/p>\n<p>Since the authors are preparing their camera-ready versions, the titles and abstracts below are still subject to change.<\/p>\n<\/div><\/section><\/div>\n<div  class='hr hr-default '><span class='hr-inner ' ><span class='hr-inner-style'><\/span><\/span><\/div>\n<div class=\"tabcontainer  sidebar_tab sidebar_tab_left border_tabs \">\n\n<section class=\"av_tab_section\"  itemscope=\"itemscope\" itemtype=\"https:\/\/schema.org\/CreativeWork\" >    <div data-fake-id=\"#tab-id-1\" class=\"tab active_tab\"  itemprop=\"headline\" >Session 1: Multi-Screen, and TV Control<\/div>\n    <div id=\"tab-id-1-container\" class=\"tab_content active_tab_content\">\n        <div class=\"tab_inner_content invers-color\"  itemprop=\"text\" >\n<h3>Scanning News Videos With An Interactive Filmstrip<\/h3>\n<p>Martin Prins \u2013 TNO, The Hague , Zuid-Holland, Netherlands<\/p>\n<p>Joost de Wit \u2013 Media Distillery, Amsterdam, Netherlands<\/p>\n<p>Abstract: 
Determining whether a (news) video is of interest and what it is about is a time-consuming process. This is a problem when users want to quickly catch up with the latest news and don\u2019t want to spend time on something they already know, have already seen, or are not interested in at all. In this paper, we present a novel method for users to discover what a video is about by means of a summary of the video, presented as an interactive filmstrip. With the interactive filmstrip, users can quickly scan the contents of a video, determine whether they want to watch it (and which parts), and play back these parts. The interactive filmstrip is implemented in a responsive web-based demonstrator application, with mouse-based interaction on PCs and touch\/gesture-based interaction on smartphones and tablets.<\/p>\n<h3>Multi-User Motion Matching Interaction for Interactive Television using Smartwatches<\/h3>\n<p>David Verweij \u2013 Department of Industrial Design, Eindhoven University of Technology, Eindhoven, Netherlands<\/p>\n<p>Vassilis-Javed Khan \u2013 Industrial Design Department, Eindhoven University of Technology, Eindhoven, Noord Brabant, Netherlands<\/p>\n<p>Augusto Esteves \u2013 Centre for Interaction Design, Edinburgh Napier University, Edinburgh, United Kingdom<\/p>\n<p>Saskia Bakker \u2013 Department of Industrial Design, Eindhoven University of Technology, Eindhoven, Netherlands<\/p>\n<p>Abstract: Motion matching input, following continuously moving targets by performing bodily movements, offers new interaction possibilities in multiple domains. Unlike optical motion matching input systems, our technique utilizes a smartwatch to record motion data from the users\u2019 wrists, providing robust input regardless of lighting conditions or momentary occlusions. 
We demonstrate an implementation of motion matching input using smartwatches for interactive television that allows multi-user input through bodily movements and offers new interaction possibilities by means of a second screen as an extension of TV displays.<\/p>\n<h3>Production and delivery of video for multi-device synchronized playout<\/h3>\n<p>Juan A. Nu\u00f1ez \u2013 i2CAT Foundation, Barcelona, Spain<\/p>\n<p>Szymon Malewski \u2013 PSNC, Poznan, Poland<\/p>\n<p>Sergi Fern\u00e1ndez \u2013 i2CAT Foundation, Barcelona, Spain<\/p>\n<p>Joan Llobera \u2013 i2CAT Foundation, Barcelona, Spain<\/p>\n<p>Abstract: In the contemporary living room, the audience\u2019s attention is often divided between TVs, second screens and, increasingly, head-mounted displays. To address this reality, ImmersiaTV is an H2020 European project that is redefining the end-to-end broadcast chain: production, distribution and delivery. It is built on two ideas: multi-platform synchronous content playout, and orchestrated videos rendered in the head-mounted display as interactive inserts, which allow the introduction of basic interactive storytelling techniques (scene selection, forking paths, etc.) as well as classical audio-visual language that is not possible to render with 360 videos (close-ups, slow motion, shot-countershot, etc.). 
We demonstrate our pipeline for offline production, distribution and synchronized playout.<\/p>\n<h3>2-Immerse \u2013 A Platform for Orchestrated Multi-Screen Entertainment<\/h3>\n<p>Ian Kegel \u2013 BT Research &amp; Innovation, Martlesham Heath, Ipswich, United Kingdom<\/p>\n<p>James Walker \u2013 Cisco, London, United Kingdom<\/p>\n<p>Mark Lomas \u2013 BBC Research &amp; Development, Salford, United Kingdom<\/p>\n<p>Jack Jansen \u2013 Centrum voor Wiskunde &amp; Informatica, Amsterdam, Netherlands<\/p>\n<p>John Wyver \u2013 Illuminations, London, United Kingdom<\/p>\n<p>Abstract: This demonstration will showcase a new approach to the production and delivery of multi-screen entertainment enabled by an innovative, standards-based platform developed by the EU-funded project 2-Immerse. Object-based production enables engaging and interactive experiences which make optimal use of the devices available, while maintaining the look and feel of a single application. The &#8216;Theatre at Home&#8217; prototype offers an enhanced social experience for users watching a live or &#8216;as live&#8217; broadcast of a theatre performance, allowing them to discuss it with others who are watching at the same time, either in a different room or in a different home.<\/p>\n<h3>Tellybox: Nine Speculative Prototypes For Future TV<\/h3>\n<p>Libby Miller \u2013 Internet Research and Future Services, BBC Research and Development, London, United Kingdom<\/p>\n<p>Joanne Moore \u2013 Internet Research and Future Services, BBC Research and Development, London, United Kingdom<\/p>\n<p>Tim Cowlishaw \u2013 Internet Research and Future Services, BBC Research and Development, London, United Kingdom<\/p>\n<p>Henry Cooke \u2013 Internet Research and Future Services, BBC Research and Development, London, United Kingdom<\/p>\n<p>Anthony Onumonu \u2013 Internet Research and Future Services, BBC Research and Development, London, United Kingdom<\/p>\n<p>Kristian Hentschel \u2013 Internet Research 
and Future Services, BBC Research and Development, London, United Kingdom<\/p>\n<p>Thomas Howe \u2013 Internet Research and Future Services, BBC Research and Development, London, United Kingdom<\/p>\n<p>Chris Needham \u2013 Internet Research and Future Services, BBC Research and Development, London, United Kingdom<\/p>\n<p>Sacha Sedriks \u2013 Internet Research and Future Services, BBC Research and Development, London, United Kingdom<\/p>\n<p>Richard Sewell \u2013 Electric Pocket Limited, Pontnewynydd, United Kingdom<\/p>\n<p>Abstract: We have developed nine speculative (&#8220;half-resolution&#8221;) prototypes as part of our project to explore future possibilities for television experiences as widely as possible. The prototypes are physical representations of our research into why people watch television and what they like and dislike about it. Their physicality improves engagement and quality of feedback, at low cost. The ultimate goal is to be able to describe the high-level characteristics of a really good experience of television in the home, and so provide direction for future technology and interface development.<\/p>\n<h3><\/h3>\n\n        <\/div>\n    <\/div>\n<\/section>\n\n<section class=\"av_tab_section\"  itemscope=\"itemscope\" itemtype=\"https:\/\/schema.org\/CreativeWork\" >    <div data-fake-id=\"#tab-id-2\" class=\"tab \"  itemprop=\"headline\" >Session 2: User Interaction and Virtual Reality<\/div>\n    <div id=\"tab-id-2-container\" class=\"tab_content \">\n        <div class=\"tab_inner_content invers-color\"  itemprop=\"text\" >\n<h3><\/h3>\n<h3>Movies in Mid-Air: One-Minute Movies Enhanced through Mid-Air Haptic Feedback<\/h3>\n<p>Damien Ablart \u2013 SCHI Lab, School of Engineering and Informatics, University of Sussex, Brighton, United Kingdom<\/p>\n<p>Carlos Velasco \u2013 Marketing, BI Norwegian Business School, Oslo, Oslo, Norway<\/p>\n<p>Marianna Obrist \u2013 SCHI Lab, School of Engineering and Informatics, University of Sussex, 
Brighton, United Kingdom<\/p>\n<p>Abstract: We present a novel movie experience that involves users&#8217; sense of touch. In our demo, we showcase this multisensory experience concept whereby a mid-air haptic technology, which creates tactile sensations in mid-air without direct contact, is integrated into short movies. Specifically, users can experience audiovisual content (i.e., one-minute movies) enhanced via mid-air haptic feedback. We are convinced that this demo will stimulate interesting discussions around the future of viewing experiences for television, cinema, and online video consumption.<\/p>\n<h3>Edinburgh Festival Explorer Demo<\/h3>\n<p>Andrew Gibb \u2013 North Lab, BBC Research and Development, Salford, Lancashire, United Kingdom<\/p>\n<p>Sam Nicholson \u2013 North Lab, BBC Research and Development, Salford, Lancashire, United Kingdom<\/p>\n<p>Graham Thomas \u2013 R&amp;D Dept, BBC, Salford, UK<\/p>\n<p>Abstract: Head-mounted displays and spherical (\u201c360\u201d) video are emerging as an important new medium. Watching a spherical video in a head-mounted display is a compelling experience the first time, but the user soon discovers that they cannot move. The problem of how to move a user\u2019s viewpoint between spherical videos recorded at different locations remains without a general solution. The Edinburgh Festival Explorer demonstrates a novel approach to this problem. 
It gives the user a better sense of the physical relationship between the video locations by placing windows into the video spheres at their geographical positions, and provides an overview of the region that the user can navigate interactively.<\/p>\n<h3>Object-Based Production: A Personalised Interactive Cooking Application<\/h3>\n<p>Jasmine Cox \u2013 British Broadcasting Corporation, Manchester, United Kingdom<\/p>\n<p>Rhianne Jones \u2013 Research &amp; Development, BBC, Salford, Greater Manchester, United Kingdom<\/p>\n<p>Chris Northwood \u2013 BBC Research and Development, BBC, Manchester, United Kingdom<\/p>\n<p>Jonathan Tutcher \u2013 Research &amp; Development, British Broadcasting Corporation, London, United Kingdom<\/p>\n<p>Ben Robinson \u2013 BBC Research &amp; Development, BBC, London, United Kingdom<\/p>\n<p>Abstract: We present the Cook-Along Kitchen Experience (CAKE), a novel prototype that illustrates a new type of interactive, personalised audio-visual experience created using Object-Based Media (OBM) concepts and techniques. CAKE is an interactive cookery programme that dynamically adapts in real time as you cook with it. It represents a new interactive video format that combines existing technologies in novel ways to create a distinctly new user experience. 
We demonstrate the novelty of the user experience: users can interact with the application and see a behind-the-scenes view of the data model and scheduling algorithm, visualising how CAKE responds to user input.<\/p>\n<h3>Web-based Platform for Subtitles Customization and Synchronization in Multi-Screen Scenarios<\/h3>\n<p>Mario Montagud \u2013 Universitat Polit\u00e8cnica de Val\u00e8ncia, Grau de Gandia, Valencia, Spain<\/p>\n<p>Fernando Boronat \u2013 Universitat Polit\u00e8cnica de Val\u00e8ncia, Grau de Gandia, Valencia, Spain<\/p>\n<p>Juan Gonz\u00e1lez \u2013 Universitat Polit\u00e8cnica de Val\u00e8ncia (UPV), Grao de Gandia, Valencia, Spain<\/p>\n<p>Javier Pastor \u2013 Universitat Polit\u00e8cnica de Val\u00e8ncia, Grau de Gandia, Valencia, Spain<\/p>\n<p>Abstract: This paper presents a web-based platform that enables the customization and synchronization of subtitles in both single- and multi-screen scenarios. The platform enables the dynamic customization of the subtitles\u2019 format (font family, size, color&#8230;) and position according to the users\u2019 preferences and\/or needs. Likewise, it allows configuring the number of subtitle lines to be presented, and restoring the video playout position by clicking on a specific line. It also allows the simultaneous selection of various subtitle languages and the application of a delay offset to the presentation of subtitles. All these functionalities can also be made available on (personal) companion devices, allowing subtitles to be presented in synchrony with those on the main screen and to be customized individually. 
With all these functionalities, the platform enables personalized and immersive media consumption experiences, contributing to language learning, social integration, and an improved Quality of Experience (QoE) in both domestic and multi-cultural environments.<\/p>\n<h3><\/h3>\n<h3>Social VR platform: building 360-degree shared VR spaces<\/h3>\n<p>Simon Gunkel \u2013 TNO, The Hague, Netherlands<\/p>\n<p>Martin Prins \u2013 TNO, The Hague, Zuid-Holland, Netherlands<\/p>\n<p>Hans Stokking \u2013 TNO, The Hague, Netherlands<\/p>\n<p>Omar Niamut \u2013 TNO, The Hague, Netherlands<\/p>\n<p>Abstract: Virtual Reality (VR) and 360-degree video are set to become part of the future social environment, enriching and enhancing the way we share experiences and collaborate remotely. In this demo, we present our ongoing efforts towards social and shared VR: a modular web-based VR framework that extends current video conferencing capabilities with new VR functionalities. The framework allows two people to come together for mediated audio-visual interaction, while engaging in (interactive) content. First results show that a majority of users appreciate the quality and feel highly immersed and present. 
Thus, with our demo we show that current web technologies can enable a high level of engagement and immersion.<\/p>\n\n        <\/div>\n    <\/div>\n<\/section>\n\n<\/div>\n\n","protected":false},"excerpt":{"rendered":"","protected":false},"author":3,"featured_media":0,"parent":2388,"menu_order":5,"comment_status":"open","ping_status":"closed","template":"","meta":{"footnotes":""},"class_list":["post-1653","page","type-page","status-publish","hentry"],"_links":{"self":[{"href":"https:\/\/tvx.acm.org\/2017\/wp-json\/wp\/v2\/pages\/1653","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/tvx.acm.org\/2017\/wp-json\/wp\/v2\/pages"}],"about":[{"href":"https:\/\/tvx.acm.org\/2017\/wp-json\/wp\/v2\/types\/page"}],"author":[{"embeddable":true,"href":"https:\/\/tvx.acm.org\/2017\/wp-json\/wp\/v2\/users\/3"}],"replies":[{"embeddable":true,"href":"https:\/\/tvx.acm.org\/2017\/wp-json\/wp\/v2\/comments?post=1653"}],"version-history":[{"count":9,"href":"https:\/\/tvx.acm.org\/2017\/wp-json\/wp\/v2\/pages\/1653\/revisions"}],"predecessor-version":[{"id":2434,"href":"https:\/\/tvx.acm.org\/2017\/wp-json\/wp\/v2\/pages\/1653\/revisions\/2434"}],"up":[{"embeddable":true,"href":"https:\/\/tvx.acm.org\/2017\/wp-json\/wp\/v2\/pages\/2388"}],"wp:attachment":[{"href":"https:\/\/tvx.acm.org\/2017\/wp-json\/wp\/v2\/media?parent=1653"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}