DEMO SESSION
The following demos were accepted for presentation at the conference. Demos will be presented during two dedicated demo sessions at the main conference. The final program with the exact timing will be made available in May.
Since the authors are preparing their camera-ready versions, the titles and abstracts below are still subject to change.
Scanning News Videos With An Interactive Filmstrip
Martin Prins – TNO, The Hague, Zuid-Holland, Netherlands
Joost de Wit – Media Distillery, Amsterdam, Netherlands
Abstract: Determining whether a (news) video is of interest and what it is about is a time-consuming process. This is a problem when users want to quickly catch up with the latest news and do not want to spend time on content they have already seen or that is of no interest to them. In this paper, we present a novel method for users to discover what a video is about by means of a summary of the video, presented as an interactive filmstrip. With the interactive filmstrip, users can quickly scan the contents of a video, determine whether they want to watch it (and which parts), and play back those parts. The interactive filmstrip is implemented in a responsive web-based demonstrator application, with mouse-based interaction on PCs and touch/gesture-based interaction on smartphones and tablets.
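To illustrate the kind of interaction the filmstrip enables, the following minimal sketch (assuming a web page with a video element, a filmstrip container, and thumbnails carrying a data-start attribute; not the authors' implementation) seeks playback to the segment a clicked thumbnail represents:

```ts
// The element ids and the "data-start" attribute are assumed for illustration.
const video = document.getElementById('news-video') as HTMLVideoElement;
const filmstrip = document.getElementById('filmstrip') as HTMLElement;

filmstrip.addEventListener('click', (event) => {
  // Find the thumbnail that was clicked; each one carries the start time
  // (in seconds) of the video segment it represents.
  const thumb = (event.target as HTMLElement).closest('[data-start]');
  if (!thumb) return;
  video.currentTime = Number(thumb.getAttribute('data-start'));
  void video.play();
});
// Taps on touch devices also dispatch click events, so the same handler covers
// both mouse- and touch-based interaction.
```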
Multi-User Motion Matching Interaction for Interactive Television using Smartwatches
David Verweij – Department of Industrial Design, Eindhoven University of Technology, Eindhoven, Netherlands
Vassilis-Javed Khan – Industrial Design Department, Eindhoven University of Technology, Eindhoven, Noord Brabant, Netherlands
Augusto Esteves – Centre for Interaction Design, Edinburgh Napier University, Edinburgh, United Kingdom
Saskia Bakker – Department of Industrial Design, Eindhoven University of Technology, Eindhoven, Netherlands
Abstract: Motion matching input, in which users follow continuously moving targets with bodily movements, offers new interaction possibilities in multiple domains. Unlike optical motion matching input systems, our technique uses a smartwatch to record motion data from the user's wrist, providing robust input regardless of lighting conditions or momentary occlusions. We demonstrate an implementation of motion matching input using smartwatches for interactive television that allows multi-user input through bodily movements and offers new interaction possibilities by means of a second screen as an extension of the TV display.
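Motion matching selection is commonly implemented by correlating the user's motion with each target's trajectory over a sliding window. The sketch below illustrates that core idea under simple assumptions (1-D displacement traces sampled at the same rate for the wrist and every target); it is not the authors' implementation:

```ts
// Pearson correlation between two equally sampled traces.
function pearson(a: number[], b: number[]): number {
  const n = Math.min(a.length, b.length);
  const meanA = a.slice(0, n).reduce((s, v) => s + v, 0) / n;
  const meanB = b.slice(0, n).reduce((s, v) => s + v, 0) / n;
  let num = 0, varA = 0, varB = 0;
  for (let i = 0; i < n; i++) {
    num += (a[i] - meanA) * (b[i] - meanB);
    varA += (a[i] - meanA) ** 2;
    varB += (b[i] - meanB) ** 2;
  }
  return varA && varB ? num / Math.sqrt(varA * varB) : 0;
}

// wrist: recent wrist displacement samples from the smartwatch; targets:
// per-target displacement traces over the same window. Returns the index of
// the best-matching target, or -1 if no target exceeds the threshold.
function matchTarget(wrist: number[], targets: number[][], threshold = 0.8): number {
  let best = -1, bestR = threshold;
  targets.forEach((trace, i) => {
    const r = pearson(wrist, trace);
    if (r > bestR) { bestR = r; best = i; }
  });
  return best;
}
```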
Production and delivery of video for multi-device synchronized playout
Juan A. Nuñez – i2CAT Foundation, Barcelona, Spain
Szymon Malewski – PSNC, Poznan, Poland
Sergi Fernández – i2CAT Foundation, Barcelona, Spain
Joan Llobera – i2CAT Foundation, Barcelona, Spain
Abstract: In the contemporary living room, the audience’s attention is often divided between TVs, second screens and, increasingly, head-mounted displays. To address this reality, ImmersiaTV is an H2020 European project that is redefining the end-to-end broadcast chain: production, distribution and delivery. It is built on two ideas: multi-platform synchronous content playout, and orchestrated videos rendered in the head-mounted display as interactive inserts, which allow introducing basic interactive storytelling techniques (scene selection, forking paths, etc.) as well as classical audio-visual language that cannot be rendered with 360 videos alone (close-ups, slow motion, shot-countershot, etc.). We demonstrate our pipeline for offline production, distribution and synchronized playout.
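Synchronized playout across devices is often achieved by mapping a shared reference clock onto the media timeline and nudging each local player towards the implied position. The following sketch illustrates that general approach under simple assumptions (a shared session start time and an HTML5 video element); it is not the ImmersiaTV implementation:

```ts
function resync(video: HTMLVideoElement, sessionStartWallClockMs: number) {
  // Expected media position implied by the shared timeline (in seconds).
  const target = (Date.now() - sessionStartWallClockMs) / 1000;
  const drift = video.currentTime - target;
  if (Math.abs(drift) > 1.0) {
    video.currentTime = target;                    // large drift: hard seek
  } else if (Math.abs(drift) > 0.05) {
    video.playbackRate = drift > 0 ? 0.95 : 1.05;  // small drift: adjust rate
  } else {
    video.playbackRate = 1.0;                      // in sync: play normally
  }
}

// Usage (assumed): sessionStartWallClockMs is received from a session
// controller when the device joins, and Date.now() would in practice be a
// clock synchronised across devices (e.g. via NTP or a clock server).
// setInterval(() => resync(videoElement, sessionStartWallClockMs), 1000);
```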
2-Immerse – A Platform for Orchestrated Multi-Screen Entertainment
Ian Kegel – BT Research & Innovation, Martlesham Heath, Ipswich, United Kingdom
James Walker – Cisco, London, United Kingdom
Mark Lomas – BBC Research & Development, Salford, United Kingdom
Jack Jansen – Centrum voor Wiskunde & Informatica, Amsterdam, Netherlands
John Wyver – Illuminations, London, United Kingdom
Abstract: This demonstration will showcase a new approach to the production and delivery of multi-screen entertainment enabled by an innovative, standards-based platform developed by the EU-funded project 2-Immerse. Object-based production enables engaging and interactive experiences which make optimal use of the devices available, while maintaining the look and feel of a single application. The ‘Theatre at Home’ prototype offers an enhanced social experience for users watching a live or ‘as live’ broadcast of a theatre performance, allowing them to discuss it with others who are watching at the same time, either in a different room or in a different home.
Tellybox: Nine Speculative Prototypes For Future TV
Libby Miller – Internet Research and Future Services, BBC Research and Development, London, United Kingdom
Joanne Moore – Internet Research and Future Services, BBC Research and Development, London, United Kingdom
Tim Cowlishaw – Internet Research and Future Services, BBC Research and Development, London, United Kingdom
Henry Cooke – Internet Research and Future Services, BBC Research and Development, London, United Kingdom
Anthony Onumonu – Internet Research and Future Services, BBC Research and Development, London, United Kingdom
Kristian Hentschel – Internet Research and Future Services, BBC Research and Development, London, United Kingdom
Thomas Howe – Internet Research and Future Services, BBC Research and Development, London, United Kingdom
Chris Needham – Internet Research and Future Services, BBC Research and Development, London, United Kingdom
Sacha Sedriks – Internet Research and Future Services, BBC Research and Development, London, United Kingdom
Richard Sewell – Electric Pocket Limited, Pontnewynydd, United Kingdom
Abstract: We have developed nine speculative (“half-resolution”) prototypes as part of our project to explore future possibilities for television experiences as widely as possible. The prototypes are physical representations of our research into why people watch television and what they like and dislike about it. Their physicality improves engagement and quality of feedback, at low cost. The ultimate goal is to be able to describe the high-level characteristics of a really good experience of television in the home, and so provide direction for future technology and interface development.
Movies in Mid-Air: One-Minute Movies Enhanced through Mid-Air Haptic Feedback
Damien Ablart – SCHI Lab, School of Engineering and Informatics, University of Sussex, Brighton, United Kingdom
Carlos Velasco – Marketing, BI Norwegian Business School, Oslo, Norway
Marianna Obrist – SCHI Lab, School of Engineering and Informatics, University of Sussex, Brighton, United Kingdom
Abstract: We present a novel movie experience that involves the users’ sense of touch. In our demo, we showcase this multisensory experience concept, whereby mid-air haptic technology, which creates tactile sensations in mid-air without direct contact, is integrated into short movies. Specifically, users can experience audiovisual content (i.e., one-minute movies) enhanced via mid-air haptic feedback. We are convinced that this demo will stimulate interesting discussions around the future of viewing experiences for television and cinema, as well as online video consumption.
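One common way to couple haptics to video is to author time-coded haptic cues and trigger them during playback. The sketch below shows that general pattern under stated assumptions; playHapticPattern is a hypothetical stand-in for whichever mid-air haptics API drives the device, and the cue format is assumed for illustration:

```ts
// Time-coded haptic cues (format assumed for illustration): each cue names a
// pattern to render and when to trigger it on the movie timeline (seconds).
interface HapticCue { time: number; pattern: string; duration: number }

// Hypothetical stand-in for the haptic device API actually used.
declare function playHapticPattern(pattern: string, duration: number): void;

function attachHapticTrack(video: HTMLVideoElement, cues: HapticCue[]) {
  const fired = new Set<HapticCue>();
  video.addEventListener('timeupdate', () => {
    for (const cue of cues) {
      if (!fired.has(cue) && video.currentTime >= cue.time) {
        fired.add(cue);
        playHapticPattern(cue.pattern, cue.duration);
      }
    }
  });
  video.addEventListener('seeked', () => {
    // After a seek, re-arm only the cues that lie ahead of the new position.
    fired.clear();
    cues.filter(c => c.time < video.currentTime).forEach(c => fired.add(c));
  });
}
```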
Edinburgh Festival Explorer Demo
Andrew Gibb – North Lab, BBC Research and Development, Salford, Lancashire, United Kingdom
Sam Nicholson – North Lab, BBC Research and Development, Salford, Lancashire, United Kingdom
Graham Thomas – R&D Dept, BBC, Salford, UK
Abstract: Head-mounted displays and spherical (“360”) video are emerging as an important new medium. Watching a spherical video in a head-mounted display is a compelling experience the first time, but the user soon discovers that they cannot move. The problem of how to move a user’s viewpoint between spherical videos recorded at different locations still has no general solution. The Edinburgh Festival Explorer demonstrates a novel approach to this problem: by placing windows into the video spheres at their geographical positions and giving the user an overview of the region which they can navigate interactively, it gives a better sense of the physical relationship between the video locations.
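Placing video spheres at their geographical positions requires mapping latitude/longitude to positions in the overview scene. A minimal sketch of one such mapping (a simple local east/north projection around a reference point, assumed here for illustration rather than taken from the authors' method) is:

```ts
// Express positions in metres relative to a reference point in the region.
const EARTH_RADIUS_M = 6371000;

function geoToLocal(latDeg: number, lonDeg: number, refLatDeg: number, refLonDeg: number) {
  const toRad = (d: number) => (d * Math.PI) / 180;
  const east = EARTH_RADIUS_M * toRad(lonDeg - refLonDeg) * Math.cos(toRad(refLatDeg));
  const north = EARTH_RADIUS_M * toRad(latDeg - refLatDeg);
  return { east, north }; // scale to scene units to place the video-sphere window
}
```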
Object-Based Production: A Personalised Interactive Cooking Application
Jasmine Cox – British Broadcasting Corporation, Manchester, United Kingdom
Rhianne Jones – Research & Development, BBC, Salford, Greater Manchester, United Kingdom
Chris Northwood – BBC Research and Development, BBC, Manchester, United Kingdom
Jonathan Tutcher – Research & Development, British Broadcasting Corporation, London, United Kingdom
Ben Robinson – BBC Research & Development, BBC, London, United Kingdom
Abstract: We present the Cook-Along Kitchen Experience (CAKE), a novel prototype that illustrates a new type of interactive, personalised audio-visual experience created using Object-Based Media (OBM) concepts and techniques. CAKE is an interactive cookery programme that dynamically adapts in real time as you cook along with it. It represents a new interactive video format that combines existing technologies in novel ways to create a distinctly new user experience. We demonstrate the novelty of the user experience: users can interact with the application and see a behind-the-scenes view of the data model and scheduling algorithm, visualising how CAKE responds to user input.
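As a rough illustration of object-based pacing (a sketch under assumptions, not the BBC implementation), a programme can be modelled as an ordered list of media objects in which the next object is only scheduled once the viewer confirms the corresponding cooking step:

```ts
// Objects that end a cooking step are held until the viewer is ready to continue.
interface MediaObject { id: string; src: string; waitForUser: boolean }

async function runProgramme(
  video: HTMLVideoElement,
  objects: MediaObject[],
  userReady: () => Promise<void>,   // e.g. resolves when the viewer taps "I'm ready"
) {
  for (const obj of objects) {
    video.src = obj.src;
    await video.play();
    await new Promise<void>(resolve =>
      video.addEventListener('ended', () => resolve(), { once: true }),
    );
    if (obj.waitForUser) {
      await userReady();            // pacing adapts to how fast the viewer cooks
    }
  }
}
```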
Web-based Platform for Subtitles Customization and Synchronization in Multi-Screen Scenarios
Mario Montagud – Universitat Politècnica de València, Grau de Gandia, Valencia, Spain
Fernando Boronat – Universitat Politècnica de València, Grau de Gandia, Valencia, Spain
Juan González – Universitat Politècnica de València (UPV), Grao de Gandia, Valencia, Spain
Javier Pastor – Universitat Politècnica de València, Grau de Gandia, Valencia, Spain
Abstract: This paper presents a web-based platform that enables the customization and synchronization of subtitles in both single- and multi-screen scenarios. The platform enables the dynamic customization of the subtitles’ format (font family, size, color…) and position according to the users’ preferences and/or needs. Likewise, it allows configuring the number of subtitle lines to be presented and restoring the video playout position by clicking on a specific line. It also allows the simultaneous selection of multiple subtitle languages and the application of a delay offset to the presentation of subtitles. All these functionalities are also available on (personal) companion devices, where subtitles can be presented in a synchronized manner with those on the main screen and customized individually. With all these functionalities, the platform enables personalized and immersive media consumption experiences, contributing to better language learning, social integration and an improved Quality of Experience (QoE) in both domestic and multicultural environments.
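Two of the described functionalities, applying a delay offset and restyling subtitles, can be illustrated with standard web APIs. The sketch below (assuming WebVTT subtitles exposed through the HTML5 TextTrack API; not the authors' platform) shifts cue times by a user-chosen offset:

```ts
// Shift all cues of the first subtitle track by a user-chosen offset (seconds).
function offsetSubtitles(video: HTMLVideoElement, offsetSeconds: number) {
  const track = video.textTracks[0];          // assumes one subtitle text track
  if (!track || !track.cues) return;
  for (const cue of Array.from(track.cues)) {
    cue.startTime += offsetSeconds;
    cue.endTime += offsetSeconds;
  }
}

// Font, size and colour customisation can be applied to rendered cues with CSS:
// video::cue { font-family: sans-serif; font-size: 1.4rem; color: yellow; }
```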
Social VR platform: building 360-degree shared VR spaces
Simon Gunkel – TNO, The Hague, Netherlands
Martin Prins – TNO, The Hague, Zuid-Holland, Netherlands
Hans Stokking – TNO, The Hague, Netherlands
Omar Niamut – TNO, The Hague, Netherlands
Abstract: Virtual Reality (VR) and 360-degree video are set to become part of the future social environment, enriching and enhancing the way we share experiences and collaborate remotely. In this demo, we present our ongoing efforts towards social and shared VR: a modular web-based VR framework that extends current video conferencing capabilities with new VR functionalities. The framework allows two people to come together for mediated audio-visual interaction while engaging in (interactive) content. First results show that a majority of users appreciate the quality and feel highly immersed and present. Thus, with our demo we show that current web technologies can enable a high level of engagement and immersion.
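As a rough illustration of the web building blocks such a framework can combine (a sketch under assumptions, not TNO's framework), a WebRTC peer connection delivers the remote participant's video, which a WebGL renderer can then sample as a texture inside a shared 360-degree scene:

```ts
// A WebRTC peer connection carries the audio-visual interaction; signalling
// (offer/answer and ICE candidate exchange) is omitted for brevity.
const pc = new RTCPeerConnection();

// Send our own camera and microphone to the remote participant.
navigator.mediaDevices.getUserMedia({ video: true, audio: true })
  .then(stream => stream.getTracks().forEach(track => pc.addTrack(track, stream)));

// When the remote participant's stream arrives, attach it to a video element;
// a WebGL/WebVR renderer can sample this element as a texture in the shared
// 360-degree scene. The element id is assumed for illustration.
pc.ontrack = (event) => {
  const remoteVideo = document.getElementById('remote') as HTMLVideoElement;
  remoteVideo.srcObject = event.streams[0];
  void remoteVideo.play();
};
```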