DEMO SESSION
The following demos were accepted for presentation at the conference. Demos will be presented during dedicated demo sessions at the main conference. The final program with the exact timing will be made available soon.
(Demo 1)
2-IMMERSE: MotoGP multiscreen experience with HbbTV 2 retail devices
Rajiv Ramdhany – BBC R&D, Salford, UK
Matt Hammond – BBC R&D, London, UK
Christoph Ziegler – IRT, Germany
Michael Probst – IRT, Germany
Abstract: This demonstration showcases the 2-IMMERSE platform – a system built on open standards that enables broadcasters and other content providers to deliver novel multi-screen entertainment services. These services are based on the concept of object-based media. Our implementation is currently being tested and validated with Hybrid Broadcast Broadband TV (HbbTV) version 2 prototype and retail devices that are now entering the market, to ensure the 2-IMMERSE platform can be used in an HbbTV ecosystem. In cooperation with Samsung, the demonstration will show the 2-IMMERSE MotoGP experience (Silverstone GP 2017) on an HbbTV 2 implementation on a recent consumer device.
(Demo 2)
2-IMMERSE Production Suite: A Platform for Creating Interactive Multi-Screen Experiences
Thomas Röggla – Centrum Wiskunde & Informatica, Netherlands
Jie Li – Centrum Wiskunde & Informatica, Netherlands
Jack Jansen – Centrum Wiskunde & Informatica, Netherlands
Andrew Gower – BT Technology, UK
Martin Trimby – BT Technology, UK
Pablo Cesar – Centrum Wiskunde & Informatica and Delft University of Technology, Netherlands
Abstract: We present a software solution for creating and playing back interactive multi-screen experiences. The system consists of a pre-production application for editing the layout and timing of interactive media objects and live-triggering software for inserting on-demand content during live streams of these edited experiences. The system is governed by a hierarchical file format that defines the temporal relationship and synchronisation of media objects. We also briefly introduce the concept of DMApp Components, an open specification which is used to describe and create custom interactive media objects.
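The abstract does not publish the file format itself; purely as an illustration of what a hierarchical timeline with parallel and sequential composition might look like, here is a minimal TypeScript sketch (all type and field names below are assumptions, not the project's actual schema):

```typescript
// Hypothetical model of a hierarchical timeline document: "par" children
// play in parallel, "seq" children play one after another. All names here
// are illustrative assumptions, not the 2-IMMERSE schema.
type TimelineNode =
  | { kind: "media"; componentId: string; durationSec?: number }
  | { kind: "par" | "seq"; children: TimelineNode[] };

const experience: TimelineNode = {
  kind: "seq",
  children: [
    { kind: "media", componentId: "title-card", durationSec: 5 },
    {
      kind: "par", // main video and a companion-screen overlay run together
      children: [
        { kind: "media", componentId: "main-video" },
        { kind: "media", componentId: "leaderboard-overlay" },
      ],
    },
  ],
};
```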
(Demo 3)
A Web-Based Multi-Screen 360-Degree Video Player for Pre-Service Teacher Training
Julian Windscheid – TU Ilmenau, Ilmenau, Germany
Andreas Will – TU Ilmenau, Ilmenau, Germany
Abstract: This demonstration will showcase a new and innovative eLearning platform for pre-service teacher training. The core element of this platform is a multi-screen 360-degree video player with additional features for 360-degree video analysis. By using the videos in combination with a head-mounted display (HMD), we create a video-based virtual classroom where the pre-service teachers “become part of the situation”. This offers students an immersive experience and a first impression of realistic school practice.
(Demo 4)
Authoring Object-Based Video Narratives
Davy Smith – Digital Creativity Labs, Department of Theatre, Film and Television, University of York, UK
Jonathan Hook – Digital Creativity Labs, Department of Theatre, Film and Television, University of York, UK
Marian F Ursu – Digital Creativity Labs, Department of Theatre, Film and Television, University of York, UK
Abstract: This demonstration presents a new approach for the authoring and delivery of online object-based video narratives. We introduce a cross-platform desktop application which provides a graphical environment for authoring video narratives that can be dynamically sequenced and composited at viewing time, based upon the interactions, or context, of the audience. In addition, we present the Object-Based Storytelling Engine, a client-side JavaScript library which allows the delivery of object-based narratives in any HTML5-compliant browser.
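The abstract names the Object-Based Storytelling Engine but does not document its API; the following browser-side sketch only illustrates the general pattern of resolving the next video object from audience context at viewing time (every identifier below is hypothetical, not the engine's actual interface):

```typescript
// Hypothetical sketch of client-side object-based narrative playback:
// pick the next video object whose condition matches the audience context.
// None of these names come from the Object-Based Storytelling Engine itself.
interface NarrativeObject {
  id: string;
  src: string;
  condition: (ctx: Record<string, unknown>) => boolean;
}

function nextObject(
  objects: NarrativeObject[],
  ctx: Record<string, unknown>,
): NarrativeObject | undefined {
  return objects.find((o) => o.condition(ctx));
}

const objects: NarrativeObject[] = [
  { id: "rainy-intro", src: "intro_rain.mp4", condition: (c) => c.weather === "rain" },
  { id: "default-intro", src: "intro.mp4", condition: () => true },
];

const video = document.querySelector("video");
const chosen = nextObject(objects, { weather: "rain" });
if (video && chosen) {
  video.src = chosen.src; // dynamic sequencing at viewing time
  void video.play();
}
```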
(Demo 5)
Automatic Generation of a TV Programme from Blog Entries
Masaki Hayashi – Department of Game Design, Uppsala University, Sweden
Steven Bachelder – Department of Game Design, Uppsala University, Sweden
Naoya Tsuruta – School of Media Science, Tokyo University of Technology, Japan
Takehiro Teraoka – School of Media Science, Tokyo University of Technology, Japan
Kazuo Sasaki – School of Media Science, Tokyo University of Technology, Japan
Wataru Usami – School of Media Science, Tokyo University of Technology, Japan
Koji Mikami – School of Media Science, Tokyo University of Technology, Japan
Tsukasa Kikuchi – School of Media Science, Tokyo University of Technology, Japan
Yuriko Takeshima – School of Media Science, Tokyo University of Technology, Japan
Kunio Kondo – School of Media Science, Tokyo University of Technology, Japan
Abstract: TVML (TV program Making Language) is a technology for producing television-programme-like computer graphics (CG) animation from a text script. We originally developed TVML and have been studying generative content with its aid. We have now created an application that automatically converts blog posts into CG animations in a TV news show format. The process is: 1) fetch the HTML of the blog posts and perform web scraping and natural language processing to obtain summarized speech texts; 2) automatically apply a show format, derived from the analysis of professional TV programmes, to generate a TVML script; 3) apply CG characters, artwork, etc. that fit the blog content to obtain the final CG animation. In the demo session, we will explain the method and demonstrate the working application on a PC connected to the Internet, showing CG animations actually created on site.
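As a rough sketch of the three-step pipeline described above (fetch and summarize, apply a show format, render with TVML), here is hypothetical TypeScript glue; the helper functions and the TVML-like fragment are assumptions for illustration, not the authors' implementation:

```typescript
// Hypothetical glue for the blog-to-TV-programme pipeline described above.
// fetchAndSummarize and renderWithTvml stand in for the authors' scraping/NLP
// stage and TVML player, neither of which is specified in the abstract.
declare function fetchAndSummarize(blogUrl: string): Promise<string[]>;
declare function renderWithTvml(script: string): void;

// Step 2: wrap summarized speech texts in a news-show format as a
// TVML-like script (the syntax shown is indicative only).
function toTvmlScript(speeches: string[]): string {
  const lines = speeches.map(
    (text) => `character: talk(name=ANCHOR, text="${text}")`,
  );
  return ["set: openstudio()", ...lines, "set: closestudio()"].join("\n");
}

async function blogToProgramme(blogUrl: string): Promise<void> {
  const speeches = await fetchAndSummarize(blogUrl); // step 1
  const script = toTvmlScript(speeches);             // step 2
  renderWithTvml(script);                            // step 3: CG animation
}
```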
(Demo 6)
Emotive VR: a neuro-interactive 360° movie
Toinon Vigier – LS2N, UMR CNRS 6004, Université de Nantes, France
Marie-Laure Cazin – École Supérieure d’Art et de Design de Tours Angers Le Mans, France
Abstract: This paper describes a demo for the ACM TVX conference about a new form of interactive cinema. The demo consists of a prototype of a new neuro-interactive omnidirectional movie, “Freud’s last hypnosis”, viewed in a Virtual Reality (VR) Head-Mounted Display (HMD). During viewing, EEG signals are recorded and analyzed in real time in order to apply visual and audio feedback inside the 360° film according to the emotional state of the user.
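The abstract does not specify how the emotional state maps to feedback; purely as an illustration of a real-time loop of that shape, here is a minimal sketch in which the EEG source, the valence/arousal model, and the feedback parameters are all assumptions:

```typescript
// Hypothetical real-time loop: read an emotional-state estimate derived
// from EEG and adjust audio/visual feedback in the 360° film accordingly.
// The state model and the parameter mappings are illustrative assumptions.
interface EmotionalState {
  valence: number; // -1 (negative) .. 1 (positive), assumed model
  arousal: number; //  0 (calm)     .. 1 (excited), assumed model
}

declare function readEegState(): EmotionalState;       // stand-in for EEG analysis
declare function setSceneTint(warmth: number): void;   // visual feedback, assumed
declare function setAmbienceGain(gain: number): void;  // audio feedback, assumed

setInterval(() => {
  const s = readEegState();
  setSceneTint((s.valence + 1) / 2);      // warmer image for positive valence
  setAmbienceGain(0.3 + 0.7 * s.arousal); // louder ambience when aroused
}, 250); // update a few times per second
```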
(Demo 7)
Personalized and Immersive Presentation of Video, Audio and Subtitles in 360° Environments: An Opera Use Case
Isaac Fraile – i2CAT Foundation, Barcelona, Spain
David Gómez – i2CAT Foundation, Barcelona, Spain
Juan A. Núñez – i2CAT Foundation, Barcelona, Spain
Mario Montagud – i2CAT Foundation, Barcelona, Spain
Sergi Fernández – i2CAT Foundation, Barcelona, Spain
Abstract: This paper presents an end-to-end system for the personalized presentation of accessibility and immersive content in multi-screen scenarios, focusing on an opera use case. In particular, the system allows experiencing the opera event using classical audiovisual formats, but it additionally supports a seamless integration of 360° video, spatial audio and the use of Head Mounted Displays (HMDs). The availability of multiple 360° cameras allows experiencing the event from the preferred viewpoint, while the presented audio matches the selected camera position and the user’s current viewpoint, providing a highly immersive and realistic experience. Finally, a personalized and assistive presentation of subtitles also contributes to higher accessibility.
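The abstract states that the audio matches the selected camera and the user's current viewpoint; one standard browser-side way to achieve a viewpoint-dependent mix is the Web Audio API, sketched below under the assumption of a simple yaw-only head rotation (the head-tracking function and the per-camera stream name are hypothetical; the Web Audio calls themselves are standard):

```typescript
// Hypothetical sketch: rotate the Web Audio listener to follow the HMD's
// yaw so that spatialised audio tracks the user's viewpoint. getHmdYaw and
// the audio file name are stand-ins; AudioContext, createMediaElementSource
// and PannerNode are standard Web Audio API.
declare function getHmdYaw(): number; // radians, assumed head-tracking source

const ctx = new AudioContext();
const stage = new Audio("stage-mix-cam1.ogg"); // per-camera audio, assumed name
const source = ctx.createMediaElementSource(stage);
const panner = new PannerNode(ctx, { positionZ: -1 }); // sound ahead of the user
source.connect(panner).connect(ctx.destination);

function updateListener(): void {
  const yaw = getHmdYaw();
  // Point the listener's forward vector along the current yaw.
  ctx.listener.forwardX.value = Math.sin(yaw);
  ctx.listener.forwardZ.value = -Math.cos(yaw);
  requestAnimationFrame(updateListener);
}
updateListener();
```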
(Demo 8)
UltraTV: an iTV content unification prototype
Pedro Almeida – Digimedia, University of Aveiro, Portugal
Jorge Abreu – Digimedia, University of Aveiro, Portugal
Telmo Silva – Digimedia, University of Aveiro, Portugal
Rafael Guedes – Digimedia, University of Aveiro, Portugal
Diogo Oliveira – Digimedia, University of Aveiro, Portugal
Bernardo Cardoso – Altice Labs, Aveiro, Portugal
Hugo Dias – Altice Labs, Aveiro, Portugal
Abstract: The UltraTV project proposes a User Interface for the unification of content from different sources, combining TV programs and Over-the-top (OTT) videos and fostering discovery of both at the same level. In this demo, the UltraTV high-fidelity prototype, implemented in a set-top box, is presented, along with its unification, recommendation and profiling features. The primary interaction controls and methods are also described. The development of the prototype benefited from continuous feedback obtained in different validation phases (review by experts, laboratory tests, and a field trial).