LONG AND SHORT PAPERS

The following long and short papers were selected through a double-blind peer-review process.

Long papers will get a 30-minute time slot (+Q&A) for presentation at the conference. The final program with the exact timing of the presentations will be made available soon.

Session 1: Interactivity and Immersion

(Session Chair: Teresa Chambel)

Gestures for Controlling a Moveable TV

Kashmiri Stec – Bang & Olufsen, Struer, Denmark
Lars Bo Larsen – Aalborg University, Denmark

Abstract: We investigate the effects of physical context on the preference and production of touchless (3D) gestures, focusing on what users consider to be natural and intuitive. Using an elicitation task, we asked for users’ preferred gestures to control a “moving TV” display from a distance of 3-4m. We conducted three user studies (N=16 each) using the same premise but varying the physical conditions encountered, such as number of hands available or distance and orientation to the display. This is important to ensure the robustness of the gesture set. We observed two dominant strategies which we interpret as dependent on the user’s mental model: hand-as-display and hand-moving-display. Across the varying conditions, users were found to be consistent with their preferred gesture strategy while varying the production (number of hands, orientation, extension of arms) of their gestures in order to match both their mental models and the physical context of use. From a technology perspective, this natural variation challenges the notion of identifying “the optimal gesture set” and should be taken into account when designing future systems with gesture control.

Frictional Realities: Enabling immersion in Mixed-Reality Performances

Asreen Rostami – Stockholm University, Sweden
Chiara Rossitto – Stockholm University, Sweden
Annika Waern – Uppsala University, Sweden

Abstract: This paper presents a case study of a Mixed-Reality Performance employing 360-degree video for a virtual reality experience. We repurpose the notion of friction to illustrate the different threads through which priming is enacted during the performance to create an immersive audience experience. We look at aspects of friction between the different layers of the Mixed-Reality Performance, namely: temporal friction, friction between the physical and virtual presence of the audience, and friction between realities. We argue that Mixed-Reality Performances that employ immersive technology do not need to rely on its presumed immersive nature to make the performance an engaging or coherent experience. Immersion, in such performances, emerges from the audience's transition towards a more active role, and the creation of various fictional realities through frictions.

Session 2: Storytelling

(Session Chair: Vinoba Vinayagamoorthy)

Narrative Bytes: Data-Driven Storytelling in Esports

Florian Block – Digital Creativity Labs, University of York, UK
Victoria Hodge – Digital Creativity Labs, University of York, UK
Stephen Hobson – Digital Creativity Labs, University of York, UK
Nick Sephton – Digital Creativity Labs, University of York, UK
Sam Devlin – Digital Creativity Labs, University of York, UK
Marian Ursu – Digital Creativity Labs, University of York, UK
Anders Drachen – Digital Creativity Labs, University of York, UK
Peter Cowling – Digital Creativity Labs, University of York, UK

Abstract: Esports – video games played competitively that are broadcast to large audiences – are a rapidly growing new form of mainstream entertainment. Esports borrow from traditional TV, but are a qualitatively different genre, due to the high flexibility of content capture and availability of detailed gameplay data. Indeed, in esports, there is access to both real-time and historical data about any action taken in the virtual world. This aspect motivates the research presented here, the question asked being: can the information buried deep in such data, unavailable to the human eye, be unlocked and used to improve the live broadcast compilations of the events? In this paper, we present a large-scale case study of a production tool called Echo, which we developed in close collaboration with leading industry stakeholders. Echo uses live and historic match data to detect extraordinary player performances in the popular esport Dota 2, and dynamically translates interesting data points into audience-facing graphics. Echo was deployed at one of the largest yearly Dota 2 tournaments, which was watched by 25 million people. An analysis of 40 hours of video, over 46,000 live chat messages, and feedback from 98 audience members showed that Echo measurably affected the range and quality of storytelling, increased audience engagement, and evoked rich emotional responses among viewers.

Facts, Interactivity and Videotape: Exploring the Design Space of Data in Interactive Video Storytelling

Jonathan Hook – Digital Creativity Labs, Department of Theatre, Film & TV, University of York, UK

Abstract: We live in a society that is increasingly data rich, with an unprecedented amount of information being captured, stored and analysed about our lives and the people we share them with. We explore the relationship between this new data and emergent forms of interactive video storytelling. In particular we ask: i) how can interactive video storytelling techniques be employed to provide accessible, informative and pleasurable ways for people to engage with data; and ii) how can data be used by the creators of interactive video stories to meet expressive goals and support new forms of experience? We present an analysis of 43 interactive videos that use data in a noteworthy fashion. This analysis reveals a design space comprising key techniques for telling engaging interactive video stories with and about data. To conclude, we discuss challenges relating to the production and consumption of such content and make recommendations for future research.

Session 3: Understanding Users

(Session Chair: Pedro Almeida)

How Users Perceive Delays in Synchronous Companion Screen Experiences – An Exploratory Study

Vinoba Vinayagamoorthy – BBC R&D, London, UK

Abstract: Much work has focused on enabling accurately synchronised companion screen experiences. The challenge has been to ensure that the delays between the presentation of programme content on the TV and the delivery of the relevant companion screen content to a mobile device are kept to a minimum. This is mainly driven by the need to ensure that the integrity of the editorial design of companion screen experiences can be maintained at the users' end. This paper presents a 32-participant study which sought to explore the impact of delays between the presentation of programmes on a TV and the presentation of companion content on a tablet. Three types of experience were tested across eight levels of delay: 1) video-to-slideshow using Factual content, 2) video-to-alt-video using Sports content, and 3) video-to-AD (audio description) using Drama content. Participant responses suggest that different factors influence their evaluation of the different types of experiences tested.

“I Can Watch What I Want”: A Diary Study of On-Demand and Cross-Device Viewing

Jacob M. Rigby – UCL Interaction Centre, University College London, UK
Duncan P. Brumby – UCL Interaction Centre, University College London, UK
Sandy J.J. Gould – School of Computer Science, University of Birmingham, UK
Anna L. Cox – UCL Interaction Centre, University College London, UK

Abstract: In recent years, on-demand video services, such as Netflix and Amazon Video, have become extremely popular. To understand how people use these services, we recruited 20 people from nine households to keep a viewing diary for 14 days. To better understand these household viewing diaries, in-depth interviews were conducted. We found that people took advantage of the freedom and choice that on-demand services offer, watching on different devices and in different locations, both in the home and outside. People often watched alone so they could watch what they wanted, rather than coming together to watch something of mutual interest. Despite this flexibility, the evening prime time continued to be the most popular time for people to watch on-demand content. Sometimes they watched for extended periods, and during interviews concerns were expressed about how on-demand services make it far too easy to watch too much and that this is often undesirable.

Utilitarian and Hedonic Motivations for Live Streaming Shopping

Jie Cai – New Jersey Institute of Technology, Newark, USA
Donghee Yvette Wohn – New Jersey Institute of Technology, Newark, USA
Ankit Mittal – New Jersey Institute of Technology, Newark, USA
Dhanush Sureshbabu – New Jersey Institute of Technology, Newark, USA

Abstract: Watching live streams as part of the online shopping experience is a relatively new phenomenon. In this paper, we examine live streaming shopping, conceptualizing it as a type of online shopping that incorporates real-time social interaction. Live streaming shopping can happen in two ways: live streaming embedded in e-commerce, or e-commerce integrated into live streaming. Based on prior research related to live streaming and consumer motivation theories, we examined the relationships between hedonic and utilitarian motivations and shopping intention. We found that hedonic motivation is positively related to celebrity-based intention and utilitarian motivation is positively related to product-based intention. A content analysis of open-ended questions identified eight reasons why consumers prefer live streaming shopping over regular online shopping.

Session 4: Data-Driven Approaches

(Session Chair: Jonathan Hook)

A Data-driven Approach to Explore Television Viewing in the Household Environment

Minjoon Kim – Department of Computer Science and Engineering, GSCST & User Experience Lab, Seoul National University, South Korea
Jinyoung Kim – Department of Transdisciplinary Studies, GSCST & User Experience Lab, Seoul National University, South Korea
Sugyo Han – Department of Transdisciplinary Studies, GSCST & User Experience Lab, Seoul National University, South Korea
Joongseek Lee – Department of Transdisciplinary Studies, GSCST & User Experience Lab, Seoul National University, South Korea

Abstract: The rise of small, IoT-related devices and sensors has enabled us to sense and collect more data than ever before. In this study, we walk through our attempt at a data-driven approach to collecting behavioral data on television viewing, an activity often thought of as passive and habitual. We conducted a 14-day in-the-wild experiment with 13 households, using a data logger installed at each house. Television-related data (IR log data and IPTV packets) and contextual data (Bluetooth signal data and brightness data) were collected through the data logger, supplemented by the qualitative situational information that participants provided via in-situ chatbot surveys. Our non-intrusive data logger enabled behavioral data collection in a natural, comprehensive manner. Detailed television viewing behaviors recorded through IR data logs, the volume of viewing sessions, and in-situ chatbot responses show that television viewing is more heavily context-dependent than previously thought.

Explicating the Challenges of Providing Novel Media Experiences Driven by User Personal Data

Neelima Sailaja – University of Nottingham, Nottingham, UK
Andy Crabtree – University of Nottingham, Nottingham, UK
Derek McAuley – University of Nottingham, Nottingham, UK
Phil Stenton – BBC R&D, Salford, UK

Abstract: The turn towards personal data to drive novel media experiences has resulted in a shift in the priorities and challenges associated with media creation and dissemination. This paper takes up the challenge of explicating this novel and dynamic scenario through an interview study of employees delivering diverse personal-data-driven media services within a large U.K.-based media organisation. The results identify a need for better interactions in the user-data-service ecosystem where trust and value are prioritised and balanced. Being legally compliant and going beyond just the mandatory to further ensure social accountability and ethical responsibility as an organisation are unpacked as methods to achieve this balance in data-centric interactions. The work also presents how technology is seen and used as a solution for overcoming challenges and realising priorities to provide value while preserving trust within the personal data ecosystem.

Session 5: Systems

(Session Chair: Rene Kaiser)

A New Production Platform for Authoring Object-based Multiscreen TV Viewing Experiences

Jie Li – CWI, Amsterdam, Netherlands
Thomas Röggla – CWI, Amsterdam, Netherlands
Maxine Glancy – BBC R&D, Manchester, UK
Jack Jansen – CWI, Amsterdam, Netherlands
Pablo Cesar – CWI & TU Delft, Amsterdam, Netherlands

Abstract: Multiscreen TV viewing refers to a spectrum of media productions that can be watched on TV screens and companion screens (e.g., smartphones and tablets). TV production companies are now promoting an interactive and engaging way of viewing TV by offering tailored applications for TV programs. However, viewers are reluctant to install dozens of applications and switch between them. This is one of the obstacles that hinder companion screen applications from reaching mass audiences. To solve this, TV production companies need a standard process for producing multiscreen content, allowing viewers to follow all kinds of programs in one single application. This paper proposes a new object-based production platform for authoring programs for multiscreen. The platform consists of two parts: the preproduction tool and the live editing tool. To evaluate whether the proposed workflow is appropriate, validation interviews were conducted with professionals in the TV broadcasting industry. The professionals were positive about the proposed new workflow, indicating that the platform allows for preparation at the preproduction stage and reduces the workload during live broadcasting. They also see its potential to adapt to the current production workflow.

Digital Authoring of Interactive Public Display Applications

Ryan Mills – Lancaster University, Lancaster, UK
Matthew Broadbent – Lancaster University, Lancaster, UK
Nicholas Race – Lancaster University, Lancaster, UK

Abstract: HbbTV (Hybrid broadcast broadband TV) is an emerging force in the entertainment industry, and proper standardisation of its technologies would be hugely beneficial for the creation of content. HbbTV aims to realise this vision and has been widely successful thus far. This paper introduces the MPAT (Multi Platform Application Toolkit) project, the result of the effort and dedication of multiple organisations to extend the capabilities and functionality of HbbTV, in order to ease the design and creation of interactive TV applications. The paper also showcases the versatility of MPAT by describing a series of case studies that provide digital storytelling and visual authoring of interactive applications which transcend traditional TV use cases, and instead provide a gripping interactive experience via integration with public displays.

Companion Screen Architecture for Bridging TV Experience and Life Activities

Hisayuki Ohmata – NHK (Japan Broadcasting Corporation), Japan
Masaya Ikeo – NHK (Japan Broadcasting Corporation), Japan
Hiromu Ogawa – NHK (Japan Broadcasting Corporation), Japan
Tohru Takiguchi – NHK (Japan Broadcasting Corporation), Japan
Hiroshi Fujisawa – NHK (Japan Broadcasting Corporation), Japan

Abstract: The diversification of personal lifestyles has complicated the roles of media and associated service consumption. In our current era, when people start to use new services by transitioning from one service or device to another, bothersome operations can decrease their motivation to use the new services effectively. For example, even though companion screen services are now available on integrated broadcast–broadband systems, broadcast accessibility from mobile services remains suboptimal because existing architectures remain television (TV)-centric and cannot use these services effectively. In response to this issue, we propose a user-centric companion screen architecture (CSA) that can tune to a specified TV channel and launch broadcast-related TV applications from mobile and Internet of Things (IoT)-enabled devices. We confirmed the general versatility of this CSA by prototyping multiple use cases involving various broadcasters and by evaluating broadcast accessibility from mobile devices via user tests. The obtained results showed that 86% of the examinees expressed improved user satisfaction and that 78% of the examinees reported a potential increase in the number of broadcasts they would watch. Thus, we conclude that our proposed CSA improves broadcast accessibility from mobile and IoT services and can help bridge the gap between TV experiences and life activities.