The Work-in-Progress (WiP) session solicits recent viewpoints, new discoveries, and early-stage design and development in disciplines that are in line with TVX’s areas of interest. It provides a unique opportunity for exchanging brave new ideas, receiving feedback and fostering collaborations. This year, we also introduce a Project-in-Progress special track, targeting contributions from ongoing major research initiatives including European Commission-funded or other similar-scale projects for cross-project discussions.

WiP papers will be presented as a short pitch presentation and a physical poster at the conference, and will be included in conference proceedings indexed in the ACM Digital Library.

As camera-ready versions are being finalized, the missing titles and abstracts below will be updated soon.

(WiP 1)
Twickle: Growing Twitch Streamer’s Communities Through Gamification of Word-of-Mouth Referrals

Jacob T. Browne – Twickle, San Diego, CA, USA
Bharat Batra – Twickle, San Diego, CA, USA

Abstract: Twitch has grown to be one of the largest streaming platforms worldwide, hosting over 2 million active streamers. Many of these streamers use their Twitch stream to earn a living, turning their streams into a business. However, growing a community that supports this endeavor remains a central challenge amongst streamers. In this paper, we present Twickle: a web-based leaderboard tool that leverages the gamification of word-of-mouth referrals to grow a streamer’s community. An initial feasibility study with four streamers reveals that Twickle increases the number of new viewers and is appreciated by the Twitch community. We address design opportunities for Twickle and outline future research.

(WiP 2)
Touchable Video Streams: Towards Multi-sensory and Multi-contact Experiences

Seokyeol Kim – School of Computing, Korea Advanced Institute of Science and Technology, Republic of Korea
Jinah Park – School of Computing, Korea Advanced Institute of Science and Technology, Republic of Korea

Abstract: Haptic feedback takes on an important role in providing spatial cues, which are difficult to convey solely by sight, as well as in increasing the immersion of content. However, although a number of techniques and applications for haptic media have been proposed in this regard, live streaming of touchable video has yet to be actively deployed due to computational complexity and equipment limitations. In order to mitigate these issues, we introduce an approach to render haptic feedback directly from RGB-D video streams without surface reconstruction, and also describe how to superimpose virtual objects or haptic effects onto real-world scenes. Furthermore, we discuss possible improvements in software and appropriate device setups to extend the proposed system into a practical solution for multi-sensory and multi-point interaction in streaming touchable media.

(WiP 3)
A Mediography Of Virtual Reality Non-Fiction: Insights And Future Directions

Chris Bevan – University of Bristol, Bristol, UK
David Green – University of the West of England, UK

Abstract: The emergence in recent years of consumer-accessible virtual reality (VR) technologies such as the Google Daydream, Oculus Rift and HTC Vive has led to a renewal of commercial, academic and public interest in immersive interactive media. Virtual reality non-fiction (VRNF) (e.g. documentary) is an emergent and rapidly evolving new medium for filmmaking that draws from – and builds upon – traditional forms of non-fiction, as well as interactive media, gaming and immersive theatre. In this paper, we present our ongoing work to capture and present the first comprehensive record of VRNF – a Mediography of Virtual Reality Non-Fiction – to tell the story of where this new medium has come from, how it is evolving, and where it is heading.

(WiP 4)
Content Unification in iTV to Enhance User Experience: The UltraTV Project

Pedro Almeida – Digimedia, University of Aveiro, Portugal
Jorge Ferraz de Abreu – Digimedia, University of Aveiro, Portugal
Sílvia Fernandes – Digimedia, University of Aveiro, Portugal
Eliza Oliveira – Digimedia, University of Aveiro, Portugal

Abstract: Recent changes in TV viewers’ consumption habits are pushing to a point where industry content providers and producers must create new technological solutions to retain customers. To cope with these changes, the UltraTV project consortium developed an iTV concept focused on the unification of content from different sources. This brings traditional TV together with Over-the-Top content, aiming to provide an integrated solution that fosters audiovisual consumption and eases the discovery of content. This paper presents the implemented solution and reports on the results of its evaluation in a field trial. The results provide valuable insights for a market-oriented version of the UltraTV concept, demonstrating the feasibility of and user demand for a profile-based content unification solution for future iTV systems.

(WiP 5)
Viewers’ Behaviors at Home on TV and Other Screens: An Online Survey

Jorge Ferraz de Abreu – Digimedia, University of Aveiro, Portugal
Pedro Almeida – Digimedia, University of Aveiro, Portugal
Ana Velhinho – Digimedia, University of Aveiro, Portugal
Enrickson Varsori – Digimedia, University of Aveiro, Portugal

Abstract: In a context where audiovisual consumption habits are continually transforming, driven largely by Video On Demand services, this paper’s main goal is to characterize the motivational factors and behaviors related to the use of multiple devices at home. The report is based on the results of an online survey carried out in Portugal, aiming to collect information about online video and linear TV content consumption. Besides regular TV content, usually watched on a TV connected to a set-top box, the computer was the device most often chosen to watch all other sources of content at home. Furthermore, 71.4% of respondents stated that they usually connect more than one device to the TV screen.

(WiP 6)
Personalising the TV Experience with Augmented Reality Technology: Synchronised Sign Language Interpretation

Vinoba Vinayagamoorthy – BBC R&D, UK
Maxine Glancy – BBC R&D, UK
Paul Debenham – BBC R&D, UK
Alastair Bruce – BBC R&D, UK
Christoph Ziegler – IRT, Germany
Richard Schäffer – IRT, Germany

Abstract: This paper explores the potential of augmented reality technology as a novel way to allow users to view a sign language interpreter through an optical head-mounted display while watching a TV programme. We address the potential of augmented reality for personalisation of TV access services as part of closed laboratory investigations. Based on guidelines of regulatory authorities and research on traditional sign language services on TV, as well as feedback from experts, we justify our two design proposals. We describe how we produced the content for the AR prototype applications and what we have learned during the process. Finally, we develop questions for our upcoming user studies.

(WiP 7)
Educational Online Video: Opportunities and Barriers to Integrate it in the Entertainment Consumption Routines

Carolina Almeida – CIC.Digital- DIGIMEDIA, University of Aveiro, Campus Universitário de Santiago, Portugal
Pedro Almeida – CIC.Digital- DIGIMEDIA, University of Aveiro, Campus Universitário de Santiago, Portugal

Abstract: The general population, and teenagers in particular, are increasingly using mobile devices for video consumption instead of the regular TV set. Considering that the top motivation for video consumption is to seek entertainment, there is an opportunity to capture some of those moments for educational content enriched with entertainment characteristics. This study aims to identify narrative and technical characteristics to incorporate in informal educational videos designed for new media platforms, by analysing the preferences of teenagers aged 12 to 16 who attend the Portuguese public school system. Furthermore, the research team expects to understand whether educational videos enriched with these characteristics can be included in the entertainment consumption routines of these viewers. Among the most valued characteristics are a comic approach, the integration of animations, a relaxed yet clear presenter style, and a low level of scientific detail in video explanations.

(WiP 8)
Understanding Blind or Visually Impaired People on YouTube through Qualitative Analysis of Videos

Woosuk Seo – University of Michigan, USA
Hyunggu Jung – Kyung Hee University, Republic of Korea

Abstract: In this paper, we analyzed videos to explore blind or visually impaired (BVI) people on YouTube. While researchers have found how BVI people interact with content and other people on social media platforms (e.g., Facebook), little is known about the experience of BVI people on video-based social media platforms (e.g., YouTube). To use videos as a means of identifying the needs of BVI people on YouTube, we collected and analyzed a specific type of video called the Visually Impaired People (VIP) Tag video. This Tag video contains a set of structured questions about eye condition and experience as a BVI person. Based on a qualitative analysis of 24 VIP Tag videos created by BVI people, we found how they create videos and why they joined YouTube. In conclusion, we present how video-content analysis can be used to create an inclusive video-based social media platform.

(WiP 9)
Collecting Observational Data about Online Video Use in the Home Using Open-Source Broadcasting Software

Steven Schirra – Twitch, USA
Danae Holmes – Twitch, USA
Alice Rhee – Twitch, USA

Abstract: Capturing contextual data about online media consumption in the home can be difficult, often requiring site visits and hardware installation in the field. In this paper, we present an exploratory study in which we use free, open-source broadcasting software and participants’ existing computer hardware to capture remote, contextual video data inside the home. This method allows participants to simultaneously capture live recordings across multiple computer screens—as well as themselves and their home viewing environment—while watching long-form online video. We discuss the affordances and challenges of this method for researchers seeking to capture contextual data remotely.

(WiP 10)
A Study on User Experience Evaluation of Glasses-type Wearable Device with Built-in Bone Conduction Speaker: Focus on the Zungle Panther

Ayoung Seok – Sogang University, Republic of Korea
Yongsoon Choi – Sogang University, Republic of Korea

Abstract: Current HMD-oriented glasses-type wearable devices are inconvenient to use in real daily life, requiring both miniaturization and weight reduction; such devices are expected to evolve into everyday glasses-type devices. There is, however, a lack of research on user experience evaluation of VR (Virtual Reality), AR (Augmented Reality), television and games using glasses-type devices, and on corresponding design guidelines. This research used the Zungle Panther, a glasses-type wearable device with a built-in bone conduction speaker, to investigate a user experience evaluation model for near-future AR/VR content used in daily life, and to explore design guidelines for glasses-type devices for near-future content, including TV and online videos.

(WiP 11)
Dynamic Subtitles in Cinematic Virtual Reality

Sylvia Rothe – LMU Munich, Germany
Kim Tran – LMU Munich, Germany
Heinrich Hußmann – LMU Munich, Germany

Abstract: Cinematic Virtual Reality has been increasing in popularity in recent years. Watching 360° movies with a Head Mounted Display, the viewer can freely choose the direction of view, and thus the visible section of the movie. Therefore, a new approach to the placement of subtitles is needed. There are three main issues to consider: the position of the subtitles, speaker identification and the influence on the VR experience. In our study we compared a static method, where the subtitles are placed at the bottom of the field of view, with dynamic subtitles, where the position of the subtitles depends on the scene and is close to the speaking person. This work-in-progress describes first results of the study, which indicate that dynamic subtitles can lead to a higher presence score, less sickness and lower workload.

(WiP 12)
Augmenting the Radio Experience by Enhancing Interactions between Radio Editors and Listeners

Sandy Claes – VRT innovation, Belgium
Rik Bauwens – VRT innovation, Belgium
Mike Matton – VRT innovation, Belgium

Abstract: Radio has a long history of being a one-way communication channel from radio station to listener. Recent technological advancements, such as online radio, enable the listener to interact more easily with radio stations, potentially augmenting the listener’s overall radio experience. In turn, the editorial teams of radio stations are challenged by the streams of incoming messages. In this paper, we report on the results of an initial, exploratory co-design process aimed at mapping the needs and values of both types of end-users, i.e. listeners and radio editors, towards interaction. Specifically, we organized 6 co-design workshops at radio stations in 3 different countries. Results demonstrate how the needs of both types of end-users overlap. The paper concludes with 5 general points of attention, i.e. relevant feedback, co-creation of content, personal services, content on demand and being part of a community, which form the basis for the continuation of our work.

(WiP 13)
Taxonomies in DUI Design Patterns

Mubashar Iqbal – Tallinn University, Estonia
David Jose Ribeiro Lamas – Tallinn University, Estonia
Ilja Šmorgun – Tallinn University, Estonia

Abstract: Recently a library of design patterns was created to aid researchers and designers in specifying Distributed User Interfaces (DUIs). The patterns provide an overview of the solutions to common DUI design problems without requiring a significant amount of time to be spent on reading domain-specific literature and exploring existing DUI implementations. Among the main limitations of the library’s current implementation is the significant overlap among design pattern descriptions and their relationships not being sufficiently clear. To address this, a systematic approach was undertaken to remove the overlaps among the design patterns, as well as to clarify their relationships by creating a taxonomic structure. The results of this study open several research directions to advance the current work on DUI design patterns.

(WiP 14)
Smartphone-like or TV-like Smart TV? The effect of false memory creation

Hyejeong Lee – Hanyang University, Republic of Korea
Hokyoung Ryu – Hanyang University, Republic of Korea
Jieun Kim – Hanyang University, Republic of Korea

Abstract: False belief pertains to what users falsely believe, in their mental model, about remembering novel features with which they have no prior experience. The current study investigated how the False Belief technique can be employed to extract a first-time smart TV user’s mental model. Smart features formed by a group of users’ false memories (n=41) were monitored to see how the users’ mental model changed with retention intervals (immediate, short, and long delays). The findings showed that a gist trace formed during first-time use cannot last long (1 month) because of the greater false belief effect. Practical implications of these findings should be pursued further to address the apparent adoption obstacles in smart-TV use.

(WiP 15)
Experiencing Virtual Reality Together: Social VR Use Case Study

Simon Gunkel – TNO, Netherlands
Hans Stokking – TNO, Netherlands
Martin Prins – TNO, Netherlands
Omar Niamut – TNO, Netherlands
Ernestasia Siahaan – CWI, Netherlands
Pablo Cesar – CWI, Netherlands

Abstract: As Virtual Reality (VR) applications have recently gained momentum, the social and communication aspects of VR experiences have become more relevant. In this paper, we present initial results on the types of applications and factors that users would find relevant for Social VR. We conducted a study involving 91 participants, and identified 4 key use cases for Social VR: video conferencing, education, gaming and watching movies. Further, we identified 2 important factors for such experiences: interacting within the experience, and enjoying the experience. Our results serve as an initial step before performing more detailed studies on the functional requirements for specific Social VR applications. We also discuss the research necessary to fill current technological gaps in order to move Social VR experiences forward.

(WiP 16)
As Music Goes By in versions and movies along time

Acácio Moreira – LASIGE, Faculdade de Ciências Universidade de Lisboa, Portugal
Teresa Chambel – LASIGE, Faculdade de Ciências Universidade de Lisboa, Portugal

Abstract: Music and movies have a significant impact on our lives, and they have been playing together since the early days of the moving image. Music history on its own goes back much earlier, and music has been present in every known culture. It has also long been common for artists to perform and record music originally written and performed by other musicians. In this paper we address the relevance of, and the support for, accessing music in versions and movies along time, and introduce As Music Goes By, an interactive web application being designed and developed to contribute to this purpose, aiming at increased richness and flexibility, the chance to find unexpected meaningful information, and support for creating and experiencing music and movies that keep touching us.

(WiP 17)
ImAc: Enabling Immersive, Accessible and Personalized Media Experiences

Mario Montagud – I2CAT Foundation, Spain
Isaac Fraile – I2CAT Foundation, Spain
Juan A. Nuñez – I2CAT Foundation, Spain
Sergi Fernández – I2CAT Foundation, Spain

Abstract: The integration of immersive content and consumption devices within the TV landscape brings fascinating new opportunities. However, the exploitation of these immersive TV services is still in its infancy, and groundbreaking solutions need to be devised. A key challenge is to enable truly inclusive experiences, regardless of users’ sensorial and cognitive capacities, age and language. In this context, the ImAc project explores how accessibility services (subtitling, audio description and sign language) can be efficiently integrated with immersive media, such as omnidirectional and Virtual Reality (VR) content, while keeping compatibility with current standards and technologies. This paper provides an overview of the project, focusing on its motivation, the user-centered methodology followed and its key research objectives. The end-to-end system (from production to consumption) being specified, the envisioned scenarios and the planned evaluations are also briefly described.