{"id":2508,"date":"2017-05-18T10:33:48","date_gmt":"2017-05-18T10:33:48","guid":{"rendered":"https:\/\/tvx.acm.org\/2018\/?page_id=2508"},"modified":"2018-06-22T10:30:52","modified_gmt":"2018-06-22T10:30:52","slug":"work-in-progress","status":"publish","type":"page","link":"https:\/\/tvx.acm.org\/2018\/program\/work-in-progress\/","title":{"rendered":"Work-in-Progress"},"content":{"rendered":"<div class=\"flex_column av_one_full  flex_column_div av-zero-column-padding first  \" style='border-radius:0px; '><section class=\"av_textblock_section\"  itemscope=\"itemscope\" itemtype=\"https:\/\/schema.org\/CreativeWork\" ><div class='avia_textblock '   itemprop=\"text\" ><h3><strong>WORK-IN-PROGRESS<\/strong><\/h3>\n<p>The Work-in-Progress (WiP) session solicits recent viewpoints, new discoveries, and early-stage design and development in disciplines that are in line with TVX\u2019s areas of interest. It provides a unique opportunity for exchanging brave new ideas, receiving feedback and fostering collaborations. This year, we also introduce a Project-in-Progress special track, targeting contributions from ongoing major research initiatives including European Commission-funded or other similar-scale projects for cross-project discussions.<\/p>\n<p>WiP papers will be presented as a short pitch presentation and a physical poster at the conference, and will be included in conference proceedings indexed in the ACM Digital Library.<\/p>\n<p>Since we are finalizing to prepare\u00a0camera-ready versions, the missing titles and abstracts below will be updated soon.<\/p>\n<\/div><\/section><br \/>\n<div  class='hr hr-default '><span class='hr-inner ' ><span class='hr-inner-style'><\/span><\/span><\/div><br \/>\n<section class=\"av_textblock_section\"  itemscope=\"itemscope\" itemtype=\"https:\/\/schema.org\/CreativeWork\" ><div class='avia_textblock '   itemprop=\"text\" ><blockquote>\n<p><strong>(WiP 1)<br \/>\nTwickle: Growing Twitch Streamer\u2019s Communities Through Gamification of Word-of-Mouth Referrals<\/strong><\/p>\n<\/blockquote>\n<p>Jacob T. Browne &#8211; Twickle, San Diego, CA, USA<br \/>\nBharat Batra &#8211; Twickle, San Diego, CA, USA<\/p>\n<p><strong>Abstract:<\/strong>\u00a0Twitch.tv has grown to be one of the largest streaming platforms worldwide, hosting over 2 million active streamers. Many of these streamers are using their Twitch stream to earn a living, turning their streams into a business. However, growing a community that supports this endeavor remains a central challenge amongst streamers. In this paper, we present Twickle: a web-based leaderboard tool that leverages the gami- fication of word-of-mouth referrals to grow a streamer\u2019s community. An initial feasibility study with four stream- ers reveals that Twickle increases the amount of new viewers and is appreciated by the Twitch community. We address design opportunities for Twickle and outline future research.<\/p>\n<blockquote>\n<p><strong>(WiP 2)<br \/>\nTouchable Video Streams: Towards Multi-sensory and Multi-contact Experiences<\/strong><\/p>\n<\/blockquote>\n<p>Seokyeol Kim &#8211; School of Computing, Korea Advanced Institute of Science and Technology, Republic of Korea<br \/>\nJinah Park &#8211; School of Computing, Korea Advanced Institute of Science and Technology, Republic of Korea<\/p>\n<p><strong>Abstract:<\/strong>\u00a0Haptic feedback takes on an important role in providing spatial cues, which are difficult to convey solely by sight, as well as in increasing the immersion of contents. 
Abstract: Haptic feedback takes on an important role in providing spatial cues, which are difficult to convey solely by sight, as well as in increasing the immersion of content. However, although a number of techniques and applications for haptic media have been proposed in this regard, live streaming of touchable video has yet to be actively deployed due to computational complexity and equipment limitations. In order to mitigate these issues, we introduce an approach to render haptic feedback directly from RGB-D video streams without surface reconstruction, and also describe how to superimpose virtual objects or haptic effects onto real-world scenes. Furthermore, we discuss possible improvements in software and appropriate device setups to extend the proposed system to support a practical solution for multi-sensory and multi-point interaction in streaming touchable media.

(WiP 3)
A Mediography of Virtual Reality Non-Fiction: Insights and Future Directions

Chris Bevan – University of Bristol, Bristol, UK
David Green – University of the West of England, UK

Abstract: The emergence in recent years of consumer-accessible virtual reality (VR) technologies such as the Google Daydream, Oculus Rift and HTC Vive has led to a renewal of commercial, academic and public interest in immersive interactive media. Virtual reality non-fiction (VRNF) (e.g. documentary) is an emergent and rapidly evolving new medium for filmmaking that draws from – and builds upon – traditional forms of non-fiction, as well as interactive media, gaming and immersive theatre. In this paper, we present our ongoing work to capture and present the first comprehensive record of VRNF – a Mediography of Virtual Reality Non-Fiction – to tell the story of where this new medium has come from, how it is evolving, and where it is heading.

(WiP 4)
Content Unification in iTV to Enhance User Experience: The UltraTV Project

Pedro Almeida – Digimedia, University of Aveiro, Portugal
Jorge Ferraz de Abreu – Digimedia, University of Aveiro, Portugal
Sílvia Fernandes – Digimedia, University of Aveiro, Portugal
Eliza Oliveira – Digimedia, University of Aveiro, Portugal
Abstract: Recent changes in TV viewers' consumption habits are pushing to a point where industry content providers and producers must create new technological solutions to retain customers. To cope with these changes, the UltraTV project consortium developed an iTV concept focused on the unification of content from different sources. This brings together traditional TV and Over-the-Top content, aiming to provide an integrated solution that can foster audiovisual consumption and ease the discovery of content. This paper presents the implemented solution and reports on the results of its evaluation in a field trial. Results provide valuable insights for a market-oriented version of the UltraTV concept, demonstrating the feasibility of, and user demand for, a profile-based content unification solution for future iTV solutions.

(WiP 5)
Viewers' Behaviors at Home on TV and Other Screens: An Online Survey

Jorge Ferraz de Abreu – Digimedia, University of Aveiro, Portugal
Pedro Almeida – Digimedia, University of Aveiro, Portugal
Ana Velhinho – Digimedia, University of Aveiro, Portugal
Enrickson Varsori – Digimedia, University of Aveiro, Portugal

Abstract: In a context where audiovisual consumption habits are continually transforming, mostly driven by Video on Demand services, this paper has the main goal of characterizing the motivational factors and behaviors related to the use of multiple devices at home. The report is based on the results of an online survey carried out in Portugal, aiming to collect information about online video and linear TV content consumption. Besides the regular TV contents, usually watched on a TV connected to a set-top box, the computer was the most chosen device to watch all the other sources of content at home. Furthermore, 71.4% of respondents stated that they usually connect more than one device to the TV screen.

(WiP 6)
Personalising the TV Experience with Augmented Reality Technology: Synchronised Sign Language Interpretation

Vinoba Vinayagamoorthy – BBC R&D, UK
Maxine Glancy – BBC R&D, UK
Paul Debenham – BBC R&D, UK
Alastair Bruce – BBC R&D, UK
Christoph Ziegler – IRT, Germany
Richard Schäffer – IRT, Germany

Abstract: This paper explores the potential of augmented reality technology as a novel way to allow users to view a sign language interpreter through an optical head-mounted display while watching a TV programme. We address the potential of augmented reality for personalisation of TV access services as part of closed laboratory investigations. Based on guidelines of regulatory authorities and research on traditional sign language services on TV, as well as feedback from experts, we justify our two design proposals. We describe how we produced the content for the AR prototype applications and what we have learned during the process. Finally, we develop questions for our upcoming user studies.

(WiP 7)
Educational Online Video: Opportunities and Barriers to Integrate it in the Entertainment Consumption Routines

Carolina Almeida – CIC.Digital-DIGIMEDIA, University of Aveiro, Campus Universitário de Santiago, Portugal
Pedro Almeida – CIC.Digital-DIGIMEDIA, University of Aveiro, Campus Universitário de Santiago, Portugal
Abstract: The general population, and teenagers in particular, are increasingly using mobile devices for video consumption instead of the regular TV set. Considering that the top motivation for video consumption is to seek entertainment, there is an opportunity to try to capture some of those moments for educational content enriched with some entertainment characteristics. This study aims to identify narrative and technical characteristics to incorporate in informal educational videos designed for new media platforms, by analysing the preferences of teenagers aged 12 to 16 who attend the Portuguese public school system. Furthermore, the research team expects to understand whether educational videos enriched with these characteristics can be included in the entertainment consumption routines of these viewers. Some of the most valued characteristics are the comic approach, the integration of animations, a relaxed yet clear presenter style and a low level of scientific detail in video explanations.

(WiP 8)
Understanding Blind or Visually Impaired People on YouTube through Qualitative Analysis of Videos

Woosuk Seo – University of Michigan, USA
Hyunggu Jung – Kyung Hee University, Republic of Korea

Abstract: In this paper, we analyzed videos to explore blind or visually impaired (BVI) people on YouTube. While researchers have found how BVI people interact with content and other people on social media platforms (e.g., Facebook), little is known about the experience of BVI people on video-based social media platforms (e.g., YouTube). To use videos as a means of identifying the needs of BVI people on YouTube, we collected and analyzed a specific type of video called the Visually Impaired People (VIP) Tag video. This Tag video has a set of structured questions about eye condition and experience as a BVI person. Based on the qualitative analysis of 24 VIP Tag videos created by BVI people, we found how they create videos and why they joined YouTube. In conclusion, we present how video-content analysis can be used to create an inclusive video-based social media platform.

(WiP 9)
Collecting Observational Data about Online Video Use in the Home Using Open-Source Broadcasting Software

Steven Schirra – Twitch, USA
Danae Holmes – Twitch, USA
Alice Rhee – Twitch, USA

Abstract: Capturing contextual data about online media consumption in the home can be difficult, often requiring site visits and hardware installation in the field. In this paper, we present an exploratory study in which we use free, open-source broadcasting software and participants' existing computer hardware to capture remote, contextual video data inside the home. This method allows participants to simultaneously capture live recordings across multiple computer screens, as well as themselves and their home viewing environment, while watching long-form online video. We discuss the affordances and challenges of this method for researchers seeking to capture contextual data remotely.

(WiP 10)
A Study on User Experience Evaluation of Glasses-type Wearable Device with Built-in Bone Conduction Speaker: Focus on the Zungle Panther

Ayoung Seok – Sogang University, Republic of Korea
Yongsoon Choi – Sogang University, Republic of Korea
Abstract: Current HMD-oriented glasses-type wearable devices are inconvenient to use in daily life and require both miniaturization and weight reduction; such wearable devices are expected to develop into glasses-type devices. There is also a lack of research on evaluating the user experience of VR (virtual reality), AR (augmented reality), television and games on glasses-type devices, and on corresponding design guidelines. This research uses the Zungle Panther, a glasses-type wearable device with a built-in bone conduction speaker, to investigate a user experience evaluation model for near-future AR/VR content intended for daily use, and to derive design guidelines for glasses-type devices for near-future content, including TV and online videos.

(WiP 11)
Dynamic Subtitles in Cinematic Virtual Reality

Sylvia Rothe – LMU Munich, Germany
Kim Tran – LMU Munich, Germany
Heinrich Hußmann – LMU Munich, Germany

Abstract: Cinematic Virtual Reality has been increasing in popularity in recent years. When watching 360° movies with a Head-Mounted Display, the viewer can freely choose the direction of view, and thus the visible section of the movie. Therefore, a new approach for the placement of subtitles is needed. There are three main issues which have to be considered: the position of the subtitles, speaker identification and the influence on the VR experience. In our study we compared a static method, where the subtitles are placed at the bottom of the field of view, with dynamic subtitles, where the position of the subtitles depends on the scene and is close to the speaking person. This work-in-progress describes first results of the study, which indicate that dynamic subtitles can lead to a higher presence score, less sickness and lower workload.

(WiP 12)
Augmenting the Radio Experience by Enhancing Interactions between Radio Editors and Listeners

Sandy Claes – VRT innovation, Belgium
Rik Bauwens – VRT innovation, Belgium
Mike Matton – VRT innovation, Belgium
Abstract: Radio has a long history of being a one-way communication channel from radio station to listener. Recent technological advancements, such as online radio, enable the listener to interact more easily with radio stations, potentially augmenting the overall radio experience of the listener. In turn, the editorial teams of radio stations are challenged by the streams of incoming messages. In this paper, we report on the results of an initial, exploratory co-design process that aimed at mapping the needs and values of both types of end-users, i.e. listeners and radio editors, towards interaction. Specifically, we organized 6 co-design workshops at radio stations in 3 different countries. Results demonstrate how the needs of both types of end-users overlap. The paper concludes with 5 general points of attention, i.e. relevant feedback, co-creation of content, personal services, content on demand and being part of a community, which form the basis for the continuation of our work.

(WiP 13)
Taxonomies in DUI Design Patterns

Mubashar Iqbal – Tallinn University, Estonia
David Jose Ribeiro Lamas – Tallinn University, Estonia
Ilja Šmorgun – Tallinn University, Estonia

Abstract: Recently, a library of design patterns was created to aid researchers and designers in specifying Distributed User Interfaces (DUIs). The patterns provide an overview of the solutions to common DUI design problems without requiring a significant amount of time to be spent on reading domain-specific literature and exploring existing DUI implementations. Among the main limitations of the library's current implementation are the significant overlap among design pattern descriptions and their relationships not being sufficiently clear. To address this, a systematic approach was undertaken to remove the overlaps among the design patterns, as well as to clarify their relationships by creating a taxonomic structure. The results of this study open several research directions to advance the current work on DUI design patterns.

(WiP 14)
Smartphone-like or TV-like Smart TV? The Effect of False Memory Creation

Hyejeong Lee – Hanyang University, Republic of Korea
Hokyoung Ryu – Hanyang University, Republic of Korea
Jieun Kim – Hanyang University, Republic of Korea

Abstract: False belief pertains to what users falsely believe, in their mental model, about remembering novel features with which they have no prior experience. The current study investigated how the False Belief technique can be employed to extract a first-time smart TV user's mental model. Smart features formed by a group of users' false memories (n=41) were monitored to see how the users' mental model changed with retention intervals (immediate, short, and long delays). The findings showed that a gist trace formed in first-time use cannot last long (1 month) because of the greater false belief effect. Practical implications of these findings should be explored further to address the apparent adoption obstacles in smart TV use.

(WiP 15)
Experiencing Virtual Reality Together: Social VR Use Case Study

Simon Gunkel – TNO, Netherlands
Hans Stokking – TNO, Netherlands
Martin Prins – TNO, Netherlands
Omar Niamut – TNO, Netherlands
Ernestasia Siahaan – CWI, Netherlands
Pablo Cesar – CWI, Netherlands
Abstract: As Virtual Reality (VR) applications have gained momentum recently, the social and communication aspects of VR experiences have become more relevant. In this paper, we present some initial results on understanding the types of applications and factors that users would find relevant for Social VR. We conducted a study involving 91 participants, and identified 4 key use cases for Social VR: video conferencing, education, gaming and watching movies. Further, we identified 2 important factors for such experiences: interacting within the experience, and enjoying the experience. Our results serve as an initial step before performing more detailed studies on the functional requirements for specific Social VR applications. We also discuss the research necessary to fill in current technological gaps in order to move Social VR experiences forward.

(WiP 16)
As Music Goes By in Versions and Movies Along Time

Acácio Moreira – LASIGE, Faculdade de Ciências, Universidade de Lisboa, Portugal
Teresa Chambel – LASIGE, Faculdade de Ciências, Universidade de Lisboa, Portugal

Abstract: Music and movies have a significant impact on our lives, and they have been playing together since the early days of the moving image. Music history on its own goes back much earlier and has been present in every known culture. It has also been common, since ancient times, for artists to perform and record music originally written and performed by other musicians. In this paper we address the relevance of, and the support for, accessing music in versions and movies along time, and introduce As Music Goes By, an interactive web application being designed and developed for this purpose, aiming at increased richness and flexibility, the chance to find unexpected meaningful information, and the support to create and experience music and movies that keep touching us.

(WiP 17)
ImAc: Enabling Immersive, Accessible and Personalized Media Experiences

Mario Montagud – i2CAT Foundation, Spain
Isaac Fraile – i2CAT Foundation, Spain
Juan A. Nuñez – i2CAT Foundation, Spain
Sergi Fernández – i2CAT Foundation, Spain
Abstract: The integration of immersive content and consumption devices within the TV landscape brings fascinating new opportunities. However, the exploitation of these immersive TV services is still in its infancy, and groundbreaking solutions need to be devised. A key challenge is to enable truly inclusive experiences, regardless of the sensorial and cognitive capacities of the users, their age and language. In this context, the ImAc project explores how accessibility services (subtitling, audio description and sign language) can be efficiently integrated with immersive media, such as omnidirectional and Virtual Reality (VR) content, while keeping compatibility with current standards and technologies. This paper provides an overview of the project, focusing on its motivation, the user-centered methodology followed and its key research objectives. The end-to-end system (from production to consumption) being specified, the envisioned scenarios and the planned evaluations are also briefly described.