Experimental video player

A concept player optimised for scanning through technical videos that are predominantly speech

Roles

  • Interaction Design
  • Motion Design
  • Prototyping

This work is from my role as an experience designer at an advertising agency.

A major computing company with a large library of technical videos tasked us with concepting a central destination for video content discovery. The goal was to create an interactive video experience that could answer people’s questions.

Considerations for technical video content

Unlike other popular video platforms with more visually driven content, the videos on this platform are dominated by voice-overs paired with static visuals. Since many of the videos were long and covered multiple topics, quick access to a specific part of a video would be especially helpful. To achieve that without manually tagging the large library of existing content, I proposed using the company’s own speech-to-text service to automatically generate timecoded transcripts at scale, improving both navigation within the videos and discoverability of the content via search.

Video transcript prior art

Educational video platforms commonly provide a way for users to browse through the full transcript of captioned videos. However, it is typically tucked away and kept separate from the video. For this platform’s narrative-driven, long-form videos, I saw an opportunity to improve navigation within a video by more closely integrating the transcript.

Video seeker for narration-focused content

Since the videos generally take the format of static slides accompanied by long narrations, the frame-based seeking of regular video players was of little help in identifying a specific portion of a video.

Instead, I proposed making the transcript the main focus when seeking, allowing users to browse through a video as though scrolling and skimming through an article.
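The coupling between playback and the transcript could work in both directions: the highlighted segment tracks the video’s current time, and tapping a segment seeks the video to that segment’s start. A minimal sketch of the first direction, assuming timecoded segments sorted by start time (all names are illustrative):

```typescript
// Hypothetical timecoded transcript segment.
interface TranscriptSegment {
  start: number; // seconds
  end: number;
  text: string;
}

// Given the current playback time, find the index of the transcript
// segment to highlight. Segments are sorted by start time, so a binary
// search keeps this cheap even for hour-long transcripts.
function activeSegmentIndex(
  segments: TranscriptSegment[],
  time: number
): number {
  let lo = 0;
  let hi = segments.length - 1;
  let index = 0;
  while (lo <= hi) {
    const mid = (lo + hi) >> 1;
    if (segments[mid].start <= time) {
      index = mid; // this segment has begun; remember it and look later
      lo = mid + 1;
    } else {
      hi = mid - 1;
    }
  }
  return index;
}
```

The inverse direction is a one-liner in a browser player: set the media element’s current time to the tapped segment’s start.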

Answering questions

Another natural language API available in the platform was an answering service that can take a natural language question as input and extract snippets from a corpus to answer it. I proposed integrating this into the search functionality of the content hub, utilising the automatically generated video transcripts as the corpus. The search results would display answer snippets in text, and allow users to jump to the referenced portion of the videos.
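A hypothetical shape for such a search result, pairing an answer snippet with a deep link into the source video (the field names and URL scheme are assumptions for illustration, not the answering service’s actual API):

```typescript
// Hypothetical result shape: the answering service extracts a snippet
// from the transcript corpus, and we keep track of which video and
// timecode the snippet came from.
interface AnswerSnippet {
  text: string;
  videoId: string;
  start: number; // seconds into the source video
}

// Build a deep link that opens the referenced video at the answering
// passage (the URL scheme is illustrative).
function answerLink(answer: AnswerSnippet): string {
  return `/videos/${answer.videoId}?t=${Math.floor(answer.start)}`;
}
```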

Prototyping

Because of the complexity of the interactions in this experience, it was essential to create an interactive prototype to validate the concept. The prototype was created in Principle, making extensive use of independent components with multiple states.