• StoryKit (Part 1/2): Background & Playback

3 October 2020 by 


I worked on the StoryKit project (formerly OBM Toolkit) in BBC R&D from 2017 to 2020. During my time on the project, I helped it grow from its inception to its use in the real world by production teams, resulting in experiences such as Click1000 and HDMAdventure.

Object Based Media Background

Before jumping straight into StoryKit itself, I think it’ll be helpful to set out the background of Object Based Media (OBM) in BBC R&D. R&D had been experimenting with Object Based Media for years, producing several prototypes that explored different facets of OBM.

One example of this is an OBM weather broadcast called Forecaster. Forecaster is a standard weather forecast, but its component parts are delivered separately to the user. This allows the user to switch the presenter for a signer, move graphics around, and so on. Another prototype R&D produced was an experiment into variable-length radio based on a documentary about the author Derek Tangye. The most recent prototype prior to StoryKit was the Cook Along Kitchen Experience (CAKE). CAKE was an interactive cooking programme that would wait for you and reschedule recipes based on how long you took at each step. It also let you specify information about your kitchen and guests to adapt the recipes.

These prototypes all provided valuable insights into OBM, but they had one issue in common: each required custom hand-written code and software developers to create. R&D, for the most part, are not content creators, so we needed a way to let content creators in the rest of the BBC experiment with OBM. Involving a software developer for every creator wasn’t going to be practical, so the idea of StoryKit was born: a toolkit to create and deploy OBM experiences using a simple graphical web user interface.

Object Based Media Data Model

The first part of the toolkit to be produced was the Object Based Media data model. This data model is a shared format for specifying OBM experiences and is used by all the tools in StoryKit. It was just starting to be scoped out when I joined the project, and I took part in those scoping discussions. I was also responsible for designing the production side of the data model, as that was needed for the StoryShooter tool I was working on. The data model is tightly specified in the JSON Schema language. I later created tools to convert the JSON Schema into a GraphQL schema for use in StoryFormer, but more about that later.
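To give a flavour of what a JSON-Schema-to-GraphQL conversion involves, here is a minimal, dependency-free sketch. The schema shape, field names and `NarrativeElement` type here are invented for illustration; they are not the real OBM data model, and the actual conversion tools handle far more of JSON Schema than this.

```javascript
// Map JSON Schema primitive types to their GraphQL scalar equivalents.
const typeMap = { string: 'String', number: 'Float', integer: 'Int', boolean: 'Boolean' };

// Turn a simple JSON Schema object definition into a GraphQL SDL type,
// marking required fields as non-null with "!".
function jsonSchemaToGraphQL(name, schema) {
  const required = new Set(schema.required || []);
  const fields = Object.entries(schema.properties).map(([field, def]) => {
    const gqlType = typeMap[def.type] || 'String';
    return `  ${field}: ${gqlType}${required.has(field) ? '!' : ''}`;
  });
  return `type ${name} {\n${fields.join('\n')}\n}`;
}

// A hypothetical schema fragment, purely for demonstration.
const narrativeElement = {
  type: 'object',
  required: ['id', 'name'],
  properties: {
    id: { type: 'string' },
    name: { type: 'string' },
    duration: { type: 'number' },
  },
};

console.log(jsonSchemaToGraphQL('NarrativeElement', narrativeElement));
```

Running this prints a GraphQL type with `id` and `name` as non-null `String` fields and `duration` as a nullable `Float`.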

Over the course of the StoryKit project, the data model evolved massively. I remember many sessions of intensive discussion around proposed changes to make sure they were necessary and fit well. The data model has since been open sourced and is available from the BBC website.


StoryShooter

The next section of the toolkit I worked on was a tool called StoryShooter. I was responsible for the direction, coding and research of the tool. The goal of StoryShooter was to help manage the flow of media into StoryKit, allowing rough drafts of experiences to be assembled whilst still on set. It was also designed to capture logging data to allow for quicker edits when the material was imported into linear editors such as Adobe Premiere. I built the tool with NodeJS and Express on the backend and React, Redux and React Router on the frontend. The backend integrated with a selection of IP Studio services (another R&D project) and was built heavily on top of their functionality.

StoryShooter Portable Studio

As well as the StoryShooter web UI, I also built a hardware system (given the snappy name of Portable Studio in a Box) designed to capture audio and video directly from the cameras and mics on set. This system was configured and controlled directly via the StoryShooter tool, avoiding the need for extra control interfaces. Under the hood, this hardware was again running IP Studio code. The process of speccing and constructing it taught me a lot about the broadcast world, and by shadowing several productions I also learnt a lot about how logging is done on set. I wrote Ansible playbooks and a customised Ubuntu install script to provision the desktop part of the system, and further Ansible to provision the MikroTik router.

By the time I finished on the StoryShooter project, I had a working UI, and the hardware worked barring some bugs with the PTP clock and a few underlying bugs in IP Studio. Unfortunately, these bugs never got fixed, and as work started to slow on IP Studio, this project was parked in favour of focusing more on StoryPlayer and StoryFormer.


StoryPlayer

When I joined the team again after a 3-month project working with BBC Archives, all focus had shifted to StoryPlayer. StoryPlayer is the audience-facing component of StoryKit: it takes an instance of the data model and plays back the experience. The shift to StoryPlayer came about because we had interest from BBC Arts in doing an origami-based experience. This experience was later created and released as Make Along: Origami Jumping Frog, the first OBM experience R&D produced using the new data model.
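As a toy illustration of the playback idea, consider walking a story graph of linked elements and playing each one in turn. The structure and names below are entirely invented for this sketch; the real data model is far richer, with behaviours, variables and branching choices.

```javascript
// A hypothetical, heavily simplified "story" instance: a start pointer
// and a set of elements, each referencing the next element to play.
const story = {
  start: 'intro',
  elements: {
    intro: { media: 'intro.mp4', next: 'ending' },
    ending: { media: 'ending.mp4', next: null },
  },
};

// Walk the graph from the start element, collecting the media that a
// player would render at each step, until there is no next element.
function playthrough(story) {
  const played = [];
  let current = story.start;
  while (current) {
    const element = story.elements[current];
    played.push(element.media); // a real player would render this media
    current = element.next;
  }
  return played;
}

console.log(playthrough(story)); // [ 'intro.mp4', 'ending.mp4' ]
```

The real player's job is essentially this traversal, plus rendering, user choice handling and state, driven by the data model rather than a hard-coded graph.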

StoryPlayer, like StoryFormer, uses a NodeJS backend and a vanilla JavaScript frontend. From the start we intended to open source StoryPlayer, so we wanted to keep the frontend as close to vanilla JavaScript as possible to reduce the learning curve for contributors. The frontend was compiled via Webpack and Babel, and the backend was compiled through Babel, so we could use the latest ECMAScript features on both. As well as being a big learning experience in CSS, subtitling formats and analytics, this project also taught me the difficulties of cross-platform compatibility when trying to do something new with web technology.
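For anyone unfamiliar with that kind of build setup, a minimal sketch of a Webpack configuration using Babel looks something like the following. The entry point, output names and preset here are illustrative defaults, not the actual StoryPlayer configuration.

```javascript
// webpack.config.js — a minimal sketch of a Webpack + Babel frontend build.
const path = require('path');

module.exports = {
  entry: './src/index.js', // hypothetical entry point
  output: {
    path: path.resolve(__dirname, 'dist'),
    filename: 'player.bundle.js', // hypothetical bundle name
  },
  module: {
    rules: [
      {
        test: /\.js$/,          // run all project JS through Babel...
        exclude: /node_modules/, // ...but skip third-party dependencies
        use: {
          loader: 'babel-loader',
          // @babel/preset-env compiles modern ECMAScript down to
          // whatever the configured browser targets support.
          options: { presets: ['@babel/preset-env'] },
        },
      },
    ],
  },
};
```

The same Babel preset can be reused for the backend build, which is how one toolchain covers both sides.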

StoryPlayer deployment

Deployment of StoryPlayer had to be reliable and scalable, as it was going to serve large audiences. We handled this in several ways. First, we deployed everything through the BBC’s deployment platform, Cosmos, which handled deployment to AWS Test and Live environments and versioned those deployments. As all infrastructure was defined as code using Troposphere, we could roll back broken deployments with minimal effort. This addressed the reliability concern.

Solving the scalability challenge required a bit more effort, as we would be serving bandwidth-intensive video, images and audio to a lot of users. We solved the scalability of static assets by deploying them to an S3 bucket fronted by an Akamai Content Delivery Network (CDN). The final step was to leverage the BBC’s existing audio/video publishing pipeline, which let us push our experiences’ audio/video assets to the BBC’s CDNs as DASH and HLS streams. Together, these took the media-serving load off our NodeJS microservices entirely, reducing them to serving only dynamic pages. We could then easily scale the microservices horizontally to handle load.

Like the Object Based Media data model, StoryPlayer has also been open sourced and is available from the BBC website.

To be continued…

In the next segment, I’ll talk about the authoring side of StoryKit, the experiences released whilst working on the project and the insights gained from the analytics.