Time Addressable Media Store (TAMS)
One of the big challenges with access services is creating them in time for broadcast. Content is often finished very close to its broadcast date, which means audio describers get a very short deadline to create an audio description track. BBC R&D have been working on a way to speed up the content creation process, which may also give audio describers more time to work on AD tracks.
What is a Time Addressable Media Store?
A Time Addressable Media Store (TAMS) is a way of making the elements of content, such as raw video tracks, raw audio tracks, video tracks with special effects and audio tracks with sound effects, available over a secure internet connection to the teams who need to work on them. These elements are addressed based on where they fit in the timeline of a programme. Instead of creating a new master file each time sound effects or background music are added, a TAMS-based workflow simply notes when the sound effects or music track needs to be played and plays it alongside the original audio track. This means the teams creating the sound effects and the background music can both work on them at the same time.
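To make the idea concrete, here is a minimal Python sketch of a timeline-addressed store. It is not the real BBC TAMS API; the class names, field names and URIs are invented for illustration. The point it shows is that adding music or effects only records references to when they play, so no new master file has to be rendered.

```python
from dataclasses import dataclass, field

@dataclass
class MediaSegment:
    """A piece of media addressed by where it sits on the programme timeline."""
    track_id: str        # e.g. "dialogue", "music", "effects", "video"
    source_uri: str      # where the underlying media object is stored (invented scheme)
    start: float         # timeline position in seconds
    end: float

@dataclass
class ProgrammeTimeline:
    """A programme described as references to media objects, not a rendered master file."""
    segments: list[MediaSegment] = field(default_factory=list)

    def add(self, segment: MediaSegment) -> None:
        # Adding music or effects just records *when* they should play;
        # nothing is re-rendered into a new master file.
        self.segments.append(segment)

    def active_at(self, t: float) -> list[MediaSegment]:
        """Everything that should be played together at timeline position t."""
        return [s for s in self.segments if s.start <= t < s.end]

# Example: dialogue and background music can be added by different teams at the
# same time, because each team only appends references to the shared timeline.
timeline = ProgrammeTimeline()
timeline.add(MediaSegment("dialogue", "store://ep1/dialogue.wav", 0.0, 1800.0))
timeline.add(MediaSegment("music", "store://ep1/opening-theme.wav", 0.0, 45.0))
print([s.track_id for s in timeline.active_at(10.0)])  # ['dialogue', 'music']
```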
How can this help with access services?
At the moment, audio describers receive files that have been finished except for the access services. Depending on the content creation workflow, TAMS may enable audio describers to access the video and audio tracks earlier in the creation process, giving them more time to create an audio description track.
Subtitle creation requires only the audio tracks and is less susceptible to last-minute edits than audio description. A TAMS workflow will likely have clean dialogue tracks without additional sound effects or background music. Each of these tracks could be run through automated speech recognition (which benefits from having only the dialogue) to produce first attempts at subtitles that can be checked and corrected by a human subtitler.
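As a rough illustration of that step, the sketch below runs a dialogue track through the openly available Whisper speech recogniser and writes a draft subtitle file for a human subtitler to correct. The article does not name a specific recognition tool, and the file names here are invented.

```python
import whisper  # openly available speech recogniser, used here purely as an example

def srt_timestamp(seconds: float) -> str:
    """Format seconds as an SRT timestamp, e.g. 00:01:23,456."""
    ms = int(round(seconds * 1000))
    h, ms = divmod(ms, 3_600_000)
    m, ms = divmod(ms, 60_000)
    s, ms = divmod(ms, 1_000)
    return f"{h:02}:{m:02}:{s:02},{ms:03}"

model = whisper.load_model("base")
# A clean dialogue track from the store, free of music and effects,
# gives the recogniser the best possible input.
result = model.transcribe("dialogue_track.wav")

with open("draft_subtitles.srt", "w", encoding="utf-8") as f:
    for i, seg in enumerate(result["segments"], start=1):
        f.write(f"{i}\n")
        f.write(f"{srt_timestamp(seg['start'])} --> {srt_timestamp(seg['end'])}\n")
        f.write(seg["text"].strip() + "\n\n")
# The draft is only a first attempt; it is then checked and corrected by a human subtitler.
```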
In future, when AI is capable of assisting with the production of audio description, it is likely to start in the same way: creating a first attempt at a script, or identifying characters, objects or locations to help an audio describer write one. This work could be done on media tracks in a TAMS data store, reducing the amount of work audio describers need to do on some or all content.
How else could this help?
The broadcasting industry has been talking about object-based media for a while. Standard broadcast subtitles are an example of object-based media because the subtitle file is delivered alongside the main programme, and viewers can choose whether or not to access this ‘media object’. The idea of object-based media is that content can be broken down further into parts of the audio or video which can be played out or not. TAMS stores a programme as objects, which means it could be possible for a user to turn down the background music if they have trouble hearing the dialogue. Subtitles for language translation could also be read out by a separate, optional audio object. This type of object-based media isn’t supported by today’s broadcasting methods but could be supported in future by streaming software or content delivered over the internet, such as via the Freely app.
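As a rough sketch of the personalisation this could enable, the example below mixes separately delivered audio objects using per-object volume levels chosen by the listener. The object names, function and gain values are invented for illustration and are not part of any current broadcast standard.

```python
import numpy as np

def mix_objects(objects: dict[str, np.ndarray],
                gains: dict[str, float]) -> np.ndarray:
    """Mix separately delivered audio objects with per-object gain.

    `objects` maps object names (e.g. "dialogue", "music", "effects",
    "audio_description") to mono sample arrays; `gains` holds the listener's
    chosen level for each object (1.0 = as authored, 0.0 = off).
    """
    length = max(len(samples) for samples in objects.values())
    mix = np.zeros(length, dtype=np.float32)
    for name, samples in objects.items():
        # Objects missing from `gains` play at their authored level.
        mix[: len(samples)] += gains.get(name, 1.0) * samples
    return np.clip(mix, -1.0, 1.0)

# A listener who struggles to hear speech over music might choose:
listener_gains = {"dialogue": 1.0, "music": 0.3, "effects": 0.8,
                  "audio_description": 1.0}
```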
In Summary
TAMS presents an opportunity to increase access service provision at the time of broadcast by enabling the creation of access services earlier in the content production process. It may also enable user personalisation features that improve the accessibility of future content.
About the author
John Paton works in the Media, Culture and Immersive Technologies team at RNIB, focusing on media technology and regulation. With a Master's degree in computing and almost 20 years working in accessibility, he's always on the lookout for new technologies that could help blind and partially sighted people.