This article is written by Johan Belin and Mary Fagot with contributions from Jakob Nylund, Alaa Mendili, Karl Ringman and Svante Stadler.
The music video format is at least as old as MTV and is in need of a drastic makeover! With her last series of releases, Robyn has experimented with pushing the boundaries of visuals together with music. After seeing the #Killingme interactive piece, DinahMoe contacted Mary Fagot at Blip Boutique to start a discussion of what other innovative experiences might be created. The starting point was how we could let users control both visuals and sound interactively. Mary proposed that Robyn’s “We Dance To The Beat” was a perfect track for the purpose. The title gave the basic idea: We Dance To The Beat Of [insert your name here] - the Beat is your own creation. So we started working out the concept, got a team of talent on board to develop it with us, and got to work.
The Concept - The Interactive Beat Machine
We hit on the idea of the Interactive Beat Machine - you explore a Grid consisting of some 100+ video clips, each with an instrument from the original song. You move around the grid until you find a combination of clips and instruments that you like. By clicking each clip you change both the visual and the music. By clicking several times you can create visual/musical patterns and rhythms. Each clip reacts in its own unique way, and you create your personal ‘Beat’.
Once you’ve created a ‘beat’ you like, you publish your creation to the Stream, an ever expanding version of the Grid that plays back all Beats created by the users. As a final touch your Beat is presented with your name spoken in the same vocoder style as in the original track.
We chose the Grid to give the impression of an endless field of options for the user to explore - an expanding universe revealing itself well beyond the frame of the browser. This sense of limitless options was then focused by the navigation, which moves two cells at a time instead of letting the user jump anywhere at random. The idea that there is always something more, and possibly better, just one step away activates exploration. Then in the Stream we blew this theory up, and allowed users to either passively experience the flow of beats or choose for themselves which Beats they wanted to see & hear.
Creating The Beats
A seemingly infinite concept needs a lot of content; it eventually added up to over 100 video clips, some 500 audio files, and 220 patterns and interaction behaviors used on the site.
Creating / selecting the visuals
The intention with the visuals was to create a vast variety of images that would mirror the infinite number of possibilities for creation by the user. We wanted it to feel both graphic, and random, high and low - as if somehow we pulled a random sampling of videos straight from the internet itself.
Once the pieces were created it was all about cutting - we were working without the sounds at the time and had to find the right match, then find the right edit points. Then we fit that to the interactions, which sometimes sent us back to editorial. It was all a work in progress until the very end to find the right match from visual to sound to interaction.
We already had a pretty good idea about what was fun to do with the video clips, so the first step was to create generic interaction behaviors that we could use as a starting point. We really wanted to keep things simple and intuitive so we restricted interaction to just click and adapted the underlying logic to this.
We made three different visual behaviors:
- Changing playback rate.
- Changing playback direction.
- Changing loop points / patterns.
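The site itself was built in Flash/ActionScript, but the logic behind those three behaviors is simple enough to sketch. In this hypothetical Python model (all class and method names are our own invention, not the site's code), a click mutates the clip's state, and the frame to display is derived from elapsed time plus that state:

```python
# Illustrative sketch only; the real site implemented this in ActionScript.
class ClipBehavior:
    def __init__(self, total_frames, loops):
        self.total_frames = total_frames
        self.loops = loops            # list of (in_frame, out_frame) loop points
        self.rate = 1.0               # playback rate multiplier
        self.direction = 1            # 1 = forward, -1 = reverse
        self.loop_index = 0

    def click_rate(self):
        """Behavior 1: cycle the playback rate on each click."""
        self.rate = {1.0: 2.0, 2.0: 0.5, 0.5: 1.0}[self.rate]

    def click_direction(self):
        """Behavior 2: flip the playback direction on each click."""
        self.direction *= -1

    def click_loop(self):
        """Behavior 3: advance to the next set of loop points."""
        self.loop_index = (self.loop_index + 1) % len(self.loops)

    def frame_at(self, t, fps=25):
        """Frame to display t seconds after the current loop started."""
        lo, hi = self.loops[self.loop_index]
        span = hi - lo + 1
        pos = int(t * fps * self.rate) % span
        return lo + pos if self.direction == 1 else hi - pos
```

The same pattern makes it easy to give each of the 100+ clips its own behavior while keeping the interaction itself to a single click.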
The main sounds were created from a total of 14 discrete instruments from the original song, e.g. bass, snare and lead. In addition to that we produced new musical elements for when the user interacts with the clips. The final sounds and musical patterns are pretty much tailored for each video clip.
Putting it all together
We used a test application to experiment with combining visuals, instruments and interaction behaviors to see what worked and what didn’t. The first round through all the clips took some three days. After that there were at least ten more rounds, each incorporating what we learned during the previous one.
Creating the Site
We wanted a simple design, as the videos are the main point of interest and what the user should focus on. The outcome is a flirt with the 80’s, DOS and VCRs. Three text layers, one red, one green and one blue, were stacked on top of each other with a blend mode to mimic an old TV or computer screen gone bad. For roll-overs we just shifted the text layers slightly against each other to create a distortion effect.
The Development: Stream, Create, Intermissions
The goal for the stream was a user-generated, never-ending remix of Robyn's track. Basically we had to keep a continuous flow while navigating through the user beats, which meant our challenge was to load a large (and constantly growing) amount of data and play it seamlessly without any preloading interruption. Since both the audio and the video were 8-second loops, the solution was to load the data of the next user beat while playing the current one. Once the next user beat was loaded, we would jump to it at the end of the current loop. To keep memory usage low, we would also unload the data of a user beat once it finished playing.
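The double-buffering scheme can be sketched like this (the real thing was ActionScript; the class and names below are invented, and timing is simplified to whole seconds):

```python
# Illustrative model of the Stream's gapless playback, not the site's code.
LOOP_SECONDS = 8   # every beat is an 8-second audio/video loop

class Stream:
    """Plays user beats back to back with no preloading gaps."""

    def __init__(self, fetch_next_beat):
        self.fetch = fetch_next_beat   # returns the next beat's data
        self.current = self.fetch()    # beat playing right now
        self.next = None               # beat being preloaded in the background

    def update(self, t):
        """Called regularly with elapsed playback time t (whole seconds here)."""
        if self.next is None:
            self.next = self.fetch()          # preload while the current one plays
        if t > 0 and t % LOOP_SECONDS == 0:   # loop boundary: seamless jump
            self.current = self.next
            self.next = None                  # previous beat is unloaded
```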
We wanted the Create section to be as easy as possible for the user to participate, so we came up with a simple three step solution: play, publish, share.
For the intermission we wanted Robyn to take over, to be in the center and to mix it up a bit with the users' beats. The first tests we did were a bit static when the video was duplicated across the scene, so we added a delay effect, but we still needed more motion to make it dynamic. So why not record a Skype video chat with Robyn eating a banana? Yes, that's exactly what we did!
With the beat constantly running in our heads we became a bit obsessed with having everything in sync with the beat so we tried to add it to everything we could think of. The blinking cursor when you type your name, the credits page scroll rhythm, the zoom effect on Robyn's intermission, and so on.
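The trick behind all of those synced details boils down to quantizing events to the song's beat grid. A minimal sketch (the tempo below is a placeholder of our own, not necessarily the track's actual BPM):

```python
import math

BPM = 126            # placeholder tempo for illustration
BEAT = 60.0 / BPM    # seconds per beat

def next_beat_time(now):
    """Time (seconds) of the next beat at or after `now`; schedule the
    cursor blink, credits scroll step or zoom pulse for this moment."""
    return math.ceil(now / BEAT) * BEAT

def on_beat(now, tolerance=0.02):
    """True if `now` is close enough to a beat to trigger an effect."""
    nearest = round(now / BEAT) * BEAT
    return abs(now - nearest) <= tolerance
```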
Of course, as with all sites, it takes a lot of testing to make sure there are no bugs or other quirks that would destroy the experience. Sometimes they are easy to find and fix, other times the search can go on forever. The longest bug searching/fixing session went on for approximately five hours with a constant Skype video chat open the whole time. That must be close to some kind of record! Perhaps it deserves a trophy, a medal or at least a foursquare badge.
Signing your Beat, vocoder style
To tie in with the song and personalize each user’s beat, we have a vocoder voice saying “we dance to the beat of” + the user’s name. We used a text-to-speech application (Flite) to create a sound file from the name you entered, then a vocoder (MDA) with that file as modulator and a pre-recorded pad sound as carrier, plus some additional audio effects (filtered delay, soft distortion, limiting) to make it blend with the other instruments. The final sound file is played back in the client.
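For a feel of what the vocoder step does, here is a toy channel vocoder in Python/NumPy. It is our own illustration, not DinahMoe's server code, and nowhere near the MDA plugin in quality: the modulator's (the spoken name's) per-band energy envelope is imposed on the carrier (the pad sound), frame by frame.

```python
import numpy as np

def channel_vocoder(modulator, carrier, n_bands=16, frame=512):
    """Toy channel vocoder: shape the carrier with the modulator's
    per-band energy, one FFT frame at a time."""
    n = min(len(modulator), len(carrier))
    out = np.zeros(n)
    # Split the rfft bins of a frame into n_bands contiguous bands.
    edges = np.linspace(0, frame // 2 + 1, n_bands + 1, dtype=int)
    for start in range(0, n - frame + 1, frame):
        m = np.fft.rfft(modulator[start:start + frame])
        c = np.fft.rfft(carrier[start:start + frame])
        shaped = np.zeros_like(c)
        for b in range(n_bands):
            lo, hi = edges[b], edges[b + 1]
            env = np.sqrt(np.mean(np.abs(m[lo:hi]) ** 2))         # modulator band level
            ref = np.sqrt(np.mean(np.abs(c[lo:hi]) ** 2)) + 1e-9  # carrier band level
            shaped[lo:hi] = c[lo:hi] * (env / ref)                # impose the envelope
        out[start:start + frame] = np.fft.irfft(shaped, frame)
    return out
```

A noisy or harmonically rich carrier (like a pad) works best, since the modulator can only shape energy the carrier already has in each band.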
Mapping the clips to the grid
Since we needed to make sure that you never got the same instrument twice in any four-cell combination, we could not just use random distribution to place the content on the grid. We got some high-end mathematical help with the combinatorics, and a program was written that mapped the instruments to the grid, which we then used to map the clips.
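The actual mapping program and its combinatorics are theirs; as a small illustration of the constraint itself, here is a naive backtracking placer in Python (our own sketch) that guarantees no instrument appears twice in any 2x2 window:

```python
# Illustrative solver for the grid constraint, not the original mapping program.
def place_instruments(rows, cols, instruments):
    """Fill a rows x cols grid so no 2x2 window repeats an instrument."""
    grid = [[None] * cols for _ in range(rows)]

    def ok(r, c, inst):
        # Two cells share a 2x2 window exactly when they are adjacent,
        # diagonals included, so it suffices to check all eight neighbors.
        for dr in (-1, 0, 1):
            for dc in (-1, 0, 1):
                if dr == 0 and dc == 0:
                    continue
                rr, cc = r + dr, c + dc
                if 0 <= rr < rows and 0 <= cc < cols and grid[rr][cc] == inst:
                    return False
        return True

    def fill(i):
        if i == rows * cols:
            return True
        r, c = divmod(i, cols)
        for inst in instruments:
            if ok(r, c, inst):
                grid[r][c] = inst
                if fill(i + 1):
                    return True
                grid[r][c] = None   # backtrack
        return False

    return grid if fill(0) else None
```

With 14 instruments and at most four already-placed neighbors per cell, a row-major fill never actually needs to backtrack, but the checker makes the constraint explicit.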
Adaptive music, i.e. music that responds to user interaction in real time, requires lots of control logic behind the scenes. Though our framework for adaptive music was the basis, this project was different in two respects.
We needed the music to be as responsive to the user interaction as possible, to give the user a sense of control and allow totally unique user creations. To do this, we needed to split the music into its smallest parts, a single bass drum, a single hi-hat sound and so on.
The track also demanded perfect timing, a commodity in other media but not so much in Flash. Since we split the music tracks into their smallest parts, timing was even more crucial. All sounds are played back using Flash's dynamic sound capabilities. To make it run smoothly on lower-end machines we used Pixel Bender for mixing.
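Whatever does the heavy lifting (here, Pixel Bender), the core mixing operation is summing every scheduled one-shot into the output buffer at a sample-exact offset. A plain-Python sketch of that idea, not their implementation:

```python
# Illustrative sample-accurate mixer; the site did this in Pixel Bender.
SAMPLE_RATE = 44100

def mix(events, length):
    """events: list of (start_sample, samples). Returns one mixed buffer
    with each one-shot summed in at its exact sample offset."""
    out = [0.0] * length
    for start, samples in events:
        for i, s in enumerate(samples):
            j = start + i
            if 0 <= j < length:
                out[j] += s
    return out
```

Because offsets are in samples rather than milliseconds, two hits scheduled one beat apart land exactly one beat apart, regardless of frame rate.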
The concept is based on handling video clips in a musical way, i.e. creating visual rhythms and patterns. Since creating rhythms and patterns for adaptive music is an essential part of our framework, extending it to handle video was pretty straightforward.
We use embedded FLVs on the timeline and control playback with code. This makes it possible to set exact in and out points and to change playback rate and direction. It also allows mixed frame rates between clips, a huge advantage when we are talking about some 100+ clips. To make playback as smooth as possible and to minimize CPU load, the keyframe density was kept rather high: one every 5 frames.
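The keyframe spacing matters because a decoder seeking to an arbitrary frame must start at the nearest preceding keyframe and decode forward from there. A quick illustration of the worst-case cost with a keyframe every 5 frames (illustrative arithmetic of our own, not the Flash API):

```python
KEYFRAME_INTERVAL = 5   # a keyframe every 5 frames, as in the clips

def decode_cost(target_frame):
    """Frames a decoder must process to display target_frame after a seek:
    the preceding keyframe itself plus the delta frames up to the target."""
    keyframe = (target_frame // KEYFRAME_INTERVAL) * KEYFRAME_INTERVAL
    return target_frame - keyframe + 1
```

So any seek costs at most 5 decoded frames, which is what keeps scrubbing, reversing and loop jumps cheap.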
The video and sound had to be in sync at all times. Dynamically generated sound has a delay (latency) of between 200 and 450 ms from when you start a sound to when the user actually hears it. During normal playback we simply compensate by delaying the clock controlling video playback. Trickier is handling how real-time interaction and replayed (stored) interaction data should behave: the user interaction must be responsive, and the final result when you listen to your Beat must be predictable. We ended up with quite elaborate logic for handling recording, overwriting and playback of user interaction.
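The normal-playback compensation can be sketched as a shifted clock: the video displays the position the listener is actually hearing, i.e. the audio write position minus the output latency. A minimal Python stand-in for the Flash logic, using the 200-450 ms figure above only as an example range:

```python
def video_time(audio_samples_written, sample_rate, latency_s):
    """Position (seconds) the video should display right now.

    latency_s is the measured output latency, e.g. 0.2-0.45 s in Flash.
    The video clock trails the audio write clock by exactly that amount,
    so picture and sound line up at the user's ears."""
    audio_clock = audio_samples_written / sample_rate
    return max(0.0, audio_clock - latency_s)
```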
To get the word out and generate some buzz, Robyn spread the word on her social networks three days before the launch. Without giving any hints, she revealed that it would be an interactive experience, along with the song title and #wdttb. Bloggers quickly picked up on it, and a lot of attention had gathered by launch day.
The project was an amazing collaboration spanning the globe from Stockholm to London, to Montreal to Los Angeles. It wouldn’t have been possible without hard working and creative people (and Skype!).
- as of now, you would have to watch the stream for more than two full days and nights to hear all of the user-generated content, and it still keeps growing.
- before you even start messing with them, there are 100² possible combinations of visuals and sounds. once you start clicking it is infinite.
- the project contributors were awake 24 hours a day collectively across 4 time zones
- 87 kilos of coffee were consumed
- no animals harmed in the making of this project
- one large banana was harmed in the making of this project
- currently there are enough beats to drive from New York to Los Angeles without repeating a single user's beat, and then some!
- longest Skype call: 5hrs 23min 3sec.
- the pretty girl eating a strawberry is Anna Maria Larsson
- the dancing monkey is James Frost.
- not everyone working on this project is Swedish or went to Hyper Island, but almost.
- the site was launched on December 13th, which coincidentally was also Taylor Swift's 21st birthday. Cheers!
Creative Direction: Mary Fagot for Blip Boutique
Creative Direction: Johan Belin for DinahMoe
Art Direction & Design: Jakob Nylund
Creative Technology & Flash: Alaa Mendili
Creative Technology & Flash: Karl Ringman
Video Direction: Mary Fagot & James Frost For Blip Boutique, Jakob Nylund, Felix Hill
Beat Interaction Design & Development: Johan Belin for DinahMoe
Interactive Music Production & Sound: Erik Brattlöf for DinahMoe
Back-End Development: Hjalmar Hägglund
Project Management: Thomas Björk
Video Post Production: Pär Ståhlberg
Server-side Audio Processing: Svante Stadler
Server/hosting Manager: Nico Nuzzaci
Dancer & Dance Co-Ordinator: Maria "Decida" Wahlberg
Dancers: Dennis Sehlin, Maele Sabuni, Fredrik Quinones, Bianca Traum, Victor Mengarelli, Anton Palm, Nicole Singstedt
Additional Thanks: Måns Eriksson, Ivan Bridgewater, Internet Archive, Nasa, Sarasota Jungle Gardens, John Cormier, Brian Rea
About the authors
I work as a creative at DinahMoe, a digital production company 100% dedicated to music and sound for interactive media. I have a background as a music and sound producer, mainly for TV commercials. After that, I started, and was the driving force in, several internet-centered ventures. This gave me invaluable experience in application and system development, web technologies, complex project management and more. Since the early Flash days I have felt that music and sound on the web were badly mistreated. Now I had the experience needed to do it right, which is why I started DinahMoe in 2008.
As co-Creative Director of Blip Boutique, I make cool stuff that is not boring. My background is in film/video, print, and digital & interactive creative direction, which at various times has included creating work for everyone from Robyn to the Gap, YouTube, LionsGate, Capitol and Virgin Records. My partner James Frost and I came together to build Blip Boutique in 2006. Now stop reading and go make a beat! #wdttb