These days, modern NLEs (Non-Linear Editors) have many tools for creating effective multi-track composites. But a few years ago this wasn't the case, and today many large enterprise production companies (like TV stations) may not have systems installed with all the bells and whistles. The edit system previously in use at my day job was very efficient at one thing… cutting news. But it had shortcomings where graphics were concerned: there were only two tracks of video, and you could not create masks.
Blender solved this problem for me. In fact, it solved the problem in a myriad of ways; it's up to the artist to choose the workflow that best suits them.
I prefer rapid prototyping and responsive feedback, so I chose the VSE (Video Sequence Editor). The VSE provides real-time feedback and can reproduce many of the effects you would expect to find in the compositor.
A typical TV News graphic package will consist of the following elements:
- Script with key data
- Voice over for timing
The key for me is the timing derived from the recorded voice over, and you need to remain flexible if this timing changes. Sometimes a script will be sent to our lawyers, who ensure that its claims are defensible in court; other times the facts change when more data comes to light (when the location of a missing plane becomes clearer, for example). For this reason it is important to have handles on your media. Handles don't affect still images, but they are very important for animation or video.
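The arithmetic behind those handles is simple enough to sketch. Here is a hypothetical helper (the function name, the 12-frame handle length, and the 25 fps PAL rate are my own example values, not from this workflow) that turns a voice-over cue into a strip range with trim room on both sides:

```python
# Hypothetical helper: given voice-over cue points (in seconds) and a handle
# length, work out the frame range a strip should cover so a later timing
# change can be absorbed by trimming rather than re-rendering.

FPS = 25  # PAL broadcast rate; substitute 29.97/30 for NTSC

def strip_range(cue_in_s, cue_out_s, handle_frames=12, fps=FPS):
    """Return (frame_start, frame_end) including handles on both sides."""
    start = int(round(cue_in_s * fps)) - handle_frames
    end = int(round(cue_out_s * fps)) + handle_frames
    return start, end

# A graphic cued from 4.0 s to 9.6 s of the voice over, with 12-frame handles:
print(strip_range(4.0, 9.6))  # (88, 252)
```

If the voice over is re-cut, only the cue times change; the handles mean the strip can usually be slipped or trimmed without touching the media itself.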
Often I will generate content outside Blender to save on processing time, or I will generate backgrounds as still images, since creating a still image is faster than regenerating a 3D construction or compositing an effect for every frame.
But when I want to animate quickly against a voice track, I choose the VSE.
I leverage masks in Blender to create as much interactive blending of the elements as possible. I should note here that keyframe control of masks in Blender is sub-optimal (rubbish).
It's really easy to use masks: simply open a UV/Image Editor window and set it to show the Render Result, then press F12 to send it a temporary frame to draw your mask on. Then apply this mask to the strip as a modifier (remember to set the strip's blend type to Alpha Over or Over Drop). As I build up the imagery I add a mask for each layer.
But I don't apply the masks directly to the video strip. First I need a way of moving the media around on the screen, so I add a Transform effect to the original source clip, then apply the mask to that Transform strip.
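That transform-then-mask layering can also be scripted. Below is a minimal sketch using Blender's Python API (`bpy`); the strip name, mask name, and channel offsets are assumptions for the example, and the script only does real work when run inside Blender:

```python
# Sketch of the layering described above: a Transform effect strip referencing
# the source clip, with the hand-drawn mask applied to the Transform strip
# (not the source) as a modifier. Names and channels are illustrative.
try:
    import bpy  # only available inside Blender
except ImportError:
    bpy = None  # allows the module to be imported (but not run) elsewhere

def add_masked_transform(scene, source_name="news_clip", mask_name="Mask"):
    seqs = scene.sequence_editor.sequences
    src = seqs[source_name]

    # Transform effect on the channel above, taking the clip as its input.
    xform = seqs.new_effect(
        name=source_name + "_xform",
        type='TRANSFORM',
        channel=src.channel + 1,
        frame_start=int(src.frame_final_start),
        frame_end=int(src.frame_final_end),
        seq1=src,
    )
    # Composite over the layers below, as described above.
    xform.blend_type = 'ALPHA_OVER'

    # Apply the mask to the Transform strip, not the source clip.
    mod = xform.modifiers.new(name="layer_mask", type='MASK')
    mod.input_mask_type = 'ID'
    mod.input_mask_id = bpy.data.masks[mask_name]
    return xform

if bpy is not None and bpy.context.scene.sequence_editor:
    add_masked_transform(bpy.context.scene)
```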
At times we need to alter an image but reuse the source multiple times. This occurs when we conform a vertical video (shot on a phone in 9:16 ratio) for widescreen TV presentation. In this example I have imported an image that is narrow, but Blender auto-conforms it to full width, so I apply a Transform effect and restore the aspect. Then I move that up a layer (checking that its blend type is Alpha Over) and apply a Blur effect to the source strip below. This appears as the same image in the background, but blurry (and with a color curve to reduce the brightness).
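The Transform scale that undoes that stretch-to-full-width conform can be computed rather than eyeballed. A small sketch of the arithmetic, assuming a 1080×1920 phone clip in a 1920×1080 HD frame (the sizes are example values):

```python
# When a 9:16 phone clip is auto-conformed to a 16:9 frame it gets stretched
# horizontally. A Transform effect with this X scale restores the original
# aspect. Frame sizes below are illustrative.

def conform_scale_x(src_w, src_h, out_w, out_h):
    """X scale that undoes a stretch-to-full-width conform (height unchanged)."""
    stretched_aspect = out_w / out_h   # aspect the conform forced on the clip
    true_aspect = src_w / src_h        # aspect the clip should have
    return true_aspect / stretched_aspect

# 1080x1920 phone video inside a 1920x1080 HD frame:
print(conform_scale_x(1080, 1920, 1920, 1080))  # 0.31640625
```

Entering that value as the Transform effect's X scale (with Y scale left at 1.0) gives the correctly pillarboxed foreground over the blurred full-width copy.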
Instead of rebuilding these layers every time I want to produce this effect, I can apply them to a metastrip, then replace the source image as required. This makes effects work in Blender fast and intuitive.
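Swapping the source inside a pre-built metastrip can itself be scripted. This is a hedged sketch, assuming the blur/transform stack lives in a metastrip named "vertical_conform" and that the source is an image strip; the names and paths are hypothetical, and it only acts when run inside Blender:

```python
# Sketch of reusing a pre-built effect stack: the effect layers live inside a
# metastrip, and only the image strip inside it is repointed at a new file.
try:
    import bpy  # only available inside Blender
except ImportError:
    bpy = None

def swap_meta_source(scene, meta_name="vertical_conform",
                     new_dir="//stills/", new_file="next_story.png"):
    meta = scene.sequence_editor.sequences[meta_name]
    for strip in meta.sequences:  # strips nested inside the metastrip
        if strip.type == 'IMAGE':
            strip.directory = new_dir              # image strips store a directory...
            strip.elements[0].filename = new_file  # ...plus per-frame filenames
    return meta

if bpy is not None and bpy.context.scene.sequence_editor:
    swap_meta_source(bpy.context.scene)
```

The metastrip keeps its Transform, Blur, and mask setup; only the picture changes, which is what makes the template reusable from story to story.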