Could you briefly explain who you are, where you are and what you do?
Hello, I am Simon, but I go by the name “quollism”. I am still in Perth, Western Australia, after being born here nearly 40 years ago. By day I’m a software developer, but as a hobby I’m making an animated film and blogging my weekly progress at quollism.com.
What do you use Blender for?
Film makey stuff – cutting together storyboards, modelling, rigging, animating, editorial, that sort of thing. The odd bit of image manipulation since work is too cheap to give us Photoshop. Also justifying an annual $6000 trip to Blender Conference.
How did you discover Blender?
I first picked it up back in 2003 and I honestly don’t remember back that far.
Why do you choose Blender for editing your film over a commercial application like Premiere Pro?
Partly cost (i.e. none), partly familiarity, partly the cosy appeal of creating the film inside one stable cross-platform piece of software with really solid backwards compatibility.
Why make a short film with open software, wouldn’t it be easier with a commercial application?
CG animation is a big effort no matter what software you use. Obviously you don’t want to go to the effort of hand-entering individual vectors like they had to with Tron, but on the visual side I feel like Blender has everything I need – including render and A/V editing, which (as far as I know) vanilla Maya doesn’t have.
How much media is generated inside Blender and how much is external media?
Right now the plan is to use Blender for everything visual – textures will all be procedural, etc. Blender’s video sequence editor is limited in its audio capabilities right now, so for heavyweight sound work (music, recording dialogue) I use a DAW called Reaper. It’s not open source but it is super inexpensive considering how powerful it is.
Tell me about the music. Do you make it all yourself? Is it all ukulele or do you use other instruments?
I do indeed do all the music and sound myself. It’s sequenced in Reaper and played through software synthesisers – some generate sound using the classic subtractive oscillator and filter method, others use physical modelling to simulate drums, strings and metal. For me that’s analogous to 3D rendering processes – cheap/artificial-looking scanline rendering versus expensive/natural-looking path tracing. The naturalistic-synthetic thing is part of the visual art direction too.
I only recently picked up the ukulele again after several years of clean ukulele-free living. If I miss the Blender Conference 2017 deadline it’s because I spent too much time working out cool 1920s jazz turnarounds and not enough time setting keyframes.
Do you mix sound in Blender or do you construct a soundtrack in a Digital Audio Workstation (DAW) first?
I bounce between both. For the story reels I export dialogue and sound effects out of Reaper and sequence them in Blender alongside stills or video footage.
I also give myself click tracks and musical fragments to play with, and their tempos are synced to a particular number of frames. For a slow-ish scene I might have a 12 or 14 frame pulse; for a frantic scene I might set the pulse to 8 or even 7 frames. (Fewer frames means less time between beats, therefore a quicker beat.)
Deciding the pulse of a given scene ahead of time lets me make cuts and time events to coincide with beats in the music. And when it comes time to finally write the score, I know what tempo I should be working at because I already have a click track.
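The frame-pulse-to-tempo relationship he describes is simple arithmetic, and can be sketched as below. The 24 fps frame rate is an assumption for illustration (a common animation frame rate, not stated in the interview) — substitute the project's actual rate.

```python
def pulse_to_bpm(pulse_frames, fps=24):
    """Tempo in beats per minute for a beat that repeats every
    `pulse_frames` frames. fps=24 is an assumed frame rate."""
    return fps * 60 / pulse_frames

def bpm_to_pulse(bpm, fps=24):
    """Inverse: frames between beats at a given tempo (may be fractional)."""
    return fps * 60 / bpm
```

At an assumed 24 fps, his 12-frame "slow" pulse works out to 120 BPM and the frantic 8-frame pulse to 180 BPM — which is why knowing the pulse ahead of time fixes the tempo of the eventual score.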
In what order do you build the movie? Storyboard with scratch track in the VSE, then… what?
Once I’ve got a 2D animatic (rough dialogue/sfx/music with still images), the audio of the story reel forms a reference soundtrack and the video part forms a visual reference. When I’m doing any kind of animation, I have the audio of the reel sitting in the VSE and the video set as the scene background image. This way I can scrub back and forth to do my lipsync and also get the 3D framing and blocking close to the 2D drawings.
Over time the drawings get replaced with blocked animation, final animation, final render and final composite and finally I have a movie. In theory. 🙂
What are the key benefits of free open software?
Everyone gets the benefits, not just people who can afford to upgrade to the latest version! Anyone can obtain and use the software for free, as well as examine the source code to see how it does what it does, fix bugs if they’re able (or determined enough to get able), add new features they want, that kind of thing. As long as people use it, as long as users and developers are contributing to its upkeep and codebase, and as long as it’s managed well, the software generally just gets more reliable and useful. Blender is one of the best examples of how to do an open source project the right way.
I gather that you built the short film in Blender’s VSE, do you think that the VSE is functional enough for this?
Definitely. There are niceties you’d find in a more fully featured NLE that it’s missing, but it’s ready for animation. If I were doing a live action film and trying to keep track of five different takes per shot and synced sound recordings, I would probably start to want metadata capabilities, footage bins, richer audio capabilities and so on.
With animation I drop the latest version of any given shot into the edit at the right point (whether it’s a boomsmashed scene or a final render EXR sequence) and that’s that. Before final sound, I’m doing sound entirely within the VSE – animation shots are exported silent and lined up with the reference audio to make movie magic.
How do you think Blender’s VSE could be improved?
A volume meter would be nice but I don’t know if the current user interface can do one. Anything else that I’m really missing, I could probably script up myself.
Do you sell any cool Quollism T-shirts?
Not yet. If I started selling merch without having anything at the core of that merch it would feel like jumping the gun. But if people want my stupid internet name on t-shirts or mugs or undies I guess I’d better see to it some day.