Friday, March 31, 2017

Combining Virtual and Acoustic Instruments

Introduction

Real musicians are human beings, recent studies prove.

Human beings feel things and have emotions. Music that makes us feel things is still made, believe it or not, by human beings.

Which brings us back to the start: real musicians may be the key to creating the feelings we get when listening to music.

This sort of elliptical, virtuosic theorem lies at the root of a widespread problem in music production: many people write and arrange music that calls for the sound of real musicians but cannot afford to hire real musicians.

If we follow my initial statement, we might conclude that few people can afford to have feelings and emotions in their music. But no! For better or worse, technology is here to help (note: slight sarcasm).

As a pianist and keyboard player myself, I was just a kid when MIDI started to bloom and all sorts of sounds became available at my fingertips. All of a sudden I could have a full orchestra or the weirdest percussion to play with, under the same black and white keys I had used for my acoustic piano. Years went by, and from 4MB samples we reached libraries that require an entire hard drive.

Sampling each and every articulation, modern sample libraries are able to meticulously recreate all of the timbral aspects of an instrument. Some of them are expensive and taxing on our systems but they are, nonetheless, here and they sound amazing.

Like any good sci-fi TV series, right when it seems that humanity has been overwhelmed by the machines and all seems lost, an epic soundtrack kicks in (ironically made with sample libraries) and the tide of battle turns. Deep inside, a voice keeps reminding us that there is something about real human beings that just cannot be captured in a sample.

So today, even though anything and everything has been sampled already, professional music producers and studios often take great care in blending “virtual” and “real” performances together. So what happens when you need the sound of a real orchestra?

One of my first jobs as a musician in the ‘90s was to assist a conductor and his orchestra during recording and mixing gigs for some of the most famous pop artists in Italy. Most of the top studios and arrangers/songwriters at the time were using sample libraries to lay down the basic orchestral sound and then relying on real musicians to overdub real instruments.

The idea was to get the weight and mass of the orchestral body from the virtual sound library, and the detail, realism and air from real musicians.

In this article we are going to focus on overdubbing violins, since I’ve recently worked on precisely this. However, the techniques and tricks I’ll explain here can be used to your advantage for any instrument anytime you need “virtual and real” versions to coexist and blend.

The Starting Point

Some time ago, composer and friend Noe brought to my studio a short soundtrack piece featuring orchestral strings and a piano track, with the goal of overdubbing real violin on it to achieve a more emotional and realistic sound. The foundation of the track was already really good. Let’s hear it as it was brought to me, simply by summing the raw stems as they were imported into Pro Tools.

Backing Track (Rough)

Setting Up The Live Room

Noe himself is going to play violin over this track, so we have the clear interpretive advantage of the composer and the performer being the same person.

For tracking, I set up two pairs of microphones in the bigger live room. This room is almost 50 square meters, rectangular in shape and built from the ground up (proportions included) to be used for this purpose. As you will hear from the raw examples, great care was put into maintaining some natural reverb tail and avoiding a dead sound. For orchestral instruments, though, this is still a very controlled environment compared to classical orchestral chambers and concert halls.

The first thing I did was to put four chairs in the room and label them “Front Left”, “Front Right”, “Rear Left” and “Rear Right”. The idea is to capture the performer playing the same exact part four times, with the purpose of blending different points of view to achieve that ensemble sound.
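If you like to think in code, here is a toy sketch of why stacking the same player works, using made-up sine-wave “takes” rather than real recordings: each pass drifts slightly in tuning and timing, and those small differences are exactly what the four chairs are there to capture.

```python
# Toy illustration (made-up sine-wave "takes"): four passes of the same part,
# each with its own small pitch and timing drift, sum into something that
# starts to behave like a section rather than a single player.
import numpy as np

fs = 48000
t = np.arange(fs) / fs  # one second of audio

def one_pass(f0=440.0):
    """One hypothetical pass: same note, slightly different tuning and onset."""
    detune = 1 + np.random.uniform(-0.003, 0.003)   # a few cents of drift
    onset = np.random.uniform(0.0, 0.02)            # up to 20 ms early/late
    return np.sin(2 * np.pi * f0 * detune * (t - onset)) * (t >= onset)

ensemble = sum(one_pass() for _ in range(4)) / 4.0  # the four "chairs"
```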

For microphones, I decided to use one “close” pair (Neumann KM-184s), positioned close to the chairs, and a second “far” pair (Lauten Atlantis), very high and very far away (almost 6 meters, give or take). I like to track strings by “looking at them” from high above. My thinking is that the sound resonating from the instrument on the performer’s shoulder has a tendency to shine and bloom a bit vertically, like hot air.

The 184s were sent to a Mindprint DTC and the Atlantis were sent to a pair of Neve 1073s. No compression was used but the 184s were EQ’d by the DTC and the Atlantis by a Roger Schult w2377 EQ.

On both EQs, the idea is to filter out some of the extreme lows and to open up the air at the extreme top. Since the Atlantis pair was far away, I was able to push 4-5dB all the way up at 23k; on the 184s I went more carefully, since their closer position could make the high-frequency content screech a bit. Nevertheless, the tube character of the Mindprint complemented the detail and accuracy of the 184s.

The Mindprint DTC (for the Close Pair) and the Roger Schult w2377 (for the Far Pair) during tracking
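For the curious, here is a rough Python sketch of that EQ intent, not the actual hardware curves: a gentle high-pass to clear the extreme lows and an RBJ-style high shelf for the air. The numbers (40 Hz high-pass, 23 kHz shelf, +4 dB on the far pair, +2 dB on the close pair, 96 kHz session rate) are illustrative assumptions.

```python
# Sketch of the tracking EQ intent: gentle high-pass on the extreme lows
# plus a high shelf for "air". All values are illustrative assumptions,
# not the settings dialed in on the DTC or the w2377.
import numpy as np
from scipy.signal import butter, lfilter

FS = 96000  # assumed session sample rate; a 23 kHz shelf needs > 46 kHz

def high_shelf(fs, f0, gain_db, slope=1.0):
    """RBJ cookbook high-shelf biquad coefficients (b, a)."""
    A = 10 ** (gain_db / 40)
    w0 = 2 * np.pi * f0 / fs
    alpha = np.sin(w0) / 2 * np.sqrt((A + 1 / A) * (1 / slope - 1) + 2)
    cosw = np.cos(w0)
    b = np.array([
        A * ((A + 1) + (A - 1) * cosw + 2 * np.sqrt(A) * alpha),
        -2 * A * ((A - 1) + (A + 1) * cosw),
        A * ((A + 1) + (A - 1) * cosw - 2 * np.sqrt(A) * alpha),
    ])
    a = np.array([
        (A + 1) - (A - 1) * cosw + 2 * np.sqrt(A) * alpha,
        2 * ((A - 1) - (A + 1) * cosw),
        (A + 1) - (A - 1) * cosw - 2 * np.sqrt(A) * alpha,
    ])
    return b / a[0], a / a[0]

def tracking_eq(x, fs=FS, air_db=4.0):
    """Filter the extreme lows, then open up the top end."""
    b_hp, a_hp = butter(2, 40, btype="highpass", fs=fs)   # roll off rumble
    b_sh, a_sh = high_shelf(fs, 23000, air_db)            # "air" shelf
    return lfilter(b_sh, a_sh, lfilter(b_hp, a_hp, x))

# e.g. far pair gets the full +4 dB, close pair a more cautious +2 dB:
# far_eq = tracking_eq(far_pair_audio, air_db=4.0)
# close_eq = tracking_eq(close_pair_audio, air_db=2.0)
```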

We ended up with 4 good takes for each chair, each captured by both stereo pairs, meaning a total of 64 tracks (32 stereo pairs) if you are keeping score.

Editing Time

  • No matter how much you hate it, the second most important factor in making these tracks blend is comping them.
  • First, I grouped the two recorded pairs by chair name, so that any edit I did on the group was transferred to each and every microphone used in that particular take.
  • Second, I listened to one chair at a time, looking for mistakes and problems. I comped the takes to create one good overall performance for each chair.
  • Third, I compared each one against the virtual orchestra part that was already there, taking note of disparities in dynamics, attack and release times, portamento etc.
  • Fourth, I added fade-ins and fade-outs to all the takes (in group, again) while listening to the original backing track, to make the performances blend with the original (a rough sketch of this grouped-edit idea follows this list).
  • Fifth, I listened again to all of the chairs in solo, close pair first and then far, to make sure the real strings worked by themselves.
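To make the “edit in group” idea concrete, here is a minimal sketch with hypothetical track names and fade times (this has nothing to do with Pro Tools’ own clip groups, it just mirrors the principle): the same fade gets propagated to every microphone recorded for a given chair.

```python
# Sketch of the grouped-edit idea: any fade applied to a chair is applied
# identically to every microphone track recorded for that chair.
# Track names, lengths and fade times below are hypothetical.
import numpy as np

def fade(track, fs, fade_in_s=0.05, fade_out_s=0.25):
    """Apply linear fade-in/out envelopes to one mono track."""
    out = track.copy()
    n_in = int(fade_in_s * fs)
    n_out = int(fade_out_s * fs)
    out[:n_in] *= np.linspace(0.0, 1.0, n_in)
    out[-n_out:] *= np.linspace(1.0, 0.0, n_out)
    return out

def fade_chair_group(chair_tracks, fs, **fade_args):
    """Propagate the same edit to every mic of the chair group."""
    return {mic: fade(audio, fs, **fade_args) for mic, audio in chair_tracks.items()}

# Usage with made-up data: 4 mics (close L/R, far L/R) for the "Front Right" chair
fs = 96000
front_right = {m: np.random.randn(fs * 10) * 0.1
               for m in ("close_L", "close_R", "far_L", "far_R")}
front_right_faded = fade_chair_group(front_right, fs, fade_in_s=0.1, fade_out_s=0.5)
```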

One very important consideration before we get to the audio clips: a good amount of taste is always involved, and it’s only developed by listening to the work of others and doing this a million times. While a solid method is key, approaching the editing in a purely mechanical way will strip all the human factor from the recordings, completely defeating the original purpose.

Let’s listen to how the violin sounded, completely raw, in the two mic pairs. I picked the Front Right and Rear Left chairs to give you an idea of two opposite sides. Try to spot the little imperfections that I treasured in those takes.

Let's compare two opposite chairs between the two mic pairs, first:

Real Strings, Close - Front Right Chair (Rough)
Real Strings, Close - Rear Left Chair (Rough)
Real Strings, Far - Front Right Chair (Rough)
Real Strings, Far - Rear Left Chair (Rough)

Now let's hear how all the chairs sounded in the two different pairs.

Real Strings, Close - All 4 Chairs (Rough)
Real Strings, Far - All 4 Chairs (Rough)

And finally, let's hear how both pairs sound when featuring all the chairs we've recorded.

Real Strings, Close+Far - All 4 Chairs (Rough)

What to use? Close, far or both? All three solutions work and/or can be made to work, so don't worry. It's a bit too early to decide; we'll figure this out later on.

Processing

Working on the sound of these takes might seem a bit different from the usual workflow, but to me it’s based on a simple principle that I always employ: priorities.

Reverb and Panning

In this case, I wanted to work on reverb and spatial positioning as soon as possible, and for this I like to use the Waves S1 Imager. In a real orchestra, you would have the Primi (First Violins) and Secondi (Second Violins) slightly on the left, the violas and cellos slightly on the right, and the double basses right behind them, a bit further to the right. This is obviously not a hard rule (there are many variations on this theme), but listening to the backing track I noticed this general positioning was respected, so my real strings had to follow.

The S1 Imager used to position All Close (left) and All Far (right) sets in the stereo field
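For clarity, here is a simplified sketch of the positioning idea (not what the S1 Imager actually does internally): a basic constant-power pan law that nudges a source slightly left of centre, roughly where the virtual Primi already sit. The position value is a hypothetical example.

```python
# Simplified placement sketch (not the S1 Imager's algorithm): a constant-power
# pan law places a mono source in the stereo field.
import numpy as np

def constant_power_pan(mono, position):
    """position in [-1, 1]: -1 = hard left, 0 = centre, +1 = hard right."""
    theta = (position + 1) * np.pi / 4          # map position to 0..pi/2
    left = np.cos(theta) * mono
    right = np.sin(theta) * mono
    return np.stack([left, right], axis=0)

# e.g. real violins slightly left of centre, like the Primi in the mock-up
violins_stereo = constant_power_pan(np.random.randn(48000), position=-0.25)
```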

I was excited to put the new Exponential Audio R4 to the test. I created two different reverbs (conveniently called rev1 and rev2), with the idea of having the second one much darker and with a longer pre-delay, but still based on the same main parameters as rev1.

I wanted to accentuate the difference between close and far microphones, while still being in the same virtual space.

The two reverbs, as used in the session
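Conceptually, the relationship between the two reverbs looks something like this; the parameter names and values are generic placeholders for illustration, not R4’s actual controls or my session settings.

```python
# rev2 inherits rev1's main parameters, then gets a longer pre-delay and a
# darker tail. Names and numbers are placeholders, not R4 controls.
rev1 = {
    "decay_s": 2.4,        # shared tail length (illustrative)
    "pre_delay_ms": 20,
    "high_cut_hz": 12000,
}

rev2 = {
    **rev1,                # same virtual space as rev1...
    "pre_delay_ms": 60,    # ...but it waits longer before speaking
    "high_cut_hz": 6000,   # ...and is much darker
}
```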

Real Strings Close (Reverb1 added)
Real Strings Far (Reverb2 added)

Preparing The Backing Track

After this was done, I wanted to prepare the backing track. First off, I decided to pull the Primi down a good 7dB, which means we are mostly going to replace the First Violins with our real ones. Beyond that, everything I did was really minimal, except maybe for the double basses: I wanted to emphasize the extreme lows in the typical “movie soundtrack” balance that you hear in theaters. After all, this will be used for a cinematic sequence.
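In numbers, those two moves look roughly like the sketch below, with a generic low-shelf-style stand-in rather than the actual EQ3 + MaxxBass chain, and hypothetical cutoff and boost values: a -7dB trim is a linear gain of about 0.45, and the bass emphasis simply blends in a boosted low-passed copy.

```python
# Rough numbers behind the backing-track prep (generic stand-in, not the
# actual EQ3 + MaxxBass chain used in the session).
from scipy.signal import butter, lfilter

def db_to_linear(db):
    return 10 ** (db / 20)          # -7 dB -> ~0.447

def duck_primi(primi, trim_db=-7.0):
    """Trim the virtual First Violins so the real ones can take over."""
    return primi * db_to_linear(trim_db)

def emphasize_lows(basses, fs, cutoff_hz=80, boost_db=3.0):
    """Crude low-end emphasis: blend in a boosted low-passed copy."""
    b, a = butter(2, cutoff_hz, btype="lowpass", fs=fs)
    lows = lfilter(b, a, basses)
    return basses + (db_to_linear(boost_db) - 1.0) * lows

# e.g. primi_ducked = duck_primi(primi_stem)
#      basses_big  = emphasize_lows(bass_stem, fs=96000)
```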

Double Basses (original, reverb only)
Double Basses (EQ3 + MaxxBass)

Let's hear the whole backing track (no real strings added yet) after my processing.

Backing Track (Processed)

Written by Alberto Rizzo Schettino

Pianist and Resident Engineer at Fuseroom Recording Studio in Berlin, winner of Hollywood's Musicians Institute Scholarship and of the Outstanding Student Award 2005, he has worked on productions for Italian pop stars like Anna Oxa, Marco Masini and RAF, with Stefano 'Cocco' Cantini and Riccardo Galardini, and side by side with world-class musicians and mentors like Roger Burn. Since 2013 he has been part of the team at pureMix.net. Alberto has worked with David White, Niels Kurvin, Jenny Wu, Apple and Apple Music, Microsoft, Etihad Airways, Qatar Airways, Virgin Airlines, Cane, Morgan Heritage, Riot Games, Dangerous Music, Focal, Universal Audio and more.