Emulating human performance in orchestral software
I recently bought Garritan Personal Orchestra and find it a great tool to get professional-sounding orchestral renders of my scores. However, I feel a certain lack of "human" playing in it, i.e., chord changes sound a bit "robotic" at times and longer notes are perfectly sustained for long periods of time (they do not decay), which makes the performance sound a bit too computer-generated.
I understand professional musicians and film score makers use such software on a regular basis, yet they obtain much more human sounding results. Is there a trick to making the score sound less "computer-generated"? What can I do to make the results appear closer to human performance?
Is there a trick to making the score sound less "computer-generated"?
Certainly. As you may know, much of the music produced today, including pop songs and movie soundtracks, is computer generated. Hiring musicians to record is so expensive that only the most famous singers and big-budget feature films can afford it. The general audience will not notice, although subconsciously they may find live-recorded music "better".
You are looking into a profession on its own: music production.
(...) yet they obtain much more human sounding results
The music production process takes years, if not decades, to master. A 4-minute soundtrack may take a team of 3 engineers a month to produce, using hardware and software that easily add up to hundreds of thousands of dollars. If you are just starting out, do not be disappointed if your soundtrack is not nearly as good as the ones you admire.
The first step is to get a Digital Audio Workstation, or DAW for short. It is unclear from your question whether you already have one or are just rendering audio directly from the score software.
The second step is to get a good virtual instrument. High-quality virtual instrument libraries are very expensive. Garritan Personal Orchestra is good for beginners because of its low price and simplicity; however, it does not give you detailed control over note articulation, breathing, vibrato, etc.
No matter what library you use, you need to explore its full potential. Open the DAW, fire up the instrument, fiddle with the settings, then play it on a keyboard. Observe how well you can emulate a human player and produce notes with a variety of tonal characteristics.
(...) longer notes are perfectly sustained for long periods of time
(they do not decay)
The third step is to play every instrument, line by line, into the DAW and record the data. This is the core reason why your sound is "robotic": a real human player will interpret the melodic line and play it musically. If you simply dump the MIDI data from the score software into the DAW, it will always sound robotic. This process takes time, but it will greatly improve your soundtrack quality even if you are using a very cheap virtual instrument. Some software has "humanization" features, which basically apply random offsets to some values, but in my opinion they don't quite achieve their objective.
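To make "apply random offsets" concrete, here is a minimal sketch in Python using the third-party mido library (pip install mido); the input file name is a placeholder and the offset ranges are assumptions you would tune by ear. It nudges each note's start time and velocity by a small random amount, which is roughly what built-in humanize functions do:

```python
import random
import mido

mid = mido.MidiFile('strings_line.mid')  # hypothetical input file

for track in mid.tracks:
    # Convert delta times to absolute times so each note can be
    # nudged independently of the notes around it.
    events, now = [], 0
    for msg in track:
        now += msg.time
        events.append([now, msg])

    for event in events:
        msg = event[1]
        if msg.type == 'note_on' and msg.velocity > 0:
            # Small offsets: up to ~10 ticks in time, up to 8 in velocity.
            event[0] = max(0, event[0] + random.randint(-10, 10))
            msg.velocity = max(1, min(127, msg.velocity + random.randint(-8, 8)))

    # Re-sort and convert back to delta times.
    events.sort(key=lambda e: e[0])
    prev = 0
    track.clear()
    for abs_time, msg in events:
        msg.time = abs_time - prev
        prev = abs_time
        track.append(msg)

mid.save('strings_line_humanized.mid')
```

Keep the jitter small: if it exceeds the length of your shortest note, a note-on can end up after its own note-off.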
For this you will need a MIDI keyboard, a pedal, and an ASIO sound card. Entry-level versions of this hardware should be affordable if you could afford GPO.
The next step is to fine-tune the recorded performance data. You will need a few hardware faders, which are included on most MIDI keyboards. At a minimum you will want to fine-tune the MIDI "velocity" data, which controls loudness. If your virtual instrument can control vibrato, great. If it can control legato vs. staccato, even better. These parameters can be recorded together with the performance or on a separate pass.
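As a small illustration of offline velocity fine-tuning, this sketch (again mido; the file name and the target/strength values are assumptions) pulls every note's velocity partway toward a target, taming outliers before you shape the dynamics by hand or with a fader pass:

```python
import mido

mid = mido.MidiFile('flute_take.mid')  # hypothetical recorded take
TARGET = 80      # velocity to pull toward
STRENGTH = 0.5   # 0 = leave alone, 1 = flatten everything to TARGET

for track in mid.tracks:
    for msg in track:
        if msg.type == 'note_on' and msg.velocity > 0:
            shaped = msg.velocity + (TARGET - msg.velocity) * STRENGTH
            msg.velocity = max(1, min(127, int(round(shaped))))

mid.save('flute_take_shaped.mid')
```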
This step is optional: if your orchestral score calls for two of the same instrument, record them using different virtual instruments. In the real world, it is very unlikely that both flutists are playing the exact same flute model. If you "render" the sound directly from the score, the software will just double the volume of the same instrument, which is not realistic. The same principle applies to the entire woodwind and brass sections ... I hope you can see now why music production is time consuming and expensive.
Then you need to "mix" all your instruments as audio data. In a real orchestra, some players sit closer to the audience / microphone and some farther away. They will have different reverb levels and different left/right positions in the 3D space. When you render the score directly, every instrument is played as if it were right next to the microphone, which is why it sounds unreal.
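You would normally do this placement with panners and reverb buses in the DAW, but the idea can be sketched purely in MIDI with the standard controllers CC 10 (pan) and CC 91 (reverb send). The channel assignments and values below are illustrative assumptions:

```python
import mido

mid = mido.MidiFile()
track = mido.MidiTrack()
mid.tracks.append(track)

# channel: (pan 0=hard left..127=hard right, reverb send 0..127)
seating = {
    0: (40, 60),    # 1st violins: left of centre, moderately close
    1: (88, 60),    # 2nd violins: right of centre
    2: (64, 100),   # horns: centre, far back (more reverb)
}

for channel, (pan, reverb) in seating.items():
    track.append(mido.Message('control_change', channel=channel,
                              control=10, value=pan, time=0))
    track.append(mido.Message('control_change', channel=channel,
                              control=91, value=reverb, time=0))

mid.save('seating_setup.mid')
```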
I hope this answer gives you an idea of how to make your soundtrack less "computer generated" and more lively, and some general directions if you wish to explore further.
I just watched a series of videos of film composer Hans Zimmer, who mocks up all his soundtracks using software and virtual instruments so the directors can hear what a cue will sound like before it is recorded with a real orchestra or other musicians.
He has a control surface with several faders on it, and each fader is mapped to a different MIDI control message that changes a performance parameter on the virtual instrument. So he can call up French horns, hit a chord with his right hand, and then use his left to grab a fader and perform a crescendo on the horns. The other faders change all kinds of things like articulation mode (e.g., change to tremolo and change the speed of the tremolo), reverb levels, etc.
That requires a few ingredients:
Virtual instruments that support MIDI CC messages to make performance changes. Garritan probably supports some of this.
Hardware CC controllers to map to the controls in the virtual instrument.
The time to map them in a way that makes sense to you.
More time to practice using them to create performances that sound how you want.
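For a feel of what that fader data looks like once recorded, here is a sketch of a crescendo on a held chord as a ramp of CC 1 (mod wheel) values, written with mido. Many orchestral libraries, GPO included, tie dynamics to CC 1, but check your library's manual; the note numbers, tick counts, and value range are assumptions:

```python
import mido

mid = mido.MidiFile()  # default resolution: 480 ticks per beat
track = mido.MidiTrack()
mid.tracks.append(track)

CHORD = (48, 52, 55)  # C-E-G, a hypothetical horn voicing

# Hold the chord.
for note in CHORD:
    track.append(mido.Message('note_on', note=note, velocity=90, time=0))

# Ramp CC 1 from soft (20) to loud (127) over two beats (960 ticks).
STEPS = 32
for i in range(STEPS):
    value = 20 + round(107 * i / (STEPS - 1))
    track.append(mido.Message('control_change', control=1,
                              value=value, time=960 // STEPS))

# Release the chord.
for note in CHORD:
    track.append(mido.Message('note_off', note=note, velocity=0, time=0))

mid.save('horn_crescendo.mid')
```

In practice you would record this by riding the fader while the chord sounds; the ramp above is the idealized version of that gesture.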
A lot of keyboard controllers include at least two built-in performance controls: the mod wheel and aftertouch. So if you have a MIDI keyboard, you might not need to spend a lot on external faders or anything like that.
After recording a MIDI performance, or if you've sequenced it from scratch, there are plenty of ways to add randomness and humanization.
Some DAWs (I know Pro Tools does this) allow you to randomize the parameters of a group of selected MIDI events all at once. This is especially effective for velocity.
Beyond that, a very effective but also very time-consuming technique is manual editing, informed by an understanding of how the real instruments work and how they are played. Two examples are drums and violins (and other members of the strings section).
When humans play drums, they use both hands, and virtually no drummer has two identical hands. Also, when playing a repeated pattern, it's very difficult for a human not to "swing" it slightly. By that I mean the hits are not spaced perfectly evenly: usually every other hit lands slightly early, and the hit that's on time is usually slightly louder. So for drums, taking every odd hit (the first, third, fifth, etc.) and making it slightly louder, and then taking every even hit (second, fourth, sixth, etc.) and moving it slightly earlier in time can create a more realistic sound.
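Here is that exact trick as a sketch in Python with mido, assuming a hypothetical file of evenly spaced hits; the 6-velocity and 8-tick amounts are assumptions to tune by ear:

```python
import mido

mid = mido.MidiFile('hihat_pattern.mid')  # hypothetical input

for track in mid.tracks:
    # Work in absolute time so even-numbered hits can be pulled earlier.
    events, now = [], 0
    for msg in track:
        now += msg.time
        events.append([now, msg])

    hit = 0
    for event in events:
        msg = event[1]
        if msg.type == 'note_on' and msg.velocity > 0:
            hit += 1
            if hit % 2 == 1:                                  # 1st, 3rd, 5th ...
                msg.velocity = min(127, msg.velocity + 6)     # slightly louder
            else:                                             # 2nd, 4th, 6th ...
                event[0] = max(0, event[0] - 8)               # slightly early

    # Convert back to delta times.
    events.sort(key=lambda e: e[0])
    prev = 0
    track.clear()
    for abs_time, msg in events:
        msg.time = abs_time - prev
        prev = abs_time
        track.append(msg)

mid.save('hihat_swung.mid')
```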
A similar thing applies to bowed string instruments. An upbow sounds different from a downbow. Also, it's almost impossible to perform a perfectly even tremolo; the timing will always be ever so slightly skewed. Often the solution is to use a separate tremolo sample, so switching samples mid-performance might be how you create the articulations you want. The same applies to drum rolls.
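Sample switching of that kind is often driven by keyswitches: silent notes outside the instrument's playing range that select an articulation. A sketch, assuming mido and a hypothetical library where MIDI note 24 selects the tremolo samples (every library maps keyswitches differently, so check its documentation):

```python
import mido

mid = mido.MidiFile()
track = mido.MidiTrack()
mid.tracks.append(track)

TREMOLO_KEYSWITCH = 24  # hypothetical keyswitch pitch for tremolo

# Fire the keyswitch just before the passage; it produces no sound itself.
track.append(mido.Message('note_on', note=TREMOLO_KEYSWITCH, velocity=1, time=0))
track.append(mido.Message('note_off', note=TREMOLO_KEYSWITCH, velocity=0, time=10))

# The passage itself: a sustained A4 now played by the tremolo samples.
track.append(mido.Message('note_on', note=69, velocity=80, time=0))
track.append(mido.Message('note_off', note=69, velocity=0, time=1920))

mid.save('tremolo_passage.mid')
```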
There are plenty of other little details to think about. Enough legato notes played in a row on a clarinet will seem odd because there's no time for the player to breathe. Tiny details that everyone notices without knowing they noticed them.
I would recommend 3 steps to humanize the performance.
1. Use the Instrument Control panels to adjust the settings of each instrument so the timbre is appealing. Some DAWs will also allow you to automate these controls so they can change throughout the track. Check your DAW's manual for more information about automating controls inside a plugin. For example, in Reaper, you can run GPO as a VST plugin and then draw automation lines to change the instrument control parameters during the song.
2. Familiarize yourself with the MIDI controls, especially note velocity. This will prevent the notes from all sounding the same and allow you to create more human-like dynamics.
usermanuals.garritan.com/GPO5/Content/controls.htm#Basic_controls
3. Manually automate the volume sliders for each channel. This is time consuming, but it is a common technique in professional production to create dynamics and balance each section's instrumentation.
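If you want to pre-plan those moves as data rather than drawing them in the DAW, channel volume is CC 7 in standard MIDI. A small sketch with mido; the channels, values, and tick positions are illustrative assumptions:

```python
import mido

mid = mido.MidiFile()  # default resolution: 480 ticks per beat
track = mido.MidiTrack()
mid.tracks.append(track)

# Open with strings (channel 0) forward and brass (channel 1) held back.
track.append(mido.Message('control_change', channel=0, control=7, value=100, time=0))
track.append(mido.Message('control_change', channel=1, control=7, value=70, time=0))

# Two bars later (4/4 at 480 ticks per beat = 3840 ticks), swap the balance.
track.append(mido.Message('control_change', channel=0, control=7, value=75, time=3840))
track.append(mido.Message('control_change', channel=1, control=7, value=105, time=0))

mid.save('balance_moves.mid')
```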