Samplitude Audio Engine vs. Pro Tools Audio Engine


hubercraft

There are myths about the various audio engines. What is known:

Many Samplitude users came from other apps, and many have commented that Samp "sounds better" than their former applications. I moved from Pro Tools TDM (not HD) and Logic, and there is definitely more "air" and "depth" than in PT.

I attribute this to the 32-bit floating point audio engine and the meticulous design of the plugins in Samp. I believe that PT still uses fixed point processing.

There are many different ways to design a DAW. Samplitude/Sequoia has always put audio quality at or near the top of their list.

The studio I work at has an HD3 rig, and I will happily do some Sam/PT tests this weekend. I'll post the results.

If it does sound better, I don't think it would be a night and day difference. Now, between say Sam and Reason... well... I'll keep my mouth shut ;)

Hello GJ Orange. The phase tests you are talking about were for testing a single wave file in two different apps. I am referring to the mix engine. I think the audio engine in Sam is far superior to PT's. There are some great tests you can do yourself to hear the difference.

Jeff

OK guys, give this a try. Record about 15 tracks in your favorite app. We did this using a Trident console, EQ only, recording the signal already at a volume we liked. I recorded a full band, one song, into Cubase. Because the levels were already where we wanted them, all we did was pan where we wanted and then render each track as a stereo file, all with the same start time. Take these tracks and drop them into your test apps. Remember, the panning is already done. Sam was more open and wider than PT, Cubase, or Logic.

We then did a mix with a few plugins. These were VST plugs, and we used them in Sam, Cubase, and PT LE. When we got the mix where we liked it, we saved the settings for each plug and used the same settings in each app. Again Sam kicked butt. I did this as a blind test for some client friends of mine who are not engineers, but musicians with good ears. I firmly believe Sam has the best mix engine around. Remember, these are the same files with the pan rendered already. If anyone else gives this a try, please post your results!

Have fun

Jeff

I tried it today with a 32-track project (the max for PT LE). The test candidates were Sam SE and PT LE (6.4), and a friend of mine came by with his laptop and Cubase SX3.

First, no plug-ins, master fader at -7. Listening back, I thought Sam and PT were equal, with subtle differences that were more about taste than about quality. Cubase sounded significantly worse. But then, I really don't like Cubase. My friend liked Sam best, then Cubase, then PT. He really doesn't like PT.

We got really into it and decided to do a double blind test. I numbered the files, noting on a piece of paper which number corresponded to which app. My girlfriend played the files for us in random order. What seemed obvious before was not obvious any more. Our ratings were all over the place.

We played two files together, inverted the phase in one: Nothing. Complete silence. Same result with every combination.

We then put a compressor and EQ (Kjaerhus Classic) on every second channel, same settings in all DAWs. This time we didn't bother with a double blind test. Again, the resulting files cancelled each other out.

If there is another test that I could try to experience the superiority of Sam's mix engine for myself, please let me know.
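For anyone who wants to reproduce the cancellation test numerically rather than by ear, here is a minimal sketch in Python using only the standard library. It assumes two mono 16-bit PCM WAV bounces of equal length; the file names at the bottom are placeholders, not real files from this thread:

```python
import struct
import wave

def read_samples(path):
    # Read all frames of a mono 16-bit PCM WAV file as a list of ints.
    with wave.open(path, "rb") as w:
        assert w.getnchannels() == 1 and w.getsampwidth() == 2
        raw = w.readframes(w.getnframes())
    return list(struct.unpack("<%dh" % (len(raw) // 2), raw))

def null_test(path_a, path_b):
    # Inverting one file and summing the two is the same as
    # subtracting them sample by sample and checking the residual.
    a, b = read_samples(path_a), read_samples(path_b)
    residual = [x - y for x, y in zip(a, b)]
    return max(abs(s) for s in residual)  # 0 means the bounces cancel completely

# peak = null_test("bounce_samplitude.wav", "bounce_protools.wav")
# A peak of 0 means the two mix engines produced bit-identical output.
```

A peak residual of 0 is complete silence, exactly the result described above; any nonzero peak quantifies how far apart the two bounces are.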

First of all, your test is flawed. The ONLY way to make a test like this legit is if ALL DAWs use the same I/O hardware, DSP, and the same clock. But before you do that, you have to pick the most pristine converters that work with both native systems and TDM II. That limits you to about two or three manufacturers.

Second, the band must be recorded simultaneously into all systems, all calibrated the same. Then the stereo outputs must be fed to a passive summing device feeding a pristine monitor system. All of the judging parties must occupy the same acoustic space at the sweet spot, should be blindfolded, and must not be informed of their selections.

Since all of this has never been done and published, and is physically impossible in some cases, I would say: math is math. Any two DAW programs of like kind (floating or fixed) should be too close to call, because the answer to 1+1 should be the same in either case. We all know that fixed point math is more accurate than floating, but floating has more headroom. So you have to compare apples to apples. I prefer fixed myself. I don't think anyone here wants their bank accounts or accounts receivable figured with floating point math.
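The headroom point is easy to demonstrate numerically. Here is a rough sketch in Python, rounding values through IEEE 754 single precision (the sample format of a 32-bit float mix bus) and through a hypothetical 24-bit fixed point bus that hard-clips at full scale; the function names and the 24-bit width are illustrative assumptions, not any particular DAW's internals:

```python
import struct

def to_float32(x):
    # Round a Python double through IEEE 754 single precision,
    # the way a 32-bit float mix bus would store a sample.
    return struct.unpack("<f", struct.pack("<f", x))[0]

def to_fixed24(x):
    # Quantize to 24-bit fixed point with hard clipping at full scale
    # (illustrative model of a fixed point bus with no headroom).
    n = int(round(x * (2 ** 23 - 1)))
    n = max(-(2 ** 23), min(2 ** 23 - 1, n))
    return n / (2 ** 23 - 1)

hot = 4.0  # a bus signal 12 dB over full scale
print(to_float32(hot))  # 4.0 -- float carries the overs intact
print(to_fixed24(hot))  # 1.0 -- fixed point clips at 0 dBFS
```

So a float engine can run internal buses far over 0 dBFS and pull the level back down later without damage, while a fixed point bus of the same width must clip; within normal levels, both formats are far more precise than anything audible.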

What does testing the internal mix bus have to do with the I/O hardware? We are talking about comparing the resulting stereo files of bounce operations. Why would you think a summing box is necessary for a stereo file? Why must all judging parties occupy the same sweet spot? If something sounds better than something else, it usually sounds better both at the sweet spot and a little bit away from it. And why should everybody be blindfolded? (You realize that "double blind test" does not mean blindfolded?) You don't have to leave it to the experts... for the cancellation test you do not even need good ears.

Look, the test posted earlier was flawed. It was recorded into one DAW, then imported into the others. To make this fair, you should allow ALL DAWs to record the test source material. The reason is that some hardware sounds better than others, and some DAWs lock you into their hardware unless you have a bridge of some kind.

In order to make the test monitoring quick and double-blind, you need a passive source selector, or summing device, that allows split-second changes so that comparisons can be heard. I have seen tests thrown where people had seen the objects and made up their minds beforehand about which one had to win, because it was brand X. A good example is an AES show in '90 or '91. A guy had a black box containing two types of wire: one was hi-fi, tweaky stuff, the other Belden install grade. The box was mislabeled on purpose. People listened and, 99% of the time, picked the more expensive one, saying that they heard a difference. The guy revealed at the end of the show that he had lied, and hence that the industry was full of crap and too open to suggestion. Blindfolding is the way to go, IMO.

It has been scientifically shown that humans cannot memorize sounds to recall later; hence the "I don't remember my Neumann in 1960 sounding like this brand X" complaints. Perceived differences in loudness will throw the test, hence the need for a passive unit, as most people today cannot calibrate a console's tape returns to tolerances of 0.1 dB.

The reason that all parties must monitor in exactly the same space is this: most studios, even the ones designed and built by the big boys (Walters-Storyk, Fran Manzella, Russ Berger, etc.), have a frequency response of +/-15 dB. That is just reality. There is no such thing as consistent, flat 20 Hz - 20 kHz response in ANY room, everywhere in the room. It likely will not happen even in the sweet spot.

So, if you are standing off axis of the monitors, there can be a 6 dB drop. You have no idea what is really going on then. You will not hear the frequency and phase relationships of the source. Combine that with the fact that the room will be jacking with you as well. For a demonstration, check out Ethan's videos at www.realtraps.com.
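On the level-matching point above, it helps to see what these dB figures mean as linear amplitude ratios. A quick calculation (the standard 20*log10 amplitude convention; the numbers chosen are the 0.1 dB tolerance and -7 master fader mentioned earlier in the thread):

```python
import math

def db_to_gain(db):
    # Convert a level change in dB to a linear amplitude ratio.
    return 10 ** (db / 20)

print(round(db_to_gain(0.1), 4))  # 1.0116 -- about a 1.2% amplitude change
print(round(db_to_gain(-7), 3))   # 0.447  -- the -7 dB master fader used above
print(round(db_to_gain(-6), 3))   # 0.501  -- a 6 dB drop roughly halves amplitude
```

A 0.1 dB mismatch is barely a 1% amplitude difference, which is exactly why unmatched levels bias listening comparisons: the slightly louder source tends to be heard as "better."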

Consider the following situation: a client delivers a 20-track project (simple .wav files), and you start mixing that stuff. So it is reasonable to compare material mixed in different DAWs. Whatever goes on in the core of a DAW, the built-in EQ and FX are ultimately what matter most. Only if you used external FX for everything would this not matter, but what is a native ITB mixer worth if you cannot use its panning and EQ because the quality isn't there?

How did the guy get those wave files? That is a big issue. Were those files consolidated? Did they have effects mixed in, and if so, how?

The deal is that you will likely never be able to compare three DAWs simultaneously with the exact same plugs. That is what you would need to do, because all DAWs handle plugs differently. Not all DAWs use the same pan laws, either, and this will affect the result. Again, all of this is moot IMO.
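The pan law point alone is enough to make two bounces differ. Here is a sketch of two common laws, a -3 dB constant-power law and a -6 dB linear law; these formulas are textbook conventions, not the internals of any DAW named in this thread, and which law a given DAW defaults to varies:

```python
import math

def pan_constant_power(pos):
    # pos in [-1, 1] (hard left to hard right); -3 dB at center,
    # so perceived loudness stays constant as the source moves.
    theta = (pos + 1) * math.pi / 4
    return math.cos(theta), math.sin(theta)  # (left gain, right gain)

def pan_linear(pos):
    # pos in [-1, 1]; -6 dB at center, gains always sum to 1.
    return (1 - pos) / 2, (1 + pos) / 2

left_cp, _ = pan_constant_power(0.0)
left_lin, _ = pan_linear(0.0)
print(round(20 * math.log10(left_cp), 2))   # -3.01 dB at center
print(round(20 * math.log10(left_lin), 2))  # -6.02 dB at center
```

A centered track is roughly 3 dB louder under one law than the other, so identical sessions bounced in two DAWs with different default pan laws cannot null, regardless of the quality of either mix engine.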
