Is there a way I can record, hear myself, and not have latency with an audio interface?

I'm still not quite clear, Alexis, on why you got latency using the 2i2. Generally, as Keith says, you'll only get latency if you are listening to the return from the PC, and you'd only do that if you were using the PC to add effects during recording, e.g. recording your dry guitar and adding amp and stomp effects in the DAW. Generally you'd just have the switch on the 2i2 in the Direct Monitoring position, and then everything you hear through the headphones is what goes into the interface.

I did have a strange issue recently where I was recording against playback from my DAW (so I was playing along to a drum track in my DAW and recording at the same time). Although during recording I nailed when I came in, when I listened back on the PC I was slightly out… It turned out I'd forgotten to switch the driver to ASIO (my mixer will quite happily use native Windows drivers) and that was causing that little bit of latency. However, I didn't hear it when recording, only when playing back.


Exactly. If you are direct monitoring then you could have several seconds of latency into the PC and it shouldn't affect either your monitoring or the recording.

Obviously, several seconds of latency is ridiculous and you would never get that in practice. But if you did, it would mainly be a pain simply because of having to wait a few seconds for the DAW to start playback after clicking the play button. And if you aren't using a DAW with latency compensation, you might need some editing to align the tracks.

In fact, one piece of advice I have been given when recording with direct monitoring is to tweak the driver buffer settings to increase the latency, as this also lowers the load on the system and reduces the chances of any buffer overrun issues (aka "xruns"). And it allows you to run more plugins at once.

If you are recording an amp or an acoustic guitar or vocals and having latency issues, whether on Windows, Mac, Linux, or iPad, then you are probably doing it wrong. You are probably monitoring the DAW rather than monitoring directly from the interface (or the amp in the room, etc.).

Cheers,

Keith

In theory, latency compensation would sort that out. Most DAWs (and even tools like Audacity) have this. However, you do normally have to calibrate/configure it.

Even with low-latency drivers like ASIO, there's going to be a small amount of latency and, in fact, the latency that the driver reports isn't quite correct: that's just the latency caused by the buffering to/from the USB bus. It's normally the biggest contributor, and it's the only thing you can realistically control, but the interface itself will have some buffering latency that you can't see.

The latency will be very low, but there will be a tiny offset between separately recorded tracks if the calibration is incorrect. I think most DAWs will default to the driver-reported latency which, as I said, is probably not entirely correct.

The suggested approach is to connect an output on your audio device to an input with a short cable, and run a calibration measurement in your DAW, if available. In Ardour and Mixbus, it looks like this:

[screenshot: the Ardour/Mixbus latency calibration dialog]

This will measure the real round-trip latency and calibrate the latency compensation accordingly. Of course, with some types of interface (like USB connected amps) this may not be practical, but for normal audio interfaces with analog inputs/outputs, it is.

There's a detailed article on this subject in the Ardour manual, a lot of which applies to all setups, so it's worth reading even if you don't use Ardour:

https://manual.ardour.org/synchronization/latency-and-latency-compensation/

Cheers,

Keith

Thanks Keith, I'll take a look at that. It was rather strange at the time, as I'd never had a problem syncing myself recording multitrack, i.e. playing a previously recorded track on the DAW whilst I record another. Every recording would be out of sync. It turned out I'd simply forgotten to use ASIO drivers and it had defaulted to Windows drivers, which were causing the problem. Once I switched, all worked again as expected. I might add I was playing back the track from the DAW through the mixer/AI into the monitors, as it's a two-way interface. It did, though, make me understand people's frustrations when they get this kind of issue.

Yes, the standard Windows drivers are notorious for not having great latency due to buffering, and I don't think the drivers have a way of reporting the buffering latency to the apps, so there's no way the app can automatically calibrate for it.

Which is why ASIO drivers were invented, and why they became the standard for low-latency work.

Cheers,

Keith


By the way, the article mentions church organists. There is another "instrument" which has far worse latency: a church bell.

The time between "pulling the rope to ring the bell" and the bell actually sounding is around 2 seconds. And yet expert ringers can synchronize their ringing with the other ringers within a few tens of milliseconds.

Cheers,

Keith

P.S. I put "pulling the rope to ring the bell" in quotes because that's actually the incorrect way to think about it. Bell ringing (as in the English-style, full-circle ringing that @brianlarsen and I do) is really a continuous process of handling the bell through the rope, which is how experts can achieve such precise control over a swinging lump of metal the weight of a small car.

Thread-bomb! :laughing:
My son and me at band practice, demonstrating the above.
I only realised now how appropriate the sally colours are.
I'll call this 'ringing for peace' today :angel:


Satellite delayed response. :rofl:

It's a combination of the two. GTR tone etc. is FXed on the Mustang, with some mastering FX in Reaper on the GTR tracks: I run the stereo line outs into 2 distinct tracks, pan them, and may add an FX or two so they are discernibly different. VOX FX is all done in Reaper. And then there is a pre-master "master" track for final mastering. So there are a few things in the chain, but no obvious latency. BUT I have adjusted the latency offset in Reaper, as per these:

Recording Latency Manual Offset in REAPER

Adjusting Recording Latency (Loopback Test) in REAPER

There is a separate thread on these adjustments somewhere ( @DavidP )

The 1820 has a "dry/wet" mix control, so I can monitor 100% direct (DI) or, fully open, take the output from Reaper/PC, with no noticeable latency.

MONITORING knob adjusts the mix between the direct inputs and computer playback:
IN: only direct inputs (0 ms latency)
MIX: 50/50 mix of inputs and playback from computer
PB 1-2: only playback from computer

I generally run in the mid position, where it's biased to the AI input but the FX is present.

Your last point re buffer sizes: what are the recommended settings to avoid overruns?
As you can see I am running at 256 and could probably go lower, but I have not tried to since things have been stable after the last OM debacle, which was due to an OBS/Zoom sample-rate mismatch; that's why it's showing 48 kHz and not the 44.1 kHz I have always used in the past. Anyway, that's a separate issue and now solved. Is the recommendation to go higher: 512, 1024?

Also, I am always curious why the ASIO status shows different latency values, In 328 / Out 360, when the buffer size is set to 256? Or is that samples captured? The input offset is 67 samples.

All of this reminds me to check everything gets to Zoom in one piece and avoid the chaos of the last one!

Cheers

Toby
:sunglasses:

I think as low as you can go without audio stutters. I use 64 on my 2i2.

There's not really a specific recommended value to prevent overruns. It's down to the specific behaviour of your system and how much load you are putting on it (including plugins and any other apps running at the same time). A lot is hardware related: I've had systems where the number of overruns decreased by plugging the AI into a different USB port.

In general, for recording purposes, unless you are heavily relying on plugins to influence your performance, I would simply turn off the DAW monitoring completely. Then you don't have to worry about latency at all, and you can put the buffer size up as high as you like.

If you are using plugins to tune the track, and you can apply them in post-production (mixing and mastering) rather than in pre-production (recording/tracking), then you are best doing that. Mastering plugins, for instance, are designed for the mastering stage, which is normally done hours or days after you've done your recording. I'm not sure why you would use them at the tracking stage.

As I say, for recording purposes, unless you are using DAW plugins (such as amp modelling plugins) as a key part of the sound you need to monitor in order to influence the performance, you really should be using direct monitoring. If you are using an external amp (or multi-FX, etc.) to create your core tones, you really should be direct monitoring. And that lets you set nice large buffer sizes and avoid overruns or CPU exhaustion issues.

If you are using the DAW for live performance purposes, that might change things as, in that case, any DAW plugins you use become part of your "live sound". But, even then, I would ask yourself: do I really need to constantly monitor the end-result audio?

I would argue that for the normal mixing/mastering-type plugins like compression and EQ, you probably don't. You should focus on getting these set up so that they work as you need them to, and then focus on your performance during the actual live show.

I have done sound for a fair number of local band gigs and, in every case, even as the dedicated "sound guy", I tend to get the mix, any EQ and compression, and any added effects set up at the start of the gig during the sound check, and then mostly leave things alone, other than tweaking individual levels as required during the set (for instance when the guitar player changes guitar, or the backing singer takes the lead vocal).

In a live stream setup where you are performing and broadcasting, you need to have this stuff set up at the start. I guarantee that, if there are major problems, they will not be the sort of thing you can easily solve by adjusting a bus compressor or EQ, and I very much doubt you will have the ability to adjust such things on the fly whilst you are performing.

So do you need to hear the effect of these plugins during your performance? I would argue that you don't.

So I would recommend that, even for live stream setups, you direct monitor if at all possible, making latency a non-issue. After all, no one will notice if the audio stream is delayed by 20-30 ms (or even 100 ms). You can then crank up the buffer size and reduce the likelihood of buffer overruns.

The other thing, as you have found out, is to try to make sure all of your audio devices and apps are running at the same base sample rate.

I hope that helps.

Cheers,

Keith

Sorry, but that's wrong. Unless you absolutely need to monitor the DAW because your core tone plugins (such as amp modelling plugins) are on the PC, if you have the ability to monitor directly (rather than via the DAW) then that is what you should be doing. And in that case latency is not an issue.

The question is then: why are you trying to minimize system latency when it doesn't matter? You are just creating potential problems.

In this case, crank up the buffer size: 1024, 2048, it doesn't matter. The aim here should be to minimize the chances of audio stutters.

Cheers,

Keith

I'm not an expert on ASIO. I would assume these are the measured values from the calibration you did. All good stuff, and this is absolutely what you should be doing.

As I said above, the buffer latency is only part of the story; there are other latencies involved (as your case shows) and you can only account for these if you measure them.

But the "256 samples" buffer value here is "how many samples are built up before they are transferred to/from the audio interface".

The more samples you buffer, the bigger the gap between a packet being captured and it being transferred to the PC (and vice versa). Doubling the buffer size will double the buffer latency.

The smaller the buffer size, the quicker the samples are sent to/from the audio interface and, thus, the less waiting around there is for the buffer to fill up. But that causes lots of small chunks of data to be transmitted and received on the bus, which then requires more processing to handle them. And if the buffer size is so small that the audio and USB bus drivers can't keep up, you get overruns: the buffer fills up faster than it can be emptied, and the next bunch of samples gets dropped on the floor.

Note the figures here can be calculated using quite simple mathematics.

At 48 kHz, a sample is transmitted every 1/48,000 of a second. If your buffer size is 256 samples, you can work out the latency as 1/48000 * 256 = 0.005333 s (which is 5.333 ms).

For the output latency, which is equivalent to 360 samples: 1/48000 * 360 = 0.0075 s (7.5 ms).

Cheers,

Keith

Keith

Thanks for the comprehensive reply on both subjects; all useful information. For general recording I would pretty much just monitor direct, then mix and master the stems afterwards. For the OM performance I've still just used the mid point, so I've not needed to listen to the "full" PC/DAW output, and I then assess levels on video playback from OBS. The PC has a reasonable spec so it seems to cope, and the "live" setup is not over-burdened with plugins. So all good, and yes, the aim is to have a setup in place that is locked down prior to kick-off. My error in the run-up to the last OM was changing the sample rate in OBS to 44.1 kHz to match Reaper (I hit loads of pops and crackles); hence OBS and Zoom threw a big spat and I had to add the AI direct to Zoom 10 minutes into the show!! If I'd changed Reaper to 48 kHz all would have been good. Another lesson learned.

Cheers

Toby
:sunglasses:


And you expect me to find it, Toby :open_mouth:

I was planning to post those Reaper videos and then saw you'd done so.


I think the big difference with your setup compared to most is that you are using a "recording" setup for "live" performance, so you are effectively routing your live performance into Reaper to add software effects (stomps etc.) and then outputting that for live purposes, e.g. the OM via OBS.

The vast majority just use a DAW for recording, so the "live" application of effects is not important. As long as the latency is low enough that you can sync your playing with the other, already-recorded tracks, that's enough.

I reckon ultimately you'd be better off with a mixer that includes some FX. Then you can do your live setup on that, applying your EQ, panning, etc. Probably the only thing you'd be missing is some vox FX. Then you just pump that out to your PC and OBS. I guess you couldn't simulate the "parallel" amp paths that I assume you are creating with your separate guitar channel outputs and the different FX you apply to each.

How are you going to plumb your Trio into the mix?

And Toby has a Play Acoustic pedal so the vox mic could be plumbed through that to a mixer for vox fx.


It's not wrong. I'm not an expert, but it's literally what Focusrite say about buffer size for recording: https://support.focusrite.com/hc/en-gb/articles/115004120965-Sample-Rate-Bit-Depth-Buffer-Size-Explained

Granted, I agree that when using direct monitoring it doesn't matter, but why wouldn't you want to listen to what the DAW is recording if you have the option to? That's how I see it.

Well, the only reason you'd listen to what your DAW is recording is IF you have some effects applied to that track in the DAW and want to hear them as you play… similar to what Toby does, where he uses the DAW to apply effects before sharing live. If not, then listening via direct monitoring IS listening to what your DAW is recording.

Maybe not ā€œwrongā€, but perhaps ā€œincompleteā€. Itā€™s simply not not very good advice if you are direct monitoring. And I apply that ā€œincompleteā€ label to the Focusrite article too, as it doesnā€™t mention direct monitoring at all.

By the way, I do count myself somewhat as an expert, as I've worked on latency-sensitive telecommunications systems/applications for a large part of my working life, and I have a qualification in Audio Engineering.

Exactly!

At the end of the day, if you can get a low buffer size with acceptable latency and performance, whether you are monitoring directly or via the DAW, then that's great.

But a lot of people do have issues with this, which is the whole point of this thread. And, if they are able to do it, direct monitoring completely solves these issues, even on systems where getting an acceptably low buffer size without causing glitches is difficult.

The trouble is there's a persistent myth going around that you always need to minimize your latency. Yes, there are cases where this is important, but there are very many cases where it doesn't (or shouldn't) matter at all.

And perpetuating this myth with incomplete advice really isn't helping, as it sends people chasing after low latency as the wrong solution to their monitoring problems, or as a solution to problems they might not even have.

Cheers,

Keith

This could also go into a multi-input AI with direct monitoring. This should keep the tracks separate for recording purposes (if required) but allow a "monitor mix" in the AI going out to the direct monitor.

But the bottom line is that if the primary effects (vocal, amp tones, etc.) are done externally, rather than in the DAW, they can be monitored externally too. Latency is then a non-issue, so set the buffer size to something that works reliably. Larger buffer settings should always be more reliable than smaller ones; don't be afraid to use them.

I would, at least, set the buffer to the next setting up from where it appears to start working reliably. In my experience, you can still get occasional minor glitches if you go for the lowest buffer size that seems to work.

Cheers,

Keith