# Arturia's True Analog Emulation, how does it work?

#### dashxdr

##### Newbie level 6
Arturia is a company that sells software synthesizers; one of them emulates the Moog. They brag about their approach, calling it True Analog Emulation. One feature they describe is that the approach does not generate any aliased frequencies, unlike a competitor's product.

The aliased frequencies they speak of are artifacts of the algorithm used to generate the audio samples. If the sampling rate is 48,000 samples/second, a naive approach to generating your samples will inadvertently create frequency components above 24,000 Hz. These cannot be represented at a 48 kHz sampling rate, so they fold back as unwanted alias "noise" frequencies.
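For a concrete picture of where such an out-of-band component ends up: frequencies fold back around multiples of the sampling rate. A small helper (my own illustration, not from the thread) computes the folded frequency:

```python
def alias_of(f, fs=48000.0):
    """Return the frequency at which a tone of frequency f (Hz) actually
    appears after sampling at rate fs, by folding it into [0, fs/2]."""
    f = f % fs                 # aliasing is periodic in the sampling rate
    return f if f <= fs / 2 else fs - f

# A 30 kHz component generated at a 48 kHz rate shows up at 18 kHz:
print(alias_of(30000.0))   # 18000.0
```

Anything a naive oscillator produces above 24 kHz lands back in the audible band this way, which is exactly the "screeching" described below.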

What I'm trying to figure out is how such an algorithm might work. Meaning: how do you generate smooth, clean synthesized audio without burning massive CPU cycles and without alias noise?

Here is a concrete example. Suppose you generate a square wave like this:
float a = 0;
float freq = 0.1;
for (;;)
{
    output_sample = (a - floor(a)) < 0.5 ? -1 : 1;
    a = a + freq;
    freq = freq * 0.9999;
}

It will start high and go lower and lower. If you actually do this and output the samples to a DAC + speaker, it sounds awful. There are screeching frequencies in the background behind the main dropping tone, hissing, some rising, some falling. Just total chaos that is a big distraction.

I believe these artifacts are what Arturia's promotional literature is talking about.

I can get a good result by building up a square wave from sine waves of higher and higher frequency (and lower amplitude) if I cut off at the right point (sort of a low-pass filter). But this is computationally expensive as well as very specific to the waveform in question (not very general).
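The additive approach described here (summing odd sine harmonics and stopping below the Nyquist frequency) can be sketched as follows. This is my own illustrative Python, with all names invented:

```python
import math

def bandlimited_square(freq, fs=48000.0, n_samples=512):
    """Build a square wave by summing odd harmonics of `freq`,
    stopping below the Nyquist frequency fs/2 so no aliasing occurs."""
    nyquist = fs / 2
    out = []
    for n in range(n_samples):
        t = n / fs
        s = 0.0
        k = 1
        while k * freq < nyquist:        # cut off "at the right point"
            s += math.sin(2 * math.pi * k * freq * t) / k
            k += 2                       # square waves use odd harmonics only
        out.append(4 / math.pi * s)
    return out

wave = bandlimited_square(440.0)
```

As the post says, this is expensive (dozens of `sin()` calls per sample) and specific to the square wave; other shapes need their own harmonic series.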

The general question is: how do you approximate a signal that could be defined by partial differential equations, in a continuous sense, using computers that are not continuous in any sense, without introducing distortion?

I don't even know how to properly phrase the question, actually. No formal education in this area.

You've got a circuit composed of resistors, transistors, operational amplifiers, capacitors and coils. You want to model that circuit and produce samples of the signals at, say, 48 kHz without introducing artifacts. That is, your sequence of samples should be as nearly identical as possible to a digitized sample stream coming from the actual hardware...

Sorry this question took so long to ask. Any advice at all would be welcome, even suggestions as to the correct terminology I should be using. Is this in the realm of finite impulse response (FIR) filters? Numerical analysis? Digital signal processing? Digital filtering?

Thanks.

Last edited:

##### Super Moderator
Staff member
I read an article arguing that the commercial standard of 44.1 kHz is too low a rate for digital sampling of music, because many waveforms in the audible range contain ultrasonic harmonics. Suppose you sample a 21 kHz sine wave at 44.1 kHz... you obtain an audible tone at 2.1 kHz (I believe it said).

Moreover there are audiophiles who claim we perceive the sound having greater fidelity when those higher harmonics are absent, even if they are outside our range of audibility.

I have played sine waves into my tape recorders. Including frequencies above 15 kHz. And sure enough I hear funny whistles and squeaks on playback that are not supposed to be there. The tape head bias is ultrasonic, and it is a counterpart of sorts to the digital sampling frequency. I also get the same whistles and squeaks from high frequency sine waves recorded by my computer at 44.1 kHz.

The Arturia engineers would have to be ingenious to find a way around this.

Since you bring up the Moog...

From the beginning the Moog people were proud of their proprietary audio filters, which gave their synthesizer its distinctive sound. It would have sounded flat and machine-like if it had been a simple square wave running through a low pass filter.

So to digitally duplicate the Moog sound would also be a major accomplishment for Arturia's engineers.

Perhaps more so because it has a unique quality as compared to instrument voices such as I hear on my \$300 Yamaha keyboard. Certain ones sound amazingly lifelike, having been sampled from real instruments such as a saxophone, trumpet, grand piano, etc.

However those instruments do not have a glissando effect (frequency sweep). The Moog was capable of glissando.

To produce glissando requires that a tone be synthesized electronically.

Furthermore an original (analog) Moog filters a high note slightly differently from a low note.

If a manufacturer claims to imitate the analog Moog, then it can only get good reviews if people say they can't tell the difference. I imagine that is why Arturia would want to claim 'true analog emulation'.

As for the sampling artifacts, I wouldn't know how they eliminate that, or whether it would happen anyway as a result of duplicating the Moog sound, and a marketing guy noticed it and made it a selling point.

#### dashxdr

##### Newbie level 6
> If a manufacturer claims to imitate the analog Moog, then it can only get good reviews if people say they can't tell the difference. I imagine that is why Arturia would want to claim 'true analog emulation'.
>
> As for the sampling artifacts, I wouldn't know how they eliminate that, or whether it would happen anyway as a result of duplicating the Moog sound, and a marketing guy noticed it and made it a selling point.
I've played with the Arturia program, there is even a free time limited trial of it. The output is very clean, and from playing with it I'm convinced it's the real deal. The sounds it generates are just like the ones I'm used to hearing.

I know the concept is possible; I'm just wondering about details of implementation. The problems you describe with sampling at 44,100 Hz sound like the input signal wasn't low-pass filtered before sampling. You'd want to block anything over 22,050 Hz before sampling; otherwise the sampling itself will generate alias-frequency distortion.

##### Super Moderator
Staff member
It's likely that you know more about this topic than those of us currently posting at this board.

You would probably get more replies at a website that has musicians who are also familiar with digital sound processing.

My post was about a number of things which were tangential to the particular question you bring up. I should have divided it up into installments.

I see now that Arturia is the only outfit to be given a license to synthesize the Moog sound in software. This appears to be a big deal in the music world.

Perhaps it had to wait for today's faster computers, before it was possible to duplicate in software what real analog circuits do.

As you may know, MIDI software generates a waveform for each instrument, among the many instruments that might be playing at one time. Each waveform consists of (maybe) 44,100 numerical values per second. The program sums all the waveforms every 1/44,100th of a second, then sends one numerical value to the sound card (I think), or two values for stereo.

Ten years ago I had a 486 PC that had a MIDI playback program. It could create a couple dozen MIDI instruments at one time. The voices were flat and monotonous, as compared to nowadays. Nevertheless MIDI software created a new outlet for musicians back then.

If I had to generate one voice in software, and change it continually...

I would cut corners where it would not hurt. I'm not sure an individual waveform needs to be recalculated from each zero-crossing to the next; maybe 50 times a second would be often enough to calculate a new waveform. This would be based on pitch, amplitude, etc., as well as add-ins such as a phasing waveform or ring modulator.

Whenever I could, I would generate lookup tables, to save time from having to re-calculate afresh.

To reduce the aliasing-artifact problem, I wonder whether it would work better to create the waveform first, then use fast Fourier transform (FFT) algorithms to eliminate any harmonics above 15 kHz. That might be faster than having to determine which sine waves should be excluded prior to adding them up.
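The generate-then-clean-up idea could be sketched like this. A plain DFT stands in for a real FFT here for clarity (this is my own illustrative code; an FFT would do the transforms in O(n log n) instead of O(n^2)):

```python
import cmath

def dft(x):
    """Plain discrete Fourier transform (slow; an FFT would be used in practice)."""
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * cmath.pi * k * t / n) for t in range(n))
            for k in range(n)]

def idft(X):
    """Inverse DFT, returning the real part of each sample."""
    n = len(X)
    return [sum(X[k] * cmath.exp(2j * cmath.pi * k * t / n) for k in range(n)).real / n
            for t in range(n)]

def brickwall_lowpass(signal, fs, cutoff):
    """Transform, zero every bin whose frequency is above `cutoff`
    (including the mirrored negative-frequency bins), transform back."""
    n = len(signal)
    X = dft(signal)
    for k in range(n):
        freq = k * fs / n if k <= n // 2 else (k - n) * fs / n
        if abs(freq) > cutoff:
            X[k] = 0.0
    return idft(X)

# One period of a naive sawtooth, band-limited to 15 kHz at a 44.1 kHz rate:
n = 256
saw = [2.0 * t / n - 1.0 for t in range(n)]
clean = brickwall_lowpass(saw, fs=44100.0, cutoff=15000.0)
```

Note a hard brick-wall cutoff like this produces ringing (the Gibbs effect), which is exactly the "squiggles" observed in the dumped wavetable later in the thread.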

I have not used fast Fourier analysis, but I know it can do amazing things. It can create an equalizer to filter various bands of sound while it is played live through my computer. It can create animated spectrograms on my computer screen (such as the Firestorm visualization in Windows Media Player).

#### alyna

##### Newbie level 4
As I read everyone's posts, I find this very interesting... it gives me an idea about Arturia's True Analog Emulation... great job, guys :razz:

#### dashxdr

##### Newbie level 6
I have been investigating the issue in more detail. My approach was to hack into the code, using the software installed on Windows XP and the Syser debugger. I can set up the Moog synth to generate 32 polyphonic voices that play continuously, which bogs down the CPU. Then I interrupt the system with the Syser debugger, and the chances are very good that at the moment of interruption the CPU is busy inside the number crunching of the Arturia synthesizer.

Here is one section of code I dug into:

001533F0 8B 56 38 mov edx, [esi+0x38]
001533F3 F3 0F 10 0C 8A movss xmm1, [edx+ecx*4]
001533F8 8B 54 24 20 mov edx, [esp+0x20]
001533FC 0F 5A E1 cvtps2pd xmm4, xmm1
001533FF F2 0F 59 E7 mulsd xmm4, xmm7
00153403 F2 0F 5C E7 subsd xmm4, xmm7
00153407 66 0F 5A E4 cvtpd2ps xmm4, xmm4
0015340B F3 0F 58 24 83 addss xmm4, [ebx+eax*4]
00153410 F3 0F 11 24 83 movss [ebx+eax*4], xmm4
00153415 0F 28 E0 movaps xmm4, xmm0
00153418 F3 0F 59 E1 mulss xmm4, xmm1
0015341C F3 0F 5C E0 subss xmm4, xmm0
00153420 F3 0F 58 24 82 addss xmm4, [edx+eax*4]
00153425 F3 0F 11 24 82 movss [edx+eax*4], xmm4
0015342A 03 4E 30 add ecx, [esi+0x30]
0015342D 40 inc eax
0015342E 23 86 DC 02 00 00 and eax, [esi+0x2DC]
00153434 3B 4E 40 cmp ecx, [esi+0x40]
00153437 7C B7 jl 0x001533F0

Notes: in the cases I explored:
[esi+0x40] is 0x4000
[esi+0x2dc] is 0xff
[esi+0x30] is 0x200
A typical value for ecx on entry is 0x2d14.

There is a 16384 entry table of single precision floats pointed to by [esi+0x38]. The table is a precomputed square wave with about 15 overtones, but it includes only from 25% to 50% of the complete cycle. That is, if we're dealing with a waveform where the first 50% is at one extreme and the last 50% is at the other extreme, they only bother to store the central 50%. The stored waveform gets scanned in reverse to generate a complete waveform.

Just to make that clear, in the case of a cosine wave, going from 0 to pi takes you through the entire range of +1 to -1. If you've stored a cosine in a table, you don't need to store the values from pi to 2*pi, you could just scan your table in the reverse direction to go from -1 to +1.
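That half-table trick can be sketched like this (my own illustration; Arturia's actual indexing scheme is unknown):

```python
import math

# Store only the first half-cycle of a cosine (0..pi); the second half
# (pi..2*pi) is the same table read in reverse.
N = 256
half_table = [math.cos(math.pi * i / N) for i in range(N + 1)]

def cos_from_half_table(phase):
    """phase in [0, 1) covers one full cycle using only the half table."""
    x = (phase % 1.0) * 2 * N          # position within a full 2N-sample cycle
    i = int(x)
    # Forward scan for the first half, reverse scan for the second half:
    return half_table[i] if i <= N else half_table[2 * N - i]
```

Storing half the cycle halves the table memory at the cost of one comparison per lookup.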

Their table consists of the transition of a square wave from the minimum value to the maximum value. It starts out flat, then we see some ringing, then it sweeps to the maximum value with ringing, then the ringing dies down. The low value centers around 0.0 and the high value centers around 1.0. That is, the range goes from 0 to 1, as opposed to +/-1 or +/-0.5, which I found odd; I'd have expected a waveform centered around 0... Here's an image:

Now it's important to note the waveform only has overtones up to around 15 times the base frequency; I counted 7 minima on the left half. I located other similar waveforms stored, but they're even bigger (more than 16384 samples) and can have fewer overtones. So it's clear their approach involves wavetables, but they carefully eliminate higher frequencies in order to avoid the aliasing issue.

I'm uncertain exactly how Arturia uses the stored transition table. I didn't find any other recognizable waveform (sine, pulse, triangle for example) although I looked. It did occur to me they could create a pulse of variable duty cycle by just sitting on a steady level for some arbitrary time before sweeping through the waveform data. They can also create a triangle wave by sweeping slowly through the transition part. But I didn't confirm they're doing this approach.

My take, though, is they're not doing anything magical about emulating the analog circuitry. I had hoped there was some deep insight into the theoretical side that I could learn, but I'm no longer convinced they're doing anything impressive. The key insight I'm convinced of is that they make sure there are no high frequency components introduced into the waveforms in the first place, so there is no chance for aliasing. My attempts to synthesize audio all had that problem, as I never realized one has to avoid that issue.

Example: suppose you've got a sine wave stored in a 1024-sample table. It repeats, so you can generate a sequence of audio samples by just stepping through your table and outputting the samples to a digital-to-analog converter and then to an amplified speaker. You can change the frequency by changing your step through the waveform.

It ought to be clear that this will work fine as long as your step is comfortably less than 512 samples. But if your step goes beyond 512, the effect is to create an alias frequency instead of the intended frequency. I believe the step would be identical to 1024 minus the large step. So a step of 600 would be as if your step is 424. Essentially you're stepping backwards through the sinewave table instead of forwards, but that's irrelevant since the sine is symmetric.

Now if your waveform table has higher-frequency components, say a sine at 2x the base frequency, you can't use any step bigger than 256, because *that* would cause aliasing distortion. The limits get worse very quickly as you add higher-frequency components to your wavetable. If you've got a 16x frequency component, you can't step more than 32 samples at a time, which is small compared to the 512-sample step we were permitted with the original sine wave.
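The mirrored-alias claim above (a step of 600 producing the same tone as a step of 424) can be checked directly. Illustrative code, names mine:

```python
import math

TABLE_SIZE = 1024
sine_table = [math.sin(2 * math.pi * i / TABLE_SIZE) for i in range(TABLE_SIZE)]

def wavetable_osc(step, n_samples):
    """Step through the table; frequency = step / TABLE_SIZE * sample_rate."""
    out, pos = [], 0.0
    for _ in range(n_samples):
        out.append(sine_table[int(pos) % TABLE_SIZE])
        pos += step
    return out

# A step of 600 yields the same aliased tone as a step of 1024 - 600 = 424,
# scanned in the opposite direction (so the samples come out sign-flipped):
a = wavetable_osc(600, 64)
b = wavetable_osc(424, 64)
```

Since sin(2*pi*600*n/1024) = -sin(2*pi*424*n/1024), each sample of `a` is the negative of the corresponding sample of `b`: the same pitch, which is all the ear cares about here.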

Digital filtering is very easy in terms of CPU power; a 2-pole Butterworth filter involves just a few multiplies and additions. The Moog synthesizer worked by having various oscillators (sawtooth, triangle, sine, pulse/square) fed through a low-pass filter whose cutoff frequency changed with an ADSR envelope. I believe that if you create your oscillator sources carefully, without introducing alias distortion, performing the final digital low-pass filter is trivial and won't introduce any distortion itself.
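For reference, a 2-pole Butterworth low-pass really is only a few multiplies and adds per sample. A minimal sketch using the standard bilinear-transform biquad coefficients (my own code, not Arturia's):

```python
import math

def butterworth2_lowpass(samples, cutoff, fs=48000.0):
    """2-pole Butterworth low-pass as a biquad: each output sample costs
    five multiplies and four adds. Coefficients come from the standard
    bilinear-transform recipe with Q = 1/sqrt(2)."""
    w0 = 2 * math.pi * cutoff / fs
    alpha = math.sin(w0) * math.sqrt(2) / 2   # sin(w0) / (2*Q), Q = 1/sqrt(2)
    cosw = math.cos(w0)
    a0 = 1 + alpha
    b0 = (1 - cosw) / 2 / a0
    b1 = (1 - cosw) / a0
    b2 = b0
    a1 = -2 * cosw / a0
    a2 = (1 - alpha) / a0
    out = []
    x1 = x2 = y1 = y2 = 0.0
    for x in samples:
        y = b0 * x + b1 * x1 + b2 * x2 - a1 * y1 - a2 * y2
        x2, x1 = x1, x
        y2, y1 = y1, y
        out.append(y)
    return out
```

Sweeping `cutoff` per-block under an ADSR envelope (recomputing the five coefficients each time) is the digital analogue of the Moog's voltage-controlled filter.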

My error was always ignoring the introduction of unwanted aliased noise frequencies. Avoid that, and the sound remains perfectly clean. The rest of Arturia's software is just detail, such as the user interface, plus an attempt to replicate the behaviour of the real Moog synthesizer precisely. That's not something I'm so hung up on. It is very easy, with a little bit of coding, to create Moog-like sounds.

Hope this comment is useful to someone.


##### Super Moderator
Staff member
Great hacking job, Dashxdr, to locate those waveforms.

Yes, it makes sense that they would omit harmonics above a certain frequency. Hence the squiggles before and after the transition.

To mimic a square wave you would have to add more harmonics, mathematically speaking.

#### dashxdr

##### Newbie level 6
> I read an article pointing out how the commercial standard of 44.1 kHz is too low a rate for digital sampling of music. Because many waveforms in the audible range contain ultrasonic harmonics. Suppose you sample a 21 kHz sine wave at 44.1 kHz... you obtain an audible tone at 2.1 kHz (I believe it said).
I wanted to comment on this. The issue of the Nyquist frequency being the cutoff has always bugged me. The Nyquist frequency is 1/2 the sampling rate, and it's clear you can't represent a frequency higher than that. But what isn't made clear is the distortion even at the Nyquist frequency. You could be sampling a sine wave anywhere along its cycle: everything works perfectly if you're sampling it at the peaks (max and min), but you could just as well be sampling it at the zero crossings, or anywhere in between. It depends on the phase of the sine wave with respect to the instant of sampling. If your sine wave is slightly off from the Nyquist frequency, you will get beats as the phase drifts along. This is probably where that alias frequency you mention comes from.

If we take 1/2 the Nyquist frequency as the cutoff, things get much better. That gives us a step of 1/4 of the waveform: as, say, the even samples move toward the zero crossing, the odd samples move toward the maximum. This corresponds to sine and cosine waves 1/4 cycle out of phase. A Fourier analysis produces a complex number representing the power at each frequency; the real and imaginary parts represent the cosine and sine amounts (I might have them backwards). When we're sampling a sine at the Nyquist frequency, we only get one component, and the other is lost.

A sine at any frequency carries two pieces of information: its amplitude and its phase.
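Sampling exactly at the Nyquist frequency makes this loss concrete: the samples of sin(pi*n + phase) collapse to (-1)^n * sin(phase), so only one of the two pieces of information survives. A small check (my own code):

```python
import math

def sample_at_nyquist(phase, n_samples=16):
    """Sample sin(2*pi*(fs/2)*t + phase) at rate fs: with t = n/fs the
    argument is pi*n + phase, so every sample is (-1)**n * sin(phase)."""
    return [math.sin(math.pi * n + phase) for n in range(n_samples)]

peaks = sample_at_nyquist(math.pi / 2)   # sampling lands on the peaks: +/-1
zeros = sample_at_nyquist(0.0)           # sampling lands on the zero crossings: ~0
```

Same input amplitude in both cases, yet the sampled amplitude ranges from full scale down to nothing depending purely on phase.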

#### sky_123

> If your sine wave is slightly off from the Nyquist frequency, you will get beats, as the phase drifts along.
What beats do you get? Sure, the amplitude will depend on the points in time at which the waveform is sampled, but if your signal is below the Nyquist frequency, I can't see how you can get beats, which would imply mixing.

(21kHz sampled at 44.1kHz, and the freq spectrum)


#### dashxdr

##### Newbie level 6
> What beats do you get? Sure the amplitude will depend on the points in time that the waveform is sampled, but if your signal is below the Nyquist frequency, I can't see how you can get beats, which implies mixing.
The sampling itself introduces the beats. I think of it this way: imagine the frequency is 0.999999 times the Nyquist frequency. Every cycle there is a small residue as you move through the sine wave. If you start out aligned with the maxima and minima, some time later you'll drift to the point where you're aligned with the zero crossings. While you're aligned with the zero crossings there is no signal output; then it starts getting louder again.
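This drifting alignment shows up numerically: for a tone just below Nyquist, the magnitudes of the samples trace a slow envelope even though the input amplitude is constant (illustrative code, my own names and numbers):

```python
import math

fs = 48000.0
f = 0.499 * fs                 # just below the Nyquist frequency fs/2
s = [math.sin(2 * math.pi * f * n / fs) for n in range(1000)]

# Algebraically |s[n]| = |sin(2*pi*0.001*n)|: the sampled amplitude swells
# and fades with a 500-sample period, although the input tone is steady.
quiet = max(abs(v) for v in s[0:10])      # near a null of the envelope
loud = max(abs(v) for v in s[245:255])    # near a peak of the envelope
```

Whether that envelope survives to the speaker depends on the reconstruction filter, which is essentially the disagreement in this exchange.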

I couldn't view those images, by the way. Clicking on them in my browser just yields black.

#### sky_123

Unfortunately they open fine for me on Firefox and IE.
There's no harmonic or beat at all in the images. Sure, there is a change in amplitude, but if you do the same for any frequency up to 20 kHz, I don't believe you can ever hear it. The frequency spectrum in the image shows there is no beat or harmonic.
Below is the same exercise for 20 kHz (in a slightly better color scheme).

The reconstructed amplitude is faithful. Do you believe it is possible to hear this less-than-1 dB difference in amplitude? And note there is definitely no beat frequency or any harmonics.
However, I'm not an audio expert so this is a layman's opinion.
And if I repeat the experiment at another high frequency (18 kHz), the delta is now a tiny fraction of 1 dB, and totally impossible for the ear to notice.

> The sampling itself introduces the beats.
It won't, as long as you band-limit to 20 kHz before sampling, of course. But we're clear on that.


#### dashxdr

##### Newbie level 6
> And if I repeat the experiment at another high frequency (18 kHz) by now the delta is a tiny fraction of 1 dB, and totally impossible for the ear to notice.
I'm pretty sure I couldn't hear the small sideband (it's visible in your second image), but maybe someone could. The effect would be most pronounced right near the 22,050 Hz frequency, I'd guess.

The main point is sampling itself isn't perfect, and (evidently) even at CD quality some people aren't satisfied.

Doubling the sample rate would probably make everyone happy. DVD blanks store 4.3 GB, plenty of extra room when CDs are only around 700 MB.

What's that software that generates those images? It looks pretty elaborate.

#### sky_123

There isn't a sideband; it is just a visible artifact of the particular FFT that was used. Even if it were there (e.g. for a real oscillator with some noise), it would be inaudible because the tone is 90 dB or so louder.
For the general population CD is high quality, but I'm with you that 24-bit/96 kHz would be preferable. I guess the business case for it is weak. If I had the choice of, say, SACD or CD, I would certainly pick the former for the few bits of music I might buy, but it's just not available for most commercial music.

The program is an old piece of software (no longer available) called Cool Edit. Its current incarnation is Adobe Audition, but I've never used that.

#### dashxdr

##### Newbie level 6

Anyway... the issue I've been puzzling over is how to generate the raw sawtooth, triangle, pulse or square wave that can then be fed to the filter stage(s) to produce musical sound effects. The naive approach, where you produce samples such as:
function saw(x) { return 2 * (x - Math.floor(x) - 0.5); }
at the sampling rate, will produce artifacts. SOLUTION: oversampling. You oversample, say, 16x, then digitally low-pass the values with a cutoff around 16 kHz, then throw away 15 out of every 16. The resulting values have much-reduced aliasing artifacts. The waveform functions you call, like saw(), stay the same.
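A sketch of this oversample, filter, decimate idea (my own code; the cutoff, 2-pole filter and all names are arbitrary choices for brevity, and a real decimator would use a much steeper filter):

```python
import math

def naive_saw(phase):
    """The naive sawtooth: -1..+1 over each unit of phase, full of harmonics."""
    return 2.0 * (phase - math.floor(phase) - 0.5)

def oversampled_saw(freq, fs=48000.0, n_out=256, factor=16):
    """Generate the naive saw at factor*fs, low-pass it below fs/2 with a
    simple 2-pole biquad, then keep every `factor`-th sample."""
    hi_fs = factor * fs
    # 2-pole low-pass (bilinear-transform biquad, Q = 1/sqrt(2)),
    # cutoff placed a bit under the final Nyquist frequency fs/2:
    w0 = 2 * math.pi * (0.4 * fs) / hi_fs
    alpha = math.sin(w0) * math.sqrt(2) / 2
    cosw = math.cos(w0)
    a0 = 1 + alpha
    b0 = (1 - cosw) / 2 / a0
    b1 = (1 - cosw) / a0
    b2 = b0
    a1 = -2 * cosw / a0
    a2 = (1 - alpha) / a0
    out = []
    x1 = x2 = y1 = y2 = 0.0
    phase = 0.0
    for n in range(n_out * factor):
        x = naive_saw(phase)
        phase += freq / hi_fs
        y = b0 * x + b1 * x1 + b2 * x2 - a1 * y1 - a2 * y2
        x2, x1 = x1, x
        y2, y1 = y1, y
        if n % factor == factor - 1:
            out.append(y)          # decimate: keep 1 sample out of `factor`
    return out

wave = oversampled_saw(440.0)
```

As FvM notes below, this reduces rather than eliminates aliasing: energy above factor*fs/2 still folds back, just at a much lower level.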

Another more cumbersome approach is to sum up the raw sine waves + overtones to produce the sawtooth (or whatever) waveform, and you stop producing overtones once you reach the nyquist frequency... but I gave up on that method.

-Dave


#### d123

Hi,

A VCO chip like the XR2209 is easy to use for square and triangle waves. A square-wave output (or a 555) can switch a BJT inverter with a capacitor from collector to ground, charged linearly by a simple two-transistor current source (or an LM334), to get a sawtooth. Look for ramp-generator and wave-shaping circuits. Half of a hex-inverter package plus resistors and a capacitor makes a square-wave oscillator too, and the leftover inverters can buffer the output.

#### Attachments

• 187.3 KB

#### FvM

##### Super Moderator
Staff member
> function saw(x) {return 2*(x - Math.floor(x)-.5);}
> at the sampling rate will produce artifacts. SOLUTION: Oversampling. You oversample say 16x, then digitally low-pass the values with a cutoff around 16 kHz, then throw away 15 out of 16 of them. The values then will have much reduced aliasing artifacts. The waveform functions you call like saw() stay the same.
You didn't specify what the 1x and 16x sampling frequencies are; I presume 1x is something like 44 or 48 kHz. If so, why oversample? You can apply a digital filter in the 1x domain directly; there are no implementation limits for a digital filter. Oversampling is typically used for ADCs, due to the implementation limits of analog filters.

#### FvM

##### Super Moderator
Staff member
I realize that the problem is to band-limit the generated signal before sampling it. If the modelled signal has unlimited bandwidth, e.g. ideal square wave or sawtooth, oversampling reduces the alias signal magnitude by the oversampling factor, but still generates alias components.