ws6transam
I've been looking at the problem this morning, and finally hit upon a low-tech method of solving the real problem, which isn't the creation of the impulse, but is in the calculation of the true sample rate.
Since the sample rate varies by a slight amount, the pulses move back and forth depending on whether the actual sample rate is slightly higher or lower than the nominal rate, which is 50 samples/sec.
One thing I could do (and did do) was go through the fileset until I identified the next file interval that showed the 20 msec glitch. It turns out the glitch occurs every fifth file. If the sample rate is slightly faster, the fifth file in the sequence will "jump" forward 20 milliseconds. Since we have 1091 seconds of time history across the five files, and the nominal sample rate is 50 samples/sec, if the time history jumps forward one sample, we end up with (1091 × 50) + 1 samples per 1091 seconds. That means the true sample rate = 50.000917 samples/second. It doesn't sound like a problem, but it does skew each file by 4 milliseconds. When we concatenate five files, the end of the record is off by 20 milliseconds. If we concatenate fifteen files (about an hour's worth of time history), we see 60 milliseconds of skew, which, in the case of seismic data, might be enough to throw your calculated location off by a couple of km.
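The arithmetic above can be sketched in a few lines (all values are taken straight from the numbers in this post):

```python
# Compute the true sample rate implied by one extra sample per five-file span.
nominal_rate = 50            # samples/sec (nominal)
span = 1091                  # seconds spanned by the five files
extra = 1                    # one extra sample observed over that span

true_rate = (span * nominal_rate + extra) / span
print(true_rate)             # ≈ 50.000917 samples/sec

# Skew accumulated by the rate error:
skew_per_span = extra / nominal_rate    # 0.02 s (20 ms) per five files
skew_per_file = skew_per_span / 5       # 0.004 s (4 ms) per file
skew_15_files = skew_per_file * 15      # 0.06 s (60 ms) per ~hour
print(skew_per_file, skew_15_files)
```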
What I can probably do is skip the curve fitting altogether; I can calculate the sample rate directly by counting impulses across six adjacent files and dividing the total number of samples by the time those impulses span. This assumes that there haven't been any lost samples; thus far I have not observed any. However, the idea of oversampling the curve to infer its real shape by using identically shaped impulses still sounds pretty neat. It just doesn't need to be done in my application after all.
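A minimal sketch of that direct-count approach, assuming each timing impulse marks exactly one second and no samples were lost (the helper and the per-file tuples are hypothetical, not from any real file format):

```python
# Estimate the true sample rate by totaling samples and timing impulses
# across several adjacent files, rather than curve fitting.
def true_sample_rate(file_counts):
    """file_counts: list of (samples_in_file, impulses_in_file) tuples."""
    total_samples = sum(s for s, _ in file_counts)
    total_seconds = sum(i for _, i in file_counts)  # 1 impulse = 1 second
    return total_samples / total_seconds

# Example using the numbers from this thread: 1091 seconds containing one
# sample more than the nominal 50 s/sec would predict.
rate = true_sample_rate([(54551, 1091)])
print(rate)  # ≈ 50.000917
```

Averaging over more adjacent files tightens the estimate, since the one-sample quantization error shrinks relative to the total span.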