nadal1991
Hi,
For a project, I am interested in adaptive sampling approaches for sensors.
I decided to implement the approach described in this paper: http://info.iet.unipi.it/~anastasi/papers/tim09.pdf
The general idea of the approach is to estimate the maximum frequency of the discretely sensed signal (for example, temperature), and then derive an appropriate sampling frequency for the sensor using the Shannon theorem: Sampling_frequency >= 2 * Maximum_frequency.
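In MATLAB terms, that last rule is simply the following (the value of fmax here is only a placeholder for whatever estimate the algorithm produces):
Code:
fmax = 0.01;        % estimated maximum frequency of the signal, in Hz (example value)
Fs_min = 2 * fmax;  % minimum sampling frequency required by the Shannon theorem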
I have no problem understanding the approach and the algorithm used to adapt the sampling frequency (it is quite logical).
My problem is with the computation of the maximum frequency. According to the approach, an initial maximum frequency must be computed from a first set of samples (for example, the first W = 500 samples), and then the maximum frequency must be recomputed after every newly acquired sample (always over a window of length W).
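If I understand it correctly, that sliding-window part would look roughly like the sketch below, where estimate_fmax is a hypothetical helper standing in for the FFT-based computation shown further down:
Code:
W = 500;                              % window length
for n = W+1:length(allSamples)
    window = allSamples(n-W+1:n);     % last W samples, shifted by one each step
    fmax = estimate_fmax(window, Fs); % recompute the maximum frequency
    Fs_new = 2 * fmax;                % updated target sampling frequency
end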
I tried to use the Fast Fourier Transform to find the maximum frequency of the discrete signal. I proceed like this in MATLAB:
Code:
Fs = 1/15;                       % Sampling frequency in Hz (one sample every 15 s)
t = 0:1/Fs:7500-1/Fs;            % Time vector (500 samples)
nfft = 500;                      % Number of samples / FFT length
X = 30 + (35-30).*rand(1,500);   % Random signal between 30 and 35
signal = X - mean(X);            % Remove the DC component before the FFT
Y = fft(signal,nfft);
Y = Y(1:nfft/2);                 % Keep only the positive-frequency half
mx = abs(Y);                     % Magnitude spectrum
f = (0:nfft/2-1)*Fs/nfft;        % Frequency axis in Hz
figure(2);
plot(f,mx);
title('FFT of random discrete dataset');
xlabel('Frequency (Hz)');
ylabel('Magnitude');
[maxVal,maxInd] = max(mx);       % Peak of the magnitude spectrum
maxFreq = f(maxInd)              % Frequency of the strongest component, in Hz
I don't have a real temperature dataset, so I generate a random dataset of 500 samples between 30 and 35.
I obtained a value for the maximum frequency. Then, to test the approach, I appended new samples whose values are very different from the previous ones, in order to create a strong variation in the signal and therefore increase its maximum frequency. But instead of an increase of the maximum frequency, I obtained a decrease.
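Concretely, my test looks roughly like the following (the appended values are only an example; any values far from the 30-35 range would do):
Code:
X2 = [X, 80 + 5.*rand(1,50)];    % append 50 very different samples (example values)
window = X2(end-nfft+1:end);     % keep only the last W = nfft = 500 samples
signal2 = window - mean(window); % remove the DC component, as before
Y2 = fft(signal2,nfft);
mx2 = abs(Y2(1:nfft/2));
[maxVal2,maxInd2] = max(mx2);
maxFreq2 = f(maxInd2)            % new estimate; I expected it to be higher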
This does not seem logical to me. My reasoning is: if I generate strong variations in the acquired signal, then the maximum frequency of the signal should increase, and the sampling frequency of the sensor should therefore increase as well, in order to capture all the dynamics and variations of the phenomenon.
I would like to know whether my reasoning is correct, and if so, why the results I obtained are like this (is it a problem with the computation of the frequency, with the FFT, or with the random dataset used)?
I come from a computer science background, so my lack of knowledge of signal processing theory limits my understanding of this problem.
Thank you in advance for any time you can spend on my questions.
Nad.