The clock frequency divided by 4 gives the instruction rate, and the inverse of that is the instruction cycle time.
Multiply the instruction cycle time by the timer prescale value to get the timer increment time.
Example with a 4MHz clock and 1:8 prescale:
Instruction rate = 4MHz / 4 = 1MHz;
Instruction cycle time = 1 / 1000000 = 0.000001 seconds;
Timer increment time = 0.000001 * 8 (prescale) = 0.000008 seconds;
Counts for 1mS = 0.001 / 0.000008 = 125;
Result: 125 counts of the timer take 1mS.
If you subtract the result from 256 (256 - 125 = 131) and preload the timer with that value in the interrupt routine, you will have a 1mS interrupt you can use as a clock.
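Before any of this runs, Timer_0 itself has to be set up (internal clock source, 1:8 prescale assigned to Timer_0, interrupt enabled). A rough sketch is below; the register and bit names are the usual midrange PIC ones, so check them against your device and compiler headers, and timer0_init is just a name made up for the example.
Code:
/*--- Timer_0 setup sketch (verify register/bit names for your device) ---*/
void timer0_init(void)
{
    T0CS = 0;   /* Timer_0 clocked from the instruction cycle (Fosc/4) */
    PSA = 0;    /* Prescaler assigned to Timer_0 */
    PS2 = 0;    /* Prescale select 010 = 1:8 */
    PS1 = 1;
    PS0 = 0;
    TMR0 = 131; /* Preload so the first overflow arrives after 125 counts = 1mS */
    T0IF = 0;   /* Clear any pending Timer_0 overflow flag */
    T0IE = 1;   /* Enable the Timer_0 overflow interrupt */
    GIE = 1;    /* Enable global interrupts */
}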
Code:
/*--- Global ---*/
volatile unsigned int sysclock;
/*--- Interrupt routine ---*/
void interrupt int_vector(void)
{
    if(T0IF)        /* Timer_0 overflow interrupt ~1mS system clock */
    {
        sysclock++;
        TMR0 = 131; /* Preload Timer_0 for the next 1mS interrupt */
        T0IF = 0;   /* Clear the Timer_0 overflow flag */
    }
}
/*--- Delay routine ---*/
void delay_mS(unsigned int ms)
{
    unsigned int timer = sysclock;  /* Note the tick count at entry */

    while((sysclock - timer) < ms)  /* Wait until ms ticks have elapsed */
    {
        ;
    }
}
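Putting it together, a bare-bones usage sketch might look like the code below. timer0_init is the hypothetical setup routine from the sketch above, and TRISB0/RB0 are only placeholder pin names; substitute whatever pin you are actually driving.
Code:
/*--- Example use: toggle a pin every 500mS ---*/
void main(void)
{
    TRISB0 = 0;        /* Placeholder pin set as output */
    timer0_init();     /* Start the 1mS system clock */

    while(1)
    {
        RB0 = !RB0;    /* Toggle the pin */
        delay_mS(500); /* Wait roughly half a second */
    }
}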
You don't have to worry about overflow when subtracting one unsigned from another; the arithmetic wraps around, so (sysclock - timer) in the delay routine still gives the correct elapsed count even after sysclock rolls over.
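If you want to convince yourself, here is a small host-side demonstration of the wrap-around; uint16_t stands in for the 16-bit unsigned int the PIC compiler uses, and the numbers are made up for the example.
Code:
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    uint16_t timer    = 65530U; /* sysclock value when the delay started */
    uint16_t sysclock = 4U;     /* sysclock after wrapping past 65535 */

    /* Unsigned subtraction wraps modulo 65536, so this prints "elapsed = 10" */
    printf("elapsed = %u\n", (unsigned)(uint16_t)(sysclock - timer));
    return 0;
}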
You could also use a macro and let the compiler work it out for itself.
Code:
/*--- Macro to calculate the number of Timer_0 counts in 1mS (1:8 prescale) ---*/
#define FREQ_OSC 4.0 /* Oscillator frequency in MHz */
#define PRESCALE 8.0 /* Timer_0 prescale value */
#define TMR_1mS ((1000.0)/(((1.0)/((FREQ_OSC)/(4.0)))*(PRESCALE))) /* 125 counts with the values above */
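The preload in the interrupt routine can then be written in terms of the macro, and the compiler reduces it to a constant at build time (with the values above it still works out to 131), for example:
Code:
if(T0IF) /* Timer_0 overflow interrupt ~1mS system clock */
{
    sysclock++;
    TMR0 = (unsigned char)(256.0 - (TMR_1mS)); /* Constant expression, folds to 131 */
    T0IF = 0;
}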