For any given precision (16-bit, 32-bit, 64-bit floating point, etc.) there will be an optimised finite polynomial approximation, sometimes with only 4 to 6 terms. A good source for the coefficients is the maths library source code of open-source compilers.
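As a minimal sketch of the idea, here is a 6-term odd polynomial for sin(x) in nested (Horner) form. The coefficients below are plain truncated-Taylor values for illustration; a production math library would use slightly tweaked (minimax) coefficients for the target precision, as described above.

```python
import math

def sin_poly(x):
    """Six-term odd polynomial for sin(x), nested (Horner) form.

    Coefficients are the truncated Taylor values 1/6, 1/20, 1/42,
    1/72, 1/110 (ratios of successive factorials); a real library
    would substitute optimised minimax coefficients here.
    Intended for x in [0, pi/2].
    """
    x2 = x * x
    return x * (1 - x2/6 * (1 - x2/20 * (1 - x2/42 * (1 - x2/72 * (1 - x2/110)))))
```

Even these un-optimised coefficients give roughly 1e-7 worst-case error on [0, pi/2]; the point of the fitted coefficients is to squeeze the same accuracy out of fewer terms.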
The Taylor series is easy to derive but not a very good method of calculating sines, since a Taylor series is expanded about a single point. If you have a choice of methods, you might want to look at Chebyshev polynomials; "Applied Numerical Methods" by Carnahan, Luther, and Wilkes is a painless introduction.
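To make the Chebyshev suggestion concrete, here is a small self-contained sketch: interpolate a function at Chebyshev nodes, recover the series coefficients with the standard cosine sum, and evaluate with the Clenshaw recurrence. The function names (`cheb_coeffs`, `cheb_eval`) are my own, not from the book.

```python
import math

def cheb_coeffs(f, a, b, deg):
    """Chebyshev interpolation coefficients for f on [a, b]."""
    n = deg + 1
    # Chebyshev nodes on [-1, 1], then mapped into [a, b]
    nodes = [math.cos(math.pi * (k + 0.5) / n) for k in range(n)]
    fvals = [f(0.5 * (b - a) * t + 0.5 * (b + a)) for t in nodes]
    coeffs = []
    for j in range(n):
        # c_j = (2/n) * sum_k f(x_k) * T_j(x_k), with T_j(cos u) = cos(j*u)
        s = sum(fvals[k] * math.cos(math.pi * j * (k + 0.5) / n) for k in range(n))
        coeffs.append(2.0 * s / n)
    coeffs[0] *= 0.5  # the constant term carries a factor 1/2
    return coeffs

def cheb_eval(coeffs, a, b, x):
    """Evaluate the Chebyshev series at x via the Clenshaw recurrence."""
    t = (2.0 * x - a - b) / (b - a)   # map x into [-1, 1]
    b1 = b2 = 0.0
    for c in reversed(coeffs[1:]):
        b1, b2 = 2.0 * t * b1 - b2 + c, b1
    return t * b1 - b2 + coeffs[0]
```

Because the error of a Chebyshev fit is spread evenly over the whole interval rather than concentrated away from one expansion point, a degree-7 fit of sin on [0, pi/2] is already accurate to about 1e-7.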
1) the sine of an arbitrary argument must be calculated at any time;
2) an oscillator is needed.
In the first case, a polynomial approximation is good. It has the form of a power series, but the coefficients are not exactly the same as in the Taylor series, because they are optimized for some objective (minimum peak error, minimum RMS error) over the interval of interest.
For the sin() function, a polynomial on the interval [0, pi/2] is enough: symmetry and periodicity extend it to any argument.
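The reduction from an arbitrary argument to [0, pi/2] can be sketched like this; a short truncated-Taylor polynomial stands in here for the optimized core polynomial described above, and the function names are illustrative only.

```python
import math

def sin_core(t):
    # stand-in core polynomial, valid for t in [0, pi/2]
    # (a real implementation would use fitted minimax coefficients)
    t2 = t * t
    return t * (1 - t2/6 * (1 - t2/20 * (1 - t2/42 * (1 - t2/72 * (1 - t2/110)))))

def sin_any(x):
    """Reduce x to [0, pi/2] by periodicity and symmetry, then call the core."""
    y = math.fmod(x, 2 * math.pi)
    if y < 0:
        y += 2 * math.pi          # now y is in [0, 2*pi)
    if y < math.pi / 2:           # first quadrant: sin(y)
        return sin_core(y)
    elif y < math.pi:             # second quadrant: sin(pi - y)
        return sin_core(math.pi - y)
    elif y < 3 * math.pi / 2:     # third quadrant: -sin(y - pi)
        return -sin_core(y - math.pi)
    else:                         # fourth quadrant: -sin(2*pi - y)
        return -sin_core(2 * math.pi - y)
```

Note that for very large arguments this naive fmod-based reduction loses precision; real libraries use a more careful range reduction.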
In the second case, the values are calculated sequentially, not in random order. This allows a simpler implementation. One of the simplest is a coupled oscillator, which has the form of a second-order IIR filter with a pole pair on the unit circle and can produce sine and cosine outputs. Look up coupled oscillators in a book on Digital Signal Processing. (Means have to be provided for controlling the amplitude of the oscillation, since rounding errors make it drift slowly.)
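A minimal sketch of such a coupled (quadrature) oscillator: the state vector (c, s) is rotated by a fixed angle per sample, using two multiplies and an add per output. The function name and parameters are illustrative, not from any particular text.

```python
import math

def coupled_oscillator(freq_hz, sample_rate, n_samples):
    """Generate sine samples by rotating the state (c, s) by w radians/sample."""
    w = 2 * math.pi * freq_hz / sample_rate
    k1, k2 = math.cos(w), math.sin(w)
    c, s = 1.0, 0.0               # start at angle 0: cos = 1, sin = 0
    out = []
    for _ in range(n_samples):
        out.append(s)             # sine output; c is the cosine output
        # rotate the state vector by w (the pole pair sits on the unit circle)
        c, s = c * k1 - s * k2, s * k1 + c * k2
        # in a long-running oscillator, periodically renormalize so that
        # c*c + s*s stays equal to 1, to counter rounding-error drift
    return out
```

Over short runs the amplitude drift from rounding is negligible; over millions of samples the renormalization mentioned in the comment becomes necessary.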