Both definitions you gave are valid. One is the absolute error, which is useful if you are designing a current reference; the other is the error around the mean value, which is what you want for matched circuits that don't require absolute accuracy, only good matching between devices.
The interval width (number of standard deviations) is entirely your choice. If you want nearly all of your chips to perform within spec, use 3 standard deviations; if you can be more relaxed, 1 standard deviation may be enough. For a normal distribution, about 99.7% of samples fall within 3 sigma, about 95.4% within 2 sigma, and about 68.3% within 1 sigma. It's your decision which one to use.
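If you want the exact coverage for any k-sigma interval rather than the memorized figures, a quick sketch (assuming the parameter follows a normal distribution) uses the identity P(|X − μ| ≤ kσ) = erf(k/√2):

```python
# Fraction of a normal distribution within +/- k standard deviations,
# via the Gaussian identity P(|X - mu| <= k*sigma) = erf(k / sqrt(2)).
from math import erf, sqrt

def coverage(k: float) -> float:
    """Probability that a normally distributed sample lands within k sigma of the mean."""
    return erf(k / sqrt(2))

for k in (1, 2, 3):
    print(f"{k} sigma: {coverage(k):.2%}")
# 1 sigma: 68.27%
# 2 sigma: 95.45%
# 3 sigma: 99.73%
```

Note that Monte Carlo results from a real process corner simulation need not be exactly Gaussian, so these percentages are a guideline, not a guarantee.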