I'm fairly new to spread spectrum techniques, so even I have doubts about what I am about to say, but this is how I interpret chip rate and symbol rate.
Starting with chip rate:
Imagine the mixer used to modulate the carrier inside your IC is driven by a square wave alternating between 1 and -1. To spread the carrier's information over many different frequencies, the frequency of that modulating square wave is varied over time. This is done by multiplying the square wave's output by a pre-determined pseudo-random code before it goes to the mixer. As an example, let's use the pseudo-random code (1, -1, -1, 1). This code is also time dependent and repeats indefinitely, but for this example let's assume we step through it at 2x the square wave frequency. By doing this multiplication over time we have doubled the square wave frequency. If we instead used the code (1, 1, -1, -1, -1, -1, 1, 1) and stepped through it at the same 2x square wave frequency, we would have halved the square wave frequency. The 1's and -1's of the pseudo-random code are the chips, and the rate at which we step through the code is the chip rate.
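As a rough baseband sketch of the multiplication step (my own simplification: I'm ignoring the mixer and square wave entirely and just showing one data symbol being multiplied chip-by-chip by the repeating code), something like this:

```python
import numpy as np

# The pseudo-random code from the example above, cycled at the chip rate.
pn_code = np.array([1, -1, -1, 1])

num_chips = 8  # generate two full repetitions of the code
chips = np.tile(pn_code, num_chips // len(pn_code))

# One data symbol (+1) multiplied chip-by-chip by the repeating code:
# the transmitted signal now toggles at the (fast) chip rate rather
# than the (slow) symbol rate.
symbol = 1
transmitted = symbol * chips
print(transmitted.tolist())  # -> [1, -1, -1, 1, 1, -1, -1, 1]
```

The point is just that the sign changes in the output are driven by the code, so the bandwidth is set by the chip rate, not the symbol rate.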
For symbol rate: I had thought a symbol is simply a bit, and for a binary scheme like BPSK that is true, so the symbol rate is just bits/second. In general, though, a symbol can carry several bits (QPSK carries 2 bits per symbol), so symbol rate is really symbols/second being transmitted.
The spreading factor ends up being (chips/sec) / (symbols/sec), which is chips per symbol. The more chips used per symbol, the more pseudo-random frequencies are generated per symbol, meaning the information is spread across more of the band. The only downside is that the receiver needs the same pseudo-random code to demodulate the carrier, and for some reason or another my phone can't do voice and data at the same time :-?
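To make chips-per-symbol concrete, here's a small hypothetical sketch (the symbol values and code are made up) that spreads a few BPSK symbols and then despreads them with the same code, which is why both ends have to share it:

```python
import numpy as np

pn_code = np.array([1, -1, -1, 1])   # shared pseudo-random code
symbols = np.array([1, -1, 1])       # data bits mapped to +/-1 (BPSK)

# Spread: hold each symbol for the whole code length and multiply
# chip-by-chip by the code repeated once per symbol.
chips = np.repeat(symbols, len(pn_code)) * np.tile(pn_code, len(symbols))

# Spreading factor = chip rate / symbol rate = chips per symbol.
spreading_factor = len(chips) // len(symbols)
print(spreading_factor)  # -> 4

# Despread: the receiver multiplies by the SAME code and averages over
# each symbol period; the code squares to all 1's, leaving the symbols.
despread = (chips.reshape(-1, len(pn_code)) * pn_code).sum(axis=1) / len(pn_code)
print(despread.tolist())  # -> [1.0, -1.0, 1.0]
```

A receiver with the wrong code would average chips that don't line up, and the symbols wouldn't come back out cleanly.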
I hope that helps, it still confuses the heck out of me.