IC inputs that are driven by a clock signal will have some variation in their exact switching point. Between "logic 0" and "logic 1" there's a gray area where an input may read 0 or may read 1 (undefined). If a clock signal takes a long time to cross this gray area, you get a large time window in which you don't know exactly when the clocking event occurs (= timing uncertainty). Or, with multiple inputs that are clocked simultaneously, some may be clocked while others are not (yet).
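You can put rough numbers on that timing uncertainty: it's just the width of the gray area divided by the edge's slew rate. A minimal sketch, using hypothetical CMOS-style threshold values (the V_IL/V_IH numbers here are assumptions for illustration, not from any particular datasheet):

```python
# Hypothetical input thresholds: below V_IL the input reads 0,
# above V_IH it reads 1, and in between the result is undefined.
V_IL = 0.8   # volts (assumed)
V_IH = 2.0   # volts (assumed)

def uncertainty_window_ns(slew_v_per_ns):
    # Time the edge spends inside the undefined (gray) region:
    # gray-area width divided by how fast the voltage is changing.
    return (V_IH - V_IL) / slew_v_per_ns

print(uncertainty_window_ns(0.1))   # slow edge (0.1 V/ns): 12 ns of uncertainty
print(uncertainty_window_ns(10.0))  # fast edge (10 V/ns): 0.12 ns
```

The point is the scaling: making the edge 100x steeper shrinks the window of "don't know when it clocked" by the same factor.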
Also, if there's some noise on the power supply (as is often the case), a slowly changing clock that is meant to be a clean 0->1 transition might be read as a quick 0->1->0->1 sequence.
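You can simulate that double-clocking effect with a few lines of code: ride a small ripple on top of a slow ramp and a fast step, digitize both against a single threshold, and count logic-level changes. All the numbers (3.3 V supply, 0.3 V ripple, the sine "noise" stand-in) are made up for illustration:

```python
import math

V_DD = 3.3
V_TH = V_DD / 2          # assumed single input threshold
NOISE_AMP = 0.3          # assumed ripple coupled onto the clock line

def noise(i):
    # Deterministic stand-in for supply noise: a small sine ripple.
    return NOISE_AMP * math.sin(0.9 * i)

def count_transitions(samples):
    # How many times the digitized signal changes logic level.
    bits = [1 if v > V_TH else 0 for v in samples]
    return sum(1 for a, b in zip(bits, bits[1:]) if a != b)

N = 200
# Slow edge: linear ramp from 0 V to V_DD over the whole window.
slow = [V_DD * i / (N - 1) + noise(i) for i in range(N)]
# Fast edge: clean step at the midpoint.
fast = [(V_DD if i >= N // 2 else 0.0) + noise(i) for i in range(N)]

print("slow edge transitions:", count_transitions(slow))  # several spurious 0<->1 flips
print("fast edge transitions:", count_transitions(fast))  # exactly one clean transition
```

While the slow ramp loiters inside the noise band around the threshold, the ripple drags it back and forth across the switching point, so the receiver sees multiple edges. The fast step spends essentially no time there, so the same noise can't cause a false transition.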
In digital logic that's all very undesirable, so you want clock signals to make 0<->1 transitions as fast as possible (steep edges). That naturally translates into a signal that looks like a square wave. Of course nothing is perfect, so in real-world circuits you're more likely to see a signal that looks like a heavily flattened sine wave.