dBm is dB relative to 1 milliwatt, i.e. P(dBm) = 10*log10(P / 1 mW). The power can be above or below 1 mW, so dBm values can be positive or negative. Noise should hopefully be lower than signal, so the ratio 10*log10(Pnoise/Psignal) should be negative for a useful device.
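For what it's worth, the conversion is easy to sanity-check in a few lines of Python (a quick sketch with made-up powers, not from any datasheet):

```python
import math

def mw_to_dbm(p_mw):
    """Convert power in milliwatts to dBm (reference is 1 mW)."""
    return 10.0 * math.log10(p_mw / 1.0)

def dbm_to_mw(p_dbm):
    """Convert power in dBm back to milliwatts."""
    return 10.0 ** (p_dbm / 10.0)

print(mw_to_dbm(1.0))    # 0.0 dBm  (exactly 1 mW)
print(mw_to_dbm(100.0))  # 20.0 dBm (above 1 mW -> positive)
print(mw_to_dbm(0.001))  # -30.0 dBm (below 1 mW -> negative)
```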
One rating manufacturers give in the specs is the "full scale" power output, which is more or less the maximum power available. This is the power at which the output stage just begins to saturate, i.e. the gain Vout/Vin is 1 dB smaller than it would be at lower input levels, a condition called 1 dB "compression". They frequently mention 3 dB compression as well, meaning even more saturated. Roughly speaking, at this level the amplifier is close to operating with the output transistors either fully on or fully off, so they have the least dissipation. Operation at this level also begins to generate harmonics, because the output waveform is no longer sinusoidal.
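As a concrete illustration (with invented numbers, since I don't have a specific datasheet in front of me), compression is just the gap between the small-signal gain and the gain you actually measure at high drive:

```python
def gain_compression_db(small_signal_gain_db, pin_dbm, pout_dbm):
    """How far the measured gain has fallen below the small-signal gain.

    Returns dB of compression; ~1.0 means the amplifier is at its
    1 dB compression point.
    """
    measured_gain_db = pout_dbm - pin_dbm
    return small_signal_gain_db - measured_gain_db

# Hypothetical sweep of a 20 dB amplifier starting to saturate:
for pin, pout in [(-20.0, 0.0), (-5.0, 14.8), (0.0, 19.0)]:
    print(pin, "dBm in ->", gain_compression_db(20.0, pin, pout), "dB compressed")
```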
I think they usually give the third-order intercept point as a point of comparison between amplifiers. The third-order product is (likely) the biggest distortion product when there is only modest distortion of the output waveform.
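The usual back-of-envelope use of the intercept point is to estimate how far below the carrier the third-order products sit. A sketch with hypothetical numbers; the textbook 3:1 slope rule it relies on only holds well below compression:

```python
def imd3_level_dbc(pout_per_tone_dbm, oip3_dbm):
    """Rough third-order product level relative to the carrier, in dBc.

    Third-order products rise 3 dB for every 1 dB of output, so they
    sit 2*(OIP3 - Pout) below the tones while the amp is still linear.
    """
    return -2.0 * (oip3_dbm - pout_per_tone_dbm)

# Hypothetical amp with OIP3 = +30 dBm driven to +10 dBm per tone:
print(imd3_level_dbc(10.0, 30.0))  # -40.0 dBc
```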
I know less about ACPR, perhaps someone else will contribute here. What I can offer is that these amplifiers can be used in communications schemes where several carrier frequencies (channels) propagate along the same wire. It is useful to know how much crosstalk the amplifier will generate when it is used to amplify the composite signal. Presumably, for the manufacturer's intended application, 550 kHz is the channel separation in frequency. E.g. for a 50 MHz carrier, the next channels might be 50.550 MHz, 51.1 MHz, etc. I assume that -21.4 dBc means that the adjacent channel has some signal leaking in from the carrier at a level given by 10*log10(Padj/Pc) = -21.4.
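If that reading is right, the -21.4 dBc figure is just a power ratio between the adjacent channel and the carrier. A sketch with invented powers chosen to land near the spec number:

```python
import math

def acpr_dbc(p_adjacent_mw, p_carrier_mw):
    """Adjacent-channel power relative to the carrier, in dBc."""
    return 10.0 * math.log10(p_adjacent_mw / p_carrier_mw)

# If the carrier channel carries 100 mW and 0.72 mW leaks into the
# channel 550 kHz away, the ratio comes out near -21.4 dBc:
print(acpr_dbc(0.72, 100.0))  # about -21.4
```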