To increase execution speed in high-performance CPUs, one bottleneck is the relatively slow memory access time, so CPUs prefetch the next several instructions from RAM sequentially.
However, the next few instructions are not guaranteed to execute in sequence - there could be a conditional jump somewhere. When that happens, all of the prefetched opcodes are wasted.
To improve on this, some prefetch schemes try to 'predict' which path the instruction sequence will take, and fill the FIFO with instructions from that path.
That's my theory.
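One common hardware technique along these lines is a 2-bit saturating counter branch predictor. Here's a minimal sketch in Python (purely illustrative - real predictors are hardware state machines, and the class name and loop example are my own invention):

```python
# Illustrative 2-bit saturating counter branch predictor.
# States 0-1 predict 'not taken'; states 2-3 predict 'taken'.
# Two consecutive mispredictions are needed to flip the prediction,
# so an occasional wrong-way branch doesn't immediately retrain it.

class TwoBitPredictor:
    def __init__(self):
        self.state = 2  # start as weakly 'taken'

    def predict(self):
        return self.state >= 2  # True = predict the branch is taken

    def update(self, taken):
        # Nudge the counter toward the actual outcome, saturating at 0 and 3.
        if taken:
            self.state = min(3, self.state + 1)
        else:
            self.state = max(0, self.state - 1)

# Example: a loop's backward branch is taken 9 times, then falls through once.
predictor = TwoBitPredictor()
outcomes = [True] * 9 + [False]
correct = 0
for taken in outcomes:
    if predictor.predict() == taken:
        correct += 1
    predictor.update(taken)
print(correct, "of", len(outcomes), "predicted correctly")  # 9 of 10
```

With a predictor like this, the prefetch FIFO is filled from the predicted path, so a correctly predicted jump wastes nothing; only the occasional misprediction forces a flush.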
This could also be applied to other systems where foreknowledge of the next required data is beneficial.