The I*R drop is going to be time-varying and data-
varying, and you are concerned not only with the
time-averaged "I" but with its worst-case dip at
exactly the wrong time - that dip sets your outlier
delay and the extent of your timing nondeterminism.
For example, a logical event that causes chip-scale
switching will hammer the bus far beyond the normal
time-averaged small-scale switching current-slugs.
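To make the average-versus-worst-case distinction concrete, here is a
minimal Python sketch. Every number in it is an invented placeholder
(grid resistance, event size, window length), not from any real design:
it rides a chip-scale current slug on top of baseline switching current
and compares the time-averaged resistive droop to the worst-case dip.

    import numpy as np

    # All values hypothetical, purely for illustration.
    R_GRID = 0.05        # ohm, lumped Vdd + Vss grid resistance
    t = np.linspace(0.0, 100e-9, 10_000)          # 100 ns window
    i_base = 1.0 + 0.1 * np.random.randn(t.size)  # A, small-scale switching
    i_slug = np.where((t > 40e-9) & (t < 45e-9), 4.0, 0.0)  # chip-scale event
    i = i_base + i_slug

    droop = i * R_GRID   # instantaneous resistive droop across the grid
    print(f"time-averaged droop: {droop.mean() * 1e3:5.1f} mV")
    print(f"worst-case dip:      {droop.max() * 1e3:5.1f} mV")
    # The worst-case dip, not the average, bounds the outlier path delay.

With these placeholder numbers the averaged droop sits near 60 mV while
the worst-case dip approaches 270 mV - the timing outlier lives in that
gap.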
To get timing signed off righteously, you need the
minimum Vdd span (and you must also include the
internal VSS I*R rise). The worst-case rail-span
collapse is

(Vdd0 - Vss0) - I*(Rvdd + Rvss) - (dI/dt)*(Lvdd + Lvss)

(note the inductive term subtracts too: a rising
current slug drops extra voltage across the package
inductance), and that worst-case span needs to be no
worse than the Vdd (really Vdd-Vss) assumption in the
timing models, or the timing analysis is rendered
bogus as far as design-integrity proof goes.
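As a back-of-the-envelope sign-off check of that expression (again,
every number below is a hypothetical placeholder, not from any real
package or grid), taking peak I and peak dI/dt as coincident -
pessimistic, but pessimism is the safe direction for a bound:

    # All values hypothetical, purely for illustration.
    VDD0, VSS0 = 0.90, 0.00        # V, external rail references
    R_VDD, R_VSS = 0.03, 0.03      # ohm, grid resistance per rail
    L_VDD, L_VSS = 0.5e-9, 0.5e-9  # H, package/bump inductance per rail
    I_PK = 2.0                     # A, worst-case current slug
    DIDT_PK = 3.0e7                # A/s, worst-case current ramp
    VDD_TIMING = 0.81              # V, the Vdd-Vss the timing models assume

    # Worst-case rail span per the expression above.
    span_min = (VDD0 - VSS0) \
               - I_PK * (R_VDD + R_VSS) \
               - DIDT_PK * (L_VDD + L_VSS)

    print(f"worst-case rail span: {span_min:.3f} V")
    if span_min < VDD_TIMING:
        print("rail collapses below the timing-model supply - sign-off bogus")
    else:
        print("rail stays above the timing-model assumption - timing holds")

Here span_min comes out at 0.75 V against an assumed 0.81 V, so the
sign-off fails until the grid, the package, or the timing-model corner
is fixed.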