There's "layers" to this "onion".
As noted, supply droop is not going to be a device-
reliability issue, but it might be a problem for
reliable operation due to a "timing miss".
If you are on the hook for timing closure then
you must meet the "assumptions" built into the
timing models. Supply tolerance is certainly in
there, and newer technologies tend to declare
tighter voltage tolerances. "Back in the day",
5V +/-10% on HCS, ACS logic gave way to +/-5%
tolerances as people started trying to get 0.6um
CMOS to stand up (and many failed to get that
for extended temp range, or succeeded only by
adding process complexity: LDD, halo, etc.).
A -15% tolerance at your end probably runs you
afoul of the foundry-blessed timing models.
At the very least you ought to recharacterize and
rerun timing analysis at whatever core droop you
are actually claiming.
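As a rough illustration of why that recharacterization matters, the alpha-power delay model gives a feel for how much speed a -15% rail costs. The VDD, Vth, and alpha numbers below are illustrative placeholders, not from any particular PDK:

```python
# Sketch: alpha-power-law gate-delay sensitivity to supply droop.
# All device numbers are assumed for illustration, not from a real library.
def gate_delay(vdd, vth=0.35, alpha=1.3):
    """Relative gate delay per the alpha-power law: t_d ~ VDD / (VDD - Vth)^alpha."""
    return vdd / (vdd - vth) ** alpha

vdd_nom = 1.0                                    # nominal core rail (assumed)
ratio = gate_delay(0.85 * vdd_nom) / gate_delay(vdd_nom)
print(f"Delay penalty at -15% VDD: {100 * (ratio - 1):.0f}%")
```

With these assumed numbers the penalty lands around 20%, which is exactly the kind of margin a signoff run at nominal tolerance never saw.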
Core supply droop is going to be logic-pattern-
dependent. Transient droop from best-case idle
to worst-case thrash will have both inductive
and resistive components to the supply deflection,
including off-chip elements like bond-wire and
package (if any) inductance. You do not want to
depend on anything "statistical" or "averaged"
from a test pattern's DIDD; you need to know what
the worst case is, and that the logic still hangs
together there.
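A back-of-envelope way to bound that transient: droop is roughly the resistive IR sag plus the L*dI/dt kick from the supply loop. Every value below (PDN resistance, effective inductance, current step, rise time) is an assumed placeholder; the real numbers come from your package model and PDN extraction:

```python
# Rough worst-case droop bound: resistive plus inductive components.
# All electrical values are hypothetical stand-ins for illustration.
R_pdn = 0.05    # ohms: package + on-die grid resistance (assumed)
L_eff = 0.1e-9  # henries: effective supply-loop inductance,
                # several bond wires in parallel (assumed)
dI = 1.0        # amps: idle-to-thrash current step (assumed)
dt = 5e-9       # seconds: rise time of that step (assumed)

ir_drop = dI * R_pdn             # steady resistive sag
ldidt_drop = L_eff * (dI / dt)   # transient inductive kick
droop = ir_drop + ldidt_drop
print(f"Estimated worst-case droop: {droop:.3f} V "
      f"({100 * droop / 1.0:.0f}% of a 1.0 V rail)")
```

Note how fast the inductive term grows if the current step sharpens: halve dt and the L*dI/dt piece doubles, which is why averaged DIDD numbers tell you nothing about the worst-case deflection.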