What I see is, your clock-measurement rollup has about 20 ps of jitter, and the 4-level data waveform is showing about 60 ps. So you might say that there is 40 ps of "contributed jitter".
With the prime (incoming) jitter a third of the total, there have to be some questions about the test hardware setup.
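One nit on the arithmetic: if the 20 ps on the clock path and whatever the data path adds are uncorrelated random components, they combine in quadrature, so the straight 60 - 20 subtraction actually understates the contributed part. A minimal sketch (Python, assuming both figures are RMS values):

    import math

    def contributed_jitter_rms(total_ps, reference_ps):
        # Quadrature (RSS) subtraction: only valid when the reference jitter and
        # the device-contributed jitter are uncorrelated random components.
        return math.sqrt(total_ps ** 2 - reference_ps ** 2)

    total_ps = 60.0   # 4-level data waveform, from the post
    clock_ps = 20.0   # clock-measurement rollup, from the post

    print(f"linear difference: {total_ps - clock_ps:.1f} ps")                         # 40.0 ps
    print(f"quadrature (RSS) : {contributed_jitter_rms(total_ps, clock_ps):.1f} ps")  # ~56.6 ps
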
My last time at that rodeo was a decade back, and it took us working on an optical bench, a whole lot of copper braid and solder and decoupling caps, and the best 'scope the company had (and cables to match the spendiness) to get down to a repeatable 15 ps-range contributed jitter.
You might step back a bit and try to see what the clock looks like when you acquire it on one channel and trigger from another, on the same sampling basis as the eye mode you showed. You have to get that clean before you can chase anything more mysterious.
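If it helps to put a number on that clock-vs-trigger check, here's a rough sketch, assuming you can export one acquisition per trigger as time/voltage arrays (the function names are mine, not any scope vendor's API): find where the first clock edge lands on each triggered acquisition and look at the spread. That spread is what will smear any eye built on the same sampling basis.

    import numpy as np

    def first_crossing(t, v, threshold=0.0):
        # Linearly interpolated time of the first rising-edge crossing in one acquisition.
        idx = np.where((v[:-1] < threshold) & (v[1:] >= threshold))[0][0]
        frac = (threshold - v[idx]) / (v[idx + 1] - v[idx])
        return t[idx] + frac * (t[idx + 1] - t[idx])

    def trigger_to_clock_jitter_ps(acquisitions, threshold=0.0):
        # acquisitions: list of (t, v) array pairs, one per trigger, with the clock
        # on the acquired channel and the trigger taken from another channel.
        edges = [first_crossing(t, v, threshold) for t, v in acquisitions]
        return float(np.std(edges)) * 1e12   # seconds -> picoseconds
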
I think you'd benefit from quantifying the various components of the displayed jitter - source phase noise, trigger repeatability, common-mode bounce, edge rate vs. trigger level, channel BW impacts, etc. Any of these might "bury" or "smear" the kind of deterministic jitter that you'd expect a chip-interconnect-type problem to add to the mix. And you can set proper expectations for the casual reviewer, who might freak out over 60 ps of jitter on this bench lashup (?) if you didn't tell them that 30 ps of it (say) was imposed by the equipment and external hardware.
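To make that concrete, a toy jitter budget might look like the sketch below. Every number in it is a placeholder for your own measurements, and the RSS total only holds for uncorrelated random terms, so bounded deterministic jitter has to be handled separately. It also includes the classic noise-to-time conversion at the trigger point, sigma_t ~ sigma_v / slew rate, which is where the edge-rate-vs-trigger-level term comes from.

    import math

    def trigger_jitter_ps(noise_mv_rms, slew_v_per_ns):
        # sigma_t ~= sigma_v / (dV/dt): voltage noise at the trigger threshold
        # converted to timing jitter by the edge's slew rate.
        return (noise_mv_rms * 1e-3) / (slew_v_per_ns * 1e9) * 1e12

    # Placeholder budget -- these values are assumptions, not measurements from the post.
    budget_ps = {
        "source phase noise":          10.0,
        "trigger repeatability":       trigger_jitter_ps(noise_mv_rms=2.0, slew_v_per_ns=0.5),
        "common-mode bounce":           8.0,
        "channel BW / vertical noise":  6.0,
    }

    rss_ps = math.sqrt(sum(v ** 2 for v in budget_ps.values()))
    for name, value in budget_ps.items():
        print(f"{name:28s} {value:5.1f} ps")
    print(f"{'RSS total (random terms)':28s} {rss_ps:5.1f} ps")
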
Then I'd set up an "eye diagram" where you capture the clock while triggering from the data (at a level of your choosing; try several) and see how that compares to the "canned" eye diagram. You'd hope they give the same answer, but hope != trust. Look for ways you can cross-check the instruments and methods.
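Even a dumb tabulation like the sketch below helps keep that cross-check honest (all the numbers here are made-up stand-ins): list the clock jitter you get while triggering from the data at a few levels, and flag anything that disagrees with the canned eye by more than your run-to-run repeatability.

    canned_ps = 60.0        # built-in eye-mode figure from the post
    run_to_run_ps = 3.0     # repeatability seen on repeated runs (placeholder)
    by_level_ps = {"-100 mV": 58.0, "0 mV": 61.0, "+100 mV": 63.0}   # placeholders

    for level, jitter in by_level_ps.items():
        flag = "" if abs(jitter - canned_ps) <= 2 * run_to_run_ps else "  <-- investigate"
        print(f"trigger @ {level:>7s}: {jitter:5.1f} ps vs canned {canned_ps:.1f} ps{flag}")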