I think you've got it backwards: the DACs in question each apply a correction current to one of the inputs. That correction is what's symbolized as a current source. Why you'd want to embed two DACs per latch cell still kind of eludes me, though.
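For what it's worth, here's a toy numeric model of that trim arrangement. The node resistance, LSB current, and offset value are all invented for illustration, not taken from any real part:

```python
# Toy model of offset trim via per-input correction currents.
# V_OS, R_NODE, and I_LSB are invented illustrative values.
V_OS = 4e-3       # assumed input-referred latch offset, 4 mV
R_NODE = 2e3      # assumed effective resistance at each input node, 2 kOhm
I_LSB = 0.5e-6    # assumed DAC LSB current, 0.5 uA

def effective_offset(code_p, code_n):
    """Offset seen by the latch after each DAC injects its current."""
    i_diff = (code_p - code_n) * I_LSB   # differential correction current
    return V_OS - i_diff * R_NODE        # each LSB shifts the offset by I_LSB * R_NODE

print(effective_offset(0, 0))  # untrimmed: 0.004 (4 mV)
print(effective_offset(4, 0))  # 4 LSBs on one input: 0.0, offset nulled
```

One DAC per input lets the trim pull the trip point in either direction, which would be the only obvious excuse for having a pair of them.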
Maybe there's only one DAC pair per bit line (pair) and the improvement in speed / sensitivity / reliability is worth the pain. But I kind of doubt it. Bit line capacitance is liable to dwarf the capacitance of the input device, so you could probably just use bigger, better-matched transistors and come out ahead of all that complexity.
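Back-of-envelope, with made-up but plausible numbers (100 fF bit line, 5 fF/um^2 gate cap, Pelgrom coefficient of 3 mV*um), upsizing the input pair 16x in area buys roughly 4x better matching for only a few percent more total load:

```python
import math

# Invented numbers; only the ratio matters for the argument.
C_BITLINE = 100e-15      # assumed bit-line capacitance, 100 fF
C_GATE = 5e-15           # assumed gate cap density, 5 fF/um^2
A_VT = 3e-3              # assumed Pelgrom coefficient, 3 mV*um

def tradeoff(w_um, l_um):
    c_in = C_GATE * w_um * l_um                 # input-device capacitance
    sigma_vt = A_VT / math.sqrt(w_um * l_um)    # Pelgrom: mismatch ~ 1/sqrt(area)
    return C_BITLINE + c_in, sigma_vt

for w, l in [(1.0, 0.1), (4.0, 0.4)]:           # 16x the gate area
    c, s = tradeoff(w, l)
    print(f"W/L = {w}/{l} um: load = {c*1e15:.1f} fF, sigma(Vt) = {s*1e3:.2f} mV")
```

Whether those numbers resemble the actual process is anyone's guess, but that's the shape of the tradeoff that makes me doubt the DACs pay for themselves.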
I suppose the calibration scheme is also "left as an
exercise...".
I suspect this is all patent-driven: maybe they had to make a sense amp that wasn't already lawyered up, and this let them get a patent. Make it a 1-bit DAC, always at the "0" current code, and you've busted somebody else's patent grip for a trivial implementation cost.
Or something.