Low LBIST coverage is a well-documented and well-researched fact, not a subjective opinion based on random observations from a corporate project or two. Refer to the many published LBIST-focused papers and standard DFT texts (Abramovici, Wang, etc.) for more on that, plus how large n-input circuits are broken down into smaller sub-circuits by making the entire design scannable and shifting the PRPG output serially into the scan chains (aka the STUMPS architecture).
And therein lies the answer to this question: PRPG outputs applied in parallel to an n-input CUT give the same coverage as deterministic patterns. But in the STUMPS architecture (the most widely used one), with a PRPG of length p and a scan chain of a different length k, not all of the 2^k input patterns for the CUT may ever be generated. The missed patterns make it especially difficult to target random-pattern-resistant faults -- which is why deterministic "top-off" patterns are used (hybrid BIST) or test points are inserted (adding area overhead) to fill the gap.
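A quick way to see this for yourself is to simulate it. Below is a minimal sketch (my own illustration, not an example from any of the texts above): a maximal-length 3-bit Fibonacci LFSR serially loading a 4-bit scan chain. Because the LFSR's period is only 2^p - 1 = 7, the 4-input CUT can only ever see at most 7 of its 2^4 = 16 possible patterns, no matter how long you run it. The tap choice and register sizes are arbitrary assumptions for the demo.

```python
def lfsr_stream(state, taps, nbits):
    """Yield the serial output bits of a Fibonacci LFSR forever."""
    while True:
        yield state & 1                          # serial output = LSB
        fb = 0
        for t in taps:                           # XOR the tap bits
            fb ^= (state >> t) & 1
        state = (state >> 1) | (fb << (nbits - 1))

P, K = 3, 4                                      # PRPG length p < scan length k
# taps (0, 1) give a maximal-length sequence for nbits=3 (period 7)
bits = lfsr_stream(state=0b001, taps=(0, 1), nbits=P)

seen = set()
chain = 0
for cycle in range(200):                         # many full LFSR periods
    chain = ((chain << 1) | next(bits)) & ((1 << K) - 1)
    if cycle >= K - 1:                           # chain fully loaded once
        seen.add(chain)

print(f"distinct {K}-bit patterns applied: {len(seen)} of {2**K}")
# -> distinct 4-bit patterns applied: 7 of 16
```

Every scan load is just a length-k window of the LFSR's periodic bit stream, and a stream of period 2^p - 1 has at most 2^p - 1 distinct windows. So whenever k > p (the common case in real designs, where scan chains are far longer than the PRPG), most of the 2^k patterns are structurally unreachable.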
BTW, LBIST usage is driven by the application -- automotive, military -- where long-term reliability (through in-field testing) is paramount and/or the large pattern storage and long test times of an ATE are undesirable. It is not driven by the specifics of the circuit itself.
For those interested, the book Digital Systems Testing and Testable Design describes the p vs. k conundrum in fascinating detail in the BIST chapter, with specific examples. Most places explaining LBIST's low-coverage disadvantage unfortunately assume the reader already knows about serial scan-in architectures, so unless I'm shown that random patterns applied in parallel have any different coverage than deterministic ATPG, I'll take this to be the final answer.