neazoi
Advanced Member level 6
All the techniques I have seen for driving a LED matrix display scan the rows: one row is selected and its column bits are driven, then the next row is selected and its bits driven, and so on. By cycling through the rows fast enough, the user sees all the set bits of the screen simultaneously, even though only one row is lit at a time. Obviously this is done to minimise hardware, but it depends on the speed of the logic involved.
I have thought of using flip-flop buffers, one for each row and column, to "keep" the state of the set bits. This means that the microcontroller driving the display would only need to update the specific bits that change, rather than refreshing the whole screen, simply by setting its pins accordingly. But it also means that, because of this "memory" (the flip-flops), each bit has to be cleared explicitly when it needs to switch off. The advantage is that no complex refresh timing has to be maintained, and there may be other advantages I cannot think of right now.
Do you think this second approach could be feasible or useful at all?