To my understanding, each module should have its own control board with one LUT for gamma correction and one LUT for brightness matching. The RGB video data will be received as 24-bit data with 256x256x256 levels for the R, G and B channels. For each pixel and each of its RGB channels, the incoming 8-bit value will first be put through the gamma correction LUT (a LUT, since the relation is non-linear) and then multiplied by an intensity variation factor to compute the correct level of current required for each LED separately.
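Something like this per-channel step, sketched in C just to pin down the idea (the LUT contents and the Q8.8 format of the intensity factor are my assumptions, not a fixed requirement):

```c
#include <stdint.h>

/* Minimal per-channel sketch. The LUT values and the Q8.8 fixed-point
 * intensity factor are illustrative assumptions. */
static const uint8_t gamma_lut[256] = { 0 /* filled from calibration */ };

static inline uint8_t correct_channel(uint8_t level, uint16_t intensity_q8_8)
{
    uint16_t g = gamma_lut[level];                   /* non-linear step via LUT */
    uint32_t scaled = (uint32_t)g * intensity_q8_8;  /* linear per-LED scaling  */
    uint32_t out = scaled >> 8;                      /* drop the Q8.8 fraction  */
    return (uint8_t)(out > 255 ? 255 : out);         /* saturate, don't wrap    */
}
```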
But all this needs huge computing power. I am becoming convinced that an FPGA with LUTs is the only solution. It is definitely out of the scope of any 8-bit microcontroller. Maybe a DSP with good horsepower could do the job, but since the computation is very simple and has to run very fast, an FPGA looks like the better option.
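To put rough numbers on the "huge computing power" claim, here is a back-of-the-envelope calculation assuming a hypothetical 1024x768 @ 60 Hz source (no resolution was stated above):

```c
#include <stdio.h>

int main(void)
{
    /* Assumed video timing -- the actual resolution isn't specified. */
    const double pixels_per_sec = 1024.0 * 768.0 * 60.0;   /* ~47.2 M */
    const double ops_per_pixel  = 3.0 * 2.0;  /* 3 channels x (LUT + multiply) */

    printf("~%.0f million ops/s\n", pixels_per_sec * ops_per_pixel / 1e6);
    return 0;  /* prints ~283 million ops/s */
}
```

Close to 300 million simple operations per second is trivial to pipeline in an FPGA but hopeless for an 8-bit micro, which is the point being made.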
As far as the 8-bit, 10-bit, 16-bit discussion goes, it depends on how much accuracy you want. There are three level conversions that come to mind:
1. Gamma correction
2. Intensity matching
3. Overall brightness control for day/night time viewing
All three conversions require some mathematics, and gamma correction is a non-linear function. At any stage you might want to upgrade your data width from 8 bits to 10 or 16 bits.
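One way to see where the extra bits go is to track the intermediate widths through the three steps. A sketch, assuming a 10-bit LUT output and Q0.16 scale factors (0..65535 mapping to 0..~1.0 of full scale); both formats are my assumptions:

```c
#include <stdint.h>

/* Width tracking through the three corrections. The 10-bit LUT output
 * and the Q0.16 factors are illustrative choices, not requirements. */
static const uint16_t gamma_lut10[256] = { 0 }; /* 8-bit in, 10-bit out */

static uint16_t pipeline(uint8_t level,
                         uint16_t match_q0_16,   /* per-LED intensity matching  */
                         uint16_t master_q0_16)  /* day/night master brightness */
{
    uint32_t g = gamma_lut10[level];             /* 10 bits after the LUT   */
    uint32_t m = (g * match_q0_16)  >> 16;       /* still fits in 10 bits   */
    uint32_t b = (m * master_q0_16) >> 16;       /* still fits in 10 bits   */
    return (uint16_t)b;                          /* drive a 10-bit LED PWM  */
}
```

Widening at the LUT is the cheap place to do it, since the extra resolution comes for free from the table contents rather than from wider arithmetic upstream.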
Any ideas about the order in which these three corrections should be done?