
Power Consumption of GPU/AI L1/L2 cache

chihcheng

Newbie
Joined
Sep 21, 2019
Messages
4
Helped
0
Reputation
0
Reaction score
0
Trophy points
1
Location
Taiwan
Activity points
30
The data width of a GPU/AI L1/L2 cache is very large (256 bytes), which makes my invention not directly applicable to it.
Now I have invented a new SRAM macro partition that reduces the power consumption of a GPU/AI L1/L2 cache to 15%~30% of the original.
 

Attachments

  • Power Consumption of GPU_AI L1_L2 cache.pdf
    405.4 KB
Fig. 10 explains: "Referring to Fig. 2, Fig. 5, and Fig. 6, the bit-line output works in the same way (just at S0, S4, S8, S12 TOP), so no additional routing power is needed."

Because the bit-line outputs of every small part all go to S0, S4, S8, S12 TOP, no mux is needed and no routing power is wasted.

Note: the arrangement of Fig. 6 is marked per small part. All bit-line outputs are shown in Fig. 10.
[Attached figure: DATAOUT.png]
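To illustrate the general idea (this is my own back-of-the-envelope sketch, not the design in the attached PDF), here is a toy model of why partitioning a wide SRAM macro saves dynamic read energy: if only one of N sub-arrays fires per access, roughly 1/N of the bit lines switch, plus some enable/decode overhead. The 2% per-partition overhead figure below is an illustrative assumption, not a measured value.

```python
def relative_read_energy(total_bits: int, partitions: int,
                         overhead_per_partition: float = 0.02) -> float:
    """Fraction of baseline dynamic read energy when only one partition's
    bit lines switch per access.

    overhead_per_partition models the extra enable/decode logic added by
    each partition (assumed 2% of baseline each -- an illustrative guess).
    """
    if total_bits % partitions:
        raise ValueError("partitions must evenly divide total_bits")
    active_fraction = 1.0 / partitions      # only one sub-array is active
    overhead = overhead_per_partition * partitions
    return active_fraction + overhead

# A 256-byte cache line is 2048 bits; splitting into 4 sub-arrays:
print(relative_read_energy(2048, 4))  # ~0.33, i.e. ~33% of baseline energy
```

With these assumed numbers, 4 partitions land near the upper end of the 15%~30% range claimed above; more partitions lower the active fraction but raise the overhead term, so there is an optimum partition count.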
 
