EDAboard.com is an international Electronics Discussion Forum focused on EDA software, circuits, schematics, books, theory, papers, ASIC, PLD, 8051, DSP, Network, RF, Analog Design, PCB, Service Manuals, and more.
This figure, Fig. 10, explains: "Referring to Fig. 2, Fig. 5, and Fig. 6, the bit line output works in the same way (just at S0, S4, S8, S12 TOP), so no additional routing power is needed."
Because all bit line outputs of each small partition go to S0, S4, S8, S12 TOP, no mux is needed and no routing power is...
The data width (256 bytes) of GPU/AI L1/L2 caches is very large, which made my earlier invention not directly applicable to them.
Now I have invented a new SRAM macro partition that reduces the power consumption of the GPU/AI L1/L2 cache to 15%~30% of current designs.
The document is titled Principle and Architecture of Ultra Low Power SRAM Device.
The power of the new invention is only 10~20% of that of current technology.
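To make the claimed figures concrete, here is a minimal sketch of the implied savings arithmetic. The function name and the script itself are illustrative assumptions, not part of the invention; only the 10%~20% relative-power range comes from the post above.

```python
# Illustration of the claimed power figures only (not the SRAM design itself):
# if the new macro draws 10%~20% of a conventional macro's power,
# the implied power saving is 80%~90%.

def implied_savings(relative_power: float) -> float:
    """Return the fractional power saving for a given relative power draw."""
    return 1.0 - relative_power

low, high = 0.10, 0.20  # claimed range: 10%~20% of current technology
print(f"implied savings: {implied_savings(high):.0%}~{implied_savings(low):.0%}")
```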
This is very important for AI, IoT, and handheld devices.
So whichever company gets the licence for this invention will dominate the next generation...