I was a professional software developer for over 10 years and have a BS & MS in Mechanical Engineering from the University of Illinois.
This model is not programmable and is for 6 cities. The above link describes, in detail, how to extend this design to 23 cities.
Why does every single integer on the system need its own address? It doesn't. Most of the time, large arrays need only a start location and a length.
Trying to get an inherently serial design to run in parallel instead of just designing in parallel to begin with.
From an overview perspective, the traveling-salesman example you implemented in LTspice sounds like a hardware accelerator specific to that problem, which is typically accomplished in software; but this model does not seem to use the basic concept behind your approach (namely, interleaving processors between memory banks). Perhaps a more summarized explanation, focused on the concept, would make things easier to understand, instead of rambling about the advantages without presenting any performance comparison with other ASIC approaches.
BTW, I did not understand the meaning of this statement on the page "what is 4p":
{ Why does every single integer on the system need its own address? It doesn't. Most of the time large arrays need a start location and a length. }
This is not true at all, and it does not attempt to explain the announced innovation.
Another phrase that caught my attention is this:
{Trying to get an inherently serial design to run in parallel instead of just designing in parallel to begin with. }
This is exactly what hardware accelerators do, some of them in a single clock, others in a few more. There are state-of-the-art ideas specific to many mathematical or computational problems, many of them available in well-known scientific publications, but all of them have inherent drawbacks, for example the larger amount of hardware resources required.
So either I did not understand your invention at all, or you were not clear in your explanation.
Keep in mind that the processor canvas is resident inside its own RAM. So kernel calls occur on chip. This is discussed in detail here:
Much of hardware acceleration tries to take this serial design and do the computation in parallel. I argue that this is much less efficient than just writing code in parallel to begin with.
There are few processing applications that can be accomplished totally in parallel; most of them are pipelined, so the overall concept of that approach would be restricted to a tiny scope, which I have not yet been able to imagine. I'm still not able to link the proposed revolutionary solution ("interleaving processors between memory cells") to the scenarios you have presented; as said before, you could try to explain things more concisely and focused on the proposed idea instead of just giving general clues.
Dude, you are reinventing the wheel. Search IEEE Xplore for logic-in-memory and for hardware accelerators. There is nothing novel about what you are doing.
FYI, I have designed what is probably the biggest chip ever designed by an academic. 1B transistors in the damn thing.
It is difficult to explain things concretely when you do not answer the specific questions that I pose directly for the purpose of isolating the *specific* parts of the design that are eluding you.