
Definition of hardware implementation

Status
Not open for further replies.

Arthur Asimov Heinlein

For example, I have an encryption cipher and want to implement the algorithm. I load the behavioral Verilog code into an FPGA and run it. Can this process be called a hardware implementation? I had thought that a hardware implementation should follow the ASIC flow, at the circuit or gate level, to obtain a dedicated security circuit.

Thanks!
 

Verilog code maps to the LUTs, registers and RAMs inside the FPGA device during compilation, so it can be classed as a hardware implementation.
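
For a concrete (hypothetical) illustration, a small behavioral description like the one below is mapped by synthesis onto physical FPGA resources: the mux becomes a LUT and the registered output becomes flip-flops. The module name and widths are made up for the example.
Code:
// Hypothetical example: behavioral code that synthesis maps to
// LUTs (the mux) and an 8-bit bank of flip-flops (the register q).
module mux_reg (
    input            clk,
    input            sel,
    input      [7:0] a,
    input      [7:0] b,
    output reg [7:0] q
);
    always @(posedge clk)
        q <= sel ? a : b;   // mux -> LUTs, q -> flip-flops
endmodule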
 

Verilog code maps to the LUTs, registers and RAMs inside the FPGA device during compilation, so it can be classed as a hardware implementation.

Unlike structural Verilog code, a small change to behavioral code may lead to a large modification of the actual circuit. Does this mean that this kind of implementation is not stable?

Thanks!
 

I don't quite know what you mean.
A small change to the code may mean only a minor functional change, but the internals of the FPGA will probably be placed completely differently. As long as you have decent timing specs, though, you won't notice any difference externally.
 

Arthur Asimov Heinlein,

I think your question has more to do with philosophy than with engineering...
Hardware is a physical thing, so it's easily defined: flip-flops, muxes, gates, etc.

But how do you define software?
It's something very abstract...
 

I don't quite know what you mean.
A small change to the code may mean only a minor functional change, but the internals of the FPGA will probably be placed completely differently. As long as you have decent timing specs, though, you won't notice any difference externally.

I mean, the small change may lead to different energy or performance metric values, such as throughput, energy, or power, so this may limit comparisons between physical metrics.
 

I mean, the small change may lead to different energy or performance metric values, such as throughput, energy, or power, so this may limit comparisons between physical metrics.

Throughput is a question of function, not placement.
Energy is more a matter of logic usage and data throughput.

Are you proposing designing an FPGA with an asynchronous design inside?
 

Throughput is a question of function, not placement.
Energy is more a matter of logic usage and data throughput.

Are you proposing designing an FPGA with an asynchronous design inside?

I think a security circuit must be an asynchronous design. I think these physical metrics relate closely to the structure of the circuit, which is synthesized from behavioral Verilog code. This means different structures from different code will yield different metric values.
 

I think a security circuit must be an asynchronous design. I think these physical metrics relate closely to the structure of the circuit, which is synthesized from behavioral Verilog code. This means different structures from different code will yield different metric values.

Since when do they have to be asynchronous? I've been working around systems that encrypt data and pretty much all of the designs used some sort of pipelined encryption engine. Otherwise you can't get enough performance out of them and/or you need too many copies of the encryption engine.

Regards
 

Since when do they have to be asynchronous? I've been working around systems that encrypt data and pretty much all of the designs used some sort of pipelined encryption engine. Otherwise you can't get enough performance out of them and/or you need too many copies of the encryption engine.

Regards


Well, I think my design does not use pipelined encryption. It performs one encryption round per clock cycle and has two register blocks to store the input and output states. Between the input and output registers is combinational logic. It seems to be a synchronous design...
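
A minimal sketch of that structure might look like the following; the module name is hypothetical and a trivial rotate-and-invert stands in for the real round function. Note that because everything updates on the clock edge, this is a synchronous design.
Code:
// Hypothetical sketch of the structure described above: a state
// register with one combinational round between load and update,
// completing one round per clock cycle (synchronous).
module iter_cipher #(parameter W = 64) (
    input              clk,
    input              load,    // load a new plaintext block
    input      [W-1:0] din,
    output reg [W-1:0] state    // state/output register
);
    // Placeholder round: rotate-and-invert stands in for the
    // real cipher round's gate network.
    wire [W-1:0] round_out = ~{state[W-2:0], state[W-1]};

    always @(posedge clk)
        state <= load ? din : round_out;   // one round per clock
endmodule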

Bests
 

So does this relate to the original question?
 

What metrics?
It all depends what the functional change is.

Basically: more logic used = more power. That's about the only thing I can think of that may change much (if at all). FPGAs are quite power-hungry devices.

If you have a fully synchronous design with good timing specs, then if it meets the specs, it should work.
 

What metrics?
It all depends what the functional change is.

Basically: more logic used = more power. That's about the only thing I can think of that may change much (if at all). FPGAs are quite power-hungry devices.

If you have a fully synchronous design with good timing specs, then if it meets the specs, it should work.
I had thought that the functional change might lengthen the critical path and thus affect throughput and maximum frequency, or use more logic units and change the switching activity.
Anyway, I think I need some experiments to verify this. Thank you for your answers!

Thanks.
 

FPGAs are optimised for synchronous design. If you set the timing specs, the fitter tries to meet them.
Yes, extra logic may affect the critical path and may make the design harder to fit, but several factors affect this. Mostly:

1. number of logic layers between registers
2. how full the device is
3. the fitter seed value

The throughput is a function of the design and the clock speed. The clock speed is usually fixed within a design, so only the function will change the data throughput.

I highly suggest you drop any asynchronous elements from your design. Synchronous designs allow for much higher data throughput.
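
For contrast with the one-round-per-cycle design above, a pipelined version (hypothetical module, placeholder round logic) unrolls the rounds into stages that all work in parallel: once the pipeline fills, a finished block emerges every clock cycle instead of every ROUNDS cycles.
Code:
// Hypothetical pipelined alternative: one placeholder round per
// stage. After ROUNDS cycles of latency, throughput is one block
// per clock instead of one block per ROUNDS clocks.
module pipe_cipher #(parameter W = 64, ROUNDS = 16) (
    input          clk,
    input  [W-1:0] din,
    output [W-1:0] dout
);
    reg [W-1:0] stage [0:ROUNDS-1];
    integer i;

    always @(posedge clk) begin
        stage[0] <= ~{din[W-2:0], din[W-1]};   // placeholder round
        for (i = 1; i < ROUNDS; i = i + 1)
            stage[i] <= ~{stage[i-1][W-2:0], stage[i-1][W-1]};
    end

    assign dout = stage[ROUNDS-1];
endmodule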
 

Let me put it another way: does a completely different internal FPGA placement, caused by a minor functional change, lead to quite different measured metric values?

Like Tricky says, it depends on the change...

Suppose the change is between the following in a Kintex-7...
Code:
assign g = a ? c : b;
assign g = a ^ (c & b | d);
assign g = b ? (a ? f : e) : (a ? d : c);

Now, a Kintex-7 has 6-input LUTs, which means all three examples fit within a single LUT, so the difference in the code only changes the connectivity to that one LUT. If the signals a, b, c, d, e, and f are the outputs of, say, a counter, then the output g will toggle differently depending on the logic implemented in the LUT. If the number of inputs into the LUT changes and the toggle rate at the output of the LUT changes, then there will be a small difference in power consumption.

As Tricky points out, you'll see more change as the number of LUTs increases. So if instead the logic change is more extensive...
Code:
assign g[15:0] = a ? c[7:0] : b[7:0];
assign g[15:0] = a ? c[7:0] * b[7:0] : c[7:0] + b[7:0];

In this case you would see a dramatic increase in the number of logic levels and the total number of LUTs.

So unless you quantify the change, you can't realistically determine how the metrics you measure will change, or by how much.

This whole thread seems to be a thesis topic, and a rather vague one at that. Words like "minor change" are much too vague to develop any kind of meaningful metrics.

Regards
 
