
Why learn non-synthesizable VHDL?


ZX_Spectrum

Could you please tell me what the point is of learning the non-synthesizable part of VHDL?

I am new to VHDL; I have only just started learning it, and I've found that a very big portion of the 1000-page book I am using is devoted to non-synthesizable VHDL.

I know this part is used for simulation, but is it really worth it? In other words, is high-level system simulation technically vital?

I want to learn VHDL for some embedded-systems projects. Is high-level simulation of the system a crucial step in the design process? I think a lot of engineers do their embedded-system designs without the very high-level simulation capabilities of VHDL.

Thanks
 

For any non-trivial project you tend to find yourself writing more test bench code than you do hardware, even when targeting programmable logic; if you are targeting an ASIC the situation is worse again.

Unlike computer programs, where setting a breakpoint and watching variables while single stepping is a reasonable way to debug, non-trivial HDL code is far better fault-found by extensive test scripts than by the painfully slow process of bringing nodes out to where you can get at them, redoing the P&R (20 minutes or so if the part is mostly full) and probing with a 'scope.

You can sometimes get away without the test benches for truly trivial things, but design verification is a HUGE part of what HDL types write.
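A minimal self-checking testbench might look something like this sketch (the "adder" entity, its ports and the timings are made up for illustration, not taken from anything in this thread):

```vhdl
library ieee;
use ieee.std_logic_1164.all;
use ieee.numeric_std.all;

entity adder_tb is
end entity;                     -- a testbench has no ports

architecture sim of adder_tb is
  signal a, b : unsigned(7 downto 0) := (others => '0');
  signal sum  : unsigned(8 downto 0);
begin
  -- device under test (a hypothetical 8-bit adder)
  dut : entity work.adder port map (a => a, b => b, sum => sum);

  stimulus : process
  begin
    for i in 0 to 255 loop
      for j in 0 to 255 loop
        a <= to_unsigned(i, 8);
        b <= to_unsigned(j, 8);
        wait for 10 ns;                        -- simulation-only delay
        assert sum = to_unsigned(i + j, 9)     -- self-checking comparison
          report "mismatch at " & integer'image(i) & " + " & integer'image(j)
          severity error;
      end loop;
    end loop;
    report "test finished" severity note;
    wait;                                      -- halt the stimulus process
  end process;
end architecture;
```

None of the loops, waits or asserts above describe hardware, yet they do the work a 'scope and a breakpoint never could.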

73 Dan.
 
Non-synthesizable VHDL can be used for modelling, scoreboarding, data generation and data logging. At the very least, it is used to generate waveforms to test your design. Verifying against a waveform by eye is slow and tedious and probably won't cover all of your test cases, which is why testbenches should be self-checking, i.e. produce a pass/fail result based on a set of test vectors. In addition, simulating RTL code can be slow and use large amounts of RAM; modelling techniques can improve simulation time and RAM usage enormously.

Getting good test coverage is very important for avoiding bugs later.
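As a sketch of the data-logging side (the clock period, file name and the data_valid/data_out signals are assumptions for illustration, and the DUT/stimulus driving them is omitted), something like this writes every valid output sample to a text file, which no synthesis tool would ever accept:

```vhdl
library ieee;
use ieee.std_logic_1164.all;
use ieee.numeric_std.all;
use std.textio.all;            -- simulation-only file I/O

entity logger_example is
end entity;

architecture sim of logger_example is
  signal clk        : std_logic := '0';
  signal data_valid : std_logic := '0';
  signal data_out   : std_logic_vector(7 downto 0) := (others => '0');
begin
  clk <= not clk after 5 ns;   -- 100 MHz test clock, simulation only

  -- Log every valid output sample to a text file for later comparison
  -- against a golden reference.
  logger : process (clk)
    file logfile : text open write_mode is "results.log";
    variable l   : line;
  begin
    if rising_edge(clk) and data_valid = '1' then
      write(l, to_integer(unsigned(data_out)));
      writeline(logfile, l);
    end if;
  end process;
end architecture;
```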
 

The other case is when something isn't synthesizable now, but will be in the future.

Likewise, for when a synthesizable construct currently isn't optimal, but might be fine in the future.

It can be difficult to determine which things can be synthesized, which things could be synthesized, and which things can't.
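As a rough illustration (entity and signal names invented here), the same file can mix both worlds: the register below synthesizes, while the assert/report monitor is ignored by synthesis tools and only ever does anything in simulation:

```vhdl
library ieee;
use ieee.std_logic_1164.all;

entity synth_vs_sim is
  port (clk, d : in std_logic; q : out std_logic);
end entity;

architecture mixed of synth_vs_sim is
begin
  -- Synthesizable: a plain clocked register.
  reg : process (clk)
  begin
    if rising_edge(clk) then
      q <= d;
    end if;
  end process;

  -- Simulation only: assert/report statements are ignored by synthesis,
  -- so checks like this cost nothing in hardware but flag problems
  -- during simulation.
  monitor : process
  begin
    wait until rising_edge(clk);
    assert d /= 'X' report "input d is unknown" severity warning;
  end process;
end architecture;
```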
 

DanMills and TrickyDicky have emphasized the importance of simulation-related VHDL constructs. Another application, e.g. for real arithmetic, is calculating constants and function tables at compile time.
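For example (a sine ROM I'm sketching here, not something from this thread), ieee.math_real can fill a lookup table through a constant function; the real arithmetic only runs at elaboration time, so what gets synthesized is just the table:

```vhdl
library ieee;
use ieee.std_logic_1164.all;
use ieee.numeric_std.all;
use ieee.math_real.all;       -- real-valued functions, elaboration/simulation only

entity sine_rom is
  port (
    addr : in  unsigned(7 downto 0);
    data : out signed(11 downto 0)
  );
end entity;

architecture rtl of sine_rom is
  type rom_t is array (0 to 255) of signed(11 downto 0);

  -- Real arithmetic evaluated once, at elaboration time, to build the table.
  function init_rom return rom_t is
    variable rom : rom_t;
  begin
    for i in rom'range loop
      rom(i) := to_signed(integer(round(
                  2047.0 * sin(2.0 * math_pi * real(i) / 256.0))), 12);
    end loop;
    return rom;
  end function;

  constant ROM : rom_t := init_rom;
begin
  data <= ROM(to_integer(addr));   -- only this lookup becomes hardware
end architecture;
```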

On the other hand, if you want to focus on synthesizable VHDL, there are plenty of textbooks, like VHDL Logic Synthesis Approach or Digital Logic and Microprocessor Design with VHDL, that teach only the synthesizable part.

It can be difficult to determine which things can be synthesized, which things could be synthesized, and which things can't.

I never found it difficult. But it seems to me that some teachers have problems explaining the criteria systematically. As a result, we see VHDL (or Verilog) beginners who start writing HDL like procedural software.
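A typical beginner example of that trap (my own made-up LED blinker, assuming a 100 MHz clock): the procedural version reads like software and only works in simulation, while the hardware version describes a counter register advanced by the clock:

```vhdl
library ieee;
use ieee.std_logic_1164.all;

entity blink is
  port (clk : in std_logic; led : out std_logic);
end entity;

architecture rtl of blink is
  constant HALF_PERIOD : natural := 50_000_000;  -- 0.5 s at an assumed 100 MHz
  signal counter : natural range 0 to HALF_PERIOD - 1 := 0;
  signal led_i   : std_logic := '0';
begin
  led <= led_i;

  -- Hardware style: a counter register, one step per clock edge.
  blink_hw_style : process (clk)
  begin
    if rising_edge(clk) then
      if counter = HALF_PERIOD - 1 then
        counter <= 0;
        led_i   <= not led_i;
      else
        counter <= counter + 1;
      end if;
    end if;
  end process;

  -- "Software style" version, shown commented out for contrast:
  -- it simulates fine, but the explicit delays make it non-synthesizable.
  --
  --   blink_sw_style : process
  --   begin
  --     led_i <= '1'; wait for 500 ms;
  --     led_i <= '0'; wait for 500 ms;
  --   end process;
end architecture;
```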
 

In the real world, verification is much more than 50% of design time; I have read books that say at least 70%. Verification is carried out by writing VHDL that is essentially not synthesizable, and it does not have to be synthesizable anyway. Mind you, as process feature sizes have been shrinking per Moore's law, design complexity has been increasing exponentially. That means the designs being carried out today are a lot more complex than those of 10 years ago. The tools have advanced too, but this more complex design has to be completed in less time than it would have taken 10 years ago, with almost the same number of engineers working on it. How do we meet the time and cost constraints then? We use Hardware Verification Languages (HVLs), and often some people are given the task of verification only; their title is Hardware Verification Engineer.

VHDL is good for hardware description. It has been used for a very long time, and so has Verilog. However, they are not HVLs; they are both HDLs, even though they can also be used for hardware verification. HVLs are a different category: high-level languages, like C++, designed for verification. The popular ones today are SystemVerilog, which is a superset of Verilog, and SystemC, which is C++ with library extensions that let you write digital hardware verification programs.

I am telling you this because you should be aware of what is happening in the real world. HVLs emerged in the late 1990s and have really taken off since the mid-2000s, as far as I am aware. The reason is not only that people want to verify more complex designs in less time and save money, but also that the HVL tools have improved and gained features, thanks to the IEEE taking up the standardization of SystemVerilog and SystemC and evolving them to where they are today.
 

I think people forget/don't know that a lot of what can be done with SV can also be done in VHDL: constrained random, transaction-level modelling, sequencing etc. Where VHDL falls down is reusability; it lacks classes, which massively hurts. Then you get UVM, which basically just standardises everything (to the OP: UVM is a big SV library of verification tools).
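For instance, plain VHDL can do simple constrained random with ieee.math_real.uniform (OSVVM builds a much richer randomisation package on top of the same idea); the packet-length constraint below is just something I made up to show the shape of it:

```vhdl
library ieee;
use ieee.std_logic_1164.all;
use ieee.math_real.all;

entity random_stim is
end entity;

architecture sim of random_stim is
  signal pkt_len : natural := 0;
begin
  gen : process
    variable seed1, seed2 : positive := 42;   -- fixed seeds for repeatability
    variable r            : real;
  begin
    for i in 1 to 100 loop
      uniform(seed1, seed2, r);               -- r in (0.0, 1.0)
      pkt_len <= 16 + integer(r * 48.0);      -- constrain length to 16..64
      wait for 10 ns;
      report "packet length " & integer'image(pkt_len);
    end loop;
    wait;
  end process;
end architecture;
```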

What I have just noticed is that formal verification is starting to make an appearance in the FPGA world, basically because of the massive complexity. While formal tools are fairly standard in the ASIC world, I've never heard of them being used on FPGAs, but they will become a necessity for some because of the increased development time for designs. Formal tools are an order of magnitude more expensive than standard verification tools (5/6 figures per licence) but can verify properties exhaustively in a way that a standard dynamic testbench could never cover. I notice that Quartus Prime 15.1 has an option to specify an external formal tool, something I've never seen before.
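For a flavour of what those tools consume, properties are usually written in PSL or SVA. A rough PSL-in-VHDL sketch (the req/ack handshake is invented, and the comment-embedded syntax and extent of support depend on the tool, so treat this purely as illustrative):

```vhdl
library ieee;
use ieee.std_logic_1164.all;

entity handshake_props is
  port (clk, req, ack : in std_logic);
end entity;

architecture props of handshake_props is
begin
  -- psl default clock is rising_edge(clk);

  -- Every request must eventually be acknowledged.
  -- psl assert always ((req = '1') -> eventually! (ack = '1'));

  -- No acknowledge without a request in the same cycle.
  -- psl assert always ((ack = '1') -> (req = '1'));
end architecture;
```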
 

Given the size of the Stratix 10 series and the UltraScale devices, formal verification on FPGAs is pretty much going to happen eventually, as these devices are as large as some ASICs from 8-10 years ago, when UVM and formal tools really started to take off. It's hard enough to verify a Virtex-6 240T or a Virtex-7 330T, let alone an even larger part.
 
