Welcome to EDAboard.com

FPGA Dev Cost Model Help

Status
Not open for further replies.

fgt4w

Newbie level 5, joined Sep 13, 2013
Hi Everyone,

I'm a cost-model builder and an FPGA newbie, and I've been tasked with designing a quick, high-level, early-stage FPGA cost model. I was hoping you guys could help me out.

The model has 3 activities:
1. Architectural Design (high-level design/architecture work, behavioral HDL coding and simulation, identifying major components including IP modules or cores to be used)
2. Detailed Design (simulation, timing analysis, design verification and rework, and synthesis)
3. Implementation (floorplanning, translation, mapping, place and route, and programming the device)

My question is: how does the number of IP modules/cores used affect the effort (work days) for each activity, compared to developing everything from scratch?

For example, suppose you're planning to build an FPGA design from scratch (no IP modules/cores reused), and you estimate the effort breaks down like this:
Prelim Design 30 days (30%)
Detailed Design 65 days (65%)
Implementation 5 days (5%)


Approximately how much time could you save if, instead, 50% of your design reused IP modules/cores?

What about (theoretically) 100% reuse?
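To make this concrete, here's a rough Python sketch of the kind of model I'm imagining. The baseline days come from the example above, but the per-phase savings fractions are pure guesses on my part:

```python
# Rough sketch of the model structure. Baseline days are the from-scratch
# example above; MAX_SAVINGS values are pure guesses for how much of each
# phase full IP reuse could eliminate.

BASELINE_DAYS = {"prelim": 30, "detailed": 65, "implementation": 5}
MAX_SAVINGS = {"prelim": 0.3, "detailed": 0.8, "implementation": 0.0}

def estimate_effort(reuse_fraction):
    """Estimated days per phase for a given IP-reuse fraction (0.0-1.0),
    scaling each phase's savings linearly with the amount of reuse."""
    return {
        phase: days * (1 - MAX_SAVINGS[phase] * reuse_fraction)
        for phase, days in BASELINE_DAYS.items()
    }
```

With these guesses, 50% reuse would cut prelim design from 30 to 25.5 days and detailed design from 65 to 39 days, with implementation unchanged. But that's exactly the relationship I'm asking you to sanity-check.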

Thank you for your help!
 

This is too broad a question. There are too many factors:
Do we actually know what algorithms to use? What is the application? Do you already have IP in house? What are the interfaces? What are the engineers' skills? What are the timing challenges?

It can really vary from project to project.

For example, I recently worked on a project that had most of the HDL already coded, but some upgrades were needed. With 3 engineers over 4 months, my entire effort went into place and route, as timing and floorplanning were a massive problem. It was using a clock speed close to the limits of the device, with 70%+ of resources and 90%+ of RAM used. But on other projects I've spent months in stages 1 and 2 with just days in stage 3 (no resource problems).

So each project is different.
 

Thanks for the response.

So if I understand the scenario you described, you had exactly 100% IP reuse, 0% new design. The effort for the project was:
Preliminary Design: 0 days, 0%
Detailed Design: 0 days, 0% of the total
Implementation: 4 months effort, 100% of the total

Now consider that exact same project, except that you planned to build it from scratch: 0% IP reuse, 100% new design. Assume you expect the exact same number of timing issues (perhaps not realistic, but let's hold it constant for discussion purposes). Exact same application; developers equally skilled (not the exact same skills, but both considered at the same level of skill, i.e. both experts, or both novices, for the type of work they will be doing); exact same interfaces. Everything that affects the effort needed is held constant except the amount of IP reuse and new design. What would the new breakout be? Here's my guess:

Preliminary Design: 3 months effort, 27% of the total
Detailed Design: 4 months effort, 36%
Implementation: 4 months effort, 36%

Then, assume 50% IP reuse, 50% new design. My guess:

Preliminary Design: 2 months effort, 22%
Detailed Design: 3 months effort, 33% of the total
Implementation: 4 months effort, 45% of the total
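For what it's worth, a straight linear interpolation between my 0% and 100% endpoints gives smaller numbers than my 50% guesses above, so apparently my gut assumes savings are sublinear in reuse. A sketch (all numbers are my guesses, not data):

```python
# My guessed endpoints, in months of effort per phase.
SCRATCH = {"prelim": 3.0, "detailed": 4.0, "implementation": 4.0}     # 0% reuse
FULL_REUSE = {"prelim": 0.0, "detailed": 0.0, "implementation": 4.0}  # 100% reuse

def interpolate(reuse_fraction):
    """Linear blend between the from-scratch and full-reuse endpoints."""
    return {
        phase: SCRATCH[phase] * (1 - reuse_fraction)
               + FULL_REUSE[phase] * reuse_fraction
        for phase in SCRATCH
    }
```

`interpolate(0.5)` gives 1.5 / 2.0 / 4.0 months versus my guesses of 2 / 3 / 4, so I'm implicitly saying the first 50% of reuse saves less than the second 50% would.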


Basically, my question is:

Let's assume you sat down at a computer, opened a "cost model" that supposedly estimates FPGA development costs, and described your project. You told the model the application, how many timing issues you expect to encounter, your interface complexity, developer skill levels, where your IP comes from, and the predicted size (logic elements, gates, HDL lines of code, or whatever makes the most sense to you). You also say it will be 100% new design (i.e. 0% reuse). You click RUN, and it gives you some numbers.

How would you expect those numbers to change if you reran the model with 50% new design? How about 0%? Everything else remains unchanged. If it depends on specific scenarios, perhaps you could pick one common scenario and describe it, then explain why it would be wrong in another scenario, so I can see your thought process.

I truly appreciate your help; I'd be screwed without folks way smarter/more knowledgeable than me helping to figure this out. You're a lifesaver!
 

As you can see, it's not an exact science.
My scenario wasn't quite as you described. There were design changes that required prelim and detailed work; those were mostly handled by the other 2 engineers while I worked in parallel on the implementation.

But usually, putting more effort in stages 1 and 2 will reduce the effort needed in stage 3.

When you go to cost a project, I highly recommend you get input from the engineers likely to be involved, or get engineers involved who have experience with the IP to be used. They will have the best idea of how long it will take to generate new IP or link existing IP together, and of the timing/resource constraints the project may have.
 
Thanks TrickyDicky.

So the effect of design reuse depends on how easy it is to link the IP together into something that works for your project. For IP that is a perfect fit for your design, it could reduce stage 1 and 2 costs to near 0. For IP that is difficult to link together, has bad documentation, etc., it could cost the same as building from scratch. It is usually somewhere in the middle.

So basically, my cost model needs to ask: "For the portions where design has been reused, rate the difficulty of linking it with the rest of the design on a scale of 1 to 10, where 1 means extremely easy (near-zero design effort) and 10 means no better than designing from scratch." Does that sound like a good approach?
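In model terms, I'd map that rating to an effort multiplier for the reused portion, something like this sketch (the linear mapping is just a placeholder I'd calibrate later):

```python
def reuse_multiplier(rating):
    """Map a 1-10 integration-difficulty rating to an effort multiplier
    for the reused portion: 1 -> 0.0 (near-free), 10 -> 1.0 (no better
    than from scratch). Linear placeholder, to be calibrated."""
    if not 1 <= rating <= 10:
        raise ValueError("rating must be between 1 and 10")
    return (rating - 1) / 9

def design_effort(scratch_days, reuse_fraction, rating):
    """New-design portion at full cost, plus the reused portion scaled
    by the difficulty multiplier."""
    return (scratch_days * (1 - reuse_fraction)
            + scratch_days * reuse_fraction * reuse_multiplier(rating))
```

So a 100-day from-scratch phase with 50% reuse rated 10 still costs 100 days, while the same reuse rated 1 costs 50.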

For stage 3, you just look at your options (design new, reuse this internal IP, buy this IP from a third party, or whatever). Each option could have different timing/resource constraints. Can engineers generally predict what the timing/resource constraints would be for each option, if asked to rate them during the early prelim design phase?
 

That sounds like it's getting there.
For stage 3, another factor comes in: do you already have a target device (because it's a board upgrade, a new design on an existing board, or there are cost constraints in terms of money or power)?

It's usually fairly easy to work out the memory and multiplier requirements for a given algorithm if the IP doesn't yet exist (provided you have your algorithm architected in a hardware-"friendly" format). For IP that does exist, you should know roughly all the resource costs. And as you know your desired data rate, you can work out what clock speeds you require, and therefore you can guess how hard the fit will be given the resource usage and clock speed. Pushing FPGAs over 70% utilisation is going to increase compile times quite a bit as well as starve the routing resources, which will make the job harder with a faster clock.
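As a back-of-envelope sketch of that reasoning (the 70% utilisation and near-Fmax thresholds are the rules of thumb above, not vendor figures):

```python
def required_clock_mhz(data_rate_msamples, samples_per_clock=1):
    """Clock needed to sustain a data rate: if you process
    samples_per_clock samples every cycle, the fabric clock must be at
    least data_rate / samples_per_clock."""
    return data_rate_msamples / samples_per_clock

def fit_difficulty(utilisation, clock_mhz, practical_fmax_mhz):
    """Crude fit-difficulty guess: >70% utilisation starves routing and
    slows compiles; a clock within ~20% of the device's practical Fmax
    makes timing closure hard. Both thresholds are rules of thumb."""
    score = 0
    if utilisation > 0.70:
        score += 1
    if clock_mhz > 0.8 * practical_fmax_mhz:
        score += 1
    return ("easy", "moderate", "hard")[score]
```

For example, 200 Msamples/s processed 2 samples per clock needs a 100 MHz fabric clock; at 50% utilisation on a device comfortable at 300 MHz, that's an easy fit.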
 
FPGA dev cost

1) Ask your engineer
2) Multiply by 2
3) Add another week just because
 

Haha, that's not the first time I've heard that, Ice-Tea :) Must be true. Hope you guys had a great Thanksgiving!

For that scale of 1 to 10 I talked about....

What's a realistic range that 90% of projects would fit into? 2 to 9? 4 to 8? 1 to 5?
I would guess I need different ranges depending on whether you're reusing soft cores, firm cores, or hard cores (and of course you can mix and match).
What's a good default value? I.e., if you were asked to rate thousands of real projects from 1 to 10, what would be the most common value?

Also, from what I've read, IP reuse usually doesn't save much effort in the prelim design phase. You still need to think about where the IP fits in from a high-level perspective, create test benches, etc. The behavioral HDL coding effort gets replaced by the effort of finding/selecting/vetting the IP to be reused, plus reworking your design a bit to make sure it all fits together. The prelim design effort could be somewhat reduced, but really, detailed design/verification is where you save your effort when reusing IP. Does that sound right?
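In model terms, that would look something like this: in prelim design, reuse swaps coding effort for IP vetting effort (so little net saving), while detailed design savings scale strongly with reuse. All the fractions here are guesses to be calibrated:

```python
def prelim_effort(base_days, reuse_fraction, coding_share=0.5, vetting_cost=0.8):
    """Behavioral-coding effort on the reused portion is replaced by
    find/select/vet effort at `vetting_cost` of its size; the rest of
    the phase (architecture work, test benches) is unchanged."""
    coding = base_days * coding_share
    other = base_days - coding
    return (other
            + coding * (1 - reuse_fraction)            # still hand-coded
            + coding * reuse_fraction * vetting_cost)  # vetting instead

def detailed_effort(base_days, reuse_fraction, max_saving=0.8):
    """Verifying proven IP is assumed far cheaper than verifying new
    HDL, so savings scale strongly with the reuse fraction."""
    return base_days * (1 - max_saving * reuse_fraction)
```

With these guesses, 100% reuse trims a 30-day prelim phase only to 27 days, but cuts a 65-day detailed design phase to 13.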

Thanks again!
 

It is true. Even when you're being pessimistic in your scheduling, you should multiply by 2. If the design is very complex, with multiple clock domains and interfaces communicating with each other and FW controlling everything, then you should also increase the integration time, as you'll inevitably find bugs during integration that will be very difficult to track down.

Your cost model project seems to stink of upper management with no clue about design engineering trying to come up with a way to tell engineers how long it will take to do a job.

The problem with these types of models is that those same clueless types will make it the holy grail of project scheduling, and if a project with morphing requirements and/or unknowns that made it hard to estimate goes over budget, they start panicking and micromanage the project to death. Anyone for 3 progress meetings a day that run about an hour long (with all the BS that gets brought up)? Hmm, that leaves you with 5 hours of work time. Of course they'll want you to work 10 hours a day to make up for the poorly written specs... so 13-hour days... Oh wow, I want to work for that company. NOT!
 

Ya, unfortunately when stupid people use a cost model they don't understand, you get stupid results. Of course, it doesn't help if the model itself is poorly designed. I'll try to get that part right, and I'll try to make sure it doesn't get used to swamp the engineers with BS meetings and unrealistic expectations.

We'll definitely consider risk and requirements volatility, which we've studied a lot and have in-house experts on. What we know little about is FPGA development. The first step is putting together a model with a basic theoretical idea of how it should work; the next is to look at a bunch of real, completed projects, see where the model goes wrong, calibrate it, etc.
 

Maybe that isn't such a bad place to work after all... Seems like someone has an idea of how something like this should be used, and accepts that it might require tuning and may just break outright on occasion.
 
