
Image acquisition from camera to FPGA using GigE Vision or a similarly fast interface

Status
Not open for further replies.

wolf12
Hello,

Can you suggest a fast way of acquiring images into an FPGA for image processing (GigE Vision or something similarly fast), or is there some kind of converter?

I searched for GigE IP cores but couldn't find free ones. Even if I buy one, I'm not sure whether it is troublesome to connect it to the FPGA directly, or whether it will waste FPGA resources, since I intend to do the image processing on the FPGA.

My application is Machine Vision.

Thank you.
 

I've seen DVI and SDI done in FPGAs, so GigE Vision should be no problem.
But unless you make your own core, you will probably have to purchase one.
 

I don't have enough experience to make my own GigE Vision core; I have read on this forum that it's a difficult task. I'm currently defining the scope of my undergraduate image-processing project, and I'm not sure whether adding GigE Vision would amount to a separate project in itself. Can you give your suggestion on this?

Thank you.
 

I highly recommend you find an SD video decoder chip rather than using a high-speed interface like GigE. It's then quite simple to get video into your FPGA, as you can access the video data directly and don't have to strip some form of packetisation off your packets.

SDI would be fairly straightforward too, but the external chips and cameras required are a little more expensive. With an SD video decoder the cameras are very cheap.

Something like this would be perfect:
https://www.analog.com/static/imported-files/data_sheets/ADV7180.pdf

A quick Google search tells me you can get the parts for under $10/chip.
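To give a feel for why the decoder-chip route is simple: the ADV7180 emits an ITU-R BT.656 byte stream, in which each video line is bracketed by FF 00 00 XY sync codes carrying field/blanking/SAV-EAV flags. Here is a small host-side Python model of the sync detection an FPGA front end would perform; the sample byte stream is made up for illustration.

```python
# Host-side model of BT.656 timing-code extraction (what an FPGA front
# end does on the ADV7180's output). The FF 00 00 XY sync sequence and
# the F/V/H bit positions come from ITU-R BT.656; the toy stream below
# is invented for illustration.

def find_sync_codes(stream):
    """Scan a byte stream for BT.656 SAV/EAV codes (FF 00 00 XY)."""
    codes = []
    for i in range(len(stream) - 3):
        if stream[i] == 0xFF and stream[i + 1] == 0x00 and stream[i + 2] == 0x00:
            xy = stream[i + 3]
            codes.append({
                "offset": i,
                "field": (xy >> 6) & 1,      # F: field 1 or 2
                "vblank": (xy >> 5) & 1,     # V: in vertical blanking
                "eav": bool((xy >> 4) & 1),  # H: 0 = SAV, 1 = EAV
            })
    return codes

# Toy stream: SAV (XY=0x80), four pixel bytes, then EAV (XY=0x9D)
stream = bytes([0xFF, 0x00, 0x00, 0x80, 0x10, 0x80, 0x10, 0x80,
                0xFF, 0x00, 0x00, 0x9D])
for code in find_sync_codes(stream):
    print(code)
```

In hardware the same logic is a 4-byte shift register and a comparator, which is why "access the video data directly" is such a low-effort path compared with unwrapping Ethernet frames.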
 

I'm sorry, I hadn't provided enough information and wasted your time, but I will find your information useful for another application. The cameras for the project can be expensive (I get the cameras for free). The images are microscopic, and I will not need video, just still images; for example a Basler camera at around 2000 x 2400 pixels. I need a similarly fast protocol, like USB 3.0 Vision, Camera Link or FireWire, or an easy way to do GigE Vision.

If I cannot find one, I am thinking of working with offline images. Is there a better way to keep the speed with less complexity?
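For a rough feel of what the interface choice costs at this frame size, a back-of-envelope bandwidth check; the per-pixel depth and the usable payload rates below are my approximations, not figures from the thread.

```python
# Back-of-envelope frame-rate limits for a 2000 x 2400 sensor over
# common camera interfaces. Pixel depth and usable payload rates are
# assumptions (nominal link rate minus typical protocol overhead).
width, height, bytes_per_pixel = 2000, 2400, 1   # assume 8-bit mono
frame_bytes = width * height * bytes_per_pixel   # 4.8 MB per image

links = {                                        # approx. bytes/s payload
    "GigE Vision (1 Gb/s)": 115e6,
    "USB 3.0 (5 Gb/s)": 400e6,
    "Camera Link base (2.04 Gb/s)": 255e6,
}
for name, rate in links.items():
    print(f"{name}: ~{rate / frame_bytes:.0f} frames/s max")
```

Even over plain GigE a full frame moves in tens of milliseconds, so for occasional still captures any of these links is adequate; the protocol complexity, not the raw bandwidth, is the real differentiator.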

Thank you.
 

I was staying out of this until some useful information was posted.

As you are using a camera that has a Gigabit Ethernet connection, you might try using the OpenCores GigE core:
https://opencores.org/project,ethernet_tri_mode
Typically I avoid using cores from there as they are usually poorly implemented (a lot of them appear to be student projects) or lack documentation, but then you get what you pay for (in this case nothing much).

As GigE Vision is transferred over UDP (https://en.wikipedia.org/wiki/GigE_Vision), there isn't any technical reason you would have to purchase a GigE core (unless the OpenCores one doesn't work). The biggest obstacle is that the GigE Vision protocol specification can't be freely disseminated, as it is licensed.

You might have better luck using USB 3.0 (not Vision) or Firewire as they are somewhat ubiquitous due to being used on consumer products and aren't targeted specifically to commercial camera applications.

Regards
 
2000 x 2400 is a large, non-standard video size. What kind of frame rate and pixel clock are you looking at? HD video at 1080p60 runs at a 148.5 MHz pixel clock.
Camera Link is a nice, easy interface. It just provides data and sync lines, similar to the video decoder I pointed you at.

What exactly is the application? Do you have any feedback in the system limiting your latency?
Why not provide more of the spec so we can help further?
 

Tricky, I think you missed the OP's comment about transferring still pictures, not video.
 


Is this the specification?

[weblink to copyrighted pdf document, deleted by moderator]

Can I use it in an industrial environment? I mean is it illegal? Can I try before I buy?
 

It certainly looks like the standard, posted online in violation of the copyright and, from what I could tell, the licensing.

I would check with your legal department before implementing a standard that requires licensing to use.

Regards
 
I'm sorry; my project is an undergraduate final-year project, but I chose to do it in collaboration with industry, and I have signed an NDA. As the idea of using an FPGA is to make everything faster, the interface between the camera and the FPGA should be fast too, to get the photo quickly. Can I PM you?
 

If all you are transferring is still images, why are you bothering with GigE Vision or Camera Link? These are real-time video interfaces.
 

For industrial cameras, real-time images are a prerequisite. For example, when inspecting components in a production process, the components are transported on the conveyor belts at high speed. For a precise inspection, the camera must acquire the images as quickly as the components are being transported. Timing this precisely requires low latency: a small time delay between receiving the trigger signal and acquiring the image. Further, the time delay must not vary; no jitter can affect the moments of image acquisition. For an application with high image rates (e.g. 300 images per second), the required latency times can be only microseconds.

I copied this from here,
https://www.baslerweb.com/Technologies_Real_time_capability-145638.html

My application requires this speed, and they already use these cameras.

Thank you.
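A quick sanity check on those numbers (my arithmetic, using an approximate usable GigE payload rate): at 300 images/s the entire trigger-to-acquisition budget per image is about 3.3 ms, while moving one full 8-bit 2000 x 2400 frame over GigE alone takes roughly an order of magnitude longer.

```python
# Timing budget implied by the Basler quote versus the transfer time of
# one full frame. The GigE payload rate is an approximate figure.
images_per_second = 300
frame_period_us = 1e6 / images_per_second        # budget per image, ~3333 us
frame_bytes = 2000 * 2400                        # 8-bit mono, full frame
gige_payload = 115e6                             # rough usable GigE bytes/s
transfer_us = frame_bytes / gige_payload * 1e6   # time to move one frame
print(f"budget {frame_period_us:.0f} us, GigE transfer {transfer_us:.0f} us")
```

This is why high-rate inspection setups typically combine such cameras with regions of interest, binning, or faster links rather than streaming full frames at the peak trigger rate.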
 
