
FPGA-based visual tracking

Status
Not open for further replies.

raymart07

Hi. :)
Can someone help me? I have a project on visual tracking using the DE2-115 FPGA development board, and I need Verilog code for it. Please help me. Thank you. :)
 

People are not just going to give you code - what do you need help with?
 
Good day! We need to design an FPGA-based object tracking project. The target object's movement is measured as coordinate points relative to the centroid in the first frame. The output must be displayed on an 8x8 LED matrix. Thank you! We hope you can help us as soon as possible.
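As a rough illustration of the mapping described above, a centroid measured in frame coordinates can be quantized onto an 8x8 LED matrix as sketched below. This is a hypothetical helper, not part of the posted project; the 640x480 frame size is an assumption taken from the camera setup later in the thread.

```python
def centroid_to_matrix(cx, cy, width=640, height=480, n=8):
    """Quantize a pixel centroid (cx, cy) onto an n x n LED matrix.

    Each axis is split into n equal bins, so the lit LED follows the
    object's position. Assumes 0 <= cx < width and 0 <= cy < height.
    """
    col = min(int(cx) * n // width, n - 1)   # horizontal bin
    row = min(int(cy) * n // height, n - 1)  # vertical bin
    return row, col
```

For example, a centroid at the frame center (320, 240) lands on cell (4, 4), and the corners map to (0, 0) and (7, 7).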
 

That sounds like a project spec - what have you done so far and what problems do you have?
 

I have a project regarding visual tracking using the DE2-115 FPGA development board. I need Verilog code for that.

Source code for digital image processing is easy to find on the Web, but most of the available implementations are written in C/C++.

You could consider employing a soft-core processor (itself written in Verilog) in your design; further development would then be confined to firmware programming.


 

Hi, good day. We have a MATLAB file (M-file) and we want to convert it to Verilog. There's a problem in our testbench; can you please help us? Thank you, God bless. :) Here is the M-file.

Code:
redThresh = 0.25;
vidDevice = imaq.VideoDevice('winvideo', 1, 'YUY2_640x480', ...
                        'ROI', [1 1 640 480], ...
                        'ReturnedColorSpace', 'rgb');
vidInfo = imaqhwinfo(vidDevice);
hblob = vision.BlobAnalysis('AreaOutputPort', false, ...
                                'CentroidOutputPort', true, ...
                                'BoundingBoxOutputPort', true, ...
                                'MaximumBlobArea', 3000, ...
                                'MaximumCount', 4);
hshapeinRedBox = vision.ShapeInserter('BorderColor', 'Custom', ...
                        'CustomBorderColor', [1 0 0], ...
                        'Fill', true, ...
                        'FillColor', 'Custom', ...
                        'CustomFillColor', [1 0 0], ...
                        'Opacity', 0.4);
htextins = vision.TextInserter('Text','Number of Red Object: %2d', ...
                        'Location', [7 2], ...
                        'Color', [1 0 0], ...
                        'FontSize', 12);
htextinsCent = vision.TextInserter('Text','* X:%4d, Y:%4d', ...
                    'LocationSource', 'Input port', ...
                    'Color', [1 1 0], ...
                    'FontSize', 14);
hVideoIn = vision.VideoPlayer('Name', 'izel',  ...
                    'Position', [100 100 vidInfo.MaxWidth+20 vidInfo.MaxHeight+20]);
nFrame = 0;
while(nFrame < 200)
    rgbFrame = step(vidDevice);                      % grab one frame
    % Emphasize red: subtract the grayscale image from the red channel
    diffFrame = imsubtract(rgbFrame(:, :, 1), rgb2gray(rgbFrame));
    diffFrame = medfilt2(diffFrame, [3 3]);          % 3x3 median filter
    binFrame = im2bw(diffFrame, redThresh);          % threshold to binary
    binFrame = bwareaopen(binFrame, 800);            % drop blobs under 800 px
    [centroid, bbox] = step(hblob, binFrame);        % blob centroids and boxes
    centroid = uint16(centroid);
    rgbFrame(1:20,1:165,:) = 0;                      % clear a strip for the text overlay
    vidIn = step(hshapeinRedBox, rgbFrame, bbox);    % draw the bounding boxes
    for object = 1:1:length(bbox(:,1))
        centX = centroid(object,1); centY = centroid(object,2);
        vidIn = step(htextinsCent, vidIn, [centX centY],[centX-6 centY-9]);
    end
    vidIn = step(htextins, vidIn, uint8(length(bbox(:,1))));  % object count
    step(hVideoIn, vidIn);                           % display the frame
    nFrame = nFrame+1;
release(hVideoIn);
release(vidDevice);
clear all;
clc
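For reference, the per-frame processing in the script above (red channel minus grayscale, median filter, threshold, small-blob removal, centroid extraction) can be approximated in Python with NumPy/SciPy. This is a rough sketch of the same algorithm, not the MATLAB toolbox implementation; the grayscale weights and the 8-bit threshold scaling are assumptions matching `rgb2gray` and `im2bw` defaults.

```python
import numpy as np
from scipy import ndimage

def red_centroids(rgb, thresh=0.25, min_area=800):
    """Approximate the per-frame steps of the MATLAB script:
    red minus grayscale, 3x3 median filter, threshold, drop small
    blobs, then return the (x, y) centroid of each remaining blob."""
    rgb = rgb.astype(np.float64)
    # Grayscale with the usual luminance weights (as in rgb2gray)
    gray = 0.2989 * rgb[..., 0] + 0.5870 * rgb[..., 1] + 0.1140 * rgb[..., 2]
    diff = np.clip(rgb[..., 0] - gray, 0, None)      # emphasize red
    diff = ndimage.median_filter(diff, size=3)       # 3x3 median filter
    mask = diff > thresh * 255.0                     # assumes 8-bit input
    labels, n = ndimage.label(mask)                  # connected components
    centroids = []
    for i in range(1, n + 1):
        ys, xs = np.nonzero(labels == i)
        if xs.size >= min_area:                      # like bwareaopen(.., 800)
            centroids.append((xs.mean(), ys.mean()))
    return centroids
```

A synthetic pure-red square in an otherwise black frame should yield a single centroid at the square's center.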
 

What are we supposed to do with the MATLAB code?
Where is the testbench? What's the problem?
 

We need to build an object tracking program on an FPGA using the DE2-115 board. We found MATLAB code for red object tracking, and it seems to do exactly what we want to implement on the FPGA. We found out that an M-file can be converted to Verilog, but when we try to convert it, our testbench does not work because it lacks some inputs. Can you help us with the testbench, and also with the conversion of the M-file to Verilog? Thank you!

Here is the M File for the red object tracker:
Code:
function hVideoIn = izel_red_tracker(video)
redThresh = 0.25;
vidDevice = imaq.VideoDevice('winvideo', 1, 'YUY2_640x480', ...
                        'ROI', [1 1 640 480], ...
                        'ReturnedColorSpace', 'rgb');
vidInfo = imaqhwinfo(vidDevice);
hblob = vision.BlobAnalysis('AreaOutputPort', false, ...
                                'CentroidOutputPort', true, ...
                                'BoundingBoxOutputPort', true, ...
                                'MaximumBlobArea', 3000, ...
                                'MaximumCount', 4);
hshapeinRedBox = vision.ShapeInserter('BorderColor', 'Custom', ...
                        'CustomBorderColor', [1 0 0], ...
                        'Fill', true, ...
                        'FillColor', 'Custom', ...
                        'CustomFillColor', [1 0 0], ...
                        'Opacity', 0.4);
htextins = vision.TextInserter('Text','Number of Red Object: %2d', ...
                        'Location', [7 2], ...
                        'Color', [1 0 0], ...
                        'FontSize', 12);
htextinsCent = vision.TextInserter('Text','* X:%4d, Y:%4d', ...
                    'LocationSource', 'Input port', ...
                    'Color', [1 1 0], ...
                    'FontSize', 14);
hVideoIn = vision.VideoPlayer('Name', video,  ...
                    'Position', [100 100 vidInfo.MaxWidth+20 vidInfo.MaxHeight+20]);
nFrame = 0;
while(nFrame < 200)
    rgbFrame = step(vidDevice);
    diffFrame = imsubtract(rgbFrame(:, :, 1), rgb2gray(rgbFrame));
    diffFrame = medfilt2(diffFrame, [3 3]);
    binFrame = im2bw(diffFrame, redThresh);
    binFrame = bwareaopen(binFrame, 800);
    [centroid, bbox] = step(hblob, binFrame);
    centroid = uint16(centroid); 
    rgbFrame(1:20,1:165,:) = 0;
    vidIn = step(hshapeinRedBox, rgbFrame, bbox);
    for object = 1:1:length(bbox(:,1))
        centX = centroid(object,1); centY = centroid(object,2);
        vidIn = step(htextinsCent, vidIn, [centX centY],[centX-6 centY-9]);
    end
    vidIn = step(htextins, vidIn, uint8(length(bbox(:,1))));
    step(hVideoIn, vidIn); 
    nFrame = nFrame+1;
end
release(hVideoIn);
release(vidDevice);
clear 
clc

Here is the testbench:
Code:
vid = videoinput('winvideo', 1, 'MJPG_640x480');  % open the webcam
src = getselectedsource(vid);

vid.FramesPerTrigger = 1;    % capture one frame per trigger

preview(vid);                % live preview window

start(vid);
video = vid;


Please help us! Thank you!
 

