
Decoding color from a digitized NTSC composite video signal


jokkebk
Help from experienced digital signal processing specialists is needed! I recently did a nice hack where I digitized the composite video signal from a Raspberry Pi using a USB oscilloscope and decoded it into B/W video on the computer:

https://codeandlife.com/2012/07/31/realtime-composite-video-decoding-with-picoscope/

I just recently got a better scope and was able to increase the capture quality much further - with a larger buffer, I can get as much as 4000-8000 samples per scanline (250-500 MSps sampling rate), resulting in a very usable B/W "composite video console" on the PC:

**broken link removed**

Now I'd really like to add color. The problem is that my maths studies are from 10 years ago, and I haven't made much progress so far. The scanline signal looks like this:

**broken link removed**

There's a reference 3.579545 MHz color burst (~8 cycles) at the beginning of each scanline, and what follows is luminance data, overlaid with color data that is added to the luminance by modulating two 3.579545 MHz carrier waves 90 degrees apart (I have understood that their superposition creates a single sine wave varying in phase and amplitude).
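
(As a sanity check on that last point, it is the standard trigonometric identity, with U and V as the two modulating amplitudes:

U * sin(wt) + V * cos(wt) = sqrt(U^2 + V^2) * sin(wt + atan2(V, U))

so the two quadrature carriers really are equivalent to one sine wave whose amplitude carries the saturation and whose phase carries the hue.)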

However, I'm not sure how to decode (separate and normalize) the luminance and the two color components from the single waveform. Creating digital filters with FFT + iFFT is one option (I don't know much about that either), but I was wondering whether there might be some shortcut around this. I can successfully "tune" a sine wave digitally to the same phase as the reference color burst, but so far I haven't figured out how to use that synced sine wave to extract the "resonant" color data digitally.

The solution wouldn't need to be very accurate - even 10-20 color vectors across the scanline would suffice for my purposes. :) Any ideas and pointers would be welcome!
 

In my view, the video signal you mentioned falls broadly into the category called CVBS, in other words "composite video". These video signals most of the time use QAM (quadrature amplitude modulation) - refer to the link below:

https://www.intersil.com/content/dam/Intersil/documents/an96/an9644.pdf

So what you need to do is get some chip to do this duty - there are various CVBS decoders available on the market for you. Refer to:

https://www.google.co.in/search?q=c...pw.r_qf.&fp=9b6d0d988c44e1bb&biw=1366&bih=606




Good Luck
 
In my view, the video signal you mentioned falls broadly into the category called CVBS, in other words "composite video". These video signals most of the time use QAM (quadrature amplitude modulation) - refer to the link below

Thanks for the link, it was quite informative! Maybe implementing a high-pass filter would be one option to get a rudimentary color signal out, or even a trap filter, using Fourier transforms.

So what you need to do is get some chip to do this duty - there are various CVBS decoders available on the market for you

I'm going for a fully software-based approach if at all possible, without any external components - just plug the composite video cable into the scope and decode everything in software (since analog components can do it with the same signal, it must be possible digitally as well). Preferably I'd also write the decoding algorithms myself, but using external FFT/iFFT libraries to implement digital filters is not ruled out.

If I were able to separate the color band from the signal in software, does anyone know how the color information could then be decoded from it, also in software?
 

OK, let me understand what you want to do. The CVBS signal comes in analog form; what you are saying is that you have digitized it and kept it as a file or files on the PC - is that what you want to do? In that case you need to write a QAM demodulator with a lot of signal-based conditions. Or how are you connecting the signal to the PC? (I am assuming you want to use software on the PC.)

with regards,
milind
 

OK, let me understand what you want to do. The CVBS signal comes in analog form; what you are saying is that you have digitized it and kept it as a file or files on the PC - is that what you want to do? In that case you need to write a QAM demodulator with a lot of signal-based conditions. Or how are you connecting the signal to the PC? (I am assuming you want to use software on the PC.)

Yes, I'm digitizing it with a Picoscope 3206B, which basically gives me the signal as a C array of short values - over 1 million samples, taken every 2 ms (or 4, 8, 16, ...). I already recognize the sync pulses and draw scanlines using the luminance information, but I would also like to add rough color info decoded from the 2000-4000-sample scanlines.
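
(For the curious, the sync detection itself is little more than a threshold scan over the sample buffer. A rough sketch - the threshold and run-length values are illustrative, not my actual code:

Code:
#include <stddef.h>

/* Rough sketch of horizontal sync detection by thresholding.
 * Sync tips are the lowest level in the composite signal, so a
 * scanline boundary is wherever the samples stay below a
 * threshold for roughly the h-sync duration (~4.7 us).
 * SYNC_LEVEL and MIN_SYNC_RUN depend on the scope settings. */
#define SYNC_LEVEL   -5000  /* ADC counts below blanking level   */
#define MIN_SYNC_RUN  1000  /* ~4 us worth of samples at 250 MSps */

/* Return the index just after the next h-sync pulse, or -1. */
long find_next_sync(const short *buf, size_t n, size_t start)
{
    size_t run = 0;
    for (size_t i = start; i < n; i++) {
        if (buf[i] < SYNC_LEVEL) {
            run++;
        } else {
            if (run >= MIN_SYNC_RUN)
                return (long)i;  /* rising edge ends the sync tip */
            run = 0;
        }
    }
    return -1;
}
)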

I've already calculated a 3.579545 MHz reference sine wave, which I sync to the color burst (at the beginning of every scanline) with a least-squares algorithm, but I'm unsure how to use it to extract the color subcarrier phase and amplitude at different points of the scanline, because the subcarrier is mixed with the luma value. A crude way might be to calculate the theoretical peak (max, min) locations and use just those points to estimate the color subcarrier amplitude, but I guess someone with better knowledge of digital signal processing might have tips on techniques that could be employed.
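
(Here is roughly what that least-squares fit boils down to, as a sketch rather than my actual code: fitting a*sin + b*cos to the burst over a whole number of subcarrier cycles reduces to two correlations, and the burst phase is then atan2(b, a).

Code:
#include <math.h>
#include <stddef.h>

/* Least-squares phase estimate for the color burst. Fitting
 * a*sin(w*i) + b*cos(w*i) to the burst samples over a whole
 * number of cycles reduces to two correlations, because sine
 * and cosine are orthogonal over full periods.
 * w = 2*pi*3579545.0 / sample_rate, in radians per sample;
 * samples should have the blanking level subtracted first. */
double burst_phase(const short *burst, size_t n, double w)
{
    double a = 0.0, b = 0.0;
    for (size_t i = 0; i < n; i++) {
        a += burst[i] * sin(w * (double)i);
        b += burst[i] * cos(w * (double)i);
    }
    return atan2(b, a);  /* burst phase offset, in radians */
}
)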

The topic is quite complex, as it requires knowledge of both QAM and purely digital signal processing - my maths courses 10 years ago just brushed these subjects. But I'm interested in learning, so even rough theoretical pointers are appreciated.
 

One way to look at this problem is as follows. The information you capture represents one frame of the video (I mean a single image of the video) with color content. The target problem is: you have sample data in the form of a sequence, and you need to get a two-dimensional signal out of it with at least three dimensions per pixel (its R, G, B values). The first piece of information you will need is the resolution of the video (my guess is that you are mostly dealing with 480i or 576i). I am not sure how to extract this from the CVBS signal, but you should refer to an analog television document for how to decode the pixel information from the luma and chroma values. I think this will be our problem statement.

refer this -

https://www.google.co.in/url?sa=t&r...n4HIDA&usg=AFQjCNEVlQwYRlGMxBk_5mDgLVRnDGhiwQ

**broken link removed**

It is a very complex problem, but very, very interesting for me too. Share your progress on the forum so I will also be able to learn from you, and we will be able to get the frame information out of it. I am also working on it. Let's sync up tomorrow - I need to finish some assignment for my company today.

Good Luck
 

Yes, the information you linked is about where I am now. The NTSC signal I'm decoding consists of 525 scanlines, which form two fields of about ~242 visible lines each, interlaced to form a single image. The composite video signal does not have a fixed horizontal resolution; instead, one can think of it as a continuous luminance signal upon which the chrominance signal is modulated. My program has a digitized version of this signal with about 2000 samples per scanline. If I just interpret these as luminance, I get quite a nice B/W image, but getting the chroma out of the digital signal is hard.
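
(For concreteness, the standard NTSC numbers - the subcarrier is rigidly locked to the line rate, which is what makes a software decoder feasible in the first place:

Code:
/* Standard NTSC timing constants. The color subcarrier is locked
 * to the horizontal line rate: fsc = 227.5 * fH, i.e. exactly
 * 227.5 subcarrier cycles per scanline. */
#define NTSC_FSC        3579545.0               /* subcarrier, Hz */
#define NTSC_LINE_RATE  (NTSC_FSC / 227.5)      /* ~15734.27 Hz   */
#define NTSC_LINE_TIME  (1.0 / NTSC_LINE_RATE)  /* ~63.56 us      */
#define NTSC_LINES      525                     /* per frame      */
#define NTSC_FIELD_RATE (NTSC_LINE_RATE * 2.0 / NTSC_LINES) /* ~59.94 Hz */
)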

Unfortunately, all the information on the net is concerned with analog separation of the luminance and chrominance info, using analog filters, phase and amplitude detectors and whatnot. I have so far encountered no sources that explain how to create trap filters or phase/amplitude detectors digitally. The data is there, but the methods to extract it are not.

My initial attempt was based on the assumption that we have a locally fairly static luminance level L, over which a chrominance signal C is overlaid using a sine wave, so the total signal would be:

L + C * sin(x / wavelen * 2 * Pi)

If we multiply the above signal with the in-phase reference sin(x / wavelen * 2 * Pi) and integrate over x through the interval [0, wavelen], assuming the luminance is fairly constant, we get:

integrate( L * sin(x / wavelen * 2 * Pi) + C * sin^2(x / wavelen * 2 * Pi), {x, 0, wavelen} ) = C * wavelen / 2

Thus I could get C out of the wave, and this would even work over any interval of length wavelen. However, for some reason it just generated pure rubbish - maybe I should not run the algorithm for every point using the interval [x - wavelen/2, x + wavelen/2], but take full periods instead. Or something. The problem is that even the above is a large simplification, as both the phase and the amplitude of the color carrier are modulated (or maybe there are just two sine waves 90 degrees apart - I have not checked whether these two conceptualizations are actually the same).
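
(In discrete form, the multiply-and-average idea looks something like this - a sketch of what I'm attempting, not my actual code. Averaging over exactly one subcarrier period kills the roughly constant luma term and the double-frequency terms, leaving half of each chroma amplitude:

Code:
#include <math.h>
#include <stddef.h>

/* Sketch of multiply-and-average chroma demodulation. Over one
 * full subcarrier period, sin and cos average to zero (removing
 * the locally constant luma L), while sin^2 and cos^2 average to
 * 1/2, so each correlation yields half the chroma amplitude.
 * 'w' is radians per sample, 'phase' comes from the burst fit,
 * 'period' is samples per subcarrier cycle. */
void demod_chroma(const short *line, size_t i, double w, double phase,
                  size_t period, double *u, double *v)
{
    double su = 0.0, sv = 0.0;
    for (size_t k = 0; k < period; k++) {
        double c = w * (double)(i + k) + phase;
        su += line[i + k] * sin(c);
        sv += line[i + k] * cos(c);
    }
    *u = 2.0 * su / (double)period;  /* in-phase chroma amplitude   */
    *v = 2.0 * sv / (double)period;  /* quadrature chroma amplitude */
}
)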
 

Ooh, I think I found something interesting on these topics from Wikipedia, namely the digital filter article:

https://en.wikipedia.org/wiki/Digital_filter

Especially useful seem to be the pseudocode implementations of low-pass and high-pass filters that work on discrete samples:

https://en.wikipedia.org/wiki/Low-pass_filter#Discrete-time_realization
https://en.wikipedia.org/wiki/High-pass_filter#Discrete-time_realization

I think I'll try to implement these next week to see if I can separate the luminance and chrominance information, or at least make the color data more amenable to interpretation. :)
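
(The low-pass one is only a few lines - a sketch following the article's discrete-time pseudocode:

Code:
#include <stddef.h>

/* First-order discrete-time low-pass filter, as in the Wikipedia
 * article: y[i] = y[i-1] + alpha * (x[i] - y[i-1]), where
 * alpha = dt / (RC + dt) and RC = 1 / (2*pi*fc). */
void lowpass(const double *x, double *y, size_t n,
             double fc, double sample_rate)
{
    double dt = 1.0 / sample_rate;
    double rc = 1.0 / (2.0 * 3.14159265358979 * fc);
    double alpha = dt / (rc + dt);

    y[0] = x[0];
    for (size_t i = 1; i < n; i++)
        y[i] = y[i - 1] + alpha * (x[i] - y[i - 1]);
}
)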
 

I don't think that you should use an integrator for color signal demodulation. Just multiply the composite signal with the quadrature carriers and low-pass filter the product according to the color signal bandwidth.
 
I don't think that you should use an integrator for color signal demodulation. Just multiply the composite signal with the quadrature carriers and low-pass filter the product according to the color signal bandwidth.

Thank you. This gave me the confidence to continue along my original "simple" path. The crucial bit of information was this:

https://en.wikipedia.org/wiki/Quadrature_amplitude_modulation#Analog_QAM

Basically, the above section explains why the multiplication with the carrier works the way you said it would. Discrete low-pass filtering didn't work very well with the simplistic method suggested in the Wikipedia article, so I re-implemented my running-average method, and it actually worked just fine - it turned out that my original code simply had two separate critical bugs.

After getting the two color curves out, the rest was simply a YUV -> RGB conversion. For some reason, the Raspberry Pi NTSC output mode did not use the YIQ color space, but PAL's YUV. Strange. Here's the end result:

**broken link removed**
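
(For reference, the YUV -> RGB step is just the standard linear conversion - a sketch with analog-domain BT.601-style coefficients; the exact scaling of U and V depends on how the demodulated chroma is normalized against the burst amplitude:

Code:
/* Standard analog YUV -> RGB conversion. Y is in [0,1];
 * U and V are the demodulated chroma values, whose scaling
 * depends on the burst normalization used upstream. */
static void yuv_to_rgb(double y, double u, double v,
                       double *r, double *g, double *b)
{
    *r = y + 1.140 * v;
    *g = y - 0.395 * u - 0.581 * v;
    *b = y + 2.032 * u;
}
)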

The luminance data would still need to be cleaned of chrominance interference, but I think I'll leave that for another day. I'll let you know when I have a blog post ready with the details on this one. :)
 

