
show/merge two or more PAL videos on the same screen


Garyl

Hello,
I have not found anything about this topic.
I want to create a simple system that will merge 2 or 4 PAL signals into one.

Input: multiple PAL signal sources such as cameras (2 or more)
Output: a single PAL signal, with the screen divided into 2 or 4 areas.

What kind of microcontroller or chip can do that?

If it's possible, I'd also like to add more functionality, like switching between PAL signals, etc...
Thanks in advance!
 

In classical video systems, RGB signals are combined by analog circuits. The essential requirement is that all sources are synchronized to a master timing generator.

In case of non-synchronized sources, or to achieve effects like split screen, digital video processing must be utilized.
 

This is not easy to do unless the PAL signals are synchronized with each other and, if in color, the subcarrier oscillators are also locked.

A micro can be used to control the switching, but it's a task probably much easier to do in an analog circuit. If the sources are not in sync, the only solution is to digitize all the sources and store them in dual-ported RAM so the reading can be done independently of the writing. It isn't an easy task and I don't know of any IC made for doing it; the nearest is probably something used in multi-camera security recorders, but I've never seen them for sale and they are most likely custom made by the VCR manufacturer.

In studio environments such as you see on domestic TV, all the cameras are driven from a master timing source to ensure they all run identical sync streams.

Brian.
 

Okay, so that's the general info, now let's put this a bit different way.

I have bought these cheap Chinese cameras for $6 each:
New Mini HD 13 3.6mm 700TVL CCD IR Wide Angle Lens FPV Home Security Camera PAL


They work great, but I want to handle several of them while I have only a single screen, so what would you suggest? Is this really such a hard task?

Could I somehow do it with 32-bit ARM or PIC micros?
 

Digitizing all the video signals and processing the pictures digitally is the only way. Real-time video processing is far beyond the capabilities of the microcontrollers mentioned.
 

The simplest way is to use a video capture card that is easily installed in a PCI (or similar) slot of a computer. There are tons of video capture cards with multiple inputs that are used for video surveillance.
The software does the necessary capturing, dividing, storing, etc.
 

Are those 200 MHz micros too slow, or is it a RAM problem?

So, if a computer is necessary, would it also be doable with a Raspberry Pi?
 

200 MHz is way too slow to process real-time video.
I don't think the Raspberry Pi has the computing resources to digitize a single video signal, let alone 2 or more.

Even if it had, writing the software and drivers to manage video would be a formidable task.

I'm also with Bigboss: you require a computer with reasonable capabilities which has slots for video capture cards.
 

I am kind of surprised, because the PAL signal is almost unused nowadays, all CRT TVs and cassette recorders/players are obsolete, etc... and it's still beyond the reach of hobbyist microcontrollers?

Well, anyway, what about this:
Picolo Pro 2
PCI video capture card with four BNC connectors for standard PAL/NTSC cameras

AT A GLANCE
- 4x BNC connectors on the bracket
- 32-bit 33 MHz PCI bus
- One video decoder, 25/30 images per second (50/60 fields per second)
- Fast switching between up to 4 cameras
- PCI and PCIe versions available
but it's for a PC, and I wanted something more lightweight...
 

I am kind of surprised, because the PAL signal is almost unused nowadays, all CRT TVs and cassette recorders/players are obsolete, etc... and it's still beyond the reach of hobbyist microcontrollers.

Well, it has always been out of reach for a hobbyist micro. That's why professionals never use those micros for this job; when they do use a micro, it is typically only for controlling something else that does the heavy lifting of the video manipulation, e.g. an FPGA.


Maybe some numbers will help...
PAL is 720 x 576 at 25 frames per second, so digitized you have roughly 10.4 million pixels per second.
A typical hobbyist micro runs at 16 MHz (Arduino); entry-level ARM parts maybe 33 MHz.

Regardless, you can't even touch every video pixel with a 16 MHz or even a 33 MHz micro.
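
To spell that arithmetic out (a quick sketch; the 25 fps figure and the bytes-per-pixel choices are assumptions about how the video would be stored, not measurements):

Code:
#include <stdio.h>

int main(void)
{
    const long width  = 720;   /* active PAL samples per line (BT.601) */
    const long height = 576;   /* active lines per frame               */
    const long fps    = 25;    /* PAL: 25 frames (50 fields) per second */

    long pixels_per_frame = width * height;          /* 414,720       */
    long pixels_per_sec   = pixels_per_frame * fps;  /* ~10.4 million */

    printf("pixels per frame      : %ld\n", pixels_per_frame);
    printf("pixels per second     : %ld\n", pixels_per_sec);
    printf("bytes/s, 8-bit mono   : %ld\n", pixels_per_sec);      /* 1 byte per pixel  */
    printf("bytes/s, 4:2:2 colour : %ld\n", pixels_per_sec * 2);  /* 2 bytes per pixel */
    return 0;
}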
 
Maybe some numbers will help...
PAL is 720 x 576 at 25 frames per second, so digitized you have roughly 10.4 million pixels per second.
A typical hobbyist micro runs at 16 MHz (Arduino); entry-level ARM parts maybe 33 MHz.

Regardless, you can't even touch every video pixel with a 16 MHz or even a 33 MHz micro.


The best chip within my reach so far is: PIC32MZ2048EFH144
Max Speed MHz: 252
Program Memory Size (KB): 2048
RAM (KB): 512
Auxiliary Flash (KB): 160

the max speed is said to be 252MHz
Up to 252 MHz/415 DMIPS, MIPS Warrior M-class core

but the numbers still worry me, because 720*576 gives 414,720 pixels... assuming black/white (1 bit per pixel), that's 414,720 bits = 51.84 kB, but assuming a byte per pixel it's 414,720 bytes = 414.72 kB, almost the entire RAM... so it's still not really enough...

Would it be possible to use some external RAM along with that PIC32MZ?
8 channels of hardware programmable DMA and 18 channels of dedicated DMA with automatic data size detection
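
As a quick sanity check on those RAM figures (a rough sketch; the 512 KB comes from the datasheet numbers above, and the bit depths are just the cases discussed):

Code:
#include <stdint.h>
#include <stdio.h>

#define WIDTH      720u
#define HEIGHT     576u
#define RAM_BYTES  (512u * 1024u)   /* PIC32MZ2048EFH144 on-chip SRAM */

int main(void)
{
    uint32_t bytes_1bpp = (WIDTH * HEIGHT) / 8u;   /* 1-bit black/white : ~51.8 kB */
    uint32_t bytes_8bpp = WIDTH * HEIGHT;          /* 8-bit greyscale   : ~415 kB  */
    uint32_t bytes_rgb  = WIDTH * HEIGHT * 3u;     /* 24-bit colour     : ~1.2 MB  */

    printf("1 bpp : %lu bytes, %s\n", (unsigned long)bytes_1bpp,
           bytes_1bpp <= RAM_BYTES ? "fits in RAM" : "does not fit");
    printf("8 bpp : %lu bytes, %s\n", (unsigned long)bytes_8bpp,
           bytes_8bpp <= RAM_BYTES ? "fits in RAM" : "does not fit");
    printf("24 bpp: %lu bytes, %s\n", (unsigned long)bytes_rgb,
           bytes_rgb <= RAM_BYTES ? "fits in RAM" : "does not fit");
    return 0;
}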
 

You aren't seeing the problem: it isn't to do with the microcontroller speed, it's the need for a high-speed ADC, DAC and dual-ported memory that causes the problem.

It's conceptual - if you run one camera to one monitor, you see one picture and everything is OK. If you mix a second independent video source with it, the picture is likely to be completely unusable. The reason is that each camera produces not only the picture signal but synchronizing signals as well; these are used to make sure the area on the screen is mapped to the area the camera lens sees, for example the top right corner of the camera view should be in the top right corner of the TV picture. The sync signals are added to the video so the scan in the monitor aligns with the scan in the camera sensor.

Now, a second camera will do exactly the same, and if that camera alone is connected to the monitor, it will pick up the syncs and adjust its scan circuits so the picture lines up. However, if you add together signals from the two cameras and they are not exactly synchronized with each other, one or the other or maybe neither will succeed in locking the monitor, and the other picture will drift through it, or make it roll or tear. This is why gen-lock is used in studios: there is one master timing source that drives all the cameras so they remain in perfect sync with each other. In fact, in a studio the master generator is used to provide the final output and the syncs are removed ("stripped off") the individual camera outputs.

As you add more cameras, the competition to lock the picture gets worse, two is already too many!

So to do it the simple way, you synchronize the cameras and use an analog mixer.

The complicated way is to reduce each of the camera outputs to a digitized frame of data and store it in memory (a 'frame store'), so you have one memory per camera, then read all the memories out simultaneously using a master sync generator. Essentially you let each memory store at its own pace but read them all out at the monitor's rate. It's a little complicated to find the start of each picture when storing, and common practice is to use line- and frame-locked PLL circuits to produce the memory write signals.

Your calculations are way out, I'm afraid. 720*576 = 414,720 pixels, but that assumes each pixel is either off or on (0 or 1). If you want grayscale monochrome you probably need 8-bit depth, so the number of bits increases to 3,317,760, that's about 3.3 Mbit per frame. Multiply that by three again if you want RGB color. In practice it can be a little smaller than that, because 720*576 usually includes the overscan, porch and sometimes sync pulses, which are not necessary to store. Bear in mind the frame store needs that capacity but has to be written to and read from at the same time; you will find such dual-port memory is far more expensive than normal RAM. It has two address and data buses, one for storing data and the other for reading it.
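
To make the frame-store scheme above concrete, here is a rough sketch in C (purely illustrative: the 2x2 layout, 8-bit luma and quarter-size tiles are assumptions, and a real system would do this with hardware/DMA rather than memcpy loops):

Code:
#include <stdint.h>
#include <string.h>

#define CAMS  4
#define W     360    /* quarter-screen tile: half the width ...    */
#define H     288    /* ... and half the height of a 720x576 frame */

/* One frame store per camera, plus one output frame (8-bit luma). */
static uint8_t frame_store[CAMS][H][W];
static uint8_t output_frame[2 * H][2 * W];

/* Write side: called from each camera's own digitizer, at that camera's
 * own unsynchronized pace. The line is assumed to be already decimated
 * down to W samples. */
void camera_write_line(int cam, int line, const uint8_t *pixels)
{
    if (cam < CAMS && line < H)
        memcpy(frame_store[cam][line], pixels, W);
}

/* Read side: driven by the single master sync generator, assembling a
 * 2x2 mosaic of the four stores at the monitor's rate. */
void compose_output(void)
{
    for (int cam = 0; cam < CAMS; cam++) {
        int oy = (cam / 2) * H;      /* top or bottom half */
        int ox = (cam % 2) * W;      /* left or right half */
        for (int y = 0; y < H; y++)
            memcpy(&output_frame[oy + y][ox], frame_store[cam][y], W);
    }
}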

Brian.
 

"The reason is each camera produces not only the picture signal but synchronizing signals as well"

And let's not forget the color burst, the *phase reference* that actually encodes the color information.

No microcontroller can do this. This is the realm of heavy-duty FPGAs, assisted by lots of other resources like gigabytes of RAM and a powerful processor on a modern computer.
On the Picolo card mentioned earlier, if one opens a close-up image, one can see such an FPGA.
Even then, the documentation clearly indicates that only a single camera is "live"; the others are captured stills.

Unfortunately, the OP has a classic case of putting the cart before the horse: purchasing the cameras first, then attempting to understand the requirements afterwards.
 

Unfortunately, the OP has a classic case of putting the cart before the horse: purchasing the cameras first, then attempting to understand the requirements afterwards.

Not a real problem, since I only bought a single cheap Chinese camera from AliExpress for research purposes.

There is one more thing I'm considering: what about dedicated PAL decoding chips?


The ADV7182 automatically detects and converts standard analog baseband video signals compatible with worldwide NTSC, PAL, and SECAM standards into a 4:2:2 component video data stream. This video data stream is compatible with the 8-bit ITU-R BT.656 interface standard.

The ADV7800 is a high quality, single-chip, multiformat 3D comb filter, video decoder, and graphics digitizer. This multiformat 3D comb filter decoder supports the conversion of PAL, NTSC, and SECAM standards in the form of a composite or an S-Video into a digital ITU-R BT.656 format. The ADV7800 also supports the decoding of a component RGB/YPbPr video signal into a digital YCrCb or RGB pixel output stream.

The support for component video includes standards such as 525i, 625i, 525p, 625p, 720p, 1080i, 1080p, and many other HD and SMPTE standards. Graphics digitization is supported by the ADV7800; it is capable of digitizing RGB graphics signals from VGA to SXGA rates and converting them into a digital RGB or YCrCb pixel output stream. SCART and overlay functionality are enabled by the ability of the ADV7800 to simultaneously process CVBS and standard definition RGB signals.

The ADV7800 contains two main processing sections. The first section is the standard definition processor (SDP), which processes all PAL, NTSC, SECAM, and component (up to 525p/625p) signal types. The second section is the component processor (CP), which processes YPrPb and RGB component formats, including RGB graphics.

Would any of these chips make it easier to interface a PAL camera with, for instance, a PIC32MZ?

Just a single camera, for learning purposes (for a start maybe just decode the image to memory, later to an SD card, or try to send it over USB)?
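
For orientation, both parts emit ITU-R BT.656 streams, in which the timing is carried as FF 00 00 XY reference codes embedded in the data. A hedged sketch of picking those codes apart in C (generic code, no vendor library or driver API assumed):

Code:
#include <stdint.h>
#include <stdbool.h>

/* ITU-R BT.656 timing reference code: FF 00 00 XY.
 * In XY: bit7 = 1, bit6 = F (field), bit5 = V (vertical blanking),
 * bit4 = H (0 = SAV, start of active video; 1 = EAV, end of active video),
 * bits 3..0 = error-protection bits. */

typedef struct {
    bool field2;      /* F bit                                */
    bool vblank;      /* V bit                                */
    bool eav;         /* H bit: true = EAV, false = SAV       */
} bt656_code_t;

/* Scan a captured byte stream for the next timing reference code.
 * Returns the index just past the code, or -1 if none is found. */
int bt656_find_code(const uint8_t *buf, int len, int start, bt656_code_t *out)
{
    for (int i = start; i + 3 < len; i++) {
        if (buf[i] == 0xFF && buf[i + 1] == 0x00 && buf[i + 2] == 0x00) {
            uint8_t xy = buf[i + 3];
            out->field2 = (xy >> 6) & 1;
            out->vblank = (xy >> 5) & 1;
            out->eav    = (xy >> 4) & 1;
            return i + 4;
        }
    }
    return -1;
}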
 

They are essentially the part described earlier that converts the camera signal and saves it in the frame store. You still need lots of external high-speed memory to actually store the data; it isn't inside the IC.

Yes, it would make it simpler by putting everything in the digital domain, but do you realize these devices are expensive and have 176 pins? You can't reasonably use them for home experiments, and a board with one per camera, each having its own RAM and still needing a fast processor to control them, is a seriously complicated project.

Brian.
 
The only solution available to the amateur is to use 4x ADV7180 with an ADV7513 (HDMI) or ADV7123 (SVGA), glued together with an FPGA (Cyclone V/IV). For developing the hardware design you can use the DE2-115 from Terasic; there are already an ADV7180 and an ADV7123 onboard.
 
They are essentially the part described earlier that converts the camera signal and saves it in the frame store. You still need lots of external high-speed memory to actually store the data; it isn't inside the IC.
Maybe for a start I'd try with just the IC... find some lower-resolution PAL source so it can fit in the RAM?


Yes, it would make it simpler by putting everything in the digital domain, but do you realize these devices are expensive and have 176 pins? You can't reasonably use them for home experiments, and a board with one per camera
I saw those are SMD parts, but some of them are QFP-64/100, which is still within my reach (I can order a prototype board for them and solder them easily).




The only solution available to the amateur is to use 4x ADV7180 with an ADV7513 (HDMI) or ADV7123 (SVGA), glued together with an FPGA (Cyclone V/IV). For developing the hardware design you can use the DE2-115 from Terasic; there are already an ADV7180 and an ADV7123 onboard.

This is a very good and accurate answer, but I lack experience with FPGAs and I wanted to at least try achieving something with the PIC32MZ. In the worst case maybe I'll learn FPGA as well; it depends on how long it takes, given that my background is in MCUs and C-style programming...
 

With the chip you mentioned, it is much more feasible to do what you want.
And if you stick to standard-resolution PAL and apply some decimation, you may be able to do it with a microcontroller (see the sketch below).

Anyway, it will be an excellent learning experience.
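
By decimation I mean something along these lines (a trivial sketch; the 2:1 factor is just an example):

Code:
#include <stdint.h>

/* Simple 2:1 horizontal decimation: average neighbouring pixel pairs so a
 * 720-sample PAL line becomes 360 samples. Vertical decimation can be done
 * the same way by skipping or averaging alternate lines. */
void decimate_line_2to1(const uint8_t *in, uint8_t *out, int in_len)
{
    for (int i = 0; i + 1 < in_len; i += 2)
        out[i / 2] = (uint8_t)(((unsigned)in[i] + in[i + 1] + 1) / 2);
}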
 

Hey, I know that some time has passed since the last post here, but I have found something interesting which makes me think that this project is in fact a bit more doable than some people say.

See that:
https://nootropicdesign.com/projectlab/2011/03/20/video-frame-capture/
Demo: https://www.youtube.com/watch?v=lfGNqX3nzFs

They are using an Arduino (just an 8-bit MCU + an LM1881 chip) to capture this image:
[attached image: videoFrameCapture.jpg]
How does it work?
The Video Experimenter uses an LM1881 video sync separator to detect the timing of the vertical and horizontal sync in a composite video signal. An enhanced version of the TVout library (available below) uses this sync timing information to overlay content onto the video signal. The ATmega328 microcontroller on the Arduino includes an analog comparator that can be used to detect the brightness of the video signal at any given point in time. Using this brightness information, low-res monochrome image capture into the TVout frame buffer is possible. The ability to capture image information in memory lets you implement simple computer vision experiments.
It's important to note that this shield will not work on the Arduino Mega. Read this for more information (it's not my fault!). The Video Experimenter will work on the Seeeduino Mega with some jumper wires. Read this for more information.
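
In rough terms, that capture idea might look like this in AVR C (just a sketch I put together, not the actual TVout/Video Experimenter code; the pin choice of INT0/PD2 for the LM1881 composite sync, the resolution and the timing delays are all my own assumptions):

Code:
#include <avr/io.h>
#include <avr/interrupt.h>
#include <stdint.h>

#define CAP_W  128                 /* coarse 1-bit capture resolution */
#define CAP_H  96
static uint8_t frame[CAP_H][CAP_W / 8];
static volatile uint16_t line;

ISR(INT0_vect)                     /* roughly one interrupt per video line */
{
    if (line >= CAP_H)
        return;
    for (volatile uint8_t d = 0; d < 40; d++)
        ;                          /* crude delay past the back porch (value is a guess) */

    for (uint8_t x = 0; x < CAP_W; x++) {
        /* ACO = analog comparator output: 1 when the video level crosses the threshold */
        if (ACSR & _BV(ACO))
            frame[line][x / 8] |= (uint8_t)(0x80 >> (x % 8));
    }
    line++;
}

int main(void)
{
    EICRA = _BV(ISC01);            /* INT0 interrupt on falling edge (composite sync) */
    EIMSK = _BV(INT0);             /* enable INT0 */
    sei();
    for (;;) {
        /* resetting 'line' on the LM1881 vertical sync output would go here */
    }
}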


They did that on an Arduino UNO, whose specs are:

Flash Memory 32 KB (ATmega328P) of which 0.5 KB used by bootloader
SRAM 2 KB (ATmega328P)
EEPROM 1 KB (ATmega328P)
Clock Speed 16 MHz


So I think that I can surely replicate it with somewhat better results on a PIC32MZ2048EFH144:
Max Speed MHz: 252
Program Memory Size (KB): 2048
RAM (KB): 512
Auxiliary Flash (KB): 160

I think that a resolution increase is definitely possible, but I'm unsure about brightness levels (greyscale) or maybe even basic color support. What do you think?



PS: Other interesting links:
https://hackaday.com/2011/06/07/capturing-video-with-an-arduino/
https://nootropicdesign.com/ve/projects.html
 
