> Interlace doesn't paint the whole screen - it paints half the screen on each refresh.

True and false. It paints the whole screen top to bottom. It only paints every other line, but the user perceives it as a full refresh. And depending on implementation - like on a TV, where the lines overlap - in a way it actually does paint ALL OF IT.
> And the bandwidth is the same as the equivalent progressive scan. e.g. 1080i30 is the same bandwidth as 1080p30 - but 1080i30 refreshes at 60 Hz (but only half the screen per refresh).

Well there's your answer. Refreshing a CRT at 30 Hz simply isn't an option. But sometimes that's all the bandwidth you've got.
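To put numbers on that claim, here is a quick back-of-the-envelope sketch in C (blanking intervals ignored for simplicity; only active pixels are counted):

```c
/* Sketch: active-pixel data rate of 1080p30 vs 1080i30 (60 fields/s). */
#include <stdio.h>

int main(void) {
    const double width = 1920.0, height = 1080.0;

    /* Progressive: 30 full frames per second. */
    double p30 = width * height * 30.0;

    /* Interlaced: 60 fields per second, each field half the lines. */
    double i30 = width * (height / 2.0) * 60.0;

    printf("1080p30: %.0f pixels/s\n", p30); /* 62208000 */
    printf("1080i30: %.0f pixels/s\n", i30); /* 62208000 - identical */
    return 0;
}
```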
> Interlace was chosen originally to give the higher refresh rates at 50/60 Hz (depending on region). It is better for motion as it looks more fluid.

Partially true. Appearance of motion and "fluidity" were definitely part of that consideration. But that's irrelevant when examining the motivation for use in digital systems. You are refreshing the whole screen (as perceived by the user) using just half the data bandwidth compared to progressive. For systems with a monolithic memory architecture, where memory bandwidth is shared between the video system and the CPU, interlace provides a crucial advantage.
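A rough illustration of that shared-bus argument, with made-up figures (the 40 MB/s bus and the 8-bit 640x480 framebuffer are assumptions for the example, not numbers from this thread):

```c
/* Sketch: how much of a shared memory bus the video scan-out consumes,
 * progressive vs. interlaced. Whatever is left over goes to the CPU. */
#include <stdio.h>

int main(void) {
    const double bus_bw = 40e6;             /* total bus bandwidth, bytes/s (assumed) */
    const double w = 640, h = 480, bpp = 1; /* 8-bit framebuffer (assumed) */

    double prog  = w * h * bpp * 60.0;       /* 60 full frames/s */
    double ilace = w * (h / 2) * bpp * 60.0; /* 60 half-frames/s */

    printf("progressive scan-out: %4.1f%% of the bus\n", 100.0 * prog / bus_bw);  /* 46.1% */
    printf("interlaced scan-out : %4.1f%% of the bus\n", 100.0 * ilace / bus_bw); /* 23.0% */
    return 0;
}
```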
For broadcast, 1080i is the final interlaced format - UHD 4K/8K will not support interlaced modes; they just raise the progressive rate to 60 or even 120 Hz.
> Progressive is also much easier to work with.

It's easier to work with when you have enough data bandwidth to even be able to do it. Sometimes it's not possible at all due to hard limits like memory speed.
> Interlace is just a throwback to the past, and has hung around really only for the broadcast industry.

I highly doubt that the broadcast industry continues to use interlace out of nostalgia. Not enough transmission bandwidth for progressive is more like it, which brings you right back to what I explained.
> True and false. It paints the whole screen top to bottom. It only paints every other line, but the user perceives it as a full refresh. And depending on implementation - like on a TV, where the lines overlap - in a way it actually does paint ALL OF IT.
Interlace is not good for a CRT when you sit so close that you can easily see every pixel. The pixels will have "flicker".
With a modern LCD monitor, interlace can be good again! Since the monitor can "remember" the state for each pixel until the next refresh, the flickering can be avoided.
The only drawback then is the update rate. This means that interlace is probably good for normal computer work on very high resolution displays.
The bandwidth is reduced by 50% and this is useful both for analog and digital video. There are interlaced HDMI modes, but of course they must be supported by the equipment at both ends of the HDMI cable.
PS - I finally found some more information about the single interlaced VGA format you're working so hard to support:
It was invented in 1987 by IBM, for their 8514 display adaptor.
https://en.wikipedia.org/wiki/IBM_8514
It was a precursor to the XGA format that was standardised in 1990:
https://en.wikipedia.org/wiki/Graphics_display_resolution#XGA
From the link above, you'll note that every other standard here is progressive.
It seems IBM invented it as an intermediate stopgap to sell their own custom hardware...
> Interlace is not good for a CRT when you sit so close that you can easily see every pixel. The pixels will have "flicker".

Interlace at 100 or 120 Hz looks perfectly good on a CRT. In fact it has less flicker than an equivalent 50 or 60 Hz progressive refresh. I did a lot of experimenting with this in the past couple of weeks.
> With a modern LCD monitor, interlace can be good again! Since the monitor can "remember" the state for each pixel until the next refresh, the flickering can be avoided.

Absolutely correct.
> The only drawback then is the update rate. This means that interlace is probably good for normal computer work on very high resolution displays.
> The bandwidth is reduced by 50% and this is useful both for analog and digital video. There are interlaced HDMI modes, but of course they must be supported by the equipment at both ends of the HDMI cable.
Earlier in the thread I posted a link to specs for a *very* common interlaced mode you will find supported even by current hardware.
Not working so hard to support it, but being able to generate interlaced modes on VGA can come in handy if you want to display an interlaced computer video signal on a VGA monitor without deinterlacing it.
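For the curious, here is a minimal sketch of the classic trick behind generating such a mode - timing vertical events in half-lines - using NTSC-style 525 lines per frame as the example (this is an illustration of the general technique, not the actual HDL from this project):

```c
/* With an odd number of lines per frame, each field is an integer number
 * of lines plus a half (262.5 here), so vsync alternates between starting
 * at a line boundary and starting mid-line. That half-line offset is what
 * makes the second field's scanlines land between the first field's. */
#include <stdio.h>

#define LINES_PER_FRAME 525 /* must be odd for interlace */

int main(void) {
    int halflines_per_field = LINES_PER_FRAME; /* 525 half-lines = 262.5 lines */
    int halfline = 0;                          /* global half-line counter */

    for (int field = 0; field < 4; field++) {
        printf("field %d: vsync starts at %s\n", field,
               (halfline % 2) ? "mid-line (offset by half a line)"
                              : "a line boundary");
        halfline = (halfline + halflines_per_field) % (2 * LINES_PER_FRAME);
    }
    return 0;
}
```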
There is a very good reason I keep reminding you that interlace was the standard for many computer systems in the past. We are really not talking about television signals here.
You didn't answer my question about your memory. Embedded systems from the last 10+ years have been able to support many formats larger than the 1024x768 interlace you're talking about (a $30 Raspberry Pi can do 1080p60!). I'm trying to understand what you meant by a "70ns" turnaround on your RAM. It makes me think you are not accessing the memory efficiently. Why not elaborate on this?
I'm sorry if I missed it. My FPGA board for instance is actually 10 years old (time flies!), and the RAM chip on it is a 16 MByte Micron CellularRAM (PSRAM).
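As a rough feasibility check with assumed figures (the 44.9 MHz pixel clock is the usual 1024x768 87 Hz interlaced rate, and the 70 ns / 16-bit numbers are taken as the PSRAM's random-cycle spec mentioned above - treat all of them as assumptions):

```c
/* Sketch: can a 70 ns random-cycle, 16-bit PSRAM feed a 1024x768
 * interlaced scan-out on its own? */
#include <stdio.h>

int main(void) {
    const double pixclk = 44.9e6; /* 1024x768i (87 Hz field) pixel clock, Hz */
    const double bpp    = 1.0;    /* assume 8-bit pixels */
    const double need   = pixclk * bpp; /* bytes/s needed for scan-out */

    const double t_rand    = 70e-9;       /* random access cycle, s */
    const double have_rand = 2.0 / t_rand; /* 16-bit bus: 2 bytes/cycle */

    printf("scan-out needs: %.1f MB/s\n", need / 1e6);      /* ~44.9 */
    printf("random reads  : %.1f MB/s\n", have_rand / 1e6); /* ~28.6 */
    /* Single random reads fall short; page-mode / burst reads, which
       such PSRAMs support, are several times faster - hence the
       question about access efficiency. */
    return 0;
}
```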
> You would be better off buying a newer FPGA board with some decent hardware on it (like one with HDMI), than torture yourself with designing a video mode that will inevitably go further and further out of favor.

You're missing the point. My old Nexys2 is not a target platform; it's only used to test logic for other projects before boards get made (which can use newer FPGAs and faster memory). If you actually spent a minute thinking about it, you would realize that memory speed limitations do not prevent me from trying all of the VGA modes. You forget that VGA is analog in nature, and monitors (originally) only care about the horizontal and vertical syncs.

If I run into a memory speed problem in between, I can just use a slower pixel clock, which results in bigger pixels, but the sync properties of the mode are not limited by this hardware in any way. To the monitor it makes no difference whether you divide your active area into 1280, 640 or just 320 pixels. Those are just constants: I can test the logic on the board I have and make the constants bigger when it's time to implement on faster hardware. The design is scalable.
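To make the scaling argument concrete, a small sketch using standard 640x480@60 timings (illustrative only - these are not the project's actual constants):

```c
/* The monitor only sees sync timing in MICROSECONDS, so the same mode can
 * be generated with different pixel clocks: fewer, fatter pixels fill the
 * same active window while the line period stays fixed. */
#include <stdio.h>

int main(void) {
    const double line_us   = 31.778; /* total horizontal period, 640x480@60 */
    const double active_us = 25.422; /* active video portion of the line    */

    /* Same mode rendered with three different pixel clocks: */
    const double clk[] = { 25.175e6, 12.5875e6, 6.29375e6 };

    for (int i = 0; i < 3; i++) {
        double px = active_us * 1e-6 * clk[i]; /* pixels across the window */
        printf("pixclk %7.3f MHz -> %4.0f pixels per active line"
               " (line still %.3f us)\n", clk[i] / 1e6, px, line_us);
    }
    return 0; /* prints 640, 320, 160 pixels for the three clocks */
}
```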
> But hey, if you are a masochist then go for it (I've enjoyed reading this thread). ;-)

I'm a masochist for bothering to reply.