
Difference between analog and digital TV

Status
Not open for further replies.

lomaxe
Can somebody explain to me why the power of a digital television (DVB-T) signal is equivalent to 1/4 the power of the analog television signal?
For example, at the TV relay station where I work there is a TV transmitter. It is written on it that it can deliver 1000 W of analog TV signal or 250 W of DVB-T signal.
Why is it so?
 

A lot of this comes from the magic of error correcting codes (ECC, also called forward error correction FEC).

The idea is that you add a small (or possibly large) amount of carefully generated, redundant data to the transmission. Then you can lower the power and allow more bit errors to occur at the receiver. E.g., if you have a 1 MB transmission and a 1-in-a-million chance (per bit) of error, you would expect 8 errors in transmission. If you added an ECC scheme, you might be able to correct up to 10 errors at the cost of an additional 100 kB in the transmission. So now there is a good chance that the receiver will see 0 bit errors. If 8 bit errors is subjectively OK, then the power could be lowered until 18 bit errors would be expected.

The use of compression also plays a big role: if you reduce the data required by a factor of 10, you can add a lot more ECC (or conserve bandwidth).
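The arithmetic in that example can be sketched in a few lines of Python (the numbers are the post's illustrative figures, not from any DVB spec):

```python
# Expected-bit-error sketch for the ECC argument above.

def expected_bit_errors(payload_bytes: int, ber: float) -> float:
    """Expected number of bit errors for a payload at a given bit error rate."""
    return payload_bytes * 8 * ber

# 1 MB transmission at a 1-in-a-million per-bit error probability:
errors_no_ecc = expected_bit_errors(1_000_000, 1e-6)        # ~8 errors expected

# Lowering TX power raises the raw BER; if the code can correct up to 10
# errors, a higher raw BER can be tolerated before decoding starts to fail:
errors_lower_power = expected_bit_errors(1_000_000, 2.25e-6)  # ~18 errors expected
```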
 

Analog TVs use a continuously variable video signal. The video signals (encoded as separate color and luminance) are modulated with the audio signals on individual channels.

Digital TVs use a binary-based system that requires each frame to be downloaded before it is displayed. The video signals are encoded and modulated as radio waves with the audio signals on individual channels, which may also have subchannels.

Analog TV signals are directly compatible with CRT-type displays (TVs with picture tubes). They "drive" the electron guns, which fire at the phosphor dots on the back of the screen, causing light of a certain color to glow.

Digital TV signals are directly compatible with other types of displays that use fixed-pixel technology, including LCD, LED-backlit LCD, plasma, LCD projection, DLP projection, and LCoS projection. The video stream is decompressed, and the data controls the color of each individual pixel.

---------- Post added at 10:53 ---------- Previous post was at 10:49 ----------

What is digital TV? When we talk about digital television, we generally mean digital television broadcasting. Digital TV broadcasting can use different platforms: cable, satellite, or terrestrial. Each platform uses a different transmission network. How can we receive digital signals? We need a digital receiver, either integrated into the TV or as a stand-alone set-top box connected to an old TV.

But there is no uniform standard for digital television. For digital terrestrial television, there are four large and incompatible transmission (modulation) standards. For example, North America uses ATSC, China uses DMB-T, Japan and Brazil use ISDB-T, and Europe, Russia, India, Australia and many other countries use DVB-T. In addition, there are many codecs (algorithms) to compress video and audio: MPEG-2 and MPEG-4 are the compression standards most often used for video. This means that you must have a digital receiver compatible with the transmission standard and the codecs used in your country.


But this diversity of standards is not new. We had the same situation in analogue TV. The following parameters took different values in the different analog television transmission standards:

* Number of lines
* Frame rate
* Channel bandwidth
* Video bandwidth
* Audio carrier offset
* Video modulation
* Audio modulation
* Color system

This means that you had to have a TV set compatible with the standard used in your country to be able to watch TV. However, almost all recent analog TV sets are able to receive and display the common standards used worldwide. These analog standards were defined many years ago; they were not modified, and no new standard for analog television was added. This meant a stable situation for decades.

Now the situation has changed. In the digital world it is so easy to invent a new method or algorithm that is better and more efficient, compared to the old days. A typical example is the transmission standard DVB-T. Its successor, DVB-T2, is incompatible with the old DVB-T standard, but it increases capacity, reliability and flexibility. MPEG-4 is also a newer and better compression standard than MPEG-2.

This means that digital technology is constantly evolving, producing new, better and more complex (incompatible) standards. The practical consequence of this rapid development is that you may have a plasma or LCD TV to display the image and a separate set-top box at home for each digital standard. A TV will probably last seven years or more, but the lifetime of a set-top box is much shorter.
 

A lot of this comes from the magic of error correcting codes (ECC, also called forward error correction FEC).

The idea is that you add a small (or possibly large) amount of carefully generated, redundant data to the transmission. Then you can lower the power and allow more bit errors to occur at the receiver. E.g., if you have a 1 MB transmission and a 1-in-a-million chance (per bit) of error, you would expect 8 errors in transmission. If you added an ECC scheme, you might be able to correct up to 10 errors at the cost of an additional 100 kB in the transmission. So now there is a good chance that the receiver will see 0 bit errors. If 8 bit errors is subjectively OK, then the power could be lowered until 18 bit errors would be expected.

The use of compression also plays a big role: if you reduce the data required by a factor of 10, you can add a lot more ECC (or conserve bandwidth).

But isn't it connected with a property of the COFDM signal used in the DVB-T standard? That is, the COFDM signal has a peak-to-average power ratio of about 12 dB.
To be precise, after randomization of the baseband digital stream, the peak-to-average power ratio of the COFDM signal is about 12 dB with probability 0.01% and 9.6 dB with probability 0.1%.
For an analog TV signal, when we say 1000 W, it means the transmitter can deliver 1000 W of power at the peak of the signal.
For digital transmitters, the power is usually specified as RMS (root mean square). So digital power is measured as an RMS value and analog power is measured as a peak value.
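The peak-to-average ratio being discussed can be illustrated with a small Python sketch: build a random multicarrier (OFDM-like) signal and measure its PAPR. This is a toy model, not the exact DVB-T carrier layout (the carrier count and QPSK mapping here are simplifications for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy OFDM symbol: 2048 random QPSK subcarriers (DVB-T "2k" mode actually
# uses ~1705 active carriers; rounded to the FFT size here).
qpsk_points = np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j])
subcarriers = rng.choice(qpsk_points, size=2048)

# Time-domain signal is the IFFT of the subcarrier symbols.
time_signal = np.fft.ifft(subcarriers)

# Peak-to-average power ratio in dB.
power = np.abs(time_signal) ** 2
papr_db = 10 * np.log10(power.max() / power.mean())
```

For a single random symbol this typically lands around 9-12 dB, in line with the figures quoted above; the rare extreme peaks are why the probabilities (0.01%, 0.1%) matter.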
What do you think about this explanation?
 

If the transmitters are not rated in the same manner, then that could also explain some of the issues. But for digital transmission, the power can usually be reduced either by reducing the probability of a bit error, or by reducing the impact of a bit error. The former is more focused on specific modulation schemes, and the latter is more focused on coding schemes. In modern systems, there is often some connection between the coding and the modulation. An early example is TCM (trellis-coded modulation), which is an encoding/decoding scheme that takes the modulation scheme into account.
 

Lots of good guesses. It is very easy to get reliable answers at DVB - Digital Video Broadcasting - Home; most of their technical papers are free for download.
Error-free decoding of DVB-T requires 20-22 dB S/N. The variation in S/N depends on which coding scheme is used.
Compare this with a decoded PAL signal, which requires 40 dB S/N to be accepted as good picture quality. So when a PAL transmitter is replaced with a DVB-T transmitter, there is in most cases a factor-of-100 difference in TX power for the same coverage, and that is the ratio most TV transmitters use when replacing the system. However, the cost is almost the same for these new transmitters, even though they are a factor of 100 smaller, due to much higher requirements for low phase error and such.
FEC is not used in DVB-T. DVB-H was the first DVB standard that used FEC. DVB-T2, which in many areas is now replacing DVB-T, uses FEC, which allows a less demanding C/N than DVB-T. In most cases this does not mean that less TX power can be used, but it has other advantages.
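The S/N figures above translate directly into the factor-of-100 power ratio; a one-line check in Python:

```python
# Converting a dB difference to a linear power ratio.

def db_to_power_ratio(db: float) -> float:
    return 10 ** (db / 10)

# PAL needs ~40 dB S/N, DVB-T ~20 dB: a 20 dB difference.
ratio = db_to_power_ratio(40 - 20)   # = 100, i.e. ~1/100th the TX power for the same coverage
```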
 

I don't have the original spec, but Wikipedia seems to think DVB-T did use FEC. The specific algorithms are different between DVB-T and DVB-T2, though. DVB-T seems to use a Reed-Solomon plus convolutional scheme, while DVB-T2 uses LDPC+BCH coding. Keep in mind that most modern digital communication systems use at least some type of FEC. Both do use OFDM with fairly high-order QAM options, both of which require good TX characteristics. It's possible "FEC" hadn't become a buzzword at the time of launch for DVB-T, and simply wasn't included in the high-level documents. At the same time, the first page of the DVB-T document on the website listed in post #6 does mention multiple different FEC rates.
 

You are right, I was thinking of MPE-FEC as the only FEC protocol, which was implemented in DVB-H. Reed-Solomon FEC is implemented in DVB-T.
 

You are right, I was thinking of MPE-FEC as the only FEC protocol, which was implemented in DVB-H. Reed-Solomon FEC is implemented in DVB-T.

DVB-T uses:
-shortened Reed-Solomon coding (204,188, t=8), where t is the error-correcting capability in bytes. It means that the RS code in DVB-T can correct any 8 bytes in a 204-byte packet.
-convolutional coding with code rates from 1/2 to 7/8.
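From those parameters the net code rate of the concatenated FEC can be computed; a quick Python sketch (the rates multiply, since the inner code's output carries the outer code's symbols):

```python
# Overall code rate of DVB-T's concatenated FEC, as described above:
# outer shortened Reed-Solomon RS(204,188) times an inner convolutional code.

RS_RATE = 188 / 204   # 188 useful bytes out of every 204 transmitted

def overall_code_rate(conv_rate: float) -> float:
    return RS_RATE * conv_rate

strong = overall_code_rate(1 / 2)   # strongest inner code: ~46% of bits are payload
weak = overall_code_rate(7 / 8)     # weakest inner code:   ~81% of bits are payload
```

The stronger the coding (more redundancy), the lower the required C/N, at the cost of net bitrate.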

---------- Post added at 15:04 ---------- Previous post was at 14:54 ----------

Lots of good guesses. It is very easy to get reliable answers at DVB - Digital Video Broadcasting - Home; most of their technical papers are free for download.
Error-free decoding of DVB-T requires 20-22 dB S/N. The variation in S/N depends on which coding scheme is used.
Compare this with a decoded PAL signal, which requires 40 dB S/N to be accepted as good picture quality. So when a PAL transmitter is replaced with a DVB-T transmitter, there is in most cases a factor-of-100 difference in TX power for the same coverage, and that is the ratio most TV transmitters use when replacing the system. However, the cost is almost the same for these new transmitters, even though they are a factor of 100 smaller, due to much higher requirements for low phase error and such.
FEC is not used in DVB-T. DVB-H was the first DVB standard that used FEC. DVB-T2, which in many areas is now replacing DVB-T, uses FEC, which allows a less demanding C/N than DVB-T. In most cases this does not mean that less TX power can be used, but it has other advantages.

But if we take your logic into account, there is no sense in reducing the power of the digital signal compared with the analog signal only for the sake of the same coverage zone. I think the reason for the difference between the digital and analog power of the same transmitter lies in the different types of modulation: AM in analog and COFDM in digital (DVB-T).
 

But if we take your logic into account, there is no sense in reducing the power of the digital signal compared with the analog signal only for the sake of the same coverage zone. I think the reason for the difference between the digital and analog power of the same transmitter lies in the different types of modulation: AM in analog and COFDM in digital (DVB-T).

Maintaining similar coverage to the system it replaces is in most cases a requirement, to avoid interference in the same channel or adjacent channels from other existing transmitters. It is regulated in international agreements. To maintain the same coverage, DVB has specified what field strength is required (both max and min) and what sensitivity and selectivity a receiver must fulfill to be accepted as a DVB receiver. In general this results in a maximum allowed TX power 20 dB less than that of a similar PAL/SECAM transmitter.
 

Maintaining similar coverage to the system it replaces is in most cases a requirement, to avoid interference in the same channel or adjacent channels from other existing transmitters. It is regulated in international agreements. To maintain the same coverage, DVB has specified what field strength is required (both max and min) and what sensitivity and selectivity a receiver must fulfill to be accepted as a DVB receiver. In general this results in a maximum allowed TX power 20 dB less than that of a similar PAL/SECAM transmitter.

But in this case they should write on the transmitters something like this:
'1000 W analog, equivalent to 250 W digital DVB-T, and the transmitter is able to transmit up to 1000 W digital DVB-T'
 

The amount of power that a transmitter is able to deliver is more about how it is designed. In reality, both the DVB and CW transmission methods must be defined in detail to be able to say what amount of power a specific unit can deliver. COFDM signals of the DVB-T type are a bit extra complicated to measure; often a thermal method is recommended.
 
Hi again. In one document I have read the following information; I'd like to quote it:

"In practical applications, the I and Q signals representing the DVB-T baseband
signal are generated using a D/A converter that clips the signal
through its conversion range and thus greatly reduces the crest factor. Depending
on the quality criterion, the maximum output drive level is approx.
15 dB over the RMS value, which corresponds to a crest factor limiting to
15 dB. This limiting is performed in the digital signal processing at the numerical
level in order to properly drive the D/A converter.
Since the baseband (magnitude from I and Q) is considered here prior to
modulation, this represents the envelope approach. The 15 dB limiting has
practically no effect yet on the signal quality since the signal peaks that are
clipped only occur extremely rarely (for the I and Q signal, the probability is
approx. 2 x 10-8).
Inside a transmitter, the greatest limiting of the DVB-T signal occurs in the
power amplifier. The output stage is driven so that frequent signal peaks lie
significantly above the 1 dB compression point.
Compression of the amplifier characteristic means that high signal peaks
have less gain than the average value. This results in a lower amplitude
probability for high signal peaks. The saturation power of the amplifier determines
the maximum possible peak power and thus the crest factor.
Through precorrection of the baseband signal, high signal peaks have their
level boosted to counteract the compression behavior of the amplifier.
However, this does not affect the saturation power of the amplifier and thus
the crest factor. The overall transfer characteristic up to the saturation
power level is, however, linearized so that the amplitude probability for high
signal peaks is increased."


I don't understand this: "Through precorrection of the baseband signal, high signal peaks have their level boosted to counteract the compression behavior of the amplifier."

Can somebody explain to me what they mean by precorrection of the baseband signal, and how it is that boosting the level of high signal peaks counteracts the compression behavior of the amplifier?
 

The final RF amplifier does not have linear gain when driven close to its peak values, as with most amplifiers. This can be described as compression.
In this case it is compensated by a predefined gain correction in the baseband signal, before the 2nd mixer.
In an AM amplifier circuit this compensation can be done by conventional feedback; I guess that solution is avoided here for improved phase stability.
 

In analog TV, the image varies, but in digital, the image is constant.
 

The final RF amplifier does not have linear gain when driven close to its peak values, as with most amplifiers. This can be described as compression.
In this case it is compensated by a predefined gain correction in the baseband signal, before the 2nd mixer.
In an AM amplifier circuit this compensation can be done by conventional feedback; I guess that solution is avoided here for improved phase stability.

I understand that, but I can't understand:
1. What kind of precorrection of the baseband signal is used for boosting their levels?
2. Why is boosting the levels used to counteract the compression behavior of an amplifier, if high-level input signals cause compression?
 

1. Most likely it is part of the total D/A modulation. The correction can then be taken from a table.
2. If the usable peak level can be increased by 3 dB, it is worth a lot for average power efficiency, resulting in less total power consumption for a given output power, less heat, cheaper transistors...
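The table-based precorrection idea can be shown with a toy Python sketch. The tanh compression model and its atanh inverse here are made up for illustration; a real transmitter derives the correction table from the measured PA characteristic:

```python
import math

def pa(x: float) -> float:
    """Toy PA model: gain compresses (tanh) as the input nears saturation."""
    return math.tanh(x)

def predistort(x: float) -> float:
    """Inverse of the toy compression: boost the sample before the PA.

    In practice this would be a lookup table indexed by amplitude."""
    return math.atanh(min(x, 0.999))  # clamp just below saturation

peak = 0.9
compressed = pa(peak)               # without precorrection the peak is squashed (~0.72)
linearized = pa(predistort(peak))   # with precorrection the overall transfer is linear (0.9)
```

Note that the boost only counteracts compression below saturation; the PA's saturation power still caps the maximum peak, which is why the quote says the crest factor itself is unaffected.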
 

In analog TV, the image varies, but in digital, the image is constant.

Do you mean that in digital TV the images are static??? :)

---------- Post added at 18:22 ---------- Previous post was at 18:10 ----------

2. If the usable peak level can be increased by 3 dB, it is worth a lot for average power efficiency, resulting in less total power consumption for a given output power, less heat, cheaper transistors...

For average power, yes. But what about the peaks that appear, for example, in a COFDM signal? Boosting the common input level also boosts the peak levels above the average, so those levels will be compressed, and that will reduce the crest factor of the signal. I don't understand how boosting will help with the crest factor. Or maybe I don't understand something?
 

Without gain compensation the PA cannot be used in its higher, nonlinear region. With compensation, the final output signal looks linear even though the PA is used in its nonlinear region. As a result, the average RMS level fed to the PA can be higher, without clipping or nonlinear amplification in the output.
https://www.altera.com/literature/an/an396.pdf
 
I'd like to add some interesting information I've found on the Internet related to the topic. It is quoted from "Digital Terrestrial Broadcasting", Artech House, 2000.

"As can be seen from Figure below the COFDM signal in the time domain exhibits a high peak to average power ratio (up to 12 dB) which makes it susceptible to non-linear distortion in transmission, as the signal peaks occasionally thrust the power amplifiers into saturation.



When this happens the transmitter will generate harmonics that will cause out-of-channel unwanted emissions, or interference. A practical consequence of this is that transmitter amplifiers must be operated in such a manner as to allow for these signal peaks and prevent amplifier saturation. This means that the transmitter output back-off (OBO) must be adjusted to obtain the minimum required bit error rate (BER) performance and also minimize adjacent channel interference. However, backing off the power of a transmitter reduces its efficiency and results in larger transmitter footprints for the rated output power of the device. However, it is expected that the footprints of digital transmitters will reduce with new cooling arrangements and digital precorrection.

As mentioned above, highly linear amplifiers are needed for digital transmitters, and class AB operation is typical for solid-state transmitters. While a 10 dB OBO is quoted for COFDM transmissions relative to analog transmissions to achieve similar coverage, most transmitter manufacturers recommend only a 6-7 dB reduction in transmitter output power when replacing an analog transmitter with a digital transmitter. In practical terms and as a general comparison, a 2 kW (rms) COFDM digital transmitter is presently comparable in the number of amplifiers used with a 10 kW (peak sync) analog transmitter. This is in order to achieve the required linearity. As we will see in Chapter 8, an output channel filter is also incorporated into many digital terrestrial transmitters to reduce the spurious emissions into adjacent channels."


I don't know how it is for you, but that information is very interesting to me. However, there is some text which I don't understand because of my poor knowledge of English technical terms. Can somebody explain what "transmitter footprint" means in the context of what I have quoted above?
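The power figures in the quoted passage are easy to sanity-check in Python:

```python
import math

# The quote compares a 10 kW (peak sync) analog transmitter with a
# 2 kW (rms) COFDM transmitter; the reduction in dB is:
reduction_db = 10 * math.log10(10_000 / 2_000)
# ≈ 7 dB, matching the 6-7 dB reduction manufacturers recommend
# (versus the 10 dB OBO quoted for equal coverage).
```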
 
