Thursday, December 22, 2005

Binary number !!

We can write "int v = 0xFF;" to use hexadecimal numbers, and a leading zero gives octal (e.g. "int v = 0377;"). But how do we write binary numbers for an ANSI compiler?

Tuesday, December 20, 2005

Solve this:

if(X)
{
printf("Hello");
}
else
{
printf("World");
}

Question: What should be the value of "X" so that the output is HelloWorld?

Sunday, December 18, 2005

MACRO

What does the message ``warning: macro replacement within a string literal'' mean?

Some pre-ANSI compilers/preprocessors interpreted macro definitions like

#define TRACE(var, fmt) printf("TRACE: var = fmt\n", var)

such that invocations like

TRACE(i, %d);


were expanded as

printf("TRACE: i = %d\n", i);


In other words, macro parameters were expanded even inside string literals and character constants.

Macro expansion is not defined in this way by K&R or by Standard C. When you do want to turn macro arguments into strings, you can use the new # preprocessing operator, along with string literal concatenation (another new ANSI feature):

#define TRACE(var, fmt) printf("TRACE: " #var " = " #fmt "\n", var)


What's the best way to write a multi-statement macro?

ANSWER:
#define MACRO(arg1, arg2) do { \
/* declarations */ \
stmt1; \
stmt2; \
/* ... */ \
} while(0) /* (no trailing ; ) */


How can I write a macro which takes a variable number of arguments?
One popular trick is to define and invoke the macro with a single, parenthesized ``argument'' which in the macro expansion becomes the entire argument list, parentheses and all, for a function such as printf:

#define DEBUG(args) (printf("DEBUG: "), printf args)

if(n != 0) DEBUG(("n is %d\n", n));

The obvious disadvantage is that the caller must always remember to use the extra parentheses.

gcc has an extension which allows a function-like macro to accept a variable number of arguments, but it's not standard. Other possible solutions are to use different macros (DEBUG1, DEBUG2, etc.) depending on the number of arguments, or to play games with commas:

#define DEBUG(args) (printf("DEBUG: "), printf(args))
#define _ ,

DEBUG("i = %d" _ i)

Wednesday, December 14, 2005

What is ATSC?

ATSC (Advanced Television Systems Committee) is a group which defines the standards for digital television transmission in the United States and many other countries.

ATSC is the digital replacement for the earlier analog standard, NTSC.

The ATSC standards are created by the Advanced Television Systems Committee, whose charter members are the Electronic Industries Association (EIA), the Institute of Electrical and Electronics Engineers (IEEE), the National Association of Broadcasters (NAB), the National Cable Television Association (NCTA), and the Society of Motion Picture and Television Engineers (SMPTE).

The ATSC standards include high definition television (HDTV), standard definition television (SDTV), data broadcasting, multichannel surround-sound audio, and satellite direct-to-home broadcasting.

In addition to the United States, ATSC is also used in Canada, South Korea, Argentina, and Mexico.

ATSC uses 8VSB (8-level Vestigial Side-Band) modulation and MPEG-2 compression.

What is HDTV?

HDTV (High Definition Television) is one of the DTV (Digital TV) standards. HDTV will eventually replace analog television.

HDTV offers resolutions of 720p, 1080i, and 1080p.

The 720p format offers 720 lines of resolution with progressive scan. Progressive scan means that every line is refreshed in each frame update.

The 1080i format offers 1080 lines of resolution with interlacing. Interlacing means that every other line is refreshed in each frame update. This means that it requires two frame updates to repaint the entire screen.

1080p offers the best of both worlds, 1080 lines of progressively scanned video.

HDTV features an aspect ratio of 16:9.

The HDTV standard also includes the use of 5.1 Dolby Digital surround sound (AC-3).

Each HDTV channel provides 19.39Mbps of bandwidth.

HDTV utilizes MPEG-2 compression.

HDTV is part of a larger set of standards called ATSC.

The FCC (Federal Communications Commission) has mandated that all licensed television stations be capable of broadcasting DTV by 2007.

What is aspect ratio?

Aspect Ratio is the ratio of a picture's width to its height.

NTSC television uses a 4:3 aspect ratio.

HDTV and SDTV feature a 16:9 aspect ratio.

Movie theaters typically use an aspect ratio of 1.85:1 or 2.35:1.

Converting a movie to television requires either pan and scan or letterboxing.

Pan and scan is a process where technicians remove portions of the left or right side of the video to convert the aspect ratio.

Letterboxing is displaying the full picture in the center of the screen, with black bars above and below it.

What is MPEG-2?

MPEG-2 (Moving Picture Experts Group 2) is a compression standard for digital television.

MPEG-2 compression enables digital television broadcasters to transmit video streams with higher resolution and audio streams with higher quality sound while using as little bandwidth as possible.

MPEG-2 is capable of reducing the amount of bandwidth utilized by as much as 55 to 1.

MPEG-2 is utilized by DVB, HDTV, and DVD.

MPEG-2 has been adopted as ISO/IEC Standard 13818.

Thursday, December 08, 2005

sizeof() Vs strlen()

In C language Difference between sizeof() and strlen()

Consider the following example:

/* Example for sizeof and strlen */
#include <stdio.h>
#include <string.h>

int main(void)
{
       char String[] = "Hello";

       printf("\n SIZE OF String %d STRING LENGTH %d",
              (int) sizeof( String ), (int) strlen( String ));
       return 0;
}

Result:
SIZE OF String 6 STRING LENGTH 5.

Every string contains a null character ('\0') at the end. The sizeof operator includes that null character when computing the array's size, but strlen() does not count it.

Why is sizeof('a') not 1?
Perhaps surprisingly, character constants in C are of type int, so sizeof('a') is sizeof(int) (though it's different in C++).

Result:
In Turbo C output is: 2
In Turbo C++ output is: 1

Wednesday, December 07, 2005

Difference between Program Stream and Transport Stream

From ISO 13818-1

Transport Stream:
  • The Transport Stream is a stream definition tailored for communicating or storing one or more programs of coded data according to ITU-T Rec. H.262 | ISO/IEC 13818-2 and ISO/IEC 13818-3, and other data, in environments in which significant errors may occur. Such errors may be manifested as bit-value errors or loss of packets.
  • Transport Streams may be fixed or variable rate. In either case, the constituent elementary streams may be either fixed or variable rate.
  • The Transport Stream rate is defined by the values and locations of Program Clock Reference (PCR) fields; in general there are separate PCR fields for each program.
  • A Transport Stream may be constructed
  • a) from one or more elementary coded data streams,
  • b) from Program Streams, or
  • c) from other Transport Streams, which may themselves contain one or more programs.
  • Transport Streams are constructed in two layers: a system layer and a compression layer. The input stream to the Transport Stream decoder has a system layer wrapped about a compression layer. Input streams to the video and audio decoders have only the compression layer.
  • The Transport Stream system layer is divided into two sub-layers, one for multiplex-wide operations (the Transport Stream packet layer) and one for stream-specific operations (the PES packet layer).
Program Stream:
  • The Program Stream is a stream definition tailored for communicating or storing one program of coded data and other data in environments where errors are very unlikely, and where processing of the system coding, e.g. by software, is a major consideration.
  • Program Streams may be fixed or variable rate. In either case, the constituent elementary streams may be either fixed or variable rate.
  • The Program Stream rate is defined by the values and locations of the System Clock Reference (SCR) and mux_rate fields.
  • Program Streams are constructed in two layers: a system layer and a compression layer. The input to the Program Stream decoder has a system layer wrapped about a compression layer. Input streams to the video and audio decoders have only the compression layer.
  • The Program Stream system layer is divided into two sub-layers, one for multiplex-wide operations (the pack layer) and one for stream-specific operations (the PES packet layer).

Analog Versus Digital Transmission

http://www.informit.com/articles/article.asp?p=24687&seqNum=5&rl=1

Feature           Analog Characteristics     Digital Characteristics
  • Signal        Continuously variable      Discrete
  • Bandwidth     Low                        High
  • Speed         Low                        High
  • Error Rate    High (~10^-5)              Low (~10^-7)

Monday, December 05, 2005

Difference between Threads and Process

From the book Parallel and Distributed Programming Using C++ by Tracey Hughes and Cameron Hughes, published by Addison-Wesley.

LINK:
http://books.google.com/books?ie=UTF-8&hl=en&id=RQT5XeqaagEC&dq=Difference+between+Thread+and+Process&prev=http://books.google.com/books%3Fq%3DDifference%2Bbetween%2BThread%2Band%2BProcess&lpg=PA102&pg=PA104&sig=7XWb4nEO4BnjVFz1uGSAdwseDJs



The major difference between threads and processes is that each process has its own address space and threads don't. If a process creates multiple threads, all the threads are contained in its address space. This is why they share resources so easily and why inter-thread communication is so simple. Child processes have their own address space and a copy of the data segment. Therefore, when a child changes its variables or data, it does not affect the data of its parent process. A shared memory area has to be created in order for parent and child processes to share data. Interprocess communication mechanisms, such as pipes and FIFOs, are used to communicate or pass data between them. Threads of the same process can pass data and communicate by reading and writing directly to any data that is accessible to the parent process.

Similarities between Threads and Processes


  • Both have an ID, a set of registers, a state, a priority, and a scheduling policy.
  • Both have attributes that describe the entity to the OS.
  • Both have an information block.
  • Both share resources with the parent process.
  • Both function as independent entities from the parent process.
  • The creator can exercise some control over the thread or process.
  • Both can change their attributes.
  • Both can create new resources.
  • Neither can access the resources of another process.

Differences Between Threads and Processes

  • Threads share the address space of the process that created them; processes have their own address space.
  • Threads have direct access to the data segment of their process; processes have their own copy of the data segment of the parent process.
  • Threads can communicate directly with other threads of their process; processes must use interprocess communication to communicate with sibling processes.
  • Threads have almost no overhead; processes have considerable overhead.
  • New threads are easily created; new processes require duplication of the parent process.
  • Threads can exercise considerable control over threads of the same process; processes can only exercise control over child processes.
  • Changes to the main thread (cancellation, priority change, etc.) may affect the behavior of the other threads of the process; changes to the parent process do not affect child processes.


Cross Compiler

A cross compiler is a compiler capable of creating executable code for a platform other than the one on which the cross compiler is run. Such a tool is handy when you want to compile code for a platform that you don't have access to, or when it is inconvenient or impossible to compile on that platform (as is often the case with embedded systems).

(en.wikipedia.org/wiki/Cross_compiler)

About INLINE functions

Advantages:

  1. It leads to a more readable program.
  2. It leads to much faster code (reducing function-calling time); that is, it improves program performance.
  3. It accomplishes the same efficiency that a macro accomplishes.
Disadvantages:
  1. Its frequent use can lead to an increase in code size (that is, output file size).
These advantages and disadvantages are part of the trade-offs you have to make as a programmer. Speed over file size, or readability over lines of code: those are the types of choices that you as a programmer will have to make.

Sunday, December 04, 2005

CI and CAM

Common Interface ( CI )
To decode encrypted programmes, you need a subscription with the appropriate broadcaster along with hardware that enables you to use the decryption card (smart card) sent to you by the broadcaster. The first piece of hardware is the Common Interface (CI), which is connected directly to the DVB card. A Conditional Access Module (CAM) is inserted into the CI, and the CAM is used to house the smartcard itself. Unfortunately it is easy to get these names confused.

Again and again, Conditional Access Modules are referred to as CI modules, which can be confusing, since CIs themselves can also be designed in the form of pluggable modules (for example, the Siemens DVB-C module).

CIs are available in the form of PCI cards, DVB card daughterboards, or as modules that can be installed in a 3.5" drive bay. Due to these different formats, you should ensure that the module you purchase can be used with your DVB card. Not all DVB cards have the connections required for CIs, so make sure in advance that the CI fits your card.

Some broadcasters specify in their Terms and Conditions that you have to use a specific certified receiver to receive and decrypt their programmes; so far, however, there are no certified CI/CAM combinations.



Conditional Access Module (CAM)
A Common interface (CI) module can be used to house many different peripheral devices, such as a modem, additional memory, games consoles, or more usually a Conditional Access Module (CAM) - sometimes referred to as a Common Interface Module (CIM, CI module). The CAM provides space for one or two smartcards (depending on the CI) supplied by the broadcaster.

Encryption systems
There are various different encryption systems, which are not mutually compatible. The CAM must be suitable for the system you wish to decrypt; often a CAM is only suitable for one system, while other CAMs can be used with a number of systems (e.g. the Joker CAM).

Below are examples of encryption systems (their use and sample broadcasters are given in brackets):

Irdeto BetaCrypt (previously used by Premiere) Premiere Nagra (currently used by Premiere) Seca (Aston Canal+) Viaccess (previously used by Viasat) Conax (Canal+ Scandinavia, SVT) CryptoWorks (ORF, Xtra Music Payradio, Wizja +, MTV, DigiTurk, CzechLink, Easy.tv) NDS (Sky, currently used by Viasat)

The best-known CAM is probably the Alphacrypt CAM, which works with VDR. Unfortunately CAMs are not usually particularly cheap, costing between 60 and 180 euros (the Alphacrypt CAM is at the top end of this price range). Obviously you can sometimes obtain them more cheaply on the second-hand market.

Mascom now produces an Alphacrypt Light CAM, which works well with VDR (tested with vdr-1.3.16 and Fujitsu-Siemens DVB-C PCI with CI) and KabelDeutschland Digital or Premiere. The high-street price for this CAM is 66 euros.

Volatile Pointer

A volatile variable is one whose value may change outside the control of the surrounding code. A variable should be declared volatile whenever its value could change unexpectedly.

In practice, three kinds of variables commonly change this way:
Memory-mapped peripheral registers
Global variables modified by an interrupt service routine
Global variables shared within a multi-threaded application

The volatile qualifier also prevents the compiler from optimising away accesses to the variable.

Consider the code below (the address is illustrative):

volatile int *vp = (volatile int *) 0x5565;
.
.

while ( *vp != 0 )
{
    /* ... */
}

In this case, the value at the address vp points to may change at run time, outside the compiler's view. The volatile keyword ensures the compiler re-reads *vp on every loop iteration instead of caching it in a register; without volatile, the optimiser might read the value once and never again, as it would for an ordinary pointer.

What is mutex?

A synchronization object that provides mutual exclusion among tasks. A mutex is often used to ensure that shared variables are always seen by other tasks in a consistent state.

A mutex is closely related to a binary semaphore, but unlike a semaphore, a mutex is normally owned by the task that locked it and must be unlocked by that same task.

Tuner

A tuner is an electronic receiver that detects, demodulates, and amplifies transmitted signals.

Broadcasting TV systems

SECAM
SECAM (Sequentiel Couleur avec Mémoire, French for "sequential color with memory") is an analog television system, using frequency modulation to encode chrominance information. It is so named because it uses memory to store lines of color information, in order to eliminate the color artifacts found on systems using the NTSC standard.
It was developed for the same purpose as PAL, but uses a different (and many would argue inferior) mechanism to do so. R-Y and B-Y information is transmitted in alternate lines, and a video line store is used to combine the signals together. This means that the vertical colour resolution is halved relative to PAL and NTSC.
SECAM was introduced in France in 1967, where it is still used; it has also been adopted in many former French colonies, as well as parts of Eastern Europe (Bulgaria, Hungary) and the former Soviet Union. Many have argued that the primary motivation for the development of SECAM in France was to protect French television equipment manufacturers and make it more difficult to view non-French programming.
Political factors from the Cold War have also been attributed to the adoption of SECAM in Eastern Europe, as its use made it impossible for most Eastern Europeans to view television broadcast from outside the Iron Curtain, most of which used PAL.
There are three varieties of SECAM:
1. French SECAM is used in France and its former colonies
2. MESECAM is used in the Middle East
3. D-SECAM is used in the Commonwealth of Independent States and Eastern Europe.
NTSC
The National Television Standards Committee sets the analog television standard for the United States; this format itself is also informally called "NTSC".
While a standard for the United States, it has been adopted in other countries as well, for example Japan. The current version replaced an older NTSC standard by adding chrominance information on a 3.579545 (exactly 315/88) MHz subcarrier, retaining compatibility with older black-and-white NTSC television receivers.
The NTSC format consists of 29.97 interlaced frames of video a second, each consisting of 480 lines of vertical resolution out of a total of 525 (the rest are used for sync, vertical retrace, and other data such as captioning).
NTSC interlaces its scanlines, drawing odd-numbered scanlines in odd-numbered fields and even-numbered scanlines in even-numbered fields, which gives a nearly flicker-free image at approximately 59.94 hertz (nominally 60 Hz / 1.001) refresh frequency, which is close to the nominal 60 Hz alternating current power used in the United States. (Compare this to the 50 Hz refresh rate of the 625-line PAL video format used in Europe, where 50 Hz (25 hertz is resonant) AC is the standard; PAL has noticeably more flicker than NTSC.) Synchronization of the refresh rate to the power cycle helped film cameras record early live television broadcasts, as it was very simple to sync a film projector to capture a frame of video to a film cell using the frequency of the alternating current.
Also, it was preferable to match the screen refresh rate to the power source so as to avoid wave interference that would produce rolling bars on the screen.
PAL
PAL, short for Phase Alternating Line, is the analogue video format used in television transmission in most of Europe (except France, Bulgaria, Russia, Yugoslavia, and some other countries in Eastern Europe, where SECAM is used), Australia and some Asian, African, and South American countries.
PAL was developed in Germany by Walter Bruch, and first introduced in 1967. The name "Phase Alternating Line" describes the way that part of the color information on the video signal is reversed in phase with each line, which automatically corrects phase errors in the transmission of the signal. NTSC receivers have a tint or hue control to perform the correction manually.
Some engineers jokingly expand NTSC to "Never Twice the Same Colour" while referring to PAL as "Perfect At Last" or "Peace At Last"! However, the alternation of colour information - Hanover bars - can lead to picture grain on pictures with extreme phase errors.
The PAL colour system is usually used with a video format that has 625 lines per frame and a refresh rate of 25 frames per second. Like NTSC this is an interlaced format. Each frame consists of two fields (half-a-frame), each field has half of the lines of a frame (one has all the even lines, one has all the odd lines). Fields are transmitted and displayed successively. There are 50 fields per second. At the time of its design, the interlacing of fields was a compromise between flicker and bandwidth.

DENC

DENC means Digital Encoder. In order to display digital video on analog TVs the video signal must be encoded in standards such as PAL, NTSC or SECAM. This operation is performed by a DENC hardware device.

Friday, December 02, 2005

What is DVB?

DVB (Digital Video Broadcast) is a set of standards for the digital transmission of video and audio streams, and also data transmission.

The DVB standards are maintained by the DVB Project, which is an industry-led consortium of over 260 broadcasters, manufacturers, network operators, software developers, regulatory bodies and others in over 35 countries.

DVB standards are available on the web at the ETSI Publications Download Area.

DVB has been implemented over satellite (DVB-S, DVB-S2), cable (DVB-C), terrestrial broadcasting (DVB-T), and handheld terminals (DVB-H).

DVB utilizes MPEG-2 compression.

DVB primarily uses Musicam audio encoding, but also has optional support for AC3.

What is Reed-Solomon?

Reed-Solomon is an algorithm for Forward Error Correction (FEC).

Reed-Solomon was introduced by Irving S. Reed and Gustave Solomon of MIT Lincoln Laboratory in "Polynomial Codes over Certain Finite Fields," published in the Journal of the Society for Industrial and Applied Mathematics in 1960.

Reed-Solomon does not fix a block size or a specific number of check symbols. These parameters can be tuned to suit each transmission medium.

DVB uses Reed Solomon coding configured to use blocks of 188 information symbols and 16 check symbols, which results in a total block size of 204 symbols.

Reed-Solomon is often abbreviated as RS.

What is Forward Error Correction (FEC)?

Forward Error Correction (FEC) is a type of error correction which improves on simple error detection schemes by enabling the receiver to correct errors once they are detected. This reduces the need for retransmissions.

FEC works by adding check bits to the outgoing data stream. Adding more check bits reduces the amount of available bandwidth, but also enables the receiver to correct for more errors.

Forward Error Correction is particularly well suited for satellite transmissions, where bandwidth is reasonable but latency is significant.

Forward Error Correction vs. Backward Error Correction

Forward Error Correction protocols impose a greater bandwidth overhead than backward error correction protocols, but are able to recover from errors more quickly and with significantly fewer retransmissions.

What is 8PSK?

8PSK (8 Phase Shift Keying) is a phase modulation algorithm.

Phase modulation is a version of frequency modulation where the phase of the carrier wave is modulated to encode bits of digital information in each phase change.

The "PSK" in 8PSK refers to the use of Phase Shift Keying. Phase Shift Keying is a form of phase modulation accomplished by the use of a discrete number of states. 8PSK refers to PSK with 8 states. With half that number of states, you have QPSK. With twice the number of states of 8PSK, you have 16PSK.

Because 8PSK has 8 possible states, it is able to encode three bits per symbol.

8PSK is less tolerant of link degradation than QPSK, but provides more data capacity.

What is QPSK?

QPSK (Quadrature Phase Shift Keying) is a phase modulation algorithm.

Phase modulation is a version of frequency modulation where the phase of the carrier wave is modulated to encode bits of digital information in each phase change.

The "PSK" in QPSK refers to the use of Phase Shift Keying. Phase Shift Keying is a form of phase modulation accomplished by the use of a discrete number of states. QPSK refers to PSK with 4 states. With half that number of states, you have BPSK (Binary Phase Shift Keying). With twice the number of states of QPSK, you have 8PSK.

The "Quad" in QPSK refers to the four phases in which the carrier is sent in QPSK: 45, 135, 225, and 315 degrees.

QPSK Encoding

Because QPSK has 4 possible states, QPSK is able to encode two bits per symbol.

Phase                Data
45  degrees       Binary 00
135 degrees       Binary 01
225 degrees       Binary 11
315 degrees       Binary 10


QPSK is more tolerant of link degradation than 8PSK, but does not provide as much data capacity.

What is symbol rate?

The symbol rate is the rate of state changes on a communications circuit.

If a circuit can carry two tones per second, the circuit has a symbol rate of two.

Circuits then use different modulation techniques to carry multiple bits per symbol.

If the circuit is limited to two different tones, the first tone can represent a 0 and the second tone can represent a 1. In this circuit, the symbol rate is the same as the bit rate.

If the circuit can carry four different tones, then the tones can be used to encode twice as many bits per symbol. In this circuit, the bit rate is now twice the symbol rate.

Using more tones allows more bits per second (bps) to be squeezed out of every symbol, but this also requires higher quality circuits. If the circuit is not high enough quality, the number of retransmissions will cause the circuit to be slower than with a lower number of tones.

The choice of how many tones to use is determined by the modulation algorithm chosen. QPSK uses four tones, 8PSK uses eight tones.

The use of 4 tones is standard in the satellite world. In the cable television world, the higher quality transmission medium enables 64 tones to be the standard, using 64QAM modulation.

Symbol Rate is abbreviated as SR.

The symbol rate is also known as the baud rate.

What is azimuth?

Azimuth is a fancy name for direction.

Azimuth is an angular measurement made in the horizontal plane.

A correct azimuth setting is critical when pointing a satellite antenna.

Magnetic compass North will vary from the true azimuth North by the value of declination.

What is elevation?

Elevation is the angular measurement of a satellite above the horizon.

Elevation is measured in degrees. A satellite which is higher in the sky will have a greater elevation than one which is close to the horizon.

A satellite exactly level with the horizon would have an elevation of 0 degrees. A satellite with an elevation of 90 degrees would be directly overhead.

Knowing the elevation of a satellite from your location is critical to being able to successfully point a satellite antenna to it.

What is uplink? downlink?

Uplink is the signal path from an earth station to a satellite.

The opposite of uplink is downlink. Downlink is the signal path from the satellite toward the earth.

Uplink Frequencies

Satellite Band        Uplink Frequency

C Band                 5.925 - 6.425 GHz
Ku Band                14 - 14.5 GHz
Ka Band                27.5 - 31 GHz

Thursday, December 01, 2005

What is an LNB?

LNB - Low Noise Block
An LNB, or Low Noise Block, is an amplifier which receives the radio signal from the satellite after it has been reflected by the satellite dish.
In addition to amplifying the signal, the LNB also converts it to a frequency usable by the indoor unit.

The functions of the LNB were at one time provided by two separate components, a Low Noise Amplifier (LNA) for signal amplification and a block downconverter for downconversion.
Most LNBs used for satellite television include an integrated feedhorn. An LNB with an integrated feedhorn is known as an LNBF.

C band LNBs are measured in degrees Kelvin, with a lower number representing a higher grade LNB. Ku and Ka band LNBs are measured in decibels, with a lower number also representing a higher quality LNB.
Dual and Quad LNB Units
A dual LNB will allow you to tune into two separate satellite signals at once. This is very useful if you have two television sets and wish to watch different channels on each of them. Quad LNBs also exist for those with more than two television sets.

What is a feedhorn?
The feedhorn is the part of a satellite dish system which gathers the reflected signal from the dish and focuses it towards the LNB.
An LNB with an integrated feedhorn is referred to as an LNBF.

--------------------------------------------------------------------------------------

C Band
C band is the original frequency allocation for communications satellites.

C band uses 3.7 - 4.2 GHz for downlink and 5.925 - 6.425 GHz for uplink.

The lower frequencies used by C band perform better under adverse weather conditions than the Ku band or Ka band frequencies.

C Band Dishes
C band requires the use of a large dish, usually 6' across. C band dishes vary between 3' and 9' across, depending upon signal strength.
Because C Band dishes are so much larger than Ku and Ka band dishes, a C Band dish is sometimes referred to in friendly jest as a BUD (Big Ugly Dish).





--------------------------------------------------------------------------------------
Ku band
The Ku band uplink uses frequencies from 14 to 14.5 GHz and the downlink uses frequencies between 11.7 and 12.7 GHz.
The Ku band downlink frequencies are further subdivided according to their assigned use:

Ku Band Usage                   Downlink
Fixed Satellite Service         11.7 - 12.2 GHz
Broadcast Satellite Service     12.2 - 12.7 GHz
The higher frequencies of Ku band are significantly more vulnerable to signal quality problems caused by rainfall, known as rainfade, than C band satellite frequencies. However, they are less susceptible to rainfade than the Ka band frequencies.

Ku band satellites typically transmit with much more power than C band satellites. This allows Ku band dishes to be smaller and helps Ku band transmissions to overcome rainfade.

Ku Band Dishes

Ku band dishes can be much smaller than C band dishes. Ku band dishes vary from 2' to 5' in diameter.

Using a C Band Antenna for Ku Band

It is possible to add a Ku Band LNB to a C Band satellite dish.

For this to work properly, the C Band dish must be a solid dish or a mesh dish with holes less than one-quarter inch across.



--------------------------------------------------------------------------------------
Ka band
The Ka band uplink uses frequencies between 27.5 GHz and 31 GHz, and the downlink uses frequencies between 18.3 and 18.8 GHz and between 19.7 and 20.2 GHz.
Ka band dishes can be much smaller than C band dishes. Ka band dishes vary from 2' to 5' in diameter.
Ka band satellites typically transmit with much more power than C band satellites.
The higher frequencies of Ka band are significantly more vulnerable to signal quality problems caused by rainfall, known as rainfade

--------------------------------------------------------------------------------------

How can we calculate the actual size of the MPEG file which will be created?

If you choose CBR, the following equation can be used; the factor 2048/2018 accounts for the system-stream overhead.

Filesize (KB) = (Video + Audio) x (2048/2018) x sec / 8

E.g. if video is 1150 kbps and audio is 224 kbps, a 15-second MPEG file would be "(1150+224) x (2048/2018) x 15/8" = 2614 KB

What is bitrate?

Bitrate means the number of bits which pass through the stream per second; here "stream" means the MPEG file.
Generally, compression in MPEG is described by bitrate: the higher the bitrate, the higher the quality, the lower the compression, and the larger the file size.
If the bitrate is the same at every part of a single stream, it is called CBR (Constant Bit Rate). In VBR (Variable Bit Rate), the bitrate can differ depending on the part of the stream.

What are I, P, and B pictures?

I picture: an independent picture whose compression is completed inside the frame. The frame data can be decoded independently.
P picture: refers to the previous I or P picture to extract the difference, which is then compressed. The compression is higher than for an I picture.
B picture: refers to both the previous and next I or P pictures to extract the difference, which is then compressed. The compression is higher than for a P picture.

What type of MPEG stream is available?

MPEG-Video stream: the video part of the stream. The file extension would be m1v, m2v, mpv, vbs, etc.
MPEG-Audio stream: the audio part of the stream. The file extension would be mp1, mp2, mp3, mpa, abs, etc.
MPEG-System stream: a multiplex of an MPEG-Video stream and an MPEG-Audio stream in one stream. The file extension is mpg, m2p, etc.

How will you get 0x12 from 0x1234?

Just shift right by 8 bits. That's it.

We often answer "AND with 0xFF00 and then shift right 8 bits", but remember to think before answering: what exactly is the question, and what are the possibilities to reduce instructions, execution time, or memory consumption?

A simple example: if we want to multiply a value by 2, we write

"value * 2"

but an equivalent form is

"value << 1"

That is, a left shift by one multiplies by two, which can save CPU cycles.

A left shift can be faster than a multiplication, though modern compilers usually perform this substitution themselves.

What is the size of a class having an "int" variable and an inline function?

The size of that class is only the size of the int variable; sizeof doesn't count member functions, inline or otherwise.


Example:

class A
{
    int IntVariable;
public:
    void PrintContent( void )
    {
        cout << endl << IntVariable << endl;
    }
};


Consider the size of a word to be 4 bytes. Then sizeof( A ) will return 4 bytes only.

Is it possible to call a static function through a function pointer?

Yes, it is possible. Even though static functions are local to their file, if we assign the function's address to a function pointer of the same type (the prototype should be strictly the same; otherwise the result is undefined), we can call that function indirectly.
Yes It is Possible. Eventhough, static functions are local to that file, if we assign that function address to a same type (prototype should be strictly same, otherwise result unknown) function pointer. We can call that function indirectly...