Monday, March 28, 2016

What is video capturing?

Video Capturing

Video capture is the process of converting an analog video signal—such as that produced by a video camera or DVD player—to digital video. The resulting digital data are stored in computer files referred to as a digital video stream or, more often, simply a video stream.
If You want to learn about the technology, computer science & engineering, web programming, freelancing, earning please click here :CSE SOLVE

What do you mean by multimedia system? What are the elements of multimedia system?

Multimedia:

Multimedia is a technology that stores data as text, photos, pictures, music, sound, graphics, film and animation, and provides methods to collect and modify the data as desired.

Text :

It may be an easy content type to forget when considering multimedia systems, but text content is by far the most common media type in computing applications. Most multimedia systems use a combination of text and other media to deliver functionality. Text in multimedia systems can express specific information, or it can reinforce information contained in other media items.

Images :

Digital image files appear in many multimedia applications. Digital photographs can display application content or can alternatively form part of a user interface. Interactive elements, such as buttons, often use images as well. Digital image files use a variety of formats and file extensions; among the most common are JPEGs and PNGs.


Audio :

Audio files and streams play a major role in some multimedia systems. Audio files appear as part of application content and also to aid interaction. Audio formats include MP3, WMA, Wave, MIDI and RealAudio.

Video :

Digital video appears in many multimedia applications, particularly on the Web. As with audio, websites can stream digital video to increase the speed and availability of playback. Common digital video formats include Flash, MPEG, AVI, WMV and QuickTime.

Animation :

Animated components are common within both Web and desktop multimedia applications. Using Flash, developers can author FLV files, exporting them as SWF movies for deployment to users. Flash also uses ActionScript code to achieve animated and interactive effects.

What are the types of images used in multimedia system?

The types of images used in multimedia systems :

1. TIFF (also known as TIF), file types ending in .tif :

TIFF stands for Tagged Image File Format. TIFF images create very large file sizes. TIFF images are usually uncompressed and thus contain a lot of detailed image data (which is why the files are so big). TIFFs are also extremely flexible in terms of color (they can be grayscale, or CMYK for print, or RGB for web) and content (layers, image tags).

TIFF is the most common file type used in photo software (such as Photoshop), as well as page layout software (such as Quark and InDesign), again because a TIFF contains a lot of image data.

2. JPEG (also known as JPG), file types ending in .jpg:
JPEG stands for Joint Photographic Experts Group, which created this standard for this type of image formatting. JPEG files are images that have been compressed to store a lot of information in a small-size file. Most digital cameras store photos in JPEG format, because then you can take more photos on one camera card than you can with other formats.

A JPEG is compressed in a way that loses some of the image detail during the compression in order to make the file small (and thus called “lossy” compression).

JPEG files are usually used for photographs on the web, because they create a small file that is easily loaded on a web page and also looks good.

JPEG files are bad for line drawings or logos or graphics, as the compression makes them look “bitmappy” (jagged lines instead of straight ones).

3. GIF, file types ending in .gif:

GIF stands for Graphic Interchange Format. This format compresses images but, unlike JPEG, the compression is lossless (no detail is lost in the compression, though the file usually can’t be made as small as a JPEG).

GIFs also have an extremely limited color range (a palette of at most 256 colors), suitable for the web but not for printing. This format is never used for photography, because of the limited number of colors. GIFs can also be used for animations.

4. PNG, file types ending in .png:

PNG stands for Portable Network Graphics. It was created as an open format to replace GIF, because the patent for GIF was owned by one company and nobody else wanted to pay licensing fees. It also allows for a full range of color and better compression.

It’s used almost exclusively for web images, never for print images. For photographs, PNG is not as good as JPEG, because it creates a larger file. But for images with some text, or line art, it’s better, because the images look less “bitmappy.”

When you take a screenshot on your Mac, the resulting image is a PNG–probably because most screenshots are a mix of images and text.

5. Raw image files:

Raw image files contain data from a digital camera (usually). The files are called raw because they haven’t been processed and therefore can’t be edited or printed yet. There are a lot of different raw formats–each camera company often has its own proprietary format.

Raw files usually contain a vast amount of data that is uncompressed. Because of this, the size of a raw file is extremely large. Usually they are converted to TIFF before editing and color-correcting.
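The "extremely large" claim can be checked with simple arithmetic: an uncompressed image (as in a raw file or an uncompressed TIFF) stores one value per channel per pixel. A quick illustrative sketch (the camera resolution below is just an example):

```python
# Estimate the size of an uncompressed image file: one value per
# channel per pixel, before any compression is applied.
def uncompressed_size_bytes(width, height, channels=3, bits_per_channel=8):
    return width * height * channels * bits_per_channel // 8

# A 24-megapixel photo (6000 x 4000 pixels) in 8-bit RGB:
size = uncompressed_size_bytes(6000, 4000)
print(size)                    # 72000000 bytes
print(round(size / 2**20, 1))  # about 68.7 MB before compression
```

A JPEG of the same photo is typically a few megabytes, which is why cameras default to JPEG and why TIFF and raw files fill memory cards quickly.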

What are the features of audio editing software?

1. Sound editing functions include cut, copy, paste, delete, insert, silence, auto-trim and more.

2. Audio effects include amplify, normalize, equalizer, envelope, reverb, echo, reverse and many more.

3. Integrated VST plugin support gives professionals access to thousands of additional tools and effects.

4. Supports almost all audio and music file formats including mp3, wav, vox, gsm, wma, au, aif, flac, real audio, ogg, aac, m4a, mid, amr, and many more.

5. Batch processing allows you to apply effects and/or convert thousands of files as a single function.

6. Scrub, search and bookmark audio for precise editing.

7. Create bookmarks and regions to easily find, recall and assemble segments of long audio files.

8. Tools include spectral analysis (FFT), speech synthesis (text-to-speech), and voice changer.

9. Audio restoration features including noise reduction and click pop removal.

10. Supports sample rates from 6 kHz to 96 kHz, stereo or mono, at 8, 16, 24 or 32 bits.

11. Works directly with MixPad Multi-Track Audio Mixer.

12. Easy to use interface will have you editing in minutes.
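The sample rates and bit depths listed in point 10 translate directly into storage requirements for uncompressed PCM audio (the kind of data a wave file holds). A quick sketch:

```python
# Uncompressed PCM audio size: every second stores sample_rate samples,
# each taking bit_depth/8 bytes, for each channel.
def pcm_size_bytes(sample_rate, bit_depth, channels, seconds):
    return sample_rate * (bit_depth // 8) * channels * seconds

# One minute of CD-quality audio (44.1 kHz, 16-bit, stereo):
print(pcm_size_bytes(44100, 16, 2, 60))  # 10584000 bytes, about 10 MB
```

This is why lossy formats such as MP3 and WMA exist: they cut that figure by roughly a factor of ten.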

What is multimedia authoring? What are the types of multimedia authoring systems?

Multimedia Authoring: An authoring system is a program that has pre-programmed elements for the development of interactive multimedia software titles. Authoring systems can be defined as software that allows its user to create multimedia applications for manipulating multimedia objects.
Types of multimedia authoring systems:

Graphical user Interface: A GUI (usually pronounced GOO-ee) is a graphical (rather than purely textual) user interface to a computer. As you read this, you are looking at the GUI or graphical user interface of your particular Web browser. The term came into existence because the first interactive user interfaces to computers were not graphical; they were text-and-keyboard oriented and usually consisted of commands you had to remember and computer responses that were infamously brief. The command interface of the DOS operating system (which you can still get to from your Windows operating system) is an example of the typical user-computer interface before GUIs arrived. An intermediate step in user interfaces between the command line interface and the GUI was the non-graphical menu-based interface, which let you interact by using a mouse rather than by having to type in keyboard commands.

Today's major operating systems provide a graphical user interface. Applications typically use the elements of the GUI that come with the operating system and add their own graphical user interface elements and ideas. A GUI sometimes uses one or more metaphors for objects familiar in real life, such as the desktop, the view through a window, or the physical layout in a building. Elements of a GUI include such things as: windows, pull-down menus, buttons, scroll bars, iconic images, wizards, the mouse, and no doubt many things that haven't been invented yet. With the increasing use of multimedia as part of the GUI, sound, voice, motion video, and virtual reality interfaces seem likely to become part of the GUI for many applications. A system's graphical user interface along with its input devices is sometimes referred to as its "look-and-feel."

The GUI familiar to most of us today in either the Mac or the Windows operating systems and their applications originated at the Xerox Palo Alto Research Laboratory in the late 1970s. Apple used it in their first Macintosh computers. Later, Microsoft used many of the same ideas in their first version of the Windows operating system for IBM-compatible PCs.

When creating an application, many object-oriented tools exist that facilitate writing a graphical user interface. Each GUI element is defined as a class widget from which you can create object instances for your application. You can code or modify prepackaged methods that an object will use to respond to user stimuli.
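The class-widget idea in the last paragraph can be sketched in a few lines. This is a toy illustration only, not the API of any real GUI toolkit; all class and method names here are invented:

```python
# Each GUI element is defined as a class; object instances respond to
# user stimuli through methods the programmer can override.
class Widget:
    def __init__(self, label):
        self.label = label

    def on_click(self):             # prepackaged default method
        return f"{self.label} clicked"

class Button(Widget):
    def __init__(self, label, action):
        super().__init__(label)
        self.action = action

    def on_click(self):             # modified method for this widget class
        return self.action()

ok = Button("OK", lambda: "form submitted")
print(ok.on_click())  # form submitted
```

Real toolkits follow the same pattern: a widget class hierarchy plus per-instance event handlers.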

What is Multimedia PC? Write the specification of MPC - 1 and MPC - 2

Multimedia PC :

The Multimedia PC (MPC) was a recommended configuration for a personal computer (PC) with a CD-ROM drive. The standard was set and named by the "Multimedia PC Marketing Council", which was a working group of the Software Publishers Association (SPA, now the Software and Information Industry Association).

The specification of MPC - 1 and MPC - 2 : 
Specification of MPC-1:
CPU

Minimum requirement: 386SX (or compatible) microprocessor
RAM
Minimum requirement: 2 megabytes of RAM
Magnetic Storage
Requirement:
3.5-inch, high density (1.44-MB) floppy disk drive.
Minimum requirement: 30-MB hard disk drive.
Optical Storage

Requirement:
CD-ROM drive with sustained 150kB/second transfer rate; average seek time of 1 second or less; 10,000 hours MTBF; mode 1 capability (mode 2 and form 1 & 2 optional); MSCDEX 2.2 driver that implements the extended audio APIs; subchannel Q
The drive must be capable of maintaining a sustained transfer rate of 150kB/sec. without consuming more than 40 percent of the CPU bandwidth in the process. It is recommended that this capability be achieved for read block sizes no less than 16K and lead time of no more than is required to load the CD-ROM buffer with 1 read block of data.
It is recommended that the drive have on-board buffers of 64K and implement read-ahead buffering.
Audio
Requirement:
CD-ROM drive with CD-DA (Red Book) outputs and a front panel volume control.
Requirement: 8-bit (16-bit recommended) digital-to-analog converter (DAC) with linear PCM sampling; DMA or FIFO buffered transfer capability with interrupt on buffer empty; 22.05 and 11.025 kHz sample rate mandatory.
44.1 kHz sampling rate desirable; optional stereo channels; no more than 10 percent of the CPU bandwidth required to output 11.025 or 22.05 kHz; no more than 15 percent for 44.1 kHz.
Requirement: 8-bit (16-bit recommended) analog-to-digital converter (ADC) with linear PCM sampling, 11.025 kHz mandatory (22.05 kHz or 44.1 kHz sampling rate optional); DMA or FIFO buffered transfer capability with interrupt on buffer full; microphone input.
Requirement: Internal synthesizer hardware with multi-voice, multi-timbral capabilities, six simultaneous melody notes plus two simultaneous percussive notes.
Requirement: Internal mixing capabilities to combine input from three (recommended four) sources and present the output as a stereo, line-level audio signal at the back panel. The four sources are: CD Red Book, synthesizer, DAC (waveform), and (recommended but not required) an auxiliary input source. Each input must have at least a 3-bit volume control (eight steps) with a logarithmic taper.
A 4-bit or greater volume control is strongly recommended. If all sources are sourced with -10dB (consumer line level: 1 milliwatt into 700 ohms=0dB) without attenuation, the mixer will not clip and will output between 0 dB and +3 dB. Individual audio source and master digital volume control registers and extra line-level audio sources are highly recommended.

Video
Requirement:
VGA-compatible display adapter, and a color VGA-compatible monitor. A basic Multimedia PC uses mode 12h (640 x 480, 16 colors). An enhanced configuration referred to as VGA+ is recommended with 640 x 480, 256 colors.
The recommended performance goal for VGA+ adapters is to be able to blit 1, 4, and 8 bit-per-pixel DIBs (device independent bitmaps) at 350K pixels/second given 100 percent of the CPU, and at 140K pixels/second given 40 percent of the CPU. This recommendation applies to run-length encoded images and non-encoded images. The recommended performance is needed to fully support high-performance applications, such as synchronized audio-visual presentations.
User Input
Requirement:
Standard 101-key IBM-style keyboard with standard DIN connector, or keyboard that delivers identical functionality utilizing key-combinations.
Requirement: Two-button mouse with bus or serial connector, with at least one additional communication port remaining free.
I/O
Requirement:
Standard 9-pin or 25-pin asynchronous serial port, programmable up to 9600 bits per second (BPS), switchable interrupt channel.
Requirement: Standard 25-pin bidirectional parallel port with interrupt capability.
Requirement: 1 MIDI port with In, Out, and Thru must have interrupt support for input and FIFO transfer.
Requirement: IBM-style analog or digital joystick port.
System Software
Multimedia PC system software must offer binary compatibility with Microsoft Windows 3.0 with Multimedia Extensions or Windows 3.1.
Specification of MPC-2:
CPU
Minimum requirement: 25 MHz 486SX (or compatible) microprocessor.
RAM
Minimum requirement:
4 megabytes of RAM (8 megabytes recommended).
Magnetic Storage
Requirement: 160-MB or larger hard disk drive.
Optical Storage
Requirements:
CD-ROM drive capable of sustained 300 KB/sec. transfer rate, average seek time of 400 milliseconds or less, CD-ROM XA ready (mode 1 capable, mode 2 form 1 capable, mode 2 form 2 capable), multisession capable.
At 300 KB/sec. sustained transfer rate it is recommended that no more than 60 percent of the CPU bandwidth be consumed. It is recommended that the CPU utilization recommendation be achieved for read block sizes no less than 16K and lead time of no more than is required to load the CD-ROM buffer with 1 read block of data.
Audio
Requirement:
16-bit digital-to-analog converter (DAC), 44.1 kHz sample rate mandatory, stereo channels; no more than 15 percent of the CPU bandwidth be required to output 44.1 kHz.
Requirement: 16-bit analog-to-digital converter (ADC) with Linear PCM sampling; 44.1 kHz sample rate mandatory.
CD-ROM XA audio capability is recommended.
Support for the IMA adopted ADPCM software algorithm is recommended.
Video
Requirement:
Color monitor with display resolution of 640 x 480 with 65,536 (64K) colors. The recommended performance goal for VGA+ adapters is to be able to blit 1, 4, and 8 bit-per-pixel DIBs (device independent bitmaps) at 1.2 megapixels/second given 40 percent of the CPU. This recommendation applies to run-length encoded images and non-encoded images. The recommended performance is needed to fully support demanding multimedia applications including the delivery of video with 320 x 240 resolution at 16 frames/second and 256 colors.
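The 1.2 megapixels/second goal lines up with the stated video delivery target, as a quick check shows:

```python
# MPC-2 video delivery target: 320 x 240 resolution at 16 frames/second.
pixels_per_second = 320 * 240 * 16
print(pixels_per_second)  # 1228800 -- just over the 1.2 Mpixel/s blit goal
```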
User Input :
No changes from Level 1.
I/O :
No changes from Level 1.
System Software :
No changes from Level 1.

What is luminance and chromaticity?

Luminance: Luminance is apparent brightness: how bright an object appears to the human eye. So when you look at the world, what you see is a pattern of varying luminances (if we ignore the color component). What you see on this page is the luminance of the black letters compared to the luminance of the white screen.

Luminance is measured in candelas per square meter.
Since luminance is what we see, light sources we look at have luminance too. The luminance of the sun and the moon gives a good idea of the huge range of brightness the human eye can handle.

Luminance of the sun: 1,600,000,000 cd/m2.

Luminance of the moon: 2500 cd/m2.

If you look at the sun, you'll get 1,600 million candelas per square meter into your eye. That is why you should not look directly at the sun for very long.
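Those two luminance figures make the eye's dynamic range concrete:

```python
sun_luminance = 1_600_000_000   # cd/m2
moon_luminance = 2_500          # cd/m2
ratio = sun_luminance // moon_luminance
print(ratio)  # 640000 -- the sun is 640,000 times as luminous as the moon
```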

Chromaticity:

Chromaticity is an objective specification of the quality of a color regardless of its luminance. Chromaticity consists of two independent parameters, often specified as hue (h) and colorfulness (s), where the latter is alternatively called saturation, chroma, intensity, or excitation purity. This number of parameters follows from the trichromacy of vision of most humans, which is assumed by most models in color science.

What is Projection? What are the different projection mechanisms? - Explain in details.

Projection :  Projection is a defense mechanism that involves taking our own unacceptable qualities or feelings and ascribing them to other people.

 Example: if you have a strong dislike for someone, you might instead believe that he or she does not like you. Projection works by allowing the expression of the desire or impulse, but in a way that the ego cannot recognize, therefore reducing anxiety.

  The different projection mechanisms :

1. Denial:

Denial is the refusal to accept reality or fact, acting as if a painful event, thought or feeling did not exist. It is considered one of the most primitive of the defense mechanisms because it is characteristic of early childhood development.
2. Regression:

Regression is the reversion to an earlier stage of development in the face of unacceptable thoughts or impulses. For example, an adolescent who is overwhelmed with fear may revert to earlier childhood behaviors such as bed-wetting.
3. Acting Out:

Acting Out is performing an extreme behavior in order to express thoughts or feelings the person feels incapable of otherwise expressing. Instead of saying, “I’m angry with you,” a person who acts out may throw something or lash out physically.
4. Dissociation:

Dissociation is when a person loses track of time and/or person, and instead finds another representation of their self in order to continue in the moment. A person who dissociates often loses track of time, of themselves, and of their usual thought processes and memories.
5. Compartmentalization:

Compartmentalization is a lesser form of dissociation, wherein parts of oneself are separated from awareness of other parts, so that one behaves as if one had separate sets of values.
6. Projection:

Projection is the misattribution of a person’s undesired thoughts, feelings or impulses onto another person who does not have those thoughts, feelings or impulses. Projection is used especially when the thoughts are considered unacceptable for the person to express.

7. Reaction Formation:

Reaction Formation is the converting of unwanted or dangerous thoughts, feelings or impulses into their opposites.

Write and explain the Cohen-Sutherland line clipping algorithm.

The Cohen–Sutherland algorithm is a computer graphics algorithm used for line clipping. The algorithm divides a two-dimensional space into 9 regions (or a three-dimensional space into 27 regions), and then efficiently determines the lines and portions of lines that are visible in the center region of interest (the viewport).

Algorithm:

• Both endpoints are in the viewport region (bitwise OR of endpoints == 0): trivial accept.

• Both endpoints share at least one non-visible region which implies that the line does not cross the visible region. (bitwise AND of endpoints != 0): trivial reject.

• Both endpoints are in different regions: In case of this nontrivial situation the algorithm finds one of the two points that is outside the viewport region (there will be at least one point outside). The intersection of the outpoint and extended viewport border is then calculated (i.e. with the parametric equation for the line) and this new point replaces the outpoint. The algorithm repeats until a trivial accept or reject occurs.

Each of the nine regions is assigned a number called an outcode. An outcode is computed for each of the two endpoints of the line. The outcode will have four bits for two-dimensional clipping, or six bits in the three-dimensional case. The bits in the 2D outcode represent: Top, Bottom, Right, Left; the first bit is set to 1 if the point is above the viewport. For example, the outcode 1010 represents a point that is top-right of the viewport. Note that the outcodes for endpoints must be recalculated on each iteration after the clipping occurs.
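The steps above can be sketched in Python. This follows the text's bit order (Top, Bottom, Right, Left), so 1010 is top-right; the variable names are my own:

```python
# Cohen-Sutherland clipping against the viewport [xmin, xmax] x [ymin, ymax].
TOP, BOTTOM, RIGHT, LEFT = 8, 4, 2, 1

def outcode(x, y, xmin, ymin, xmax, ymax):
    code = 0
    if y > ymax: code |= TOP
    elif y < ymin: code |= BOTTOM
    if x > xmax: code |= RIGHT
    elif x < xmin: code |= LEFT
    return code

def clip(x0, y0, x1, y1, xmin, ymin, xmax, ymax):
    c0 = outcode(x0, y0, xmin, ymin, xmax, ymax)
    c1 = outcode(x1, y1, xmin, ymin, xmax, ymax)
    while True:
        if c0 == 0 and c1 == 0:        # both inside: trivial accept
            return (x0, y0, x1, y1)
        if c0 & c1:                    # share an outside region: trivial reject
            return None
        c = c0 or c1                   # pick an endpoint outside the viewport
        if c & TOP:                    # intersect with the extended border
            x = x0 + (x1 - x0) * (ymax - y0) / (y1 - y0); y = ymax
        elif c & BOTTOM:
            x = x0 + (x1 - x0) * (ymin - y0) / (y1 - y0); y = ymin
        elif c & RIGHT:
            y = y0 + (y1 - y0) * (xmax - x0) / (x1 - x0); x = xmax
        else:                          # LEFT
            y = y0 + (y1 - y0) * (xmin - x0) / (x1 - x0); x = xmin
        if c == c0:                    # replace the outpoint, recompute its code
            x0, y0, c0 = x, y, outcode(x, y, xmin, ymin, xmax, ymax)
        else:
            x1, y1, c1 = x, y, outcode(x, y, xmin, ymin, xmax, ymax)

print(clip(-5, 5, 15, 5, 0, 0, 10, 10))  # (0, 5.0, 10, 5.0): clipped both sides
```

Division by zero cannot occur: an endpoint is only clipped against a boundary its outcode says it crosses, which guarantees the corresponding coordinate difference is nonzero.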

What do you mean by interactive computer Graphics?

Interactive computer Graphics :

Interactive computer graphics is like a website: it is only useful if it is browsed by a visitor, and no two visitors are exactly alike, so the site must support interaction from users with a variety of skills, interests and end goals. In the same way, interactive computer graphics involves the user's interaction with the graphics system.

What is SRGP? Explain about SRGP.

SRGP :  SRGP stands for Simple Raster Graphics Package.

Explain about SRGP : 

- SRGP consists of library functions, along with custom data types and constants.

- SRGP procedures operate on canvases, each of which is a 2D array of pixels.

- The depth of a canvas is the number of planes requested by the designated application.

- Every canvas has its own local coordinate system.

- An SRGP application cannot control multiple windows.
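A hypothetical sketch of the canvas idea follows. SRGP itself is a C library; the class and method names here are invented purely to illustrate "a 2D array of pixels with its own local coordinate system":

```python
# A canvas as a 2D pixel array with its own local coordinates.
class Canvas:
    def __init__(self, width, height, background=0):
        self.width, self.height = width, height
        self.pixels = [[background] * width for _ in range(height)]

    def set_pixel(self, x, y, value):   # (x, y) in this canvas's coordinates
        self.pixels[y][x] = value

    def get_pixel(self, x, y):
        return self.pixels[y][x]

c = Canvas(640, 480)
c.set_pixel(10, 20, 1)
print(c.get_pixel(10, 20))  # 1
```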

What is the difference between vector and raster graphics?

Raster Graphics:


1. Raster graphics are composed of pixels.

2. A raster graphic, such as a gif or jpeg, is an array of pixels of various colors, which together form an image.

3. Raster graphics, on the other hand, become "blocky," since each pixel increases in size as the image is made larger.

4. Adobe Photoshop, GIMP, Krita, Corel Photopaint and Pixelmator are primarily raster.

5. Most digital painting programs and apps like ArtRage, Sketchbook, Layerpaint and Procreate are raster.

6. JPG, GIF, PNG, TIFF and BMP are all common raster image formats. PSDs (Photoshop documents) are also raster-based. PDFs can contain both raster and vector data.

7. Raster Graphics are comprised of tiny squares of color information, which are usually referred to as pixels, or dots.

8. Raster Graphics are usually measured in Dots Per Inch (dpi) when creating images or graphics for print, or Pixels Per Inch (ppi) when creating images or graphics for web use, which allows you to measure how much detailed color information a specific image contains.

9. For example, if you have a 2 inch x 2 inch image at a resolution of 300 ppi, your image contains 600 x 600 = 360,000 pixels of color that provide the detail, color, and shading information for your image.

10. Raster Graphic Examples: Stationery Printing, Catalogues, Flyers, Postcards, etc.

Vector Graphics:

1. Vector graphics are composed of paths.

2. A vector graphic, such as an .eps file or Adobe Illustrator file, is composed of paths, or lines, that are either straight or curved.

3. The data file for a vector image contains the points where the paths start and end, how much the paths curve, and the colors that either border or fill the paths. Because vector graphics are not made of pixels, the images can be scaled to be very large without losing quality.

4. Adobe Illustrator, Inkscape, Sketch, Affinity Designer and Corel Draw are primarily vector.

5. Most CAD and 3D rendering programs like AutoCAD, Maya, Blender and Cinema4D work with (more complex) vectors.

6. EPS, SVG and AI (Illustrator) are the most common vector formats. They can all contain embedded raster images. PDFs can contain both.

7. Vector Application Examples: Large-format signage, vehicle wraps, window graphics, vinyl lettering, etc.

8. While Raster Graphics are comprised of individual pixels, Vector Graphics are built using mathematically defined areas to produce shapes, lines, and curves. This is why Vector Graphics are suited for graphic elements that are more geometric in nature, such as shapes and text, whereas Raster Graphics are suited for more detailed images such as photographs.

What are the merits and demerits of DVST?

DVST : DVST stands for Direct View Storage Tube. It is a display device in which an electron flood gun and a writing gun are present. The flood gun floods electrons toward a wire grid on which the writing gun has already written an image. The electrons from the flood gun are repelled by the parts of the wire grid that have been negatively charged by the writing electron beam. The parts of the wire grid that have not been negatively charged allow the electrons to pass through; those electrons strike the screen and produce the image.

Advantages and disadvantages of DVST:

Advantages:

1. Refreshing is not essential.
2. Without flicker, very complex pictures can be displayed at very high resolution.

Disadvantages:

1. They normally cannot display color.
2. A selected part of the picture cannot be erased on its own.
3. Redrawing and erasing a complex picture can take quite a few seconds.

Briefly explain the standards of TV and Video Broadcasting

Broadcast television systems:
Broadcast television systems are encoding or formatting standards for the transmission and reception of terrestrial television signals.

Frames:

Ignoring color, all television systems work in essentially the same manner. The monochrome image seen by a camera is divided into horizontal scan lines, some number of which make up a single image or frame.

Viewing technology:

Analog television signal standards are designed to be displayed on a cathode ray tube (CRT).
Overscan:

Television images are unique in that they must incorporate regions of the picture with reasonable-quality content even at the extreme edges, which some viewers will never see because receivers overscan the picture.
 
Interlacing:

In a purely analog system, field order is merely a matter of convention.

Image polarity:

Another parameter of analog television systems, minor by comparison, is the choice of whether vision modulation is positive or negative.

Audio:

In analog television, the analog audio portion of a broadcast is invariably modulated separately from the video.

Evolution:

In a few countries, most notably the United Kingdom, television broadcasting on VHF has been entirely shut down.

Digital Video Broadcasting:

The Digital Video Broadcasting Project is an industry-led consortium of over 200 broadcasters, manufacturers, network operators, software developers and regulators from around the world committed to designing open technical standards for the delivery of digital television.

Write short notes on AVI and DVI

Audio Video Interleave:

AVI stands for Audio Video Interleave (also Audio Video Interleaved). AVI files can contain both audio and video data in a file container that allows synchronous audio-with-video playback. AVI files also support multiple streams of audio and video.

Format:

AVI is a derivative of the Resource Interchange File Format (RIFF), which divides a file's data into blocks, or "chunks." Each chunk is identified by a four-character tag (a FourCC). An AVI file takes the form of a single chunk in a RIFF-formatted file, which is then subdivided into further chunks.
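The chunk layout can be illustrated with a short parser. This is a simplified sketch of RIFF's chunk framing only (4-byte FourCC, 4-byte little-endian size, then the data, padded to even length), not a full AVI reader:

```python
import struct

# Walk a buffer of RIFF-style chunks and return (fourcc, data) pairs.
def read_chunks(data):
    chunks, offset = [], 0
    while offset + 8 <= len(data):
        fourcc, size = struct.unpack_from("<4sI", data, offset)
        chunks.append((fourcc.decode("ascii"), data[offset + 8:offset + 8 + size]))
        offset += 8 + size + (size & 1)   # chunks are padded to even sizes
    return chunks

# A toy two-chunk payload (not a real AVI file):
payload = (b"fmt " + struct.pack("<I", 4) + b"ABCD"
           + b"data" + struct.pack("<I", 2) + b"xy")
print(read_chunks(payload))  # [('fmt ', b'ABCD'), ('data', b'xy')]
```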

Limitations:

Since its introduction in the early 90s, new computer video techniques have been introduced which the original AVI specification did not anticipate.

AVI does not provide a standardized way to encode time-code information; there are several competing approaches to including a time code in AVI files, even though time code is widely used.

AVI was not originally intended to contain video compressed with techniques that require access to future frame data. Approaches exist to support modern video compression techniques (such as MPEG-4), but they are workarounds.

AVI cannot reliably contain some specific types of variable bitrate (VBR) data. In addition, the overhead of AVI files at the resolutions and frame rates normally used to encode standard-definition feature films is about 5 MB per hour of video.


Digital Video Interactive:

Digital Video Interactive (DVI) was the first multimedia desktop video standard for IBM-compatible personal computers. It enabled full-screen, full-motion video, as well as graphics, to be presented on a DOS-based desktop computer. The scope of Digital Video Interactive encompasses a file format together with video and audio compression formats.

Implementations:

The first implementation of DVI, developed in the mid-80s, relied on three 16-bit ISA cards installed inside the computer, including one for audio processing and another for video.

Later DVI implementations used only one card, such as Intel's ActionMedia series.

Compression:

The DVI format specified two video compression schemes, Production Level Video (PLV) and Real-Time Video (RTV), and two audio compression schemes, ADPCM and PCM8.

The original video compression scheme, Production Level Video (PLV), was asymmetric in that a Digital VAX-11/750 minicomputer was used to compress the video in non-real time for playback at 30 frames per second.

Describe the YIQ and CMYK color mode

YIQ:

YIQ is the color space used by the NTSC color TV system, employed mainly in North and Central America, and Japan. I stands for in-phase, while Q stands for quadrature, referring to the components used in quadrature amplitude modulation. Some forms of NTSC now use the YUV color space, which is also used by other systems such as PAL.

The Y component represents the luma information, and is the only component used by black-and-white television receivers. I and Q represent the chrominance information. In YUV, the U and V components can be thought of as X and Y coordinates within the color space. I and Q can be thought of as a second pair of axes on the same graph, rotated 33°; therefore IQ and UV represent different coordinate systems on the same plane.

The YIQ system is intended to take advantage of human color-response characteristics. The eye is more sensitive to changes in the orange-blue (I) range than in the purple-green range (Q) — therefore less bandwidth is required for Q than for I. Broadcast NTSC limits I to 1.3 MHz and Q to 0.4 MHz. I and Q are frequency interleaved into the 4 MHz Y signal, which keeps the bandwidth of the overall signal down to 4.2 MHz. In YUV systems, since U and V both contain information in the orange-blue range, both components must be given the same amount of bandwidth as I to achieve similar color fidelity.
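The relationship between RGB and YIQ can be made concrete with the standard NTSC conversion (coefficients rounded to three decimals; R, G, B in the range 0 to 1):

```python
# RGB to YIQ using the NTSC coefficients (rounded to three decimals).
def rgb_to_yiq(r, g, b):
    y = 0.299 * r + 0.587 * g + 0.114 * b   # luma: what a B&W set displays
    i = 0.596 * r - 0.274 * g - 0.322 * b   # orange-blue axis
    q = 0.211 * r - 0.523 * g + 0.312 * b   # purple-green axis
    return y, i, q

# White carries luma only, no chrominance:
print(tuple(round(abs(v), 3) for v in rgb_to_yiq(1, 1, 1)))  # (1.0, 0.0, 0.0)
```

Note how the Y row matches the eye's unequal sensitivity to the three primaries, while I and Q are the rotated chrominance axes described above.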

CMYK Color Model:

Color printing typically uses ink of four colors: cyan, magenta, yellow, and key (black).

When CMY “primaries” are combined at full strength, the resulting “secondary” mixtures are red, green, and blue. Mixing all three gives black.

The CMYK color model (process color, four color) is a subtractive color model, used in color printing, and is also used to describe the printing process itself. CMYK refers to the four inks used in some color printing: cyan, magenta, yellow, and key (black).

The "K" in CMYK stands for key because in four-color printing, cyan, magenta, and yellow printing plates are carefully keyed, or aligned, with the key of the black key plate.

The "black" generated by mixing commercially practical cyan, magenta, and yellow inks is unsatisfactory, so four-color printing uses black ink in addition to the subtractive primaries. Common reasons for using black ink include sharper black text, lower ink cost, and avoiding the registration problems of printing fine detail with three overlapping inks.[6]
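The idea of pulling the shared gray component of C, M, and Y out into the K channel can be illustrated with a naive RGB-to-CMYK conversion. This is a simplified sketch only; real print workflows use device color profiles, and the function name is illustrative:

```python
def rgb_to_cmyk(r, g, b):
    """Naive RGB -> CMYK conversion (all components in [0, 1]).
    K takes the common gray part; C, M, Y carry what remains."""
    k = 1 - max(r, g, b)
    if k == 1.0:                      # pure black: no colored ink needed
        return 0.0, 0.0, 0.0, 1.0
    c = (1 - r - k) / (1 - k)
    m = (1 - g - k) / (1 - k)
    y = (1 - b - k) / (1 - k)
    return c, m, y, k

print(rgb_to_cmyk(1.0, 0.0, 0.0))    # red -> (0.0, 1.0, 1.0, 0.0)
```

Note that pure black comes out as K = 1 with no colored ink at all, which is precisely why four-color printing is preferred over mixing the three subtractive primaries.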

Explain in short the HDTV and DVI standards

The HDTV and DVI standards : 

HDTV : 

Short for High-Definition Television, a television standard that provides much better resolution than standard-definition sets based on the NTSC standard. HDTV is a digital TV broadcasting format in which the broadcast transmits widescreen pictures with more detail and quality than a standard analog television or other digital television formats. HDTV is a type of Digital Television (DTV) broadcast and is considered the best-quality DTV format available. Types of HDTV displays include direct-view, plasma, rear-screen, and front-screen projection. HDTV requires an HDTV tuner to view, and the most detailed broadcast format is 1080i.

 HDTV Minimum Performance Attributes:

Receiver:  Receives ATSC terrestrial digital transmissions and decodes all ATSC Table 3 video formats.

Display Scanning Format:  Has active vertical scanning lines of 720 progressive (720p), 1080 interlaced (1080i), or higher

Aspect Ratio:  Capable of displaying a 16:9 image

 Audio: Receives and reproduces, and/or outputs Dolby Digital audio.

DVI : 

Digital Video Interactive:

Digital Video Interactive (DVI) was the first multimedia desktop video standard for IBM-compatible personal computers. It enabled full-screen, full-motion video and graphics to be presented on a DOS-based desktop computer. The scope of Digital Video Interactive encompasses a file format as well as the hardware needed to play it back.

Implementations:

The first implementation of DVI, developed in the mid-1980s, relied on three 16-bit ISA cards installed inside the computer: one for audio processing and the others for video.
Later DVI implementations used only one card, such as Intel's ActionMedia series.
Compression

The DVI format specified two video compression schemes, Presentation Level Video (PLV, later called Production Level Video) and Real-Time Video (RTV), and two audio compression schemes, ADPCM and PCM8.[3][1]

The original video compression scheme, called Presentation Level Video (PLV), was asymmetric in that a Digital VAX-11/750 minicomputer was used to compress the video in non-real time for playback at 30 frames per second.

Describe different types of broadcast video standards




The first colour TV broadcast system was implemented in the United States in 1953. This was based on the NTSC - National Television System Committee standard. NTSC is used by many countries on the American continent as well as many Asian countries including Japan.
NTSC runs on 525 lines/frame.

The PAL - Phase Alternating Line standard was introduced in the early 1960s and implemented in most European countries except France.
The PAL standard utilises a wider channel bandwidth than NTSC which allows for better picture quality.
PAL runs on 625 lines/frame.

The SECAM - Séquentiel Couleur à Mémoire, or Sequential Colour with Memory - standard was introduced in the early 1960s and implemented in France. SECAM uses the same bandwidth as PAL but transmits the colour information sequentially.

SECAM runs on 625 lines/frame.

Compare MPEG-I with MPEG-II?


1. MPEG-2 succeeded MPEG-1 to address some of the older standard's weaknesses.

2. MPEG-2 has better quality than MPEG-1.

3. MPEG-1 is used for VCD, while MPEG-2 is used for DVD.

4. One may consider MPEG-2 as MPEG-1 that supports higher resolutions and is capable of using higher and variable bitrates.

5. MPEG-1 is older than MPEG-2, but the former is arguably better at lower bitrates.

6. MPEG-2 has a more complex encoding algorithm.

What is 2D Discrete Cosine Transformation(DCT)?


The discrete cosine transform (DCT) helps separate the image into parts (or spectral sub-bands) of differing importance (with respect to the image's visual quality). The DCT is similar to the discrete Fourier transform: it transforms a signal or image from the spatial domain to the frequency domain.
 The basic operation of the DCT is as follows:

1. The input image is N by M;

2. f(i,j) is the intensity of the pixel in row i and column j;

3. F(u,v) is the DCT coefficient in row u and column v of the DCT matrix.

4. For most images, much of the signal energy lies at low frequencies; these appear in the upper left corner of the DCT.

5. Compression is achieved since the lower right values represent higher frequencies, and are often small - small enough to be neglected with little visible distortion.

6. The DCT input is an 8 by 8 array of integers. This array contains each pixel's grayscale level;

7. 8-bit pixels have levels from 0 to 255.
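The operation above can be sketched directly from the DCT-II definition. The `dct2` helper below is a straightforward, unoptimized NumPy implementation (the function name is illustrative); the constant test block shows all the signal energy landing in the top-left (DC) coefficient, which is what makes compression possible:

```python
import numpy as np

def dct2(block):
    """2-D DCT-II of an N x M block:
    F(u,v) = (2/sqrt(N*M)) * C(u)*C(v) * sum_{i,j} f(i,j)
             * cos((2i+1)u*pi/(2N)) * cos((2j+1)v*pi/(2M)),
    with C(0) = 1/sqrt(2) and C(k) = 1 otherwise."""
    N, M = block.shape
    i, u = np.arange(N), np.arange(N)
    j, v = np.arange(M), np.arange(M)
    cos_u = np.cos((2 * i[None, :] + 1) * u[:, None] * np.pi / (2 * N))
    cos_v = np.cos((2 * j[None, :] + 1) * v[:, None] * np.pi / (2 * M))
    F = cos_u @ block @ cos_v.T          # separable row/column transform
    C = lambda n: np.where(np.arange(n) == 0, 1 / np.sqrt(2), 1.0)
    return F * np.outer(C(N), C(M)) * (2 / np.sqrt(N * M))

# A flat 8x8 block has no spatial variation, so every coefficient
# except the DC term F(0,0) is (numerically) zero.
F = dct2(np.full((8, 8), 128.0))
```

For natural images the low-frequency coefficients in the upper-left corner dominate in the same way, and the small high-frequency values in the lower right can be quantized away with little visible distortion.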

Write down the Huffman coding algorithm used in lossless compression

 The Huffman coding algorithm used in lossless compression:


1. Read a BMP image using an image box control in the Delphi language. The TImage control can be used to display a graphical image - Icon (ICO), Bitmap (BMP), Metafile (WMF), GIF, JPEG, etc. This control will read an image and convert it into a text file.

2. Call a function that will sort or prioritize characters based on the frequency count of each character in the file.

3. Call a function that will create an initial heap, then reheap that tree according to the occurrence of each node: the lower the occurrence, the earlier it is attached in the heap. Create a new node whose left child is the lowest in the sorted list and whose right child is the second lowest.

4. Build the Huffman code tree based on the prioritized list. Chop off those two elements from the sorted list, as they are now part of one node, and add their probabilities. The result is the probability for the new node.

5. Perform insertion sort on the list with the new node.

6. Repeat steps 3, 4, and 5 until only one node is left.

7. Perform a traversal of tree to generate code table. This will determine code for each element of tree in the following way.
The code for each symbol may be obtained by tracing a path to the symbol from the root of the tree. A 1 is assigned for a branch in one direction and a 0 is assigned for a branch in the other direction. For example a symbol which is reached by branching right twice, then left once may be represented by the pattern '110'. The figure below depicts codes for nodes of a sample tree.
              *
            /   \
         (0)     (1)
                /   \
            (10)     (11)
                    /    \
                (110)    (111)

8. Once a Huffman tree is built, canonical Huffman codes, which require less information to rebuild, may be generated by the following steps:

Step 1. Remember the lengths of the codes resulting from a Huffman tree generated per above.

Step 2. Sort the symbols to be encoded by the lengths of their codes (use symbol value to break ties).

Step 3. Initialize the current code to all zeros and assign code values to symbols from longest to shortest code as follows:

(A). If the current code length is greater than the length of the code for the current symbol, right shift off the extra bits.

(B). Assign the code to the current symbol.

(C). Increment the code value.

(D). Get the symbol with the next longest code.

(E). Repeat from A until all symbols are assigned codes.

9. Encoding Data- Once a Huffman code has been generated, data may be encoded simply by replacing each symbol with its code.

10. The original image is reconstructed i.e. decompression is done by using Huffman Decoding.

11. Generate a tree equivalent to the encoding tree. If you know the Huffman code for some encoded data, decoding may be accomplished by reading the encoded data one bit at a time. Once the bits read match a code for a symbol, write out the symbol and start collecting bits again.

12. Read the input bit by bit and traverse the tree until a leaf element is reached.

13. Output the character encoded in the leaf, return to the root, and continue step 12 until the codes of all corresponding symbols are known.
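Steps 2 through 7 above can be sketched compactly in Python using a binary heap as the priority list. This is an illustration of the tree-building and code-assignment idea, not the Delphi implementation described above:

```python
import heapq
from collections import Counter

def build_codes(text):
    """Build a Huffman code table: count frequencies, heapify,
    repeatedly merge the two least frequent nodes, then walk the
    tree assigning 0/1 per branch."""
    freq = Counter(text)
    # Heap entries are (frequency, tie-breaker, tree); a leaf is a
    # symbol, an internal node is a (left, right) pair.
    heap = [(f, i, sym) for i, (sym, f) in enumerate(freq.items())]
    heapq.heapify(heap)
    count = len(heap)
    while len(heap) > 1:                        # steps 3-6: merge until one node
        f1, _, left = heapq.heappop(heap)
        f2, _, right = heapq.heappop(heap)
        heapq.heappush(heap, (f1 + f2, count, (left, right)))
        count += 1
    codes = {}
    def walk(node, prefix):                     # step 7: traverse to get codes
        if isinstance(node, tuple):             # internal node: branch 0/1
            walk(node[0], prefix + "0")
            walk(node[1], prefix + "1")
        else:                                   # leaf: record the symbol's code
            codes[node] = prefix or "0"
    walk(heap[0][2], "")
    return codes

codes = build_codes("abracadabra")
# Step 9 (encoding): replace each symbol with its code.
bits = "".join(codes[c] for c in "abracadabra")
```

Because 'a' accounts for 5 of the 11 symbols, it receives the shortest code, and the encoded stream is much shorter than the 88 bits of the raw 8-bit text. The codes are prefix-free, which is what makes the bit-by-bit decoding in steps 11-13 unambiguous.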

What is data compression? Why data compression is needed?

Data compression : 
                                         In digital signal processing, data compression, source coding, or bit-rate reduction involves encoding information using fewer bits than the original representation. Compression can be either lossy or lossless.


Data compression is needed : 
                                                      Data compression is needed because it allows the data to be stored in an area without taking up an unnecessary amount of space. Data compression uses a series of algorithms to reduce the amount of real space that the data would normally take up.

How much space compressed data occupies depends on how it was compressed. A long sequence with little redundancy will still compress to a relatively large size, while a small or highly redundant body of data will take up far less space once compressed.
The space the data will take up is determined by the algorithm used to compress it. If the algorithm is well matched to the structure of the data, the data can be reduced to a small fraction of its original size. If a generalized algorithm is used, the data may not compress as well as it would with an algorithm tailored to it. The algorithm is the determining factor in how well the data will compress, how much space it will take up, and how much of the data will be recoverable after compression.
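As a concrete illustration, Python's standard `zlib` module (a lossless DEFLATE compressor) shows how redundant data shrinks dramatically while remaining exactly recoverable:

```python
import zlib

# Highly redundant input compresses very well; because DEFLATE is
# lossless, decompression recovers the original byte-for-byte.
data = b"multimedia " * 1000
packed = zlib.compress(data)
ratio = len(packed) / len(data)     # far below 1 for this input
restored = zlib.decompress(packed)  # identical to `data`
```

The same call on already-random or already-compressed input would yield a ratio near (or even slightly above) 1, which is the algorithm-to-data matching point made above.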

What do you mean by digital image?

Digital Image : 
                            Digital imaging or digital image acquisition is the creation of digital images, such as of a physical scene or of the interior structure of an object. The term is often assumed to imply or include the processing, compression, storage, printing, and display of such images.

Digital imaging can be classified by the type of electromagnetic radiation or other waves whose variable attenuation, as they pass through or reflect off objects, conveys the information that constitutes the image. In all classes of digital imaging, the information is converted by image sensors into digital signals that are processed by a computer and output as a visible-light image. For example, the medium of visible light allows digital photography (including digital videography) with various kinds of digital cameras (including digital video cameras). X-rays allow digital X-ray imaging (digital radiography, fluoroscopy, and CT), and gamma rays allow digital gamma ray imaging (digital scintigraphy, SPECT, and PET). Sound allows ultrasonography (such as medical ultrasonography) and sonar, and radio waves allow radar. Digital imaging lends itself well to image analysis by software, as well as to image editing (including image manipulation).

Define geometric transformation and coordinate transformation

Geometric Transformation:
A geometric transformation is any bijection of a set having some geometric structure to itself or another such set. Specifically, "A geometric transformation is a function whose domain and range are sets of points. Most often the domain and range of a geometric transformation are both R2 or both R3."

Example:
             Within transformation geometry, the properties of an isosceles triangle are deduced from the fact that it is mapped to itself by a reflection about a certain line. This contrasts with the classical proofs by the criteria for congruence of triangles.[1]


Co-ordinate Transformation:
                           Coordinate transformations are nonintuitive enough in 2-D, and positively painful in 3-D. They can be tackled in the following order: (i) vectors in 2-D, (ii) tensors in 2-D, (iii) vectors in 3-D, (iv) tensors in 3-D, and finally (v) 4th-rank tensor transforms.

A major aspect of coordinate transforms is the evaluation of the transformation matrix, especially in 3-D.

It is very important to recognize that all coordinate transforms on this page are rotations of the coordinate system while the object itself stays fixed. The "object" can be a vector such as force or velocity, or a tensor such as stress or strain in a component. Object rotations are discussed in later sections.
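A 2-D coordinate rotation illustrates this: the axes rotate by an angle θ while the object stays fixed, and a point's new coordinates come from multiplying by the transformation matrix. A minimal NumPy sketch (the helper name is illustrative):

```python
import numpy as np

def rotate_coords(points, theta):
    """Express `points` (shape (n, 2)) in a coordinate system rotated
    by theta. The points themselves stay fixed; only the axes move:
        x' =  x*cos(theta) + y*sin(theta)
        y' = -x*sin(theta) + y*cos(theta)"""
    c, s = np.cos(theta), np.sin(theta)
    Q = np.array([[c, s],
                  [-s, c]])        # the transformation matrix
    return points @ Q.T

# Seen from axes rotated 90 degrees, the point (1, 0) lies on the
# new negative-y axis, i.e. its new coordinates are (0, -1).
p = rotate_coords(np.array([[1.0, 0.0]]), np.pi / 2)
```

Note the sign convention: rotating the coordinate system by +θ uses the transpose (inverse) of the matrix that would rotate the object itself by +θ, which is exactly the distinction drawn above between coordinate transforms and object rotations.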