Glossary Flashcards

1
Q

firmware

A

Permanent software programmed into a read-only memory.

You can think of firmware simply as “software for hardware.” Devices that you might think of as strictly hardware such as optical drives, a network card, a router, or a scanner all have software that is programmed into special memory contained in the hardware itself.

Manufacturers of CD and DVD drives often release regular firmware updates to keep their hardware compatible with new media. Network router manufacturers often release updates to firmware on their devices to improve performance or add additional features.

2
Q

access keys

A

keys you can use to select or execute a command

3
Q

command

A

input that tells the computer which task to execute

4
Q

contextual tab

A

a Ribbon tab that is only available in a certain context or situation

5
Q

selecting vs. highlighting

A

you select text, then you can highlight it…or do anything else

6
Q

contiguous

A

adjacent or in a row

7
Q

dialog box launcher

A

an arrow button on the ribbon you click to open a dialog box

8
Q

information about a file

A

file properties

9
Q

font

A

a complete set of characters in a specific typeface, style, and size

10
Q

tiff

A

TIFF - Tag Image File Format (.tif file extension, pronounced "tiff"). TIFF is the format of choice for archiving important images, and it is THE leading commercial and professional image standard. TIFF is the most universal and most widely supported format across all platforms: Mac, Windows, and Unix.

11
Q

gif

A

Gif (Graphics Interchange Format) images are great for creating very low-resolution files for your website. They support transparency, which is great. Transparency allows you to place the gif over any color background or even photos, and you won't see a border or background in the image. All you will see is the icon. You typically use a gif for simple logos, icons, or symbols. Using a gif for photos is not recommended, because gifs are limited to 256 colors, and in some cases you can use even fewer. The fewer colors in your image, the smaller your file size will be. Gif files also support a feature called interlacing, which preloads the graphic: it starts out blurry and becomes focused and crisp when it is finished downloading. This makes the transition easier for your viewer, who doesn't have to wait as long to see logos or icons on your site. Gifs also support animation. They don't support the level of animation that Flash files do, but they still let you add movement or transitions to your site without a lot of programming or coding. (More advanced web designers and developers tend to use jQuery to create animated effects.) Gif files are also compressed, which gives them a small file size.

You mainly use the gif file format for logos and graphics with solid areas of color. You wouldn't use it for a photographic image or a graphic with gradients.

The Graphics Interchange Format (better known by its acronym GIF; /ˈdʒɪf/ or /ˈɡɪf/) is a bitmap image format that was introduced by CompuServe in 1987[1] and has since come into widespread usage on the World Wide Web due to its wide support and portability.

The format supports up to 8 bits per pixel for each image, allowing a single image to reference its own palette of up to 256 different colors chosen from the 24-bit RGB color space. It also supports animations and allows a separate palette of up to 256 colors for each frame. These palette limitations make the GIF format unsuitable for reproducing color photographs and other images with continuous color, but it is well-suited for simpler images such as graphics or logos with solid areas of color.

GIF images are compressed using the Lempel-Ziv-Welch (LZW) lossless data compression technique to reduce the file size without degrading the visual quality. This compression technique was patented in 1985. Controversy over the licensing agreement between the software patent holder, Unisys, and CompuServe in 1994 spurred the development of the Portable Network Graphics (PNG) standard. All the relevant patents have now expired.

12
Q

jpeg

A

Jpeg (Joint Photographic Experts Group) files can be relatively small in size, but they still look crisp and beautiful. Jpegs support up to 16.7 million colors, which makes them the right choice for complex images and photographs. With the wide range of colors, you can have beautiful imagery without a bulky file size. With new responsive techniques, you can also have flexible images without large loading times. There are also progressive jpegs, which preload similarly to interlaced gifs: they start out blurry, but come into focus as their information loads.

In computing, JPEG (/ˈdʒeɪpɛɡ/ jay-peg)[1] (seen most often with the .jpg or .jpeg filename extension) is a commonly used method of lossy compression for digital images, particularly for those images produced by digital photography. The degree of compression can be adjusted, allowing a selectable tradeoff between storage size and image quality. JPEG typically achieves 10:1 compression with little perceptible loss in image quality.

JPEG compression is used in a number of image file formats. JPEG/Exif is the most common image format used by digital cameras and other photographic image capture devices; along with JPEG/JFIF, it is the most common format for storing and transmitting photographic images on the World Wide Web. These format variations are often not distinguished, and are simply called JPEG.

The term “JPEG” is an acronym for the Joint Photographic Experts Group, which created the standard. The MIME media type for JPEG is image/jpeg (defined in RFC 1341), except in Internet Explorer, which provides a MIME type of image/pjpeg when uploading JPEG images.[2]

JPEG/JFIF supports a maximum image size of 65535×65535 pixels[3] – one to four gigapixels (1000 megapixels), depending on aspect ratio (from panoramic 3:1 to square).
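
As a minimal sketch of the adjustable quality/size tradeoff described above, assuming the third-party Pillow library is installed (the filenames are placeholders):

from PIL import Image  # Pillow: pip install Pillow
import os

img = Image.open("photo.jpg")         # any photographic source image
img.save("high_q.jpg", quality=85)    # mild compression
img.save("low_q.jpg", quality=30)     # aggressive compression

# The low-quality file is much smaller, at a visible cost in artifacts.
print(os.path.getsize("high_q.jpg"), os.path.getsize("low_q.jpg"))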

13
Q

png

A

(Portable Network Graphics, pronounced "ping") PNG files were developed to build upon the purpose of gifs. Designers need the ability to incorporate low-resolution images that load quickly but also look great. This is where PNG comes in. PNG-8 supports only simple on/off (binary) transparency, like gif, while PNG-24 with an alpha channel (often called PNG-32) supports fully variable transparency. PNG files are lossless, which means that they do not lose quality during editing, unlike jpegs, which degrade each time they are re-encoded. PNG files tend to be larger than jpegs, because they are lossless and contain more information. PNG files do not support animation. For this purpose, a gif should be used.
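
A quick way to see the lossless/lossy distinction, sketched with the Pillow library (an assumption, as is the placeholder filename):

from PIL import Image

original = Image.open("source.png").convert("RGB")
original.save("copy.png")              # PNG: lossless
original.save("copy.jpg", quality=90)  # JPEG: lossy, even at high quality

# The PNG round-trips every pixel exactly; the JPEG generally does not.
print(list(Image.open("copy.png").getdata()) == list(original.getdata()))
print(list(Image.open("copy.jpg").getdata()) == list(original.getdata()))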

14
Q

bitmap image format

A

In computer graphics, a raster graphics image is a dot matrix data structure representing a generally rectangular grid of pixels, or points of color, viewable via a monitor, paper, or other display medium. Raster images are stored in image files with varying formats.[1]

A bitmap, a single-bit raster[2], corresponds bit-for-bit with an image displayed on a screen, generally in the same format used for storage in the display’s video memory, or maybe as a device-independent bitmap. A raster is technically characterized by the width and height of the image in pixels and by the number of bits per pixel (a color depth, which determines the number of colors it can represent).[3]

The printing and prepress industries know raster graphics as contones (from “continuous tones”). The opposite to contones is “line work”, usually implemented as vector graphics in digital systems.[4]
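
Since a raster is characterized by width, height, and bits per pixel, its uncompressed size follows directly; a worked example:

# Uncompressed raster size = width x height x bits-per-pixel / 8 bytes.
width, height, bpp = 1920, 1080, 24     # 24-bit color depth
size_bytes = width * height * bpp // 8
print(size_bytes, size_bytes / 2**20)   # 6220800 bytes, about 5.9 MiB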

15
Q

openEXR

A

OpenEXR is a high-dynamic-range (HDR) image file format, released as an open standard, along with a set of software tools, by Industrial Light & Magic (ILM) under a free software license similar to the BSD license.[1]

It is notable for supporting 16-bit-per-channel floating point values (half precision), with a sign bit, five bits of exponent, and a ten-bit significand. This allows a dynamic range of over thirty stops of exposure.

Both lossless and lossy compression of high-dynamic-range data are also supported.[2]
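
The thirty-stop figure can be checked against IEEE half-precision floats, which share the 1/5/10-bit layout described above (a sketch using NumPy, assumed installed):

import numpy as np
from math import log2

info = np.finfo(np.float16)
print(float(info.max))    # 65504.0, the largest half-float
print(float(info.tiny))   # ~6.1e-05, the smallest normal half-float
print(log2(float(info.max) / float(info.tiny)))  # ~30 stops; denormals extend this further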

16
Q

DPX Image

A

Digital Picture Exchange (DPX) is a common file format for digital intermediate and visual effects work and is an ANSI/SMPTE standard (268M-2003).[2] The file format is most commonly used to represent the density of each colour channel of a scanned negative film in an uncompressed "logarithmic" image, where the gamma of the original camera negative is preserved as taken by a film scanner. For this reason, DPX is the worldwide-chosen format for still-frame storage in most Digital Intermediate post-production facilities and film labs. Other common video formats are supported as well, from video to purely digital ones, making DPX a file format suitable for almost any raster digital imaging application. DPX provides, in fact, a great deal of flexibility in storing colour information, colour spaces and colour planes for exchange between production facilities. Multiple forms of packing and alignment are possible. Last but not least, the DPX specification allows for a wide variety of metadata to further clarify information stored (and storable) within each file.

The DPX file format was originally derived from the Kodak Cineon open file format (.cin file extension) used for digital images generated by Kodak's original film scanner. The original DPX (version 1.0) specifications are part of SMPTE 268M-1994.[3] The specification was later improved, and its latest version (2.0) is published by SMPTE as ANSI/SMPTE 268M-2003.

17
Q

aspect ratio

A

The aspect ratio of an image describes the proportional relationship between its width and its height.

It is commonly expressed as two numbers separated by a colon, as in 16:9. For an x:y aspect ratio, no matter how big or small the image is, if the width is divided into x units of equal length and the height is measured using this same length unit, the height will be measured to be y units. For example, consider a group of images, all with an aspect ratio of 16:9. One image is 16 inches wide and 9 inches high. Another image is 16 centimeters wide and 9 centimeters high. A third is 8 yards wide and 4.5 yards high.

The most common aspect ratios used today in the presentation of films in cinemas are 1.85:1 and 2.39:1.[1] Two common videographic aspect ratios are 4:3 (1.33:1),[a] the universal video format of the 20th century, and 16:9 (1.77:1), universal for high-definition television and European digital television. Other cinema and video aspect ratios exist, but are used infrequently.

In still camera photography, the most common aspect ratios are 4:3, 3:2, and more recently being found in consumer cameras 16:9.[2] Other aspect ratios, such as 5:3, 5:4, and 1:1 (square format), are used in photography as well, particularly in medium format and large format.

With television, DVD and Blu-ray Disc, converting formats of unequal ratios is achieved by enlarging the original image to fill the receiving format’s display area and cutting off any excess picture information (zooming and cropping), by adding horizontal mattes (letterboxing) or vertical mattes (pillarboxing) to retain the original format’s aspect ratio, by stretching (hence distorting) the image to fill the receiving format’s ratio, or by scaling by different factors in both directions, possibly scaling by a different factor in the center and at the edges (as in Wide Zoom mode).
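
A worked example of the letterboxing arithmetic: fitting a 2.39:1 film frame onto a 16:9 (1920 × 1080) display with horizontal mattes:

display_w, display_h = 1920, 1080
film_ratio = 2.39

image_h = round(display_w / film_ratio)   # 803 lines of picture
matte_h = (display_h - image_h) // 2      # ~138 black lines top and bottom
print(image_h, matte_h)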

18
Q

image scaling

A

In computer graphics, image scaling is the process of resizing a digital image. Scaling is a non-trivial process that involves a trade-off between efficiency, smoothness and sharpness. With bitmap graphics, as the size of an image is reduced or enlarged, the pixels that form the image become increasingly visible, making the image appear “soft” if pixels are averaged, or jagged if not. With vector graphics the trade-off may be in processing power for re-rendering the image, which may be noticeable as slow re-rendering with still graphics, or slower frame rate and frame skipping in computer animation.

Apart from fitting a smaller display area, image size is most commonly decreased (or subsampled or downsampled) in order to produce thumbnails. Enlarging an image (upsampling or interpolating) is generally common for making smaller imagery fit a bigger screen in fullscreen mode, for example. In “zooming” a bitmap image, it is not possible to discover any more information in the image than already exists, and image quality inevitably suffers. However, there are several methods of increasing the number of pixels that an image contains, which evens out the appearance of the original pixels.
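
A minimal nearest-neighbour scaler sketches the idea: every output pixel samples the closest input pixel, so enlargement simply repeats pixels (the "jagged" case described above):

def scale_nearest(pixels, w, h, new_w, new_h):
    """pixels is a row-major list of length w * h."""
    return [pixels[(y * h // new_h) * w + (x * w // new_w)]
            for y in range(new_h) for x in range(new_w)]

tiny = [1, 2,
        3, 4]                            # a 2 x 2 image
print(scale_nearest(tiny, 2, 2, 4, 4))   # each pixel repeated into a 2 x 2 block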

19
Q

PCM vs LPCM

A

Pulse-code modulation (PCM) is a method used to digitally represent sampled analog signals. It is the standard form of digital audio in computers, Compact Discs, digital telephony and other digital audio applications. In a PCM stream, the amplitude of the analog signal is sampled regularly at uniform intervals, and each sample is quantized to the nearest value within a range of digital steps.

Linear pulse-code modulation (LPCM) is a specific type of PCM where the quantization levels are linearly uniform.[5] This is in contrast to PCM encodings where quantization levels vary as a function of amplitude (as with the A-law algorithm or the μ-law algorithm). Though PCM is a more general term, it is often used to describe data encoded as LPCM.

A PCM stream has two basic properties that determine the stream’s fidelity to the original analog signal: the sampling rate, which is the number of times per second that samples are taken; and the bit depth, which determines the number of possible digital values that can be used to represent each sample.
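
A sketch of those two properties in code: one second of a 1 kHz sine wave sampled at the CD rate and quantized linearly to 16-bit values (i.e. LPCM):

import math

sample_rate = 44100                  # samples per second
bit_depth = 16
max_value = 2**(bit_depth - 1) - 1   # 32767, the largest 16-bit signed value

samples = [round(max_value * math.sin(2 * math.pi * 1000 * n / sample_rate))
           for n in range(sample_rate)]   # one second of audio
print(samples[:5])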

20
Q

square vs. non square pixels

A

Pixel aspect ratio (often abbreviated PAR) is a mathematical ratio that describes how the width of a pixel in a digital image compares to the height of that pixel.

Most digital imaging systems display an image as a grid of tiny, square pixels. However, some imaging systems, especially those that must be compatible with standard-definition television motion pictures, display an image as a grid of rectangular pixels, in which the pixel width and height are different. Pixel Aspect Ratio describes this difference.

Use of pixel aspect ratio mostly involves pictures pertaining to standard-definition television and some other exceptional cases. Most other imaging systems, including those that comply with SMPTE standards and practices, use square pixels.

Issues of non-square pixels
Directly mapping an image with a certain pixel aspect ratio on a device whose pixel aspect ratio is different makes the image look unnaturally stretched or squashed in either the horizontal or vertical direction. For example, a circle generated for a computer display with square pixels looks like a vertical ellipse on a standard-definition NTSC television that uses vertically rectangular pixels. This issue is more evident on wide-screen TVs.

Pixel Aspect Ratio must be taken into consideration by video editing software products that edit video files with non-square pixels, especially when mixing video clips with different pixel aspect ratios. This would be the case when creating a video montage from various cameras employing different video standards (a relatively rare situation). Special effects software products must also take the pixel aspect ratio into consideration, since some special effects require calculation of the distances from a certain point so that they look visually correct. An example of such effects would be radial blur, motion blur, or even a simple image rotation.

Use of pixel aspect ratio
Pixel aspect ratio value is used mainly in digital video software, where motion pictures must be converted or reconditioned to use video systems other than the original. The video player software may use pixel aspect ratio to properly render digital video on screen. Video editing software uses Pixel Aspect Ratio to properly scale and render a video into a new format.
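
To render non-square pixels on a square-pixel display, the stored width is scaled by the pixel aspect ratio; a sketch, using the common 10:11 PAR of 4:3 NTSC DV purely as an illustration:

stored_w, stored_h = 720, 480
par = 10 / 11                       # pixel aspect ratio (width : height)

display_w = round(stored_w * par)   # ~655 square pixels wide
print(display_w, stored_h)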

21
Q

AAC

A

Advanced Audio Coding (AAC) is an audio coding standard for lossy digital audio compression. Designed to be the successor of the MP3 format, AAC generally achieves better sound quality than MP3 at similar bit rates.

Filename extensions: .m4a, .m4b, .m4p, .m4v, .m4r, .3gp, .mp4, .aac

AAC has been standardized by ISO and IEC as part of the MPEG-2 and MPEG-4 specifications.[3][4] A part of AAC known as High-Efficiency Advanced Audio Coding (HE-AAC), itself part of MPEG-4 Audio, has also been adopted into digital radio standards like DAB+ and Digital Radio Mondiale, as well as the mobile television standards DVB-H and ATSC-M/H.

AAC supports the inclusion of 48 full-bandwidth (up to 96 kHz) audio channels in one stream, plus 16 low-frequency effects (LFE, limited to 120 Hz) channels, up to 16 "coupling" or dialog channels, and up to 16 data streams. The quality for stereo is satisfactory for modest requirements at 96 kbit/s in joint-stereo mode; however, hi-fi transparency demands data rates of at least 128 kbit/s (VBR). The MPEG-2 audio tests showed that AAC meets the requirements referred to as "transparent" for the ITU at 128 kbit/s for stereo, and 320 kbit/s for 5.1 audio.

AAC is the default or standard audio format for YouTube, iPhone, iPod, iPad, Nintendo DSi, Nintendo 3DS, iTunes, DivX Plus Web Player and PlayStation 3. It is supported on PlayStation Vita, Wii (with the Photo Channel 1.1 update installed), Sony Walkman MP3 series and later, Sony Ericsson; Nokia, Android, BlackBerry, and webOS-based mobile phones, with the use of a converter. AAC also continues to enjoy increasing adoption by manufacturers of in-dash car audio systems.
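
File size at a given bit rate is simple arithmetic; for example, a 4-minute stereo track at the 128 kbit/s rate mentioned above:

bitrate = 128_000               # bits per second
seconds = 4 * 60
size_bytes = bitrate * seconds // 8
print(size_bytes / 1_000_000)   # ~3.84 MB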

22
Q

AC3

A

Dolby Digital audio compression

23
Q

AIFF

A

Audio Interchange File Format (AIFF) is an audio file format standard used for storing sound data for personal computers and other electronic audio devices. The format was developed by Apple Inc. in 1988 based on Electronic Arts’ Interchange File Format (IFF, widely used on Amiga systems) and is most commonly used on Apple Macintosh computer systems.

Filename extensions: .aiff, .aif, .aifc

The audio data in a standard AIFF file is uncompressed pulse-code modulation (PCM). There is also a compressed variant of AIFF known as AIFF-C or AIFC, with various defined compression codecs.

Unlike the better-known lossy MP3 format, AIFF is uncompressed (which aids rapid streaming of multiple audio files from disk to the application) and lossless. Like any uncompressed, lossless format, it uses much more disk space than MP3—about 10 MB for one minute of stereo audio at a sample rate of 44.1 kHz and a bit depth of 16 bits. In addition to audio data, AIFF can include loop point data and the musical note of a sample, for use by hardware samplers and musical applications.

The file extension for the standard AIFF format is .aiff or .aif. For the compressed variants it is supposed to be .aifc, but .aiff or .aif are accepted as well by audio applications supporting the format.
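
The "about 10 MB per minute" figure follows directly from the PCM parameters; a quick check:

# sample rate x channels x bytes per sample x seconds
bytes_per_minute = 44100 * 2 * (16 // 8) * 60
print(bytes_per_minute)   # 10,584,000 bytes, roughly 10 MB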

24
Q

CAF

A

The Core Audio Format is a container for storing audio, developed by Apple Inc. It is compatible with Mac OS X 10.4 and higher; Mac OS X 10.3 needs QuickTime 7 to be installed.[1]

Filename extension: .caf

Core Audio Format is designed to overcome limitations of older digital audio formats, including AIFF and WAV. Just like the QuickTime .mov container, a .caf container can contain many different audio formats, metadata tracks, and much more data. Not limited to a 4 GB file size like older digital audio formats, a single .caf file can theoretically save hundreds of years of recorded audio due to its use of 64-bit file offsets.[2]

Soundtrack Pro and Logic Studio use the .caf format extensively for their loop and sound effects library, particularly for surround-sound audio compressed with the Apple Lossless codec.

25
Q

MP3

A

Not to be confused with MPEG-3.

Filename extension: .mp3[1]

MPEG-1 or MPEG-2 Audio Layer III,[4] more commonly referred to as MP3, is an audio coding format for digital audio which uses a form of lossy data compression. It is a common audio format for consumer audio streaming or storage, as well as a de facto standard of digital audio compression for the transfer and playback of music on most digital audio players.

MP3 is an audio-specific format that was designed by the Moving Picture Experts Group (MPEG) as part of its MPEG-1 standard and later extended in the MPEG-2 standard. The first MPEG audio subgroup was formed by several teams of engineers at Fraunhofer IIS, the University of Hannover, AT&T Bell Labs, Thomson-Brandt, CCETT, and others.[7] MPEG-1 Audio (MPEG-1 Part 3), which included MPEG-1 Audio Layer I, II and III, was approved as a committee draft of the ISO/IEC standard in 1991,[8][9] finalised in 1992[10] and published in 1993 (ISO/IEC 11172-3:1993[5]). Backwards-compatible MPEG-2 Audio (MPEG-2 Part 3), with additional bit rates and sample rates, was published in 1995 (ISO/IEC 13818-3:1995).[6][11]

The use in MP3 of a lossy compression algorithm is designed to greatly reduce the amount of data required to represent the audio recording and still sound like a faithful reproduction of the original uncompressed audio for most listeners. An MP3 file that is created using the setting of 128 kbit/s will result in a file that is about 1/11 the size[note 1] of the CD file created from the original audio source. An MP3 file can also be constructed at higher or lower bit rates, with higher or lower resulting quality.

The compression works by reducing accuracy of certain parts of sound that are considered to be beyond the auditory resolution ability of most people. This method is commonly referred to as perceptual coding.[13] It uses psychoacoustic models to discard or reduce precision of components less audible to human hearing, and then records the remaining information in an efficient manner.
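
The "about 1/11 the size" claim checks out against the CD bit rate (44,100 samples/s × 16 bits × 2 channels):

cd_bitrate = 44100 * 16 * 2       # 1,411,200 bits per second on an audio CD
mp3_bitrate = 128_000             # the 128 kbit/s setting discussed above
print(cd_bitrate / mp3_bitrate)   # ~11.0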

26
Q

MPEG-3

A

Not to be confused with MP3.
MPEG-3 is the designation for a group of audio and video coding standards agreed upon by the Moving Picture Experts Group (MPEG) designed to handle HDTV signals at 1080p[1] in the range of 20 to 40 megabits per second.[2] MPEG-3 was launched as an effort to address the need of an HDTV standard while work on MPEG-2 was underway, but it was soon discovered that MPEG-2, at high data rates, would accommodate HDTV.[3] Thus, in 1992[4] HDTV was included as a separate profile in the MPEG-2 standard and MPEG-3 was rolled into MPEG-2.[5]

27
Q

WAV

A

Waveform Audio File Format (WAVE, or more commonly known as WAV due to its filename extension)[3][6][7][8] (rarely, Audio for Windows[9]) is a Microsoft and IBM audio file format standard for storing an audio bitstream on PCs. It is an application of the Resource Interchange File Format (RIFF) bitstream format method for storing data in “chunks”, and thus is also close to the 8SVX and the AIFF format used on Amiga and Macintosh computers, respectively. It is the main format used on Windows systems for raw and typically uncompressed audio. The usual bitstream encoding is the linear pulse-code modulation (LPCM) format.

Both WAVs and AIFFs are compatible with Windows, Macintosh, and Linux operating systems. The format takes into account some differences of the Intel CPU such as little-endian byte order. The RIFF format acts as a “wrapper” for various audio coding formats.

Though a WAV file can contain compressed audio, the most common WAV audio format is uncompressed audio in the linear pulse code modulation (LPCM) format. LPCM is also the standard audio coding format for audio CDs, which store two-channel LPCM audio sampled 44,100 times per second with 16 bits per sample. Since LPCM is uncompressed and retains all of the samples of an audio track, professional users or audio experts may use the WAV format with LPCM audio for maximum audio quality. WAV files can also be edited and manipulated with relative ease using software.

The WAV format supports compressed audio, using, on Windows, the Audio Compression Manager. Any ACM codec can be used to compress a WAV file. The user interface (UI) for Audio Compression Manager may be accessed through various programs that use it, including Sound Recorder in some versions of Windows.

Beginning with Windows 2000, a WAVE_FORMAT_EXTENSIBLE header was defined which specifies multiple audio channel data along with speaker positions, eliminates ambiguity regarding sample types and container sizes in the standard WAV format and supports defining custom extensions to the format chunk.[4][5][10]

There are some inconsistencies in the WAV format: for example, 8-bit data is unsigned while 16-bit data is signed, and many chunks duplicate information found in other chunks.

A Wave file is an audio file format, created by Microsoft, that has become a standard PC audio file format for everything from system and game sounds to CD-quality audio. A Wave file is identified by a file name extension of WAV (.wav). Used primarily in PCs, the Wave file format has been accepted as a viable interchange medium for other computer platforms, such as Macintosh. This allows content developers to freely move audio files between platforms for processing, for example.

In addition to the uncompressed raw audio data, the Wave file format stores information about the file’s number of tracks (mono or stereo), sample rate, and bit depth.
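
A sketch of the format in practice, using Python's standard-library wave module to write and re-read one second of 16-bit stereo LPCM silence:

import wave

with wave.open("silence.wav", "wb") as f:
    f.setnchannels(2)        # stereo
    f.setsampwidth(2)        # 2 bytes = 16 bits per sample
    f.setframerate(44100)    # CD sample rate
    f.writeframes(b"\x00" * (44100 * 2 * 2))   # one second of silence

with wave.open("silence.wav", "rb") as f:
    print(f.getnframes(), f.getframerate())    # 44100 44100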

28
Q

display resolution

A

The display resolution of a digital television, computer monitor or display device is the number of distinct pixels in each dimension that can be displayed. It can be an ambiguous term especially as the displayed resolution is controlled by different factors in cathode ray tube (CRT), flat-panel display which includes liquid-crystal displays, or projection displays using fixed picture-element (pixel) arrays.

It is usually quoted as width × height, with the units in pixels: for example, “1024 × 768” means the width is 1024 pixels and the height is 768 pixels. This example would normally be spoken as “ten twenty-four by seven sixty-eight” or “ten twenty-four by seven six eight”.

One use of the term “display resolution” applies to fixed-pixel-array displays such as plasma display panels (PDPs), liquid-crystal displays (LCDs), digital light processing (DLP) projectors, or similar technologies, and is simply the physical number of columns and rows of pixels creating the display (e.g. 1920 × 1080). A consequence of having a fixed-grid display is that, for multi-format video inputs, all displays need a “scaling engine” (a digital video processor that includes a memory array) to match the incoming picture format to the display.

Note that for broadcast television standards the use of the word resolution here is a misnomer, though common. The term “display resolution” is usually used to mean pixel dimensions, the number of pixels in each dimension (e.g. 1920 × 1080), which does not tell anything about the pixel density of the display on which the image is actually formed: broadcast television resolution properly refers to the pixel density, the number of pixels per unit distance or area, not total number of pixels. In digital measurement, the display resolution would be given in pixels per inch. In analog measurement, if the screen is 10 inches high, then the horizontal resolution is measured across a square 10 inches wide. This is typically stated as “lines horizontal resolution, per picture height;”[1] for example, analog NTSC TVs can typically display about 340 lines of “per picture height” horizontal resolution from over-the-air sources, which is equivalent to about 440 total lines of actual picture information from left edge to right edge.
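
In code, pixel density in pixels per inch follows from the pixel dimensions and the physical diagonal; for example, a 20-inch 1680 × 1050 screen:

from math import hypot

w, h, diagonal_inches = 1680, 1050, 20
print(hypot(w, h) / diagonal_inches)   # ~99.06 pixels per inch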

Some commentators also use display resolution to indicate a range of input formats that the display's input electronics will accept, and often include formats greater than the screen's native grid size even though they have to be down-scaled to match the screen's parameters (e.g. accepting a 1920 × 1080 input on a display with a native 1366 × 768 pixel array). In the case of television inputs, many manufacturers will take the input and zoom it out to "overscan" the display by as much as 5%, so input resolution is not necessarily display resolution.

The eye’s perception of display resolution can be affected by a number of factors – see image resolution and optical resolution. One factor is the display screen’s rectangular shape, which is expressed as the ratio of the physical picture width to the physical picture height. This is known as the aspect ratio. A screen’s physical aspect ratio and the individual pixels’ aspect ratio may not necessarily be the same. An array of 1280 × 720 on a 16:9 display has square pixels, but an array of 1024 × 768 on a 16:9 display has rectangular pixels.

An example of pixel shape affecting “resolution” or perceived sharpness: displaying more information in a smaller area using a higher resolution makes the image much clearer or “sharper”. However, most recent screen technologies are fixed at a certain resolution; making the resolution lower on these kinds of screens will greatly decrease sharpness, as an interpolation process is used to “fix” the non-native resolution input into the display’s native resolution output.

While some CRT-based displays may use digital video processing that involves image scaling using memory arrays, ultimately “display resolution” in CRT-type displays is affected by different parameters such as spot size and focus, astigmatic effects in the display corners, the color phosphor pitch shadow mask (such as Trinitron) in color displays, and the video bandwidth.

Overscan and underscan

Most television display manufacturers "overscan" the pictures on their displays (CRTs, PDPs, LCDs, etc.), so that the effective on-screen picture may be reduced from 720 × 576 (480) to 680 × 550 (450), for example. The size of the invisible area somewhat depends on the display device. HD televisions do this as well, to a similar extent.

Computer displays including projectors generally do not overscan although many models (particularly CRT displays) allow it. CRT displays tend to be underscanned in stock configurations, to compensate for the increasing distortions at the corners.

Current standards
Televisions
Televisions are of the following resolutions:

Standard-definition television (SDTV):
480i (NTSC-compatible digital standard employing two interlaced fields of 243 lines each)
576i (PAL-compatible digital standard employing two interlaced fields of 288 lines each)
Enhanced-definition television (EDTV):
480p (720 × 480 progressive scan)
576p (720 × 576 progressive scan)
High-definition television (HDTV):
720p (1280 × 720 progressive scan)
1080i (1920 × 1080 split into two interlaced fields of 540 lines)
1080p (1920 × 1080 progressive scan)
Ultra-high-definition television (UHDTV)
2160p (3840 × 2160 progressive scan)
4320p (7680 × 4320 progressive scan)
8640p (15360 × 8640 progressive scan)
Computer monitors
Computer monitors have traditionally possessed higher resolutions than most televisions. As of July 2002, 1024 × 768 Extended Graphics Array was the most common display resolution.[2][3] Many web sites and multimedia products were re-designed from the previous 800 × 600 format to the layouts optimized for 1024 × 768.

The availability of inexpensive LCD monitors has made the 5:4 aspect ratio resolution of 1280 × 1024 more popular for desktop usage. Many computer users including CAD users, graphic artists and video game players run their computers at 1600 × 1200 resolution (UXGA) or higher if they have the necessary equipment. Other recently available resolutions include oversize aspects like 1400 × 1050 SXGA+ and wide aspects like 1280 × 800 WXGA, 1440 × 900 WXGA+, 1680 × 1050 WSXGA+, and 1920 × 1200 WUXGA; monitors built to the 720p and 1080p standard are also not unusual among home media and video game players, due to the perfect screen compatibility with movie and video game releases. A new more-than-HD resolution of 2560 × 1600 WQXGA was released in 30-inch LCD monitors in 2007. In 2010, 27-inch LCD monitors with the resolution 2560 × 1440 were released by multiple manufacturers including Apple,[4] and in 2012 Apple introduced a 2880 × 1800 display on the MacBook Pro.[5] Panels for professional environments, such as medical use and air traffic control, support resolutions up to 4096 × 2160.

When a computer display resolution is set higher than the physical screen resolution (native resolution), some video drivers make the virtual screen scrollable over the physical screen thus realizing a two dimensional virtual desktop with its viewport. Most LCD manufacturers do make note of the panel’s native resolution as working in a non-native resolution on LCDs will result in a poorer image, due to dropping of pixels to make the image fit (when using DVI) or insufficient sampling of the analog signal (when using VGA connector). Few CRT manufacturers will quote the true native resolution, because CRTs are analog in nature and can vary their display from as low as 320 × 200 (emulation of older computers or game consoles) to as high as the internal board will allow, or the image becomes too detailed for the vacuum tube to recreate (i.e., analog blur). Thus, CRTs provide a variability in resolution that fixed resolution LCDs cannot provide.

In recent years the 16:9 aspect ratio has become more common in notebook displays. 1366 × 768 (HD) has become popular for most notebook sizes, while 1600 × 900 (HD+) and 1920 × 1080 (FHD) are available for larger notebooks.

As far as digital cinematography is concerned, video resolution standards depend first on the frames' aspect ratio in the film stock (which is usually scanned for digital intermediate post-production) and then on the actual point count. Although there is not a unique set of standardized sizes, it is commonplace within the motion picture industry to refer to "nK" image "quality", where n is a (small, usually even) integer number which translates into a set of actual resolutions, depending on the film format. As a reference consider that, for a 4:3 (around 1.33:1) aspect ratio which a film frame (no matter what its format is) is expected to horizontally fit in, n is the multiplier of 1024 such that the horizontal resolution is exactly 1024•n points. For example, 2K reference resolution is 2048 × 1536 pixels, whereas 4K reference resolution is 4096 × 3072 pixels. Nevertheless, 2K may also refer to resolutions like 2048 × 1556 (full-aperture), 2048 × 1152 (HDTV, 16:9 aspect ratio) or 2048 × 872 pixels (Cinemascope, 2.35:1 aspect ratio). It is also worth noting that while a frame resolution may be, for example, 3:2 (720 × 480 NTSC), that is not what you will see on-screen (i.e. 4:3 or 16:9 depending on the orientation of the rectangular pixels).

Evolution of standards

Many personal computers introduced in the late 1970s and the 1980s were designed to use television receivers as their display devices, making the resolutions dependent on the television standards in use, including PAL and NTSC. Picture sizes were usually limited to ensure the visibility of all the pixels in the major television standards and the broad range of television sets with varying amounts of overscan. The actual drawable picture area was, therefore, somewhat smaller than the whole screen, and was usually surrounded by a static-colored border (see image to right). Also, the interlace scanning was usually omitted in order to provide more stability to the picture, effectively halving the vertical resolution in progress. 160 × 200, 320 × 200 and 640 × 200 on NTSC were relatively common resolutions in the era (224, 240 or 256 scanlines were also common). In the IBM PC world, these resolutions came to be used by 16-color EGA video cards.

One of the drawbacks of using a classic television is that the computer display resolution is higher than the television could decode. Chroma resolution for NTSC/PAL televisions is bandwidth-limited to a maximum of 1.5 megahertz, or approximately 160 pixels wide, which led to blurring of the color for 320- or 640-wide signals, and made text difficult to read (see second image to right). Many users upgraded to higher-quality televisions with S-Video or RGBI inputs that helped eliminate chroma blur and produce more legible displays. The earliest, lowest-cost solution to the chroma problem was offered in the Atari 2600 Video Computer System and the Apple II+, both of which offered the option to disable the color and view a legacy black-and-white signal. On the Commodore 64, the GEOS mirrored the Mac OS method of using black-and-white to improve readability.

The 640 × 400i resolution (720 × 480i with borders disabled) was first introduced by home computers such as the Commodore Amiga and, later, Atari Falcon. These computers used interlace to boost the maximum vertical resolution. These modes were only suited to graphics or gaming, as the flickering interlace made reading text in word processor, database, or spreadsheet software difficult. (Modern game consoles solve this problem by pre-filtering the 480i video to a lower resolution. For example, Final Fantasy XII suffers from flicker when the filter is turned off, but stabilizes once filtering is restored. The computers of the 1980s lacked sufficient power to run similar filtering software.)

The advantage of a 720 × 480i overscanned computer was an easy interface with interlaced TV production, leading to the development of Newtek’s Video Toaster. This device allowed Amigas to be used for CGI creation in various news departments (example: weather overlays), drama programs such as NBC’s seaQuest, WB’s Babylon 5, and early computer-generated animation by Disney for The Little Mermaid, Beauty and the Beast, and Aladdin.

In the PC world, the IBM PS/2 VGA (multi-color) on-board graphics chips used a non-interlaced (progressive) 640 × 480 × 16 color resolution that was easier to read and thus more useful for office work. It was the standard resolution from 1990 to around 1996. The standard resolution was 800 × 600 until around 2000. Microsoft Windows XP, released in 2001, was designed to run at 800 × 600 minimum, although it is possible to select the original 640 × 480 in the Advanced Settings window.

Programs designed to mimic older hardware such as Atari, Sega, or Nintendo game consoles (emulators) when attached to multiscan CRTs, routinely use much lower resolutions, such as 160 × 200 or 320 × 400 for greater authenticity, though other emulators have taken advantage of pixelation recognition on circle, square, triangle and other geometric features on a lesser resolution for a more scaled vector rendering.

Commonly used
The list of common display resolutions article lists the most commonly used display resolutions for computer graphics, television, films, and video conferencing.

29
Q

alpha channel

A

PNG, BMP, and PSD file formats actually support alpha channels. Alpha channels are masks through which you can display images. The alpha channel is an 8-bit channel, which means it has 256 levels of gray from 0 (black) to 255 (white). White acts as the visible area; black acts as the transparent area (you see the background behind the image when displayed). The level of gray in between determines the level of visibility. For example, 50 percent gray allows for 50 percent visibility. Alpha channels are usually used with 16.8M-color RGB images. The resulting image is called RGBA (RGB + A, where A is the alpha channel).
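
A minimal sketch of that compositing rule: the 8-bit alpha value linearly blends each pixel between foreground and background:

def composite(fg, bg, alpha):
    # fg and bg are (r, g, b) tuples; alpha runs 0 (transparent) to 255 (opaque)
    a = alpha / 255
    return tuple(round(a * f + (1 - a) * b) for f, b in zip(fg, bg))

print(composite((255, 0, 0), (0, 0, 255), 128))   # ~50% red over blue -> (128, 0, 127)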

30
Q

HTML

A

HyperText Markup Language: a set of standards, a variety of SGML, used to tag the elements of a hypertext document. It is the standard markup language for formatting and displaying documents on the World Wide Web.

31
Q

http

A

Hypertext Transfer Protocol: the standard protocol for transferring hypertext documents on the World Wide Web
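
A minimal exchange over the protocol, sketched with Python's standard-library http.client (example.com is a placeholder host, and a live network connection is assumed):

from http.client import HTTPConnection

conn = HTTPConnection("example.com")
conn.request("GET", "/index.html")       # ask the server for a hypertext document
response = conn.getresponse()
print(response.status, response.reason)  # e.g. 200 OK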

32
Q

Jump to next / jump to previous tab in Safari

A

⌘ + Shift + Right Arrow/Left Arrow

33
Q

How to make a degrees symbol

A

Shift-Option-8

34
Q

How to make an upside-down exclamation mark

A

Option-1

35
Q

How to make an upside-down question mark

A

Shift-Option-?

36
Q

How to make an umlaut

A

Option+U, V (the letter you’re putting an umlaut over)

37
Q

How to make a tilde

A

Option+N, V (the letter you’re putting a tilde over)

38
Q

How to make a grave sign

A

Option+`, V (the letter you’re putting a grave sign over)

39
Q

How to make a circumflex

A

Option+I, V (the letter you’re putting a circumflex over)

40
Q

How to make an acute sign

A

Option+E, V (the letter you’re putting an acute sign over)
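
For reference, the characters these Option-key combinations compose can be checked on any platform by their Unicode code points; a small Python sketch:

import unicodedata

for ch in "°¡¿üñèâé":
    print(ch, hex(ord(ch)), unicodedata.name(ch))

# ° 0xb0 DEGREE SIGN
# ¡ 0xa1 INVERTED EXCLAMATION MARK
# ...and so on for the accented letters.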