Monday, April 28, 2008

Transcode

Transcoding is the direct digital-to-digital conversion from one (usually lossy) codec to another. It involves decoding/decompressing the original data to a raw intermediate format (e.g., PCM for audio or YUV for video), in a way that mimics standard playback of the lossy content, and then re-encoding this into the target format. The simplest way to transcode is to decode a bitstream into YUV format using a compatible decoder and then encode the data using an encoder of a different standard. A better way is to change the bitstream format from one standard to another without it undergoing the complete decoding and encoding process; many algorithms exist to achieve this.
Transrating is a process similar to transcoding in which files are coded to a lower bitrate without changing video formats. The need for transrating arises because bitrate requirements vary from channel to channel and across the many compression standards in use. Changing the picture size of a video is known as transsizing.
Compression artifacts are cumulative; therefore transcoding between lossy codecs causes a progressive loss of quality with each successive generation. For this reason, it is generally discouraged unless unavoidable. For instance, if an individual owns a digital audio player that does not support a particular format (e.g., Apple iPod and Ogg Vorbis), then the only way for the owner to use content encoded in that format is to transcode it to a supported format. It is better to retain a copy in a lossless format (such as TTA, FLAC or WavPack), and then encode directly from the lossless source file to the lossy formats required.
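The cumulative nature of these artifacts can be illustrated with a toy model in Python. Here each "codec" is modelled as a simple scalar quantizer with its own step size (a crude stand-in for real lossy compression, used only for illustration), and encoding directly from the source is compared against transcoding through an intermediate codec:

```python
# Toy model of generation loss: each "codec" is a scalar quantizer with
# its own step size (a crude stand-in for real lossy compression).

def quantize(samples, step):
    """Encode+decode through a lossy 'codec' with the given step size."""
    return [round(s / step) * step for s in samples]

def mean_abs_error(a, b):
    return sum(abs(x - y) for x, y in zip(a, b)) / len(a)

# A smooth "PCM" source signal.
source = [i / 1000 for i in range(1000)]

codec_a = lambda s: quantize(s, 0.25)  # coarse lossy codec
codec_b = lambda s: quantize(s, 0.10)  # finer lossy codec

direct = codec_b(source)               # encode straight from the lossless source
transcoded = codec_b(codec_a(source))  # decode codec A output, re-encode with B

print(mean_abs_error(direct, source))      # small error: one lossy generation
print(mean_abs_error(transcoded, source))  # larger: errors from both codecs add
```

The transcoded chain always carries the errors of both generations, which is why encoding each lossy format directly from a lossless master gives better results.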
Data transformation
Data transformation can be divided into two steps:
data mapping maps data elements from the source to the destination and captures any transformation that must occur
code generation that creates the actual transformation program
Data element to data element mapping is frequently complicated by complex transformations that require one-to-many and many-to-one transformation rules.
The code generation step takes the data element mapping specification and creates an executable program that can be run on a computer system. Code generation can also create transformation in easy-to-maintain computer languages such as Java or XSLT.
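These two steps can be sketched in Python: a declarative mapping specification (data mapping) is turned into an executable transformation function (code generation). The field names and mapping format below are invented for illustration:

```python
# Sketch of data transformation in two steps: a declarative mapping
# specification, and a transform function generated from it.
# All field names here are hypothetical.

# Step 1: data mapping - (source fields) -> destination field, plus a
# transformation. A tuple of several sources expresses a many-to-one rule.
MAPPING = {
    "full_name": (("first", "last"), lambda f, l: f + " " + l),  # many-to-one
    "email": (("email",), str.lower),                            # one-to-one
}

# Step 2: code generation - build an executable transform from the spec.
def generate_transform(mapping):
    def transform(record):
        out = {}
        for dst, (srcs, fn) in mapping.items():
            out[dst] = fn(*(record[s] for s in srcs))
        return out
    return transform

transform = generate_transform(MAPPING)
print(transform({"first": "Ada", "last": "Lovelace", "email": "ADA@EXAMPLE.ORG"}))
```

A real code generator would emit source text in a language such as Java or XSLT rather than a closure, but the separation of mapping spec from executable transform is the same.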
When the mapping is indirect via a mediating data model, the process is also called data mediation. There are numerous languages available for performing data transformation, varying in their accessibility (cost) and general usefulness. Many transformational languages require a grammar to be provided; in many cases the grammar is structured using something closely resembling Backus–Naur Form (BNF). Examples of such languages include:
XSLT - the XML transformation language
TXL - prototyping language-based descriptions using source transformation
Though transformational languages are typically best suited for transformation, something as simple as regular expressions can achieve useful transformations. TextPad, for example, supports regular expressions with arguments, allowing all instances of a particular pattern to be replaced with another pattern built from parts of the original. In this way, all instances of an invocation of a function foo with three arguments could be replaced with a single function invocation using some or all of the original arguments. Another advantage of regular expressions is that they pass the null-transform test: run a sample program through a transformation that performs no changes, and the output should match the input exactly; many transformational languages fail this test. A fully general solution for handling preprocessed languages is very hard, because preprocessor directives can essentially edit the underlying language in arbitrary ways. However, because such directives are not, in practice, used in completely arbitrary ways, one can build practical tools for handling preprocessed languages; the DMS Software Reengineering Toolkit, for instance, is capable of handling structured macros and preprocessor conditionals.
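As a sketch of the regular-expression approach (using Python's re module rather than TextPad), every call of a hypothetical function foo with three arguments can be rewritten into a call of bar that reuses only the first and third arguments via numbered groups:

```python
import re

# Rewrite every call of the form foo(a, b, c) into bar(a, c), reusing
# parts of the matched pattern via backreferences \1 and \3.
# The function names foo/bar are hypothetical; the technique is the point.
source = "x = foo(1, 2, 3); y = foo(a, b, c)"

pattern = r"foo\(\s*([^,()]+)\s*,\s*([^,()]+)\s*,\s*([^,()]+)\s*\)"
result = re.sub(pattern, r"bar(\1, \3)", source)

print(result)  # x = bar(1, 3); y = bar(a, c)
```

Note the null-transform property: substituting the match with `\g<0>` (the whole match) would reproduce the input byte for byte. This simple pattern does not handle nested parentheses, which is exactly where a grammar-driven transformational language earns its keep.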

Monday, April 21, 2008

Multimeter

A multimeter or multitester, also known as a volt/ohm meter or VOM, is an electronic measuring instrument that combines several functions in one unit. A standard multimeter may include features such as the ability to measure voltage, current and resistance. There are two categories of multimeters: analog multimeters and digital multimeters (often abbreviated DMM).
A multimeter can be a hand-held device useful for basic fault finding and field service work or a bench instrument which can measure to a very high degree of accuracy. They can be used to troubleshoot electrical problems in a wide array of industrial and household devices such as batteries, motor controls, appliances, power supplies, and wiring systems.
Multimeters are available in a wide range of features and prices. Cheap multimeters can cost less than US$10, while top-of-the-line multimeters can cost more than US$5,000.
The resolution of a multimeter is often specified in "digits" of resolution. For example, the term 5½ digits refers to the number of digits displayed on the readout of a multimeter.
By convention, a half digit can display either a zero or a one, while a three-quarters digit can display a numeral higher than one but lower than nine; commonly, a three-quarters digit refers to a maximum leading digit of 3 or 5. The fractional digit is always the most significant digit in the displayed value. A 5½-digit multimeter has five full digits that display values from 0 to 9 and one half digit that can display only 0 or 1; such a meter can show positive or negative values from 0 to 199,999. A 3¾-digit meter can display a quantity from 0 to 3,999 or 5,999, depending on the manufacturer.
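Under the convention above, the maximum count implied by a digit specification can be computed directly; a small sketch:

```python
# Maximum display count implied by a multimeter's digit specification.
# A half digit shows at most 1; a three-quarters digit commonly shows
# at most 3 or 5 as the leading digit.

def max_count(full_digits, leading_max):
    """Leading fractional digit up to leading_max, then full 0-9 digits."""
    return leading_max * 10 ** full_digits + (10 ** full_digits - 1)

print(max_count(5, 1))  # 5 1/2 digits -> 199999
print(max_count(3, 3))  # 3 3/4 digits -> 3999
print(max_count(3, 5))  # 3 3/4 digits (other convention) -> 5999
```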
Resolution of analog multimeters is limited by the width of the scale pointer, vibration of the pointer, parallax observation errors, and the accuracy of printing of scales. Resistance measurements, in particular, are of low precision due to the typical resistance measurement circuit which compresses the scale at the higher resistance values. Mirrored scales and larger meter movements are used to improve resolution; two and a half to three digits equivalent resolution is usual (and may be adequate for the limited precision actually necessary for most measurements).
While a digital display can easily be extended in precision, the extra digits are of no value if not accompanied by care in the design and calibration of the analog portions of the multimeter. Meaningful high-resolution measurements require a good understanding of the instrument specifications, good control of the measurement conditions, and traceability of the calibration of the instrument.
Digital multimeters generally take measurements with superior accuracy to their analog counterparts. Analog multimeters typically measure with three to five percent accuracy. Standard portable digital multimeters claim to be capable of measuring with an accuracy of 0.5% on DC voltage and current scales. Mainstream bench-top multimeters claim accuracy as good as ±0.01%. Laboratory-grade instruments can have accuracies of a few parts per million.
Manufacturers can provide calibration services so that new meters may be purchased with a certificate of calibration indicating the meter has been adjusted to standards traceable to the National Institute of Standards and Technology. Such manufacturers usually provide calibration services after sales, as well, so that older equipment may be recertified. Multimeters used for critical measurements may be part of a metrology program to assure calibration.
The current load, or how much current is drawn from the circuit being tested, may affect a multimeter's accuracy. A small current draw usually results in more precise measurements. With improper usage or too great a current load, a multimeter may be damaged, rendering its measurements unreliable.
Meters with electronic amplifiers in them, such as all digital multimeters and transistorized analog meters, have a standardized input impedance usually considered high enough not to disturb the circuit tested. This is often one million ohms, or ten million ohms. The standard input impedance allows use of external probes to extend the direct-current measuring range up to tens of thousands of volts.
Analog multimeters of the moving pointer type draw current from the circuit under test to deflect the meter pointer. The impedance of the meter varies depending on the basic sensitivity of the meter movement and the range which is selected. For example, a meter with a 20,000 ohms/volt sensitivity will have an input resistance of two million ohms on the 100 volt range (100 V * 20,000 ohms/volt = 2,000,000 ohms). Low-sensitivity meters are useful for general purpose testing especially in power circuits, where source impedances are low compared to the meter impedance. Measurements in signal circuits generally require higher sensitivity so as not to load down the circuit under test with the meter impedance.
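The worked example above can be written out directly, and the same numbers show how meter loading perturbs a reading in a high-impedance circuit. The source impedance and voltage below are invented for illustration:

```python
# Input resistance of a moving-pointer meter, and the loading error it
# causes. Sensitivity is in ohms/volt; circuit values are illustrative.

def input_resistance(ohms_per_volt, full_scale_volts):
    return ohms_per_volt * full_scale_volts

r_meter = input_resistance(20_000, 100)  # 2,000,000 ohms on the 100 V range
print(r_meter)

# Measuring a 10 V signal behind a 1 Mohm source impedance: the meter's
# input resistance forms a divider with the source, and the reading droops.
r_source = 1_000_000
v_true = 10.0
v_measured = v_true * r_meter / (r_source + r_meter)
print(round(v_measured, 3))  # 6.667 V: a large error in a signal circuit
```

In a power circuit with a source impedance of a few ohms the same meter would read essentially the true voltage, which is why low-sensitivity meters are adequate there.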
The sensitivity of a meter is also a measure of the lowest voltage, current or resistance that can be measured with it. For general-purpose digital multimeters, a full-scale range of several hundred millivolts AC or DC is common, but the minimum full-scale current range may be several hundred milliamps. Since general-purpose multimeters have only two-wire resistance measurements, which do not compensate for the effect of the lead wire resistance, measurements below a few tens of ohms will be of low accuracy. The upper end of multimeter measurement ranges varies considerably by manufacturer; generally measurements over 1000 volts, over 10 amperes, or over 100 megohms would require a specialized test instrument, as would accurate measurement of currents on the order of microamperes or less.
Since the basic indicator system in either an analog or digital meter responds to DC only, a multimeter includes an AC to DC conversion circuit for making alternating current measurements. Basic multimeters may use a rectifier circuit, calibrated to evaluate the average value of a rectified sine wave. User guides for such meters give correction factors for some simple waveforms, to allow the correct root mean square (RMS) equivalent value to be calculated for the average-responding meter. More expensive multimeters include an AC to DC converter that responds to the RMS value of the waveform for a wide range of possible waveforms; the user manual for the meter will indicate the limits of the crest factor and frequency for which the meter calibration is valid. RMS sensing is necessary for measurements of non-sinusoidal quantities, such as those found in audio signals or variable-frequency drives.
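The correction involved can be made concrete. An average-responding meter measures the mean of the rectified waveform and scales it by the sine-wave form factor (RMS divided by mean absolute value, about 1.11), so it reads correctly for sine waves but misreads other waveforms; a sketch:

```python
import math

# An average-responding meter measures the mean of the rectified waveform
# and multiplies by the sine-wave form factor (RMS / mean-absolute),
# so that it reads correctly for sine waves only.
FORM_FACTOR_SINE = math.pi / (2 * math.sqrt(2))  # ~1.111

def average_responding_reading(samples):
    mean_abs = sum(abs(s) for s in samples) / len(samples)
    return FORM_FACTOR_SINE * mean_abs

def true_rms(samples):
    return math.sqrt(sum(s * s for s in samples) / len(samples))

n = 10_000
sine = [math.sin(2 * math.pi * i / n) for i in range(n)]
square = [1.0 if i < n // 2 else -1.0 for i in range(n)]

# Correct for a unit-amplitude sine wave: both values ~0.707...
print(average_responding_reading(sine), true_rms(sine))
# ...but ~11% high for a square wave, whose true RMS equals its peak.
print(average_responding_reading(square), true_rms(square))
```

A true-RMS meter computes the second function directly (within its crest-factor and bandwidth limits) and so agrees with the square-wave result.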
Modern multimeters are often digital due to their accuracy, durability and extra features.
In a DMM the signal under test is converted to a voltage and an amplifier with an electronically controlled gain preconditions the signal.
A DMM displays the quantity measured as a number, which prevents parallax errors.
The inclusion of solid state electronics, from a control circuit to small embedded computers, has provided a wealth of convenience features in modern digital meters. Commonly available measurement enhancements include:
Auto-ranging, which selects the correct range for the quantity under test so that the most significant digits are shown. For example, a four-digit multimeter would automatically select an appropriate range to display 1.234 instead of 0.012 or an overload indication. Auto-ranging meters usually include a facility to 'freeze' the meter to a particular range, because a measurement that causes frequent range changes is distracting to the user.
Auto-polarity for direct-current readings, which shows whether the applied voltage is positive (agreeing with the meter lead labels) or negative (opposite in polarity to the meter leads).
Sample and hold, which will latch the most recent reading for examination after the instrument is removed from the circuit under test.
Current-limited tests for voltage drop across semiconductor junctions. While not a replacement for a transistor tester, this facilitates testing diodes and a variety of transistor types.
A graphic representation of the quantity under test, as a bar graph. This makes go/no-go testing easy, and also allows spotting of fast-moving trends.
A low-bandwidth oscilloscope.
Automotive circuit testers, including tests for automotive timing and dwell signals.
Simple data acquisition features to record maximum and minimum readings over a given period, or to take a number of samples at fixed intervals.
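The auto-ranging feature listed above amounts to picking the smallest full-scale range that can contain the reading; a minimal sketch, with an illustrative range list rather than any specific meter's:

```python
# Minimal auto-ranging logic: choose the smallest full-scale range that
# accommodates the reading, so the most significant digits are shown.
# The range list is illustrative, not taken from a real meter.

RANGES_V = [0.2, 2.0, 20.0, 200.0, 1000.0]  # full-scale volts

def select_range(reading, ranges=RANGES_V):
    for full_scale in ranges:
        if abs(reading) <= full_scale:
            return full_scale
    return None  # beyond the highest range: overload indication

print(select_range(1.234))   # 2.0  -> displays 1.234
print(select_range(0.012))   # 0.2  -> displays 0.012 with full resolution
print(select_range(1500.0))  # None -> overload
```

A real meter adds hysteresis so a reading hovering near a range boundary does not cause the distracting range-flipping mentioned above.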
Modern meters may be interfaced with a personal computer by IrDA links, RS-232 connections, USB, or an instrument bus such as IEEE-488. The interface allows the computer to record measurements as they are made, and some DMMs can store measurements and upload them to a computer. The first digital multimeter was manufactured in 1955 by Non Linear Systems.
A multimeter can utilize a variety of test probes to connect to the circuit or device under test. Crocodile clips, retractable hook clips, and pointed probes are the three most common attachments. The connectors are attached to flexible, thickly-insulated leads that are terminated with connectors appropriate for the meter. Handheld meters typically use shrouded or recessed banana jacks, while benchtop meters may use banana jacks or BNC connectors.
Meters which measure high voltages or currents may use non-contact attachment mechanisms to trade accuracy for safety. Clamp meters provide a coil that clamps around a conductor in order to measure the current flowing through it.
Almost every multimeter includes a fuse, which will generally prevent damage to the multimeter if it is overloaded. A common error when operating a multimeter is to set the meter to measure resistance or current and then connect it directly to a low-impedance voltage source; meters without protection are quickly damaged by such errors and may cause injury to the operator.
Digital meters are category rated based on their intended application, as set forth by the CEN EN61010 standard. There are four categories:
Category I: used where current levels are low.
Category II: used on residential branch circuits.
Category III: used on permanently installed loads such as distribution panels, motors, and appliance outlets.
Category IV: used on locations where current levels are high, such as service entrances, main panels, and house meters.
Each category also specifies maximum transient voltages for selected measuring ranges in the meter. Category-rated meters also feature protection from over-current faults.
Multimeters were invented in the early 1920s as radio receivers and other vacuum tube electronic devices became more common. As modern systems become more complicated, the multimeter is becoming more complex or may be supplemented by more specialized equipment in a technician's toolkit. For example, where a general-purpose multimeter might only test for short-circuits, conductor resistance and some coarse measure of insulation quality, a modern technician may use a hand-held analyzer to test several parameters in order to validate the performance of a network cable.

Monday, April 14, 2008

The Mind

Mind collectively refers to the aspects of intellect and consciousness manifested as combinations of thought, perception, memory, emotion, will and imagination; mind is the stream of consciousness. It includes all of the brain's conscious processes. This denotation sometimes includes, in certain contexts, the working of the human unconscious or the conscious thoughts of animals. "Mind" is often used to refer especially to the thought processes of reason.
There are many theories of the mind and its function. The earliest recorded works on the mind are by Zarathushtra, the Buddha, Plato, Aristotle, Adi Shankara and other ancient Greek, Indian and Islamic philosophers. Pre-scientific theories, based in theology, concentrated on the relationship between the mind and the soul, the supposed supernatural, divine or god-given essence of the person. Modern theories, based on scientific understanding of the brain, theorise that the mind is a phenomenon of the brain and is synonymous with consciousness.
The question of which human attributes make up the mind is also much debated. Some argue that only the "higher" intellectual functions constitute mind: particularly reason and memory. In this view the emotions - love, hate, fear, joy - are more "primitive" or subjective in nature and should be seen as different from the mind. Others argue that the rational and the emotional sides of the human person cannot be separated, that they are of the same nature and origin, and that they should all be considered as part of the individual mind.
In popular usage mind is frequently synonymous with thought: It is that private conversation with ourselves that we carry on "inside our heads." Thus we "make up our minds," "change our minds" or are "of two minds" about something. One of the key attributes of the mind in this sense is that it is a private sphere to which no one but the owner has access. No-one else can "know our mind." They can only know what we communicate.
Mental faculties
Thought is a mental process which allows beings to model the world, and so to deal with it effectively according to their goals, plans, ends and desires. Words referring to similar concepts and processes include cognition, sentience, consciousness, idea, and imagination. Thinking involves the cerebral manipulation of information, as when we form concepts, engage in problem solving, reason and make decisions. Thinking is a higher cognitive function and the analysis of thinking processes is part of cognitive psychology.
Memory is an organism's ability to store, retain, and subsequently recall information. Although traditional studies of memory began in the realms of philosophy, the late nineteenth and early twentieth century put memory within the paradigms of cognitive psychology. In recent decades, it has become one of the principal pillars of a new branch of science called cognitive neuroscience, a marriage between cognitive psychology and neuroscience.
Imagination is accepted as the innate ability and process to invent partial or complete personal realms within the mind from elements derived from sense perceptions of the shared world. The term is technically used in psychology for the process of reviving in the mind percepts of objects formerly given in sense perception. Since this use of the term conflicts with that of ordinary language, some psychologists have preferred to describe this process as "imaging" or "imagery" or to speak of it as "reproductive" as opposed to "productive" or "constructive" imagination. Imagined images are seen with the "mind's eye". One hypothesis for the evolution of human imagination is that it allowed conscious beings to solve problems (and hence increase an individual's fitness) by use of mental simulation.
Consciousness is a quality of the mind generally regarded to comprise qualities such as subjectivity, self-awareness, sentience, sapience, and the ability to perceive the relationship between oneself and one's environment. It is a subject of much research in philosophy of mind, psychology, neuroscience, and cognitive science. Some philosophers divide consciousness into phenomenal consciousness, which is subjective experience itself, and access consciousness, which refers to the global availability of information to processing systems in the brain. Phenomenal consciousness is a state with qualia: phenomenal consciousness is being something, while access consciousness is being conscious of something.
Philosophy of mind
Main article: Philosophy of mind
Philosophy of mind is the branch of philosophy that studies the nature of the mind, mental events, mental functions, mental properties, consciousness and their relationship to the physical body. The mind-body problem, i.e. the relationship of the mind to the body, is commonly seen as the central issue in philosophy of mind, although there are other issues concerning the nature of the mind that do not involve its relation to the physical body. Dualism and monism are the two major schools of thought that attempt to resolve the mind-body problem. Dualism is the position that mind and body are in some way separate from each other. It can be traced back to Plato, Aristotle and the Samkhya and Yoga schools of Hindu philosophy, but it was most precisely formulated by René Descartes in the 17th century. Substance dualists argue that the mind is an independently existing substance, whereas property dualists maintain that the mind is a group of independent properties that emerge from and cannot be reduced to the brain, but that it is not a distinct substance.
Monism is the position that mind and body are not ontologically distinct kinds of entities. This view was first advocated in Western philosophy by Parmenides in the 5th century BC and was later espoused by the 17th-century rationalist Baruch Spinoza. Physicalists argue that only the entities postulated by physical theory exist, and that the mind will eventually be explained in terms of these entities as physical theory continues to evolve. Idealists maintain that the mind is all that exists and that the external world is either mental itself, or an illusion created by the mind. Neutral monists adhere to the position that there is some other, neutral substance, and that both matter and mind are properties of this unknown substance. The most common monisms in the 20th and 21st centuries have all been variations of physicalism; these positions include behaviorism, the type identity theory, anomalous monism and functionalism.
Many modern philosophers of mind adopt either a reductive or non-reductive physicalist position, maintaining in their different ways that the mind is not something separate from the body. These approaches have been particularly influential in the sciences, particularly in the fields of sociobiology, computer science, evolutionary psychology and the various neurosciences. Other philosophers, however, adopt a non-physicalist position which challenges the notion that the mind is a purely physical construct. Reductive physicalists assert that all mental states and properties will eventually be explained by scientific accounts of physiological processes and states. Non-reductive physicalists argue that although the brain is all there is to the mind, the predicates and vocabulary used in mental descriptions and explanations are indispensable, and cannot be reduced to the language and lower-level explanations of physical science. Continued neuroscientific progress has helped to clarify some of these issues. However, they are far from having been resolved, and modern philosophers of mind continue to ask how the subjective qualities and the intentionality (aboutness) of mental states and properties can be explained in naturalistic terms.
Science of mind
Psychology is the scientific study of human behaviour; noology is the study of thought. As both an academic and applied discipline, psychology involves the scientific study of mental processes such as perception, cognition, emotion and personality, as well as environmental influences, such as social and cultural influences, and interpersonal relationships, in order to devise theories of human behaviour. Psychology also refers to the application of such knowledge to various spheres of human activity, including problems of individuals' daily lives and the treatment of mental health problems.
Psychology differs from the other social sciences (e.g., anthropology, economics, political science, and sociology) due to its focus on experimentation at the scale of the individual, as opposed to groups or institutions. Historically, psychology differed from biology and neuroscience in that it was primarily concerned with mind rather than brain, a philosophy of mind known as dualism. Modern psychological science incorporates physiological and neurological processes into its conceptions of perception, cognition, behaviour, and mental disorders.
See also Sigmund Freud, Carl Jung, and the unconscious mind.
A new scientific initiative, the Decade of the Mind, seeks to advocate for the U.S. Government to invest $4 billion over the next ten years in the science of the mind.
Mental health
By analogy with the health of the body, one can speak metaphorically of a state of health of the mind, or mental health. Merriam-Webster defines mental health as "A state of emotional and psychological well-being in which an individual is able to use his or her cognitive and emotional capabilities, function in society, and meet the ordinary demands of everyday life." According to the World Health Organization (WHO), there is no one "official" definition of mental health. Cultural differences, subjective assessments, and competing professional theories all affect how "mental health" is defined. In general, most experts agree that "mental health" and "mental illness" are not opposites. In other words, the absence of a recognized mental disorder is not necessarily an indicator of mental health.
One way to think about mental health is by looking at how effectively and successfully a person functions. Feeling capable and competent; being able to handle normal levels of stress, maintaining satisfying relationships, and leading an independent life; and being able to "bounce back," or recover from difficult situations, are all signs of mental health.
Psychotherapy is an interpersonal, relational intervention used by trained psychotherapists to aid clients in problems of living. This usually includes increasing individual sense of well-being and reducing subjective discomforting experience. Psychotherapists employ a range of techniques based on experiential relationship building, dialogue, communication and behavior change and that are designed to improve the mental health of a client or patient, or to improve group relationships (such as in a family). Most forms of psychotherapy use only spoken conversation, though some also use various other forms of communication such as the written word, art, drama, narrative story, or therapeutic touch. Psychotherapy occurs within a structured encounter between a trained therapist and client(s). Purposeful, theoretically based psychotherapy began in the 19th century with psychoanalysis; since then, scores of other approaches have been developed and continue to be created.

Monday, April 07, 2008

Time-domain reflectometer

In telecommunication, an optical time domain reflectometer (OTDR) is an optoelectronic instrument used to characterize an optical fiber.
An OTDR injects a series of optical pulses into the fiber under test. It also extracts, from the same end of the fiber, light that is scattered back and reflected back from points in the fiber where the index of refraction changes. (This is equivalent to the way that an electronic TDR measures reflections caused by changes in the impedance of the cable under test.) The intensity of the return pulses is measured and integrated as a function of time, and is plotted as a function of fiber length.
An OTDR may be used for estimating the fiber's length and overall attenuation, including splice and mated-connector losses. It may also be used to locate faults, such as breaks.

A time-domain reflectometer (TDR) is an electronic instrument used to characterize and locate faults in metallic cables (for example, twisted wire pairs and coaxial cables) and, in the case of an OTDR, optical fibers. A TDR transmits a fast rise time pulse along the conductor. If the conductor is of a uniform impedance and properly terminated, the entire transmitted pulse will be absorbed in the far-end termination and no signal will be reflected back to the TDR. But where impedance discontinuities exist, each discontinuity will create an echo that is reflected back to the reflectometer (hence the name). Increases in the impedance create an echo that reinforces the original pulse, while decreases in the impedance create an echo that opposes the original pulse. The resulting reflected pulse that is measured at the output/input of the TDR is displayed or plotted as a function of time and, because the speed of signal propagation is relatively constant for a given transmission medium, can be read as a function of cable length. This is similar in principle to radar.
Because of this sensitivity to impedance variations, a TDR may be used to verify cable impedance characteristics, splice and connector locations and associated losses, and estimate cable lengths, as every nonhomogeneity in the impedance of the cable will reflect some signal back in the form of echoes.

Consider the case where the far end of the cable is shorted (that is, it is terminated into zero ohms impedance). When the rising edge of the pulse is launched down the cable, the voltage at the launching point "steps up" to a given value instantly and the pulse begins propagating down the cable towards the short. When the pulse hits the short, no energy is absorbed at the far end. Instead, an opposing pulse reflects back from the short towards the launching end. It is only when this opposing reflection finally reaches the launch point that the voltage at this launching point abruptly drops back to zero, signaling the fact that there is a short at the end of the cable. That is, the TDR has no indication that there is a short at the end of the cable until its emitted pulse can travel down the cable at roughly the speed of light and the echo can return back up the cable at the same speed. It is only after this round-trip delay that the short can be perceived by the TDR. Assuming that one knows the signal propagation speed in the particular cable under test, the distance to the short can be measured in this way.
A similar effect occurs if the far end of the cable is an open circuit (terminated into an infinite impedance). In this case, though, the reflection from the far end is polarized identically with the original pulse and adds to it rather than cancelling it out. So after a round-trip delay, the voltage at the TDR abruptly jumps to twice the originally-applied voltage.
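The behaviour described in the two paragraphs above can be sketched with an idealized model (lossless line, matched source, purely resistive termination; the step amplitude and delays are hypothetical):

```python
# Voltage seen at the TDR launch point after it emits a step, for an
# idealized line. rho is the reflection coefficient of the far-end
# termination: -1 for a short, +1 for an open, 0 for a matched load.
def launch_voltage(v_step, rho, t, round_trip):
    """Voltage at the launch point at time t after launching a step v_step.

    Before the echo returns, only the incident step is visible; afterwards
    the reflection (rho * v_step) is superimposed on it.
    """
    if t < round_trip:
        return v_step
    return v_step * (1 + rho)

# Short circuit: voltage drops to zero after the round-trip delay.
print(launch_voltage(1.0, -1.0, t=2e-9, round_trip=1e-9))  # 0.0
# Open circuit: voltage doubles after the round-trip delay.
print(launch_voltage(1.0, +1.0, t=2e-9, round_trip=1e-9))  # 2.0
```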
Note that a theoretical perfect termination at the far end of the cable would entirely absorb the applied pulse without causing any reflection. In this case, it would be impossible to determine the actual length of the cable. Luckily, perfect terminations are very rare and some small reflection is nearly always caused. (This property was employed by a now-defunct audio cable company to design unusual high-end audio cables, and while those cables can no longer be purchased, the site remains an excellent introduction to the principles of the technology.)
The magnitude of the reflection is referred to as the reflection coefficient, ρ. The coefficient ranges from 1 (open circuit) to -1 (short circuit); a value of zero means that there is no reflection. The reflection coefficient is calculated as follows:

ρ = (Zt - Zo) / (Zt + Zo)

where Zo is the characteristic impedance of the transmission medium and Zt is the impedance of the termination at the far end of the transmission line.
Any discontinuity can be viewed as a termination impedance and substituted as Zt. This includes abrupt changes in the characteristic impedance. As an example, a trace width on a printed circuit board doubled at its midsection would constitute a discontinuity. Some of the energy will be reflected back to the driving source; the remaining energy will be transmitted. This is also known as a scattering junction.
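As a sketch of these relationships (the 50-ohm line is an assumed value, and treating the widened trace as a 25-ohm termination is a rough illustration, not an exact microstrip calculation):

```python
# Reflection coefficient of a termination Zt on a line of characteristic
# impedance Zo: rho = (Zt - Zo) / (Zt + Zo).
def reflection_coefficient(z_t, z_0):
    """rho for a resistive termination; float('inf') models an open circuit."""
    if z_t == float("inf"):
        return 1.0
    return (z_t - z_0) / (z_t + z_0)

print(reflection_coefficient(0.0, 50.0))           # -1.0 (short circuit)
print(reflection_coefficient(float("inf"), 50.0))  # 1.0 (open circuit)
print(reflection_coefficient(50.0, 50.0))          # 0.0 (matched, no echo)
# Doubling a PCB trace width roughly halves its impedance, so the
# discontinuity looks like a ~25-ohm load on a 50-ohm line:
print(reflection_coefficient(25.0, 50.0))          # about -0.33
```

A negative ρ here matches the text: a drop in impedance produces an echo that opposes the original pulse.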
Time domain reflectometers are commonly used for in-place testing of very long cable runs, where it is impractical to dig up or remove what may be a kilometers-long cable. They are indispensable for preventive maintenance of telecommunication lines, as they can reveal growing resistance levels on joints and connectors as they corrode, and increasing insulation leakage as it degrades and absorbs moisture long before either leads to catastrophic failures. Using a TDR, it is possible to pinpoint a fault to within centimetres.
TDRs are also very useful tools for Technical Surveillance Counter-Measures, where they help determine the existence and location of wire taps. The slight change in line impedance caused by the introduction of a tap or splice will show up on the screen of a TDR when connected to a phone line.
TDR equipment is also an essential tool in the failure analysis of today's high-speed printed circuit boards. The signal traces on these boards are carefully crafted to emulate a transmission line. By observing reflections, any unsoldered pins of a ball grid array device can be detected. Additionally, short circuited pins can also be detected in a similar fashion.
The TDR principle is used in industrial settings, in situations as diverse as the testing of integrated circuit packages and the measurement of liquid levels. In the former, the time-domain reflectometer is used to isolate failing sites within the package. The latter is primarily limited to the process industry.
TDR in level measurement
In a TDR-based level measurement device, a low-energy electromagnetic impulse generated by the sensor’s circuitry is propagated along a thin wave guide (also referred to as a probe) – usually a metal rod or a steel cable. When this impulse hits the surface of the medium to be measured, part of the impulse energy is reflected back up the probe to the circuitry which then calculates the fluid level from the time difference between the impulse sent and the impulse reflected (in nanoseconds). The sensors can output the analyzed level as a continuous analog signal or switch output signals. In TDR technology, the impulse velocity is primarily affected by the permittivity of the medium through which the pulse propagates, which can vary greatly by the moisture content and temperature of the medium. In most cases, this can be corrected for without undue difficulty. However, in complex environments, such as in boiling and/or high temperature environments, this can be a significant signal processing dilemma. In particular, determining the froth height and true collapsed liquid level in a frothy / boiling medium can be very difficult.
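The level calculation described above can be sketched as follows (the probe length, measured delay, and the assumption that the pulse travels through air above the fluid are all hypothetical illustration values):

```python
# Fluid level from the round-trip delay of a TDR level sensor's impulse.
# The impulse travels down the probe through the air gap at roughly
# c / sqrt(epsilon_r) and reflects at the fluid surface.
C = 299_792_458.0  # speed of light in vacuum, m/s

def tdr_fluid_level(probe_length_m, delay_s, epsilon_r_above=1.0):
    """Fluid level measured from the bottom of the probe.

    The air gap above the fluid is half the round-trip path; the level is
    the probe length minus that gap.
    """
    v = C / epsilon_r_above ** 0.5
    air_gap = v * delay_s / 2.0
    return probe_length_m - air_gap

# A 10 ns round-trip delay on a 2 m probe leaves an air gap of ~1.5 m,
# i.e. a fluid level of ~0.5 m.
print(tdr_fluid_level(2.0, 10e-9))
```

The dependence of v on permittivity is exactly why the moisture and temperature corrections mentioned above matter: a wrong εr shifts every computed level.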
TDR used in the Earth and Agricultural Sciences
TDR is used to determine moisture content in soil and porous media, where over the last two decades substantial advances have been made; including in soils, grains and food stuffs, and in sediments. The key to TDR’s success is its ability to accurately determine the permittivity (dielectric constant) of a material from wave propagation, and the fact that there is a strong relationship between the permittivity of a material and its water content, as demonstrated in the pioneering works of Hoekstra and Delaney (1974) and Topp et al. (1980). Recent reviews and reference work on the subject include, Topp and Reynolds (1998), Noborio (2001), Pettinellia et al. (2002), Topp and Ferre (2002) and Robinson et al. (2003). The TDR method is a transmission line technique, and determines an apparent TDR permittivity (Ka) from the travel time of an electromagnetic wave that propagates along a transmission line, usually two or more parallel metal rods embedded in a soil or sediment. TDR probes are usually between 10 and 30 cm in length and connected to the TDR via a coaxial cable.
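The two steps described above can be sketched as follows, using the widely quoted empirical polynomial from Topp et al. (1980) to map Ka to volumetric water content (the probe length and travel time are illustrative values only):

```python
# Step 1: apparent permittivity Ka from the two-way travel time of the
# pulse along a probe of length L. Step 2: volumetric water content from
# Ka via the Topp et al. (1980) empirical calibration for mineral soils.
C = 299_792_458.0  # speed of light in vacuum, m/s

def apparent_permittivity(round_trip_time_s, probe_length_m):
    """Ka = (c * t / (2 L))**2 for a two-way travel time t on a probe of length L."""
    return (C * round_trip_time_s / (2.0 * probe_length_m)) ** 2

def topp_water_content(ka):
    """Volumetric water content (m^3/m^3) from Ka, per Topp et al. (1980)."""
    return -5.3e-2 + 2.92e-2 * ka - 5.5e-4 * ka ** 2 + 4.3e-6 * ka ** 3

# A ~4.6 ns two-way travel time on a 0.2 m probe gives Ka near 12,
# i.e. a moist mineral soil with roughly 0.23 m^3/m^3 water content.
print(apparent_permittivity(4.6e-9, 0.2))
print(topp_water_content(12.0))
```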
TDR in Geotechnical Usage
Time Domain Reflectometry (TDR) has also been utilized to monitor slope movement in a variety of geotechnical settings including highway cuts, rail beds, and open pit mines (Dowding & O'Connor, 1984, 2000a, 2000b; Kane & Beck, 1999). In stability monitoring applications using TDR, a coaxial cable is installed in a vertical borehole passing through the region of concern. The electrical impedance at any point along a coaxial cable changes with deformation of the insulator between the conductors. A brittle grout surrounds the cable to translate earth movement into an abrupt cable deformation that shows up as a detectable peak in the reflectance trace. Until recently, the technique was relatively insensitive to small slope movements and could not be automated because it relied on human detection of changes in the reflectance trace over time. Farrington and Sargand (2004) developed a simple signal processing technique using numerical derivatives to extract reliable indications of slope movement from the TDR data much earlier than by conventional interpretation.
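One simple way to automate detection, loosely in the spirit of the numerical-derivative processing of Farrington and Sargand (2004), is to difference successive reflectance traces so that a small, growing deformation spike stands out. The traces and the threshold below are invented purely for illustration:

```python
# Compare two reflectance traces sampled at the same points along the
# cable and flag indices where the change exceeds a threshold. A brittle
# grout turns slope movement into an abrupt local change in the trace.
def deformation_candidates(trace_old, trace_new, threshold=0.05):
    """Indices where the trace changed by more than `threshold` between surveys."""
    diffs = [new - old for old, new in zip(trace_old, trace_new)]
    return [i for i, d in enumerate(diffs) if abs(d) > threshold]

baseline = [0.0, 0.01, 0.0, -0.01, 0.0, 0.01]
latest   = [0.0, 0.01, 0.0, -0.12, 0.0, 0.01]  # a new dip at index 3
print(deformation_candidates(baseline, latest))  # [3]
```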
TDR in Semiconductor Device Analysis
Time Domain Reflectometry is used in semiconductor failure analysis as a non-destructive method for the location of defects in semiconductor device packages. The TDR provides an electrical signature of individual conductive traces in the device package, and is useful for determining the location of opens and shorts.

Tuesday, April 01, 2008

Photonics system

Photonics is the science of generating, controlling, and detecting photons, particularly in the visible and near-infrared spectrum, but also extending to the ultraviolet (0.2 - 0.35 µm wavelength), long-wave infrared (8 - 12 µm wavelength), and far-infrared/THz portion of the spectrum (e.g., 2-4 THz, corresponding to 75-150 µm wavelength), where quantum cascade lasers are today being actively developed. Photonics is an outgrowth of the first practical semiconductor light emitters, invented in the early 1960s at General Electric, MIT Lincoln Laboratory, IBM, and RCA, and made practical by Zhores Alferov and Dmitri Z. Garbuzov and collaborators working at the Ioffe Physico-Technical Institute, and almost simultaneously by Izuo Hayashi and Mort Panish working at Bell Telephone Laboratories. Photonics most typically operates at frequencies on the order of hundreds of terahertz.
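The "hundreds of terahertz" figure follows directly from f = c / λ; a quick sketch (the sample wavelengths are just representative points in the visible and near-infrared):

```python
# Optical frequency for a given vacuum wavelength: f = c / lambda.
C = 299_792_458.0  # speed of light in vacuum, m/s

def frequency_thz(wavelength_um):
    """Frequency in THz for a vacuum wavelength given in micrometres."""
    return C / (wavelength_um * 1e-6) / 1e12

# Green light at 0.5 um is ~600 THz; 1.55 um (the main fiber-optic
# telecom band) is ~193 THz - both "hundreds of terahertz".
print(frequency_thz(0.5))
print(frequency_thz(1.55))
```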
Just as applications of electronics have expanded dramatically since the first transistor was invented in 1948, the unique applications of photonics continue to emerge. Those which are established as economically important applications for semiconductor photonic devices include optical data recording, fiber optic telecommunications, laser printing (based on xerography), displays, and optical pumping of high-power lasers. The potential applications of photonics are virtually unlimited and include chemical synthesis, medical diagnostics, on-chip data communication, laser defense, and fusion energy to name several interesting additional examples.
Relationship to other fields:
Classical optics
Photonics is closely related to optics. However, optics preceded the discovery that light is quantized (when the photoelectric effect was explained by Albert Einstein in 1905). The tools of optics are the refracting lens, the reflecting mirror, and various other optical components known prior to 1900. The key tenets of classical optics, such as Huygens' principle, Maxwell's equations, and the wave equations, do not depend on quantum properties of light.
Modern optics
Photonics is approximately synonymous with quantum optics, quantum electronics, electro-optics, and optoelectronics. However, each is used with slightly different connotations by scientific and government communities and in the marketplace. Quantum optics often connotes fundamental research, whereas photonics is used to connote applied research and development.
The term photonics more specifically connotes:
the particle properties of light,
the potential of creating signal processing device technologies using photons,
those quantum optical technologies which are manufacturable and can be low-cost, and
an analogy to electronics.
The term optoelectronics generally connotes devices or circuits that comprise both electrical and optical functions, e.g., a thin-film semiconductor device. The term electro-optics came into earlier use and specifically encompasses nonlinear electrical-optical interactions applied, e.g., in bulk crystal modulators such as the Pockels cell, but also includes advanced imaging sensors typically used for surveillance by civilian or government organizations.
Emerging fields
Photonics also relates to the emerging science of quantum information in those cases where it employs photonic methods. Other emerging fields include opto-atomics, in which devices integrate both photonic and atomic devices for applications such as precision timekeeping, navigation, and metrology. Another emerging field is polaritonics, which differs from photonics in that the fundamental information carrier is a phonon-polariton, a mixture of photons and phonons, and which operates in the range of frequencies from 300 gigahertz to approximately 10 terahertz.
Overview of photonics research:
The science of photonics includes the emission, transmission, amplification, detection, modulation, and switching of light.
Photonic devices include optoelectronic devices such as lasers and photodetectors, as well as optical fiber, photonic crystals, planar waveguides, and other passive optical elements.
Applications of photonics include light detection, telecommunications, information processing, illumination, metrology, spectroscopy, holography, medicine (surgery, vision correction, endoscopy, health monitoring), military technology, laser material processing, visual art, biophotonics, agriculture and robotics.
History of photonics
Photonics as a field really began in 1960 with the invention of the laser, followed by the laser diode, the development in the 1970s of optical fibers as a medium for transmitting information using light beams, and the erbium-doped fiber amplifier. These inventions formed the basis for the telecommunications revolution of the late 20th century, and provided the infrastructure for the Internet.
Historically, the term photonics only came into common use among the scientific community in the 1980s, as fiber-optic transmission of electronic data was widely adopted by telecommunications network operators (although the term had been coined earlier). At that time it was adopted widely within Bell Laboratories. Its use was confirmed when the IEEE Lasers and Electro-Optics Society established an archival journal named Photonics Technology Letters at the end of the 1980s.
During the period leading up to the dot-com crash circa 2001, photonics as a field was largely focused on telecommunications. However, photonics covers a huge range of science and technology applications, including:
laser manufacturing,
biological and chemical sensing,
medical diagnostics and therapy,
display technology,
optical computing.
Various non-telecom photonics applications have shown strong growth, particularly since the dot-com crash, partly because many companies have successfully sought out new application areas. Further substantial growth of photonics can be expected if the current development of silicon photonics succeeds.
Applications of Photonics:

Consumer Equipment: Barcode scanner, printer, CD/DVD/Blu-ray devices, remote control devices
Telecommunications: Optical fiber communications, optical down-conversion to microwave
Medicine: correction of poor eyesight, laser surgery, surgical endoscopy, tattoo removal
Industrial manufacturing: the use of lasers for welding, drilling, cutting, and various kinds of surface modification
Construction: laser levelling, laser rangefinding, smart structures
Aviation: photonic gyroscopes lacking any moving parts
Military: IR sensors, command and control, navigation, search and rescue, mine laying and detection
Entertainment: laser shows, beam effects, holographic art
Information processing
Metrology: time and frequency measurements, rangefinding
Photonic computing: clock distribution and communication between computers, circuit boards, or within optoelectronic integrated circuits; in the future: quantum computing.