Friday, November 23, 2007

Air conditioning

The term air conditioning most commonly refers to the cooling and dehumidification of indoor air for thermal comfort. In a broader sense, the term can refer to any form of cooling, heating, ventilation, or disinfection that modifies the condition of air. An air conditioner (AC or A/C in North American English, aircon in British and Australian English) is an appliance, system, or mechanism designed to stabilise the air temperature and humidity within an area (used for cooling as well as heating, depending on the air properties at a given time), typically using a refrigeration cycle but sometimes using evaporation. It is most commonly used for comfort cooling in buildings and transportation vehicles.
The concept of air conditioning is known to have been applied in Ancient Rome, where aqueduct water was circulated through the walls of certain houses to cool them. Similar techniques in medieval Persia involved the use of cisterns and wind towers to cool buildings during the hot season. Modern air conditioning emerged from advances in chemistry during the 19th century, and the first large-scale electrical air conditioning was invented and used in 1902 by Willis Haviland Carrier. Comfort applications aim to provide a building indoor environment that remains relatively constant in a range preferred by humans despite changes in external weather conditions or internal heat loads.
The highest performance for tasks performed by people seated in an office is expected to occur at 72 °F (22 °C), and performance is expected to degrade by about 1% for every 2 °F change in room temperature. Peak performance for tasks performed while standing, and for larger people, is expected at slightly lower temperatures; for smaller people, at slightly higher temperatures. Although generally accepted, some dispute that thermal comfort enhances worker productivity, as discussed in connection with the Hawthorne effect.
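The rule of thumb quoted above is easy to express as a small model (a sketch only; the function name and the assumption of a simple linear falloff are mine, drawn from the figures given):

```python
def relative_performance(temp_f, optimum_f=72.0, pct_loss_per_2f=1.0):
    """Estimated relative task performance (as a percentage) for a seated
    office worker: peak output at the optimum temperature, falling about
    1% for every 2 F of deviation, per the rule of thumb above."""
    deviation = abs(temp_f - optimum_f)
    return max(0.0, 100.0 - pct_loss_per_2f * (deviation / 2.0))
```

By this model a room at 76 °F (or 68 °F) costs about 2% of peak performance.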
Comfort air conditioning makes deep plan buildings feasible. Without air conditioning, buildings must be built narrower or with light wells so that inner spaces receive sufficient outdoor air via natural ventilation. Air conditioning also allows buildings to be taller, since wind speed increases significantly with altitude, making natural ventilation impractical for very tall buildings. Comfort applications for various building types are quite different and may be categorized as:
Low-Rise Residential buildings, including single family houses, duplexes, and small apartment buildings
High-Rise Residential buildings, such as tall dormitories and apartment blocks
Commercial buildings, which are built for commerce, including offices, malls, shopping centers, restaurants, etc.
Institutional buildings, which include hospitals, governmental buildings, academic buildings, and so on.
Industrial spaces where thermal comfort of workers is desired.
In addition to buildings, air conditioning can be used for comfort in a wide variety of transportation including land vehicles, trains, ships, aircraft, and spacecraft.
Process applications aim to provide a suitable environment for a process being carried out, regardless of internal heat and humidity loads and external weather conditions. Although often in the comfort range, it is the needs of the process that determine conditions, not human preference. Process applications include these:
Hospital operating theatres, in which air is filtered to high levels to reduce infection risk and the humidity controlled to limit patient dehydration. Although temperatures are often in the comfort range, some specialist procedures such as open heart surgery require low temperatures (about 18 °C, 64 °F) and others such as neonatal relatively high temperatures (about 28 °C, 82 °F).
Cleanrooms for the production of integrated circuits, pharmaceuticals, and the like, in which very high levels of air cleanliness and control of temperature and humidity are required for the success of the process.
Facilities for breeding laboratory animals. Since many animals normally reproduce only in spring, holding them in rooms whose conditions mirror spring all year can cause them to reproduce year round.
Aircraft air conditioning. Although nominally aimed at providing comfort for passengers and cooling of equipment, aircraft air conditioning presents a special process because of the low air pressure outside the aircraft.
Data processing centers
Textile factories
Physical testing facilities
Plants and farm growing areas
Nuclear facilities
Chemical and biological laboratories
Mines
Industrial environments
Food cooking and processing areas
In both comfort and process applications, the objective may be not only to control temperature but also humidity, air quality, and air movement from space to space. Refrigeration air conditioning equipment usually reduces the humidity of the air processed by the system. The relatively cold (below the dew point) evaporator coil condenses water vapor from the processed air (much like an ice-cold drink condenses water on the outside of a glass), sending the water to a drain, removing water vapor from the cooled space, and lowering the relative humidity. Since humans perspire to provide natural cooling by the evaporation of perspiration from the skin, drier air (up to a point) improves comfort. A comfort air conditioner is designed to create 40% to 60% relative humidity in the occupied space. In food retailing establishments, large open chiller cabinets act as highly effective air dehumidifying units.
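The condensation mechanism can be made concrete numerically. The Magnus formula below is a standard approximation for the dew point (it is not taken from this article, and the function name is illustrative): air cooled below this temperature at the evaporator coil sheds condensate.

```python
import math

def dew_point_c(temp_c, rel_humidity_pct):
    """Approximate dew point via the Magnus formula, a common
    approximation valid for roughly -45 C to 60 C."""
    a, b = 17.62, 243.12  # Magnus coefficients for water vapor over liquid water
    gamma = math.log(rel_humidity_pct / 100.0) + a * temp_c / (b + temp_c)
    return b * gamma / (a - gamma)
```

For room air at 25 °C and 50% relative humidity, the dew point comes out near 14 °C, well above typical evaporator coil temperatures, which is why the coil runs wet.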
Some air conditioning units dry the air without cooling it, and are better classified as dehumidifiers. They work like a normal air conditioner, except that a heat exchanger is placed between the intake and exhaust. In combination with convection fans, they achieve a level of comfort similar to an air cooler in humid tropical climates, but consume only about a third of the electricity. They are also preferred by those who find the draft created by air coolers uncomfortable. In a thermodynamically closed system held at a set temperature (a standard mode of operation for modern air conditioners), any energy put into the system must be removed by the air conditioner. For each unit of energy input (say, to power a light bulb in the closed system), the air conditioner must remove that energy, increasing its own consumption by the input energy divided by its efficiency. As an example, suppose a 100-watt light bulb is activated inside the closed system and the air conditioner has an efficiency of 200%. The air conditioner's energy consumption will increase by 50 watts to compensate, making the 100 W light bulb responsible for a total of 150 W of energy use.
Note that air conditioners typically operate at "efficiencies" significantly greater than 100%; see Coefficient of performance.
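The closed-system bookkeeping above fits in a few lines of Python (the function name is mine; an "efficiency" of 200% corresponds to a coefficient of performance of 2):

```python
def total_draw_w(internal_load_w, cop):
    """Total electrical draw attributable to an internal load in a
    closed, temperature-controlled space: the load itself plus the
    extra power the conditioner needs to pump that heat back out."""
    return internal_load_w + internal_load_w / cop
```

With a 100 W bulb and a COP of 2, the total comes to 150 W, matching the worked example above.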

What is a Chiller

A chiller is a machine that removes heat from a liquid via a vapor-compression or absorption refrigeration cycle. Most often water is chilled, but this water may also contain ~20% glycol and corrosion inhibitors; other fluids such as thin oils can be chilled as well.
Chilled water is used to cool and dehumidify air in mid- to large-size commercial, industrial, and institutional (CII) facilities. Most chillers are designed for indoor operation, but a few are weather-resistant. Chillers are precision machines that are very expensive to purchase and operate, so great care is needed in their selection and maintenance. Engineers are normally retained to evaluate an application's cooling needs and to specify the optimal machines.
In air conditioning systems, chilled water is distributed to heat exchangers, or coils, in air handling units, and the used water is returned to the chiller. These cooling coils transfer sensible heat and latent heat from the air to the chilled water, thus cooling and usually dehumidifying the air stream. A typical chiller for air conditioning applications is rated between 15 and 1500 tons (180,000 to 18,000,000 BTU/h, or 53 to 5,300 kW) in cooling capacity.
In industrial applications, cooled water or other liquid from the chiller is pumped through process or laboratory equipment. Industrial chillers are used for controlled cooling of products, mechanisms, and factory machinery in a wide range of industries. They are often used in the plastics industry in injection and blow molding, and in metalworking cutting oils, welding equipment, die-casting and machine tooling, chemical processing, pharmaceutical formulation, food and beverage processing, vacuum systems, X-ray diffraction, power supplies and power generation stations, analytical equipment, semiconductors, and compressed air and gas cooling. They are also used to cool high-heat specialized items such as MRI machines and lasers.
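The capacity range quoted above can be cross-checked from the definition of the refrigeration ton (1 ton = 12,000 BTU/h ≈ 3.517 kW). A minimal sketch, with illustrative function names:

```python
BTU_H_PER_TON = 12_000   # definition of the refrigeration ton
KW_PER_TON = 3.516_85    # 12,000 BTU/h expressed in kilowatts

def tons_to_btu_h(tons):
    """Cooling capacity in BTU/h for a rating given in refrigeration tons."""
    return tons * BTU_H_PER_TON

def tons_to_kw(tons):
    """Cooling capacity in thermal kilowatts for a rating in tons."""
    return tons * KW_PER_TON
```

Running these for 15 and 1500 tons reproduces the 180,000 to 18,000,000 BTU/h and roughly 53 to 5,300 kW figures in the text.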
The chillers for industrial applications can be centralized, where multiple chillers serve multiple cooling needs, or decentralized where each application or machine has its own chiller. Each approach has its advantages. It is also possible to have a combination of both central and decentral chillers, especially if the cooling requirements are the same for some applications or points of use, but not all.
Decentral chillers are usually small in cooling capacity, ranging from about 0.2 to 10 tons. Central chillers generally have capacities ranging from ten tons to hundreds or thousands of tons.
A vapor-compression chiller uses a refrigerant internally as its working fluid. Many refrigerant options are available; when selecting a chiller, the application's cooling temperature requirements must be matched to the refrigerant's cooling characteristics. Important parameters to consider are the operating temperatures and pressures.
Several environmental factors concern refrigerants and affect their future availability for chiller applications. This is a key consideration because a large chiller may last for 25 years or more. The ozone depletion potential (ODP) and global warming potential (GWP) of the refrigerant need to be considered. ODP and GWP figures for some of the more common vapor-compression refrigerants: R-134a has ODP = 0 and GWP = 1300; R-123 has ODP = 0.012 and GWP = 120; R-22 has ODP = 0.05 and GWP = 1700.
Important specifications to consider when searching for industrial chillers include the power source, chiller IP rating, chiller cooling capacity, evaporator capacity, evaporator material, evaporator type, condenser material, condenser capacity, ambient temperature, motor fan type, noise level, internal piping materials, number of compressors, type of compressor, number of refrigeration circuits, coolant requirements, fluid discharge temperature, and COP (the ratio of the cooling capacity in kW to the energy consumed by the whole chiller in kW). For medium to large chillers, COP should range from 3.5 to 4.8, with higher values meaning higher efficiency. Chiller efficiency is often specified in kilowatts per refrigeration ton (kW/RT).
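COP and kW/RT are two views of the same efficiency: since one refrigeration ton is about 3.517 thermal kW, kW/RT ≈ 3.517 / COP. A small sketch (function name illustrative):

```python
KW_PER_TON = 3.516_85  # thermal kW removed per refrigeration ton of cooling

def kw_per_rt(cop):
    """Electrical kW drawn per refrigeration ton of cooling for a chiller
    with the given COP; higher COP means lower kW/RT, i.e. better."""
    return KW_PER_TON / cop
```

So the 3.5 to 4.8 COP range quoted above corresponds to roughly 1.0 down to 0.73 kW/RT.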
Process pump specifications that are important to consider include the process flow, process pressure, pump material, elastomer and mechanical shaft seal material, motor voltage, motor electrical class, motor IP rating, and pump rating. If the chilled water temperature is lower than −5 °C, a special pump must be used that can handle the high concentrations of ethylene glycol required. Other important specifications include the internal water tank size and materials, and the full load amperage.
Control panel features that should be considered when selecting between industrial chillers include the local control panel, remote control panel, fault indicators, temperature indicators, and pressure indicators.
Additional features include emergency alarms, hot gas bypass, city water switchover, and casters.

Electrical Short circuit

A short circuit (sometimes abbreviated to short or s/c) allows charge to flow along a different path from the one intended. The electrical opposite of a short circuit is an open circuit, which presents infinite resistance between two nodes. The term "short circuit" is commonly misused to describe any electrical malfunction, regardless of the actual problem.
A short circuit is an accidental low-resistance connection between two nodes of an electrical circuit that are meant to be at different voltages. This results in an excessive electric current limited only by the Thevenin equivalent resistance of the rest of the network and potentially causes circuit damage, overheating, fire or explosion. Although usually the result of a fault, there are cases where short circuits are caused intentionally, for example, for the purpose of voltage-sensing crowbar circuit protectors.
In circuit analysis, the term short circuit is used by analogy to designate a zero-impedance connection between two nodes. This forces the two nodes to be at the same voltage. In an ideal short circuit, this means there is no resistance and no voltage drop across the short. In simple circuit analysis, wires are considered to be shorts. In real circuits, the result is a connection of nearly zero impedance and almost no resistance; in such a case, the current drawn is limited by the rest of the circuit. As an example, a short circuit results from connecting the positive and negative terminals of a battery together with a low-resistance conductor, such as a wire. With low resistance in the connection, a high current flows, causing the cell to deliver a large amount of energy in a short time.
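The battery example can be made concrete with Ohm's law. The figures below (a 12 V battery with 0.05 Ω internal resistance, a 0.01 Ω shorting wire) are illustrative assumptions, not numbers from the text:

```python
def circuit_current_a(emf_v, internal_r_ohm, external_r_ohm):
    """Ohm's-law loop current for a source with internal resistance:
    when the external path is shorted, only the tiny remaining
    resistance limits the current."""
    return emf_v / (internal_r_ohm + external_r_ohm)
```

A normal 24 Ω load draws half an ampere; shorting the terminals with a 0.01 Ω wire pushes the current to around 200 A, hundreds of times higher, which is why the cell dumps its energy so quickly.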
In electrical devices, unintentional short circuits are usually caused when a wire's insulation breaks down, or when another conducting material is introduced, allowing charge to flow along a different path than the one intended.
A large current through a battery (also called a cell) can cause the rapid buildup of heat, potentially resulting in an explosion or the release of hydrogen gas and electrolyte, which can burn tissue and may be either an acid or a base. Overloaded wires can also overheat, sometimes causing damage to the wire's insulation, or a fire. High current conditions may also occur with electric motor loads under stalled conditions, such as when the impeller of an electrically driven pump is jammed by debris.
In mains circuits, short circuits are most likely to occur between two phases, between a phase and neutral, or between a phase and earth (ground). Such short circuits are likely to result in a very high current and therefore quickly trigger an overcurrent protection device. However, it is possible for short circuits to arise between neutral and earth conductors, and between two conductors of the same phase. Such short circuits can be dangerous, particularly as they may not immediately result in a large current and are therefore less likely to be detected. Possible effects include unexpected energisation of a circuit presumed to be isolated. To help reduce the negative effects of short circuits, power distribution transformers are deliberately designed to have a certain amount of leakage reactance. The leakage reactance (usually about 5 to 10% of the full load impedance) helps limit both the magnitude and the rate of rise of the fault current.
Damage from short circuits can be reduced or prevented by employing fuses, circuit breakers, or other overload protection, which disconnect the power in reaction to excessive current. Overload protection must be chosen according to the maximum prospective short circuit current in a circuit. For example, large home appliances (such as clothes dryers) typically draw 10 to 20 amperes, so it is common for them to be protected by 20 to 30 ampere circuit breakers, whereas lighting circuits typically draw less than 10 amperes and are protected by 10 to 15 ampere breakers. Wire sizes are specified in building and electrical codes, and must be carefully chosen for their specific application to ensure safe operation in conjunction with the overload protection.
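The leakage-reactance point can be sketched numerically: to a first approximation, the bolted-fault current at a transformer's secondary is the rated current divided by the per-unit leakage impedance. The function name and figures below are illustrative:

```python
def prospective_fault_current_a(rated_current_a, leakage_impedance_pct):
    """First-order estimate of bolted-fault current at a transformer
    secondary: rated current divided by per-unit leakage impedance.
    A 5% impedance therefore caps the fault at about 20x full load."""
    return rated_current_a / (leakage_impedance_pct / 100.0)
```

For a transformer rated at 100 A, 5% leakage impedance limits the prospective fault current to about 2,000 A, while 10% limits it to about 1,000 A, which is the trade-off the paragraph above describes.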

What is Wave power

Wave power refers to the energy of ocean surface waves and the capture of that energy to do useful work, including electricity generation, desalination, and the pumping of water into reservoirs. Wave power is a form of renewable energy. Though often conflated, wave power is distinct from the diurnal flux of tidal power and the steady gyre of ocean currents. Wave power generation is not a widely employed technology, and no commercial wave farm has yet been established. Plans to install three 750 kW Pelamis devices in Europe in 2006 were delayed, and no installation had taken place by August 2007. Other plans for wave farms include a 3 MW array of four 750 kW Pelamis devices in the Orkneys, off northern Europe, and the 20 MW Wave Hub development off the European coast.
The north and south temperate zones have the best sites for capturing wave power, as the prevailing westerlies in these zones blow strongest in winter. The physical concept is that waves are generated by wind passing over the sea: organized waves form from disorganized turbulence because wind pressure pushes down wave troughs and lifts up wave crests, the latter due to Bernoulli's principle. See Ocean surface wave.
In general, large waves are more powerful. Specifically, wave power is determined by wave height, wave speed, wavelength, and water density.
Wave size is determined by wind speed and fetch (the distance over which the wind excites the waves) and by the depth and topography of the seafloor (which can focus or disperse the energy of the waves). For a given wind speed there is a practical limit beyond which additional time or distance will not produce larger waves; this limit is called a "fully developed sea."
Wave motion is highest at the surface and diminishes exponentially with depth; however, wave energy is also present as pressure waves in deeper water.
The potential energy of a set of waves is proportional to wave height squared times wave period (the time between wave crests). Longer period waves have relatively longer wavelengths and move faster. The potential energy is equal to the kinetic energy (that can be expended). Wave power is expressed in kilowatts per meter (at a location such as a shoreline).
The formula below shows how wave power can be calculated. Excluding waves created by major storms, the largest waves are about 15 meters high and have a period of about 15 seconds. According to the formula, such waves carry about 1700 kilowatts of potential power across each meter of wavefront. A good wave power location will have an average flux much less than this: perhaps about 50 kW/m.
Formula: Power (in kW/m) = k H² T ≈ 0.5 H² T,
where k is a constant, H is the wave height (crest to trough) in meters, and T is the wave period (crest to crest) in seconds.
Good wave power locations have a flux of about 50 kilowatts per meter of shoreline. Capturing 20 percent of this, or 10 kilowatts per meter, is plausible. Assuming very large scale deployment of (and investment in) wave power technology, coverage of 5000 kilometers of shoreline worldwide is plausible. Therefore, the potential for shoreline-based wave power is about 50 gigawatts. Deep water wave power resources are truly enormous, but perhaps impractical to capture.
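The figures in this section follow directly from the formula; a quick sketch (function names are mine) reproduces them:

```python
def wave_power_kw_per_m(height_m, period_s, k=0.5):
    """Wave power flux per metre of wavefront, using the rule-of-thumb
    constant k ~ 0.5 from the formula above."""
    return k * height_m ** 2 * period_s

def shoreline_potential_gw(flux_kw_per_m, capture_fraction, shoreline_km):
    """Capturable power for a stretch of shoreline, in gigawatts:
    flux x captured fraction x shoreline length (km -> m), kW -> GW."""
    return flux_kw_per_m * capture_fraction * shoreline_km * 1000 / 1e6
```

A 15 m, 15 s wave gives 0.5 × 15² × 15 = 1687.5 kW/m, matching the "about 1700 kilowatts" above; 20 percent of a 50 kW/m flux over 5000 km of shoreline gives the 50 GW estimate.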

What is an Electricity pylon

An electricity pylon or transmission tower is a tall, usually steel lattice structure used to support overhead electricity conductors for electric power transmission. Electricity transmission towers have been used since at least the 1910s.
For high voltage AC transmission, three-phase electric power systems are used on high and extra-high voltage lines (50 kV and above). The towers must be designed to carry three conductors (or multiples of three). The towers are usually steel lattices or trusses (wooden structures are used in a few European countries in exceptional cases), and the insulators are either glass or porcelain discs assembled in strings, whose length depends on the line voltage and environmental conditions. One or two earth conductors (also called ground conductors) for lightning protection are often mounted at the top of each tower.
In some countries, towers for high and extra-high voltage are usually designed to carry two or more electric circuits. For double circuit lines in Germany, the "Danube" towers or more rarely, the "fir tree" towers, are usually used. If a line is constructed using towers designed to carry several circuits, it is not necessary to install all the circuits at the time of construction.
Some high voltage circuits are often erected on the same tower as 110 kV lines. Paralleling circuits of 380 kV, 220 kV, and 110 kV lines on the same towers is common. Sometimes, especially with 110 kV circuits, a parallel circuit carries traction lines for railway electrification.
High voltage DC transmission pylons carry either monopolar or bipolar systems. With bipolar systems, a conductor arrangement with one conductor on each side of the tower is used. For single-pole HVDC transmission with ground return, towers with only one conductor can be used. In many cases, however, the towers are designed for later conversion to a two-pole system. In these cases, conductors are installed on both sides of the tower for mechanical reasons. Until the second pole is needed, it is either grounded or joined in parallel with the pole in use. In the latter case, the line from the converter station to the earthing (grounding) electrode is built as an underground cable.
Towers used for single-phase AC railway traction lines are similar in construction to the towers used for 110 kV three-phase lines. Steel tube or concrete poles are also often used for these lines. However, railway traction current systems are two-pole AC systems, so traction lines are designed for two conductors (or multiples of two, usually four, eight, or twelve). As a rule, the towers of railway traction lines carry two electric circuits, so they have four conductors. These are usually arranged on one level, each circuit occupying one half of the crossarm. For four traction circuits the conductors are arranged in two levels, and for six circuits in three levels.
Where space is limited, it is possible to arrange the conductors of one traction circuit in two levels. Running a traction power line parallel to a three-phase AC high voltage transmission line on a separate crossarm of the same tower is also possible. If traction lines run parallel to 380 kV lines, the insulation must be designed for 220 kV, because in the event of a fault, dangerous overvoltages to the three-phase alternating current line can occur. Traction lines are usually equipped with one earth conductor; in Austria, two earth conductors are used on some traction circuits.
There are tower testing stations for testing the mechanical properties of towers. Lattice towers can be assembled horizontally on the ground and erected by push-pull cable, but this method is rarely used because of the large assembly area needed. Lattice towers are more usually erected using a crane or, in inaccessible areas, a helicopter.
Pylons and the cables they support are generally regarded as unattractive. An alternative to pylons is underground cables, a more expensive solution than pylon-supported cables but one with aesthetic advantages. There are schemes in various countries to improve the appearance of the environment by removing pylons and undergrounding the cables.

Monday, November 19, 2007

Wi-Fi

Wi-Fi is a wireless technology brand owned by the Wi-Fi Alliance, intended to improve the interoperability of wireless local area network products based on the IEEE 802.11 standards. "Wi-Fi" was thought to be derived from "Wireless Fidelity", as the Wi-Fi Alliance used this term in some of its early press releases. However, the Wi-Fi Alliance's recent official position is that the term has no meaning.
Common applications for Wi-Fi include Internet and VoIP phone access, gaming, and network connectivity for consumer electronics such as televisions, DVD players, and digital cameras. A Wi-Fi enabled device such as a PC, game console, cell phone, MP3 player, or PDA can connect to the Internet when within range of a wireless network connected to the Internet. The area covered by one or more interconnected access points is called a hotspot. Hotspots can cover as little as a single room with wireless-opaque walls or as much as many square miles covered by overlapping access points. Wi-Fi has been used to create mesh networks, for example in Europe. Both architectures are used in community networks.
Wi-Fi also allows connectivity in peer-to-peer (wireless ad-hoc network) mode, which enables devices to connect directly with each other. This connectivity mode is useful in consumer electronics and gaming applications.
When the technology was first commercialized there were many problems because consumers could not be sure that products from different vendors would work together. The Wi-Fi Alliance began as a community to solve this issue so as to address the needs of the end user and allow the technology to mature. The Alliance created the branding Wi-Fi CERTIFIED to show consumers that products are interoperable with other products displaying the same branding.
Many consumer devices use Wi-Fi. Amongst others, personal computers can network to each other and connect to the Internet, mobile computers can connect to the Internet from any Wi-Fi hotspot, and digital cameras can transfer images wirelessly.
Routers which incorporate a DSL or cable modem and a Wi-Fi access point are often used in homes and other premises, and provide Internet access and internetworking to all devices connected wirelessly or by cable into them. Devices supporting Wi-Fi can also be connected in ad-hoc mode for client-to-client connections without a router.
Business and industrial Wi-Fi is widespread as of 2007. In business environments, increasing the number of Wi-Fi access points provides redundancy, support for fast roaming and increased overall network capacity by using more channels or creating smaller cells. Wi-Fi enables wireless voice applications (VoWLAN or WVOIP). Over the years, Wi-Fi implementations have moved toward 'thin' access points, with more of the network intelligence housed in a centralized network appliance, relegating individual Access Points to be simply 'dumb' radios. Outdoor applications may utilize true mesh topologies. As of 2007 Wi-Fi installations can provide a secure computer networking gateway, firewall, DHCP server, intrusion detection system, and other functions.
In addition to restricted use in homes and offices, Wi-Fi is publicly available at Wi-Fi hotspots provided either free of charge or to subscribers of various providers. Free hotspots are often provided by businesses such as hotels, restaurants, and airports that offer the service to attract or assist clients. Sometimes free Wi-Fi is provided by enthusiasts, or by organizations or authorities who wish to promote business in their area. Metropolitan-wide Wi-Fi (Mu-Fi) already has more than 300 projects in process. In Europe, a portion of the 2.4 GHz Wi-Fi radio spectrum is also allocated to amateur radio users. In the U.S., FCC Part 15 rules govern non-licensed operators (i.e. most Wi-Fi equipment users). Under Part 15 rules, non-licensed users must "accept" (i.e. endure) interference from licensed users and not cause harmful interference to licensed users. Amateur radio operators are licensed users, and retain what the FCC terms "primary status" on the band, under a distinct set of rules (Part 97). Under Part 97, licensed amateur operators may construct their own equipment, use very high-gain antennas, and boost output power to 100 watts on frequencies covered by Wi-Fi channels 2-6. However, Part 97 rules mandate using only the minimum power necessary for communications, forbid obscuring the data, and require station identification every 10 minutes. Therefore, output power control is required to meet regulations, and the transmission of any encrypted data (for example https) is questionable.
In practice, microwave power amplifiers are expensive. On the other hand, the short wavelength at 2.4 GHz allows for simple construction of very high gain directional antennas. Although Part 15 rules forbid any modification of commercially constructed systems, amateur radio operators may modify commercial systems for optimized construction of long links, for example. Using only 200 mW link radios and high gain directional antennas, a very narrow beam may be used to construct reliable links with minimal radio frequency interference to other users.
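A rough link budget illustrates why high-gain antennas matter so much at 2.4 GHz. The free-space path loss formula below is standard; the 200 mW (23 dBm) radio and 24 dBi dish gains are illustrative assumptions, not figures from the text:

```python
import math

def fspl_db(distance_km, freq_mhz):
    """Free-space path loss in dB; the 32.44 constant assumes distance
    in kilometres and frequency in megahertz."""
    return 20 * math.log10(distance_km) + 20 * math.log10(freq_mhz) + 32.44

def rx_power_dbm(tx_dbm, tx_gain_dbi, rx_gain_dbi, distance_km, freq_mhz=2440):
    """Received signal level for a point-to-point link: transmit power
    plus both antenna gains, minus free-space path loss."""
    return tx_dbm + tx_gain_dbi + rx_gain_dbi - fspl_db(distance_km, freq_mhz)
```

Under these assumptions, a 10 km link sees roughly 120 dB of path loss, yet the 48 dB of combined antenna gain still leaves a received level around −49 dBm, comfortably workable, without ever exceeding 200 mW of transmit power.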

Sunday, November 18, 2007

XHTML (Extensible HyperText Markup Language)

The Extensible HyperText Markup Language, or XHTML, is a markup language that has the same depth of expression as HTML but also conforms to XML syntax. (HTML, an initialism of HyperText Markup Language, is the predominant markup language for web pages. It provides a means to describe the structure of text-based information in a document, by denoting certain text as headings, paragraphs, lists, and so on, and to supplement that text with interactive forms, embedded images, and other objects. HTML is written in the form of labels, known as tags, surrounded by angle brackets. HTML can also describe, to some degree, the appearance and semantics of a document, and can include embedded scripting language code that affects the behavior of web browsers and other HTML processors. HTML is also often used to refer to content of the MIME type text/html, or even more broadly as a generic term for HTML whether in its XML-descended form, such as XHTML 1.0 and later, or its form descended directly from SGML, such as HTML 4.01 and earlier.)
Whereas HTML is an application of Standard Generalized Markup Language (SGML), a very flexible markup language, XHTML is an application of XML, a more restrictive subset of SGML. Because they need to be well-formed, true XHTML documents allow for automated processing using standard XML tools, unlike HTML, which requires a relatively complex, lenient, and generally custom parser. XHTML can be thought of as the intersection of HTML and XML in many respects, since it is a reformulation of HTML in XML. XHTML 1.0 became a World Wide Web Consortium (W3C) Recommendation on January 26, 2000. XHTML 1.1 became a W3C Recommendation on May 31, 2001.
XHTML is the successor to HTML, and many consider XHTML the current or latest version of HTML. However, XHTML is a separate recommendation; the W3C continues to recommend XHTML 1.1, XHTML 1.0, and HTML 4.01 for web publishing, and HTML 5 is being actively developed.
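The well-formedness difference is easy to demonstrate: any standard XML parser accepts a true XHTML fragment and rejects tag-soup HTML. A minimal sketch in Python (the sample strings are illustrative):

```python
import xml.etree.ElementTree as ET

def is_well_formed(markup: str) -> bool:
    """True if the markup parses as well-formed XML, as every true
    XHTML document must; HTML parsers are far more lenient."""
    try:
        ET.fromstring(markup)
        return True
    except ET.ParseError:
        return False

xhtml = '<html xmlns="http://www.w3.org/1999/xhtml"><body><p>hi</p></body></html>'
html_ish = "<p>unclosed paragraph<br>"
```

Here `is_well_formed(xhtml)` succeeds while `is_well_formed(html_ish)` fails on the unclosed tags, which is exactly why XHTML enables automated processing with off-the-shelf XML tools.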

Universal Mobile Telecommunications System (UMTS)

Universal Mobile Telecommunications System (UMTS) is one of the third-generation (3G) cell phone technologies. Currently, the most common form uses W-CDMA as the underlying air interface, is standardized by the 3GPP, and is the European answer to the ITU IMT-2000 requirements for 3G cellular radio systems.
To differentiate UMTS from competing network technologies, UMTS is sometimes marketed as 3GSM, emphasizing the combination of the 3G nature of the technology and the GSM standard which it was designed to succeed. UMTS combines the W-CDMA, TD-CDMA, or TD-SCDMA air interfaces, GSM's Mobile Application Part (MAP) core, and the GSM family of speech codecs. In the most popular cellular mobile telephone variant of UMTS, W-CDMA is currently used. Note that other wireless standards also use W-CDMA as their air interface, including FOMA.
UMTS over W-CDMA uses a pair of 5 MHz channels. In contrast, the competing CDMA2000 system uses one or more arbitrary 1.25 MHz channels for each direction of communication. UMTS and other W-CDMA systems are widely criticized for their large spectrum usage, which has delayed deployment in countries that acted relatively slowly in allocating new frequencies specifically for 3G services (such as the United States).
The specific frequency bands originally defined by the UMTS standard are 1885-2025 MHz for the mobile-to-base (uplink) and 2110-2200 MHz for the base-to-mobile (downlink). In the US, 1710-1755 MHz and 2110-2155 MHz will be used instead, as the 1900 MHz band was already utilized. Additionally, in some countries UMTS operators use the 850 MHz and 1900 MHz bands (independently, meaning uplink and downlink are within the same band), notably in the US by AT&T Mobility.
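As a rough sanity check on the channel arithmetic above (ignoring guard bands, which real allocations must include), one can count how many 5 MHz carrier pairs fit in the US allocation quoted. The function name here is invented for illustration:

```python
# UMTS over W-CDMA uses paired 5 MHz channels (one uplink + one downlink).
UMTS_CARRIER_MHZ = 5.0

def paired_carriers(uplink_mhz, downlink_mhz, carrier_mhz=UMTS_CARRIER_MHZ):
    """Number of carrier pairs that fit in a paired allocation,
    ignoring guard bands (illustrative simplification only)."""
    return int(min(uplink_mhz, downlink_mhz) // carrier_mhz)

# US allocation: 1710-1755 MHz uplink paired with 2110-2155 MHz downlink.
us_pairs = paired_carriers(1755 - 1710, 2155 - 2110)
print(us_pairs)  # 9 carrier pairs in 45 MHz of paired spectrum
```

The same helper applied with a 1.25 MHz carrier width shows why CDMA2000's narrower channels were easier to fit into existing allocations.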
For existing GSM operators, it is a simple but costly migration path to UMTS: much of the infrastructure is shared with GSM, but the cost of obtaining new spectrum licenses and overlaying UMTS at existing towers can be prohibitively expensive.
A major difference between UMTS and GSM is the air interface forming the Generic Radio Access Network (GRAN). It can be connected to various backbone networks such as the Internet, ISDN, GSM, or a UMTS network. GRAN includes the three lowest layers of the OSI model. The network-layer (OSI layer 3) protocols form the Radio Resource Management (RRM) protocol, which manages the bearer channels between the mobile terminals and the fixed network, including handovers.

Enhanced Data Rates for GSM Evolution (EDGE)

Enhanced Data Rates for GSM Evolution (EDGE), or Enhanced GPRS (EGPRS), is a digital mobile phone technology that allows increased data transmission rates and improved data transmission reliability. Although technically a 3G network technology, it is generally classified as the unofficial standard 2.75G because of its slower network speed. EDGE has been introduced into GSM networks around the world since 2003, initially in North America.
EDGE can be used for any packet switched application, such as an Internet connection. High-speed data applications such as video services and other multimedia benefit from EGPRS' increased data capacity. EDGE Circuit Switched is a possible future development.
EDGE Evolution continues in Release 7 of the 3GPP standard, providing doubled performance, e.g. to complement High-Speed Packet Access (HSPA).
Technology
EDGE/EGPRS is implemented as a bolt-on enhancement to 2G and 2.5G GSM and GPRS networks, making it easier for existing GSM carriers to upgrade to it. EDGE/EGPRS is a superset of GPRS and can function on any network with GPRS deployed on it, provided the carrier implements the necessary upgrade.
Although EDGE requires no hardware or software changes in GSM core networks, base stations must be modified. EDGE-compatible transceiver units must be installed and the base station subsystem (BSS) needs to be upgraded to support EDGE. New mobile terminal hardware and software are also required to encode/decode the new modulation and coding schemes and to carry the higher user data rates needed to implement new services.
Transmission techniques
In addition to Gaussian minimum-shift keying (GMSK), EDGE uses higher-order modulation in the form of 8 phase-shift keying (8PSK) for the upper five of its nine modulation and coding schemes. EDGE produces a 3-bit word for every change in carrier phase, which effectively triples the gross data rate offered by GSM. EDGE, like GPRS, uses a rate adaptation algorithm that adapts the modulation and coding scheme (MCS) to the quality of the radio channel, and thus the bit rate and robustness of data transmission. It also introduces a technology not found in GPRS, Incremental Redundancy, which, instead of retransmitting disturbed packets, sends additional redundancy information to be combined in the receiver. This increases the probability of correct decoding.
EDGE can carry data speeds up to 236.8 kbit/s over 4 timeslots (the theoretical maximum is 473.6 kbit/s over 8 timeslots) in packet mode, and therefore meets the International Telecommunication Union's requirement for a 3G network; it has been accepted by the ITU as part of the IMT-2000 family of 3G standards. It also enhances the circuit-switched data mode called HSCSD, increasing the data rate of that service.
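The rate figures above are mutually consistent, as a quick check shows (plain arithmetic on the numbers quoted, nothing more):

```python
# GMSK carries 1 bit per symbol; EDGE's 8PSK carries 3 bits per symbol,
# which is why the gross data rate triples relative to GSM.
GMSK_BITS_PER_SYMBOL = 1
PSK8_BITS_PER_SYMBOL = 3

# Per-timeslot rate implied by the 4-slot figure quoted above.
rate_4_slots_kbit = 236.8
per_slot_kbit = rate_4_slots_kbit / 4    # 59.2 kbit/s per timeslot
max_8_slots_kbit = per_slot_kbit * 8     # 473.6 kbit/s theoretical maximum
print(per_slot_kbit, max_8_slots_kbit)
```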

GPRS (General Packet Radio Service)

General Packet Radio Service (GPRS) is a Mobile Data Service available to users of Global System for Mobile Communications (GSM) and IS-136 mobile phones. GPRS data transfer is typically charged per kilobyte of transferred data, while data communication via traditional circuit switching is billed per minute of connection time, independent of whether the user has actually transferred data or has been in an idle state. GPRS can be used for services such as Wireless Application Protocol (WAP) access, Short Message Service, Multimedia Messaging Service, and for Internet communication services such as email and World Wide Web access.
2G cellular systems combined with GPRS are often described as "2.5G", that is, a technology between the second (2G) and third (3G) generations of mobile telephony. It provides moderate-speed data transfer by using unused Time Division Multiple Access (TDMA) channels in, for example, the GSM system. Originally there was some thought to extend GPRS to cover other standards, but instead those networks are being converted to use the GSM standard, so that GSM is the only kind of network where GPRS is in use. GPRS is integrated into GSM Release 97 and newer releases. It was originally standardized by the European Telecommunications Standards Institute (ETSI), but is now standardized by the 3rd Generation Partnership Project (3GPP).
Capability classes
Class A
Can be connected to GPRS service and GSM service (voice, SMS), using both at the same time. Such devices are known to be available today.
Class B
Can be connected to GPRS service and GSM service (voice, SMS), but using only one or the other at a given time. During GSM service (voice call or SMS), GPRS service is suspended, and then resumed automatically after the GSM service (voice call or SMS) has concluded. Most GPRS mobile devices are Class B.
Class C
Can be connected to either GPRS service or GSM service (voice, SMS), but must be switched manually between the two.
A true Class A device may be required to transmit on two different frequencies at the same time, and thus will need two radios. To get around this expensive requirement, a GPRS mobile may implement the dual transfer mode (DTM) feature. A DTM-capable mobile may use simultaneous voice and packet data, with the network coordinating to ensure that it is not required to transmit on two different frequencies at the same time. Such mobiles are considered pseudo-Class A, sometimes referred to as "simple class A". Some networks are expected to support DTM in 2007.
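The Class B behaviour described above (GPRS suspended during a voice call or SMS, then resumed automatically) can be sketched as a toy state machine; the class and method names are invented for illustration and model no real protocol stack:

```python
class ClassBDevice:
    """Toy model of a GPRS Class B mobile: attached to both GSM and GPRS,
    but GPRS data transfer is suspended while a GSM service is active."""

    def __init__(self):
        self.gsm_active = False
        self.gprs_suspended = False

    def start_voice_call(self):
        self.gsm_active = True
        self.gprs_suspended = True       # GPRS is suspended, not detached

    def end_voice_call(self):
        self.gsm_active = False
        self.gprs_suspended = False      # GPRS resumes automatically

    def can_transfer_data(self):
        return not self.gprs_suspended

dev = ClassBDevice()
dev.start_voice_call()
print(dev.can_transfer_data())   # False: data paused during the call
dev.end_voice_call()
print(dev.can_transfer_data())   # True: data resumes after the call
```

A true Class A device would instead keep `can_transfer_data()` true during the call, which is what requires the second radio (or DTM coordination) discussed above.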
Services and hardware
GPRS upgrades GSM data services providing:
Multimedia Messaging Service (MMS)
Push to talk over Cellular (PoC/PTT)
Instant Messaging and Presence -- Wireless Village
Internet Applications for Smart Devices through Wireless Application Protocol (WAP)
Point-to-point (PTP) service: internetworking with the Internet (IP protocols)
Short Message Service (SMS)
Future enhancements: the standard is flexible enough to add new functions, such as more capacity, more users, new accesses, new protocols and new radio networks.
USB GPRS modem
USB GPRS modems use a terminal-like interface over USB 2.0 and later, the V.42bis and RFC 1144 data compression formats, and external antennas. Modems can be add-in cards (for laptops) or external USB devices similar in shape and size to a computer mouse.
GPRS can be used as the bearer of SMS. If SMS over GPRS is used, a transmission speed of about 30 SMS messages per minute may be achieved. This is much faster than ordinary SMS over GSM, whose transmission speed is about 6 to 10 SMS messages per minute.

What is GSM

Global System for Mobile communications (GSM) is the most popular standard for mobile phones in the world. Its promoter, the GSM Association, estimates that 82% of the global mobile market uses the standard. GSM is used by over 2 billion people across more than 212 countries and territories. Its ubiquity makes international roaming very common between mobile phone operators, enabling subscribers to use their phones in many parts of the world. GSM differs from its predecessors in that both signaling and speech channels are digital, and so it is considered a second-generation (2G) mobile phone system. This has also meant that data communication was built into the system, with ongoing standardization by the 3rd Generation Partnership Project (3GPP). The key advantage of GSM systems to consumers has been better voice quality and low-cost alternatives to making calls, such as the Short Message Service (SMS, also called "text messaging"). The advantage for network operators has been the ease of deploying equipment from any vendor that implements the standard. Like other cellular standards, GSM allows network operators to offer roaming services so that subscribers can use their phones on GSM networks all over the world.
Newer versions of the standard were backward-compatible with the original GSM phones. For example, Release '97 of the standard added packet data capabilities by means of General Packet Radio Service (GPRS), and Release '99 introduced higher-speed data transmission using Enhanced Data Rates for GSM Evolution (EDGE).
Technical details
GSM is a cellular network, which means that mobile phones connect to it by searching for cells in the immediate vicinity. GSM networks operate in four different frequency ranges. Most GSM networks operate in the 900 MHz or 1800 MHz bands. Some countries in the Americas (including Canada and the United States) use the 850 MHz and 1900 MHz bands because the 900 and 1800 MHz frequency bands were already allocated.
The rarer 400 and 450 MHz frequency bands are assigned in some countries, notably Scandinavia, where these frequencies were previously used for first-generation systems.
In the 900 MHz band the uplink frequency band is 890–915 MHz, and the downlink frequency band is 935–960 MHz. This 25 MHz bandwidth is subdivided into 124 carrier frequency channels, each spaced 200 kHz apart. Time division multiplexing is used to allow eight full-rate or sixteen half-rate speech channels per radio frequency channel. There are eight radio timeslots (giving eight burst periods) grouped into what is called a TDMA frame. Half rate channels use alternate frames in the same timeslot. The channel data rate is 270.833 kbit/s, and the frame duration is 4.615 ms.
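The per-burst figures implied by these numbers can be cross-checked directly. The exact fractions used below (a gross rate of 1,625,000/6 bit/s and a frame of 120/26 ms) are the standard's underlying values, consistent with the rounded 270.833 kbit/s and 4.615 ms quoted above; treat this as a back-of-envelope sketch:

```python
CHANNEL_RATE_BPS = 1_625_000 / 6     # exact gross channel rate, ≈ 270.833 kbit/s
FRAME_S = (120 / 26) / 1000          # exact TDMA frame duration, ≈ 4.615 ms
TIMESLOTS = 8

bits_per_frame = CHANNEL_RATE_BPS * FRAME_S   # 1250 bits per TDMA frame
bits_per_slot = bits_per_frame / TIMESLOTS    # 156.25 bits per burst period
carriers = (960 - 935) * 1000 // 200          # 125 raster positions in 25 MHz
print(bits_per_frame, bits_per_slot, carriers)
```

The 125 raster positions are one more than the 124 carriers quoted above because one 200 kHz slot at the band edge is left as a guard.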
The transmission power in the handset is limited to a maximum of 2 watts in GSM850/900 and 1 watt in GSM1800/1900.
GSM has used a variety of voice codecs to squeeze 3.1 kHz audio into between 5.6 and 13 kbit/s. Originally, two codecs, named after the types of data channel they were allocated, were used, called Half Rate (5.6 kbit/s) and Full Rate (13 kbit/s). These used a system based upon linear predictive coding (LPC). In addition to being efficient with bitrates, these codecs also made it easier to identify more important parts of the audio, allowing the air interface layer to prioritize and better protect these parts of the signal.
GSM was further enhanced in 1997 with the Enhanced Full Rate (EFR) codec, a 12.2 kbit/s codec that uses a full rate channel. Finally, with the development of UMTS, EFR was refactored into a variable-rate codec called AMR-Narrowband, which is high quality and robust against interference when used on full rate channels, and less robust but still relatively high quality when used in good radio conditions on half-rate channels.
There are four different cell sizes in a GSM network—macro, micro, pico and umbrella cells. The coverage area of each cell varies according to the implementation environment. Macro cells can be regarded as cells where the base station antenna is installed on a mast or a building above average roof top level. Micro cells are cells whose antenna height is under average roof top level; they are typically used in urban areas. Picocells are small cells whose coverage diameter is a few dozen meters; they are mainly used indoors. Umbrella cells are used to cover shadowed regions of smaller cells and fill in gaps in coverage between those cells.
Cell horizontal radius varies depending on antenna height, antenna gain and propagation conditions from a couple of hundred meters to several tens of kilometers. The longest distance the GSM specification supports in practical use is 35 kilometres (22 mi). There are also several implementations of the concept of an extended cell, where the cell radius could be double or even more, depending on the antenna system, the type of terrain and the timing advance.
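The 35 km figure above can be traced to GSM's timing-advance limit of 63 bit periods, a standard value not stated in the text, so treat this as a back-of-envelope check:

```python
C = 299_792_458                 # speed of light, m/s
BIT_PERIOD_S = 6 / 1_625_000    # one GSM bit period, ≈ 3.69 microseconds
MAX_TA_BITS = 63                # maximum timing advance in bit periods

# Timing advance compensates the round-trip delay, so range = c * t / 2.
max_range_km = C * MAX_TA_BITS * BIT_PERIOD_S / 2 / 1000
print(round(max_range_km, 1))   # 34.9 — i.e. the ~35 km practical limit
```

Extended-cell implementations effectively relax this timing budget, which is why they can roughly double the radius.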
Indoor coverage is also supported by GSM and may be achieved by using an indoor picocell base station, or an indoor repeater with distributed indoor antennas fed through power splitters, to deliver the radio signals from an antenna outdoors to the separate indoor distributed antenna system. These are typically deployed when a lot of call capacity is needed indoors, for example in shopping centers or airports. However, this is not a prerequisite, since indoor coverage is also provided by in-building penetration of the radio signals from nearby cells.
The modulation used in GSM is Gaussian minimum-shift keying (GMSK), a kind of continuous-phase frequency shift keying. In GMSK, the signal to be modulated onto the carrier is first smoothed with a Gaussian low-pass filter prior to being fed to a frequency modulator, which greatly reduces the interference to neighboring channels (adjacent channel interference).

Monday, November 12, 2007

Electronic money

Electronic money (electronic cash, electronic currency, digital money, digital cash or digital currency) refers to money or scrip which is exchanged only electronically. Typically, this involves use of computer networks, the internet and digital stored value systems. Electronic Funds Transfer (EFT) and direct deposit are examples of electronic money. It is also a collective term for financial cryptography and the technologies enabling it.
While electronic money has been an interesting problem for cryptography, to date the use of digital cash has been relatively low-scale. One rare success has been Hong Kong's Octopus card system, which started as a transit payment system and has grown into a widely used electronic cash system. Another success is Canada's Interac network, which in 2000 surpassed cash as a retail payment method. Singapore also has an electronic money implementation for its public transportation system (commuter trains, buses, etc.), which is very similar to the Octopus card and based on the same type of card (FeliCa).
Alternative systems
Technically, electronic or digital money is a representation, or a system of debits and credits, used (but not limited to this) to exchange value within another system, or as a stand-alone system, online or offline. Sometimes the term electronic money is also used to refer to the provider itself. A private currency may use gold to provide extra security, such as digital gold currency. An e-currency system may be fully gold-backed, non-gold-backed, or both.
Many systems, such as PayPal and WebMoney, sell their electronic currency directly to the end user, but other systems, such as digital gold currencies, sell only through third-party digital currency exchangers.
In the case of the Octopus card in Hong Kong, deposits work similarly to those of banks. After Octopus Card Limited receives money for deposit from users, the money is deposited into banks, much as debit-card-issuing banks redeposit money at central banks.
Some community currencies, like some LETS systems, work with electronic transactions. Cyclos Software allows creation of electronic community currencies.
The Ripple monetary system is a project to develop a distributed system of electronic money independent of local currency.
Virtual debit cards
Various companies now sell VISA, MasterCard or Maestro debit cards which can be recharged via electronic money systems. This system has the advantage of greater privacy if the card provider is located offshore, and greater security, since the client can never be debited more than the value on the prepaid card. Such debit cards are also useful for people who do not have a bank account. Generally, cards can be recharged either with electronic currency or via a wire transfer.

Saturday, November 10, 2007

Testing of electrical installations

1.0 General
Testing is an important matter that must be carried out on every electrical installation before supply can be given. The electricity supply must not be connected before the required tests have been performed and the installation has been confirmed free of defects.
The main purpose of testing is to ensure that an installation has been completed properly and is safe to use, in accordance with the requirements of the relevant acts and regulations:

  • Electricity Supply Act 1990
  • Electricity Regulations 1994
  • Current IEE Regulations.

Tests must be carried out in accordance with the current regulations and certified by a competent person who is registered with the Electricity Supply Department and whose registration is still valid. For example, a PW2 certificate holder is only qualified to endorse Form H and to test single-phase installations, whereas a PW4 holder is qualified to endorse Form H and to test both single-phase and three-phase installations.

Tuesday, November 06, 2007

Circuit breakers (High-voltage circuit breakers)

Electrical power transmission networks are protected and controlled by high-voltage breakers. The definition of "high voltage" varies, but in power transmission work it is usually taken to be 72,500 V or over, according to a recent definition by the International Electrotechnical Commission. High-voltage breakers are nearly always solenoid-operated, with current-sensing protective relays operated through current transformers. In substations the protection relay scheme can be complex, protecting equipment and buses from various types of overload or ground/earth fault.
High-voltage breakers are broadly classified by the medium used to extinguish the arc.
Oil-filled (dead tank and live tank)
Oil-filled, minimum oil volume
Air blast
Sulfur hexafluoride
High voltage breakers are routinely available up to 765 kV AC.
Live tank circuit breakers are where the enclosure that contains the breaking mechanism is at line potential, that is, "Live". Dead tank circuit breaker enclosures are at earth potential.

Interrupting principles for high-voltage circuit-breakers
High-voltage circuit-breakers have changed greatly since they were first introduced about 40 years ago, and several interrupting principles have been developed that have contributed successively to a large reduction of the operating energy.
Current interruption in a high-voltage circuit-breaker is obtained by separating two contacts in a medium, such as sulfur hexafluoride (SF6), having excellent dielectrical and arc quenching properties. After contact separation, current is carried through an arc and is interrupted when this arc is cooled by a gas blast of sufficient intensity.
The gas blast applied to the arc must cool it rapidly enough that the gas temperature between the contacts drops from 20,000 K to less than 2,000 K in a few hundred microseconds, so that the gap can withstand the transient recovery voltage applied across the contacts after current interruption. Sulfur hexafluoride is generally used in present high-voltage circuit-breakers (of rated voltage higher than 52 kV).
In the 1980s and 1990s, the pressure necessary to blast the arc was generated mostly by gas heating using arc energy. It is now possible to use low energy spring-loaded mechanisms to drive high-voltage circuit-breakers up to 800 kV.
Brief history
The first industrial application of SF6 for current interruption dates back to 1953. High-voltage 15 kV to 161 kV load switches were developed with a breaking capacity of 600 A. The first high-voltage SF6 circuit-breaker, built in 1956 by Westinghouse, could interrupt 5 kA at 115 kV, but it had six interrupting chambers in series per pole. In 1957, the puffer-type technique was introduced for SF6 circuit breakers, in which the relative movement of a piston and a cylinder linked to the moving part generates the pressure rise necessary to blast the arc via a nozzle made of insulating material (figure 1). In this technique, the pressure rise is obtained mainly by gas compression. The first high-voltage SF6 circuit-breaker with a high short-circuit current capability was produced by Westinghouse in 1959. This dead tank circuit-breaker could interrupt 41.8 kA at 138 kV (10,000 MV·A) and 37.6 kA at 230 kV (15,000 MV·A). This performance was already significant, but the three chambers per pole and the high-pressure source needed for the blast (1.35 MPa) were constraints that had to be avoided in subsequent developments. The excellent properties of SF6 led to the fast extension of this technique in the 1970s and to its use in the development of circuit breakers with high interrupting capability, up to 800 kV.
The achievement around 1983 of the first single-break 245 kV design, and of the corresponding 420 kV to 550 kV and 800 kV designs with respectively 2, 3, and 4 chambers per pole, led to the dominance of SF6 circuit breakers across the complete range of high voltages.
Several characteristics of SF6 circuit breakers can explain their success:
Simplicity of the interrupting chamber which does not need an auxiliary breaking chamber;
Autonomy provided by the puffer technique;
The possibility to obtain the highest performance, up to 63 kA, with a reduced number of interrupting chambers;
Short break time of 2 to 2.5 cycles;
High electrical endurance, allowing at least 25 years of operation without reconditioning;
Possible compact solutions when used for GIS or hybrid switchgear;
Integrated closing resistors or synchronised operations to reduce switching overvoltages;
Reliability and availability;
Low noise levels.
The reduction in the number of interrupting chambers per pole has led to a considerable simplification of circuit breakers, as well as a reduction in the number of parts and seals required. As a direct consequence, the reliability of circuit breakers improved, as verified later by CIGRE surveys.
Thermal blast chambers
New types of SF6 breaking chambers, which implement innovative interrupting principles, have been developed over the past 15 years, with the objective of reducing the operating energy of the circuit-breaker. One aim of this evolution was to further increase reliability by reducing the dynamic forces in the pole. Developments since 1996 have seen the use of the self-blast technique of interruption for SF6 interrupting chambers.
These developments have been facilitated by the progress made in digital simulations that were widely used to optimize the geometry of the interrupting chamber and the linkage between the poles and the mechanism.
This technique has proved to be very efficient and has been widely applied for high voltage circuit breakers up to 550 kV. It has allowed the development of new ranges of circuit breakers operated by low energy spring-operated mechanisms.
The reduction of operating energy was mainly achieved by lowering the energy used for gas compression and by making increased use of arc energy to produce the pressure necessary to quench the arc and obtain current interruption. Low-current interruption, up to about 30% of rated short-circuit current, is obtained by a puffer blast.
Self-blast chambers
Further development of the thermal blast technique came with the introduction of a valve between the expansion and compression volumes. When interrupting low currents, the valve opens under the effect of the overpressure generated in the compression volume, and the arc is blown out as in a puffer circuit breaker by the compression of the gas produced by the piston action. When interrupting high currents, the arc energy produces a high overpressure in the expansion volume, which closes the valve and isolates the expansion volume from the compression volume. The overpressure necessary for breaking is then obtained by optimal use of the thermal effect and of the nozzle-clogging effect produced whenever the cross-section of the arc significantly reduces the exhaust of gas through the nozzle. To avoid excessive energy consumption by gas compression, a further valve is fitted on the piston to limit the overpressure in the compression volume to the value necessary for interrupting low short-circuit currents.


This technique, known as “self-blast”, has been used extensively since 1996 for the development of many types of interrupting chambers. The increased understanding of arc interruption obtained by digital simulations and validated through breaking tests contributes to the higher reliability of these self-blast circuit breakers. In addition, the reduction in operating energy allowed by the self-blast technique leads to longer service life.
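The valve behaviour described above amounts to a simple rule. The sketch below is an illustrative model only: the low-current threshold of roughly 30% of rated short-circuit current is taken from the puffer-blast figure quoted earlier, and the function name is invented, not part of any real protection library:

```python
def interruption_mode(current_ka, rated_short_circuit_ka):
    """Toy model of a self-blast chamber's valve logic.
    Low currents: valve open, puffer compression blasts the arc.
    High currents: arc-generated overpressure closes the valve and the
    thermal (self-blast) effect quenches the arc."""
    LOW_CURRENT_FRACTION = 0.30   # ~30% of rated short-circuit current
    if current_ka <= LOW_CURRENT_FRACTION * rated_short_circuit_ka:
        return "puffer blast (valve open)"
    return "self-blast (valve closed by arc overpressure)"

# For a breaker rated 63 kA short-circuit current:
print(interruption_mode(10, 63))   # low current  -> puffer blast
print(interruption_mode(50, 63))   # high current -> self-blast
```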

Double motion of contacts

An important decrease in operating energy can also be obtained by reducing the kinetic energy consumed during the tripping operation. One way is to displace the two arcing contacts in opposite directions so that the arc speed is half that of a conventional layout with a single mobile contact.

The thermal and self blast principles have enabled the use of low energy spring mechanisms for the operation of high voltage circuit breakers. They progressively replaced the puffer technique in the 1980s; first in 72.5 kV breakers, and then from 145 kV to 800 kV.

Comparison of single motion and double motion techniques

The double motion technique halves the tripping speed of the moving part. In principle, the kinetic energy could be quartered if the total moving mass was not increased. However, as the total moving mass is increased, the practical reduction in kinetic energy is closer to 60%. The total tripping energy also includes the compression energy, which is almost the same for both techniques. Thus, the reduction of the total tripping energy is lower, about 30%, although the exact value depends on the application and the operating mechanism. Depending on the specific case, either the double motion or the single motion technique can be cheaper. Other considerations, such as rationalization of the circuit-breaker range, can also influence the cost.
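The percentages above can be reproduced with the kinetic-energy formula E = ½mv². The mass factor of 1.6 below is an assumed illustrative value chosen to match the ~60% figure, and the equal compression-energy term is likewise an assumption consistent with the text:

```python
def kinetic_energy(mass, speed):
    return 0.5 * mass * speed ** 2

m, v = 1.0, 1.0                          # normalised single-motion mass/speed
e_single = kinetic_energy(m, v)

# Double motion: each contact moves at v/2. With no mass penalty the
# kinetic energy would drop to a quarter:
e_ideal = kinetic_energy(m, v / 2)       # 0.25 x e_single

# In practice the moving mass grows; a factor of ~1.6 (assumed here)
# reproduces the ~60% kinetic-energy reduction quoted above:
e_real = kinetic_energy(1.6 * m, v / 2)  # 0.40 x e_single, i.e. -60%

# Adding an equal compression-energy term shows why the *total* tripping
# energy falls by only ~30%:
compression = e_single                   # assumed comparable to kinetic term
total_single = e_single + compression
total_double = e_real + compression
print(round(1 - total_double / total_single, 2))  # 0.3 -> ~30% total reduction
```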
Thermal blast chamber with arc-assisted opening
In this interruption principle arc energy is used, on the one hand to generate the blast by thermal expansion and, on the other hand, to accelerate the moving part of the circuit breaker when interrupting high currents. The overpressure produced by the arc energy downstream of the interruption zone is applied on an auxiliary piston linked with the moving part. The resulting force accelerates the moving part, thus increasing the energy available for tripping.
With this interrupting principle it is possible, during high-current interruptions, to increase by about 30% the tripping energy delivered by the operating mechanism and to maintain the opening speed independently of the current. It is obviously better suited to circuit-breakers with high breaking currents such as Generator circuit-breakers.
Generator circuit-breakers
Generator circuit-breakers are connected between a generator and the step-up voltage transformer. They are generally used at the outlet of high-power generators (100 MVA to 1800 MVA) in order to protect them in a reliable, fast and economic manner. Such circuit breakers must be able to carry high continuous currents in normal service (6.3 kA to 40 kA) and have a high breaking capacity (63 kA to 275 kA). They belong to the medium-voltage range, but the TRV withstand capability required by ANSI/IEEE Standard C37.013 is such that the interrupting principles developed for the high-voltage range must be used. A particular embodiment of the thermal blast technique has been developed and applied to generator circuit-breakers. The self-blast technique described above is also widely used in SF6 generator circuit breakers, in which the contact system is driven by a low-energy, spring-operated mechanism. An example of such a device is shown in the figure below; this circuit breaker is rated for 17.5 kV and 63 kA.
Evolution of tripping energy
The operating energy has been reduced by a factor of 5 to 7 over this period of 27 years, which illustrates the great progress made in interrupting techniques for high-voltage circuit-breakers.
Future perspectives
In the near future, present interrupting technologies can be applied to circuit-breakers with the higher rated breaking currents (63 kA to 80 kA) required in some networks with increasing power generation.
Self-blast and thermal blast circuit breakers are nowadays accepted worldwide and have been in service in high-voltage applications for about 15 years, starting at the 72.5 kV voltage level. Today this technique is also available for the 420/550/800 kV voltage levels.


Circuit breaker (Types of circuit breaker)


There are many different technologies used in circuit breakers and they do not always fall into distinct categories. Types that are common in domestic, commercial and light industrial applications at low voltage (less than 1000 V) include:
MCB (Miniature Circuit Breaker)—rated current not more than 100 A. Trip characteristics normally not adjustable. Thermal or thermal-magnetic operation. Breakers illustrated above are in this category.
MCCB (Moulded Case Circuit Breaker)—rated current up to 1000 A. Thermal or thermal-magnetic operation. Trip current may be adjustable.
Electric power systems require the breaking of higher currents at higher voltages. Examples of high-voltage AC circuit breakers are:
Vacuum circuit breaker—With rated current up to 3000 A, these breakers interrupt the current by creating and extinguishing the arc in a vacuum container. These can only be practically applied for voltages up to about 35,000 V, which corresponds roughly to the medium-voltage range of power systems. Vacuum circuit breakers tend to have longer life expectancies between overhaul than do air circuit breakers.
Air circuit breaker—Rated current up to 10,000 A. Trip characteristics are often fully adjustable, including configurable trip thresholds and delays. Usually electronically controlled; some models are microprocessor-controlled via an integral electronic trip unit. Often used for main power distribution in large industrial plants, where the breakers are arranged in draw-out enclosures for ease of maintenance.

Circuit breaker (Common trip breakers)


When supplying a branch circuit with more than one live conductor, each live conductor must be protected by a breaker pole. To ensure that all live conductors are interrupted when any pole trips, a "common trip" breaker must be used. These may either contain two or three tripping mechanisms within one case, or for small breakers, may externally tie the poles together via their operating handles. Two pole common trip breakers are common on 120/240 volt systems where 240 volt loads (including major appliances or further distribution boards) span the two live wires. Three pole common trip breakers are typically used to supply three phase power to large motors or further distribution boards.

Monday, November 05, 2007

Voice over IP

Voice over Internet Protocol, also called VoIP (pronounced voyp), IP Telephony, Internet telephony, Broadband telephony, Broadband Phone and Voice over Broadband is the routing of voice conversations over the Internet or through any other IP-based network.
Companies providing VoIP service are commonly referred to as providers, and protocols which are used to carry voice signals over the IP network are commonly referred to as Voice over IP or VoIP protocols. They may be viewed as commercial realizations of the experimental Network Voice Protocol (1973) invented for the ARPANET. Some cost savings are due to utilizing a single network to carry voice and data, especially where users have existing underutilized network capacity that can carry VoIP at no additional cost. VoIP-to-VoIP phone calls are sometimes free, while VoIP calls to public switched telephone networks (PSTN) may have a cost that is borne by the VoIP user.
Voice over IP protocols carry telephony signals as digital audio, typically reduced in data rate using speech data compression techniques, encapsulated in a data packet stream over IP.
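As a concrete sketch of that encapsulation, the fragment below builds a minimal RTP-style packet (RTP, RFC 3550, is the packet format most VoIP protocols use for the audio stream). The field values and the 160-byte G.711 frame size are illustrative assumptions, not taken from any particular product.

```python
import struct

def rtp_packet(payload: bytes, seq: int, timestamp: int, ssrc: int,
               payload_type: int = 0) -> bytes:
    """Build a minimal RTP packet: the 12-byte fixed header, no CSRCs.

    Byte 0: version=2 (top two bits), padding=0, extension=0, CSRC count=0.
    Byte 1: marker=0, payload type (0 = PCMU, i.e. G.711 mu-law).
    """
    header = struct.pack('!BBHII',
                         2 << 6,               # V=2, P=0, X=0, CC=0
                         payload_type & 0x7F,  # M=0, PT
                         seq & 0xFFFF,         # sequence number
                         timestamp & 0xFFFFFFFF,
                         ssrc & 0xFFFFFFFF)
    return header + payload

# 20 ms of 8 kHz mu-law audio = 160 payload bytes per packet
frame = bytes(160)
pkt = rtp_packet(frame, seq=1, timestamp=160, ssrc=0x1234ABCD)
```

In a real stack each successive packet would increment the sequence number by 1 and the timestamp by the number of audio samples carried, and the packet would then be handed to a UDP socket.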
There are two types of PSTN to VoIP services: Direct Inward Dialing (DID) and access numbers. DID will connect the caller directly to the VoIP user while access numbers require the caller to input the extension number of the VoIP user.
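The two entry styles can be illustrated with a toy routing table; the numbers, extensions, and user names below are entirely hypothetical.

```python
# DID: each VoIP user has a dedicated PSTN number, so the dialed number
# alone identifies the callee. Access number: all callers dial one shared
# number and then key in the user's extension.
DID_NUMBERS = {"+15551230001": "alice", "+15551230002": "bob"}
ACCESS_NUMBER = "+15551239999"
EXTENSIONS = {"101": "alice", "102": "bob"}

def route_inbound(dialed, extension=None):
    if dialed in DID_NUMBERS:
        return DID_NUMBERS[dialed]          # direct, no extra input needed
    if dialed == ACCESS_NUMBER:
        if extension is None:
            raise ValueError("caller must enter an extension")
        return EXTENSIONS[extension]
    raise KeyError("unknown number: " + dialed)
```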
Functionality
VoIP can facilitate tasks that are more difficult to achieve using the traditional telephone network:
Ability to transmit more than one telephone call down the same broadband-connected telephone line. This can make VoIP a simple way to add an extra telephone line to a home or office.
Many VoIP packages include PSTN features that most telcos (telecommunication companies) normally charge extra for, or that may be unavailable from your local telco, such as 3-way calling, call forwarding, automatic redial, and caller ID.
VoIP can be secured with existing off-the-shelf protocols such as Secure Real-time Transport Protocol (SRTP). Most of the difficulties of creating a secure telephone over traditional phone lines, such as digitizing and digital transmission, are already solved in VoIP; it is only necessary to encrypt and authenticate the existing data stream.
VoIP is location independent; only an Internet connection is needed to reach a VoIP provider. For instance, call center agents using VoIP phones can work from anywhere with a sufficiently fast and stable Internet connection.
VoIP phones can integrate with other services available over the Internet, including video conversation, message or data file exchange in parallel with the conversation, audio conferencing, managing address books and passing information about whether others (e.g. friends or colleagues) are available online to interested parties.
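The "encrypt and authenticate the existing data stream" step mentioned above can be sketched as follows. This is not SRTP itself: real SRTP encrypts with AES in counter mode, which the Python standard library does not provide, so a hash-based keystream stands in for it here; the encrypt-then-MAC ordering and the truncated 80-bit HMAC tag do, however, mirror SRTP's defaults.

```python
import hmac, hashlib

def keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    """Counter-mode keystream; SHA-256 stands in for the AES-CTR of real SRTP."""
    out, counter = b'', 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(4, 'big')).digest()
        counter += 1
    return out[:length]

def protect(enc_key, auth_key, nonce, payload):
    """Encrypt, then append an authentication tag (encrypt-then-MAC)."""
    ct = bytes(p ^ k for p, k in zip(payload, keystream(enc_key, nonce, len(payload))))
    tag = hmac.new(auth_key, nonce + ct, hashlib.sha1).digest()[:10]  # 80-bit tag
    return ct + tag

def unprotect(enc_key, auth_key, nonce, packet):
    """Verify the tag first; only then decrypt."""
    ct, tag = packet[:-10], packet[-10:]
    expect = hmac.new(auth_key, nonce + ct, hashlib.sha1).digest()[:10]
    if not hmac.compare_digest(tag, expect):
        raise ValueError("authentication failed: packet tampered or wrong key")
    return bytes(c ^ k for c, k in zip(ct, keystream(enc_key, nonce, len(ct))))

voice = b"20 ms of compressed audio"
pkt = protect(b"enc-key", b"auth-key", b"\x00" * 8, voice)
```

Note the ordering: verifying the tag before decrypting lets a receiver drop forged packets cheaply, which matters for a real-time stream.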

Key telephone system

A key system or key telephone system is a multiline telephone system typically used in small office environments.
Key systems are noted for their expandability and for having individual line selection buttons for each connected phone line; however, some features of a private branch exchange, such as dialable intercoms, may also be present.
Key systems can be built using three principal architectures:
Electromechanical shared-control
Electronic shared-control
Independent keysets
Electromechanical shared-control key systems
Before the advent of large-scale integrated circuits, key systems were typically built out of the same electromechanical components (relays) as larger telephone switching systems. The system marketed in North America as the 1A2 Key System was entirely typical and sold for many decades.
Electronic shared-control systems
With the advent of LSI ICs, the same architecture could be implemented much less expensively than was possible using relays. In addition, it was possible to eliminate the many-wire cabling and replace it with much simpler cable similar to (or even identical to) that used by non-key systems. One of the most recognized such systems is the AT&T Merlin.
Additionally, these more-modern systems allowed vastly more features including:
Answering machine functions
Remote supervision of the entire system
Automatic call accounting
Speed dialing
Caller ID
Etc.
Features could be added or modified simply using software, allowing easy customization of these systems.
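As a toy illustration of software-defined features, a speed-dial button and a call-accounting log can amount to nothing more than a table and a list; the `Keyset` class below is hypothetical and does not model any vendor's firmware.

```python
# Hypothetical in-software key system features: customization becomes a
# matter of editing a table rather than rewiring relays.
class Keyset:
    def __init__(self):
        self.speed_dial = {}   # button number -> full telephone number
        self.call_log = []     # automatic call accounting

    def program(self, button, number):
        """Reassigning a speed-dial button is just a table update."""
        self.speed_dial[button] = number

    def press(self, button):
        """Dial the number stored under a button and log the call."""
        number = self.speed_dial[button]
        self.call_log.append(number)
        return number

phone = Keyset()
phone.program(1, "555-0142")
dialed = phone.press(1)
```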
Independent keysets
LSI also allowed smaller systems to distribute the control (and features) into individual telephone sets that do not require any single shared control unit. Generally, these systems are used with relatively few telephone sets, and it is often more difficult to keep the feature set (such as speed-dialing numbers) in sync across the various sets.
PBX systems
The line between the largest key systems and full PBX systems is blurred. In the 1A2 days, the line was clear: 1A2 systems did not allow the sharing of anonymous "trunk" lines, and PBX systems did. Modern key systems blur this distinction, as many now allow trunk sharing as well.
Hybrid keyphone systems
Into the 21st century, the distinction between key systems and PBXs has become increasingly blurred. Early electronic key systems used dedicated handsets which displayed, and allowed access to, all connected PSTN lines and stations. The modern key system now supports ISDN and analogue handsets (in addition to its own dedicated handsets, usually digital) as well as a raft of features more traditionally found on larger PBX systems. The fact that they support both analogue and digital signalling types gives rise to the "hybrid" designation.
The modern key system is usually fully digital (although analogue variants persist), and some systems embrace VoIP. Indeed, key systems can now be considered to have left their humble roots and become small PBXes. Effectively, what separates a PBX from a key system is the amount, scope and complexity of the features and facilities offered.