Home insurance, also commonly called hazard insurance or homeowners insurance (often abbreviated in the real estate industry as HOI), is a type of property insurance that covers private homes. It is an insurance policy that combines various personal insurance protections, which can include losses occurring to one's home, its contents, loss of its use (additional living expenses), or loss of other personal possessions of the homeowner, as well as liability insurance for accidents that may happen at the home.
The cost of homeowners insurance often depends on what it would cost to replace the house and which additional riders—additional items to be insured—are attached to the policy. The insurance policy itself is a lengthy contract, and specifies what will and will not be paid in the case of various events. Typically, claims due to earthquakes, floods, "Acts of God", or war (whose definition typically includes a nuclear explosion from any source) are excluded. Special insurance can be purchased for these possibilities, including flood insurance and earthquake insurance. The insured value must be kept up to date as replacement costs rise and fall with inflation, and the cost of an appraisal required by the insurance company may be added to the policy premium.
The home insurance policy is usually a term contract—a contract that is in effect for a fixed period of time. The payment the insured makes to the insurer is called the premium. The insured must pay the insurer the premium each term. Most insurers charge a lower premium if it appears less likely the home will be damaged or destroyed: for example, if the house is situated next to a fire station, or if the house is equipped with fire sprinklers and fire alarms. Perpetual insurance, which is a type of home insurance without a fixed term, can also be obtained in certain areas.
In the United States, most home buyers borrow money in the form of a mortgage loan, and the mortgage lender generally requires that the buyer purchase homeowners insurance as a condition of the loan, in order to protect the bank if the home were to be destroyed. Anyone with an insurable interest in the property should be listed on the policy. In some cases the mortgagee will waive the need for the mortgagor to carry homeowner's insurance if the value of the land exceeds the amount of the mortgage balance. In such a case, even the total destruction of any buildings would not affect the lender's ability to foreclose and recover the full amount of the loan.
The insurance crisis in Florida has meant that some waterfront property owners in that state have had to make that choice, forgoing coverage because of the high cost of premiums.
As described in Wiening et al., prior to the 1950s there were separate policies for the various perils that could affect a home. A homeowner would have had to purchase separate policies covering fire losses, theft, personal property, and the like. During the 1950s, policy forms were developed allowing the homeowner to purchase all the insurance they needed in one complete policy. However, these policies varied by insurance company and were difficult to comprehend. The need for standardization grew so great that a private company based in Jersey City, New Jersey, the Insurance Services Office (ISO), was formed in 1971 to provide risk information and issued a simplified homeowners policy for resale to insurance companies. These policies have been amended over the years; currently, the ISO has seven standardized homeowners insurance forms in general and consistent use. Of these, HO-3 is the most common policy, followed by HO-4 and HO-6. Others that are less used, though still significant, are HO-1, HO-2, HO-5, and HO-8. Each is summarized below:
HO-1
A limited policy that offers varying degrees of coverage but only for items specifically outlined in the policy. These might be used to cover a valuable object found in the home, such as a painting.
HO-2
Similar to HO-1, HO-2 is a limited policy in that it covers specific portions of a house against damage. The coverage is usually a "named perils" policy, which lists the events that would be covered. As above, these factors must be spelled out in the policy.
HO-3
This policy is the most commonly written policy for a homeowner and is designed to cover all aspects of the home: the structure and its contents, as well as any liability that may arise from daily use or from visitors who suffer accident or injury on the premises. Covered aspects as well as limits of liability must be clearly spelled out in the policy to ensure proper coverage. The coverage is usually called "all risk", and the policy is also known as an "open perils" policy.
HO-4
This is commonly referred to as renters insurance or renter's coverage. Similar to HO-6, this policy covers those aspects of the apartment and its contents not specifically covered in the blanket policy written for the complex. This policy can also cover liabilities arising from accidents and intentional injuries for guests as well as passers-by within 150 feet of the domicile. Common coverage areas are events such as lightning, riot, aircraft, explosion, vandalism, smoke, theft, windstorm or hail, falling objects, volcanic eruption, snow, sleet, and weight of ice.
HO-5
This policy, similar to HO-3, covers a home (not a condo or apartment), the homeowner, and their possessions, as well as any liability that might arise from visitors or passers-by. This coverage is differentiated in that it covers a wider breadth and depth of incidents and losses than an HO-3.
HO-6
As a form of supplemental homeowner's insurance, HO-6, also known as Condominium Coverage, is designed especially for the owners of condos. It includes coverage for the part of the building owned by the insured and for the insured's property housed therein. Designed to span the gap between what the homeowner's association might cover in a blanket policy written for an entire neighborhood and those items of importance to the insured, the HO-6 typically covers liability for residents and guests of the insured in addition to personal property. Depending on the underwriter, the premium paid, and other terms of the policy, the liability coverage can extend to incidents up to 150 feet from the insured property, and the property coverage can protect valuables within the home against theft, fire, water damage, or other forms of loss. It is important to read the association's by-laws to determine the total amount of insurance needed on the dwelling.
HO-8
It is usually called "older home" insurance. It allows owners of houses whose replacement cost exceeds their market value to insure them at the lower market-value rate.
In addition, a Dwelling Fire policy is generally available for non-commercial owners of rented houses, covering property damage to the structure, and sometimes to the owner's personal property (such as appliances and furnishings). The owner's liability is generally extended from their own primary home insurance, and does not comprise part of the Dwelling Fire policy. It is a counterpart to the HO-4 renter's policy.
Tuesday, July 01, 2008
Travel insurance
Travel insurance is insurance that is intended to cover medical expenses, financial losses (such as money invested in nonrefundable pre-payments), and other losses incurred while traveling, either within one's own country or internationally.
Travel insurance can usually be arranged at the time of booking a trip to cover exactly the duration of that trip, or a more extensive, continuous policy can be purchased from (most often) travel insurance companies, travel agents, or directly from travel suppliers such as cruise lines or tour operators. However, travel insurance purchased from travel suppliers tends to be less inclusive than insurance offered by insurance companies.
Travel insurance often offers coverage for a variety of travelers. Student travel, business travel, leisure travel, adventure travel, cruise travel, and international travel are all various options that can be insured.
The most common risks that are covered by travel insurance are:
Medical expenses
Emergency evacuation/repatriation
Overseas funeral expenses
Accidental death, injury or disablement benefit
Cancellation
Curtailment
Delayed departure
Loss, theft or damage to personal possessions and money (including travel documents)
Delayed baggage (and emergency replacement of essential items)
Legal assistance
Personal liability and rental car damage excess
Some travel policies will also provide cover for additional costs, although these vary widely between providers.
In addition, separate insurance can often be purchased for specific risks such as:
pre-existing medical conditions (e.g. asthma, diabetes)
high risk sports (e.g. skiing, scuba-diving)
travel to high risk countries (e.g. due to war or natural disasters or acts of terrorism)
Common Exclusions:
pre-existing medical conditions
war or terrorism - but some plans may cover this risk
pregnancy related expenses
injury or illness caused by alcohol or drug use
Travel insurance can also provide helpful services, often available 24 hours a day, 7 days a week, that can include concierge services and emergency travel assistance.
Typically travel insurance for the duration of a journey costs approximately 5-7% of the cost of the trip.
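As a rough worked example of that rule of thumb (the trip cost below is an invented figure, not a quote from any insurer):

```python
# Rough estimate of a travel insurance premium using the 5-7% rule of thumb.
# The trip cost is a made-up example figure.
trip_cost = 3000.00  # total pre-paid, non-refundable trip cost

low_estimate = trip_cost * 0.05
high_estimate = trip_cost * 0.07
print(f"Estimated premium: {low_estimate:.2f} to {high_estimate:.2f}")
# Estimated premium: 150.00 to 210.00
```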
Friday, June 13, 2008
Vehicle insurance
Vehicle insurance is insurance purchased for cars, trucks, and other vehicles. Its primary use is to provide protection against losses incurred as a result of traffic accidents and against liability that could be incurred in an accident. In many jurisdictions it is compulsory to have vehicle insurance before using or keeping a motor vehicle on public roads. Most jurisdictions relate insurance to both the car and the driver; however, the degree of each varies greatly.
A 1994 study by Jeremy Jackson and Roger Blackman showed, consistent with the risk homeostasis theory, that increased accident costs caused large and significant reductions in accident frequencies.
Vehicle insurance can cover some or all of the following items:
The insured party
The insured vehicle
Third parties (car and people)
Different policies specify the circumstances under which each item is covered. For example, a vehicle can be insured against theft, fire damage, or accident damage independently.
An excess payment, also known as a deductible, is the fixed contribution you must pay each time your car is repaired through your car insurance policy. Normally the payment is made directly to the accident repair "garage" (an establishment where vehicles are serviced and repaired) when you collect the car. If your car is declared a "write-off" (a vehicle that is cheaper to replace than to repair), the insurance company will deduct the excess agreed on the policy from the settlement payment it makes to you.
If the accident was the other driver's fault, and this is accepted by the third party's insurer, you'll be able to reclaim your excess payment from the other person's insurance company. If the other driver is uninsured, any uninsured/underinsured motorist coverage included in the policy's minimum limits may respond instead.
A compulsory excess is the minimum excess payment your insurer will accept on your insurance policy. Minimum excesses vary according to your personal details, driving record and insurance company.
In order to reduce your insurance premium, you may offer to pay a higher excess than the compulsory excess demanded by your insurance company. Your voluntary excess is the extra amount over and above the compulsory excess that you agree to pay in the event of a claim on the policy. As a bigger excess reduces the financial risk carried by your insurer, your insurer is able to offer you a significantly lower premium.
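A minimal sketch of how the compulsory and voluntary excess described above come off a settlement for a written-off car; the figures are invented for illustration:

```python
def claim_settlement(vehicle_value, compulsory_excess, voluntary_excess):
    """Payout for a written-off vehicle after deducting the total excess
    (compulsory + voluntary) agreed on the policy."""
    total_excess = compulsory_excess + voluntary_excess
    return max(vehicle_value - total_excess, 0)

# Example: a car valued at 6,000 with a 250 compulsory and 250 voluntary excess.
print(claim_settlement(6000, 250, 250))  # 5500
```

Raising the voluntary excess shrinks the payout in a claim, which is why the insurer can offer a lower premium in return.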
Basis of premium charges:-
Depending on the jurisdiction, the insurance premium can be either mandated by the government or determined by the insurance company in accordance with a framework of regulations set by the government. Often, the insurer will have more freedom to set the price on physical damage coverages than on mandatory liability coverages.
When the premium is not mandated by the government, it is usually derived from the calculations of an actuary based on statistical data. The premium can vary depending on many factors that are believed to have an impact on the expected cost of future claims. Those factors can include the car characteristics, the coverage selected (deductible, limit, covered perils), the profile of the driver (age, gender, driving history) and the usage of the car (commute to work or not, predicted annual distance driven).
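Actuarial rating methods are proprietary and vary by insurer, but a toy multiplicative model gives a feel for how factors like those listed above might feed into a premium; every factor name and number here is an invented assumption, not an actual rating plan:

```python
# Toy illustration of factor-based premium rating.  Real actuarial models
# are far more detailed; all factors and values here are invented examples.
base_premium = 400.0  # hypothetical base rate for the coverage selected

rating_factors = {
    "young_driver": 1.8,         # higher expected claim cost
    "urban_commute": 1.2,        # car used for commuting in a dense area
    "high_annual_mileage": 1.1,  # more exposure to accidents
}

premium = base_premium
for factor in rating_factors.values():
    premium *= factor

print(f"Illustrative annual premium: {premium:.2f}")  # 950.40
```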
Gender
Men average more miles driven per year than women do, and have a proportionally higher accident involvement at all ages. Insurance companies cite women's lower accident involvement in keeping the youth surcharge lower for young women drivers than for their male counterparts, but adult rates are generally unisex. Reference to the lower rate for young women as "the women's discount" has caused confusion that was evident in news reports on a recently defeated EC proposal to make it illegal to consider gender in assessing insurance premiums. Ending the discount would have made no difference to most women's premiums.
Age
Teenage drivers who have no driving record will have higher car insurance premiums. However, young drivers are often offered discounts if they undertake further driver training on recognised courses, such as the Pass Plus scheme in the UK. In the U.S. many insurers offer a good grade discount to students with a good academic record and resident student discounts to those who live away from home. Generally, insurance premiums tend to become lower at the age of 25. Senior drivers are often eligible for retirement discounts reflecting lower average miles driven by this age group.
Distance
Some car insurance plans do not differentiate in regard to how much the car is used. However, methods of differentiation would include:
Reasonable estimation
Several car insurance plans rely on a reasonable estimation of the average annual distance expected to be driven which is provided by the insured. This discount benefits drivers who drive their cars infrequently but has no actuarial value since it is unverified.
Odometer-based systems
Cents Per Mile Now (1986) advocates classified odometer-mile rates. After the company's risk factors have been applied and the customer has accepted the per-mile rate offered, customers buy prepaid miles of insurance protection as needed, like buying gallons of gasoline. Insurance automatically ends when the odometer limit (recorded on the car's insurance ID card) is reached unless more miles are bought. Customers keep track of miles on their own odometer to know when to buy more. The company does no after-the-fact billing of the customer, and the customer doesn't have to estimate a "future annual mileage" figure for the company to obtain a discount. In the event of a traffic stop, an officer could easily verify that the insurance is current by comparing the figure on the insurance card to that on the odometer.
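A small sketch of the prepaid-miles bookkeeping described above; the per-mile rate and odometer readings are invented for the example:

```python
# Sketch of odometer-based (cents-per-mile) insurance bookkeeping.
# The rate and odometer figures are illustrative only.
rate_cents_per_mile = 6.0        # hypothetical classified per-mile rate

odometer_at_purchase = 35_000    # reading when miles were last bought
miles_purchased = 2_000          # prepaid miles of protection
coverage_limit = odometer_at_purchase + miles_purchased  # 37,000 on the ID card

def coverage_is_current(current_odometer):
    """Coverage lasts until the odometer reaches the recorded limit."""
    return current_odometer < coverage_limit

print(coverage_is_current(36_500))  # True
print(coverage_is_current(37_200))  # False - time to buy more miles
print(f"Cost of the prepaid miles: {miles_purchased * rate_cents_per_mile / 100:.2f}")  # 120.00
```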
Critics point out the possibility of cheating the system by odometer tampering. Although the newer electronic odometers are difficult to roll back, they can still be defeated by disconnecting the odometer wires and reconnecting them later. However, as the Cents Per Mile Now website points out:
As a practical matter, resetting odometers requires equipment plus expertise that makes stealing insurance risky and uneconomical. For example, in order to steal 20,000 miles of continuous protection while paying for only the 2,000 miles from 35,000 miles to 37,000 miles on the odometer, the resetting would have to be done at least nine times to keep the odometer reading within the narrow 2,000-mile covered range. There are also powerful legal deterrents to this way of stealing insurance protection. Odometers have always served as the measuring device for resale value, rental and leasing charges, warranty limits, mechanical breakdown insurance, and cents-per-mile tax deductions or reimbursements for business or government travel. Odometer tampering—detected during claim processing—voids the insurance and, under decades-old state and federal law, is punishable by heavy fines and jail.
Under the cents-per-mile system, rewards for driving less are delivered automatically without need for administratively cumbersome and costly GPS technology. Uniform per-mile exposure measurement for the first time provides the basis for statistically valid rate classes. Insurer premium income automatically keeps pace with increases or decreases in driving activity, cutting back on resulting insurer demand for rate increases and preventing today's windfalls to insurers when decreased driving activity lowers costs but not premiums.
Tuesday, June 10, 2008
Life insurance
Life insurance or life assurance is a contract between the policy owner and the insurer, where the insurer agrees to pay a sum of money upon the occurrence of the insured individual's or individuals' death or other event, such as terminal illness or critical illness. In return, the policy owner (or policy payer) agrees to pay a stipulated amount called a premium at regular intervals or in lump sums. In some countries, policies are designed so that final bills, death expenses, and funeral costs are also covered through the policy premium. In the United States, the predominant form simply specifies a lump sum to be paid on the insured's demise.
As with most insurance policies, life insurance is a contract between the insurer and the policy owner (policyholder) whereby a benefit is paid to the designated Beneficiary (or Beneficiaries) if an insured event occurs which is covered by the policy. To be a life policy the insured event must be based upon life (or lives) of the people named in the policy.
Insured events that may be covered include:
Sickness
Life policies are legal contracts and the terms of the contract describe the limitations of the insured events. Specific exclusions are often written into the contract to limit the liability of the insurer; for example claims relating to suicide, fraud, war, riot and civil commotion.
Life based contracts tend to fall into two major categories:
Protection policies - designed to provide a benefit in the event of a specified event, typically a lump sum payment. A common form of this design is term insurance.
Investment policies - where the main objective is to facilitate the growth of capital by regular or single premiums. Common forms (in the US anyway) are whole life, universal life, and variable life policies.
Parties to contract
There is a difference between the insured and the policy owner (policy holder), although the owner and the insured are often the same person. For example, if Joe buys a policy on his own life, he is both the owner and the insured. But if Jane, his wife, buys a policy on Joe's life, she is the owner and he is the insured. The policy owner is the guarantor, and he or she is the person who pays for the policy. The insured is a participant in the contract, but not necessarily a party to it.
The beneficiary receives policy proceeds upon the insured's death. The owner designates the beneficiary, but the beneficiary is not a party to the policy. The owner can change the beneficiary unless the policy has an irrevocable beneficiary designation. With an irrevocable beneficiary, that beneficiary must agree to any beneficiary changes, policy assignments, or cash value borrowing.
In cases where the policy owner is not the insured (also referred to as the cestui que vie or CQV), insurance companies have sought to limit policy purchases to those with an "insurable interest" in the CQV. For life insurance policies, close family members and business partners will usually be found to have an insurable interest. The "insurable interest" requirement usually demonstrates that the purchaser will actually suffer some kind of loss if the CQV dies. Such a requirement prevents people from benefiting from the purchase of purely speculative policies on people they expect to die. With no insurable interest requirement, the risk that a purchaser would murder the CQV for insurance proceeds would be great. In at least one case, an insurance company which sold a policy to a purchaser with no insurable interest (who later murdered the CQV for the proceeds) was found liable in court for contributing to the wrongful death of the victim (Liberty National Life v. Weldon, 267 Ala. 171 (1957)).
Contract terms
Special provisions may apply, such as suicide clauses, wherein the policy becomes null if the insured commits suicide within a specified time (usually two years after the purchase date; some states provide a statutory one-year suicide clause). Any misrepresentations by the insured on the application are also grounds for nullification. Most US states specify that the contestability period cannot be longer than two years; only if the insured dies within this period will the insurer have a legal right to contest the claim on the basis of misrepresentation and request additional information before deciding to pay or deny the claim.
The face amount on the policy is the initial amount that the policy will pay at the death of the insured or when the policy matures, although the actual death benefit can provide for greater or lesser than the face amount. The policy matures when the insured dies or reaches a specified age (such as 100 years old).
Costs, insurability, and underwriting
The insurer (the life insurance company) calculates the policy prices with intent to fund claims to be paid and administrative costs, and to make a profit. The cost of insurance is determined using mortality tables calculated by actuaries. Actuaries are professionals who employ actuarial science, which is based in mathematics (primarily probability and statistics). Mortality tables are statistically-based tables showing expected annual mortality rates. It is possible to derive life expectancy estimates from these mortality assumptions. Such estimates can be important in taxation regulation.
The three main variables in a mortality table have been age, gender, and use of tobacco. More recently in the US, preferred class specific tables were introduced. The mortality tables provide a baseline for the cost of insurance. In practice, these mortality tables are used in conjunction with the health and family history of the individual applying for a policy in order to determine premiums and insurability. Mortality tables currently in use by life insurance companies in the United States are individually modified by each company using pooled industry experience studies as a starting point. In the 1980s and 1990s the SOA 1975-80 Basic Select & Ultimate tables were the typical reference points, while the 2001 VBT and 2001 CSO tables were published more recently. The newer tables include separate mortality tables for smokers and non-smokers, and the CSO tables include separate tables for preferred classes.
Recent US select mortality tables predict that roughly 0.35 in 1,000 non-smoking males aged 25 will die during the first year of coverage after underwriting. Mortality approximately doubles for every extra ten years of age, so that the mortality rate in the first year for underwritten non-smoking men is about 2.5 in 1,000 people at age 65. Compare this with the US population male mortality rates of 1.3 per 1,000 at age 25 and 19.3 at age 65 (without regard to health or smoking status).
The mortality of underwritten persons rises much more quickly than that of the general population. At the end of 10 years, the mortality of that 25-year-old, non-smoking male is 0.66/1,000/year. Consequently, in a group of one thousand 25-year-old males with a $100,000 policy, all of average health, a life insurance company would have to collect approximately $50 a year from each member of a large group to cover the relatively few expected claims (0.35 to 0.66 expected deaths per 1,000 each year x $100,000 payout per death = $35 to $66 per policy, averaging roughly $50). Administrative costs and sales commissions need to be accounted for in order for this to make business sense. A 10-year policy for a 25-year-old non-smoking male with preferred medical history may get offers as low as $90 per year for a $100,000 policy in the competitive US life insurance market.
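Restating that arithmetic as a short sketch, using the mortality rates quoted above:

```python
# Expected annual claim cost per policy for underwritten 25-year-old
# non-smoking males, using the select mortality rates quoted above.
face_amount = 100_000            # death benefit in dollars
mortality_year_1 = 0.35 / 1000   # deaths per insured in the first policy year
mortality_year_10 = 0.66 / 1000  # deaths per insured ten years later

cost_year_1 = mortality_year_1 * face_amount    # dollars per policy, year 1
cost_year_10 = mortality_year_10 * face_amount  # dollars per policy, year 10

print(cost_year_1, cost_year_10)  # 35.0 66.0
# Averaged over the term, roughly $50 a year per policy covers expected
# claims, before administrative costs and sales commissions are added.
```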
The insurance company receives the premiums from the policy owner and invests them to create a pool of money from which it can pay claims and finance the insurance company's operations. Contrary to popular belief, the majority of the money that insurance companies make comes directly from premiums paid, as money gained through investing premiums can never, even in the most ideal market conditions, yield enough money per year to pay out claims. Rates charged for life insurance increase with the insured's age because, statistically, people are more likely to die as they get older.
Given that adverse selection can have a negative impact on the insurer's financial situation, the insurer investigates each proposed insured individual, beginning with the application process, unless the policy is below a company-established minimum amount. Group insurance policies are an exception.
This investigation and resulting evaluation of the risk is termed underwriting. Health and lifestyle questions are asked. Certain responses or information received may merit further investigation. Life insurance companies in the United States support the Medical Information Bureau (MIB), which is a clearinghouse of information on persons who have applied for life insurance with participating companies in the last seven years. As part of the application, the insurer receives permission to obtain information from the proposed insured's physicians.
Underwriters will determine the purpose of insurance. The most common is to protect the owner's family or financial interests in the event of the insured's demise. Other purposes include estate planning or, in the case of cash-value contracts, investment for retirement planning. Bank loans or buy-sell provisions of business agreements are another acceptable purpose.
Life insurance companies are never required by law to underwrite or to provide coverage to anyone, with the exception of Civil Rights Act compliance requirements. Insurance companies alone determine insurability, and some people, for their own health or lifestyle reasons, are deemed uninsurable. The policy can be declined (turned down) or rated. Rating increases the premiums to provide for additional risks relative to the particular insured.
Many companies use four general health categories for those evaluated for a life insurance policy. These categories are Preferred Best, Preferred, Standard, and Tobacco. Preferred Best is reserved only for the healthiest individuals in the general population. This means, for instance, that the proposed insured has no adverse medical history, is not under medication for any condition, and his family (immediate and extended) has no history of early cancer, diabetes, or other conditions. Preferred means that the proposed insured is currently under medication for a medical condition and has a family history of particular illnesses. Most people are in the Standard category. Profession, travel, and lifestyle factor into whether the proposed insured will be granted a policy, and into which category the insured falls. For example, a person who would otherwise be classified as Preferred Best may be denied a policy if he or she travels to a high risk country. Underwriting practices can vary from insurer to insurer, which provides for more competitive offers in certain circumstances.
Life insurance contracts are written on the basis of utmost good faith. That is, the proposer and the insurer both accept that the other is acting in good faith. This means that the proposer can assume the contract offers what it represents without having to comb through the small print, and the insurer assumes the proposer is being honest when providing details to the underwriter.
Death proceeds
Upon the insured's death, the insurer requires acceptable proof of death before it pays the claim. The normal minimum proof required is a death certificate and the insurer's claim form, completed, signed, and typically notarized. If the insured's death is suspicious and the policy amount is large, the insurer may investigate the circumstances surrounding the death before deciding whether it has an obligation to pay the claim.
Proceeds from the policy may be paid as a lump sum or as an annuity, which is paid over time in regular recurring payments for either a specified period or for a beneficiary's lifetime.
Sunday, June 01, 2008
Health insurance
The term health insurance is generally used to describe a form of insurance that pays for medical expenses. It is sometimes used more broadly to include insurance covering disability or long-term nursing or custodial care needs. It may be provided through a government-sponsored social insurance program, or from private insurance companies. It may be purchased on a group basis (e.g., by a firm to cover its employees) or purchased by individual consumers. In each case, the covered groups or individuals pay premiums or taxes to help protect themselves from high or unexpected healthcare expenses. Similar benefits paying for medical expenses may also be provided through social welfare programs funded by the government.
Health insurance works by estimating the overall risk of healthcare expenses and developing a routine finance structure (such as a monthly premium or annual tax) that will ensure that money is available to pay for the healthcare benefits specified in the insurance agreement. The benefit is administered by a central organization, most often either a government agency or a private or not-for-profit entity operating a health plan.
The concept of health insurance was proposed in 1694 by Hugh the Elder Chamberlen from the Peter Chamberlen family. In the late 19th century, "accident insurance" began to be available, which operated much like modern disability insurance. This payment model continued until the start of the 20th century in some jurisdictions (like California), where all laws regulating health insurance actually referred to disability insurance.
Accident insurance was first offered in the United States by the Franklin Health Assurance Company of Massachusetts. This firm, founded in 1850, offered insurance against injuries arising from railroad and steamboat accidents. Sixty organizations were offering accident insurance in the US by 1866, but the industry consolidated rapidly soon thereafter. While there were earlier experiments, the origins of sickness coverage in the US effectively date from 1890. The first employer-sponsored group disability policy was issued in 1911.
Before the development of medical expense insurance, patients were expected to pay all health care costs out of their own pockets, under what is known as the fee-for-service business model. During the middle to late 20th century, traditional disability insurance evolved into modern health insurance programs. Today, most comprehensive private health insurance programs cover the cost of routine, preventive, and emergency health care procedures, and also most prescription drugs, but this was not always the case.
Hospital and medical expense policies were introduced during the first half of the 20th century. During the 1920s, individual hospitals began offering services to individuals on a pre-paid basis, eventually leading to the development of Blue Cross organizations. The predecessors of today's Health Maintenance Organizations (HMOs) originated beginning in 1929, through the 1930s and on during World War II.
A health insurance policy is a contract between an insurance company and an individual. The contract can be renewable annually or monthly. The type and amount of health care costs that will be covered by the health plan are specified in advance, in the member contract or Evidence of Coverage booklet. The individual policy-holder's payment obligations may take several forms, and a short worked sketch of how they combine follows the list:-
- Premium: The amount the policy-holder pays to the health plan each month to purchase health coverage.
- Deductible: The amount that the policy-holder must pay out-of-pocket before the health plan pays its share. For example, a policy-holder might have to pay a $500 deductible per year, before any of their health care is covered by the health plan. It may take several doctor's visits or prescription refills before the policy-holder reaches the deductible and the health plan starts to pay for care.
- Copayment: The amount that the policy-holder must pay out of pocket before the health plan pays for a particular visit or service. For example, a policy-holder might pay a $45 copayment for a doctor's visit, or to obtain a prescription. A copayment must be paid each time a particular service is obtained.
- Coinsurance: Instead of paying a fixed amount up front (a copayment), the policy-holder must pay a percentage of the total cost. For example, the member might have to pay 20% of the cost of a surgery, while the health plan pays the other 80%. Because there is no upper limit on coinsurance, the policy-holder can end up owing very little, or a significant amount, depending on the actual costs of the services they obtain.
- Exclusions: Not all services are covered. The policy-holder is generally expected to pay the full cost of non-covered services out of their own pocket.
- Coverage limits: Some health plans only pay for health care up to a certain dollar amount. The policy-holder may be expected to pay any charges in excess of the health plan's maximum payment for a specific service. In addition, some plans have annual or lifetime coverage maximums. In these cases, the health plan will stop payment when they reach the benefit maximum, and the policy-holder must pay all remaining costs.
- Out-of-pocket maximums: Similar to coverage limits, except that in this case, the member's payment obligation ends when they reach the out-of-pocket maximum, and the health plan pays all further covered costs. Out-of-pocket maximums can be limited to a specific benefit category (such as prescription drugs) or can apply to all coverage provided during a specific benefit year.
- Capitation: An amount paid by an insurer to a health care provider, for which the provider agrees to treat all members of the insurer.
- In-Network Provider: A health care provider on a list of providers preselected by the insurer. The insurer will offer discounted coinsurance or copayments, or additional benefits, to a plan member to see an in-network provider. Generally, providers in network are providers who have a contract with the insurer to accept rates further discounted from the "usual and customary" charges the insurer pays to out-of-network providers.
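To show how a deductible, coinsurance, and an out-of-pocket maximum combine, here is the simplified sketch referred to above; the dollar amounts and the order in which cost sharing applies are assumptions, since real plans differ in the details:

```python
def member_responsibility(allowed_charges, deductible, coinsurance_rate,
                          out_of_pocket_max):
    """Simplified split of a bill between the member and the health plan.

    Assumes the deductible applies first, then coinsurance on the
    remainder, capped by the out-of-pocket maximum.  Real plans vary.
    """
    member = min(allowed_charges, deductible)
    remaining = allowed_charges - member
    member += remaining * coinsurance_rate
    member = min(member, out_of_pocket_max)
    plan = allowed_charges - member
    return member, plan

# Example: a $10,000 bill under a $500 deductible, 20% coinsurance,
# and a $3,000 out-of-pocket maximum (all hypothetical figures).
print(member_responsibility(10_000, 500, 0.20, 3_000))  # (2400.0, 7600.0)
```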
Prescription drug plans are a form of insurance offered through some employer benefit plans in the US, where the patient pays a copayment and the prescription drug insurance pays part or all of the balance for drugs covered in the formulary of the plan.
Some, if not most, health care providers in the United States will agree to bill the insurance company if patients are willing to sign an agreement that they will be responsible for the amount that the insurance company doesn't pay. The insurance company pays out of network providers according to "reasonable and customary" charges, which may be less than the provider's usual fee. The provider may also have a separate contract with the insurer to accept what amounts to a discounted rate or capitation to the provider's standard charges. It generally costs the patient less to use an in-network provider.
Historically, HMOs tended to use the term "health plan", while commercial insurance companies used the term "health insurance". A health plan can also refer to a subscription-based medical care arrangement offered through a health maintenance organization (HMO), PPO, or POS plan. These plans are similar to pre-paid dental, pre-paid legal, and pre-paid vision plans. Pre-paid health plans typically pay for a fixed number of services (for instance, $300 in preventive care, a certain number of days of hospice care or care in a skilled nursing facility, a fixed number of home health visits, a fixed number of spinal manipulation charges, etc.). The services offered are usually at the discretion of a utilization review nurse who is often contracted through the managed care entity providing the subscription health plan. This determination may be made either prior to or after hospital admission (concurrent utilization review).
Comprehensive health insurance pays a percentage (which may be 100, 90, 80, 70, 60, or 50 percent) of the cost of hospital and physician charges after a deductible (usually applied to hospital charges) or a co-pay (usually applied to physician charges, but sometimes to certain hospital services) is met by the insured. These plans are generally expensive because of the high potential benefit payout — $1,000,000 to $5,000,000 is common — and because of the vast array of covered benefits. Scheduled health insurance plans are not meant to replace traditional comprehensive health insurance plans; they are more basic policies providing access to day-to-day health care such as going to the doctor or getting a prescription drug. In recent years, these plans have taken the name mini-med plans or association plans. These plans may provide benefits for hospitalization and surgery, but these benefits will be limited. Scheduled plans are not meant to be effective for catastrophic events. These plans cost much less than comprehensive health insurance. They generally pay limited benefit amounts directly to the service provider, and payments are based upon the plan's "schedule of benefits". Annual benefit maximums for a typical scheduled health insurance plan may range from $1,000 to $25,000.
Social health insurance:-
Social health insurance (SHI) is a method for financing health care costs through a social insurance program based on the collection of funds contributed by individuals, employers, and sometimes government subsidies. It is one of the five main ways that health care systems are funded.
SHI systems are characterized by the presence of sickness funds, which usually receive a proportional contribution of their members' wages. With these contributions the funds pay the medical costs of their members, to the extent that the services are included in the (sometimes nationally defined) benefit package. Affiliation to such funds is usually based on professional, geographic, religious/political and/or non-partisan criteria. (Saltman 2004, p. 8-9) Usually, there are user fees for several health care services to inhibit usage and to keep social health insurance affordable.
Otto von Bismarck was the first to make social health insurance mandatory on a national scale (in Germany), but social health insurance had already been common for many centuries before among guilds, mainly in continental Europe. Countries with SHI systems include Austria, Belgium, Germany, France, and Luxembourg. Generally, their per capita health expenditures are higher than in tax-based systems. Such predominantly tax-based systems tend to be called "National Health Systems" (or "Beveridge systems", named after William Beveridge, who was in charge of writing the Beveridge report). Some see this label as inappropriate, as the health care systems have been largely decentralized beyond the national level in these countries.
Friday, May 23, 2008
E-commerce
Electronic commerce, commonly known as e-commerce or eCommerce, consists of the buying and selling of products or services over electronic systems such as the Internet and other computer networks. The amount of trade conducted electronically has grown extraordinarily since the spread of the Internet. A wide variety of commerce is conducted in this way, spurring and drawing on innovations in electronic funds transfer, supply chain management, Internet marketing, online transaction processing, electronic data interchange (EDI), inventory management systems, and automated data collection systems. Modern electronic commerce typically uses the World Wide Web at least at some point in the transaction's lifecycle, although it can encompass a wider range of technologies such as e-mail as well.
A large percentage of electronic commerce is conducted entirely electronically for virtual items such as access to premium content on a website, but most electronic commerce involves the transportation of physical items in some way. Online retailers are sometimes known as e-tailers and online retail is sometimes known as e-tail. Almost all big retailers have electronic commerce presence on the World Wide Web.
Electronic commerce that is conducted between businesses is referred to as Business-to-business or B2B. B2B can be open to all interested parties (e.g. commodity exchange) or limited to specific, pre-qualified participants (private electronic market).
Electronic commerce is generally considered to be the sales aspect of e-business. It also consists of the exchange of data to facilitate the financing and payment aspects of business transactions.
Early development
The meaning of electronic commerce has changed over the last 30 years. Originally, electronic commerce meant the facilitation of commercial transactions electronically, using technology such as Electronic Data Interchange (EDI) and Electronic Funds Transfer (EFT). These were both introduced in the late 1970s, allowing businesses to send commercial documents like purchase orders or invoices electronically. The growth and acceptance of credit cards, automated teller machines (ATM) and telephone banking in the 1980s were also forms of electronic commerce. From the 1990s onwards, electronic commerce would additionally include enterprise resource planning systems (ERP), data mining and data warehousing.
The earliest example of many-to-many electronic commerce in physical goods was the Boston Computer Exchange, a marketplace for used computers launched in 1982. The first online information marketplace, including online consulting, was likely the American Information Exchange, another pre-Internet online system introduced in 1991.
In the United States, some electronic commerce activities are regulated by the Federal Trade Commission (FTC). These activities include the use of commercial e-mails, online advertising and consumer privacy. The CAN-SPAM Act of 2003 establishes national standards for direct marketing over e-mail. The Federal Trade Commission Act regulates all forms of advertising, including online advertising, and states that advertising must be truthful and non-deceptive. Using its authority under Section 5 of the FTC Act, which prohibits unfair or deceptive practices, the FTC has brought a number of cases to enforce the promises in corporate privacy statements, including promises about the security of consumers’ personal information. As a result, any corporate privacy policy related to e-commerce activity may be subject to enforcement by the FTC.
Contemporary electronic commerce involves everything from ordering "digital" content for immediate online consumption, to ordering conventional goods and services, to "meta" services that facilitate other types of electronic commerce.
On the consumer level, electronic commerce is mostly conducted on the World Wide Web. An individual can go online to purchase anything from books and groceries to expensive items like real estate. Another example is online banking: paying bills online, buying stocks, transferring funds from one account to another, and initiating wire payments to another country. All these activities can be done with a few keystrokes on the keyboard.
On the institutional level, big corporations and financial institutions use the internet to exchange financial data to facilitate domestic and international business. Data integrity and security are very hot and pressing issues for electronic commerce these days.
Thursday, May 15, 2008
Financial accountancy
Financial accountancy (or financial accounting) is the field of accountancy concerned with the preparation of financial statements for decision makers, such as stockholders, suppliers, banks, employees, government agencies, owners, and other stakeholders. The fundamental need for financial accounting is to reduce the principal-agent problem by measuring and monitoring agents' performance and reporting the results to interested users.
Financial accountancy is used to prepare accounting information for people outside the organization or not involved in the day to day running of the company. Managerial accounting provides accounting information to help managers make decisions to manage the business.
Financial accountancy is governed by both local and international accounting standards.
Basic accounting concepts:-
Financial accountants produce financial statements based on Generally Accepted Accounting Principles (GAAP) of a respective country.
Financial accounting serves the following purposes:
producing general purpose financial statements
providing information used by the management of a business entity for decision making, planning and performance evaluation
meeting regulatory requirements
Meaning of the accounting equation:-
The value of a company can be understood simply as the useful assets that ownership of a company entitles one to claim. This value is known as Owners' Equity. Some assets of a company, however, cannot be claimed as equity by the owners of a company because other people have legal claim to them - for example if the company has borrowed money from the bank. The value of a resource claimable by a non-owner is called a liability. All of the Assets of a company can be claimed by someone, whether owner or not, so the sum of a company's equity and its liabilities must equal the value of its Assets. Thus the accounting equation describes what portion of a company's assets can be claimed by the owners.
Various account types are classified as 'credit' or 'debit' depending on the role they play in the accounting equation.
Assets = Liabilities + Equity or Assets - Liabilities - Equity = 0
Another way of stating it is:
Equity = Assets - Liabilities
which can be interpreted as: "Equity is what is left if all assets have been sold and all liabilities have been paid".
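A minimal numeric check of the equation, with invented balances:

```python
# The accounting equation with invented balances: Assets = Liabilities + Equity.
assets = 150_000       # e.g. cash, inventory, building
liabilities = 60_000   # e.g. bank loan, accounts payable

equity = assets - liabilities          # what is left for the owners
assert assets == liabilities + equity  # the equation always balances

print(equity)  # 90000
```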
There are several related professional qualifications in the field of financial accountancy including:
Qualified Accountant qualifications (Chartered Certified Accountant (ACCA), Chartered Accountant (CA) and Certified Public Accountant (CPA))
CCA Chartered Cost Accountant (cost control) designation offered by the American Academy of Financial Management.
Accounting analyst:-
An accounting analyst evaluates and interprets public company financial statements. Public companies issue these annual financial statements (Form 10-K) as required by the Securities and Exchange Commission. The statements include the balance sheet, the income statement, the statement of cash flows and the notes to the financial statements. Specifically, the notes to the financial statements contain considerable quantitative detail supporting the financial statements along with narrative information.
This individual has extensive training in understanding financial accounting principles for public companies based on generally accepted accounting principles as provided by the Financial Accounting Standards Board. Or, he/she may have additional experience in applying international accounting standards based on the rules put out by the International Accounting Standards Board.
As an example, the accounting analyst may work for a financial research company evaluating differing financial accounting principles and how they influence the company's reported wealth.
The accounting analyst will most likely hold a master's degree in accounting (MSAcc) and will have specialized in the financial accounting area. Alternatively, the analyst may hold an MBA degree with an accounting specialization.
In addition, the analyst may hold the Chartered Certified Accountant (ACCA) or Certified Public Accountant (CPA) or Chartered Accountant (CA or ACA) designation.
Types of accounts
In accountancy, an account is a label used for recording and reporting a quantity of almost anything. Most often it is a record of an amount of money owned or owed by or to a particular person or entity, or allocated to a particular purpose. It may represent amounts of money that have actually changed hands, or it may represent an estimate of the values of assets, or it may be a combination of these.
Types of accounts:-
- Asset accounts: represent the different types of economic resources owned by a business, common examples of Asset accounts are cash, cash in bank, building, inventory, prepaid rent, goodwill, accounts receivable.
- Liability accounts: represent the different types of economic obligations by a business, such as accounts payable, bank loan, bonds payable, accrued interest.
- Equity accounts: represent the residual equity of a business (after deducting from Assets all the liabilities) including Retained Earnings and Appropriations.
- Revenue or Income accounts: represent the company's gross earnings and common examples include Sales, Service revenue and Interest Income.
- Expense accounts: represent the company's expenditures to enable itself to operate. Common examples are electricity and water, rentals, depreciation, doubtful accounts, interest, insurance.
- Contra-accounts: from the term contra, meaning against or opposite; their balances offset those of the five types of accounts mentioned above. For instance, a contra-asset account is Accumulated Depreciation, which represents deductions from a relatively permanent asset such as Building.
Chart of accounts:-
A chart of accounts (COA) is a list of all accounts tracked by a single accounting system, and should be designed to capture financial information to make good financial decisions. Each account in the chart is assigned a unique identifier, typically an account number. Each account in the Anglo-Saxon chart is classified into one of the five categories: assets, liabilities, equity, income and expenses.
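A tiny sketch of such a chart keyed by account number; the 1xxx/2xxx numbering scheme is a common convention assumed here for illustration, not something prescribed above:

```python
# Minimal chart of accounts: each account has a unique number and one of
# the five categories.  The numbering scheme is an assumed convention.
chart_of_accounts = {
    1000: ("Cash",                "Asset"),
    1200: ("Accounts receivable", "Asset"),
    2000: ("Accounts payable",    "Liability"),
    3000: ("Retained earnings",   "Equity"),
    4000: ("Sales",               "Income"),
    5000: ("Rent expense",        "Expense"),
}

for number, (name, category) in sorted(chart_of_accounts.items()):
    print(f"{number}  {name:<20} {category}")
```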
Wednesday, May 07, 2008
Finance
An entity whose income exceeds its expenditure can lend or invest the excess income. On the other hand, an entity whose income is less than its expenditure can raise capital by borrowing or selling equity claims, decreasing its expenses, or increasing its income. The lender can find a borrower through a financial intermediary such as a bank, or can buy notes or bonds in the bond market. The lender receives interest, the borrower pays a higher interest than the lender receives, and the financial intermediary pockets the difference.
A bank aggregates the activities of many borrowers and lenders. A bank accepts deposits from lenders, on which it pays interest. The bank then lends these deposits to borrowers. Banks allow borrowers and lenders of different sizes to coordinate their activity; banks thus act as intermediaries for flows of money.
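A toy illustration of that interest spread, with assumed figures:

```python
# Toy illustration of a bank's interest spread.  The rates and the
# principal are invented example figures.
deposit = 10_000.0
rate_paid_to_depositor = 0.02    # interest the lender (depositor) receives
rate_charged_to_borrower = 0.06  # interest the borrower pays

interest_to_depositor = deposit * rate_paid_to_depositor     # 200.0
interest_from_borrower = deposit * rate_charged_to_borrower  # 600.0
spread = interest_from_borrower - interest_to_depositor      # kept by the bank

print(interest_to_depositor, interest_from_borrower, spread)  # 200.0 600.0 400.0
```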
A specific example of corporate finance is the sale of stock by a company to institutional investors like investment banks, who in turn generally sell it to the public. The stock gives whoever owns it part ownership in that company. If you buy one share of XYZ Inc, and they have 100 shares outstanding (held by investors), you are 1/100 owner of that company. Of course, in return for the stock, the company receives cash, which it uses to expand its business in a process called "equity financing". Equity financing mixed with the sale of bonds (or any other debt financing) is called the company's capital structure.
Finance is used by individuals (personal finance), by governments (public finance), by businesses (corporate finance), as well as by a wide variety of organizations including schools and non-profit organizations. In general, the goals of each of the above activities are achieved through the use of appropriate financial instruments, with consideration to their institutional setting.
Finance is one of the most important aspects of business management. Without proper financial planning a new enterprise is unlikely to be successful. Managing money (a liquid asset) is essential to ensure a secure future, both for the individual and an organization.
Corporate finance
Managerial or corporate finance is the task of providing the funds for a corporation's activities. For small business, this is referred to as SME finance. It generally involves balancing risk and profitability, while attempting to maximize an entity's wealth and the value of its stock.
Long term funds are provided by ownership equity and long-term credit, often in the form of bonds. The balance between these forms the company's capital structure. Short-term funding or working capital is mostly provided by banks extending a line of credit.
Another business decision concerning finance is investment, or fund management. An investment is an acquisition of an asset in the hope that it will maintain or increase its value. In investment management -- in choosing a portfolio -- one has to decide what, how much and when to invest. To do this, a company must:
Identify relevant objectives and constraints: institution or individual goals, time horizon, risk aversion and tax considerations;
Identify the appropriate strategy: active v. passive -- hedging strategy
Measure the portfolio performance
Financial management overlaps with the financial function of the accounting profession. However, financial accounting is more concerned with the reporting of historical financial information, while financial decision-making is directed toward the future of the firm.
Personal finance
Questions in personal finance revolve around
How much money will be needed by an individual (or by a family) at various points in the future?
Where will this money come from (e.g. savings or borrowing)?
How can people protect themselves against unforeseen events in their lives, and risk in financial markets?
How can family assets be best transferred across generations (bequests and inheritance)?
How do taxes (tax subsidies or penalties) affect personal financial decisions?
How does credit affect an individual's financial standing?
How can one plan for a secure financial future in an environment of economic instability?
Personal financial decisions may involve paying for education, financing durable goods such as real estate and cars, buying insurance, e.g. health and property insurance, investing and saving for retirement.
Personal financial decisions may also involve paying for a loan.
Capital
Capital, in the financial sense, is the money which gives the business the power to buy goods to be used in the production of other goods or the offering of a service.
Sources of capital
Long Term - usually above 7 years
Share Capital
Mortgage
Retained Profit
Venture Capital
Debenture
Sale & Leaseback
Project Finance
Medium Term - usually between 2 and 7 years
Term Loans
Leasing
Hire Purchase
Short Term - usually under 2 years
Bank Overdraft
Trade Credit
Deferred Expenses
Factoring
Capital market
Long-term funds are bought and sold:
Shares
Debentures
Long-term loans, often with a mortgage bond as security
Reserve funds
Euro Bonds
Money market
Financial institutions can use short-term savings to lend out in the form of short-term loans:
Credit on open account
Bank overdraft
Short-term loans
Bills of exchange
Factoring of debtors
Borrowed capital
This is capital which the business borrows from institutions or people, and includes debentures:
Redeemable debentures
Irredeemable debentures
Debentures to bearer
Hardcore debentures
Own capital
This is capital that owners of a business (shareholders and partners, for example) provide:
Preference shares/hybrid source of finance
Ordinary preference shares
Cumulative preference shares
Participating preference share
Ordinary shares
Bonus shares
Founders' shares
Differences between shares and debentures
Shareholders are effectively owners; debenture-holders are creditors.
Shareholders may vote at AGMs and be elected as directors; debenture-holders may not vote at AGMs or be elected as directors.
Shareholders receive profit in the form of dividends; debenture-holders receive a fixed rate of interest.
If there is no profit, the shareholder does not receive a dividend; interest is paid to debenture-holders regardless of whether or not a profit has been made.
If a firm is dissolved, debenture-holders are paid before shareholders (a simple payout comparison follows this list).
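As a hedged illustration of the dividend-versus-interest distinction, the Python sketch below pays the debenture interest regardless of profit and a dividend only out of what remains; the principal, rate and profit figures are invented for the example.

    # Minimal sketch: fixed debenture interest versus profit-dependent dividends.
    # All figures are hypothetical illustration values.

    debenture_principal = 500_000.0
    debenture_rate = 0.08            # fixed rate owed to debenture-holders
    shares_outstanding = 100_000

    def distribute(profit_before_interest):
        interest = debenture_principal * debenture_rate      # paid whether or not there is profit
        residual = profit_before_interest - interest
        dividend_per_share = max(residual, 0.0) / shares_outstanding
        return interest, dividend_per_share

    for profit in (100_000.0, 40_000.0, 0.0):
        interest, dps = distribute(profit)
        print("Profit %.0f -> interest %.0f, dividend per share %.2f" % (profit, interest, dps))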
Fixed capital
This is money which is used to purchase assets that will remain permanently in the business and help it to make a profit.
Factors determining fixed capital requirements
Nature of business
Size of business
Stage of development
Capital invested by the owners
Location of the business
Working capital
This is money which is used to buy stock, pay expenses and finance credit.
Factors determining working capital requirements
Size of business
Stage of development
Time of production
Rate of stock turnover
Buying and selling terms
Seasonal consumption
Seasonal production
Posted by DOKUTAKE at 1:57 am
Monday, April 28, 2008
Transcode
Transcoding is the direct digital-to-digital conversion from one (usually lossy) codec to another. It involves decoding/decompressing the original data to a raw intermediate format (e.g. PCM for audio or YUV for video), in a way that mimics standard playback of the lossy content, and then re-encoding this into the target format. The simplest way to transcode is to decode a bitstream into the raw format using a compatible decoder and then encode the data using an encoder for a different standard. A better way is to convert the bitstream from one standard to another without putting it through the complete decode and re-encode process; many algorithms exist to achieve this.
Transrating is a similar process in which files are re-coded to a lower bitrate without changing the video format. The need for transrating arises because bitrate requirements vary from channel to channel and across the wide range of compression standards in use. Changing the picture size of a video is known as transsizing.
Compression artifacts are cumulative; therefore transcoding between lossy codecs causes a progressive loss of quality with each successive generation. For this reason, it is generally discouraged unless unavoidable. For instance, if an individual owns a digital audio player that does not support a particular format (e.g., Apple iPod and Ogg Vorbis), then the only way for the owner to use content encoded in that format is to transcode it to a supported format. It is better to retain a copy in a lossless format (such as TTA, FLAC or WavPack), and then encode directly from the lossless source file to the lossy formats required.
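As a hedged illustration of that last recommendation, the sketch below shells out to the ffmpeg command-line tool (assumed to be installed, with the libvorbis and AAC encoders available) to produce lossy copies directly from a lossless FLAC master rather than transcoding one lossy file into another; the file names and quality settings are invented for the example.

    # Minimal sketch: encode lossy copies straight from a lossless master with ffmpeg.
    # Assumes ffmpeg is on PATH; file names and quality settings are illustrative only.
    import subprocess

    lossless_master = "album_track.flac"

    jobs = [
        # (output file, extra ffmpeg arguments)
        ("album_track.ogg", ["-c:a", "libvorbis", "-q:a", "5"]),   # Ogg Vorbis, quality 5
        ("album_track.m4a", ["-c:a", "aac", "-b:a", "192k"]),      # AAC at 192 kbit/s
    ]

    for output, codec_args in jobs:
        cmd = ["ffmpeg", "-y", "-i", lossless_master, *codec_args, output]
        print("Running:", " ".join(cmd))
        subprocess.run(cmd, check=True)  # each lossy copy is one generation from the master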
Data transformation
Data transformation can be divided into two steps:
Data mapping, which maps data elements from the source to the destination and captures any transformation that must occur;
Code generation, which creates the actual transformation program.
Data element to data element mapping is frequently complicated by complex transformations that require one-to-many and many-to-one transformation rules.
The code generation step takes the data element mapping specification and creates an executable program that can be run on a computer system. Code generation can also produce transformations in easy-to-maintain computer languages such as Java or XSLT.
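As a hedged illustration of these two steps, the Python sketch below takes a small, invented mapping specification and "generates" a transformation function from it; the field names, the one-to-many rule and the helper names are assumptions made for the example, not part of any particular tool.

    # Minimal sketch: a data-mapping specification driving a generated transformation.
    # Field names and rules are hypothetical illustration values.

    # Mapping step: each destination field names its source field(s) and a rule.
    mapping_spec = {
        "customer_name": {"source": ["first_name", "last_name"],   # many-to-one
                          "rule": lambda f, l: f + " " + l},
        "street":        {"source": ["address"],                    # one-to-many (part 1)
                          "rule": lambda a: a.split(",")[0].strip()},
        "city":          {"source": ["address"],                    # one-to-many (part 2)
                          "rule": lambda a: a.split(",")[1].strip()},
    }

    # Code-generation step (here: building a callable instead of emitting Java/XSLT).
    def generate_transformer(spec):
        def transform(record):
            out = {}
            for dest, entry in spec.items():
                args = [record[s] for s in entry["source"]]
                out[dest] = entry["rule"](*args)
            return out
        return transform

    transform = generate_transformer(mapping_spec)
    print(transform({"first_name": "Ada", "last_name": "Lovelace",
                     "address": "12 Example Road, London"}))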
When the mapping is indirect via a mediating data model, the process is also called data mediation. There are numerous languages available for performing data transformation, varying in their accessibility (cost) and general usefulness. Many transformational languages require a grammar to be provided; in many cases the grammar is structured using something closely resembling Backus–Naur Form (BNF). Examples of such languages include:
XSLT - the XML transformation language
TXL - prototyping language-based descriptions using source transformation
Though transformational languages are typically best suited for transformation, something as simple as regular expressions can be used to achieve useful transformations. TextPad, for example, supports regular expressions with arguments, which allows all instances of a particular pattern to be replaced with another pattern built from parts of the original. For instance, every invocation of a function foo with three arguments could be replaced by an invocation of foo that reuses some or all of the original arguments, as sketched below.
Another advantage of regular expressions is that they pass the null-transform test: run a sample program through a transformation that performs no changes, and the output should be identical to the input. Many transformational languages fail this test.
Handling source code that contains preprocessor directives is harder. A fully general solution is very difficult because such directives can essentially edit the underlying language in arbitrary ways. However, because directives are not, in practice, used in completely arbitrary ways, practical tools for handling preprocessed languages can be built; the DMS Software Reengineering Toolkit, for example, can handle structured macros and preprocessor conditionals.
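As a hedged sketch of that regular-expression approach (the function name and arguments are invented for the example), the following Python snippet drops the third argument from every simple foo(...) call; it handles only flat argument lists, not nested parentheses or string literals.

    # Minimal sketch: regex-based rewrite of foo(a, b, c) into foo(a, b).
    # Works only for flat argument lists without nested parentheses or strings.
    import re

    source = """
    total = foo(x, y, z) + foo(1, 2, 3)
    other = bar(x, y, z)    # untouched: different function name
    """

    pattern = re.compile(r"\bfoo\(\s*([^,()]+?)\s*,\s*([^,()]+?)\s*,\s*[^,()]+?\s*\)")
    rewritten = pattern.sub(r"foo(\1, \2)", source)

    print(rewritten)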
Posted by DOKUTAKE at 4:40 am
Monday, April 21, 2008
Multimeter
A multimeter or a multitester, also known as a volt/ohm meter or VOM, is an electronic measuring instrument that combines several functions in one unit. A standard multimeter may include features such as the ability to measure voltage, current and resistance. There are two categories of multimeters: analog multimeters and digital multimeters (often abbreviated DMM).
A multimeter can be a hand-held device useful for basic fault finding and field service work or a bench instrument which can measure to a very high degree of accuracy. They can be used to troubleshoot electrical problems in a wide array of industrial and household devices such as batteries, motor controls, appliances, power supplies, and wiring systems.
Multimeters are available with a wide range of features and prices. Cheap multimeters can cost less than US$10, while top-of-the-line multimeters can cost more than US$5000.
The resolution of a multimeter is often specified in "digits" of resolution. For example, the term 5½ digits refers to the number of digits displayed on the readout of a multimeter.
By convention, a half digit can display either a zero or a one, while a three-quarters digit can display a numeral higher than one but not nine; commonly, a three-quarters digit refers to a maximum leading value of 3 or 5. The fractional digit is always the most significant digit in the displayed value. A 5½ digit multimeter would have five full digits that display values from 0 to 9 and one half digit that could only display 0 or 1. Such a meter could show positive or negative values from 0 to 199,999. A 3¾ digit meter can display a quantity from 0 to 3,999 or 5,999, depending on the manufacturer.
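As a hedged arithmetic sketch of this digit notation (the conversion follows the conventional reading given above; the example counts are only illustrative):

    # Minimal sketch: maximum display count implied by an "N-and-a-fraction digit" spec.
    # Follows the conventional reading described above; values are illustrative.

    def max_count(full_digits, leading_max):
        """full_digits full 0-9 digits plus one fractional digit whose largest value is leading_max."""
        return (leading_max + 1) * 10 ** full_digits - 1

    print("5 1/2 digits (leading digit up to 1):", max_count(5, 1))   # 199999
    print("3 3/4 digits (leading digit up to 3):", max_count(3, 3))   # 3999
    print("3 3/4 digits (leading digit up to 5):", max_count(3, 5))   # 5999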
Resolution of analog multimeters is limited by the width of the scale pointer, vibration of the pointer, parallax observation errors, and the accuracy of printing of scales. Resistance measurements, in particular, are of low precision due to the typical resistance measurement circuit which compresses the scale at the higher resistance values. Mirrored scales and larger meter movements are used to improve resolution; two and a half to three digits equivalent resolution is usual (and may be adequate for the limited precision actually necessary for most measurements).
While a digital display can easily be extended in precision, the extra digits are of no value if not accompanied by care in the design and calibration of the analog portions of the multimeter. Meaningful high-resolution measurements require a good understanding of the instrument specifications, good control of the measurement conditions, and traceability of the calibration of the instrument.
Digital multimeters generally take measurements with superior accuracy to their analog counterparts. Analog multimeters typically measure with three to five percent accuracy. Standard portable digital multimeters claim an accuracy of around 0.5% on DC voltage and current ranges. Mainstream bench-top multimeters claim accuracies as good as ±0.01%, and laboratory-grade instruments can have accuracies in the parts-per-million range.
Manufacturers can provide calibration services so that new meters may be purchased with a certificate of calibration indicating the meter has been adjusted to standards traceable to the National Institute of Standards and Technology. Such manufacturers usually provide calibration services after sales, as well, so that older equipment may be recertified. Multimeters used for critical measurements may be part of a metrology program to assure calibration.
The current load, or how much current is drawn from the circuit being tested, may affect a multimeter's accuracy; a smaller current draw usually results in more precise measurements. With improper usage or an excessive current load, a multimeter may be damaged, rendering its measurements unreliable.
Meters with electronic amplifiers in them, such as all digital multimeters and transistorized analog meters, have a standardized input impedance usually considered high enough not to disturb the circuit tested. This is often one million ohms, or ten million ohms. The standard input impedance allows use of external probes to extend the direct-current measuring range up to tens of thousands of volts.
Analog multimeters of the moving pointer type draw current from the circuit under test to deflect the meter pointer. The impedance of the meter varies depending on the basic sensitivity of the meter movement and the range which is selected. For example, a meter with a 20,000 ohms/volt sensitivity will have an input resistance of two million ohms on the 100 volt range (100 V * 20,000 ohms/volt = 2,000,000 ohms). Low-sensitivity meters are useful for general purpose testing especially in power circuits, where source impedances are low compared to the meter impedance. Measurements in signal circuits generally require higher sensitivity so as not to load down the circuit under test with the meter impedance.
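As a hedged sketch of that loading effect, the following Python snippet computes the input resistance of a moving-pointer meter on a given range and the error it introduces when measuring the midpoint of a resistive divider; the divider resistances are invented for the example.

    # Minimal sketch: loading error of an analog meter on a resistive divider.
    # Sensitivity and range follow the 20,000 ohms/volt example above;
    # the supply and divider values are hypothetical illustration values.

    sensitivity = 20_000.0    # ohms per volt
    meter_range = 100.0       # volts full scale
    r_meter = sensitivity * meter_range          # 2,000,000 ohms input resistance

    supply = 10.0             # volts
    r_top = 100_000.0         # upper divider resistor
    r_bottom = 100_000.0      # lower divider resistor (the meter is placed across it)

    unloaded = supply * r_bottom / (r_top + r_bottom)

    # The meter resistance appears in parallel with the lower resistor.
    r_parallel = r_bottom * r_meter / (r_bottom + r_meter)
    loaded = supply * r_parallel / (r_top + r_parallel)

    print("Meter input resistance: %.0f ohms" % r_meter)
    print("True (unloaded) voltage: %.3f V" % unloaded)
    print("Reading with meter connected: %.3f V" % loaded)
    print("Loading error: %.1f%%" % (100 * (unloaded - loaded) / unloaded))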
The sensitivity of a meter is also a measure of the lowest voltage, current or resistance that can be measured with it. For general-purpose digital multimeters, a full-scale range of several hundred millivolts AC or DC is common, but the minimum full-scale current range may be several hundred milliamps. Since general-purpose multimeters have only two-wire resistance measurements, which do not compensate for the effect of the lead wire resistance, measurements below a few tens of ohms will be of low accuracy. The upper end of multimeter measurement ranges varies considerably by manufacturer; generally measurements over 1000 volts, over 10 amperes, or over 100 megohms would require a specialized test instrument, as would accurate measurement of currents on the order of microamperes or less.
Since the basic indicator system in either an analog or digital meter responds to DC only, a multimeter includes an AC to DC conversion circuit for making alternating current measurements. Basic multimeters may utilize a rectifier circuit, calibrated to evaluate the average value of a rectified sine wave. User guides for such meters will give correction factors for some simple waveforms, to allow the correct root mean square (RMS) equivalent value to be calculated for the average-responding meter. More expensive multimeters will include an AC to DC converter that responds to the RMS value of the waveform for a wide range of possible waveforms; the user manual for the meter will indicate the limits of the crest factor and frequency for which the meter calibration is valid. RMS sensing is necessary for measurements of non-sinusoidal quantities, such as found in audio signals or in variable-frequency drives.
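As a hedged numerical sketch of why average-responding meters need correction factors (the waveforms are illustrative; the 1.11 form factor for a sine wave is the standard textbook value):

    # Minimal sketch: average-responding (rectifier) reading versus true RMS.
    # The square-wave case shows why RMS sensing matters for non-sinusoidal signals.
    import math

    def true_rms(samples):
        return math.sqrt(sum(v * v for v in samples) / len(samples))

    def average_responding_reading(samples, form_factor=1.11):
        """Rectified average scaled by the sine-wave form factor, as a basic meter does."""
        rectified_average = sum(abs(v) for v in samples) / len(samples)
        return form_factor * rectified_average

    n = 1000
    sine = [math.sin(2 * math.pi * k / n) for k in range(n)]
    square = [1.0 if k < n // 2 else -1.0 for k in range(n)]

    for name, wave in (("sine", sine), ("square", square)):
        print("%-6s true RMS: %.3f   average-responding reading: %.3f"
              % (name, true_rms(wave), average_responding_reading(wave)))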
Modern multimeters are often digital due to their accuracy, durability and extra features.
In a DMM the signal under test is converted to a voltage and an amplifier with an electronically controlled gain preconditions the signal.
A DMM displays the quantity measured as a number, which prevents parallax errors.
The inclusion of solid state electronics, from a control circuit to small embedded computers, has provided a wealth of convenience features in modern digital meters. Commonly available measurement enhancements include:
Auto-ranging, which selects the correct range for the quantity under test so that the most significant digits are shown. For example, a four-digit multimeter would automatically select an appropriate range to display 1.234 instead of showing 0.012 (range too high) or overloading (range too low). Auto-ranging meters usually include a facility to 'freeze' the meter to a particular range, because a measurement that causes frequent range changes is distracting to the user.
Auto-polarity for direct-current readings, shows if the applied voltage is positive (agrees with meter lead labels) or negative (opposite polarity to meter leads).
Sample and hold, which will latch the most recent reading for examination after the instrument is removed from the circuit under test.
Current-limited tests for voltage drop across semiconductor junctions. While not a replacement for a transistor tester, this facilitates testing diodes and a variety of transistor types.
A graphic representation of the quantity under test, as a bar graph. This makes go/no-go testing easy, and also allows spotting of fast-moving trends.
A low-bandwidth oscilloscope.
Automotive circuit testers, including tests for automotive timing and dwell signals.
Simple data acquisition features to record maximum and minimum readings over a given period, or to take a number of samples at fixed intervals.
Modern meters may be interfaced with a personal computer by IrDA links, RS-232 connections, USB, or an instrument bus such as IEEE-488. The interface allows the computer to record measurements as they are made. Some DMMs can store measurements and upload them to a computer. The first digital multimeter was manufactured in 1955 by Non Linear Systems.
A multimeter can utilize a variety of test probes to connect to the circuit or device under test. Crocodile clips, retractable hook clips, and pointed probes are the three most common attachments. The connectors are attached to flexible, thickly-insulated leads that are terminated with connectors appropriate for the meter. Handheld meters typically use shrouded or recessed banana jacks, while benchtop meters may use banana jacks or BNC connectors.
Meters which measure high voltages or currents may use non-contact attachment mechanisms to trade accuracy for safety. Clamp meters provide a coil that clamps around a conductor in order to measure the current flowing through it.
Almost every multimeter includes a fuse, which will generally prevent damage to the multimeter if it is overloaded. A common error when operating a multimeter is to set the meter to measure resistance or current and then connect it directly to a low-impedance voltage source; meters without protection are quickly damaged by such errors and may cause injury to the operator.
Digital meters are category rated based on their intended application, as set forth by the CEN EN61010 standard. There are four categories:
Category I: used where current levels are low.
Category II: used on residential branch circuits.
Category III: used on permanently installed loads such as distribution panels, motors, and appliance outlets.
Category IV: used on locations where current levels are high, such as service entrances, main panels, and house meters.
Each category also specifies maximum transient voltages for selected measuring ranges in the meter. Category-rated meters also feature protection from over-current faults.
Multimeters were invented in the early 1920s as radio receivers and other vacuum tube electronic devices became more common. As modern systems become more complicated, the multimeter is becoming more complex, or may be supplemented by more specialized equipment in a technician's toolkit. For example, where a general-purpose multimeter might only test for short circuits, conductor resistance and some coarse measure of insulation quality, a modern technician may use a hand-held analyzer to test several parameters in order to validate the performance of a network cable.
Posted by DOKUTAKE at 10:08 pm
Monday, April 14, 2008
The Mind
Mind collectively refers to the aspects of intellect and consciousness manifested as combinations of thought, perception, memory, emotion, will and imagination; mind is the stream of consciousness. It includes all of the brain's conscious processes. This denotation sometimes includes, in certain contexts, the working of the human unconscious or the conscious thoughts of animals. "Mind" is often used to refer especially to the thought processes of reason.
There are many theories of the mind and its function. The earliest recorded works on the mind are by Zarathushtra, the Buddha, Plato, Aristotle, Adi Shankara and other ancient Greek, Indian and Islamic philosophers. Pre-scientific theories, based in theology, concentrated on the relationship between the mind and the soul, the supposed supernatural, divine or god-given essence of the person. Modern theories, based on scientific understanding of the brain, theorise that the mind is a phenomenon of the brain and is synonymous with consciousness.
The question of which human attributes make up the mind is also much debated. Some argue that only the "higher" intellectual functions constitute mind: particularly reason and memory. In this view the emotions - love, hate, fear, joy - are more "primitive" or subjective in nature and should be seen as different from the mind. Others argue that the rational and the emotional sides of the human person cannot be separated, that they are of the same nature and origin, and that they should all be considered as part of the individual mind.
In popular usage mind is frequently synonymous with thought: It is that private conversation with ourselves that we carry on "inside our heads." Thus we "make up our minds," "change our minds" or are "of two minds" about something. One of the key attributes of the mind in this sense is that it is a private sphere to which no one but the owner has access. No-one else can "know our mind." They can only know what we communicate.
Mental faculties
Thought is a mental process which allows beings to model the world, and so to deal with it effectively according to their goals, plans, ends and desires. Words referring to similar concepts and processes include cognition, sentience, consciousness, idea, and imagination. Thinking involves the cerebral manipulation of information, as when we form concepts, engage in problem solving, reason and make decisions. Thinking is a higher cognitive function and the analysis of thinking processes is part of cognitive psychology.
Memory is an organism's ability to store, retain, and subsequently recall information. Although traditional studies of memory began in the realms of philosophy, the late nineteenth and early twentieth century put memory within the paradigms of cognitive psychology. In recent decades, it has become one of the principal pillars of a new branch of science called cognitive neuroscience, a marriage between cognitive psychology and neuroscience.
Imagination is accepted as the innate ability and process to invent partial or complete personal realms within the mind from elements derived from sense perceptions of the shared world. The term is technically used in psychology for the process of reviving in the mind percepts of objects formerly given in sense perception. Since this use of the term conflicts with that of ordinary language, some psychologists have preferred to describe this process as "imaging" or "imagery" or to speak of it as "reproductive" as opposed to "productive" or "constructive" imagination. Imagined images are seen with the "mind's eye". One hypothesis for the evolution of human imagination is that it allowed conscious beings to solve problems (and hence increase an individual's fitness) by use of mental simulation.
Consciousness is a quality of the mind generally regarded to comprise qualities such as subjectivity, self-awareness, sentience, sapience, and the ability to perceive the relationship between oneself and one's environment. It is a subject of much research in philosophy of mind, psychology, neuroscience, and cognitive science. Some philosophers divide consciousness into phenomenal consciousness, which is subjective experience itself, and access consciousness, which refers to the global availability of information to processing systems in the brain. Phenomenal consciousness is a state with qualia. Phenomenal consciousness is being something and access consciousness is being conscious of something.
Philosophy of mind
Main article: Philosophy of mind
Philosophy of mind is the branch of philosophy that studies the nature of the mind, mental events, mental functions, mental properties, consciousness and their relationship to the physical body. The mind-body problem, i.e. the relationship of the mind to the body, is commonly seen as the central issue in philosophy of mind, although there are other issues concerning the nature of the mind that do not involve its relation to the physical body. Dualism and monism are the two major schools of thought that attempt to resolve the mind-body problem. Dualism is the position that mind and body are in some way separate from each other. It can be traced back to Plato, Aristotle and the Samkhya and Yoga schools of Hindu philosophy, but it was most precisely formulated by René Descartes in the 17th century. Substance dualists argue that the mind is an independently existing substance, whereas property dualists maintain that the mind is a group of independent properties that emerge from and cannot be reduced to the brain, but that it is not a distinct substance.
Monism is the position that mind and body are not ontologically distinct kinds of entities. This view was first advocated in Western philosophy by Parmenides in the 5th century BC and was later espoused by the 17th-century rationalist Baruch Spinoza. Physicalists argue that only the entities postulated by physical theory exist, and that the mind will eventually be explained in terms of these entities as physical theory continues to evolve. Idealists maintain that the mind is all that exists and that the external world is either mental itself, or an illusion created by the mind. Neutral monists adhere to the position that there is some other, neutral substance, and that both matter and mind are properties of this unknown substance. The most common monisms in the 20th and 21st centuries have all been variations of physicalism; these positions include behaviorism, the type identity theory, anomalous monism and functionalism.
Many modern philosophers of mind adopt either a reductive or non-reductive physicalist position, maintaining in their different ways that the mind is not something separate from the body. These approaches have been particularly influential in the sciences, particularly in the fields of sociobiology, computer science, evolutionary psychology and the various neurosciences. Other philosophers, however, adopt a non-physicalist position which challenges the notion that the mind is a purely physical construct. Reductive physicalists assert that all mental states and properties will eventually be explained by scientific accounts of physiological processes and states. Non-reductive physicalists argue that although the brain is all there is to the mind, the predicates and vocabulary used in mental descriptions and explanations are indispensable, and cannot be reduced to the language and lower-level explanations of physical science. Continued neuroscientific progress has helped to clarify some of these issues. However, they are far from having been resolved, and modern philosophers of mind continue to ask how the subjective qualities and the intentionality (aboutness) of mental states and properties can be explained in naturalistic terms.
Science of mind
Psychology is the scientific study of human behaviour; noology is the study of thought. As both an academic and an applied discipline, psychology involves the scientific study of mental processes such as perception, cognition, emotion and personality, as well as environmental influences, such as social and cultural influences and interpersonal relationships, in order to devise theories of human behaviour. Psychology also refers to the application of such knowledge to various spheres of human activity, including the problems of individuals' daily lives and the treatment of mental health problems.
Psychology differs from the other social sciences (e.g., anthropology, economics, political science, and sociology) due to its focus on experimentation at the scale of the individual, as opposed to groups or institutions. Historically, psychology differed from biology and neuroscience in that it was primarily concerned with mind rather than brain, a philosophy of mind known as dualism. Modern psychological science incorporates physiological and neurological processes into its conceptions of perception, cognition, behaviour, and mental disorders.
See Sigmund Freud, Carl Jung, and the unconscious mind.
A new scientific initiative, the Decade of the Mind, seeks to advocate for the U.S. Government to invest $4 billion over the next ten years in the science of the mind.
Mental health
By analogy with the health of the body, one can speak metaphorically of a state of health of the mind, or mental health. Merriam-Webster defines mental health as "A state of emotional and psychological well-being in which an individual is able to use his or her cognitive and emotional capabilities, function in society, and meet the ordinary demands of everyday life." According to the World Health Organization (WHO), there is no one "official" definition of mental health. Cultural differences, subjective assessments, and competing professional theories all affect how "mental health" is defined. In general, most experts agree that "mental health" and "mental illness" are not opposites. In other words, the absence of a recognized mental disorder is not necessarily an indicator of mental health.
One way to think about mental health is by looking at how effectively and successfully a person functions. Feeling capable and competent; being able to handle normal levels of stress, maintaining satisfying relationships, and leading an independent life; and being able to "bounce back," or recover from difficult situations, are all signs of mental health.
Psychotherapy is an interpersonal, relational intervention used by trained psychotherapists to aid clients in problems of living. This usually includes increasing individual sense of well-being and reducing subjective discomforting experience. Psychotherapists employ a range of techniques based on experiential relationship building, dialogue, communication and behavior change and that are designed to improve the mental health of a client or patient, or to improve group relationships (such as in a family). Most forms of psychotherapy use only spoken conversation, though some also use various other forms of communication such as the written word, art, drama, narrative story, or therapeutic touch. Psychotherapy occurs within a structured encounter between a trained therapist and client(s). Purposeful, theoretically based psychotherapy began in the 19th century with psychoanalysis; since then, scores of other approaches have been developed and continue to be created.
Posted by DOKUTAKE at 5:22 am
Monday, April 07, 2008
Time-domain reflectometer
In telecommunication, an optical time domain reflectometer (OTDR) is an optoelectronic instrument used to characterize an optical fiber.
An OTDR injects a series of optical pulses into the fiber under test. It also extracts, from the same end of the fiber, light that is scattered back and reflected back from points in the fiber where the index of refraction changes. (This is equivalent to the way that an electronic TDR measures reflections caused by changes in the impedance of the cable under test.) The intensity of the return pulses is measured and integrated as a function of time, and is plotted as a function of fiber length.
An OTDR may be used for estimating the fiber's length and overall attenuation, including splice and mated-connector losses. It may also be used to locate faults, such as breaks.
Because an electrical TDR is similarly sensitive to impedance variations, it may be used to verify cable impedance characteristics, splice and connector locations and associated losses, and to estimate cable lengths, since every inhomogeneity in the impedance of the cable reflects some signal back in the form of an echo.
If the far end of the cable is short-circuited, the reflected pulse is inverted and cancels the incident pulse, so after a round-trip delay the voltage at the TDR drops toward zero. The opposite occurs if the far end of the cable is an open circuit (terminated into an infinite impedance): in this case the reflection from the far end is polarized identically with the original pulse and adds to it rather than cancelling it out, so after a round-trip delay the voltage at the TDR abruptly jumps to twice the originally applied voltage.
Note that a theoretically perfect termination at the far end of the cable would entirely absorb the applied pulse without causing any reflection, making it impossible to determine the actual length of the cable. In practice, perfect terminations are very rare, and some small reflection is nearly always produced. (This property was once exploited by a now-defunct audio cable company in the design of unusual high-end audio cables.)
The magnitude of the reflection is referred to as the reflection coefficient, ρ. The coefficient ranges from 1 (open circuit) to -1 (short circuit); a value of zero means that there is no reflection. With Zt the impedance terminating the line and Z0 the characteristic impedance of the line, the reflection coefficient is calculated as follows:
ρ = (Zt - Z0) / (Zt + Z0)
Any discontinuity can be viewed as a termination impedance and substituted as Zt. This includes abrupt changes in the characteristic impedance. As an example, a trace width on a printed circuit board doubled at its midsection would constitute a discontinuity. Some of the energy will be reflected back to the driving source; the remaining energy will be transmitted. This is also known as a scattering junction.
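As a hedged sketch of that formula (the impedance values are invented for the example), the snippet below evaluates ρ for a matched load, an open, a short, and a lower-impedance section such as the widened PCB trace mentioned above:

    # Minimal sketch: reflection coefficient at a termination or discontinuity.
    # Impedance values are hypothetical illustration values.
    import math

    def reflection_coefficient(z_termination, z0=50.0):
        """rho = (Zt - Z0) / (Zt + Z0); an open circuit is modelled with an infinite Zt."""
        if math.isinf(z_termination):
            return 1.0
        return (z_termination - z0) / (z_termination + z0)

    cases = {
        "matched 50-ohm load": 50.0,
        "open circuit": math.inf,
        "short circuit": 0.0,
        "25-ohm section (e.g. widened PCB trace)": 25.0,
    }

    for name, zt in cases.items():
        print("%-42s rho = %+.3f" % (name, reflection_coefficient(zt)))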
TDRs are also very useful tools for Technical Surveillance Counter-Measures, where they help determine the existence and location of wire taps. The slight change in line impedance caused by the introduction of a tap or splice will show up on the screen of a TDR when connected to a phone line.
TDR equipment is also an essential tool in the failure analysis of today's high-speed printed circuit boards. The signal traces on these boards are carefully crafted to emulate a transmission line. By observing reflections, any unsoldered pins of a ball grid array device can be detected. Additionally, short circuited pins can also be detected in a similar fashion.
The TDR principle is used in industrial settings, in situations as diverse as the testing of integrated circuit packages and the measurement of liquid levels. In the former, the time-domain reflectometer is used to isolate failing sites within the package; the latter is primarily limited to the process industry.
In a TDR-based level measurement device, a low-energy electromagnetic impulse generated by the sensor’s circuitry is propagated along a thin wave guide (also referred to as a probe) – usually a metal rod or a steel cable. When this impulse hits the surface of the medium to be measured, part of the impulse energy is reflected back up the probe to the circuitry which then calculates the fluid level from the time difference between the impulse sent and the impulse reflected (in nanoseconds). The sensors can output the analyzed level as a continuous analog signal or switch output signals. In TDR technology, the impulse velocity is primarily affected by the permittivity of the medium through which the pulse propagates, which can vary greatly by the moisture content and temperature of the medium. In most cases, this can be corrected for without undue difficulty. However, in complex environments, such as in boiling and/or high temperature environments, this can be a significant signal processing dilemma. In particular, determining the froth height and true collapsed liquid level in a frothy / boiling medium can be very difficult.
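As a hedged numerical sketch of that time-of-flight calculation (the probe length, echo delay and permittivities are invented for the example), the snippet below converts a measured round-trip delay into a distance down the probe and shows how a different assumed permittivity above the surface shifts the result:

    # Minimal sketch: level (distance to the reflecting surface) from TDR echo delay.
    # Probe geometry, delay and permittivities are hypothetical illustration values.

    C = 299_792_458.0          # speed of light in vacuum, m/s

    def distance_from_delay(round_trip_ns, relative_permittivity=1.0):
        """Distance to the reflection, given the round-trip delay and the medium above the surface."""
        velocity = C / (relative_permittivity ** 0.5)    # propagation speed along the probe
        return velocity * (round_trip_ns * 1e-9) / 2.0   # divide by 2: out-and-back path

    probe_length = 2.0          # metres
    delay_ns = 8.0              # measured round-trip delay to the liquid surface

    air_gap = distance_from_delay(delay_ns)              # space above the liquid assumed to be air
    level = probe_length - air_gap
    print("Distance down to surface: %.3f m" % air_gap)
    print("Liquid level above probe tip: %.3f m" % level)

    # If the space above the liquid were a vapour with relative permittivity 1.05 instead of air:
    print("With eps_r = 1.05 assumed: %.3f m down" % distance_from_delay(delay_ns, 1.05))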
TDR is used to determine moisture content in soil and porous media, where substantial advances have been made over the last two decades, including in soils, grains and foodstuffs, and in sediments. The key to TDR’s success is its ability to accurately determine the permittivity (dielectric constant) of a material from wave propagation, and the fact that there is a strong relationship between the permittivity of a material and its water content, as demonstrated in the pioneering works of Hoekstra and Delaney (1974) and Topp et al. (1980). Recent reviews and reference works on the subject include Topp and Reynolds (1998), Noborio (2001), Pettinelli et al. (2002), Topp and Ferre (2002) and Robinson et al. (2003). The TDR method is a transmission line technique, and determines an apparent TDR permittivity (Ka) from the travel time of an electromagnetic wave that propagates along a transmission line, usually two or more parallel metal rods embedded in a soil or sediment. TDR probes are usually between 10 and 30 cm in length and connected to the TDR via a coaxial cable.
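As a hedged sketch of that calculation (the probe length and travel time are invented for the example), the snippet below derives the apparent permittivity Ka from the two-way travel time along the probe; an empirical calibration such as Topp et al. (1980) would then relate Ka to volumetric water content.

    # Minimal sketch: apparent TDR permittivity Ka from two-way travel time along a probe.
    # Probe length and travel time are hypothetical illustration values.

    C = 299_792_458.0            # speed of light in vacuum, m/s

    def apparent_permittivity(travel_time_ns, probe_length_m):
        """Ka = (c * t / (2 L))**2, with t the two-way travel time along a probe of length L."""
        return (C * travel_time_ns * 1e-9 / (2.0 * probe_length_m)) ** 2

    probe_length = 0.20          # 20 cm probe, within the 10-30 cm range mentioned above
    travel_time_ns = 4.5         # measured two-way travel time

    ka = apparent_permittivity(travel_time_ns, probe_length)
    print("Apparent permittivity Ka: %.1f" % ka)
    # A calibration such as Topp et al. (1980) would map this Ka to volumetric water content.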
Time Domain Reflectometry (TDR) has also been utilized to monitor slope movement in a variety of geotechnical settings including highway cuts, rail beds, and open pit mines (Dowding & O'Connor, 1984, 2000a, 2000b; Kane & Beck, 1999). In stability monitoring applications using TDR, a coaxial cable is installed in a vertical borehole passing through the region of concern. The electrical impedance at any point along a coaxial cable changes with deformation of the insulator between the conductors. A brittle grout surrounds the cable to translate earth movement into an abrupt cable deformation that shows up as a detectable peak in the reflectance trace. Until recently, the technique was relatively insensitive to small slope movements and could not be automated because it relied on human detection of changes in the reflectance trace over time. Farrington and Sargand (2004) developed a simple signal processing technique using numerical derivatives to extract reliable indications of slope movement from the TDR data much earlier than by conventional interpretation.
Time Domain Reflectometry is used in semiconductor failure analysis as a non-destructive method for the location of defects in semiconductor device packages. The TDR provides an electrical signature of individual conductive traces in the device package, and is useful for determining the location of opens and shorts.
Posted by DOKUTAKE at 1:46 am